(by Ajay Agrawal, Joshua Gans and Avi Goldfarb); originally published in HBR Online 17 April 2018.
There is no shortage of hot takes regarding the significant impact that artificial intelligence (AI) is going to have on business in the near future. Much less has been written about how, exactly, companies should get started with it. In our research and in our book, we begin by distilling AI down to its very simplest economics, and we offer one approach to taking that first step.
We start with a simple insight: Recent developments in AI are about lowering the cost of prediction. AI makes prediction better, faster, and cheaper. Not only can you more easily predict the future (What’s the weather going to be like next week?), but you can also predict the present (What is the English translation of this Spanish website?). Prediction is about using information you have to generate information you don’t have. Anywhere you have lots of information (data) and want to filter, squeeze, or sort it into insights that will facilitate decision making, prediction will help get that done. And now machines can do it.
Better predictions matter when you make decisions in the face of uncertainty, as every business does, constantly. But how do you think through what it would take to incorporate a prediction machine into your decision-making process?
In teaching this subject to MBA students at the University of Toronto’s Rotman School of Management, we have introduced a simple decision-making tool: the AI Canvas. Each space on the canvas contains one of the requirements for machine-assisted decision making, beginning with a prediction.
To explain how the AI Canvas works, we’ll use an example crafted during one of our AI strategy workshops by Craig Campbell, CEO of Peloton Innovations, a venture tackling the security industry with AI. (It’s a real example, based on a product that Peloton is commercializing, called RSPNDR.ai.)
Over 97% of the time that a home security alarm goes off, it’s a false alarm. That is, something other than an unknown intruder (threat) triggered it. The security company must then decide what to do: Dispatch police or a guard? Phone the homeowner? Ignore it? If the company decides to take action, more than 90 times out of 100 the action will turn out to have been wasted. However, always taking action in response to an alarm signal means that when a threat is indeed present, the security company responds.
How can you decide whether employing a prediction machine will improve matters? The AI Canvas is a simple tool that helps you organize what you need to know into seven categories in order to systematically make that assessment. We provide an example for the security alarm case.
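To make the seven categories concrete, the canvas can be sketched as a simple data structure, filled in for the alarm case. This is our own illustration; the field names and entries paraphrase the discussion that follows rather than quoting the canvas as published:

```python
from dataclasses import dataclass

@dataclass
class AICanvas:
    # Top row: the critical aspects of the decision
    prediction: str     # what you are trying to predict
    judgment: str       # how to weigh the costs of different errors
    action: str         # what the prediction triggers
    outcome: str        # how you measure success after the fact
    # Bottom row: the three data requirements
    input_data: str     # what you know when the decision must be made
    training_data: str  # what you need to build the prediction machine
    feedback_data: str  # what you collect to improve it over time

alarm_canvas = AICanvas(
    prediction="Is the alarm caused by an unknown person (true vs. false alarm)?",
    judgment="Relative cost of a wasted response vs. missing a real intruder",
    action="Dispatch a guard, call the homeowner, or ignore the alarm",
    outcome="Whether the chosen response turned out to be the right one",
    input_data="Real-time motion and image data from the home",
    training_data="Historical sensor readings matched with actual outcomes",
    feedback_data="Sensor data and outcomes collected during live operation",
)
print(alarm_canvas.prediction)
```

Filling in one such record per critical decision is the whole exercise; the sections below walk through each field.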
First, you specify what you are trying to predict. In the alarm case, you want to know whether an alarm is caused by an unknown person or not (true versus false alarm). A prediction machine can potentially tell you this — after all, an alarm with a simple movement sensor is already a sort of prediction machine. With machine learning, you can take a richer range of sensor inputs to determine what you really want to predict: whether the movement was caused specifically by an unknown person. With the right sensors — say, a camera in the home to identify known faces or pets, a door key that recognizes when someone is present, and so on — today’s AI techniques can provide a more nuanced prediction. The prediction is no longer “movement = alarm” but, for example, “movement + unrecognized face = alarm.” This more sophisticated prediction reduces the number of false alarms, making the decision to send a response, as opposed to trying to contact the owner first, an easier one.
No prediction is 100% accurate. So, in order to determine the value of investing in better prediction, you need to know the cost of a false alarm, as compared with the cost of dismissing an alarm when it is true. This will depend on the situation and requires human judgment. How costly is a response phone call to verify what is happening? How expensive is it to dispatch a security guard in response to an alarm? How much is it worth to respond quickly? How costly is it to not respond if it turns out that there was an intruder in the home? There are many factors to consider; determining their relative weights requires judgment.
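Once those relative weights are set, the dispatch decision reduces to a little arithmetic: respond only when the expected cost of responding is lower than the expected cost of ignoring the alarm. A minimal sketch of that rule follows; the dollar figures are hypothetical, chosen purely for illustration:

```python
def should_dispatch(p_intruder: float,
                    cost_dispatch: float = 150.0,    # hypothetical cost of sending a guard
                    cost_missed: float = 10_000.0    # hypothetical cost of ignoring a real intruder
                    ) -> bool:
    """Dispatch when the expected cost of responding beats the expected cost of doing nothing."""
    expected_cost_dispatch = cost_dispatch           # paid whether or not an intruder is present
    expected_cost_ignore = p_intruder * cost_missed  # loss only if an intruder really is there
    return expected_cost_dispatch < expected_cost_ignore

# With these numbers the break-even probability is 150 / 10,000 = 1.5%, so even a
# modest prediction machine that rules out obvious false alarms changes decisions.
print(should_dispatch(0.03))  # True: a 3% chance of an intruder justifies a dispatch
print(should_dispatch(0.01))  # False: below the 1.5% break-even point
```

Note that the hard part is not the formula but the judgment that produces the cost figures, including factors, such as privacy, that resist quantification.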
Such judgment can change the nature of the prediction machine you deploy. In the alarm case, having cameras all over the house may be the best way of determining the presence of an unknown intruder. But many people will be uncomfortable with this. Some people would prefer to trade the cost of dealing with more false alarms for enhanced privacy. Judgment sometimes requires determining the relative value of factors that are difficult to quantify and thus compare. While the cost of false alarms may be easy to quantify, the value of privacy is not.
Next, you identify the action that is dependent on the predictions generated. This may be a simple “dispatch/don’t dispatch” decision, or it may be more nuanced. Perhaps the options for action include not just dispatching someone but also enabling immediate remote monitoring of who is in the home or some form of contact with the home owner.
An action leads to an outcome. For example, the security company dispatched a security guard (action), and the guard discovered an intruder (outcome). In other words, looking back, we are able to see for each decision whether the right response occurred. Knowing this is important for evaluating whether there is scope to improve predictions over time. If you do not know what outcome you want, improvement is difficult, if not impossible.
The top row of the canvas — prediction, judgment, action, and outcome — describes the critical aspects of a decision. On the bottom row of the canvas are three final considerations. They all relate to data. To generate a useful prediction, you need to know what is going on at the time a decision needs to be made — in this case, when an alarm is triggered. In our example, this includes motion data and image data collected at the home in real time. That is your basic input data.
But to develop the prediction machine in the first place, you need to train a machine learning model. Training data matches historical sensor data with prior outcomes to calibrate the algorithms at the heart of the prediction machine. In this case, imagine a giant spreadsheet where each row is a time the alarm went off, whether there was in fact an intruder, and a bunch of other data like time of day and location. The richer and more varied that training data, the better your predictions will be out of the gate. If that data is not available, then you might have to deploy a mediocre prediction machine and wait for it to improve over time.
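That giant spreadsheet can be pictured as a list of labelled records. The toy model below simply estimates the historical intruder rate for each distinct sensor pattern; the fields and readings are invented for illustration, and a real system would use a proper machine learning model rather than raw frequencies:

```python
from collections import defaultdict

# Each row: the sensor pattern observed when the alarm fired, plus the true outcome.
training_rows = [
    ({"motion": True, "unrecognized_face": True},  "intruder"),
    ({"motion": True, "unrecognized_face": False}, "false_alarm"),
    ({"motion": True, "unrecognized_face": False}, "false_alarm"),
    ({"motion": True, "unrecognized_face": True},  "intruder"),
    ({"motion": True, "unrecognized_face": True},  "false_alarm"),
]

def train(rows):
    """Estimate P(intruder) for each distinct sensor pattern seen in training."""
    counts = defaultdict(lambda: [0, 0])  # pattern -> [intruder count, total count]
    for features, outcome in rows:
        key = tuple(sorted(features.items()))
        counts[key][1] += 1
        if outcome == "intruder":
            counts[key][0] += 1
    return {key: hits / total for key, (hits, total) in counts.items()}

model = train(training_rows)

def predict(model, features):
    """Return the estimated intruder probability; 0.0 for patterns never seen."""
    return model.get(tuple(sorted(features.items())), 0.0)

# Two of the three rows with an unrecognized face were intruders, so P = 2/3;
# movement alone never coincided with an intruder in this tiny sample.
print(predict(model, {"motion": True, "unrecognized_face": True}))
print(predict(model, {"motion": True, "unrecognized_face": False}))
```

The richer and more varied the rows, the finer the patterns the machine can distinguish, which is precisely why sparse training data yields only a mediocre prediction machine out of the gate.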
Those improvements come from feedback data, which you collect while the prediction machine operates in real situations. Feedback data is often generated from a richer set of environments than training data. In our example, sensors may collect data through windows, which affects how movement is detected and how cameras capture facial images; such data may be more realistic than the data used for training. Continual training on feedback data can therefore further improve the accuracy of predictions. Sometimes feedback data will be tailored to an individual home; other times it might aggregate data from many homes.
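One way to picture continual improvement is a predictor that keeps running counts per situation and updates its estimate after every real outcome. This is a toy sketch of the idea, with invented pattern names, not a description of how any particular vendor implements it:

```python
class FeedbackPredictor:
    """Toy per-pattern intruder-rate estimate that updates as live outcomes arrive."""

    def __init__(self):
        self.hits = {}    # pattern -> number of confirmed intruders
        self.totals = {}  # pattern -> number of alarms observed

    def observe(self, pattern: str, was_intruder: bool) -> None:
        """Record one live alarm and its verified outcome (the feedback data)."""
        self.totals[pattern] = self.totals.get(pattern, 0) + 1
        if was_intruder:
            self.hits[pattern] = self.hits.get(pattern, 0) + 1

    def p_intruder(self, pattern: str) -> float:
        """Current estimate of P(intruder) for this pattern; 0.0 if never seen."""
        total = self.totals.get(pattern, 0)
        return self.hits.get(pattern, 0) / total if total else 0.0

p = FeedbackPredictor()
p.observe("motion_through_window", False)  # false alarm in a live deployment
p.observe("motion_through_window", True)   # confirmed intruder
print(p.p_intruder("motion_through_window"))  # 0.5 after two observations
```

Keeping the counts per home corresponds to tailoring predictions to an individual customer; pooling them corresponds to aggregating feedback across many homes.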
Clarifying these seven factors for each critical decision throughout your organization will help you get started on identifying opportunities for AIs to either reduce costs or enhance performance. Here we discussed a decision associated with a specific situation. To get started with AI, your challenge is to identify the key decisions in your organization where the outcome hinges on uncertainty. Filling out the AI Canvas won’t tell you whether you should make your own AI or buy one from a vendor, but it will help you clarify what the AI will contribute (the prediction), how it will interface with humans (judgment), how it will be used to influence decisions (action), how you will measure success (outcome), and the types of data that will be required to train, operate, and improve the AI.
The potential is enormous. For example, alarms communicate predictions to a remote agent. Part of the reason for this approach is that there are so many false signals. But just think: If our prediction machine became so good that there were no false alarms, then is dispatch still the right response? One can imagine alternative responses, such as an on-site intruder capture system (as in cartoons!), which could be more feasible with significantly more-accurate and high-fidelity predictions. More generally, better predictions will create opportunities for entirely new ways to approach security, potentially predicting the intent of intruders before they even enter.