The Simple Economics of Machine Intelligence

[This post was co-written with Ajay Agrawal and Avi Goldfarb and appeared on HBR.org on 17 November 2016]

The year 1995 was heralded as the beginning of the “New Economy.” Digital communication was set to upend markets and change everything. But economists by and large didn’t buy into the hype. It wasn’t that we didn’t recognize that something had changed; it was that we recognized that the old economics lens remained useful for looking at the changes taking place. The economics of the “New Economy” could be described at a high level: digital technology would cause a reduction in the cost of search and communication. This would lead to more search, more communication, and more activities that go together with search and communication. That’s essentially what happened.

Today we are seeing similar hype about machine intelligence. But once again, as economists, we believe some simple rules apply. Technological revolutions tend to involve some important activity becoming cheap, like the cost of communication or finding information. Machine intelligence is, in its essence, a prediction technology, so the economic shift will center around a drop in the cost of prediction.

The first effect of machine intelligence will be to lower the cost of goods and services that rely on prediction. This matters because prediction is an input to a host of activities including transportation, agriculture, healthcare, energy, manufacturing, and retail.

When the cost of any input falls so precipitously, there are two other well-established economic implications. First, we will start using prediction to perform tasks where we previously didn’t. Second, the value of other things that complement prediction will rise.

Lots of tasks will be reframed as prediction problems

As machine intelligence lowers the cost of prediction, we will begin to use it as an input in ways we never did before. As a historical example, consider semiconductors, an area of technological advance that caused a significant drop in the cost of a different input: arithmetic. With semiconductors we could calculate cheaply, so activities for which arithmetic was a key input, such as data analysis and accounting, became much cheaper. However, we also started using the newly cheap arithmetic to solve problems that were not historically arithmetic problems. An example is photography. We shifted from a film-oriented, chemistry-based approach to a digital-oriented, arithmetic-based approach. Other new applications for cheap arithmetic include communications, music, and drug discovery.
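
To make the photography point concrete, here is a minimal Python sketch (our illustration, with made-up pixel values): a digital photograph is just an array of numbers, so “developing” it is arithmetic.

```python
# A digital image is an array of numbers; a darkroom operation like
# brightening is nothing but arithmetic over that array.
image = [
    [10, 40, 80],
    [20, 60, 120],
    [30, 90, 200],
]  # grayscale pixel intensities, 0-255 (illustrative values)

def brighten(img, factor):
    """Scale every pixel by `factor`, clamping to the valid 0-255 range."""
    return [[min(255, round(p * factor)) for p in row] for row in img]

print(brighten(image, 1.5))
```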

The same goes for machine intelligence and prediction. As the cost of prediction falls, not only will activities that were historically prediction-oriented become cheaper — like inventory management and demand forecasting — but we will also use prediction to tackle other problems for which prediction was not historically an input.
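
As an illustration of a classically prediction-oriented task, here is a minimal demand-forecasting sketch using simple exponential smoothing (the demand figures are hypothetical):

```python
# One-step-ahead demand forecast via simple exponential smoothing:
# each forecast blends the latest observation with the prior forecast.
def exponential_smoothing(history, alpha=0.3):
    forecast = history[0]
    for observed in history[1:]:
        forecast = alpha * observed + (1 - alpha) * forecast
    return forecast

weekly_demand = [120, 135, 128, 150, 142, 160]  # hypothetical unit sales
print(f"Next week's forecast: {exponential_smoothing(weekly_demand):.1f}")
```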

Consider navigation. Until recently, autonomous driving was limited to highly controlled environments such as warehouses and factories where programmers could anticipate the range of scenarios a vehicle might encounter, and could program if-then-else-type decision algorithms accordingly (e.g., “If an object approaches the vehicle, then slow down”). It was inconceivable to put an autonomous vehicle on a city street because the number of possible scenarios in such an uncontrolled environment would require programming an almost infinite number of if-then-else statements.
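
A caricature of that rule-based approach, in Python (the sensor values and thresholds are invented for illustration):

```python
# Hand-coded if-then-else driving rules: every scenario must be
# anticipated by the programmer in advance.
def rule_based_controller(distance_to_object_m, object_approaching):
    if object_approaching and distance_to_object_m < 5:
        return "brake"
    elif object_approaching:
        return "slow down"
    # ...an almost infinite number of further cases would be needed
    # to cover an uncontrolled city street.
    return "maintain speed"

print(rule_based_controller(3.0, True))  # -> "brake"
```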

Inconceivable, that is, until recently. Once prediction became cheap, innovators reframed driving as a prediction problem. Rather than programming endless if-then-else statements, they simply asked the AI to predict: “What would a human driver do?” They outfitted vehicles with a variety of sensors – cameras, lidar, radar, etc. – and then collected millions of miles of human driving data. By linking the incoming environmental data from sensors on the outside of the car to the driving decisions made by the human inside the car (steering, braking, accelerating), the AI learned to predict how humans would react to each second of incoming data about their environment. Thus, prediction is now a major component of the solution to a problem that was previously not considered a prediction problem.
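
Here is a minimal sketch of that reframing (a toy 1-nearest-neighbour lookup stands in for the real learned model, and the logged data are invented):

```python
# Instead of hand-coded rules, predict "what would a human driver do?"
# from logged (sensor reading, human action) pairs.
driving_log = [
    # (distance to object in m, closing speed in m/s) -> human action
    ((2.0, 5.0), "brake"),
    ((10.0, 3.0), "slow down"),
    ((50.0, 0.0), "maintain speed"),
]

def predict_human_action(sensors):
    """Return the action taken in the most similar logged human situation."""
    def dissimilarity(logged):
        return sum((a - b) ** 2 for a, b in zip(sensors, logged[0]))
    return min(driving_log, key=dissimilarity)[1]

print(predict_human_action((3.0, 4.5)))  # -> "brake"
```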

Judgment will become more valuable

When the cost of a foundational input plummets, it often affects the value of other inputs. The value goes up for complements and down for substitutes. In the case of photography, the value of the hardware and software components associated with digital cameras went up as the cost of arithmetic dropped because demand increased – we wanted more of them. These components were complements to arithmetic; they were used together. In contrast, the value of film-related chemicals fell – we wanted fewer of them.

All human activities can be described by five high-level components: data, prediction, judgment, action, and outcomes. For example, a visit to the doctor in response to pain leads to: 1) x-rays, blood tests, monitoring (data), 2) diagnosis of the problem, such as “if we administer treatment A, then we predict outcome X, but if we administer treatment B, then we predict outcome Y” (prediction), 3) weighing options: “given your age, lifestyle, and family status, I think you might be best with treatment A; let’s discuss how you feel about the risks and side effects” (judgment), 4) administering treatment A (action), and 5) full recovery with minor side effects (outcome).
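
The decomposition can be written down directly; here is a small sketch with the doctor’s visit as the worked example (field values paraphrase the text above):

```python
from dataclasses import dataclass

@dataclass
class Decision:
    data: str        # x-rays, blood tests, monitoring
    prediction: str  # expected outcome under each treatment
    judgment: str    # weighing risks, side effects, patient preferences
    action: str      # the treatment actually administered
    outcome: str     # what happened

visit = Decision(
    data="x-rays, blood tests, monitoring",
    prediction="treatment A -> outcome X; treatment B -> outcome Y",
    judgment="given age, lifestyle, and family status, A looks best",
    action="administer treatment A",
    outcome="full recovery with minor side effects",
)
print(visit.judgment)
```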

As machine intelligence improves, the value of human prediction skills will decrease because machine prediction will provide a cheaper and better substitute for human prediction, just as machines did for arithmetic. However, this does not spell doom for human jobs, as many experts suggest. That’s because the value of human judgment skills will increase. Using the language of economics, judgment is a complement to prediction and therefore when the cost of prediction falls demand for judgment rises. We’ll want more human judgment.

For example, when prediction is cheap, diagnosis will be more frequent and convenient, and thus we’ll detect many more early-stage, treatable conditions. This will mean more decisions will be made about medical treatment, which means greater demand for the application of ethics, and for emotional support, which are provided by humans. The line between judgment and prediction isn’t clear cut – some judgment tasks will even be reframed as a series of predictions. Yet, overall the value of prediction-related human skills will fall, and the value of judgment-related skills will rise.
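
A toy linear-demand model (ours, with arbitrary numbers chosen only to show the direction of the effects) makes the complement/substitute logic explicit:

```python
# As the price of machine prediction falls, demand for its complement
# (judgment) rises, while demand for its substitute (human prediction) falls.
def judgment_demand(p_prediction):
    return max(0, 100 - 8 * p_prediction)  # complement: slopes down in price

def human_prediction_demand(p_prediction):
    return max(0, 10 + 6 * p_prediction)   # substitute: slopes up in price

for p in (10, 5, 1):  # machine prediction getting cheaper over time
    print(f"price={p}: judgment demanded={judgment_demand(p)}, "
          f"human prediction demanded={human_prediction_demand(p)}")
```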

Interpreting the rise of machine intelligence as a drop in the cost of prediction doesn’t offer an answer to every specific question of how the technology will play out. But it yields two key implications: 1) an expanded role of prediction as an input to more goods and services, and 2) a change in the value of other inputs, driven by the extent to which they are complements to or substitutes for prediction. These changes are coming. The speed and extent to which managers should invest in judgment-related capabilities will depend on how fast the changes arrive.

44 Replies to “The Simple Economics of Machine Intelligence”

  1. Very helpful analysis. However, some of your assertions in the last few paragraphs puzzle me.

    What is the support for analyzing all tasks into those five components (data, prediction, judgment, action, and outcomes)? I’m sure as a narrative move this can always be made to work, but are these five components somehow better than others?

    Also, regarding judgement: You mention empathy — is that for some reason a component of judgement? If not, where does it fit into the five components? Maybe action?

    Finally, I don’t see why judgement is a complement to prediction rather than a substitute. For example, consider ethical issues. If a bot can predict the human consensus with respect to an ethical question, then can’t it just adopt that? This seems strictly parallel to your driving example where the bot predicts what the human(s) would do.

    Of course in any domain prediction will have limits. Maybe complex/difficult ethical decisions will be hard for bots to predict (until they have more data about what humans do). But this is true for prediction in *any* domain, so it does not define a boundary between judgment and prediction.

    Thanks again for a stimulating post.

    1. The third remark is crucial. The authors seem to assume that predictions are about the exogenous state of the world. Then indeed judgement would be a complement to prediction. If the prediction is about actions (as the G-car example makes clear), it is a substitute.

      1. The key difference between prediction and judgment lies in the irrationality of humans and in who bears the liability for that irrationality.

        Taking the medical example: if a loved one were given a predicted survival rate of 10% by an AI, at the cost of thousands of dollars of treatment, the AI might then judge the likely positive outcomes of that treatment too small to justify the spending.
        (I.e., most people in this situation tend not to undergo treatment, and for those who do, the outcome is still unfavourable, so you should not do it either.)

        Even assuming that the AI has your financial information and a rudimentary “risk” profile based on your regular financial or spending habits, it cannot predict your emotional state in an extreme/stressed situation, not without a large possibility of false positives/negatives.

        If you find even one anecdotal case of a person who survived despite the 10% odds, you will resent the AI for denying you the treatment. Thus the liability of a behind-the-scenes AI corporation getting that decision wrong (wrong in the eyes of the irrational human, regardless of the accuracy of the prediction) becomes huge and expensive, especially in countries with tendencies to sue easily.

        Compare this to a doctor who can talk to you and discuss your decision with you in detail – both logically and emotionally – and the economics of that liability fall in favour of individual humans making that judgement.

  2. What happens if (when) machine results alone are shown to be better than:
    1. human results alone (of course)
    2. human and machine combined results?
    It does feel good to imagine that human judgment is somehow different in coherent ways from machine judgment, but what happens if you were to be shown (at a higher level of machine development) that this feel-good conception of things costs human lives?

    1. Especially in that what the authors describe as a judgement in their example seems to me to be more complex prediction. That is, aren’t age, lifestyle, etc., simply more data upon which to build a prediction of outcomes? Why is human judgement even necessary in this case?

  3. Humans use logic to make judgements. Scientists have proven that logic is used to rationalise an emotional perspective on an issue. Hence human emotion makes judgements.

    For this reason, surgeons do not operate on immediate family members. Objectivity has always been stressed in professional judgement. But humans inherently cannot escape emotion.

    Computer logic is devoid of emotion. Thus, it makes better judgements. US drones already fly autonomously to the operation area, where the operator takes over. Applied R&D is underway to go fully autonomous. Programming in various scenarios is simply part of the machine learning curve. Learning shared across multiple machines drastically shortens that curve. The same goes for cars.

    A well thought out article, but it needs to get around the Black Swan perspective.

  4. Interesting article.

    I agree that machine learning will replace the “human” element of defining the algorithms that solve problems in more and more domains.

    We have moved on from trying to solve problems using “expert systems”, as this was the focus previously. We would try to develop systems to replicate the heuristics used by “experts”, but these failed not only because of the declining cost of computing but also because the models used to try to replicate human experts were faulty.

    Allowing machines to learn and find the patterns is more efficient than asking people in many cases now.

    It takes time for the human mind to find the patterns. The challenge will still be to determine the data set to feed the machine.

    The interesting point is in solving creative problems, the human mind takes feeds from many more sources intuitively. What we learn to accept or ignore is interesting. Think of Archimedes and his Eureka moment. Would an AI have learnt to solve that problem…

    For example, we learn intuitively that when it rains, traffic will be slower, and so we “learn” these patterns. For ML/AI, who will define the data sets or parameters in novel situations, or develop creative solutions to problems?

    E.g., how would an AI system design a Mars lander… or is imagination a prediction algorithm?

    Another element is that we are moving to solve things at a smaller level, e.g., gene therapy rather than surgery.

  5. If AI prediction becomes that good, it will be quite natural for humans to accept the AI’s prediction. This is because, if any wrong prediction is made, the human (usually an employee) blames it on the AI. This may occur, say, once in a thousand or a million predictions. Such a scenario may be alright for wrong predictions that do not cause serious damage to the organization. From human prediction, we all know that under certain circumstances grouping (in other words, biased grouping by race, religion, sex, etc.) will likely be one of the factors, and this can be very sensitive for the user (whether he likes it or not). As humans ourselves, do we have to learn to accept this from an AI?

  6. Would love to see a more in-depth analysis of the difference between prediction and judgment, understanding that the distinction between them, however small, will be the key to interpreting the true effects of machine intelligence on human life.

  7. I don’t think I understand what you consider the attributes of “human judgment” that are immune to AI-based replacement, let alone an increase in value driven by prediction-based processing. For example, how would the use of more prediction in medical applications lead to an increase in the demand for ethical discussions or emotional support in the context of medical applications? Are you assuming that more prediction leads to a large increase in total treatments, and the increase in treatments leads to more ethical quandaries and emotional support scenarios?

    I think it’s more likely that better prediction across a range of medical applications will lower the number of emergency interventions, late interventions and hopeless interventions, which should reduce ethical dilemmas and emotionally fraught scenarios.

    I do think that fields requiring empathetic human interaction will be the “last to go” when it comes to AI-based replacement. So for example, we will have capable AI-based computer “coders” and capable AI-based investment bankers before we have capable AI-based pre-school teachers. But that won’t increase the demand for empathy-based skills; it will just make more people available to supply them (the former coders and bankers). That portends a declining market price for empathy-based skills, as the labor pool expands and competition increases.

  8. Sound economic analysis. The issues raised over judgement, above, are, I think, associated with the extremes of the Bayesian analysis that comes out of many current AI approaches (specifically, the multi-layered neural nets), which rely on interpretation from and to the ‘real world’. As the cost of the bulk of predictions drops, the failures will become more prominent. In machine learning contexts, where cases are not encountered often enough, or where the training and outcome translations are not sufficiently high fidelity, wrong conclusions are expected.

    A good example of the latter is the poor understanding of the statistical models that overlooked asset correlations in extreme circumstances, leading to the 2008 crash. These models are very similar to the types of models that arise in machine learning.

  9. Sorry. When machine prediction and diagnosis are becoming so much superior to human “judgment”, why should the value of human judgment skills increase? Sorry, I don’t get it.

    When machine mathematics became cheap and fast and infallible, did the value of the skill of extracting square roots increase? No. Therefore, the word “judgment” is wrong.

    What happened is that cheap maths increased the value of those with skills able to use maths, such as statisticians and quants. Cheap prediction will create a new class of professions requiring extreme IQ. Artillery officers come to mind.

    Regarding increasing the value of emotional support, a mother cannot be bested.
