2019 NBER AI Conference

Ajay Agrawal, Joshua Gans, Catherine Tucker, and I recently hosted the third NBER Conference on the Economics of Artificial Intelligence in Toronto. The conference provides a place for scholars from different fields of economics to discuss the implications of the rise of AI. The fields this year included macro, labor, theory, development, mechanism design, econometrics, industrial organization, finance, and health. Below, I summarize some ideas that I saw for the first time at this year’s conference.

  • Mara Lederman described the key role of economists in AI research [more video from the conference coming soon]. As economists, our comparative advantage lies in understanding how technology will change equilibrium outcomes, not in horse-racing different prediction technologies. In the context of recent advances in AI, better prediction will affect decision-making. The core questions are whether upgrades to prediction technology change equilibrium outcomes, and how this will affect welfare.
  • Skill-biased technical change has created a puzzle. David Autor highlighted that a direct implication of our models of technical change is that the economy should be developing new tasks for unskilled men. Generally, that isn’t happening. The few places where it is happening, such as Uber, are not widely appreciated as new opportunities for work for a group that has experienced deteriorating employment prospects since the 1980s.
  • Patrick Francois emphasized that the Acemoglu-Restrepo-Autor task-based model of automation and AI has become the canonical framework for understanding the impact of AI (a stylized version of the model appears after this list). This set up a debate later in the day, starting with Tim Bresnahan’s discussion of the flaws in the task-based model for understanding the bigger picture. In that debate, the strengths and weaknesses of the model became clear: it helps us understand labor demand, but it provides little insight into competition between production systems.
  • Regulating AI is difficult. How can a government bureaucracy keep up with rapid changes in technology? Gillian Hadfield and Jack Clark suggested competitive private regulators that are themselves regulated by the government. While there were plenty of questions about the details of how this would work (avoiding regulatory capture, ensuring the government retains sufficient expertise, etc.), it struck me as a new approach that bears unpacking.
  • Keynote speakers Steve Jurvetson and Jack Clark both emphasized the importance of increased computing power in driving technical advances, rather than algorithms or data per se. The central role of computation creates challenges, particularly for less well-resourced firms and researchers. While economists have been debating economies of scale in data, it might be economies of scale in computation that generate market concentration and market power.
  • At least since the Lucas critique, econometricians have emphasized how pure prediction fails in the presence of strategic behavior. As Bjorkegren and Blumenstock explained, there are two approaches to addressing this: the “Silicon Valley” approach of opacity and continual retraining, and a mechanism design approach of building a manipulation-robust (or equilibrium-focused) prediction tool (a toy illustration of the manipulation problem appears after this list). They showed the feasibility of operating a manipulation-robust tool in a large field experiment in Kenya.
  • Any understanding of the impact of technology will centre on substitutes and complements. Complements can, in turn, complement each other. Daniel Rock argued that TensorFlow was a complement to the skills of AI workers, one that increased the value of the firms employing those workers. TensorFlow shifted power in industry by opening up computation and algorithms to a wider set of skilled workers.
  • How has AI changed the role of skilled workers? Jillian Grennan examined equity analysts. She showed that equity analysts are getting worse as AI diffuses into finance. It isn’t that AI is making them worse at their jobs. Instead, the best analysts are quitting to do other things. The analysts who remain in what looks like a job with relatively poor future prospects are those with worse outside options. I think this paper gives us a great window into how AI might change the prospects of high-skilled workers. Michael Webb has shown that many well-paid jobs involve prediction tasks. Grennan’s work gives us a sense of which workers will adapt to the changes and which might be left behind.
  • Jonathan Kolstad showed that AI (as a decision support tool) could dramatically improve health insurance plan choices. In the discussion, a critique arose that the result reflected not a generic AI but a tool built by several of the world’s top health economists; in other words, it was a statement about the expertise of the research team. In my view, this wasn’t a critique. It clarified one possible advantage of AI: it can scale the decisions of the most skilled people in a profession.
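
For readers who have not seen the task-based framework mentioned above, here is a stylized version. This is my own simplification of Acemoglu and Restrepo’s published model, not the version presented at the conference, and the notation is mine:

    % Output aggregates a continuum of tasks i in [N-1, N]:
    \[ Y = \left( \int_{N-1}^{N} y(i)^{\frac{\sigma-1}{\sigma}} \, di \right)^{\frac{\sigma}{\sigma-1}} \]
    % Tasks at or below the automation frontier I can be produced by capital
    % or by labor; tasks above I require labor:
    \[ y(i) = \begin{cases} A_K\, k(i) + A_L\, \gamma(i)\, l(i), & i \le I \\ A_L\, \gamma(i)\, l(i), & i > I \end{cases} \]
    % Automation raises I, displacing labor from existing tasks; the creation
    % of new tasks raises N, reinstating labor. The net effect on labor demand
    % depends on the race between the two margins.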
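
The manipulation problem from the Bjorkegren-Blumenstock discussion is easy to see in a toy simulation. The sketch below is my own illustration under invented assumptions, not the authors’ method: a lender’s model that leans on a cheap-to-fake feature loses accuracy once applicants game it, while a model restricted to a costly-to-fake feature holds up.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000

    # Latent creditworthiness drives both features and repayment.
    quality = rng.normal(size=n)
    hard = quality + rng.normal(scale=0.5, size=n)  # costly to fake (e.g., repayment history)
    soft = quality + rng.normal(scale=0.5, size=n)  # cheap to fake (e.g., self-reported data)
    repaid = (quality + rng.normal(scale=0.5, size=n) > 0).astype(int)

    X = np.column_stack([hard, soft])
    naive = LogisticRegression().fit(X, repaid)           # uses both features
    robust = LogisticRegression().fit(X[:, [0]], repaid)  # hard-to-fake feature only

    # Once the naive rule is known, applicants inflate the cheap-to-fake feature.
    X_gamed = X.copy()
    X_gamed[:, 1] += 2.0

    print("naive accuracy, pre-gaming:  ", naive.score(X, repaid))
    print("naive accuracy, post-gaming: ", naive.score(X_gamed, repaid))
    print("robust accuracy, post-gaming:", robust.score(X_gamed[:, [0]], repaid))

In this toy, the naive model’s accuracy drops sharply once the soft feature is inflated, while the robust model is unaffected; the price of robustness is throwing away information that would be useful if no one gamed it.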

Just a few observations from a very stimulating couple of days. Full video of the conference will be posted shortly. Looking forward to next year!

4 Replies to “2019 NBER AI Conference”

  1. “competitive private regulators” – we already have these, after a fashion. Facebook and YouTube, Uber and Lyft, TripAdvisor and Airbnb… Most platforms play a quasi-regulatory role to keep users safe and promote trust. The issue is that they are accountable only to themselves and, indirectly, to their users. New private regulators may not be needed so much as better ways of harnessing the firms already doing that job.
