Misapplied metaphors in AI policy

Many querulous conversations fan the flames in policy debates about artificial intelligence. Everyone agrees we are transitioning to something, but nobody agrees on what that something will be. Anyone want to venture a guess? It is safe to bet on widespread use of neural networks and deep learning. Anything else?

Some futurists also forecast a confrontation between the US and China. The Chinese government has played no small role in that forecast by broadcasting its aspirations for Chinese firms to take a leading position in AI. That has set off a predictable debate in Washington about whether the US government should do something similar.

That policy question creates a jumping-off point for today’s column, which attempts to correct a few of the misapplied metaphors.

Not comparable

Start with the obvious. Unlike the Chinese government, the US government does not directly subsidize any technology company outside of the military, nor does it compel banks to lend to specific firms at favorable rates, nor will it ever speed the permits of national champions, and, frankly, it usually avoids designating a champion at all. Outside of the activities at the NSA, the US government also cannot suspend privacy laws in order to nurture technology development (not legally, anyway). To summarize, the US government has rarely intervened directly in technical industries except during wartime, or when the threat of war seemed close at hand, as during the Cold War.

That does not mean the US government sits on its hands today when it comes to funding AI. It funds a considerable amount of AI research through the National Science Foundation (NSF) – more than $100 million annually, according to the NSF’s own website. The NIH also funds research into medical applications of AI – scores of multi-million-dollar projects, again, according to its own website. The US government also procures a considerable number of AI-enhanced products from US suppliers, especially for military applications, which acts as an indirect subsidy for products that employ many data scientists and engineers.

Private funding swamps government funding, and therein lies a crucial feature of the North American setting. The top five US states attracted over $2.5B of AI venture capital funding in 2018. Though the data are not public, the private funding for internal projects at Google, Amazon, Microsoft, and Facebook surely equals the VC funding. While nobody in the Valley likes to think of Wall Street and insurance firms as leading the way in AI software, add them too. These firms have always led the development of frontier software.

That all adds up. Even if it tried, the US government could not pick winners, nor designate national champions for international competition. By design, the university system leaves enormous discretion to students and faculty, and markets leave discretion to firms, which can invest as they see fit.

One more observation: though the Canadian university system is roughly 10% the size of its US counterpart, Canadian universities are, de facto, part of this system too. That matters because there is a big AI startup scene in Toronto.

It also matters for immigration. Between the two countries, around 40% of the graduate students are non-native, and a high fraction stay in North America after their training ends. To say it another way, many talented students come to North America for master’s degrees and PhDs because they want to participate in the economy.

More pointedly, if the US government makes immigration harder, our Canadian neighbors will benefit, and happily. The talent is staying one way or another.

Innovation from the edges

Due to its decentralization, the North American system of universities permits researchers to pursue their own muse for fame, fortune, or mere curiosity. In turn, that supports ideas outside the mainstream, and it adapts quickly to new opportunities when these outsiders turn out to be right, as they occasionally are. This dynamic has played a role in recent developments in AI.

A great illustration comes from the actions of Fei-Fei Li, who was a new professor at the University of Illinois when she initiated a big project more than a decade ago. She recognized the need for a standardized benchmark in image recognition, and so she embarked on collecting and tagging millions of online photos. It took several years, but eventually she assembled them and in 2010 began contests among algorithms to identify the objects in those photos.

Many regard the results of the 2012 contest as catalytic. Geoffrey Hinton, along with collaborators Ilya Sutskever and Alex Krizhevsky, won with a convolutional neural network architecture. That win brought attention to their approach. The paper describing their algorithm, AlexNet, has since received over 30,000 citations in scientific publications, which is extraordinary for such a short period.

Hinton’s experience almost defines what it is like to be an outsider. He found employment at the University of Toronto. His approach to neural networks encountered considerable resistance among mainstream researchers, and his persistence has proven the mainstream wrong. Years ago he developed the approach to deep learning along with collaborators Yoshua Bengio and Yann LeCun. The three of them recently won the Turing Award.

Their recent experience also illustrates how good ideas spread to industry, which does not suffer from not-invented-here syndrome. Today Hinton splits his time between Toronto and Google, and LeCun between NYU and Facebook. Bengio has stayed in academics (though he works prominently with industry), and holds the recent record for most academic citations per day. Li runs an AI institute at Stanford and recently spent her sabbatical at Google Cloud. Sutskever spent time at Google Brain, and now runs the aspirational non-profit OpenAI. Krizhevsky also spent time at Google and, by all reports, has left these pursuits.

The point is this: a US program will not resemble the Chinese government’s program at all. It cannot.

That is not an argument for pulling funding from research. Quite the opposite. If North American academics invent more, then North American industry knows how to take advantage. If the government funds research at, say, twice the present size, that raises the chances of some new invention, which – for certain – will work its way into commercial activity quickly. That is a big bang for not much money.

Government subsidies for invention

A reasonable response to the above is “What about DARPA and its investment in autonomous vehicles? DARPA can target a specific area in AI, just like the Chinese government.”

Well, not quite. That is true only in some circumstances.

As a reminder, Congress established DARPA decades ago, after Sputnik. DARPA gets its mission from the neglect of the future, just as a skunk works does in any large organization. It tries to anticipate and accelerate the big technical changes that keep the US military at the forefront.

DARPA cannot just do whatever it wants. It focuses only on applications related to its military aims and missions. Yes, only military. Laws forbid it from working outside this mission. Sometimes that policy binds, and, yes, that means it can target only a subset of issues. Moreover, the military cannot target development of any civilian technology.

More than a decade ago, DARPA famously funded its challenge for autonomous vehicles as part of these larger efforts. A number of research teams mounted attempts to address the challenge, and after a few years of trial and error in the designs, several made it through the course. Most experts regard DARPA’s efforts as a success. Many alumni from those teams, including the winners and quite a few of the runners-up, gained renown, accelerated their efforts, and have gone on to develop advances in autonomous vehicles for civilian and military uses.

The military’s needs alone were sufficient to justify DARPA’s action, and subsequent events have reinforced the observation. Since then there have been breakthroughs in autonomy for both ground and airborne military vehicles. To say it another way, the military acted as the “lead user” – the actor in the economy with sufficient motive to fund the initial development of a primitive technology.

What about the civilian applications? These are welcome now because they reduce costs for all such vehicles, including those used by the military. However, whether anticipated or not, the civilian benefits were largely irrelevant to the initial justification for funding the research and the challenge.

Summarizing, DARPA funding can target civilian technology if, and when, it overlaps with DARPA’s aims in a primitive area of technology with military uses.

As an aside, a similar set of observations describes DARPA’s role in funding the research that led to the internet. While the leading lights anticipated the results for society, that was not central to the justification for DARPA’s funding. Also, frankly, DARPA alone did not create the internet. It really is a long story, and many books have been written about it (even by yours truly).

Lessons

The fight for the future of AI finds its way into debates about the past. As always, facile futurists juggle intuition and historical metaphor to paint a picture of what lies around the corner. Skeptics answer with grounded fact and doses of attitude.

Call me a skeptic. As you look to the future of AI, make sure to look at the past with a clear eye. Do not infer the wrong lesson from an incomplete metaphor.

More positively, the US system has a long history of generating unanticipated gains that no futurist could have forecast. That provides the best reason for the US and Canadian governments to fund more AI research. Ironically, the past also tells us to expect increases in funding to yield unanticipated benefits.

Copyright held by IEEE.
