Amongst economists I know (myself included), Ted Chiang is universally beloved as a science fiction writer. So it was with great excitement that I saw yesterday he had written a Buzzfeed piece on artificial intelligence and the power of corporations. Alas, it turned out to be an incoherent mess that went nowhere.
Chiang’s thesis is the following. Take your average Silicon Valley mogul (say, Elon Musk) who is fearful of artificial intelligence. Chiang notes their concern is less Terminator and more ‘unintentionally unleashing a beast’:
Speaking to Maureen Dowd for a Vanity Fair article published in April, Musk gave an example of an artificial intelligence that’s given the task of picking strawberries. It seems harmless enough, but as the AI redesigns itself to be more effective, it might decide that the best way to maximize its output would be to destroy civilization and convert the entire surface of the Earth into strawberry fields. Thus, in its pursuit of a seemingly innocuous goal, an AI could bring about the extinction of humanity purely as an unintended side effect.
This is a familiar topic. Indeed, I have been rather obsessed with the same idea of late. Chiang thinks that notion is pretty stupid, but he wonders why Silicon Valley types are worried about it. And he has a thesis:
This scenario sounds absurd to most people, yet there are a surprising number of technologists who think it illustrates a real danger. Why? Perhaps it’s because they’re already accustomed to entities that operate this way: Silicon Valley tech companies.
So far so good. Why are they scared? Because they have been engaging in unbridled use of power themselves. If they can do it, why not AI?
Now if you believed (a) that unintentional destruction by AI was absurd and (b) that the people who believe in it have been tarnished by their own view of the power of corporations, then the logical next step is surely to argue that the power of corporations isn’t real in some way and, therefore, that the inference that AI could be the same is misplaced.
Chiang didn’t go that way. Instead, the remainder of the piece is about how the Silicon Valley tech companies do have unbridled power. And he is sceptical, therefore, of our ability to control AI precisely because we can’t control corporations.
There are industry observers talking about the need for AIs to have a sense of ethics, and some have proposed that we ensure that any superintelligent AIs we create be “friendly,” meaning that their goals are aligned with human goals. I find these suggestions ironic given that we as a society have failed to teach corporations a sense of ethics, that we did nothing to ensure that Facebook’s and Amazon’s goals were aligned with the public good. But I shouldn’t be surprised; the question of how to create friendly AI is simply more fun to think about than the problem of industry regulation, just as imagining what you’d do during the zombie apocalypse is more fun than thinking about how to mitigate global warming.
So I am lost. Does he think AI is a problem or not? Well, we never actually find out. But Chiang does think Silicon Valley tech companies are a problem. In other words, I think he is saying “stop worrying about AI controlling you, you are already controlled by others in the form of corporations.”
What I’m far more concerned about is the concentration of power in Google, Facebook, and Amazon. They’ve achieved a level of market dominance that is profoundly anticompetitive, but because they operate in a way that doesn’t raise prices for consumers, they don’t meet the traditional criteria for monopolies and so they avoid antitrust scrutiny from the government. We don’t need to worry about Google’s DeepMind research division, we need to worry about the fact that it’s almost impossible to run a business online without using Google’s services.
The problem I have here is that these statements are overblown. I am not saying these companies do not have power. They do. But I can’t see them as “profoundly anticompetitive.” Nor is it obvious that they do not meet the traditional criteria for monopolies. They surely do (for Google and Facebook in their main markets, for Amazon in some markets). And that attracts considerable antitrust scrutiny that I am convinced forces these companies to constrain their exercise of monopoly power.
Indeed, the last statement, that it is almost impossible to run a business online without using Google’s services, is just plain wrong. For starters, Facebook and Amazon manage it. But so do many, many others. I am sure there are businesses somewhere beholden to Google, but I do not believe they are significant in any way.
It gets worse when he turns to Facebook.
It’d be tempting to say that fearmongering about superintelligent AI is a deliberate ploy by tech behemoths like Google and Facebook to distract us from what they themselves are doing, which is selling their users’ data to advertisers. If you doubt that’s their goal, ask yourself, why doesn’t Facebook offer a paid version that’s ad free and collects no private information? Most of the apps on your smartphone are available in premium versions that remove the ads; if those developers can manage it, why can’t Facebook? Because Facebook doesn’t want to. Its goal as a company is not to connect you to your friends, it’s to show you ads while making you believe that it’s doing you a favor because the ads are targeted.
Woah. His argument is that he and others want to pay for an ad-free Facebook but Facebook won’t let them. He paints a picture of Facebook as all about ads for ads’ sake. Surely, he doesn’t mean that. Surely, Facebook doesn’t offer a premium version because it does not believe that a user would pay more for it than Facebook earns from placing ads in front of them. Unlike with other apps, this likely means that Facebook’s ads aren’t that annoying to its users. That is hardly a company playing the role of an ad-placement-obsessed machine. It is a company that has found balance. No wonder Chiang is confused that Zuckerberg doesn’t fit his narrative of the AI-scared Silicon Valley mogul.
Chiang is a person who conceived of how we would talk with aliens (in the story that eventually became the movie Arrival). Yet if this piece has a message, it is confused and unsubstantiated. I think Chiang wants Silicon Valley tech types to see their place in the world more clearly. Sadly, he doesn’t lead by example here.
Hi, Chiang may be thinking this way:
– If we can make Silicon Valley tech behave ethically, then making AI ethical should be unquestionably straightforward
– If we can’t make Silicon Valley tech behave ethically, it is still possible, even easy, to make AI ethical.
The fact that we can’t control human-led Silicon Valley need not mean it is difficult to make AI ethical.
Personally, I am optimistic about getting Silicon Valley tech to behave better ethically. It is not easy, but with perseverance and patience, I am sure Silicon Valley will improve over time. The path will require technocrats (including economists) to start exercising their “moral muscles” (per Michael Sandel). This may mean making moral philosophy compulsory in the curriculum (as it was for Adam Smith and David Hume).
Kien, which set of monopolies, historically, has had leaders who chose voluntarily to grow & use moral muscles?
Note that Silicon Valley monopolies cannot even be given away by those who own or otherwise control them: major parts of the system (the internet, web standards, Linux…) have been given to the public; the current generation of Silicon Valley companies owe their existence to finding ways to lock down & monetize what was given free to us all. Choose to become a philanthropist and you will simply be replaced.
Hi, Frederick. Thanks for your comment! It is admittedly difficult to think of good examples of business leaders who engage on moral issues, but that may be due more to my ignorance. It is interesting to see how Uber and Facebook have each dealt with criticisms of their respective moral failings. Public criticism does seem to play an important role!
The moral failing of Facebook has the same roots as any monopoly, or any empire: they have a business model that manages to tax the whole world and bring the loot back to a small group of shareholders and employees. Of course that model puts huge power in their hands, power which they may choose to exercise more or less nicely. But the basic problem is the concentration of power.