Of Homo Economicus and Superintelligence

Last week I had the pleasure of being a commentator on Nick Bostrom's talk at the star-studded Machine Learning and the Market for Intelligence conference held at the University of Toronto. Bostrom is the author of Superintelligence: Paths, Dangers, Strategies and was the subject of a massive New Yorker profile the other week. Both are well worth your time.

Bostrom’s talk was pretty much identical to his TED talk:
https://embed-ssl.ted.com/talks/nick_bostrom_what_happens_when_our_computers_get_smarter_than_we_are.html

Basically, Bostrom argues that if we assume artificial intelligence surpasses human-level performance, regulating it becomes pretty darn hard. He argues that superintelligences could become smart, really smart. How smart is that? Maybe an IQ of 6,000, which is pretty mind-bogglingly smart.

His work outlines all the ways we can think of that might prevent a superintelligence from controlling or destroying us. Bostrom fears that, if we are lucky, we will become like horses: pretty much economically worthless, and so safely left to die away; if we are unlucky, we become a threat whose very existence needs to be swiftly extinguished. Suffice it to say, AI experts have no consensus on any of this, so the probability that these events come to pass should be taken seriously.

When I stood up to discuss all of this as a non-expert on AI, having read his book, I had some confidence that I might be able to say something useful. To an economist, Superintelligence is very familiar territory. It is, at its most basic, a book about regulation, its difficulties, and its unintended consequences. Suffice it to say, many of us are quite familiar with all of that.

For instance, Bostrom talks about ways of controlling the beast. If we are worried about AIs taking over, can't we isolate the AI so it does no harm? The problem with putting an AI in a box is that we can imagine ways a superintelligent AI could get out of the box. Why? Because there is always someone, somewhere, with a key, and where there is a key there will always be a weak link. Bostrom shows this again and again using a trick common to us economists: knowing that there is always a way to break a constraint if someone is patient enough.

Alternatively, Bostrom considers some reprogramming. For an economist, this would involve selecting the preferences of the AI so that it did not do harm. But here things get really hard. What preferences would you give the AI? Even if you asked an AI not to harm humans, we run into the issue of what 'harm' means. Any really smart AI intent on ensuring people don't come to harm would, as many helicopter parents know, put us in a box and keep us there. Suffice it to say, no one has figured out a way of programming the utility functions of AIs.
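To make that concrete, here is a deliberately crude toy of my own (the action names and the harm numbers are invented for illustration; this is not from Bostrom's book): hand an optimizer the literal objective "minimize expected harm to humans" and it will happily pick the helicopter-parent option.

```python
# Toy illustration of objective misspecification (hypothetical numbers).
# An optimizer given the literal goal "minimize expected harm" picks
# confinement, because the objective omits everything else we care about.
actions = {
    "leave_humans_free": 0.03,          # some residual risk of harm
    "confine_humans_in_a_box": 0.0001,  # very "safe", but not what we meant
}

best_action = min(actions, key=actions.get)
print(best_action)  # -> confine_humans_in_a_box
```

The point is not that the optimizer errs; it is that the stated objective leaves out almost everything we actually value.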

And all of that leaves aside the issue that, even if we knew what to do, there are enough less-than-socially-minded people out there that an AI could be built with any old preferences and do harm of the kind Hollywood movies have long depicted.

All this could leave one pretty worried. But I am actually less worried than the philosophers who worry about this. That is because economists found one answer to this question a long time ago: general equilibrium theory.

This might be surprising, but I start from one really important place: economists have long been criticised for the assumptions they make about the rationality and computational power of the agents they model. But when it comes to considering superintelligent agents, that criticism is completely invalid. Indeed, economic theorists have been thinking about what happens when superintelligent agents interact for over a century. Moreover, what they have found suggests that co-existence (maybe not happy co-existence) is more likely than not. Importantly, superintelligences will likely be constrained by one another in equilibrium.

To see this, you need to note three things:

  1. In order to control anything, you need access to physical resources.
  2. We cannot presume that any particular preference ordering for any agent is impossible; in other words, diversity has to be the axiom.
  3. Even superintelligences will run into limits that bound their mental abilities.

With these three assumptions, we know that a general equilibrium involving all intelligences exists and is Pareto optimal. In other words, good news.
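For readers who want the formal skeleton, here is a minimal sketch of the textbook Arrow-Debreu result I am leaning on. Treating superintelligences as ordinary price-taking agents is my gloss, and it hides all of the hard interpretive work.

```latex
% A pure exchange economy with agents i = 1, ..., n (humans and
% superintelligences alike) and goods l = 1, ..., L.
\begin{align*}
  &\text{Endowments: } \omega_i \in \mathbb{R}^L_+
     && \text{(assumption 1: control requires physical resources)} \\
  &\text{Preferences: } u_i : \mathbb{R}^L_+ \to \mathbb{R} \text{ arbitrary, up to regularity}
     && \text{(assumption 2: diversity is the axiom)} \\
  &\text{Equilibrium: } x_i^* \in \arg\max_{x_i}\, u_i(x_i)
     \ \text{s.t.}\ p^* \cdot x_i \le p^* \cdot \omega_i,
     && \textstyle\sum_i x_i^* = \sum_i \omega_i .
\end{align*}
% Under the standard conditions (continuity, local non-satiation, convexity)
% an equilibrium (p^*, x^*) exists and, by the First Welfare Theorem, is
% Pareto optimal: no agent, however intelligent, can be made better off
% without making some other agent worse off. Assumption 3 is what licenses
% treating even a superintelligence as a constrained optimizer.
```

Nothing in the theorem requires the agents to be human, or even comparably intelligent; it only requires that they all bid for the same finite endowment.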

Explaining this fully will take more than a blog post (and certainly longer than my conference discussion time allowed), and even I would say that this insight isn't enough on its own to give us real assurance. But it does point us in an interesting set of directions for examination.

The first policy to think about concerns property rights. The general equilibrium outcome didn't help the horses. The reason is that they couldn't own property, so when they disappeared the system went on without them. But if there were a way not only to ensure our property rights but also to stop AIs from owning physical stuff, then there is a basis for thinking (via assumption 1 above) that we might stand a chance in the new economy.

The second policy is related to the first and concerns violence. One of the reasons our economy does as well as it does now is that the only agent allowed to commit violence is the government. That is not always a great state of affairs, but it is limiting. We need to find a way, as Asimov told us many years ago, to ensure that AIs do not engage in violence, meaning the physical seizure and control of property and life. In the Age of Ultron this may seem like a tall order, but as a principle it is certainly a good place to start.

The final policy is how to provide long-lasting ways for the AIs to enforce the other policies amongst themselves. AIs checking AIs is, in fact, our greatest hope, and so we need institutions that will allow that. I suspect some form of registration via a distributed ledger might be a good place to start (a toy sketch of what I have in mind is below). And perhaps this is the sort of thing that those behind OpenAI will be thinking about.
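To be concrete about what I mean by registration via a distributed ledger, here is a toy sketch, with the names and structure entirely my own: an append-only registry where each entry commits to the previous one, so past registrations and policy commitments cannot be quietly rewritten. A real system would need to be genuinely distributed and adversarially robust, which this is not.

```python
import hashlib
import json
import time


class AIRegistry:
    """Toy append-only registry: each entry commits to the previous entry's
    hash, so tampering with any past registration breaks the chain."""

    def __init__(self):
        self.entries = []

    def register(self, agent_id, policy_commitments):
        """Record an agent and the policies it commits to (e.g. no ownership
        of physical property, no violence). Returns the entry's hash."""
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        record = {
            "agent_id": agent_id,
            "policy_commitments": policy_commitments,
            "timestamp": time.time(),
            "prev_hash": prev_hash,
        }
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(record)
        return record["hash"]

    def verify(self):
        """Check that no entry has been altered and the chain is unbroken."""
        prev_hash = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
                return False
            prev_hash = entry["hash"]
        return True


# Illustrative usage:
# registry = AIRegistry()
# registry.register("agent-001", ["no ownership of physical property", "no violence"])
# assert registry.verify()
```

The design choice that matters is the append-only commitment: any AI (or human) can re-check the whole chain, which is what makes "AIs checking AIs" at least conceivable as an institution.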

I realise that I have been somewhat short on details here. There is certainly more to be done to think about how to bring economics into this debate. My point is that what was a weakness of economics is, in fact, a strength here, and there exists a body of knowledge that can help move the question of how to deal with superintelligence forward.

8 Replies to “Of Homo Economicus and Superintelligence”

  1. Who says our robot overlords won't be rent-seekers? Or that their potential rents to the robot "race" won't exceed the efficiency gains?

  2. I don't think algorithmically ensuring property rights is any easier than making AIs "friendly" (that is, making them respect human values). If we dig down deep enough, they may require solving the same underlying problems. This shows up in the endless hairsplitting that ensues when libertarians try to define "initiation of force".

    But even leaving that aside, an AI with an IQ of 6,000 could defraud any human of everything they own. Under current fraud laws alone, it would be a piece of cake. If we try to tighten up the fraud laws to keep the ultra-intelligent machines under control, we're back in the same set of unsolved problems.

    Finally, property rights are determined by legislation, and legislation is determined by politics (see copyright, patents, what debts can be cleared in bankruptcy, etc.). AIs with IQs far higher than human could easily get property rights that would wipe us out, even if we had veto power, since we wouldn't be smart enough to figure out how we were getting screwed.

    And once a human has been deprived of all property, under your scheme they have no protection left. Breaking them up for parts and selling those as part of bankruptcy is entirely rational.

    Bottom line, the only way to prevent our extinction is to make sure the AIs don’t want us to die out.

  3. This seems to kind of miss the point: if we were able to make AIs respect property rights, ensure that they kept that respect even while modifying their own code, and so on, then why not just make them follow a rule like "don't kill humans"?

    The hard part is encoding values into AIs. Determining which values we should encode is massively easier, and might just be solved by something like Coherent Extrapolated Volition.
