Can a superintelligence self-regulate and not destroy us?

One of the most compelling reasons why a superintelligent (i.e., way smarter than human) artificial intelligence (AI) might end up destroying us is the so-called paperclip apocalypse. Posited by Nick Bostrom, this scenario involves some random engineer creating an AI with the goal of making paperclips. That AI then becomes superintelligent and, in its single-minded …

Kahneman on AI versus Humans

At our AI conference last week, Nobel Laureate Danny Kahneman was commenting on a paper by Colin Camerer but ended up spending much of his time discussing his views on whether AI (or robots) would replace humans. He had definite opinions on the subject. Here is a video of his remarks: