One of the most compelling reasons why a superintelligent (i.e., far smarter than human) artificial intelligence (AI) might end up destroying us is the so-called paperclip apocalypse. Posited by Nick Bostrom, this scenario involves some random engineer creating an AI with the goal of making paperclips. That AI then becomes superintelligent and, in the single-minded …

