Singleton: A world order in which there is at the global level a single decision-making agency.
I am very attuned to the evidence of climate change and have been concerned about it. However, after reading Superintelligence by Nick Bostrom, I have symbolically dropped to my knees to pray: “Bring it on. Please bring climate change on quickly.” Better that than the greater threat of a singleton emerging from the intelligence explosion Bostrom describes.
Actually, I exaggerate in saying that I have read this book. In some respects, it is way, way over my head, so I have had to skim for the major concepts. The author is the director of the Future of Humanity Institute at Oxford University. He has a PhD in economics as well as a background in physics, computational neuroscience, and mathematical logic. However, his intellectual ballast seems to come from his studies in the field of philosophy. I can hardly imagine how high his IQ is.
I recently attended a speech Bostrom gave in Santa Fe and was so impressed that I bought his book. By coincidence, his presentation came amid my growing alarm over the rising power of the tech world. According to Bostrom, an intelligence explosion is looming through a number of potential routes, two of which I will summarize here.
- One would be the engineering of a mechanical system that can actually learn, deal with uncertainty, and be capable of concept formation. This kind of artificial intelligence (AI) is supposedly feasible within this century.
- Another approach would be to “emulate” the human brain to produce intelligent software. This would involve creating a detailed scan of a human brain (carefully selected for certain virtues, presumably) and then, through a series of stages, transferring its “neurocomputational structure” to a powerful computer. A successful brain emulation is projected to take longer than the AI described above.
The intelligence created by either approach, alone or in combination with the other, could conceivably produce this thing called superintelligence, which far surpasses the intelligence of any person on earth. It then proceeds to create other artificial intelligence that perpetually improves and empowers itself. Bostrom is excellent at describing things, and he says that, although we might think of the resulting superintelligent AI as smart in the way that a scientific genius is smart compared to an average human being, it would be more appropriate to compare it to the intelligence of a human being versus that of a beetle or a worm.
Would an intelligence explosion of this nature result in an existential catastrophe? Bostrom describes this as a risk in which an “adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.” Apparently existential catastrophe is a plausible default outcome of creating machine superintelligence.
After all, there would be questions about the powerful machines’ values, motives, and missions, and whether we would have the wisdom at the outset to load those values according to our highest ideals. Bostrom gives two darkly humorous examples: What if an AI dedicated to resource acquisition comes to see us as just that, a resource, and then looks to the entire solar system? Or what if an AI designed to maximize something like paperclip production in a factory proceeds to convert first the Earth and then large chunks of the universe into paperclips? These are examples of singletons.
Clearly, there are many unintended consequences to consider, and Bostrom is gifted at revealing the complexities involved. He never really comes out and says this, but one can’t help but conclude that it would be in humanity’s best interest to slow this version of progress down. While the human brain has developed with extraordinary speed in the area of technology, evolution seems to be lagging in areas like wisdom, justice, compassion, morals, ethics, etc.
In fact, in his conclusion, Bostrom writes that we humans are playing with the potential of technology “like small children playing with a bomb.” And even if all of humanity grasped the magnitude of the risk unfolding, there are those among us, namely the engineers of computer intelligence, who are so excited by the possibilities and by the fun of creativity that they may be impossible to rein in. There is also the fear afoot that if we don’t lead the way, another country will do so first.
Nick Bostrom, who is the author of 200 publications and is on Foreign Policy’s Top 100 Global Thinkers list, is only 42. I wonder how well he sleeps at night.
As I said at the outset, this book is the reason why the significance of climate change has been transformed in my own mind. The urgent needs it will create, to build dikes, cork smokestacks, secure water supplies, reform agriculture, etc., could turn out to be a blessing if they slow the rapid movement toward superintelligence. After all, there is probably nothing like trouble with Planet Earth itself to ground us in the basic needs of humanity.