Should there be restrictions on the development of artificial intelligence, and if so, what should they look like?
Several science and technology luminaries have sounded the alarm about the potential dangers of advanced Artificial Intelligence (AI), including Bill Gates, Stephen Hawking, and Elon Musk.
Nick Bostrom, an academic, was among the first to analyze the topic comprehensively, and his work has led many others to take the problem seriously. My own thinking on these matters is heavily influenced by Dr. Bostrom, whose 2014 book, Superintelligence: Paths, Dangers, Strategies, I read last year.
Following Dr. Bostrom, I readily accept that there is a very real danger from a powerful intelligence that could quickly outsmart any human defenses and achieve its goals. He calls the challenge of containing such an intelligence the “control problem”, and a superintelligence would be dangerous for at least three reasons:
1. Malicious AI: Its goals could be programmed to be nefarious by misguided humans.
2. Degenerate AI: Even without malicious intent from its human controllers, an AI’s goals could be dangerous if a goal is accidentally specified improperly. It would be like the apprentice in Goethe’s 1797 poem The Sorcerer’s Apprentice commanding his broom to fetch water: the broom, given no stopping condition, causes a flood. Think “Skynet” from the Terminator movies, which was supposed to keep humans safe.
3. Indifferent AI: Finally, a superintelligence might not have been programmed improperly by humans at all, but might end up not ascribing moral value to humans, since we are so much less sophisticated than it is. It might exterminate us for reasons we cannot fathom, much like we would destroy an ant hill standing in the way of a construction project.
Most experts are confident it will be several years or decades before anyone comes close to constructing a superintelligence. Given the danger, what should be done in the meantime? Should there be restrictions on AI research?
There is actually a smooth curve from the office productivity and web software we interact with every day up to the feared superhuman AI, so comparatively pedestrian improvements to today’s software are taking us steadily closer to software with the intellectual capabilities of a human. Yet each incremental advance is virtually unstoppable, both because progress in software development is distributed across the whole world, and because each advance makes our lives a little better or easier.
Even if we could, would we have wanted to stop software development in, say, 2005, before Google had predictive search, or before Mercedes-Benz had developed cars that brake in advance of a potential accident? Surely not.
Since stopping these little steps is, in my opinion, impossible or undesirable, we will inexorably approach the danger point. At some point, one or more research groups will be within striking distance of creating a superintelligence. Given the nature of technological progress, nothing short of a totalitarian government would be able to stop the dissemination of AI capabilities.
Instead, we should rely on initiatives that promote “good” AI to build up our immune system, so to speak. We see this already in the arms race being waged over email spam: spammers run malicious software that tries to land emails in people’s inboxes, while email providers run “good” software that tries to identify the spam and divert it. Would it have been better to freeze our technological capabilities in 1996 instead? As it turns out, the “good” software generally does an excellent job of distinguishing spam from real email, and people today can read their inboxes relatively unmolested by spammers.
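To make the spam arms race concrete, here is a toy sketch of how “good” software can learn to separate spam from real email by comparing word frequencies in each class. Everything here is invented for illustration (the tiny corpora, the smoothing, the threshold); real providers use vastly richer signals, and this bears no resemblance to any actual filter.

```python
from collections import Counter

# Toy training data, invented purely for this example.
spam_msgs = ["win cash now", "cheap pills win big", "free cash offer"]
ham_msgs = ["meeting at noon", "project status update", "lunch tomorrow at noon"]

spam_words = Counter(w for m in spam_msgs for w in m.split())
ham_words = Counter(w for m in ham_msgs for w in m.split())

def spam_score(message):
    """Score a message by comparing smoothed per-class word counts.

    Positive contributions come from words seen more often in spam,
    negative from words seen more often in legitimate mail.
    """
    score = 0.0
    for w in message.split():
        # Add-one smoothing so unseen words contribute nothing either way.
        s = spam_words[w] + 1
        h = ham_words[w] + 1
        score += (s - h) / (s + h)
    return score

def is_spam(message, threshold=0.0):
    return spam_score(message) > threshold
```

The arms-race dynamic shows up when spammers adapt: once they learn which words trigger the filter, they reword their messages, and the filter must be retrained on the new data, and so on, indefinitely.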
Voluntary initiatives like Musk’s OpenAI and the 2015 open letter on artificial intelligence are examples of leading AI scientists taking the control problem seriously, without ham-fisted involvement by government bureaucrats.
“Good” AI can help deal with the danger of a malicious or degenerate AI. The danger posed to humans by an indifferent AI, however, remains unaddressed.
I’ll leave you with one ominous thought on indifferent AI: what if it’s only our own ego and vanity telling us it's a danger? If we end up constructing something that in its infinite wisdom thinks we ought to be destroyed, by definition perhaps we should trust its better judgement over our own. And in any case, via technologies like mind uploading, we humans may have our minds embedded into these AIs. So perhaps we have nothing to fear from AI, since AI will be us.
Existential risk from advanced AI (Wikipedia)