Will AI Bring The Apocalypse?

What Will Happen When Machines Become Smarter Than Us?

Artificial Intelligence, interview Elon Musk
 

FUTURE PROOF – BLOG BY FUTURES PLATFORM


The Singularity. This idea of technology improving itself ever more rapidly and having a profound impact on society (and possibly leading to the extinction of our own species) is not new. It has been around since I. J. Good described an "intelligence explosion" in 1965, and it has been the subject of many sci-fi novels and movies, most of them with a dark ending for human civilization. But for the most part, these have been distant worries. Maybe not for long.

 

Last month, the topic of superintelligence, the term for an artificial intelligence that is smarter than humans and capable of improving itself, made headlines. But it wasn’t because some significant development or breakthrough took place. The headlines came because two of the most distinguished figures in tech, Mark Zuckerberg and Elon Musk, came head-to-head over the dangers of artificial intelligence.

Musk has frequently voiced concern about the possible negative consequences of an all-powerful AI, and has recommended that we start looking into its implications and regulate it early, in order to avoid a possible catastrophe. Zuckerberg, on the other hand, said during a live stream that warnings against AI, like those of Musk, are “pretty irresponsible,” and instead opted for a more positive outlook. Musk’s response? “I have talked to Mark about this. His understanding of the subject is limited.”

We might not know who is right for many decades. According to expert surveys compiled by Nick Bostrom, author of Superintelligence, there is a 10% chance that superintelligence will be achieved by 2022, a 50% chance it will arrive by 2040, and a 90% chance it will arrive by 2075. There is also a chance that superintelligence will simply never arrive. Nonetheless, Bostrom, Elon Musk, Bill Gates and Stephen Hawking, among many other influential thinkers, seem to believe we should start preparing for it.

What would the advent of such an artificial intelligence mean? It is important to keep in mind that this is very hard to predict. As the New Scientist notes, the reason we call it a singularity is that we cannot see beyond that point (just as we cannot see what lies beyond the singularity of a black hole). But some attempts have been made, mostly speculative, and they have not been very rosy.

For one, as Bostrom argues in his book, it would be very difficult to give an AI commands that are entirely benign. He gives the example of the paperclip maximizer. If we asked a superintelligent AI to maximize the number of paperclips in the world, the request may sound innocuous, but it could lead to an AI that begins depleting all the resources on Earth and turning its infrastructure into paperclip manufacturing plants. Once Earth’s resources were depleted, it would look for ways of venturing into space and the rest of the solar system to continue its mission. Even if we asked it to build only 1,000 paperclips, it might keep gathering and converting resources to increase its certainty that it has built exactly 1,000 paperclips, which could lead it down the same path as when it was trying to maximize paperclips on Earth.

Does this sound far-fetched? Probably. It is hard to blame anyone for thinking so. But it is also hard to dismiss the broad support that a more cautious view of the future of AI commands. If a superintelligent entity were to arise, whether or not it could be controlled, it would certainly pose some dangers, possibly more than nuclear weapons, as Musk has suggested. Whether it acted on its own or was used by someone with malevolent intentions, it would be hard to stop. And if you think we would notice something before it happened, or that we could simply shut it down the moment it misbehaved, Bostrom has an answer for that too: being superintelligent, it could fake its innocence and devise creative, intelligent ways to keep itself running indefinitely.

Then there is the more optimistic camp, with people like Mark Zuckerberg and Eric Schmidt, who believe that AI-turned-rogue is the stuff of movies. Indeed, if we consider for a minute only the good that may come from a superintelligence, we could be talking about cures for all diseases, solutions for most, if not all, of our problems, and fulfilling lives for all human beings from then on. This is no doubt a much more reassuring scenario.

What do you think? Will a singularity eventually lead to our extinction? Will it make our world better than ever before? Something in between? Or will it never arrive?

Video: Recode – Artificial Intelligence | Elon Musk, SpaceX and Tesla | Code Conference 2016


Make collaborative foresight easy and engaging with Futures Platform. Access a futurist-curated library of 900+ future trends, visualise future scenarios and document your process all in one place.

 
