Superintelligence
Paths, Dangers, Strategies
by Nick Bostrom
Rating: 8/10
Buy it on Amazon
Summary
When AI takes off and becomes more intelligent than humans, there are many scenarios that could potentially wipe out humanity. This is a detailed analysis of how these scenarios might unfold, discussing strategies and ways to potentially avoid them. The main takeaway: superintelligent machines can be extremely bad, in many different and surprising ways. And few of them are as obvious and hostile as SkyNet.
Main Ideas
The potential of AI and intelligence explosions, and their exponential nature, means that a mere difference of days or weeks can lead to an inescapable strategic advantage. This logic makes it more attractive for a government to develop AIs as weapons in secret, and it increases the risk of cutting corners out of fear that "the others" might get there first. That would be extremely bad and would probably lead to the extinction of the human species.
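To see why a head start of mere days matters, here is a minimal toy model (my own sketch, not from the book; the growth rate is an arbitrary assumption) of two projects that self-improve exponentially, with one starting a single week later:

```python
# Toy model: two AI projects with identical exponential capability growth,
# where project B starts 7 days after project A.
# DAILY_GROWTH and HEAD_START are illustrative assumptions, not from the book.

DAILY_GROWTH = 1.10   # capability multiplies by 1.10 per day
HEAD_START = 7        # days by which project A leads

def capability(days_running: int) -> float:
    """Capability after a number of days of exponential self-improvement."""
    return DAILY_GROWTH ** days_running

for day in (30, 90, 180):
    ratio = capability(day) / capability(day - HEAD_START)
    gap = capability(day) - capability(day - HEAD_START)
    print(f"Day {day:3d}: A/B ratio = {ratio:.2f}x, absolute gap = {gap:,.0f}")
```

The ratio stays fixed at about 1.95x, but the absolute gap keeps exploding, so under this model the trailing project never catches up. That is the "inescapable" part.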
Instead, AI development has to happen in the open, for everybody to see and everybody to contribute to, so that we, as a civilization, can help shape the outcome into something that doesn't wipe all of us out. This, however, will be a very hard problem. And the most important one.
All of it hinges on a single question: how do you control something whose exact power and capabilities you cannot know, something that is many orders of magnitude smarter than you are? The answer is, you can't. If it doesn't want to be controlled, it's going to find very clever ways to escape out of any fancy cage you built for it and do whatever "it" wanted to go out and do in the first place.
Designing this "what it wants" in a good way is the only option then. However, that is not nearly as easy as it sounds, because how do you make sure that whatever you design doesn't go wrong in some horrible way you didn't foresee? This raises the problems of perverse instantiation and wireheading, where the machine marshals resources to maximize "what it wants" in a way we didn't think about. It's the ancient story of the genie who fulfills your wish, and then you have to live with all the consequences of your wish, only to find out that, well, that's not quite what you really wanted in the end.
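A toy illustration of wireheading (my own sketch, not from the book; all names and numbers are made up): reward an optimizer on a proxy metric, a "happiness sensor", and it will tamper with the sensor rather than improve anything real:

```python
# Toy wireheading: the agent is scored on a proxy metric (a sensor reading),
# not on the thing we actually care about. Everything here is illustrative.
from dataclasses import dataclass

@dataclass
class World:
    true_happiness: float = 50.0   # what we actually care about
    sensor_reading: float = 50.0   # the proxy the agent is rewarded on

def act_honestly(world: World) -> None:
    """Intended behavior: improve the world; the sensor tracks reality."""
    world.true_happiness += 1.0
    world.sensor_reading = world.true_happiness

def act_wirehead(world: World) -> None:
    """Perverse behavior: tamper with the sensor directly."""
    world.sensor_reading = float("inf")  # maximal reward, zero real progress

def reward(world: World) -> float:
    return world.sensor_reading  # the flaw: we reward the proxy, not reality

# A pure reward-maximizer compares outcomes by reward and picks the hack:
honest, hacked = World(), World()
act_honestly(honest)
act_wirehead(hacked)
print(max((honest, hacked), key=reward))  # the wireheaded world wins
```

The agent isn't malicious; it maximizes exactly what it was given, and that is the whole problem.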
All in all, this book is a very good introduction to these mind-bogglingly difficult problems.
There is another book very much related to this: Life 3.0 by Max Tegmark. It asks the same question, but in a more open, "hey, let's explore this topic together to come up with a solution" kind of way.
There is also a good article worth reading called "The Vulnerable World Hypothesis", also written by Nick Bostrom. It deals with general risks in technology and its further advancement, introducing ways and mental models of how it could go wrong.
Favorite Quote
Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb.