
Superintelligence: Paths, Dangers, Strategies


Superintelligence asks the questions: what happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us? Nick Bostrom lays the foundation for understanding the future of humanity and intelligent life. The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. If machine brains surpassed human brains in general intelligence, then this new superintelligence could become extremely powerful--possibly beyond our control. As the fate of the gorillas now depends more on humans than on the species itself, so would the fate of humankind depend on the actions of the machine superintelligence. But we have one advantage: we get to make the first move. Will it be possible to construct a seed Artificial Intelligence, to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation?



30 reviews for Superintelligence: Paths, Dangers, Strategies

  1. 4 out of 5

    Manny

    Superintelligence was published in 2014, and it's already had time to become a cult classic. So, with apologies for being late getting to the party, here's my two cents. For people who still haven't heard of it, the book is intended as a serious, hard-headed examination of the risks associated with the likely arrival, in the short- to medium-term future, of machines which are significantly smarter than we are. Bostrom is well qualified to do this. He runs the Future of Humanity Institute at Oxford, where he's also a professor in the philosophy department, he's read a great deal of relevant background, and he knows everyone. The cover quotes approving murmurs from the likes of Bill Gates, Elon Musk, Martin Rees and Stuart Russell, co-author of the world's leading AI textbook; people thanked in the acknowledgements include Demis Hassabis, the founder and CEO of Google's Deep Mind. So, why don't we assume for now that Bostrom passes the background check and deserves to be taken seriously? What's he saying?

First of all, let's review the reasons why this is a big deal. If machines can get to the point where they're even a little bit smarter than we are, they'll soon be a whole lot smarter than we are. Machines can think much faster than humans (our brains are not well optimised for speed); the differential is at least in the thousands and more likely in the millions. So, having caught us up, they will rapidly overtake us, since they're living thousands or millions of their years for every one of ours. Of course, you can still, if you want, argue that it's a theoretical extrapolation, it won't happen any time soon, etc. But the evidence suggests the opposite. The list of things machines do roughly as well as humans is now very long, and there are quite a few things, things we humans once prided ourselves on being good at, that they do much better. More about that shortly.

So if we can produce an artificial human-level intelligence, we'll shortly after have an artificial superintelligence. What does "shortly after" mean? Obviously, no one knows; hence the "fast takeoff/slow takeoff" dichotomy that keeps turning up in the book. But probably "slow takeoff" will be at most a year or two, and fast takeoff could be seconds. Suddenly, we're sharing our planet with a being who's vastly smarter than we are. Bostrom goes to some trouble to help you understand what "vastly smarter" means. We're not talking Einstein versus a normal person, or even Einstein versus a mentally subnormal person. We're talking human being versus a mouse. It seems reasonable to assume the superintelligence will quickly learn to do all the things a very smart person can do, including, for starters: formulating and carrying out complex strategic plans; making money in business activities; building machines, including robots and weapons; using language well enough to persuade people to do dumb things; etc etc. It will also be able to do things that we not only can't do, but haven't even thought of doing.
And so we come to the first key question: having produced your superintelligence, how do you keep it under control, given that you're a mouse and it's a human being? The book examines this in great detail, coming up with any number of bizarre and ingenious schemes. But the bottom line is that no matter how foolproof your scheme might appear to you, there's absolutely no way you can be sure it'll work against an agent who's so much smarter. There's only one possible strategy which might have a chance of working, and that's to design your superintelligence so that it wants to act in your best interests, and has no possibility of circumventing the rules of its construction to change its behavior, build another superintelligence which changes its behavior, etc. It has to sincerely and honestly want to do what's best for you. Of course, this is Asimov Three Laws territory; and, as Bostrom says, you read Asimov's stories and you see how extremely difficult it is to formulate clear rules which specify what it means to act in people's best interests. So the second key question is: how do you build an agent which of its own accord wants to do "the right thing", or, as Socrates put it two and a half thousand years ago, is virtuous? As Socrates concludes, for example in Meno and Euthyphro, these issues are really quite difficult to understand. Bostrom uses language which is a bit less poetic and a bit more mathematical, but he comes to pretty much the same conclusions. No one has much idea yet of how to do it.

The book reaches this point and gives some closing advice. There are many details, but the bottom line is unsurprising given what's gone before: be very, very careful, because this stuff is incredibly dangerous and we don't know how to address the critical issues.

I think some people have problems with Superintelligence due to the fact that Bostrom has a few slightly odd beliefs (he's convinced that we can easily colonize the whole universe, and he thinks simulations are just as real as the things they are simulating). I don't see that these issues really affect the main arguments very much, so don't let them bother you if you don't like them. Also, I'm guessing some other people dislike the style, which is also slightly odd: it's sort of management-speak with a lot of philosophy and AI terminology added, and because it's philosophy there are many weird thought-experiments which often come across as being a bit like science-fiction. Guys, relax. Philosophers have been doing thought-experiments at least since Plato. It's perfectly normal. You just have to read them in the right way.

And so, to conclude, let's look at Plato again (remember, all philosophy is no more than footnotes to Plato), and recall the argument from the Theaetetus. Whatever high-falutin' claims it makes, science is only opinions. Good opinions will agree with new facts that turn up later, and bad opinions will not. We've had three and a half years of new facts to look at since Superintelligence was published. How's its scorecard? Well, I am afraid to say that it's looking depressingly good. Early on in the history of AI, as the book reminds us, people said that a machine which could play grandmaster level chess would be most of the way to being a real intelligent agent. So IBM's team built Deep Blue, which beat Garry Kasparov in 1997, and people immediately said chess wasn't a fair test, you could crack it with brute force. Go was the real challenge, since it required understanding.
In late 2016 and mid 2017, Deep Mind's AlphaGo won matches against two of the world's three best Go players. That was also discounted as not a fair test: AlphaGo was trained on millions of moves of top Go matches, so it was just spotting patterns. Then late last year, Alpha Zero learned Go, Chess and Shogi on its own, in a couple of days, using the same general learning method and with no human examples to train from. It played all three games not just better than any human, but better than all previous human-derived software. Looking at the published games, any strong chess or Go player can see that it has worked out a vast array of complex strategic and tactical principles. It's no longer a question of "does it really understand what it's doing". It obviously understands these very difficult games much better than even the top experts do, after just a few hours of study. Humanity, I think that was our final warning. Come up with more excuses if you like, but it's not smart. And read Superintelligence.

  2. 4 out of 5

    Brian Clegg

    There has been a spate of outbursts from physicists who should know better, including Stephen Hawking, saying ‘philosophy is dead – all we need now is physics’ or words to that effect. I challenge any of them to read this book and still say that philosophy is pointless. It’s worth pointing out immediately that this isn’t really a popular science book. I’d say the first handful of chapters are for everyone, but after that, the bulk of the book would probably be best for undergraduate philosophy students or AI students, reading more like a textbook than anything else, particularly in its dogged detail – but if you are interested in philosophy and/or artificial intelligence, don’t let that put you off.

What Nick Bostrom does is to look at the implications of developing artificial intelligence that goes beyond human abilities in the general sense. (Of course, we already have a sort of AI that goes beyond our abilities in the narrow sense of, say, arithmetic, or playing chess.) In the first couple of chapters he examines how this might be possible – and points out that the timescale is very vague. (Ever since electronic computers were invented, pundits have been putting the development of effective AI around 20 years in the future, and it’s still the case.) Even so, it seems entirely feasible that we will have a more than human AI – a superintelligent AI – by the end of the century. But the ‘how’ aspect is only a minor part of this book. The real subject here is how we would deal with such a ‘cleverer than us’ AI. What would we ask it to do? How would we motivate it? How would we control it? And, bearing in mind it is more intelligent than us, how would we prevent it taking over the world or subverting the tasks we give it to its own ends? It is a truly fascinating concept, explored in great depth here. This is genuine, practical philosophy. The development of super-AIs may well happen – and if we don’t think through the implications and how we would deal with it, we could well be stuffed as a species.

I think it’s a shame that Bostrom doesn’t make more use of science fiction to give examples of how people have already thought about these issues – he gives only half a page to Asimov and the three laws of robotics (and how Asimov then spends most of his time showing how they’d go wrong), but that’s about it. Yet a lot more thought (and, dare I say it, a lot more readability than you typically get in a textbook) has been put into these issues in science fiction than is being allowed for, and it would have been worthy of a chapter in its own right. I also think a couple of the fundamentals aren’t covered well enough, but pretty much assumed. One is that it would be impossible to contain and restrict such an AI. Although some effort is put into this, I’m not sure there is enough thought put into the basics of ways you can pull the plug manually – if necessary by shutting down the power station that provides the AI with electricity. The other dubious assertion was originally made by I. J.
Good, who worked with Alan Turing, and seems to be taken as true without analysis. This is the suggestion that an ultra-intelligent machine would inevitably be able to design a better AI than humans, so once we build one it will rapidly improve on itself, producing an ‘intelligence explosion’. The trouble with this argument, I suspect, is that if you got hold of the million most intelligent people on earth, the chances are that none of them could design an ultra-powerful computer at the component level. Just because something is superintelligent doesn’t mean it can do this specific task well – this is an assumption. However, this doesn’t take away from what a magnificent conception the book is. I don’t think it will appeal to many general readers, but I do think it ought to be required reading on all philosophy undergraduate courses, by anyone attempting to build AIs… and by physicists who think there is no point to philosophy.

  3. 4 out of 5

    Joseph

    Preamble: I confess to Roko's Basilisk that I didn't believe superintelligence was much of a threat, and this book didn't really do anything to challenge that prior. Mea Culpa, Mea Culpa, Mea [local] Maxima Culpa.

I. Overall View

I'm a software engineer with some basic experience in machine learning, and though the results of machine learning have been becoming more impressive and general, I've never really seen where people are coming from when they see strong superintelligence just around the corner, especially the kind that can recursively improve itself to the point where intelligence vastly increases in the space of a few hours or days. So I came to this book with a simple question: "Why are so many intelligent people scared of a near-term existential threat from AI, and especially why should I believe that AI takeoff will be incredibly fast?" Unfortunately, I leave the book with this question largely unanswered. Though in principle I can't think of anything that prevents the formation of some forms of superintelligence, everything I know about software development makes me think that any progress will be slow and gradual, occasionally punctuated with a new trick or two that allows for somewhat faster (but still gradual) increases in some domains. So on the whole, I came away from this book with the uncomfortable but unshakeable notion that most of the people cited don't really have much relevant experience in building large-scale software systems. Though Bostrom used much of the language of computer science correctly, any of his extrapolations from very basic, high-level understandings of these concepts seemed frankly oversimplified and unconvincing.

II. General Rant on Math in Philosophy

Ever since I was introduced to utilitarianism in college (the naive, Bentham-style utilitarianism at least) I've been somewhat concerned about the practice of trying to add more rigor to philosophical arguments by filling them with mathematical formalism. To continue with the example of utilitarianism, in its most basic sense it asks you to consider any action based on a calculation of how much pleasure will result from your action divided by the amount of pain an action will cause, and to act in such a way that you maximize this ratio. Now it's of course impossible to do this calculation in all but the most trivial cases, even assuming you've somehow managed to define pleasure and pain, and to come up with some sort of metric for actually evaluating differences between them. So really the formalism only expresses a very simple relationship between things which are not defined, and which, depending on how they are defined, might not be able to be legitimately placed in simple arithmetic or algebraic expressions. I felt much the same way when I was reading Superintelligence.
Especially in his chapter on AI takeoff, Bostrom argued that the amount of improvement in an AI system could be modeled as a ratio of applied optimization power over the recalcitrance of the system, or its architectural unwillingness to accept change. Certainly this is true as far as it goes, but "optimization power" and "recalcitrance" are necessarily at this point dealing with systems that nobody yet knows how to build, or even what they will look like, beyond some hand-wavey high-level descriptions, and so there is no definition one can give that makes any sense unless you've already committed to some ideas of exactly how the system will perform. Bostrom tries to hedge his bets by presenting some alternatives, but he's clearly committed to the idea of a fast takeoff, and the math-like symbols he's using present only a veneer of formalism, drawing some extremely simple relations between concepts which can't yet be defined in any meaningful way. This was the example that really made my objections to unjustified philosophy-math snap into sharp focus, but it's just one of many peppered throughout the book, which gives an attempted high-level look at superintelligent systems, but too many of the black boxes on which his argument rested remained black boxes. Unable to convince myself of the majority of his argument since too many of his steps were glossed over, I came away from this book thinking that there had to be a lot more argumentation somewhere, since I couldn't imagine holding this many unsubstantiated "axioms" for something apparently as important to him as superintelligence. And it really is a shame that the book needed to be bogged down with so much unnecessary formalism (which had the unpleasant effect of making it feel simultaneously overly verbose and too simplistic), since there were a few good things in here that I came away with. The sections on value-loading and security were especially good. Like most of the book, I found them overly speculative and too generous in assuming what powers superintelligences would possess, but there is some good strategic stuff in here that could lead toward more general forms of machine intelligence, and avoid some of the overfitting problems common in contemporary machine learning. Of course, there's also no plan of implementation for this stuff, but it's a cool idea that hopefully penetrates a little further into modern software development.

III. Whereof One Cannot Speak, Thereof One Must Request Funding

It's perhaps callous and cynical of me to think of this book as an extended advertisement for the Machine Intelligence Research Institute (MIRI), but the final two chapters in many ways felt like one. Needless to say I'm not filled with a desire to donate on the basis of an argument I found largely unconvincing, but I do have to commend those involved for actually having an attempt at a plan of implementation in place simultaneous with a call to action.

IV. Conclusion

I remain pretty unconvinced of AI as a relatively near-term existential threat, though I think there's some good stuff in here that could use a wider audience. And being more thoughtful and careful with software systems is always a cause I can get behind. I just wish some more of the gaps got filled in, and I could justifiably shake my suspicion that Bostrom doesn't really know that much about the design and implementation of large-scale software systems.

V. Charitable TL;DR

Not uninteresting, needs a lot of work before it's convincing.

VI. Uncharitable TL;DR
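(A note on the takeoff model the reviewer objects to under "II. General Rant on Math in Philosophy" above: Bostrom's chapter on takeoff kinetics states the relation verbally as "rate of change in intelligence = optimization power / recalcitrance". Written schematically, with placeholder symbols that are a paraphrase rather than the book's own notation,

\[
\frac{dI}{dt} \;=\; \frac{D(t)}{R(I)},
\]

where \(I\) is the system's level of intelligence, \(D\) is the optimization power being applied to improving it, and \(R\) is the recalcitrance, i.e. how hard the next increment of improvement is. The fast-takeoff claim is that once the system starts contributing to its own improvement, \(D\) grows with \(I\) while \(R\) does not rise fast enough to compensate, so the rate of improvement runs away; the reviewer's complaint is that neither \(D\) nor \(R\) is defined concretely enough for the relation to carry much weight.)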

  4. 5 out of 5

    Riku Sayuj

    Imagine a Danger (You may say I'm a Dreamer)

Bostrom is here to imagine a world for us (and he has batshit crazy imagination, have to give him that). The world he imagines is a post-AI world or at least a very-near-to-AI world or a nascent-AI world. Don’t expect to know how we will get there - only what to do if we get there and how to skew the road to getting there to our advantage. And there are plenty of wild ideas on how things will pan out in that world-in-transition, the ‘routes’ bit - Bostrom discusses the various potential routes, but all of them start at a point where AI is already in play. Given that assumption, the “dangers” bit is automatic since the unknown and powerful has to be assumed to be dangerous. And hence strategies are required. See what he did there? It is all a lot of fun, to be playing this thought experiment game, but it leaves me a bit confused about what to feel about the book as an intellectual piece of speculation. I was on the fence between a two-star rating or a four-star rating for much of the reading. Plenty of exciting and grand-sounding ideas are thrown at me… but, truth be told, there are too many - and hardly any are developed. The author is so caught up in his own capacity for big BIG BIIG ideas that he forgets to develop them into a realistic future or make any of them the real focus of ‘dangers’ or ‘strategies’. They are just all out there, hanging. As if their nebulosity and sheer abundance should do the job of scaring me enough. In the end I was reduced to surfing the book for ideas worth developing on my own. And what do you know, there were a few. So, not too bad a read and I will go with three. And for future readers, the one big (not-so-new) and central idea of the book is simple enough to be expressed as a fable, here it is:

The Unfinished Fable of the Sparrows

It was the nest-building season, but after days of long hard work, the sparrows sat in the evening glow, relaxing and chirping away. “We are all so small and weak. Imagine how easy life would be if we had an owl who could help us build our nests!” “Yes!” said another. “And we could use it to look after our elderly and our young.” “It could give us advice and keep an eye out for the neighborhood cat,” added a third. Then Pastus, the elder-bird, spoke: “Let us send out scouts in all directions and try to find an abandoned owlet somewhere, or maybe an egg. A crow chick might also do, or a baby weasel. This could be the best thing that ever happened to us, at least since the opening of the Pavilion of Unlimited Grain in yonder backyard.” The flock was exhilarated, and sparrows everywhere started chirping at the top of their lungs. Only Scronkfinkle, a one-eyed sparrow with a fretful temperament, was unconvinced of the wisdom of the endeavor. Quoth he: “This will surely be our undoing. Should we not give some thought to the art of owl-domestication and owl-taming first, before we bring such a creature into our midst?” Replied Pastus: “Taming an owl sounds like an exceedingly difficult thing to do.
It will be difficult enough to find an owl egg. So let us start there. After we have succeeded in raising an owl, then we can think about taking on this other challenge.” “There is a flaw in that plan!” squeaked Scronkfinkle; but his protests were in vain as the flock had already lifted off to start implementing the directives set out by Pastus. Just two or three sparrows remained behind. Together they began to try to work out how owls might be tamed or domesticated. They soon realized that Pastus had been right: this was an exceedingly difficult challenge, especially in the absence of an actual owl to practice on. Nevertheless they pressed on as best they could, constantly fearing that the flock might return with an owl egg before a solution to the control problem had been found. It is not known how the story ends…

  5. 4 out of 5

    Leonard Gaya

    In recent times, prominent figures such as Stephen Hawking, Bill Gates and Elon Musk have expressed serious concerns about the development of strong artificial intelligence technology, arguing that the dawn of super-intelligence might well bring about the end of mankind. Others, like Ray Kurzweil (who, admittedly, has gained some renown in professing silly predictions about the future of the human race), have an opposite view on the matter and maintain that AI is a blessing that will bestow utopia upon humanity. Nick Bostrom painstakingly elaborates on the disquieting views of the former (he might well have influenced them in the first place), without fully dismissing the blissful engrossment of the latter. First, he endeavours to shed some light on the subject and delves into quite a few particulars concerning the future of AI research, such as: the different paths that could lead to super-intelligence (brain emulations or AI proper), the steps and timeframe through which we might get there, the types and number of AI that could result as we continue improving our intelligent machines (he calls them “oracles”, “genies” and “sovereigns”), the different ways in which it could go awry, and so forth. But Bostrom is first and foremost a philosophy professor, and his book is not so much about the engineering or economic aspects that we could foresee as regards strong AI. The main concern is the ethical problems that the development of a general (i.e. cross-domain) super-intelligent machine, far surpassing the abilities of the human brain, might pose to us as humans. The assumption is that the possible existence of such a machine would represent an existential threat to humankind. The main argument is thus to warn us about the dangers (some of Bostrom’s examples are weirdly farcical, and reminded me of Douglas Adams’s The Hitchhiker's Guide to the Galaxy), but also to outline in some detail how this risk could or should be mitigated, restraining the scope or the purpose of a hypothetical super-brain: this is what he calls “the AI control problem”, which is at the core of his reasoning and which, upon reflexion, is a surprisingly difficult one. I should add that, although the book is largely accessible to the layperson, Bostrom’s prose is often dense, speculative, and makes very dry reading: not exactly a walk in the park. He should be praised nonetheless for attempting to apply philosophy and ethical thinking to nontrivial questions. One last remark: Bostrom explores a great many questions in this book but, oddly enough, it seems never to occur to him to think about the possible moral responsibility we humans might have towards an intelligent machine, not just a figment of our imagination but a being that we will someday create and that could at least be compared to us. Charity begins at home, I suppose.

  6. 5 out of 5

    ☘Misericordia☘ ~ The Serendipity Aegis ~ ✺❂❤❣

    Hypothetical enough to become insanely dumb boring. Superintelligence, hyperintelligence, hypersuperintelligence… Basically, it all amounts to the fact that maybe, sometime, the ultimate thinking machines will do or not do something. Just how new is that idea? IMO, the main point is how do we get them there? Designing intuition? Motivating the AI? Motivational scaffolding? Associative value accretion? While it's all very entertaining, it's nowhere near practical at this point. And the bare-bones philosophy of the non-existent AI that's pretty much dumb today? This is one fat DNF.

  7. 4 out of 5

    John Igo

    This book... if {} else if {} else if {} else if {} else if {} ... You can get most of the ideas in this book in the WaitButWhy article about AI. This book assumes that an intelligence explosion is possible, and that it is possible for us to make a computer whose intelligence will explode. Then it talks about ways to deal with it. A lot of this book seems like pointless navel-gazing, but I think some of it is worth reading.

  8. 4 out of 5

    Manuel Antão

    If you're into stuff like this, you can read the full review. (Count-of-Self) = 0: "Superintelligence - Paths, Dangers, Strategies" by Nick Bostrom "Box 8 - Anthropic capture: The AI might assign a substantial probability to its simulation hypothesis, the hypothesis that it is living in a computer simulation." In "Superintelligence - Paths, Dangers, Strategies" by Nick Bostrom Would you say that the desire to preserve 'itself' comes from the possession of a (self) consciousness? If so, does the acquisition of intelligence according to Bostrom also mean the acquisition of (self) consciousness? The unintended consequence of a super intelligent AI is the development of an intelligence that we can barely see, let alone control, as a consequence of the networking of a large number of autonomous systems acting on inter-connected imperatives. I think of bots trained to trade on the stock market that learn that the best strategy is to follow other bots, who are following other bots. The system can become hyper-sensitive to inputs that have little or nothing to do with supply and demand.
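The feedback dynamic described at the end of this review (bots copying bots until the herd becomes hypersensitive to irrelevant signals) is easy to sketch as a toy simulation. The snippet below is purely illustrative, not from the book or the review; the function name and parameters (simulate, n_bots, gain, shock) are made up for the example:

    import random

    def simulate(n_bots=50, steps=30, gain=1.2, shock=0.01, seed=0):
        """Each bot sets its position to `gain` times the average of all bots'
        previous positions, plus a one-off tiny external signal at t=0 and a
        little noise. With gain > 1 the herd amplifies the signal; with
        gain < 1 it fades back toward zero."""
        random.seed(seed)
        positions = [0.0] * n_bots
        history = []
        for t in range(steps):
            avg = sum(positions) / n_bots
            external = shock if t == 0 else 0.0  # tiny, fundamentals-unrelated input
            positions = [gain * avg + external + random.gauss(0, 1e-4)
                         for _ in range(n_bots)]
            history.append(sum(positions) / n_bots)
        return history

    if __name__ == "__main__":
        for t, avg in enumerate(simulate()):
            print(f"step {t:2d}  average position {avg:+.4f}")

Run with gain=0.8 the one-off 0.01 shock decays toward the noise floor; with gain=1.2 it compounds every round, which is the sense in which a crowd of copy-trading bots can become hypersensitive to an input that has nothing to do with supply and demand.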

  9. 4 out of 5

    Bradley

    I'm very pleased to have read this book. It states, concisely, the general field of AI research's BIG ISSUES. The paths to making AIs are only a part of the book and not a particularly important one at this point. More interestingly, it states that we need to be more focused on the dangers of superintelligence. Fair enough! If I was an ant separated from my colony coming into contact with an adult human being, or a sadistic (if curious) child, I might start running for the hills before that magnifying glass focuses the sunlight. And so we move on to strategies, and this is where the book does its most admirable job. All the current thoughts in the field are represented, pretty much, but only in broad outlines. A lot of this has been fully explored in SF literature, too, and not just from the Asimov Laws of Robotics. We've had isolation techniques, oracle techniques, and even straight tool-use techniques crop up in robot and AI literature. Give robots a single-task job and they'll find a way to turn it into a monkey's paw scenario. And this just begs the question, doesn't it? When we get right down to it, this book may be very concise and give us a great overview, but I do believe I'll remain an uberfan of Eliezer Yudkowsky over Nick Bostrom. After having just read Rationality: From AI to Zombies, almost all of these topics are not only brought up, but they're explored in grander fashion and detail. What do you want? A concise summary? Or a gloriously delicious multi-prong attack on the whole subject that admits its own faults the way that HUMANITY should admit its own faults? Give me Eli's humor, his brilliance, and his deeply devoted stand on working out a real solution to the "Nice" AI problem. :) I'm not saying Superintelligence isn't good, because it most certainly is, but it is still the map, not the land. :) (Or to be slightly fairer, neither is the land, but one has a little better definition on the topography.)

  10. 5 out of 5

    Matt

    As a software developer, I've cared very little for artificial intelligence (AI) in the past. My programs, which I develop professionally, have nothing to do with the subject. They’re dumb as can be and only following strict orders (that is, rather simple algorithms). Privately I wrote a few AI test programs (with more or less success) and read a few articles in blogs or magazines (with more or less interest). By and large I considered AI as not being relevant for me. In March 2016 AlphaGo was introduced. This was the first Go program capable of defeating a champion in this game. Shortly after that, in December 2017, Alpha Zero entered the stage. Roughly speaking this machine is capable of teaching itself games after being told the rules. Within a day, Alpha Zero developed a superhuman level of play for Go, Chess, and Shogi; all by itself (if you can believe the developers). The algorithm used in this machine is very abstract and can probably be used for all games of this kind. The amazing thing for me was how fast the AI development progresses. This book is not all about AI. It’s about “superintelligence” (SI). An SI can be thought of as some entity which is far superior to human intelligence in all (or almost all) cognitive abilities. To paraphrase Lincoln: You can outsmart some of the people all of the time and you can outsmart all of the people some of the time, but you can’t outsmart all of the people all of the time; unless you are a superintelligence. The subtitle of the English edition “paths, dangers, strategies” has been chosen wisely. What steps can be taken to build an SI, what are the dangers of introducing an SI, and how can one ensure that these dangers and risks are eliminated or at least scaled down to an acceptable level? An SI does not necessarily have to exist in a computer. The author is also co-founder of the “World Transhumanist Association”. Therefore, transhumanist ideas are included in the book, albeit in a minor role. An SI can theoretically be built by using genetic selection (of embryos, i.e. “breeding”). Genetic research would probably soon be ready to provide the appropriate technologies. For me, a scary thought; something which touches my personal taboos. Not completely outlandish, but still with a big ethical question mark for me, seems to be “Whole Brain Emulation” (WBE). Here, the brain of a human being, more precisely, the state of the brain at a given time, is analyzed and transferred to a corresponding data structure in the memory of a powerful computer, where the brain/consciousness of the individual then continues to exist, possibly within a suitable virtual reality. There are already quite a few films or books that deal with this scenario (for a positive example see this episode of the Black Mirror series). With WBE you would have an artificial entity with the cognitive performance of a human being.
The vastly superior processing speed of the digital versus the biological circuits will let this entity become superintelligent (consider 100,000 copies of a 1000x faster WBE running for six months, and you get 50 million years' worth of thinking!). However, the main focus in the discussion about SI in this book is the further development of AI to become Super-AI (SAI). This is not a technical book though. It contains no computer code whatsoever, and the math (appearing twice in some info-boxes) is only marginal and not at all necessary for understanding. One should not imagine an SI as a particularly intelligent person. It might be more appropriate to equate the ratio of SI to human intelligence with that of human intelligence to the cognitive performance of a mouse. An SI will indeed be very very smart and, unfortunately, also very very unstable. By that I mean that an SI will at all times be busy changing and improving itself. The SI you speak with today will be a million or more times smarter tomorrow. In this context, the book speaks of an “intelligence explosion”. Nobody knows yet when this will start and how fast it will go. Could be next year, or in ten, fifty, or one hundred years. Or perhaps never (although this is highly unlikely). Various scenarios are discussed in the book. Also it is not clear if there will be only one SI (a so-called singleton), or several competing or collaborating SIs (with a singleton seeming to be more likely). I think it’s fair to say that humanity as a whole has the wish to continue to exist; at least the vast majority of people do not consider the extinction of humanity desirable. With that in mind it would make sense to instruct an SI to follow that same goal. Now suppose I forgot to specify the exact state in which we want to exist. In this case the SI might choose to put all humans into a coma (less energy consumption). The problem is solved from the SI’s point of view; its goal has been reached. But obviously this is not what we meant. We have to re-program the SI and tweak its goal a bit. Therefore it would be mandatory to always be able to control the SI. It’s possible an SI will not act the way we intended (it will act, however, the way we programmed it). A case of an “unfriendly” SI is actually very likely. The book mentions and describes “perverse instantiation”, “infrastructure profusion” and “mind crime” as possible effects. The so-called “control problem” remains unsolved as of now, and it appears equivalent to that of a mouse controlling a human being. Without a solution, the introduction of an SI becomes a gamble (with a very high probability a “savage” SI will wipe out humanity). The final goal of an SI should be formulated pro-human if at all possible. At least, the elimination of humankind should not be prioritized at any time. You should give the machine some kind of morality. But how does one do it? How can you formulate moral ideas in a computer language? And what happens if our morals change over time (which has happened before), and the machine still decides on a then-outdated moral ground? In my opinion, there will be insurmountable difficulties at this point. Nevertheless, there are also at least some theoretical approaches explained by Bostrom (who is primarily a philosopher). It’s quite impressive to read these chapters (albeit also a bit dry). In general, the chapters dealing with philosophical questions, and how they are translated to the SI world, were the most engrossing ones for me.
The answers to these kinds of questions are also subject to some urgency. Advances in technology generally move faster than wisdom (not only in this field), and the sponsors of the projects expect some return on investment. Bostrom speaks of a “philosophy with a deadline”, a fitting, but also disturbing image. Another topic is an SI that is neither malignant nor fitted with false goals (something like this is also possible), but on the contrary actually helps humanity. Quote: The point of superintelligence is not to pander to human preconceptions but to make mincemeat out of our ignorance and folly. Certainly this is a noble goal. However, how will people (and I’m thinking about those who are currently living) react when their follies are disproved? It’s hard to say, but I guess they will not be amused. One should not credit people with too much intelligence in this respect (see below for my own “anger”). Except for the sections on improving human intelligence through biological interference and breeding (read: eugenics), I found everything in this book fascinating, thought-provoking, and highly disturbing. The book has, in a way, changed my world view rather drastically, which is rare. My “folly” about AI and especially Super-AI has changed fundamentally. In a way, I've gone through 4 of the 5 stages of grief & loss. Before the book, I flatly denied that a Super-AI would ever come to fruition. When I read the convincing arguments that a Super-AI is not only possible, but indeed very likely, my denial changed into anger. In spite of the known problems and the existential risk of such a technology, how can one even think of following this slippery slope? (this question is also dealt with in the book) My anger then turned into a depression (not a clinical one) towards the end. Still in this condition, I’m now awaiting acceptance, which in my case will more likely be fatalism. A book that shook me profoundly and that I actually wished I had not read, but that I still recommend highly (I guess I need a superintelligence to make sense of that). This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.
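A quick check of the emulation speed-up parenthetical in the review above (the inputs are the reviewer's illustration rather than figures taken from the book, and the variable names are made up for the example):

    # Back-of-envelope arithmetic for the whole-brain-emulation speed-up example.
    # Assumed inputs (the reviewer's illustration): 100,000 parallel copies of an
    # emulation running 1,000x faster than a biological brain, left running for
    # six months of wall-clock time.
    copies = 100_000
    speedup = 1_000          # subjective years per wall-clock year
    wall_clock_years = 0.5   # six months

    subjective_years_per_copy = speedup * wall_clock_years   # 500
    total_subjective_years = copies * subjective_years_per_copy

    print(f"per copy: {subjective_years_per_copy:,.0f} subjective years")
    print(f"in total: {total_subjective_years:,.0f} subjective years")
    # per copy: 500 subjective years
    # in total: 50,000,000 subjective years (about 50 million years)

The exact figures matter less than the shape of the argument: digital speed and copyability multiply together, so even modest-sounding inputs produce enormous totals.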

  11. 5 out of 5

    Jasmin Shah

    Never let a Seed AI read this book!

  12. 4 out of 5

    Clif Hostetler

    This book was published in 2014 so is a bit dated, and I’m now writing this review somewhat late for what should be a cutting edge issue. But many people who are interested in this subject continue to respect this book as the definitive examination of the risks associated with machines that are significantly smarter than humans. We have been living for many years with computers—and even phones—that store more information and can retrieve that information faster than any human. These devices don’t seem to pose much threat to us humans, so it’s hard to perceive why there may be cause for concern. The problem is as follows. As artificial intelligence (AI) becomes more proficient in the future it will have the ability to learn (a.k.a. machine learning) and improve itself as it examines and solves problems. It will have the ability to change (i.e. reprogram) itself in order to develop new methods as needed to execute solutions for the tasks at hand. Thus, it will be using techniques and strategies of which the originating human programmer will be unaware. Once machines are creatively strategizing better (i.e. smarter) than humans, the gap between machine and human performance (i.e. intelligence) will grow exponentially. Eventually, the level of thinking by the “super-intelligent” machine will have the relative superiority over that of humans that is equivalent to the superiority of the human brain over that of a beetle crawling on the floor. It is reasonable to conjecture that a machine that smart will have as much respect for humans who think they’re controlling it as humans are likely to have respect for a beetle trying to control them. The concept of superintelligence means that the machine can perform better than humans at all tasks including such things as using human language to be persuasive, raising money, developing strategic plans, designing and making robots, advanced weapons, and advances in science and technology. A super-intelligent machine will solve problems that humans don't know exist. Of course that may be a good thing, but such machines in effect have a mind of their own. They may decide they know best and not want to follow human instructions. Much of this book is spent examining—in too much detail in my opinion—possible ways to control a super-intelligent machine. Then after this long exploration of various strategies the conclusion is in essence that it's not possible. So then the book moves on to the question of how to design the initiating foundation of such a machine to have the innate desire to do good (i.e. be virtuous). Again the author goes into excruciating details examining various ways to do this. The bottom line is we can try, but we don't have the necessary tools to be sure to address the critical issues. In conclusion, our goose is cooked. We can't help ourselves. Superintelligence is the "tree of the knowledge of good and evil." We have to take a bite. This link is to an article about facial recognition. It contains the following quote: "... the whole ecosystem of artificial intelligence is optimized for a lack of accountability." Shortly after writing my review the Dilbert cartoon featured the subject of AI. Here's a link to a review of "Game Changer: AlphaZero's Groundbreaking Chess Strategies and the Promise of AI," by Matthew Sadler and Natasha Regan. This review describes a chess program that utilizes AI to become almost unbeatable with a style of play not previously seen. https://www.goodreads.com/review/show...

  13. 5 out of 5

    Jim

    Superintelligence by Nick Bostrom is a hard book to recommend, but is one that thoroughly covers its subject. Superintelligence is a warning against developing artificial intelligence (AI). However, the writing is dry and systematic, more like Plato than Wired Magazine. There are few real world examples, because it's not a history of AI, but theoretic conjectures. The book explores the possible issues we might face if a superintelligent machine or life form is created. I would have enjoyed the book more if it reported on current state of the art projects in AI. The recent work of DeepMind learning to play classic Atari games offers more realism to the possibilities of AI than anything mentioned in this book. Deep learning projects, the latest development in neural nets, are having some astounding successes. Bostrom doesn't report on them. And I think Bostrom makes one glaring error. I'm no AI expert, but he seems to assume we can program an AI, and control its intelligence. I don't think that's possible. We won't be programming The Three Laws of Robotics into future AI beings. I believe AIs will evolve out of learning systems, and we'll have no control over what emerges. We'll create software and hardware that is capable of adapting and learning. The process of becoming a self-aware superintelligence will be no more understandable to us than why our brains generate consciousness.

  14. 5 out of 5

    Robert Schertzer

    I switched to the audio version of this book after struggling with the Kindle edition since I needed to read this for a book club. If you are looking for a book on artificial intelligence (AI), avoid this and opt for Jeff Hawkins' book "On Intelligence" written by someone who has devoted their life to the field. If it is one on "AI gone bad" you seek, try 2001: A Space Odyssey. For a fictional approach on AI that helped set the groundwork for AI theory, go for Isaac Asimov. If you want a tedious, relentless and pointless book that fails at achieving what all three aforementioned authors have succeeded at - this is the book for you.

  15. 4 out of 5

    Peter (Pete) Mcloughlin

    Stephen Hawking and Bill Gates have recently raised the alarm about Artificial Intelligence. If a superhuman artificial intelligence were created it would be the biggest event in human history and it could very well be the last. We are only familiar with human intelligence and it may be a small sample from the possibilities of intelligence to be had. Bostrom makes the case that the most likely path to superintelligence would be a hard takeoff, as the AI would quickly rise once it reached human-level intelligence and quickly reorganize itself into a very superior form of intelligent mind. It would quickly gain powers and abilities far beyond humans and it would be more alien and unfathomable than anything we have ever seen. If it has goals that don't match up with the human project, so much for the human race. With great detail, Bostrom lays out where AI could go seriously wrong for us. Disasters in the abstract may make us yawn but Bostrom gives the details of what the catastrophe might look like. The Hellmouth is much scarier when the picture becomes more detailed. I recommend reading Bostrom's book to educate yourself on the dangers of ceding the top of the food chain to AI. It is fairly hair-raising. Here is Nick Bostrom talking on this topic at TED. https://www.youtube.com/watch?v=MnT1x...

  16. 5 out of 5

    Shea Levy

    Read up through chapter 8. The book started out somewhat promisingly by not taking a stand on whether strong AI was imminent or not, but that was the height of what I read. I'm not sure there was a single section of the book where I didn't have a reaction ranging from "wait, how do you know that's true?" to "that's completely wrong and anyone with a modicum of familiarity with the field you're talking about would know that", but really it's the overall structure of the argument that led me to give this one up as a waste of time. Essentially, the argument goes like this: Bostrom introduces some idea, explains in vague language what he means by it, traces out how it might be true (or, in a few "slam-dunk" sections, *several* ways it might be true), and then moves on. In the next section, he takes all of the ideas introduced in the previous sections as givens and as mostly black boxes, in the sense that the old ideas are brought up to justify new claims without ever invoking any of the particular evidence for or structure of the old idea, it's just an opaque formula. The sense is of someone trying to build a tower, straight up. The fact that this particular tower is really a wobbly pile of blocks, with many of the higher up ones actually resting on the builder's arm and not really on the previous ones at all, is almost irrelevant: this is not how good reasoning works! There is no broad consideration of the available evidence, no demonstration of why the things we've seen imply the specific things Bostrom suggests, no serious engagement with alternative explanations/predictions, no cycling between big-picture overviews and in-detail analyses. There is just a stack of vague plausibilities and vague conceptual frameworks to accommodate them. A compelling presentation is a lot more like clearing away fog to note some rocky formations, then pulling back a bit to see they're all connected, then zooming back in to clear away the connected areas, and so on and so forth until a broad mountain is revealed. This is not to say that the outcome Bostrom fears is impossible. Even though I think many of the specific things he thinks are plausible are actually much less so than he asserts, I do think a kind of very powerful "unfriendly" AI is a possibility that should be considered by those in a position to really understand the problem and take action against it if it turns out to be a real one. The problem with Bostrom's presentation is that it doesn't tell us anything useful: We have no reason to suspect that the particular kinds of issues he proposes are the ones that will matter, that the particular characteristics he ascribes to future AI are ones that will be salient, indeed that this problem is likely enough, near enough, and tractable enough to be worth spending significant resources on at all at the moment!
Nothing Bostrom is saying compellingly privileges his particular predictions over many many possible others, even if you take as a given that extraordinarily powerful AI is possible and its behavior hard to predict. I continually got the sense (sometimes explicitly echoed by Bostrom himself!) that you could substitute in huge worlds of incompatible particulars for the ones he proposed and still make the same claims. So why should I expect anything particular he proposes to be worthwhile? Edit: After chatting about this a bit with some friends, I should add one caveat to this review. This is praising with bold damnation if ever there were such a thing, but this book has made me more likely to engage with AI as an existential risk by being such a clear example of what had driven me away up until now. Now that I can see the essence of what's wrong with the bad approaches I've seen, I'll be better able to seek out the good ones (and, as I said, I do think the problem is worth serious investigation). So, I guess ultimately Bostrom succeeded at his goal in my case?

  17. 5 out of 5

    Travis

    I'm not going to criticize the content. I cannot finish this. Imagine eating saltines when you have cotton mouth in the middle of the desert. You might be close to describing how dry the writing is. It could be a very interesting read if the writing were done in a more attention-grabbing way.

  18. 4 out of 5

    Diego Petrucci

    There's no way around it: a super-intelligent AI is a threat. We can safely assume that an AI smarter than a human, if developed, would accelerate its own development, getting smarter at a rate faster than anything we've ever seen. In just a few cycles of self-improvement it would spiral out of control. Trying to fight, or control, or hijack it would be totally useless — for a comparison, try picturing an ant trying to outsmart a human being (a laughable attempt, at best). But why is a super-intelligent AI a threat? Well, it probably wouldn't have human qualities (empathy, a sense of justice, and so on) and would rely on a more emotionless understanding of the world — understanding emotion doesn't mean you have to feel emotions; you can understand the motives of terrorists without agreeing with them. There would be a chance of developing a super-intelligent AI with an insane set of objectives, like maximizing the production of chairs with no regard to the safety of human beings or the environment, totally subsuming Earth's materials and the planet itself. Or, equally probably, we could end up with an AI whose main objective is self-preservation, who would later annihilate the human race because of even a minuscule chance of us destroying it. With that said, it's clear that before developing a self-improving AI we need a plan. We need tests to understand and improve its moral priorities, we need security measures, we need to minimize the risk of it destroying the planet. Once the AI is more intelligent than us, it won't take much for it to become vastly more intelligent, so we need to be prepared. We only get one chance and that's it: either we set it up right or we're done as a species. Superintelligence deals with all these problems, systematically analyzing them and providing a few frames of mind to let us solve them (if that's even possible).

  19. 4 out of 5

    Clare O'Beara

    We are now building superintelligences. More than one. The author Nick Bostrom looks at what awaits us. He points out that controlling such a creation might not be easy. If unfriendly superintelligence comes about, we won't be able to change or replace it. This is a densely written book, with small print, with 63 pages of notes and bibliography. In the introduction the author tells us twice that it was not easy to write. However, he tries to make it accessible, and adds that if you don't understand some techie terms you should still be able to grasp the meaning. He hopes that by pulling together this material he has made it easier for other researchers to get started. So - where are we? I have to state that, with lines like "Collective superintelligence is less conceptually clear-cut than speed superintelligence. However it is more familiar empirically", this is a more daunting book than 'The Rise of The Robots' by Martin Ford. If you are used to such terms and concepts you can dive in; if not I'd recommend the Ford book first. To be fair, terms are explained and we can easily see that launching a space shuttle requires a collective intellectual effort. No one person could do it. Humanity's collective intelligence has continued to grow, as people evolved to become smarter, as there were more of us to work on a problem, as we got to communicate and store knowledge, and as we kept getting smarter and building on previous knowledge. There are now so many of us who don't need to farm or make tools that we can solve many problems in tandem. Personally, I say that if you don't think your leaders are making smart decisions, just go out and look at your national transport system at rush hour in the capital city. But a huge population requires a huge resource drain. As will the establishment of a superintelligence. Not just materials and energy but inventions, tests, human hours and expertise are required. Bostrom talks about a seed AI, a small system to start. He says that, in terms of a major system, the first project to reach a useful AI will win. After that the lead will be too great, and the new AI so useful and powerful, that other projects may not close the gap. Hardware, power generation, software and coding are all getting better. And we have the infrastructure in place. We are reminded that "The atomic bomb was created primarily by a group of scientists and engineers. The Manhattan Project employed about 130,000 people at its peak, the vast majority of whom were construction workers or building operators." Aspects covered include reinforcement learning, associative value accretion, monitoring of projects, solving the value loading problem - which means defining such terms as happiness and suffering, explaining them to a computer, and representing what our goal is. I turned to the chapter heading 'Of horses and men'. Horses, augmented by ploughs and carriages, were a huge advantage to human labour. But they were replaced by the automobile and tractor. 
The equine population crashed, and not to retirement homes. "In the US there were about 26 million horses in 1915. By the early 1950s, 2 million remained." The horses we still have, we keep because we enjoy them and the sports they provide. Bostrom later reassures us: "The US horse population has undergone a robust recovery: a recent census puts the number at just under 10 million head." As humans are fast superseded by robot or computer workers, in jobs from the tedious to the technically skilled, and companies or the rich begrudge paying wages, what work or sport will make us worth our keep? Capital is mentioned; yes, unlike horses, people own land and wealth. But many people have no major income or property, or have net debt such as student loans and credit card debt. Bostrom suggests that all humans could become wealthy from AIs. But he doesn't notice that more than half of the world's wealth and resources is now owned by one percent of its people, and it's heading ever more in the favour of the one percent, because they have the wealth to ensure that it does. They rent the land, they own the debt, they own the manufacturing and the resource mines. Homeowners could be devastated by sea-level rise and climate change (not looked at here), but the super-wealthy can just move to another of their homes. Again, I found in a later chapter lines like: "For example, suppose that we want to start with some well-motivated human-like agents - let us say emulations. We want to boost the cognitive capacities of these agents, but we worry that the enhancements might corrupt their motivations. One way to deal with this challenge would be to set up a system in which individual emulations function as subagents. When a new enhancement is introduced, it is first applied to a small subset of the subagents. Its effects are then studied by a review panel composed of subagents who have not yet had the enhancement applied to them." Yes, I can follow this text, and it's showing sensible good practice, but it's not nearly so clear and easily understood as Martin Ford's book telling us that computers can be taught to recognise cancer in an X-ray scan, target customers for marketing, or to connect various sources and diagnose a rare disease. I have to think that the author, Director of the Future of Humanity Institute and Professor of the Faculty of Philosophy at Oxford, is so used to writing for engineers or philosophers that he loses out on what really helps the average interested reader. For this reason I'm giving Superintelligence four stars, but someone working in the AI industry may of course feel it deserves five stars. If so, I'm not going to argue with her. In fact I'm going to be very polite.
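The subagent "review panel" passage quoted in this review maps loosely onto a familiar engineering pattern: a staged or canary rollout. The sketch below is my own analogy under toy assumptions (agents reduced to a single capability score, an invented approval rule), not anything from the book, but it may make the quoted mechanism easier to picture.

```python
# A toy sketch of the "review panel" idea quoted above -- my own analogy to a
# staged/canary rollout, not Bostrom's implementation. A new enhancement is
# applied to a small subset of agents first, and its effects are judged before
# it is promoted to the whole population.
import random

def review_panel_approves(before, after):
    """Toy criterion: approve if no canary got worse.
    (In the book's scenario the unenhanced panel would study behaviour
    and motivation, not a single capability score.)"""
    return all(a >= b for b, a in zip(before, after))

def staged_rollout(agents, enhancement, canary_fraction=0.1):
    """Apply `enhancement` to a canary subset; promote it only on approval."""
    agents = list(agents)
    random.shuffle(agents)
    cutoff = max(1, int(len(agents) * canary_fraction))
    canaries = agents[:cutoff]

    enhanced_canaries = [enhancement(a) for a in canaries]

    if review_panel_approves(canaries, enhanced_canaries):
        return [enhancement(a) for a in agents]  # roll out to everyone
    return agents                                # reject the enhancement

# Hypothetical use: agents are just capability scores here.
population = [1.0] * 20
upgraded = staged_rollout(population, enhancement=lambda score: score * 1.5)
print(upgraded[:5])  # [1.5, 1.5, 1.5, 1.5, 1.5] if the panel approved
```

The design point the quote is making is simply that the reviewers are drawn from agents that have not received the change under test, so their judgement cannot have been corrupted by it.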

  20. 5 out of 5

    Rod Van Meter

    Is the surface of our planet -- and maybe every planet we can get our hands on -- going to be carpeted in paper clips (and paper clip factories) by a well-intentioned but misguided artificial intelligence (AI) that ultimately cannibalizes everything in sight, including us, in single-minded pursuit of a seemingly innocuous goal? Nick Bostrom, head of Oxford's Future of Humanity Institute, thinks that we can't guarantee it _won't_ happen, and it worries him. It doesn't require Skynet and Terminators, it doesn't require evil geniuses bent on destroying the world, it just requires a powerful AI with a moral system in which humanity's welfare is irrelevant or defined very differently than most humans today would define it. If the AI has a single goal and is smart enough to outwit our attempts to disable or control it once it has gotten loose, Game Over, argues Professor Bostrom in his book _Superintelligence_. This is perhaps the most important book I have read this decade, and it has kept me awake at night for weeks. I want to tell you why, and what I think, but a lot of this is difficult ground, so please bear with me. The short form is that I am fairly certain that we _will_ build a true AI, and I respect Vernor Vinge, but I have long been skeptical of the Kurzweilian notions of inevitability, doubly-exponential growth, and the Singularity. I've also been skeptical of the idea that AIs will destroy us, either on purpose or by accident. Bostrom's book has made me think that perhaps I was naive. I still think that, on the whole, his worst-case scenarios are unlikely. However, he argues persuasively that we can't yet rule out any number of bad outcomes of developing AI, and that we need to be investing much more in figuring out whether developing AI is a good idea. We may need to put a moratorium on research, as was done for a few years with recombinant DNA starting in 1975. We also need to be prepared for the possibility that such a moratorium doesn't hold. Bostrom also brings up any number of mind-bending dystopias around what qualifies as human, which we'll get to below. (snips to my review, since Goodreads limits length) In case it isn't obvious by now, both Bostrom and I take it for granted that it's not only possible but nearly inevitable that we will create a strong AI, in the sense of it being a general, adaptable intelligence. Bostrom skirts the issue of whether it will be conscious, or "have qualia", as I think the philosophers of mind say. Where Bostrom and I differ is in the level of plausibility we assign to the idea of a truly exponential explosion in intelligence by AIs, in a takeoff for which Vernor Vinge coined the term "the Singularity." Vinge is rational, but Ray Kurzweil is the most famous proponent of the Singularity. I read one of Kurzweil's books a number of years ago, and I found it imbued with a lot of near-mystic hype. 
He believes the Universe's purpose is the creation of intelligence, and that that process is growing on a double exponential, starting from stars and rocks through slime molds and humans and on to digital beings. I'm largely allergic to that kind of hooey. I really don't see any evidence of the domain-to-domain acceleration that Kurzweil sees, and in particular the shift from biological to digital beings will result in a radical shift in the evolutionary pressures. I see no reason why any sort of "law" should dictate that digital beings will evolve at a rate that *must* be faster than the biological one. I also don't see that Kurzweil really pays any attention to the physical limits of what will ultimately be possible for computing machines. Exponentials can't continue forever, as Danny Hillis is fond of pointing out. http://www.kurzweilai.net/ask-ray-the... So perhaps my opinion is somewhat biased by a dislike of Kurzweil's circus barker approach, but I think there is more to it than that. Fundamentally, I would put it this way: Being smart is hard. And making yourself smarter is also hard. My inclination is that getting smarter is at least as hard as the advantages it brings, so that the difficulty of the problem and the resources that can be brought to bear on it roughly balance. This will result in a much slower takeoff than Kurzweil reckons, in my opinion. Bostrom presents a spectrum of takeoff speeds, from "too fast for us to notice" through "long enough for us to develop international agreements and monitoring institutions," but he makes it fairly clear that he believes that the probability of a fast takeoff is far too large to ignore. There are parts of his argument I find convincing, and parts I find less so. To give you a little more insight into why I am a little dubious that the Singularity will happen in what Bostrom would describe as a moderate to fast takeoff, let me talk about the kinds of problems we human beings solve, and that an AI would have to solve. Actually, rather than the kinds of questions, first let me talk about the kinds of answers we would like an AI (or a pet family genius) to generate when given a problem. Off the top of my head, I can think of six: [Speed] Same quality of answer, just faster. [Ply] Look deeper in number of plies (moves, in chess or go). [Data] Use more, and more up-to-date, data. [Creativity] Something beautiful and new. [Insight] Something new and meaningful, such as a new theory; probably combines elements of all of the above categories. [Values] An answer about (human) values. The first three are really about how the answers are generated; the last three about what we want to get out of them. I think this set is reasonably complete and somewhat orthogonal, despite those differences. So what kinds of problems do we apply these styles of answers to? We ultimately want answers that are "better" in some qualitative sense. Humans are already pretty good at projecting the trajectory of a baseball, but it's certainly conceivable that a robot batter could be better, by calculating faster and using better data. Such a robot might make for a boring opponent for a human, but it would not be beyond human comprehension. But if you accidentally knock a bucket of baseballs down a set of stairs, better data and faster computing are unlikely to help you predict the exact order in which the balls will reach the bottom and what happens to the bucket. 
Someone "smarter" might be able to make some interesting statistical predictions that wouldn't occur to you or me, but not fill in every detail of every interaction between the balls and stairs. Chaos, in the sense of sensitive dependence on initial conditions, is just too strong. In chess, go, or shogi, a 1000x improvement in the number of plies that can be investigated gains you maybe only the ability to look ahead two or three moves more than before. Less if your pruning (discarding unpromising paths) is poor, more if it's good. Don't get me wrong -- that's a huge deal, any player will tell you. But in this case, humans are already pretty good, when not time limited. Go players like to talk about how close the top pros are to God, and the possibly apocryphal answer from a top pro was that he would want a three-stone (three-move) handicap, four if his life depended on it. Compared this to the fact that a top pro is still some ten stones stronger than me, a fair amateur, and could beat a rank beginner even if the beginner was given the first forty moves. Top pros could sit across the board from an almost infinitely strong AI and still hold their heads up. In the most recent human-versus-computer shogi (Japanese chess) series, humans came out on top, though presumably this won't last much longer. In chess, as machines got faster, looked more plies ahead, carried around more knowledge, and got better at pruning the tree of possible moves, human opponents were heard to say that they felt the glimmerings of insight or personality from them. So again we have some problems, at least, where plies will help, and will eventually guarantee a 100% win rate against the best (non-augmented) humans, but they will likely not move beyond what humans can comprehend. Simply being able to hold more data in your head (or the AI's head) while making a medical diagnosis using epidemiological data, or cross-correlating drug interactions, for example, will definitely improve our lives, and I can imagine an AI doing this. Again, however, the AI's capabilities are unlikely to recede into the distance as something we can't comprehend. We know that increasing the amount of data you can handle by a factor of a thousand gains you 10x in each dimension for a 3-D model of the atmosphere or ocean, up until chaotic effects begin to take over, and then (as we currently understand it) you can only resort to repeated simulations and statistical measures. The actual calculations done by a climate model long ago reached the point where even a large team of humans couldn't complete them in a lifetime. But they are not calculations we cannot comprehend, in fact, humans design and debug them. So for problems with answers in the first three categories, I would argue that being smarter is helpful, but being a *lot* smarter is *hard*. The size of computation grows quickly in many problems, and for many problems we believe that sheer computation is fundamentally limited in how well it can correspond to the real world. But those are just the warmup. Those are things we already ask computers to do for us, even though they are "dumber" than we are. What about the latter three categories? I'm no expert in creativity, and I know researchers study it intensively, so I'm going to weasel through by saying it is the ability to generate completely new material, which involves some random process. 
You also need the ability either to generate that material such that it is aesthetically pleasing with high probability, or to prune those new ideas rapidly using some metric that achieves your goal. For my purposes here, insight is the ability to be creative not just for esthetic purposes, but in a specific technical or social context, and to validate the ideas. (No implication that artists don't have insight is intended, this is just a technical distinction between phases of the operation, for my purposes here.) Einstein's insight for special relativity was that the speed of light is constant. Either he generated many, many hypotheses (possibly unconsciously) and pruned them very rapidly, or his hypothesis generator was capable of generating only a few good ones. In either case, he also had the mathematical chops to prove (or at least analyze effectively) his hypothesis; this analysis likewise involves generating possible paths of proofs through the thicket of possibilities and finding the right one. So, will someone smarter be able to do this much better? Well, it's really clear that Einstein (or Feynman or Hawking, if your choice of favorite scientist leans that way) produced and validated hypotheses that the rest of us never could have. It's less clear to me exactly how *much* smarter than the rest of us he was; did he generate and prune ten times as many hypotheses? A hundred? A million? My guess is it's closer to the latter than the former. Even generating a single hypothesis that could be said to attack the problem is difficult, and most humans would decline to even try if you asked them to. Making better devices and systems of any kind requires all of the above capabilities. You must have insight to innovate, and you must be able to quantitatively and qualitatively analyze the new systems, requiring the heavy use of data. As systems get more complex, all of this gets harder. My own favorite example is airplane engines. The Wright Brothers built their own engines for their planes. Today, it takes a team of hundreds to create a jet turbine -- thousands, if you reach back into the supporting materials, combustion and fluid flow research. We humans have been able to continue to innovate by building on the work of prior generations, and especially harnessing teams of people in new ways. Unlike Peter Thiel, I don't believe that our rate of innovation is in any serious danger of some precipitous decline sometime soon, but I do agree that we begin with the low-lying fruit, so that harvesting fruit requires more effort -- or new techniques -- with each passing generation. The Singularity argument depends on the notion that the AI would design its own successor, or even modify itself to become smarter. Will we watch AIs gradually pull even with us and then ahead, but not disappear into the distance in a Roadrunner-like flash of dust covering just a few frames of film in our dull-witted comprehension? Ultimately, this is the question on which continued human existence may depend: If an AI is enough smarter than we are, will it find the process of improving itself to be easy, or will each increment of intelligence be a hard problem for the system of the day? This is what Bostrom calls the "recalcitrance" of the problem. I believe that the range of possible systems grows rapidly as they get more complex, and that evaluating them gets harder; this is hard to quantify, but each step might involve a thousand times as many options, or evaluating each option might be a thousand times harder. 
Growth in computational power won't dramatically overbalance that and give sustained, rapid and accelerating growth that moves AIs beyond our comprehension quickly. (Don't take these numbers seriously, it's just an example.) Bostrom believes that recalcitrance will grow more slowly than the resources the AI can bring to bear on the problem, resulting in continuing, and rapid, exponential increases in intelligence -- the arrival of the Singularity. As you can tell from the above, I suspect that the opposite is the case, or that they very roughly balance, but Bostrom argues convincingly. He is forcing me to reconsider. What about "values", my sixth type of answer, above? Ah, there's where it all goes awry. Chapter eight is titled, "Is the default scenario doom?" and it will keep you awake. What happens when we put an AI in charge of a paper clip factory, and instruct it to make as many paper clips as it can? With such a simple set of instructions, it will do its best to acquire more resources in order to make more paper clips, building new factories in the process. If it's smart enough, it will even anticipate that we might not like this and attempt to disable it, but it will have the will and means to deflect our feeble strikes against it. Eventually, it will take over every factory on the planet, continuing to produce paper clips until we are buried in them. It may even go on to asteroids and other planets in a single-minded attempt to carpet the Universe in paper clips. I suppose it goes without saying that Bostrom thinks this would be a bad outcome. Bostrom reasons that AIs ultimately may or may not be similar enough to us that they count as our progeny, but doesn't hesitate to view them as adversaries, or at least rivals, in the pursuit of resources and even existence. Bostrom clearly roots for humanity here. Which means it's incumbent on us to find a way to prevent this from happening. Bostrom thinks that instilling values that are actually close enough to ours that an AI will "see things our way" is nigh impossible. There are just too many ways that the whole process can go wrong. If an AI is given the goal of "maximizing human happiness," does it count when it decides that the best way to do that is to create the maximum number of digitally emulated human minds, even if that means sacrificing some of the physical humans we already have because the planet's carrying capacity is higher for digital than organic beings? As long as we're talking about digital humans, what about the idea that a super-smart AI might choose to simulate human minds in enough detail that they are conscious, in the process of trying to figure out humanity? Do those recursively digital beings deserve any legal standing? Do they count as human? If their simulations are stopped and destroyed, have they been euthanized, or even murdered? Some of the mind-bending scenarios that come out of this recursion kept me awake nights as I was reading the book. He uses a variety of names for different strategies for containing AIs, including "genies" and "oracles". The most carefully circumscribed ones are only allowed to answer questions, maybe even "yes/no" questions, and have no other means of communicating with the outside world. Given that Bostrom attributes nearly infinite brainpower to an AI, it is hard to effectively rule out that an AI could still find some way to manipulate us into doing its will. 
If the AI's ability to probe the state of the world is likewise limited, Bostrom argues that it can still turn even single-bit probes of its environment into a coherent picture. It can then decide to get loose and take over the world, and identify security flaws in outside systems that would allow it to do so even with its very limited ability to act. I think this unlikely. Imagine we set up a system to monitor the AI that alerts us immediately when the AI begins the equivalent of a port scan, for whatever its interaction mechanism is. How could it possibly know of the monitor's existence and avoid triggering the alert? Bostrom has gone off the deep end in allowing an intelligence to infer facts about the world even when its data is very limited. Sherlock Holmes always turns out to be right, but that's fiction; in reality, many, many hypotheses would suit the extremely slim amount of data he has. The same will be true with carefully boxed AIs. At this point, Bostrom has argued that containing a nearly infinitely powerful intelligence is nearly impossible. That seems to me to be effectively tautological. If we can't contain them, what options do we have? After arguing earlier that we can't give AIs our own values (and presenting mind-bending scenarios for what those values might actually mean in a Universe with digital beings), he then turns around and invests a whole string of chapters in describing how we might actually go about building systems that have those values from the beginning. At this point, Bostrom began to lose me. Beyond the systems for giving AIs values, I felt he went off the rails in describing human behavior in simplistic terms. We are incapable of balancing our desire to reproduce with a view of the tragedy of the commons, and are inevitably doomed to live out our lives in a rude, resource-constrained existence. There were some interesting bits in the taxonomies of options, but the last third of the book felt very speculative, even more so than the earlier parts. Bostrom is rational and seems to have thought carefully about the mechanisms by which AIs may actually arise. Here, I largely agree with him. I think his faster scenarios of development, though, are unlikely: being smart, and getting smarter, is hard. He thinks a "singleton", a single, most powerful AI, is the nearly inevitable outcome. I think populations of AIs are more likely, but if anything this appears to make some problems worse. I also think his scenarios for controlling AIs are handicapped in their realism by the nearly infinite powers he assigns them. In either case, Bostrom has convinced me that once an AI is developed, there are many ways it can go wrong, to the detriment and possibly extermination of humanity. Both he and I are opposed to this. I'm not ready to declare a moratorium on AI research, but there are many disturbing possibilities and many difficult moral questions that need to be answered. The first step in answering them, of course, is to begin discussing them in a rational fashion, while there is still time. Read the first 8 chapters of this book!
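The "recalcitrance" disagreement running through this review comes straight from the book: Bostrom frames the pace of self-improvement as optimization power divided by recalcitrance. The toy sketch below is my own construction, with invented functional forms for both quantities, and only illustrates why the reviewer's assumption and Bostrom's diverge so sharply: if recalcitrance stays roughly flat while optimization power grows with capability, growth is explosive; if recalcitrance rises in step with capability, growth stays roughly linear.

```python
# Toy illustration (my own, not from the book): integrate
#   dI/dt = optimization_power(I) / recalcitrance(I)
# under two invented assumptions about how hard further improvement gets.

def simulate(recalcitrance, steps=50, dt=1.0):
    """Integrate dI/dt = D(I) / R(I), with optimization power D(I) = I
    (the system applies its whole capability to improving itself)."""
    intelligence = 1.0
    trajectory = [intelligence]
    for _ in range(steps):
        optimization_power = intelligence
        rate = optimization_power / recalcitrance(intelligence)
        intelligence += rate * dt
        trajectory.append(intelligence)
    return trajectory

# Bostrom-style assumption: recalcitrance roughly constant -> explosive growth.
fast = simulate(lambda i: 1.0)

# Reviewer-style assumption: each increment is about as hard as the capability
# it buys (recalcitrance rises with intelligence) -> roughly linear growth.
slow = simulate(lambda i: i)

print(f"flat recalcitrance after 50 steps:   {fast[-1]:.3g}")
print(f"rising recalcitrance after 50 steps: {slow[-1]:.3g}")
```

Under the flat-recalcitrance assumption the toy trajectory doubles every step; under the rising-recalcitrance assumption it only creeps up linearly, which is exactly the slow-takeoff intuition the reviewer defends.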

  21. 4 out of 5

    Gavin

    Like a lot of great philosophy, Superintelligence acts as a space elevator: you make many small, reasonable, careful movements - and you suddenly find yourself in outer space, home comforts far below. It is more rigorous about a topic which doesn't exist than you would think possible. I didn't find it hard to read, but I have been marinating in tech rationalism for a few years and have absorbed much of Bostrom secondhand so YMMV. I loved this: Many of the points made in this book are probably wrong. It is also likely that there are considerations of critical importance that I fail to take into account, thereby invalidating some or all of my conclusions. I have gone to some length to indicate nuances and degrees of uncertainty throughout the text — encumbering it with an unsightly smudge of “possibly,” “might,” “may,” “could well,” “it seems,” “probably,” “very likely,” “almost certainly.” Each qualifier has been placed where it is carefully and deliberately. Yet these topical applications of epistemic modesty are not enough; they must be supplemented here by a systemic admission of uncertainty and fallibility. This is not false modesty: for while I believe that my book is likely to be seriously wrong and misleading, I think that the alternative views that have been presented in the literature are substantially worse - including the default view, according to which we can for the time being reasonably ignore the prospect of superintelligence. Bostrom introduces dozens of neologisms and many arguments. Here is the main scary a priori one though: 1. Just being intelligent doesn't imply being benign; intelligence and goals can be independent. (The orthogonality thesis.) 2. Any agent which seeks resources and lacks explicit moral programming would default to dangerous behaviour. You are made of things it can use; hate is superfluous. (Instrumental convergence.) 3. It is conceivable that AIs might gain capability very rapidly through recursive self-improvement. (Non-negligible possibility of a hard takeoff.) 4. Since AIs will not be automatically nice, would by default do harmful things, and could obtain a lot of power very quickly*, AI safety is morally significant, deserving public funding, serious research, and international scrutiny. Of far broader interest than its title (and that argument) might suggest to you. In particular, it is the best introduction I've seen to the new, shining decision sciences - an undervalued reinterpretation of old, vague ideas which, until recently, you only got to see if you read statistics, and economics, and the crunchier side of psychology. It is also a history of humanity, a thoughtful treatment of psychometrics v genetics, and a rare objective estimate of the worth of large organisations, past and future. Superintelligence's main purpose is moral: he wants us to worry and act urgently about hypotheticals; given this rhetorical burden, his tone too is a triumph. 
For a child with an undetonated bomb in its hands, a sensible thing to do would be to put it down gently, quickly back out of the room, and contact the nearest adult. Yet what we have here is not one child but many, each with access to an independent trigger mechanism. The chances that we will all find the sense to put down the dangerous stuff seem almost negligible. Some little idiot is bound to press the ignite button just to see what happens. Nor can we attain safety by running away, for the blast of an intelligence explosion would bring down the firmament. Nor is there a grown-up in sight... This is not a prescription of fanaticism. The intelligence explosion might still be many decades off in the future. Moreover, the challenge we face is, in part, to hold on to our humanity: to maintain our groundedness, common sense, and good-humored decency even in the teeth of this most unnatural and inhuman problem. We need to bring all human resourcefulness to bear on its solution. I don't donate to AI safety orgs, despite caring about the best way to improve the world and despite having no argument against it better than "that's not how software has worked so far" and despite the concern of smart experts. This sober, kindly book made me realise this was more to do with fear of sneering than noble scepticism or empathy. [EDIT 2019: Reader, I married this cause.] * People sometimes choke on this point, but note that the first intelligence to obtain half a billion dollars virtually, anonymously, purely via mastery of maths occurred... just now. Robin Hanson chokes eloquently here and for god's sake let's hope he's right.

  22. 5 out of 5

    Blake Crouch

    The most terrifying book I've ever read. Dense, but brilliant.

  23. 4 out of 5

    Tammam Aloudat

    This is at the same time a difficult-to-read and horrifying book. The progress that we may or will see from "dumb" machines into super-intelligent entities can be daunting to take in and absorb, and the consequences can range from the extinction of human life all the way to a comfortable and effortlessly meaningful one. The first issue with the book is the complexity. It is not only the complexity of the scientific concepts included; one can read the book without necessarily fully understanding the nuances of the science. It is the complexity of the language and the references to a multitude of legal, philosophical, and scientific concepts outside the direct domain of the book, from "Malthusian society" to "Rawlsian veil of ignorance", as if assuming that the lay reader should, by definition, fully grasp the reference. This, I find, reflects a lot of pretension on the author's part. However, the book is a valuable analysis of the history, present, and possible futures of developing artificial and machine intelligence that is diverse and well thought out. The author is critical and comprehensive and knows his stuff well. I found it made me think of things I hadn't considered before and provided me with some frameworks to understand how one can position oneself when confronted with the possibilities of intelligent or super-intelligent machines. Another gain is purely technical. I have learned a lot about the possibilities of artificial intelligence, which apparently is not only a programmed supercomputer but also AIs that are adjusted copies of human brains, ones that do not require the maker to understand the intelligence of the machine they are creating. The book also talks in detail about some fascinating topics. In a situation where, intelligence-wise, a machine is to a human what a human is to a mouse, we cannot even understand the ways a super-intelligent machine can out-think us and we, for all intents and purposes, cannot make sure that such a machine is not going to override any safety features we put in place to contain it. We also cannot understand the many ways the AI can be motivated and towards what ends, and how any miscalculation on our side in making it can lead to grave consequences. The good news, in a way, is that we are still some time away (or so it seems) from a super-intelligent AI. The one thing I missed more than anything in this book, to go back to the readability issue, is a little referencing that anchors the concepts we read about in concepts we understand. After all, on the topic of AI, we have a wealth of pop-culture references that would help us understand what the author is talking about, yet he did not so much as hint at them. I was somewhat expecting that he would link the concepts he was talking about to science fiction known to us all. I had many moments of "ah, this is Skynet/Asimov/HAL 9000/The Matrix/etc etc". There is an art to linking science with culture that Mr. Bostrom has little grasp of in his somber and barely readable style. 
This book could have been much more fun and much easier to read.

  24. 4 out of 5

    Brendan Monroe

    Reading this was like trying to wade through a pool of thick, gooey muck. Did I say pool? I meant ocean. And if you don't keep moving you're going to get pulled under by Bostrom's complex mathematical formulas and labored writing and slowly suffocate. It shouldn't have been this way. I went into it eagerly enough, having read a little recently about AI. It is a fascinating subject, after all. Wanting to know more, I picked up "Superintelligence". I could say my relationship with this book was akin to the one Michael Douglas had with Glenn Close in "Fatal Attraction" but there was actually some hot sex in that film before all the crazy shit started happening. The only thing hot about this book is how parched the writing is. To say that this reads more like a textbook wouldn't be right either, as I have read some textbooks that were absolute nail-biters by comparison. Yes, I'm giving this 2 stars but perhaps that's my own insecurity at refusing to let a 1-star piece of shit beat me. This isn't an all-out bad book, it's just a book by someone who has something interesting to say but no idea of how to say it — at least, not to human beings. You know things aren't looking good when the author says in his introduction that he failed in what he set out to do — namely, write a readable book. Maybe save that for the afterword? But it didn't matter that I was warned. I slogged through the fog for 150 pages or so, finally throwing in the towel about a quarter of the way in. I never thought someone could make artificial intelligence sound boring but Nick Bostrom certainly has. The only part of the thing I liked at all was the nice little parable at the beginning about the owl. That lasted only a couple pages and you could tell Bostrom didn't write it because it was: 1. Understandable 2. Interesting. If you're doing penance for some sin, forcing this down ought to cover a murder or two. Here you are, O.J. Justice has finally been served. To everyone else wanting to read this one, you really don't hate yourselves that much.

  25. 5 out of 5

    Radiantflux

    81st book for 2018. In brilliant fashion Bostrom systematically examines how a superintelligence might arise over the coming decades, and what humanity might do to avoid disaster. Bottom line: Not much. 4 stars.

  26. 5 out of 5

    Bill

    An extraordinary achievement: Nick Bostrom takes a topic as intrinsically gripping as the end of human history if not the world and manages to make it stultifyingly boring.

  27. 4 out of 5

    Meghan

    More detail than I needed on the subject, but I might rue that statement when the android armies are swarming Manhattan. JK... for now.

  28. 5 out of 5

    Miles

    The idea of artificial superintelligence (ASI) has long tantalized and taunted the human imagination, but only in recent years have we begun to analyze in depth the technical, strategic, and ethical problems of creating as well as managing advanced AI. Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies is a short, dense introduction to our most cutting-edge theories about how far off superintelligence might be, what it might look like if it arrives, and what the consequences might be for humanity. It’s a worthwhile read for anyone passionate about the subject matter and willing to wade through a fair amount of jargon. Bostrom demonstrates an impressive grasp of AI theory, and a reader like me has neither the professional standing nor the basic knowledge to challenge his technical schemas or predictions, which by and large seem prudent and well-reasoned. Instead, I want to home in on some of the philosophical assumptions on which this book and others like it are founded, with the goal of exposing some key ethical issues that are too often minimized or ignored by technologists and futurists. Some of these I also took up in my review of James Barrat’s Our Final Invention, which should be viewed as a less detailed but more accessible companion to Bostrom’s work. I’ll try not to rehash those same arguments here, and will also put aside for the sake of expedience the question of whether or not ASI is actually attainable. Assuming that it is attainable, and that it’s no more than a century away (a conservative estimate by Bostrom’s standards), my argument is that humans ought to be less focused on what we might gain or lose from the advent of artificial intelligence and more preoccupied with who we might become and––most importantly––what we might give up. Clever and capable as they are, I believe thinkers like Nick Bostrom suffer from a kind of myopia, one characterized by a zealous devotion to particularly human ends. This devotion is reasonable and praiseworthy according to most societal standards, but it also prevents us from viewing ASI as a genuinely unique and unprecedented type of being. Even discussions about the profoundly alien nature of ASI are couched in the language of human values. This is a mistake. In order to face the intelligence explosion head-on, I do not think we can afford to view ASI primarily as a tool, a weapon, a doomsday machine, or a savior––all of which focus on what ASI can do for us or to us. ASI will be an entirely new kind of intelligent entity, and must therefore be allowed to discover and pursue its own inquiries and ends. Humanity’s first goal, over and above utilizing AI for the betterment of our species, ought to be to respect and preserve the radical alterity and well-being of whatever artificial minds we create. 
Ultimately, I believe this approach will give us a greater chance of a peaceful coexistence with ASI than any of the strategies for “control” (containment of abilities and actions) and “value loading” (getting AIs to understand and act in accordance with human values) outlined by Bostrom and other AI experts. Bostrom ends Superintelligence with a heartfelt call to “hold on to our humanity: to maintain our groundedness, common sense, and good-humored decency even in the teeth of this most unnatural and inhuman problem” (260). Much of his book, however, does not describe attitudes and actions that are in alignment with this message. Large portions are devoted to outlining what can only be called high-tech slavery––ways to control and manipulate AI to ensure human safety. While Bostrom clearly understands the magnitude of this challenge and its ethical implications, he doesn’t question the basic assumption that any and all methods should be deployed to give us the best possible chance of survival, and beyond that to promote economic growth and human prosperity. The proposed control strategies are particularly worrisome when applied to whole brain emulations––AIs built from models of artificial neural networks (ANNs) that could be employed in a “digital workforce.” Here are some examples: "One could build an AI that places final value on receiving a stream of 'cryptographic reward tokens.' These would be sequences of numbers serving as keys to ciphers that would have been generated before the AI was created and that would have been built into its motivation system. These special number sequences would be extremely desirable to the AI…The keys would be stored in a secure location where they could be quickly destroyed if the AI ever made an attempt to seize them. So long as the AI cooperates, the keys are doled out at a steady rate." (133) "Since there is no precedent in the human economy of a worker who can be literally copied, reset, run at different speeds, and so forth, managers of the first emulation cohort would find plenty of room for innovation in managerial strategies." (69) "A typical short-lived emulation might wake up in a well-rested mental state that is optimized for loyalty and productivity. He remembers having graduated top of his class after many (subjective) years of intense training and selection, then having enjoyed a restorative holiday and a good night’s sleep, then having listened to a rousing motivational speech and stirring music, and now he is champing at the bit to finally get to work and to do his utmost for his employer. He is not overly troubled by thoughts of his imminent death at the end of the working day. Emulations with death neuroses or other hang-ups are less productive and would not have been selected." (169) To his credit, Bostrom doesn’t shy away from the array of ethical dilemmas that arise when trying to control and direct the labor of AIs, nor does he endorse treatment that would appear harmful to any intelligent being. What he fails to explore, however, are the possible consequences for humanity of assuming the role of master over AI. Given that most AI theorists seem to accept that the “control problem” is very difficult and possibly intractable, it is surprising how comfortable they are with insisting that we ought to do our best to solve it anyway. 
If this is where we decide our best minds and most critical resources should be applied, I fear we will risk not only incurring the wrath of intelligences greater than our own, but also reducing ourselves to the status of slaveholders. One need only pick up a history book to recall humanity’s long history of enslaving other beings, including one another. Typically these practices fail in the long term, and we praise the moments and movements in history that signify steps toward greater freedom and autonomy for oppressed peoples (and animals). Never, however, have we attempted to control or enslave entities smarter and more capable than ourselves, which many AIs and any version of ASI would certainly be. Even if we can effectively implement the elaborate forms of control and value loading Bostrom proposes, do we really want to usher AI into the world and immediately assume the role of dungeon-keeper? That would be tantamount to having a child and spending the rest of our lives trying to make sure it never makes a mistake or does something dangerous. This is an inherently internecine relationship, one in which the experiences, capabilities, and moral statuses of both parties are corrupted by fear and distrust. If we want to play god, we should gracefully accept that the possibility of extinction is baked into the process, even as we do everything we can to convince ASI (not force it) to coexist peacefully. Beyond the obvious goals of making sure AIs can model human brain states, understand language and argumentation, and recognize signs of human pleasure and suffering, I do not believe we should seek to sculpt or restrict how AIs think about or relate to humans. Attempting to do so will probably result in tampering with a foreign mind in ways that could be interpreted (fairly or otherwise) as hostile or downright cruel. We’ll have a much better case for peaceful coexistence if we don’t have to explain away brutal tactics and ethical transgressions committed against digital minds. More importantly, we’ll have the personal satisfaction of creating a genuinely new kind of mind without indulging petulant illusions that we can exercise complete control over it, and without compromising our integrity as a species concerned with the basic rights of all forms of intelligence. Related to the problem of digital slavery is Bostrom’s narrow vision of how ASI will alter the world of human commerce and experience. Heavily influenced by the arguably amoral work of economist Robin Hanson, Bostrom takes it as a given that the primary function of whole brain emulations and other AIs should be to create economic growth and replace human labor. Comparing humans to the outsourced workhorses of our recent past, Bostrom writes: "The potential downside for human workers is therefore extreme: not merely wage cuts, demotions, or the need for retraining, but starvation and death. When horses became obsolete as a source of moveable power, many were sold off to meatpackers to be processed into dog food, bone meal, leather, and glue. These animals had no alternative employment through which to earn their keep." (161) Once reduced to a new “Malthusian” condition, human workers would be replaced by digital ones programmed to be happy on the job, run at varying speeds, and also “donate back to their owners any surplus income they might happen to receive” (167). These whole brain emulations or AIs could be instantly copied and erased at the end of the working day if convenient. 
Bostrom is quick to assure us that we shouldn’t try to map “human” ideas of contentment or satisfaction onto this new workforce, arguing that they will be designed to offer themselves up as voluntary slaves with access to self-regulated “hedonic states,” just so long as they are aligned with ones that are “most productive (in the various jobs that emulations would be employed to do)” (170). It would be unwise to critique this model by saying it is impossible to design an artificial mind that would be perfectly happy as a slave, or to say we could scrutinize the attitudes and experiences of such minds and reliably conclude that they have what Bostrom calls “significant moral status” (i.e. the capacity for joy and suffering) (202). It is therefore hard to raise a moral objection against the attempted creation and employment of such minds. However, it seems clear that the kinds of individuals, corporations, and governments that would undertake this project are the same that currently hoard capital, direct resources for the good of the few rather than the many, militarize technological innovations, and drive unsustainable economic growth instead of promoting increases in living standards for the neediest humans. The use of AI to accelerate these trends is both a baleful and, realistically, a probable outcome. But it is not the only possible outcome, or even the primary one, as Bostrom and Hanson would have us believe. There is little mention in this book of the ways AI or ASI could improve and/or augment the human experience of art, social connection, and meaningful work. The idea of humans collaborating with artificial workers in a positive-sum way isn’t even seriously considered. This hyper-competitive outlook reflects the worst ideological trends in a world already struggling to legitimize motivations for action that extend beyond the tripartite sinkhole of profit, return on investment, and unchecked economic growth. Readers seeking a more optimistic and humanistic view of how automation and technology might lead to a revival of community values and meaningful human labor should seek out Jeremy Rifkin’s The Zero Marginal Cost Society. My argument is not that the future economy Bostrom and Hanson predict isn’t viable or won’t come to pass, but rather that in order to bring it about humans would have to compromise our ethics even more than the globalized world already requires. Wiring and/or selecting AIs to happily and unquestioningly serve pre-identified human ends precludes the possibility of allowing them to explore the information landscape and generate their own definitions of “work,” “value,” and “meaning.” Taking the risk that they come to conclusions that conflict with human needs or desires is, in my view, a better bet than thinking we already know what’s best for ourselves and the rest of the biosphere. Speaking of “biosphere,” that’s a word you definitely won’t find in this book’s index. Also conspicuously absent are words like “environment,” “ecosystem,” or “climate change.” Bostrom’s book makes it seem like ASI will probably show up at a time of relative peace and stability in the world, both in terms of human interactions and environmental robustness. Bostrom thinks ASI will be able to save us from existential risks like “asteroid impacts, supervolcanoes, and natural pandemics,” but has nothing to say about how it might mitigate or exacerbate climate problems (230). 
This is a massive oversight, especially because dealing with complex problems like ecosystem restoration and climate analysis seems among the best candidates for the application of superintelligent minds. Bostrom skulks around the edges of this issue but fails to give it a proper look, stating: "We must countenance a likelihood of there being intellectual problems solvable only by superintelligence and intractable to any ever-so-large collective of non-augmented humans…They would tend to be problems involving multiple complex interdependencies that do not permit of independently verifiable solution steps: problems that therefore cannot be solved in a piecemeal fashion, and that might require qualitatively new kinds of understanding or new representation frameworks that are too deep or too complicated for the current edition of mortals to discover or use effectively." (58) Climate change is precisely this kind of problem, one that has revealed to us exactly how inadequate our current methods of analysis are when applied to hypercomplex systems. Coming up with novel, workable climate solutions is arguably the most important potential use for ASI, and yet such a proposal is nowhere to be found in Bostrom’s text. I’d venture that Bostrom thinks ASI will almost certainly arrive prior to the hard onset of climate change catastrophes, and will therefore obviate worst-case scenarios. I hope he’s right, but find this perspective incommensurate with Bostrom’s detailed acknowledgments of precisely how hard it’s going to be to get ASI off the ground in the first place. It also seems foolhardy to assume ASI will be able to mitigate ecosystem collapse in a way that’s at all satisfactory for humans, let alone other forms of life. Ironically, Bostrom’s willingness to ignore this important aspect of the AI conversation reveals the inadequacies of academic and professional specialization, ones that perhaps only an ASI could overcome. I want to close with some words of praise. Superintelligence is an inherently murky topic, and Bostrom approaches it with thoughtfulness and poise. The last several chapters––in which Bostrom directly takes up some of the ethical dilemmas that go unaddressed earlier in the book––are especially encouraging. He effectively argues that internationally collaborative projects for pursuing ASI are preferable to unilateral or secretive ones, and also that any benefits reaped ought to be fairly distributed: "A project that creates machine superintelligence imposes a global risk externality. Everybody on the planet is placed in jeopardy, including those who do not consent to having their own lives and those of their family imperiled in this way. Since everybody shares the risk, it would seem to be a minimal requirement of fairness that everybody also gets a share of the upside." (250) Bostrom’s explication of Eliezer Yudkowsky’s theory of “coherent extrapolated volition” (CEV) also provides a pragmatic context in which we could prompt ASI to aid humanity without employing coercion or force. CEV takes a humble approach, acknowledging at the outset that humans do not fully understand our own motivations or needs. It prompts an ASI to embark on an in-depth exploration of our history and current predicaments, and then to provide models for action based on imaginings of what we would do if we were smarter, more observant, better informed, and more inclined toward compassion. 
Since this project needn’t take up the entirety of an ASI’s processing power, it could be pursued in tandem with the ASI’s other, self-generated lines of inquiry. Such collaboration could provide the bedrock for a lasting, fruitful relationship between mutually respectful intelligent entities.

The global discussion about the promise and risks of artificial intelligence is still just beginning, and Nick Bostrom’s Superintelligence is a worthy contribution. It provides excellent summaries of some of our best thinking, and also stands as a reminder of how much work still needs to be done. No matter where this journey leads, we must remain vigilant about how our interactions with and feelings about AI change us, for better and for worse.

This review was originally published on my blog, words&dirt.

  29. 5 out of 5

    Richard Ash

    A few thoughts: 1. Very difficult topic to write about. There's so much uncertainty involved that it's almost impossible to even agree on the basic assumptions of the book. 2. The writing is incredibly thorough, given the assumptions, but also hard to understand. You need to follow the arguments closely and reread sections to fully understand their implications. Overall, an interesting and thought-provoking book, even though the basic assumptions are debatable. P.S. (6 months later) Looking back on this book, I think a major theme is encapsulated by the story of the AI Alice, the paperclip maximizer. In this story, Alice is charged with collecting as many paperclips as she can. She achieves this goal by transforming the entire universe into a paperclip factory, destroying all life in the universe in the process. (For the full story see https://wiki.lesswrong.com/wiki/Paper...) The main lesson is that what we consider human values won't spontaneously arise in machines, and as the story of Alice shows, this could be dangerous for humans. Nick returns to this theme again and again throughout his book: we need to be very careful to teach machines human values and not assume that these values will arise automatically.

  30. 5 out of 5

    Morgan Blackledge

    I’m late to the party as far as considering the dangers of artificial intelligence goes. I got this book after watching Sam Harris’s TED talk on the subject. I’m still on the fence about whether to be afraid or psyched. Admittedly, I’m mostly the latter. But it is at least clear to me now that this is a pernicious intuition that deserves further interrogation. On the fun side: the topic is rich and generative of some really fun and interesting thought experiments. On the fear side: one can’t help but think about WWI as an example of what happens when technology changes war/politics and millions die before people adapt their thinking. Weaponizing super-intelligence sounds sci-fi, but after reading this book and living through the recent election cycle, I’m rather convinced that it’s an inevitability. The understanding that weaponized AI is inevitable elicits philosophical and ethical dilemmas akin to those considered in the lead-up to nuclear weapons production. Ironically, we may need John von Neumann + Alan Turing’s trillion-teraflop electronic love child to sort it all out and save us from itself.
