Superintelligence : paths, dangers, strategies / Nick Bostrom, Director, Future of Humanity Institute, Professor, Faculty of Philosophy & Oxford Martin School, University of Oxford.

By: Bostrom, Nick, 1973- [author]
Material type: Text
Publisher: Oxford, United Kingdom : Oxford University Press, 2014
Edition: First edition
Description: xvi, 328 pages : illustrations, graphs, tables ; 25 cm
Content type: text
Media type: unmediated
Carrier type: volume
ISBN: 9780199678112
Subject(s): Artificial intelligence -- Philosophy | Artificial intelligence -- Social aspects | Artificial intelligence -- Moral and ethical aspects | Cognitive science | Information society
DDC classification: 006.301
LOC classification: Q335 .B685 2014
Other classification: 50.12
Contents:
Past developments and present capabilities -- Paths to superintelligence -- Forms of superintelligence -- The kinetics of an intelligence explosion -- Decisive strategic advantage -- Cognitive superpowers -- The superintelligent will -- Is the default outcome doom? -- The control problem -- Oracles, genies, sovereigns, tools -- Multipolar scenarios -- Acquiring values -- Choosing the criteria for choosing -- The strategic picture -- Crunch time.
Abstract: The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. Other animals have stronger muscles or sharper claws, but we have cleverer brains. If machine brains one day come to surpass human brains in general intelligence, then this new superintelligence could become very powerful. As the fate of the gorillas now depends more on us humans than on the gorillas themselves, so the fate of our species then would come to depend on the actions of the machine superintelligence. But we have one advantage: we get to make the first move. Will it be possible to construct a seed AI or otherwise to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation? To get closer to an answer to this question, we must make our way through a fascinating landscape of topics and considerations. Read the book and learn about oracles, genies, singletons; about boxing methods, tripwires, and mind crime; about humanity's cosmic endowment and differential technological development; indirect normativity, instrumental convergence, whole brain emulation and technology couplings; Malthusian economics and dystopian evolution; artificial intelligence, and biological cognitive enhancement, and collective intelligence.
Item type   Current library   Call number       Copy number   Status      Notes    Date due   Barcode
Books       Female Library    Q335 .B685 2014   1             Available   STACKS              51952000193807
Books       Female Library    Q335 .B685 2014   2             Available   STACKS              51952000199007
Books       Main Library      Q335 .B685 2014   1             Available   STACKS              51952000199014
Books       Main Library      Q335 .B685 2014   2             Available   STACKS              51952000193814

Includes bibliographical references (pages 305-324) and index.

1. Past developments and present capabilities -- 2. Paths to superintelligence -- 3. Forms of superintelligence -- 4. The kinetics of an intelligence explosion -- 5. Decisive strategic advantage -- 6. Cognitive superpowers -- 7. The superintelligent will -- 8. Is the default outcome doom? -- 9. The control problem -- 10. Oracles, genies, sovereigns, tools -- 11. Multipolar scenarios -- 12. Acquiring values -- 13. Choosing the criteria for choosing -- 14. The strategic picture -- 15. Crunch time.
