
Computational ‘Superintelligence’ and Human Idiocy: What Does Our Future Hold?

Superintelligence may evolve or it may be engineered; either path leads to an existential threat to humanity, perhaps in decades, perhaps in hundreds of years.

Engineers continue to increase the capabilities of computing, yet little thought appears to go into the long-term implications of their work. That holds true even for those working in machine intelligence, which means figuring out how to apply computing to decision making, pattern recognition, relationship identification and other types of human endeavor.

We already have devices and software that far outstrip the human mind’s ability to compute. Indeed, such computational power is so commonplace that we don’t consider the rapid recalculation of a spreadsheet to be superintelligent, but it is. No human can calculate a large spreadsheet with the speed or accuracy of Microsoft Excel or Apple Numbers.

But machines today aren’t just calculating spreadsheets. Software is used to influence what you buy through advertising and recommendations. Software is used to calculate risk in financial markets and even to guide driverless cars. That engineers aren’t thinking about the long-term implications of their work is not surprising, but as a species, we must, because we are on the edge of potentially profound changes in our relationship with technology. In Superintelligence, Nick Bostrom attempts to make sense of where we are with regard to technology, and where we’re heading.

Bostrom is not an engineer; he is a philosopher, and he leads Oxford University’s Future of Humanity Institute.

At the heart of his book lie a number of uncertainties:

When will machines become intelligent?

Will machine intelligence be singular or coordinated?

Will machines evolve intelligence through human guidance or on their own?

Will superintelligent machines find a way to work with people or find humanity superfluous once they achieve cognitive awareness?

Will superintelligence be a confluence of computing and humanity, or will it be a machine-only outcome?

Bostrom offers a thorough analysis, but he doesn’t answer the questions. The end of the book is not “Bostrom’s Guide to What Will Happen with Superintelligence”. What Bostrom does write is a comprehensive guide to how to consider superintelligence at this point in human history. This is a book very much about the moment, because as technology evolves, some of the questions will remain the same, new ones will emerge, and others will no longer be relevant. We aren’t talking about the next smartphone, but potentially the next smart thing, one that may be capable of displacing humankind.

Much of Superintelligence could easily be read as science fiction, and it will likely inform science fiction writers as they use Bostrom’s work to inspire situations for their fictional characters. But Bostrom isn’t interested in fiction; he’s interested in helping humanity manage the existential threat of, as he says in the concluding chapter, a “mismatch between the power of the plaything and the immaturity of our conduct. It is a challenge for which we are not ready now and will not be ready for a long time. We have little idea when the detonation will occur, though if we hold the device to our ear we can hear a faint ticking sound.”

Throughout, Bostrom grapples with the issue of control and guidance. He asks how we will control our creations. But he also recognizes that we may invent technology that is beyond control, in which algorithms evolve new algorithms of intelligence, and do so more effectively than nature does, because human-driven evolution can weed out the inefficiencies of natural selection, which selects for traits that have nothing to do with intelligence. Further, an engineered intelligence crucible could also eliminate threats and disease, which consume evolutionary cycles in the natural world. We could, in other words, engineer a factory for engineering intelligence, though we don’t yet know how such simplification of environment and process would map back onto the evolutionary computation model.

Evolutionary algorithms are but one path to superintelligence. Bostrom also covers artificial intelligence, whole brain emulation, biological cognition, human-machine interfaces and organized, collective networks.

Although Bostrom acknowledges that “some idiot” may eventually launch a rogue superintelligence just to see what happens, the book is biased toward human beings guiding the development of superintelligence. I see no reason to believe that such a benign future awaits. Superintelligence, once recognized as achievable, will also become a weapon of geopolitical and economic warfare. Any superintelligence will be developed from within an ideology, and any universal idea of right and wrong will therefore be filtered through the ideological constructs of its inventors, its aims designed to augment the welfare and riches of those who funded its development. It doesn’t matter whether these are businesses or governments; either is capable of developing technology to serve its own purposes.

In the end, any book of philosophy is as much about what the central theme means to humanity as about conclusions or observations concerning the initial idea. Superintelligence is not an easy read, but confronting oneself in the physical or psychological mirror also proves difficult most of the time. Superintelligence reinforces the notion that we are inventing thinking machines made to serve our needs and, in many cases, to mimic our mental processes, albeit in emulation mode.

We are, of course, much more likely to create intelligences to do our bidding, to denigrate the lives of our enemies and to make our lives easier and better. Thinking about how we shape technology to do things for which we have yet to prove our own consistent competency may well answer Bostrom’s question of idiocy. In the end we must ask whether this moment in evolution is the one we want to codify in software and hardware, or whether we should, perhaps, wait until we figure out a few more things about ourselves.

RATING 8 / 10