Climbing Out of the Transhuman Stew Pot, or, Why I’m Not a Singularitarian

Thinking about the future always depresses me, so I’m not sure why I do it so often. I sometimes wish I could crawl into the chorus of some hippy song like “sha la la la la la live for today” and like the song says, “don’t worry about tomorrow, hey hey hey hey.” But that’s just not me. I think if the hippies had put a little more thought into the future and not got stuck on the idea that all we need is love, we might not have stumbled barefoot into our dystopian present.

And as that present dissolves into an even more dystopian future, I find myself at a loss for some philosophy, some fragment of a clue, that could guide me on a safe path through what is shaping up to be the most unpredictable era in human history.

If that sounds like hyperbole to you, then we are clearly on separate cultural tracks. And yes, it’s true that my cultural track has primed me to fear the future. From the atom-age hysterics of The Twilight Zone (best TV show ever) to the cybernetic armies crushing mountains of human skulls in The Terminator movies, I’ve seen the human race destroyed in a thousand ways—tragic, ironic, horrible and camp. I wish I could say it’s prepared me for what I’ve come to feel in my bones is the inevitable end of the human race. As I’ve written in previous columns, I’ve chosen not to live my life as if the end were at hand. But that doesn’t mean I don’t think about it all the time.

When I was a kid, the big threat was nuclear annihilation. As the cold war faded, new doomsdays appeared in its place. The ozone hole got fixed, more or less, but then climate change became a thing, along with various pandemics, killer asteroids, cancer clusters, frankenfoods, megavolcanoes, solar storms, mass extinctions, Y2Ks, EMPs, and of course that old standby, The Rapture. Through all these imagined disasters, pop culture was there to simultaneously stoke and soothe my fears, with eye-candy images of fantastic destruction paired with stories of unlikely heroes somehow beating the odds. In the last few years, we seem to have settled on a lazy, catch-all metaphor for all those fears in the form of zombies. But zombies don’t scare me anymore. What scares the hell out of me lately is computers.

Book: The Singularity is Near: When Humans Transcend Biology

Author: Ray Kurzweil

Publisher: Penguin

Publication date: 2006-09

Image: http://images.popmatters.com/news_art/o/outofpocket-singularityisnear-kurzweil-cvr-200.jpg

The big trend in apocalyptic thinking is now computer-based, and it’s strangely not even billed as apocalyptic. It’s known as the Singularity, a point in the somewhat near future when computers become more intelligent than people, which will force humans to adapt their very biology in order to keep up. This era, say the “singularitarians” (an appropriately weird and culty-sounding moniker), will usher in the next phase of human evolution, one in which we transcend all of our current limitations, including death itself. The prophet of the Singularity is inventor and all-around smartypants Ray Kurzweil, who, through a supreme effort of optimism, has come to argue that the end of human life as we know it is not a scary thing at all, but a necessary and natural step in our species’ evolution.

I am clearly not as smart as Kurzweil, so I’m in no position to say whether he’s right or not. His books give me a headache, zipping around as they do from dendrites to destiny, then from nanobots to Nietzsche. I can’t hold all this stuff in my head at once, so I tend to fall back on the stories and metaphors helpfully provided by the culture I grew up in. The Rise of the Machines is something I can wrap my head around. The Matrix I get. Frankenstein, Wall-E, Armageddon and War of the Worlds all make perfect sense to me. Partly, that’s because those stories are continuations of ancient themes so basic and ubiquitous they make up a huge chunk of who we are. But something about the Singularity is different. Its effects would be so sudden and unpredictable that we currently have no metaphors, no easy frames of reference, to even imagine it with.

Here’s the scenario: Within a decade or so, computers will be as smart as humans. This will allow scientists to use them to solve all kinds of problems inherent in our biology. Tiny, self-replicating robots will scour our blood for pathogens and repair our bodies at the cellular level. Computers the size of molecules will enhance our brains and muscles, making us superhuman. From there, change will accelerate so rapidly that by 2040, computers will be able to build ever more intelligent and networked computers, which will process information and communicate with one another at the speed of light. By the middle of this century, the new superhumans will ditch their old bodies and personalities to merge with these machines, creating some new entity that is neither strictly biological nor technological. Our world will be unalterably transformed, and our species as we know it will cease to exist.

My immediate, anthropocentric reaction to all this is a hearty and incredulous “Fuck that!”, even as the lazy existentialist in me wonders if it’s not all for the best. I mean, humans appear to be constantly on the verge of destroying themselves already, so maybe it’s better for the planet if we pass the top-of-the-food-chain baton to some being smarter than ourselves. We had a good run, right? (Nothing compared to the dinosaurs of course, who lasted 165 million years to our puny 100,000, but then again, dinosaurs could not possibly have had as good a time as we have.) Maybe whatever we turn into will, as Kurzweil dreams, spread germs of intelligence throughout the universe, eventually transforming the universe itself into one giant, conscious, self-regulating supercomputer. If that happens, we will have manufactured God in our image.

I’m so bothered by Kurzweil’s vision that I sometimes can’t sleep at night. Part of it, as I’ve said, is that I’m particularly susceptible to the seductive oblivion of the doomsday fantasy. I also have no religious belief in an afterlife to reassure me, and no cultural frame to help me envision my role in such a weird future. But throwing out the metaphysics and thinking strictly in practical terms, what does one do today to prepare for tomorrow’s transhuman existence? Stocking up on canned food and firearms seems like a pretty quaint solution. Hedonism would be a nice way to go out, except I have a life to run here and not enough money to last until next month, much less until 2040.

And then there are my kids, the oldest 12, the youngest 8. What should I urge them to be when they grow up? Cyborgs?

“Don’t worry about college, kids. We can’t afford it, and besides, by the time you’re my age you’ll have morphed into a silicon-controlled, shapeshifting ultra-organism that will allow you to become and do whatever you feel like, and you’ll live forever as part of a vast and immaculate superbrain. Just play lots of video games and you’ll be fine.”

This is where the fear comes in. Because if I already can’t pay their orthodontist bills, how am I going to afford blood-scrubbing nanobots and cybernetic implants for Christmas 2020? I should probably tell them to be investment bankers or something. See why I’m depressed?

For all of Kurzweil’s far-reaching and well-researched theories, he’s managed either to miss or to fail to communicate the most important thing: the effect of the Singularity on the day-to-day lives of ordinary people.

Book: You Are Not a Gadget: A Manifesto

Author: Jaron Lanier

Publisher: Knopf Doubleday

Publication date: 2011-02

Image: http://images.popmatters.com/misc_art/o/outofpocket-youarenotagadget-cvr-200.jpg

That’s why my outlook was lifted a bit after reading Jaron Lanier’s You Are Not a Gadget (2010). (I got it at the library and mistook it for his latest, Who Owns the Future?, but it seems to cover much of the same territory.) Lanier is an OG computer guy who helped invent virtual reality, so he’s no Luddite out to smash Kurzweil’s carbon nanotube-weaving loom. But he does take exception to futurists like Kurzweil who are so enthralled with the coming machine age that they seem creepily eager to discard humanity altogether.

“If you believe the Rapture is imminent,” Lanier writes, “fixing the problems of this life might not be your biggest priority… In the same way, if you believe the Singularity is coming soon, you might cease to design technology to serve humans, and prepare instead for the grand events it will bring… There’s nothing special about the place of humans in this scheme… People will be obsolete…”

Lanier also takes issue with one of the underpinnings of Kurzweil’s argument: that, per Moore’s Law, computers will become exponentially more powerful over the coming years. While not denying the essential truth of Moore’s Law, Lanier points out that although raw computing power is indeed accelerating, the software that runs those machines shows signs of stagnation and “lock-in”, a coder’s term for how new software is inevitably built on what came before it.

Lock-in occurs when an essential program (UNIX or MIDI, for example) becomes so standardized that every new piece of software must be compatible with it in order to work at all. This leads to a situation where the original, flawed software designs of the early computer era come to dictate the shape of modern software. The flaws are “locked in”, and since the system is interdependent, they would be almost impossible to change, like building a new foundation for the Empire State Building while keeping all its offices open.

Maybe this only slows down the effect of Moore’s Law and puts off the Singularity for a few years; I’m not sure. Kurzweil’s timeframe does seem to overestimate how fast we humans can actually get anything done. But what really interests me about Lanier is that he draws a direct parallel between computer lock-in and cultural lock-in. “Pop culture has entered into a nostalgic malaise,” he notes. “Online culture is dominated by trivial mashups of the culture that existed before the onset of mashups, and by fandom responding to the dwindling outposts of centralized media. It is a culture of reaction without action.”

This statement is borne out in so many examples of pop culture right now that it’s hard to winnow the list. Hollywood has desperately pinned its fortunes on superheroes from the mid-20th century, and it now churns out almost identical remakes of movies that are only a few years old. Pop songs sound more alike each day. Fashion is a circular montage of re-recycled trends. Otherwise talented writers have resorted to rewriting classics to include zombies, and what is touted as the most “original” new writing is appallingly emotionless, intentionally meaningless and abrasively machine-like (looking at you, Tao Lin).

Yes, there is a lot of good stuff out there, too. But as Lanier notes, why have there been no major cultural advancements in the past two decades? Hip-hop was the last big innovation in music, and that happened in the ‘80s. Technology has advanced like crazy since then, and in the past, culture has kept pace, a good example being the leap taken in music from the mid-‘40s to the mid-‘50s, thanks to the invention of the electric guitar amp. In the ‘90s, when computers were becoming personal and we were all logging on for the first time, everyone dreamed of a connected society, where the tools to create and distribute our individual visions would be democratized and a vibrant new culture would emerge. We would leave the 20th century for ports unknown, like a cruise ship off to sea.

But it’s been 20 years and we’re still tied to the dock, trading .jpgs of cats. Why?

Lanier blames the architecture of the web, and I think he has a good point. The internet is now the dominant medium in our lives, and the way it’s currently set up, it promotes the idea that being anonymous sprockets in a vast machine is better than being a collection of autonomous individuals. We’re told constantly that wiki-style collaboration is better than individual effort, but is that really true? Where’s the proof? Why do we suddenly believe in the wisdom of crowds? When has a crowd ever been wise? Certainly fewer times than crowds have turned into mobs, and no one ever talks about the wisdom of mobs. But we are becoming one huge, digital mob, and it seems to me that the very idea of valuing unique expression is being sacrificed to it.

Which is probably why I’m so disturbed by Kurzweil. While a part of me adores the idea that we are evolving into something smarter and more efficient (honestly, I wouldn’t mind having a chip in my brain to help me remember people’s names and where I left my keys), another part of me rages against the loss of humanity and individuality that we are surely setting ourselves up for.

Because innovation has never been accomplished by crowds. Yes, there have been revolutions in which mobs overpowered the state, but most have been bloody and unsuccessful, leading not to utopias, but to cycles of reprisal and revenge that leave societies stunted for generations. The few revolutions that did succeed had at their helm a visionary individual, or a small group of visionaries working closely together, not a loosely connected crowd.

In the arts, when has a crowd ever written a great book or a symphony, or even a decent pop song? Try to get an anonymous crowd to paint the most gorgeous landscape on Earth and you’ll end up with a bunch of petty squabbles and a canvas covered in random strokes and the occasional happy squirrel.

It’s the same in the tech world—what true innovation has come from our newly exalted crowdsourcing? Wikipedia? It’s just another encyclopedia, only the writers don’t get paid and have to fight each other over every sentence. Linux? Wonderful, we’ve reinvented UNIX and made it even harder for the average computer user to turn the damn thing on.

Lanier calls the proponents of crowdsourcing “digital Maoists”, because for all their libertarian posturing, they are, in practice, elevating the mob over the individual. I say the same goes for singularitarians, with their blasé attitude towards human extinction.

We can see the results of these attitudes in real time on the ‘net—the standardization and subsequent impoverishment of personal expression through reductive and redundant “memes”; flame wars between unaccountable blowhards sucking the air out of civil discourse; random, anonymous bullying; the abandonment of objective reality based on personal observation in favor of new, tribal “realities” that are easily and invisibly manipulated; the surrender of personal privacy—all these trends point towards the degradation of individuality.

So for my own peace of mind at least, I’m writing Kurzweil off as a well-meaning nut. But singularity or no singularity, we have to find a place in the future for people as individuals, not just as some lumpy mass of flesh and gray matter and computer chips. Because from this point on, where digital culture goes, pop culture will follow, and there’s no reason we ought to follow blindly.