[9 October 2006]
A problem that has haunted progressive politics is how to conjure an inspirational image of the future it’s working for without evoking a pie-in-the-sky utopia that is vulnerable to reactionary sniping. That efforts to change the status quo are inherently unrealistic and juvenile is one of the most potent weapons in the conservative arsenal. Too much vague happy talk about genial communities and personal fulfillment and you invite a sarcastic retort along the lines of Homer Simpson’s “Oh, look at me! I’m making people happy! I’m the magical man from Happyland, in a gumdrop house on Lollipop Lane!” (Or perhaps: “Marge, you’re my wife, and I love you very much. But you’re living in a world of make believe, with flowers and bells and leprechauns, and magic frogs with funny little hats.”) Enjoying meaningful work, living among friendly neighbors, having a safety net in case of catastrophe and a just distribution of a society’s wealth: these are childish notions one is supposed to surrender when one becomes an adult.
In the face of such derision, then, progressives must project a convincing ideal worth struggling for in order for people to bother to confront institutions or involve themselves in the messy world of politics. The further one’s ideal drifts from the prevailing multinational techno-capitalism, the more necessary—and difficult—this becomes. Many strains of Marxist thought are traditionally anti-utopian, regarding idle daydreaming about ideal worlds as supplanting the work necessary to achieve them. Fredric Jameson points out that “the projection of ‘socialism’ as a radical ethical alternative to the existing order virtually ensures the impossibility of its coming into being: and this, not despite its plausibility and its power as an ethical critique of capitalism, but virtually in proportion to it” (Postmodernism, or, The Cultural Logic of Late Capitalism, Duke University Press, 1991). Philosopher Slavoj Zizek adds this coruscating critique of liberal academics:
My personal experience is that practically all of the “radical” academics silently count on the long-term stability of the American capitalist model, with the secure tenured position as their ultimate professional goal (a surprising number of them even play on the stock market). If there is a thing they are genuinely horrified of, it is a radical shattering of the (relatively) safe life environment of the “symbolic classes” in the developed Western societies. Their excessive politically correct zeal when dealing with sexism, racism, Third World sweatshops, etc., is thus ultimately a defense against their own innermost identification, a kind of compulsive ritual whose hidden logic is: “Let’s talk as much as possible about the necessity of a radical change to make it sure that nothing will really change!” (“Repeating Lenin”)
Alternatives to global consumer capitalism become harder to imagine as its logic seems ever more universal and rational (it “defeated” socialism in the Cold War), and we grow accustomed to its rewards, the fruits of a “false utopia”. The glorification of purchasing power as a kind of political freedom, and the spread of easy credit extending it to all, helps sustain the illusion that we live in the best of possible worlds. And because consumerism has fulfilled certain past utopian wishes—say, for easy and rapid travel, for on-demand entertainment, for all the information in the world at our fingertips—without making us any happier, we’ve lost our ability to believe in the possibility of meaningful change. We instead embrace insatiable consumer desire and resign ourselves to an endless series of shopping binges. As Adorno claims in a conversation with Ernst Bloch from 1964, “What people have lost subjectively in regard to consciousness is very simply the capability to imagine the totality as something that could be completely different” (Bloch, The Utopian Function of Art and Literature, MIT Press, 1988). The horizon for revolutionary ideas has been delimited by the market, to which we now customarily turn to sate all our needs. Rather than utopia, we find ourselves restricted to imagining the “new and improved”—whether it be a detergent or a hybrid vehicle or a dual-core microchip processor.
Zizek has suggested that late capitalism’s grip on our imagination as the end of history is so firm that “the only way to imagine a utopia of social cooperation is to conjure a situation of absolute catastrophe” (Noam Yuran, “Disaster movies as the last remnants of Utopia,” Haaretz, 14 January 2003). Thus religious fundamentalism, with its eschatological worldview, now seems to offer the most fully elaborated alternative to global capitalism. America’s war against “Islamofascists” would seem to pit the West, with its faith in technology and scientific rationalism and democratic institutions—the sorts of things alleged to have won the Cold War—against jihadists who believe they must resist such progress in order for transformative spirituality to proliferate and make a heaven possible, one way or another, either in this world or in the next.
So two competing utopian visions are at work, the spiritual community of religious believers, and the eager community of technologized subjects, epitomized by the commercial and participatory promise of the Internet. In the realm of science fiction, the contemporary genre perhaps most likely to nurture utopian thinking, the clash-of-civilizations weltanschauung is routinely troped as a battle between humans and a rogue race of machines that have escaped from mankind’s control and seek to obliterate their weak and imperfect erstwhile masters. In the past, the robots were often depicted as a monstrously harmonious collective, in a parody of the aspirations of communism, but with new enemies, a new spin has emerged.
What makes the television show Battlestar Galactica (whose third season began on October 6) so compelling is that it adopts the struggle against Islamic terrorism as its subject, but rather than pit technological advancement against zealously intolerant spirituality, it unites the two in the guise of humankind’s robotic enemy, the Cylons, who have become otherwise indistinguishable from actual humans. Humans, who survive an initial Cylon onslaught only because they have clung to semi-outdated technology, confront Cylon ideals with nothing more than a vague belief in freedom and the redemptive power of emotion (usually love), even though emotion also routinely brings irrationality, blindness, and betrayal. We never root against the humans, as we do in Paul Verhoeven’s Starship Troopers, in which it dawns on us that the humans are the actual fascist villains of the movie, but the robots are sympathetic enough for us to entertain their point of view and to have any hope of a vicarious escape into a world of simple antinomies frustrated.
The conflict in Battlestar Galactica has overtones of C.P. Snow’s famous “two cultures” thesis, which posits an unbridgeable disconnect between the humanities and the sciences: Humanists reject the quantifying, empirical approach to knowledge that those with a scientific (Cylonic) outlook adopt, some going so far as to regard science as just another kind of distorting dogma, not altogether different from religious explanations. Humanists suspect scientists of wanting to represent all of life, in its rich ineffable mystery, in a radically reductive form as a series of mathematical equations, effectively rendering us into programmable robots, subject to having our behavior predicted by certain formulas. These seemingly irreconcilable views (E.O. Wilson notwithstanding) generate a preponderance of dystopic thinking, which, for instance, surfaced in a recent survey among policymakers about the future of the Internet. As Evan Hughes reports on the Boston Globe’s Ideas blog, 42 percent of respondents agree that “By 2020, intelligent agents and distributed control will cut direct human input so completely out of some key activities such as surveillance, security, and tracking systems that technology beyond our control will generate dangers and dependencies that will not be recognized until it is impossible to reverse them.” Paul Saffo, director of the Institute for the Future, believes “that sometime after 2020 our machines will become intelligent, evolve rapidly, and end up treating us as pets.”
But on the flip side of the fear of mechanistic science is a kind of technophilia, in which humans are so impressed with the efficiency of machines that they voluntarily seek to emulate them. Consider, for example, Mind Performance Hacks, a book recently touted at BoingBoing.net that promises “tips and tools for overclocking your brain” and “new ‘software subroutines’ that you can run to optimize various mental processes.” The brain is hardware, and consciousness the output of resident programs. Computer metaphors are attractive in that they allow us to conceptualize enduring human problems—the ones that require utopian thinking—in a readymade way that makes them seem easily and inevitably solvable by the march of technological progress. We see our own minds as programmable, controllable, able to be applied to discrete focused tasks, and to different ones simultaneously in heroic feats of multitasking. We talk about plugging ourselves into networks and leveraging the knowledge distributed among us. We imagine social life as a massive operating system in which everything has a deliberate function, so that it can seem comprehensible and manageable. We talk of unfortunate ideas as computer viruses, taking a biological metaphor that’s been technologized and repatriating it for humans.
The human-computer fantasy typically views the brain as fundamentally passive—think of The Matrix’s depiction of Keanu Reeves downloading immediately functional information directly into his brain, as though it ran on third-party programs that just needed occasional patching. One is configured as an end-user of one’s own brain, a mere consumer of the experiences it can be programmed to spit out. Consciousness is a step removed from the brain, which provides the data that consciousness enjoys, as though it were a film.
But Mind Performance Hacks inverts this, promising to make the brain work more like a machine under the user’s conscious direction, which suggests that the user’s consciousness itself aspires to be more machinelike, more relentlessly productive. Rather than receiving data the brain spits out, consciousness merges with “subroutines” it can perform to think more mechanically and more efficiently. No doubt these things work—these kinds of ideas for human perfectibility and increased mental acuity have kicked around before as mnemonics or Chisanbop or EST or hypnotherapy, bioengineering, Methedrine, etc.—but what seems new is the insistence on the computer metaphor, as if to be a computer would be to live the dream.
By imagining ourselves more like computers, we are to take the value system technology generates—the idea, almost hegemonic in business culture, that greater productive efficiency automatically generates an expansion of happiness—and apply it to our own behavior. Our economy’s emphasis on technology as a means to produce perpetual growth makes us think that by becoming more machinelike, we become more human in the sense of fulfilling our maximum potential. The more raw data we can process, the richer our lives become, as if processing information were valuable for its own sake. Information, now an unconquerable ocean, tempts us to master it through heroic feats of navigation, exploratory expeditions made purely for glory. Human potential, human experience, may come to seem entirely a matter of information processing; and the faster one’s brain processes information, the more life one crams into one’s allotted time on earth. Efforts to absorb all this information can become a kind of flow experience, a way of entering the “zone” associated with athletic accomplishment, and at that point one may seem to merge with the information itself, to become inseparable from its continual transmission. That might be the aspiration anyway: to become the best data you can be, so you still figure in a world awash in nothing but data.
Social networking sites, which already seek to reduce us (enhance us?) to a flow of routinely updated data, may be the first florescence of this. And the burgeoning popularity of virtual spaces would be the next, integrating the data in a reconstituted virtual self, bringing people a step closer to having the field for one’s identity laid out as a flexible, benevolent operating system, which lets one be ensconced in the safety of programming logic, having shifted existence to a space where inhibiting personal anomalies can simply be debugged. That is what the virtual universe of Second Life purports to provide.
My first reaction on hearing about Second Life was something along the lines of “How pathetic”. I was quick to assume that a life created online was somehow fictitious, a compensation for a circumscribed real life (as though mine were so free and uninhibited). Involvement with these worlds seemed to me the product of stunted, misdirected energy. Lacking autonomous scope in reality, we can seek refuge in a virtual world where we have quasi-divine powers of generation. In short, I saw it as one of those debilitating false utopias that vents the pressure required to generate real ones, something closer to an egalitarian socialist vision. For socialism to come into being, Jameson argues, it cannot be “staged as an ideal or a Utopia but a tendential and emergent set of already existing structures.” Zizek, in the documentary about him, argues that “the true utopia is when the situation is so without issue, so without a way to solve it within the coordinates of the possible, that out of the pure urge of survival, you have to invent a new space.” Upon further contemplation, could Second Life be such a space, emerging organically from within the technological advances wrought by capitalism?
Second Life prescribes no specific goals, or specific fantasies you should indulge—it’s not an online Renfest. Instead, unconstrained by the givens of genetics and circumstances, one can build an entirely new self that conforms more closely to one’s aspirations without having to undergo the struggles and compromises, without having to take the risks or confront the failures that one would while pursuing such ambitions in real life. A recent Wall Street Journal article reported, “There are no dragons or wizards to slay. Instead, San Francisco-based Linden Lab, the company behind Second Life, has provided a platform for players—median age 32 and 57 percent male, with 40 percent living outside the U.S.—to do whatever they want, whether it is building a business, tending bar or launching a space shuttle. Residents chat, shop, build homes, travel and hold down jobs, and they are encouraged to create items in Second Life that they can sell to others or use themselves” (Andrew Lavallee, “Now, Virtual Fashion,” Wall Street Journal, September 22, 2006). It certainly sounds like an unbounded space wherein individuals can be left alone to construct their own fantasy lives without the constraints of social pressure or necessity—a utopian space where both egalitarian and individualistic priorities can prevail.
But much of the attention Second Life attracts relates to the unusual business opportunities it has presented early adopters, which are analogous to the commercial bonanza the Internet first ushered in. Generally the oft-reported economic transactions mediating between real and online worlds—the news of people selling characters they’ve developed or treasures they’ve amassed or virtual real estate they have claim to—seemed to me simply to extend the verisimilitude of the virtual space, to make the pretend world seem more real, like having a toy Fisher-Price gas station for your Matchbox cars. The leakage between real and virtual worlds served to call greater attention to the boundary, and to the novelty of it all and the extremity of some participants’ escapism. But economic penetration into these worlds does nothing to reinforce their status as escapist realms. Rather, it actually introduces the very elements utopias have served to rectify: the competition for limited resources, the positional status games that come along with unequal distributions of income; in other words, the hegemony of late capitalism.
Suddenly one’s limitless autonomy is constrained by the very same intractable realities of money and status that role-playing games would seem designed to render insignificant. But without a specific fictive goal to pursue, the goals we improvise to direct our ambitions in real life invade, and the anxieties that beset such ambitions also seem to follow them into cyberspace. One of the fundamental ambitions we invent to preoccupy ourselves is keeping up with fashion, or staying ahead of its curve. Lavallee focuses on a user who in Second Life is both a stripper and a fashion designer. “The scene—drama and all—keeps Janine Hawkins engaged in fashion in a way that wouldn’t be possible for her offline. ‘It’s totally different to pay $15 to keep up with the fashions in Second Life than’ the $1,500 that would be necessary in real life, she says. Her avatar, Iris Ophelia, originally paid for outfits by dancing at Second Life bars. ‘Every time I had enough money, I’d run there and buy everything I could,’ she says.” As much as it sounds like she is living the dream, one wonders about the compulsiveness and narcissism, not to mention the subjugation to whimsical turns of the fashion cycle.
Often fashionability is a proxy for wealth, another way of demonstrating it conspicuously. And in the case of the otherwise disenfranchised, fashion can be an alternate means for accruing status, for participating in a game with winners and losers without having to have vast sums of money. Fashion can create a zero-sum game anywhere, permitting ruthless ranking where the means wouldn’t otherwise exist while stripping away any excuses one might have for not playing. Second Life is becoming overrun by the fashion business, which combines these two compelling ways to create winners and losers:
Because Second Life creators own their products and can sell them, the game has attracted both professional and amateur designers, says Linden spokeswoman Catherine Smith. That has led to a thriving fashion scene that includes not just dressmaking but also jewelry, hair and even skin design, as people purchase the elements to create a look for their online alter egos. Selling virtual clothes to real people for their avatars can even be lucrative: In August, the 20 best-selling Second Life fashion designers generated a combined $140,466 in sales, Linden says. “We found out pretty quickly that people loved owning things,” Ms. Smith says, and many start by buying items for their avatars. “It’s not surprising that fashion and hairstyles and skins are as attractive and as exciting and as valuable as they are, because it’s part of individualizing” the appearance of a player’s online persona.
Individualization online is not an innocuous project of self-actualization but a competition, a contest, just as we are encouraged to see it in real life. Fashion, in order to thrive, must make sure we never forget it. And rather than easing the pressure, Second Life intensifies the pitiless style scrutiny, since in the virtual world there are no excuses for fashion faux pas, and no distance between how you appear and how you believe yourself to be. These pressures ultimately encourage one to build an identity out of purchases, shrouding oneself in the protection of goods’ market-established values. Then as one identifies with these purchases they become, in their way, beloved. Thus, that utopia that might have come to be in this new, miraculously unbounded space, the conceptual space where the next step beyond consumer capitalism might have been taken, is broken upon the wheel of Smith’s simple ineluctable insight: People love owning things.
Robert Horning has developed a substantial body of work in PopMatters' music reviews, concerts, film, and TV sections. His writing has also appeared in Time Out New York and Skyscraper. In his PopMatters column, "Marginal Utility", Rob bridges the abstract and concrete aspects of consumerism. His writing is as grounded and approachable as an everyday trip to the grocery store. Rob has a BA and MA in English Literature; his interests in social theory, economics, and sociology generate the solid background knowledge for "Marginal Utility" and inform his music reviews. For more Rob Horning, be sure to read the Marginal Utility blog.
Published at: http://www.popmatters.com/pm/column/virtual-utopia/