Latest Posts
Tuesday, Apr 29, 2008
by PopMatters Staff

CSS
Rat Is Dead [MP3]

Robert Forster
Pandanus [MP3]

The Long Blondes
Guilt [Video]

Guillemots
Falling Out of Reach [Video]

Black Diamond Heavies
Bidin’ My Time [MP3]

Young and Sexy
The Fog [MP3]

Sarandon
Welcome [MP3]

Mike’s Dollar [MP3]

Tuesday, Apr 29, 2008

How is it that a song made of all the worst things possible is endlessly awesome? Contemplate this while enjoying Komar & Melamid’s “Most Unwanted Song” (link via Scott McLemee). The song was produced from the results of a 1990s poll asking Americans which features they liked least in music. Michael Bierut at Design Observer suggests the result is a triumph of design: “If working within limitations is one of the ways designers distinguish themselves from artists, America’s Most Unwanted Song is a design achievement of a high order.”


And naturally, the companion “Most Wanted Song”—assembled from the features the poll’s respondents liked most—is unlistenable. If American Idol is the future of pop music, this poll-produced contrivance suggests the future will be bleak indeed.


Tuesday, Apr 29, 2008

Clay Shirky, the technophilic author of a new book about spontaneous organizational behavior online, recently delivered this widely linked speech about how TV once managed to suck up the “social surplus” that is now being directed into building social networks and open-source applications and whatnot on the internet. He argues that the 20th century brought us more disposable leisure time, and it brought us TV to help us dissipate it.


Starting with the Second World War a whole series of things happened—rising GDP per capita, rising educational attainment, rising life expectancy and, critically, a rising number of people who were working five-day work weeks. For the first time, society forced onto an enormous number of its citizens the requirement to manage something they had never had to manage before—free time.
And what did we do with that free time? Well, mostly we spent it watching TV.



Shirky then explains that while Wikipedia took an estimated 100 million hours of human participation to create, American TV viewers spend that much time every weekend watching advertisements.


Currently, with Web 2.0, etc., society is erecting an “architecture of participation” (a term he borrows from tech-industry consultant Tim O’Reilly) that will allow people to switch gears from passive consumption of TV to active participation in collective social projects—annotating maps and debugging software and posting lolcats and correcting misinformed bloggers in comments and that sort of thing.


It’s better to do something than to do nothing. Even lolcats, even cute pictures of kittens made even cuter with the addition of cute captions, hold out an invitation to participation. When you see a lolcat, one of the things it says to the viewer is, “If you have some sans-serif fonts on your computer, you can play this game, too.” And that message—I can do that, too—is a big change.
This is something that people in the media world don’t understand. Media in the 20th century was run as a single race—consumption. How much can we produce? How much can you consume? Can we produce more and you’ll consume more? And the answer to that question has generally been yes. But media is actually a triathlon, it’s three different events. People like to consume, but they also like to produce, and they like to share.
And what’s astonished people who were committed to the structure of the previous society, prior to trying to take this surplus and do something interesting, is that they’re discovering that when you offer people the opportunity to produce and to share, they’ll take you up on that offer. It doesn’t mean that we’ll never sit around mindlessly watching Scrubs on the couch. It just means we’ll do it less.


Shirky’s vision of the future sounds great. We all benefit when we contribute our spare time to projects that in theory benefit society at large. And when people are more attuned to their productive rather than their consuming capabilities, they are likely to be much happier, as displaying one’s ability to make things is a quintessentially human quality and social recognition makes life worth living.

But some skepticism is in order. The notion of a social surplus sounds a lot like Bataille’s accursed share. Bataille argues that in a post-scarcity society, individuals need to come up with ways to destroy excess production through various modes of luxurious waste in order to sustain economic growth. From this point of view, wastefulness is intentional, a demonstration of wealth. Squandering the surfeit of free time on sitcoms seems a variation on this. Perhaps watching TV is not like drinking gin, as Shirky suggests, but like conspicuous consumption. It’s a triumph of our culture that we can waste entire evenings on Battlestar Galactica rather than, say, foraging for food. The point of free time is wasting it, not employing it productively in some other arena. An hour spent watching VH1 = luxury. An hour spent annotating a Google map = digital sharecropping.

I don’t want to believe this, but empirical observation suggests that many people hold this attitude. Most people aren’t looking for ways to be more productive; instead they tend to seek means to consume more. And if they are given reasons to believe their manic consumption is really a kind of production in itself, so much the better. The culture industry’s main thrust in the internet era has been to do just that: make consumption feel like participation, so we don’t feel bad when we fail to avail ourselves of the “architecture of participation”. Chances are, this architecture will become more akin to a Playland at McDonald’s than the infrastructure for a social revolution.

In the quote from Shirky, a lot hinges on the inadequately defined word interesting. Likewise for his assertion that people watching TV are doing “nothing.” Some people will not be easy to convince that watching TV is equal to doing nothing, that it is not in fact “doing something interesting.” What the culture industry has traditionally done is not only mask the social surplus, as Shirky notes, but sell the passive squandering of it as a dignified social activity—watching “Must-see TV” becomes a way to participate in water-cooler conversations that occur mainly in our imaginations. The internet is a real ongoing conversation, one that opens us to risks (of embarrassment or irritation) as well as rewards. But the pretend conversation we have while passively consuming is 100 percent safe. We are also persuaded that participation and collaboration are more inconvenient than individuation and private consumption in isolation. In my life, this plays out as me playing chess against a computer when I could readily play against human opponents online. In my mind, playing a human is more rewarding, but in practice playing the computer fulfills my need for momentary, commitment-free distraction. Sharing and cooperation lead inevitably to compromise, and the main thrust of most advertising is to convince us never to compromise when pursuing what we want in our hard-earned leisure time.


In short, the marketing world and the culture industry at large invest a lot in making doing nothing feel like something, and for many of us, it does; we collaborate on the fiction with the marketers, and this curbs our inclination for the sort of collaboration Shirky talks about. Shirky does qualify his statements by noting that it only takes a few people changing their habits to produce a huge shift once the behavior is leveraged across the internet. And perhaps that is enough to prompt optimism. But the same forces that enable online sharing also enable deeper individuation, filtering, personalization and so on, discouraging cooperation on anyone else’s terms and limiting the usefulness of what is shared. They also extend marketing’s reach, enhancing the power of its messages about the joys of passive consumption and of fantasizing about identity rather than “doing something.”


The most difficult word to define, then, is participation—to a certain degree we can’t help but participate in our culture, and Web-style interactivity has become much more prominent in that culture. But with that prominence the phenomenon becomes domesticated, becomes a new way to absorb the social surplus harmlessly, becomes the new way to watch TV. Its fruits are trivial before the fact, because most people don’t want to see themselves as revolutionaries, and many want to luxuriate in flamboyant triviality. They are already absorbed into the status quo, which perhaps is subtly shifting along the lines Shirky suggests but is always in the process of disguising the change. The radical break that futurists and techno-evangelists are always predicting is always about to happen; it can never actually come.


Tuesday, Apr 29, 2008
The Zarathustran Analytics series continues with L.B. Jeffries' thoughts on player input.


Part of the reason this analytical method is named after Nietzsche’s Thus Spoke Zarathustra is to do justice to the individualized nature of player input—to put aside judging a game purely by its gameplay or plot and go beyond that to analyzing the actual experience of the game itself. The problem is that although critics are quite capable of analyzing their own experience of playing a game, it is not so easy to apply that analysis to others. Indeed, this critical method is more an approach to assessing the experience-creating methods in a game than the individual experience itself. Player input, then, is literally your connection to the game, because it is what keeps you interested and playing. To that end, when critically judging player input, you are looking at how the game and story react to your input and at the impact this has on the overall experience. Rather than go into the huge variety of ways games do this, we’ll analyze one of the more controversial player input methods prevalent in games today and use it to highlight the requirements of player input itself.

There has been a great deal of criticism of the silent protagonist in video games recently, and for good reason: they’re suddenly everywhere. Almost all of the top-ranking games of 2007 involve playing characters who don’t speak. Gordon Freeman from Half-Life never utters a word, Master Chief hardly speaks, and Link does little more than grunt. It’s tempting to dismiss the feature as simply a cop-out on the part of the creators, and yet there are certainly games that have used the device effectively. Why does the device of not letting a player’s character speak work in some games and supposedly break down in others?


Monday, Apr 28, 2008

Recently I started making my way through Irish author Eoin “It’s Pronounced ‘Owen’!” Colfer’s popular Artemis Fowl series. I’ll admit, I’m rather behind the times—the original Artemis Fowl was published in 2001, and the following four books (plus one due out this July) about the boy genius have emerged at roughly the rate of one per year.


I believe it was in early 2004 that a fellow student of fine literature mentioned the Fowl series to me and heartily recommended it—knowing that I had just finished the latest Harry Potter installment, Harry Potter and the Order of the Phoenix, and would have to wait another year for the next segment of the Hogwarts adventure. The magical elements and witty writing style of Colfer’s work were sure to appeal. I have mentioned before that young adult fiction is not just meant for teenagers—anyone with a short attention span or simply a love of a well-spun tale is sure to enjoy it.


My friend failed to mention the enormous difference between J.K. Rowling’s work and Colfer’s. Artemis Fowl is a criminal mastermind. That is, he enjoys cheating other people out of their money. And he seems to do it only to increase his family’s fortune, which is already extensive. He gets away with it (and keeps the reader’s interest) because he has a high IQ and some excellent (and entertaining) backup in the form of his martial-arts-adept, gun-wielding “man-mountain” of a servant, known as Butler.


One reads on, first of all, because Artemis is so darned clever, and secondly because there are moments when his humanity shines through (though he tries so hard to be evil) and the reader begins to like him despite his shabby, selfish actions.



Like the Harry Potter series, Artemis Fowl is supported by supplementary short stories and even graphic novels; the first Artemis Fowl movie is rumored to be in the works. The books are quick adventures and easy reading; I made it through The Arctic Incident before the break and neglected to check out the third book in the series, The Eternity Code, but it is on my library shortlist.


Last week I wrote optimistically about my spring break reading—thinking I’d use a little LEPrecon fairy magic to stop time and get through a stack of magazines. Unsurprisingly, not much progress was made. Did you get through your vacation reading?

