Tuesday, Apr 29, 2008
I sorta made a point of disparaging the idea of the Wii Wheel last week. Well, I was wrong.

One of the most misleading aspects of game journalism as a whole is the relentless air of positivity that goes into game and gear previews.  On one hand, it’s true that you don’t necessarily want to dismiss the potential of a game based on an early build or a demo; on the other, if something looks like it’s going to be lousy, even if it’s a hotly anticipated piece of software from a major publishing company, shouldn’t we go ahead and feel free to say so?


It is in this spirit that I ranted for a paragraph or so about just how awful an idea the Wii Wheel was, how its presence sullied the good name of Mario Kart and put it in the company of such peripheral bad ideas as the Wii Baseball Bat and the Wii Tennis Racket (I mean, really, just the Wiimote, all by itself, had been proven dangerous—did we have to find ways to make it bigger?).  I know that the Wii Wheel isn’t exactly an uncommon target for criticism, but between the Wii Zapper fiasco (so when’s the next “Wii Zapper Compatible” game coming out, anyway?) and this, Nintendo’s propensity to hop on the plastic-shell bandwagon seemed too troubling not to call out.


Given how quickly I jumped on the bandwagon of Wii Wheel rippers, then, it seems only fair that I should now admit that I was wrong.


There is no game out there right now, not a single one, that has brought my family together for game time more reliably and consistently than Mario Kart Wii.  Let me be clear: we are not a house of Mario Kart enthusiasts; I’ve had only a passing interest in the franchise for most of its life, apart from a brief time with the original when I was utterly obsessed.  The DS version is fun enough, but it didn’t exactly steal my life away, and I’m a little bit ashamed to admit that I’ve never even played Double Dash.  The kids have played a couple of previous iterations of the franchise as well, finding the most interest in the DS version, but even that struck them as not exactly worth giving up things like Dogz and Spider-Man: Friend or Foe.


Mario Kart Wii, on the other hand, has a Wheel.


As my wife, a teacher, suggested, it seems to be a matter of context; in education, the use of appropriate contextual cues can not only make learning easier, but can also make the students want to learn.  It seems like such a simple concept, but I had never considered that a simple wheel, attached to nothing at all, could make playing a game so much more fun than holding the Wiimote and pretending that I was gripping the three-o’clock and nine-o’clock positions on a wheel.  In dismissing it, I obviously made a huge error in judgment, because not only does the wheel seem to drum up interest in the game, it gives the kids confidence.  The game then transcends the label of “daddy’s video games” and becomes, simply, a toy.  Turn the wheel left, car goes left.  Turn the wheel right, car goes right.  Hold down the ‘2’ button the whole time, and you’re driving.  Easy as pie.
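

Purely to illustrate how direct that mapping is, here’s a hypothetical sketch in Python. The Wheel shell contains no electronics of its own; the Wiimote’s accelerometer senses how far it has been rolled, and roll maps straight onto steering. The function names and the plus-or-minus 90-degree range are invented for illustration, not taken from Nintendo’s actual implementation:

    # Hypothetical sketch of wheel-style input (not Nintendo's code).
    # The Wheel is inert plastic; the Wiimote inside senses roll, and
    # roll maps one-to-one onto steering. That directness is what makes
    # the control legible to a child: turn left, go left.
    def steering_from_roll(roll_degrees: float) -> float:
        """Map wheel roll to a steering value in [-1.0, 1.0] (assumed +/-90 degree range)."""
        clamped = max(-90.0, min(90.0, roll_degrees))
        return clamped / 90.0  # tilt left gives negative (left), tilt right positive (right)

    def kart_input(roll_degrees: float, button_2_held: bool) -> dict:
        """Turn the wheel and you steer; hold '2' the whole time and you accelerate."""
        return {"steering": steering_from_roll(roll_degrees),
                "accelerate": button_2_held}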


My six-year-old has won a few 50cc races, which was a surprise to me given that she has never shown a propensity for games that require quick thinking and fast action.  These wins have been genuine events in our household, attributable not only to her increasing-all-too-fast age, but also to the fact that turning a steering wheel seems like a pretty basic mechanic, even to her.  Combining the functions of an analog stick and various buttons is still a bit abstract for her mind, while turning a wheel is entirely logical and mechanical, and the confidence of knowing exactly what that wheel is supposed to do was enough to convince her that she could win.  And so she did.


This all may seem like fairly minor stuff in the grand scheme, and it’s true that the Wheel is not going to win you any tournaments the way the more traditional Nunchuk/Wiimote combo will.  Still, for casual players, children, and anyone else that Nintendo is trying to “bridge” to more serious gaming via Mario Kart Wii, the wheel is absolutely useful, and borders on essential.


And no, I can’t believe I just said that.


Tuesday, Apr 29, 2008
by PopMatters Staff

CSS
Rat Is Dead [MP3]

Robert Forster
Pandanus [MP3]

The Long Blondes
Guilt [Video]

Guillemots
Falling Out of Reach [Video]

Black Diamond Heavies
Bidin’ My Time [MP3]

Young and Sexy
The Fog [MP3]

Sarandon
Welcome [MP3]

Mike’s Dollar [MP3]



Tuesday, Apr 29, 2008

How is it that a song made of all the worst things possible is endlessly awesome? Contemplate this while enjoying Komar & Melamid’s “Most Unwanted Song” (link via Scott McLemee). The song was produced based on the results of a 1990s poll asking Americans what features they like least in music. Michael Bierut at Design Observer suggests the result is a triumph of design: “If working within limitations is one of the ways designers distinguish themselves from artists, America’s Most Unwanted Song is a design achievement of a high order.”


And naturally, the companion “Most Wanted Song”, assembled from the features the poll respondents liked most, is unlistenable. If American Idol is the future of pop music, this poll-produced contrivance suggests the future will be bleak indeed.


Tuesday, Apr 29, 2008

Clay Shirky, the technophilic author of a new book about spontaneous organizational behavior online, recently delivered this widely linked speech about how TV once managed to suck up the “social surplus” that is now being directed into building social networks and open-source applications and whatnot on the internet. He argues that the 20th century brought us more disposable leisure time, and it brought us TV to help us dissipate it.


Starting with the Second World War a whole series of things happened—rising GDP per capita, rising educational attainment, rising life expectancy and, critically, a rising number of people who were working five-day work weeks. For the first time, society forced onto an enormous number of its citizens the requirement to manage something they had never had to manage before—free time.
And what did we do with that free time? Well, mostly we spent it watching TV.



Shirky then explains that while Wikipedia took an estimated 100 million hours of human participation to create, American TV viewers spend that much time every weekend watching advertisements.
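

To make the scale of that comparison concrete, here’s a back-of-envelope sketch in Python using the figures Shirky cites in the same speech; all three numbers are his rough estimates, not measurements:

    # Scale comparison built on Shirky's rough estimates (not hard data).
    wikipedia_hours = 100_000_000            # ~100 million hours to build all of Wikipedia
    us_tv_hours_per_year = 200_000_000_000   # ~200 billion hours of US TV viewing per year
    ad_hours_per_weekend = 100_000_000       # ~100 million hours of ads watched each US weekend

    # One year of US TV viewing is roughly 2,000 Wikipedias' worth of attention.
    print(us_tv_hours_per_year / wikipedia_hours)   # 2000.0

    # A single weekend of ad-watching alone equals one entire Wikipedia.
    print(ad_hours_per_weekend / wikipedia_hours)   # 1.0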


Currently, with Web 2.0, etc., society is erecting an “architecture of participation” (a term he borrows from tech-industry consultant Tim O’Reilly) which will allow people to switch gears from passive consumption of TV to active participation in collective social projects—annotating maps and debugging software and posting lolcats and correcting misinformed bloggers in comments and that sort of thing.


It’s better to do something than to do nothing. Even lolcats, even cute pictures of kittens made even cuter with the addition of cute captions, hold out an invitation to participation. When you see a lolcat, one of the things it says to the viewer is, “If you have some sans-serif fonts on your computer, you can play this game, too.” And that message—I can do that, too—is a big change.
This is something that people in the media world don’t understand. Media in the 20th century was run as a single race—consumption. How much can we produce? How much can you consume? Can we produce more and you’ll consume more? And the answer to that question has generally been yes. But media is actually a triathlon, it’s three different events. People like to consume, but they also like to produce, and they like to share.
And what’s astonished people who were committed to the structure of the previous society, prior to trying to take this surplus and do something interesting, is that they’re discovering that when you offer people the opportunity to produce and to share, they’ll take you up on that offer. It doesn’t mean that we’ll never sit around mindlessly watching Scrubs on the couch. It just means we’ll do it less.


Shirky’s vision of the future sounds great. We all benefit when we contribute our spare time to projects that in theory benefit society at large. And when people are more attuned to their productive rather than their consuming capabilities, they are likely to be much happier, as displaying one’s ability to make things is a quintessentially human quality, and social recognition makes life worth living.

But some skepticism is in order. The notion of a social surplus sounds a lot like Bataille’s accursed share. Bataille argues that in a post-scarcity society, individuals need to come up with ways to destroy excess production through various modes of luxurious waste in order to sustain economic growth. From this point of view, wastefulness is intentional, a demonstration of wealth. Squandering the surfeit of free time on sitcoms seems a variation on this. Perhaps watching TV is not like drinking gin, as Shirky suggests, but like conspicuous consumption. It’s a triumph of our culture that we can waste entire evenings on Battlestar Galactica rather than, say, foraging for food. The point of free time is wasting it, not employing it productively in some other arena. An hour spent watching VH1 = luxury. An hour spent annotating a Google map = digital sharecropping. I don’t want to believe this, but empirical observation suggests that many people hold exactly this attitude. Most people aren’t looking for ways to be more productive; instead they tend to seek means to consume more. And if they are given reasons to believe their manic consumption is really a kind of production in itself, so much the better. The culture industry’s main thrust in the internet era has been to do just that: make consumption feel like participation, so we don’t feel bad when we don’t better avail ourselves of the “architecture of participation”. Chances are, this architecture will become more akin to a Playland at McDonald’s than the infrastructure for a social revolution.

In the quote from Shirky, a lot hinges on the inadequately defined word interesting. Likewise for his assertion that people watching TV are doing “nothing.” Some people will not be easy to convince that watching TV is equal to doing nothing, that it is not in fact “doing something interesting.” What the culture industry has traditionally done is not only mask the social surplus, as Shirky notes, but sell the passive squandering of it as a dignified social activity—watching “Must-see TV” becomes a way to participate in water-cooler conversations that occur mainly in our imaginations. The internet is a real ongoing conversation, one that opens us to risks (of embarrassment or irritation) as well as rewards. But the pretend conversation we have while passively consuming is 100 percent safe. We are also persuaded that participation and collaboration are more inconvenient than individuation and private consumption in isolation. In my life, this plays out as me playing chess against a computer when I could readily play against human opponents online. In my mind, playing a human is more rewarding, but in practice playing the computer fulfills my need for momentary, commitment-free distraction. Sharing and cooperation lead inevitably to compromise, and the main thrust of most advertising is to convince us never to compromise when pursuing what we want in our hard-earned leisure time.


In short, the marketing world and the culture industry at large invest a lot in making doing nothing feel like something, and for many of us, it does; we collaborate on the fiction with the marketers, and this curbs our inclination for the sort of collaborating Shirky talks about. Shirky does qualify his statements by noting that it takes only a few people changing their habits to produce a huge shift once the behavior is leveraged across the internet. And perhaps that is enough to prompt optimism. But the same forces that enable online sharing also enable deeper individuation, filtering, personalization and so on, discouraging cooperation on anyone else’s terms and limiting the usefulness of what is shared. They also extend marketing’s reach, enhancing the power of its messages about the joys of passive consumption and of fantasizing about identity rather than “doing something.”


The most difficult word to define, then, is participation—to a certain degree we can’t help but participate in our culture, and Web-style interactivity has become much more prominent in that culture. But with that prominence the phenomenon becomes domesticated, becomes a new way to absorb the social surplus harmlessly, becomes the new way to watch TV. Its fruits are trivial before the fact, because most people don’t want to see themselves as revolutionaries, and many want to luxuriate in flamboyant triviality. They are already absorbed into the status quo, which perhaps is subtly shifting along the lines Shirky suggests but is always in the process of disguising the change. The radical break that futurists and techno-evangelists are always predicting is always about to happen; it can never actually arrive.


Monday, Apr 28, 2008
The Zarathustran Analytics series continues with L.B. Jeffries' thoughts on player input.


Part of the reason this analytical method is named after Nietzsche’s Thus Spoke Zarathustra is to do justice to the individualized nature of player input: to put aside judging a game purely by its gameplay or plot and to go beyond that to analyzing the actual experience of the game itself. The problem is that although critics are quite capable of analyzing their own experience of playing a game, it is not so easy to apply that analysis to others. Indeed, this critical method is more an approach to assessing the experience-creating methods of a game than the individual experience itself. The player input, then, is literally your connection to the game; it is what keeps you interested and playing. To that end, when critically judging player input, you are looking at how the game and story react to your input and at the impact this has on the overall experience. Rather than go into the huge variety of ways games do this, we’ll analyze one of the more controversial player input methods prevalent in games today and use it to highlight the requirements of player input itself.

There has been a great deal of criticism of the silent protagonist in video games recently, and for good reason: they’re suddenly everywhere. Almost all of the top-ranking games of 2007 involve player characters who don’t speak: Gordon Freeman from Half-Life never utters a word, Master Chief hardly speaks, and Link does little more than grunt. It’s tempting to dismiss the feature as simply a cop-out on the part of the creators, and yet there are certainly games that have used the device effectively. Why does the connection hold in some games that never let the player’s character speak, while in others it supposedly breaks down?

