I’d been playing the long game. While steadily making my way through the early acts of Diablo II, I’d carefully hoarded gems and runes in my private stash, sorting them meticulously by type and combining and upgrading them when opportunities arose. On approaching the end of Act IV, knowing that a confrontation with Diablo himself was close at hand, I carefully selected the ingredients that I’d need to create a weapon imbued with the powerful properties of a runeword. After checking everything again and again, I finally took the plunge and inserted the runes into the appropriate sockets. Excitedly, I hovered the cursor over the newly runed weapon, but instead of the impressive stats that I had expected, I saw a mediocre blade, crippled by my foolishly having inserted the runes into a sword classified as “rare,” which invalidated the recipe. Damn it.
Pac-Man will die.
The space invaders will win.
Donkey Kong will get the girl. And you won’t.
Sorry about spoiling the endings of all those classic games. A warning seemed a touch superfluous.
Since viewing the Fine Brothers’ farcical video on game endings, 50 Nintendo Games Spoiled in 2 Minutes, I’ve been pondering the relative weirdness of video game resolutions. While the Fine Brothers’ video merely reveals the simplicity of a lot of game endings (yes, plumber saves princess), the seemingly simple plots of games of the mid- to late ‘80s really have nothing on the sort of thing offered by early arcade game “endings.”
Now, certainly the creators of Pac-Man, Space Invaders, and Donkey Kong really had no intention of creating a traditional plot for their games—something complete with introduction, conflict, and a final resolution (let alone much in the way of characterization or thematic interest). However, these games still contained rudimentary narrative elements (space invaders were apparently “invading,” and the Earth, or something, needed defense from them). These elements do provide the player with a vague sense of purpose besides merely racking up points.
Since the advent of games like Super Mario Bros. on the arcade (and home console) scene, the idea of a game being resolvable at all and playing a game in order to complete it or “see its ending” has become more the norm than anything else. It is with that in mind that I ponder the sheer cynicism of the bare narratives of arcade games of the early ‘80s.
Early arcade games were made to be potentially perpetual experiences, and ironically, looking at these narratives now makes one realize that the idea of being “unwinnable” has relatively horrific narrative consequence. Again, this isn’t really an intentional goal of game design, just an interesting notion to ponder in retrospect (specifically, from the vantage point of an era where storytelling has become almost intrinsically entwined with video game design).
Any player of the arcade Pac-Man knows the ending of the game before going in: Pac-Man will die; the ghosts will win. The “story” is designed that way. In other words, almost all early arcade games had unhappy endings. They spoke of doom and inevitable failure. Seen in this light, early arcade plots border on the nihilistic.
The only “reward” that might be possible for playing the game is not the thrill of Pac-Man completing his final maze to return home to Ms. Pac-Man and Baby Pac-Man; instead, it is the possibility of a kind of transient fame on an arcade machine scoreboard. You can “carve” your initials into the video game (at least until someone unplugs it for the night) and revel in your standing in the rankings. This kind of reward exists in a manner very different than the narrative resolution in most games now. It exists external to the game itself, whereas the idea of “beating” Super Mario Bros. by saving the princess is an accomplishment embedded in the world and narrative of the game itself.
Instead, the only thing embedded in the narrative (in terms of resolution) in earlier games is the same inevitability of failure embedded in the endless repetition of Donkey Kong’s plot (yes, you could get the girl back from Donkey Kong, but that “success” is always rewarded with a new level in which the monkey snatches her up again, so you have to start over—once you run out of lives, I guess the monkey wins). That repetition, broken only by regular loss, complements an intentional goal of the arcade machine, though. Early arcade games were about failure; it’s how arcade owners made money on them. The interest of the game was always in killing you, so that you (or the next guy in line) would slip another quarter in the slot. The more plays an hour, the more quarters an hour. They are games meant to extract from the player, much like gambling (be it machines or casinos). The allure of continual play in this sense rests on failure and the idea of possibly “improving” the next time. However, the story remains the same: the house wins, the space invaders crush the Earth’s defenses, the monkey gets the girl. Oh, and Pac-Man will die.
While Mario’s (or Jumpman’s) experience of being a perennial loser in Donkey Kong probably wasn’t meant to parallel the player’s experience of being the same (as they plunked down another quarter to fail the game once again), it is ironic that those experiences do run in parallel.
This cynicism of purpose in games is in stark contrast to the idea of actual win states for video games. One can see why adding a resolvable narrative as a way of concluding a game was almost an inevitability in the evolution of gaming. It is hard to continue wanting to play games with such a cynical outlook on the world, telling tales of losers and failure. While the “guy saves girl” plotline of most early narrative-focused games may have seemed trite and overused, it is a hell of a lot more motivating than the “guy can never, ever save the girl” of the early arcade era.
The multiple choice question may be one of the most despised games ever conceived. The purpose of a multiple choice exam is to exclude people in a quantitative manner, be it for admission into schools, licensing professionals, or limiting the number of high grades in a class. Assessing a person on their individual merits is a time consuming process, and once a school or class hits a critical mass of students, it isn’t economically reasonable to scrutinize all of them. Let’s say you’ve got 1,000 participants and five people reading their results. You can cut time and costs by figuring out a way to neatly get rid of 500 because they scored under a certain amount. A multiple choice exam cannot be so difficult that you exclude an excessive number of applicants. Most law schools, for example, set a minimum LSAT score below which you are automatically denied and a high score above which you are automatically accepted. Applicants in between those scores are then addressed on an individual level, and other factors are introduced. The problem with such a system is that to ensure a multiple choice test produces the right number of passing scores, you have to keep changing the questions.
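The cutoff process described above can be sketched in a few lines of code. This is a hypothetical illustration with made-up scores, not any school's actual procedure: rank the pool, take the score of the 500th-ranked applicant as the cutoff, and exclude everyone below it without individual review.

```python
import random

random.seed(0)

# Hypothetical pool of 1,000 applicant scores on a 120-180 scale (LSAT-like).
scores = [random.randint(120, 180) for _ in range(1000)]

# Rank descending; the 500th-ranked score becomes the cutoff.
ranked = sorted(scores, reverse=True)
cutoff = ranked[499]

# Everyone at or above the cutoff survives to individual review.
# (Ties at the cutoff can push the survivor count slightly above 500.)
survivors = [s for s in scores if s >= cutoff]
print(cutoff, len(survivors))
```

Note that ties at the cutoff score mean the surviving pool can be a bit larger than the target number, which is one practical reason the questions (and thus the score distribution) get retuned between administrations.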
An article by the National Center for Fair and Open Testing explains, “multiple-choice items are an inexpensive and efficient way to check on factual (‘declarative’) knowledge and routine procedures. However, they are not useful for assessing critical or higher order thinking in a subject, the ability to write, or the ability to apply knowledge or solve problems” (“Multiple-Choice Tests”, Education.com). It’s for that reason that a multiple choice question is always limited in scope: it can only be about basic knowledge of a topic. The formula is to have two answers that are blatantly wrong, one that is kinda right, and one that is the most right. One instructor during a review session for the bar exam pointed out that on average a student will immediately know the correct answer to 25% of the problems, have no clue on 25%, and be able to boil the other 50% down to the right and kinda right answers. So the way that you evaluate the difficulty of a multiple choice question is by how similar the right and kinda right answers are. A person who can’t boil it down to those two doesn’t know the basic material and shouldn’t pass.
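The instructor's 25/25/50 rule of thumb implies a concrete expected raw score, if we assume (my assumption, not the instructor's) a four-option exam where the student guesses uniformly among whatever options remain:

```python
# Instructor's breakdown: 25% of questions known outright, 25% pure guesses
# among four options, 50% narrowed to the right vs. kinda-right pair.
p_known, p_clueless, p_narrowed = 0.25, 0.25, 0.50

expected = (
    p_known * 1.0        # known outright: always correct
    + p_clueless * 0.25  # blind guess among 4 options
    + p_narrowed * 0.5   # coin flip between the two plausible answers
)
print(expected)  # 0.5625, i.e. an expected raw score of about 56%
```

Under this model, the entire spread between a passing and a failing student lives in that 50% of questions, which is exactly why the difficulty knob is how similar the right and kinda right answers are.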
This week we have a special interview for you. We’ll be doing more interviews as time goes on, talking with people who aren’t just gamers but who’ve been genuinely moved by their love of games.
Holly Conrad is a costume designer and an avid gamer. And by costume designer, I mean sculptor, engineer, seamstress, and designer. She recently became a minor internet celebrity with her audition video for a Joss Whedon-produced documentary about Comic-Con, in which she showed off the impressive set of Mass Effect 2 costumes that she’s been creating.
A couple weeks ago I wrote about how giving shooters a real world context could make their violence feel more real and less like mindless entertainment (“Why Do I Cheer For War?”, PopMatters, 9 July 2010). So I was very interested in trying out the Medal of Honor multiplayer beta because the game seems very committed to its realistic setting, separating players into teams of US forces and Taliban soldiers. I was curious to see if fighting against the terrorist group and not just vague “insurgents” would add some kind of poignancy to the common emergent stories of multiplayer shooters.
This did not happen. All poignancy is lost within the strict rule set of a competitive online game. In fact, it’s specifically because it’s competitive that the game part of the experience takes precedence over everything else. While not surprising, this tendency does expose the inherent limitations of storytelling in multiplayer games. You can’t tell a story in a competition; the message gets drowned out. That’s why most emergent stories that come out of multiplayer games are really just “cool moments.” There’s no narrative arc in a match, no rising and falling action, no climax, and it seems impossible to accomplish any of that—until you play Left 4 Dead or its sequel.