As a game reviewer (and maybe this is true even more so than for any other form of criticism), you can never quite shake the fear that you’re the only one who doesn’t like a game. Or, conversely, that you’re the only one who could ever enjoy the twisted thing. While something like New Games Journalism (“The New Games Journalism”, Popmatters.com, 18 June 2009) attempts to articulate the individual experience, the hazard with a game review is that your experience might ultimately be too unique. A reviewer who has played every single FPS released in 2009 will be impressed by nothing short of the second coming. A reviewer who has been a fan of every Bioware RPG will probably figure out a game’s systems much more quickly than someone who has never touched one. Review sites like IGN or Kotaku mitigate this problem by breaking things into categories like Story, Presentation, Likes, or Dislikes, but these are hardly objective standards. It’s easy to dismiss technical critiques like bugs or load times as irrelevant to a game’s value, but bringing them up still has merit. What can be gained by approaching a game review from a more technical perspective than fun factor or story? Looking at a game from a technical perspective really just means treating games as experience-generating machines instead of as experiences themselves.
I’m not talking about just rattling off stats; I mean applying a technical methodology normally used to test for things like bugs to gauge the value of the game itself. If a game is the space between design and content, then topics like feedback, the distribution of load times, and accessibility to new players are important factors. How much of a beating can a game take if you play badly? A Gamasutra article by David Wilson on QA styles highlights several interesting testing methods. The ad-hoc style is one in which the QA tester is constantly screwing with the system. If they spot a hole, they try to jump into it. If they see a weird nook in a fence, they plow into it with the strongest attack. The article explains, “This is where ad-hoc testing becomes an art: finding things that the end-user may attempt that the developers haven’t planned for” (“Quality Quality Assurance: A Methodology for Wide-Spectrum Game Testing”, Gamasutra, 28 April 2009). A more reasonable test for a reviewer is one in which the tester lets the screen fill up with monsters and then tries to save or perform a move that will tax the hardware to its limit. If it’s a mission in which you are supposed to be following an NPC, what happens if I turn around and go back to the start of the level? If I’m supposed to be guarding an NPC, does friendly fire hurt them? Explosives? How many bullets does it take before they drop? The purpose of these tests is to counteract the fact that, as a game reviewer or experienced player, you might never run into these problems on your own. Rather than stop at “I found this easily”, you can say, “The NPC can only take five bullets and will stop moving at key intervals if you forget them”. Consider the last level of Half-Life 2: Episode Two. Anthony Burch points out in an article for The Escapist (“String Theory: The Illusion of Videogame Interactivity”, The Escapist, 31 March 2009) that the whole level is an elaborate feedback system.
It’s designed to keep you right on the brink: based on your health and location, X number of spider tanks will come after the base. As more games revolve around adapting to player input to perfect the player’s experience, spotting the edges of the system can only be done by doing some serious poking around.
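To make the idea concrete, here is a toy sketch of the kind of adaptive director Burch describes. Everything in it is invented for illustration; Valve’s actual system in Episode Two is not public in this form, and the function name, thresholds, and wave sizes are assumptions, not the real logic.

```python
# Hypothetical "director" sketch: scale the next attack wave by how well
# the player is doing, so the fight stays on the brink without burying them.
# All numbers and names here are made up for illustration.

def spider_tanks_to_spawn(player_health: int, distance_to_base: float) -> int:
    """Return how many attackers the next wave should contain."""
    wave = 3  # baseline pressure for a healthy player near the action
    if player_health < 30:
        wave -= 1  # ease off when the player is nearly dead
    if distance_to_base > 500.0:
        wave -= 1  # fewer attackers while the player is far from the base
    return max(1, wave)  # never drop the pressure entirely
```

A reviewer poking at the edges of a system like this (hanging back at full health, rushing in at low health) is exactly what reveals that the difficulty is being tuned around them.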
From Disney’s Cars: The Game
Another way to test a game’s system is to take a much harder look at the feedback that it provides the player. When you screw something up, what does the game tell you? Going back to the escort mission question: once you get lost, does the game do anything to get you back on track? Is there a map or compass? The difference between a good feedback system and a bad one is that with a good one, the player always understands what they did wrong. There should never be a moment where they just drop dead for seemingly no reason. Games handle this issue in a number of interesting ways. Another Gamasutra piece, this time by Bruce Phillips, focuses on successful feedback systems. He writes, “The most common are the in-game hints players get at appropriate moments, such as after a death. The Call of Duty series has been doing this for a while, leaving a message for players when they die from a grenade in Modern Warfare, for example, or are stabbed in World at War” (“Staying Power: Rethinking Feedback to Keep Players in the Game”, Gamasutra, 27 October 2009). Cutting to where the enemy player was hiding in multiplayer after they shot you helps you understand how you got caught and keeps you from ever feeling overwhelmed. A reviewer pushing that feedback system by misbehaving within the confines of the game might be a more effective way to see how a different player would react. How well does the game report your errors and teach you to adapt?
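The pattern Phillips describes boils down to mapping each cause of death to a corrective hint. This sketch is purely illustrative: the causes and hint strings are made up, not taken from any actual Call of Duty implementation.

```python
# Illustrative sketch of a death-feedback table: each cause of death maps
# to a short hint that teaches the player what went wrong. All entries
# here are invented examples, not real game data.

DEATH_HINTS = {
    "grenade": "A grenade indicator appeared; move away from it or throw it back.",
    "stabbed": "You were knifed from behind; watch your flanks when holding a position.",
    "sniped": "The killcam shows the shooter's position so you learn the sightline.",
}

def feedback_for(cause: str) -> str:
    # Fall back to a generic message so the player never dies "for no reason".
    return DEATH_HINTS.get(cause, "You died. Watch the killcam to see what happened.")
```

The key design point is the fallback: a feedback system fails precisely at the deaths it has no explanation for, which is why a reviewer should die in as many weird ways as possible.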
A lot of what I’m describing is basically what you do when reviewing something like a car (Peter Johnson, et al., “How to Write a Car Review”, Wikihow, 27 October 2009). You don’t just drive on paved roads; you take it down some dirt roads and maybe slam the brakes at high speed a few times. Maybe you even take someone for a ride and see if the passenger side is fun. Applied to games, this means that if a game has co-op, you should do your best to play it that way. For example, Chris Dahlen made the interesting observation that although Demon’s Souls is a great RPG, it’s also really boring to watch (“The Most Boring Game of the Year”, EDGE Online, 4 November 2009). Because you repeat levels a lot and die often, it makes for dull viewing compared to a game like God of War or Street Fighter IV. That might seem inane to some gamers, but if you live in a house where you have to share the common area, it’s a factor. Observations like how entertaining a game is to watch can only come about by playing with other people.
Sometimes when I have a game and company is over, I’ll have my guests play the opening sections and just watch them. Where do they get stuck? What controls do they not grasp? Often these are things that only take a few minutes to figure out, but it’s always interesting to see where they get stuck and where I do not (or whether they even felt stuck at all while it was happening). The more reviews I write, the more I find myself relying on troubleshooting techniques with games. After all, when I write a review, I’m not really trying to share the personal experience that the game generated. I’m trying to talk about how well the machine makes that experience come together.