Morality is conditional. There is no way to determine ahead of time what the appropriate moral decision is for a given situation. A truly complex decision is woven with so many threads and contains so many competing needs that a truly right path may be too difficult to follow or may not even exist. As we go through life, we are confronted by thousands if not millions of choices every day. Most end up being choices of no consequence. For instance, a person walking down the sidewalk who observes an insect on the path is presented with the choice to step on it or not. So small is the choice that it probably doesn't even enter the person's mind.
Generally, when we call something a choice, we speak of those moments that seem to possess potential consequences and that require conscious thought when considering them. They may be small things, like what to order off the menu, or (along the same lines) what car to buy. What we call choices are things that we stop and think about, weighing whatever considerations we feel necessary and then picking an option that seems reasonable.
I bring this up because I was going to respond to a pair of comments on last week's post by elaborating on my exploration of the "Auntie Greenleaf" choice in The Wolf Among Us. But as my response grew longer, and Firefox eventually ate it anyway, I figured a follow-up post would be better.
When we talk about video games, we don't seem to have the same understanding of what a choice is as we do in other media or even in real life. Thanks to the incessant marketing over the last half decade or so about games featuring "moral choices," the very idea of making a moral choice in a video game has become decoupled from the actual concept. Video games require players to take a lot of actions during their play time, and it is difficult to code individual situations and responses for every encounter where action is required. So instead designers systematize the process.
When I spoke of the poor implementations of moral choices along a binary divide of good and evil, there were many games I could have been referencing. Fable, Knights of the Old Republic, or really any Bioware game are all suitable examples. But the one that I had in mind when I wrote those words was the inFamous series. Each of the aforementioned games features a bar or graph somewhere that shows how far along the good path or the evil path the player is during a particular playthrough. inFamous goes the extra mile and renders its morality marker as a glowing colored bar at the top of the screen. The game judges your play behavior against a predetermined set of parameters and then indicates whether you should be granted good points or evil points. You aren't really being asked to make multiple choices across the game, though. You are being asked to make a single choice at the beginning of the game and then to go through the motions for the rest of it. And that choice is "which ending do I want to see this playthrough?"
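To make the systematization concrete: the kind of point-tallying these games perform boils down to a lookup table. Here is a minimal sketch of such a meter. The action names and point values are invented for illustration; this is not the actual implementation of inFamous or any other game.

```python
# A hypothetical sketch of a binary karma meter like the glowing bar
# inFamous shows on screen. Actions and point values are invented.

class KarmaMeter:
    # Predetermined parameters: each recognized action maps to a point delta.
    ACTION_POINTS = {
        "heal_bystander": +5,
        "spare_enemy": +10,
        "hit_bystander": -5,
        "drain_civilian": -20,
    }

    def __init__(self):
        self.karma = 0  # positive = good, negative = evil

    def judge(self, action):
        """Judge a play action against the predetermined table."""
        self.karma += self.ACTION_POINTS.get(action, 0)

    def alignment(self):
        if self.karma > 0:
            return "good"
        if self.karma < 0:
            return "evil"
        return "neutral"

meter = KarmaMeter()
meter.judge("heal_bystander")
meter.judge("hit_bystander")
meter.judge("spare_enemy")
print(meter.karma, meter.alignment())  # -> 10 good
```

Notice that nothing in this system asks the player anything; it only sums the consequences of actions the player was going to take anyway, which is exactly why it reduces to a single ending-selection choice.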
This systematized approach is what players have been force-fed as an understanding of what moral choices are within video game systems. Even as players over time have rejected it in reviews, in forum posts, and in social media threads, it has still seemingly altered our understanding of what it means to make a choice.
Whether to burn Auntie Greenleaf's tree in The Wolf Among Us was not revelatory because it changed the binary from good vs. evil to law vs. anarchy. Thinking of it that way ignores the actual complexity at play and the various contending considerations in that scenario. I explained those in some detail in last week's post. Furthermore, the choice in question upends several of our preconceptions regarding a simple divide of law on one side and chaos on the other.
While there are two outcomes for the tree itself -- it either stays intact or burns -- there are three options available to the player. You, as Sheriff Bigby, can burn the tree, let Auntie Greenleaf and her tree remain, or essentially conscript Auntie Greenleaf and her tree into government service, making her operation legal. Because of the way the situation has been constructed, and the circumstances of the fictional reality of the fables' world, none of these is a particularly wrong answer. Or as Peter Shaffer puts it, "Tragedy, for me, is not a conflict between right and wrong, but between two different kinds of right." That is a real moral choice because it determines what you, the individual, consider moral.
Dungeons and Dragons complicates the idea of a morality system by creating a two-axis system: one axis of good and evil, the other of law and chaos. But that simply presents more holes in which to be pigeonholed. After all, the most ignored line in the Dungeons and Dragons manual is "Always consider alignment as a tool, not a straitjacket that restricts the character." It can't be helped. Games are sets of systems, and borderline cases or exceptions don't work well in a place where systems rule the day. The play is supposed to bring out character informed by a general worldview, not make the player beholden to it. Poor is the hero that makes their decisions against a predetermined checklist, and yet that is what happens when a system is put in charge of every situation. It's even worse in the digital realm, where such a true/false checklist is exactly how computers function.
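The straitjacket problem can be shown in a few lines. Once alignment lives in code, "is this in character?" becomes a yes/no lookup with no room for borderline cases. The alignments and action lists below are invented examples, not rules from any edition of the game.

```python
# Hypothetical sketch of two-axis alignment used as a checklist.
# D&D intends alignment as a tool; a program can only treat it as a
# predicate that answers True or False. All entries here are invented.

# The "straitjacket": a predetermined table of in-character actions
# keyed on (ethical axis, moral axis).
PERMITTED = {
    ("lawful", "good"): {"uphold_contract", "spare_prisoner"},
    ("chaotic", "good"): {"spare_prisoner", "break_unjust_law"},
    ("lawful", "evil"): {"uphold_contract", "execute_prisoner"},
}

def in_character(ethical, moral, action):
    """A computer can only answer True or False; no exceptions,
    no 'it depends on the circumstances.'"""
    return action in PERMITTED.get((ethical, moral), set())

# A lawful good hero may never break even an unjust law,
# no matter how the situation is framed.
print(in_character("lawful", "good", "break_unjust_law"))  # -> False
```

The two axes add nuance to the labels, but the decision procedure is still a checklist: every action either matches the predetermined set or it doesn't.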
The Wolf Among Us created a situation where a choice had to be made, but lacked any game response outside of the specific situational narrative construct. Let's use an example outside of a Telltale game to further illustrate what I mean.
Gunpoint is an indie game that came out in 2013. You play as a hardboiled detective in a pair of high-tech inflatable pants. Over the course of the game, you are given missions by various clients to sneak into buildings and retrieve or delete data from a secure computer. In your way is a series of complex security systems that you can rewire with your expert hacking skills. At the beginning of the game, you find yourself near a building where a company executive has been murdered, and you manage to get framed for the crime. Many of your clients have been giving you your various jobs in an effort to get ahead of the others. There are some optional missions that you can miss, but at the end of the game, you are presented with a choice between two different clients. They pay the same, their jobs take place in the same building, and they give the player the same justice they have been working towards. The main difference is who officially falls for the original murder. Choose Rooke and the truth never gets out, and her ex-husband, Jackson, goes to jail. Choose Jackson and the truth gets out, but the man who did the deed gets away scot-free. Functionally, the only difference between the two is an extra man outside the executive's office. In either case, you take down the executive who ordered the hit in the first place.
The two options manage to mitigate the various competing angles by offering the exact same things. This reduces the choice not to which option is right, but to what you, the player, value more. You can't bring everyone to justice. Either the hitman gets away or Jackson is put in jail for a very long time for a murder he didn't commit. Or as Gunpoint is nice enough to put it: The Truth vs. The Killer. It's not a question of right or wrong, but a question of priorities. The player is offered up two rights and asked to make a choice between them. Maybe what's running through your head isn't the philosophical implications of Kantian truth, but which of the two people you like more from your conversations with them. The morality here isn't based on abstract rules, but on the individual player -- what they would do and why is up to them.
I spent almost as much time on that menu screen looking at the two job offers as I did in the actual level. I just sat there weighing them against one another. It took the entire game to properly set up the situation for this choice to mean something. The circumstances were specific to this case. The truth of the matter is that systems that attempt to simulate moral choice don't fail because they apply rewards of money or powers to the options but because they don't properly set up any context or complexity for individual situations. Without any background, choices have to be as broad and bland as possible so that the player can understand them.
I think of my time playing inFamous. After fighting my way out of some random encounter in the street with some super-powered homeless men with guns, I had accidentally hit some innocent bystanders in the crowd. Suddenly I was no longer at maximum good. As I'm running to my next destination, I jolt fallen people back to life, earning a few good points in the process for each one. I'm not doing this because it is the right thing to do; I'm doing this to fill up my meter and get my ultimate power back. But what really gets me are all the people I passed by who needed a jolt to get back up and never got one, because I had already maxed out my goodness meter. I left them where they lay. A callous and rather wicked decision to make, prioritizing my own efficiency in moving on over their lives. But the system never once calls this evil. Then again, how many people in need have you walked by in the street without giving it a second thought? In inFamous, the opposite of jolting people back to life like a human defibrillator is sucking out their bioelectric energy and killing them: broad, easily and unambiguously identifiable choices whose actions can be called good or evil without complexity or nuance, because there wasn't the time to set them up.
Of course, all of this brings up the bigger question of how a video game should respond to the player's choice. Rewards of money and ability are clearly the wrong way to go, but is it wrong to provide any response that might make the player make a meta choice in lieu of an actual moral decision? Should the player's own emotional state be consequence enough?