How Artificial Intelligence Views Humans in ‘The Fall’

We are faulty, we must be appeased, we are destructive, and ultimately we are self-destructive.

In my last post, I explored our views, as humans, about artificial intelligence and our contradictions in holding those views. I mentioned the three main players of the game The Fall, all machines, and the ways in which they emulate humans. We have made machines in our own image to one degree or another, and the game itself seems to represent our fear of the machines claiming that emulation of ourselves. Instead of following our orders and having their choices and identity dictated by our control, the machines move beyond their simple base parameters and try to become their own beings.

I’d argue that only ARID, the protagonist of The Fall, succeeds in attaining such consciousness by accepting the most human trait of all: self-contradiction. But even the limited emulations of humanity by the System Administrator and the Caretaker offer insight into what they see in us: what an intelligence alien to our own thinks of human behavior, revealed through what they mimic in us and how they mimic it.

Of course, that’s not really true. The Fall wasn’t created by a machine, one with thoughts and ideas of its own. It was created by people writing from an imagined point of view. So really it’s a filtered look at what these writers and developers think of human behavior.

The three principal actors of the game view humans very differently.

The Caretaker views humans as both his masters and as fundamentally faulty. Whether it was a malfunction on his part or some sloppy programming, he does not see any difference between machine and man. A human who refuses to do his job, say a janitor who doesn’t want to go near some alien fungus to clear out the air ducts, is faulty and must go in for repairs. That the process of cutting open the cavity to search for the problem is lethal to humans, meaning they will never function again, is not a consideration for him. He is a complete slave to his programming, which doesn’t include understanding the difference between robots and living things in a robot shape.

His attempts at emulating humanity are a product of pure programming. On our end, this is supposed to make him more comforting to us, even if it has the opposite effect. To the Caretaker, his evaluations are simply the result of a subroutine to be followed. When doing janitorial work, look like the janitor. When doing engineering work, look like one of the engineers. It’s about putting on a new skin to match the job. He still does this when all of the humans are dead and gone. Yet, one of the points where he turns on his human skin demonstrates his belief about humans. During the confrontation between himself and ARID, he turns on this skin of a human when his authority over the facility is challenged. In turn, he deactivates the skin when he accepts that the alternative of repurposing is valid and must be attempted. The Caretaker associates humans with hierarchy. Looking human is how he asserts his authority. He judges humans to be above him. Yet, even higher than the humans that rule him is the authority of dictated function, to which the humans are also subject. It is much like the way a feudal peasant views his king: above him, but with both of them subject to the higher authority of God.

The System Administrator has far more personable feelings about humans. Unlike the Caretaker, who willingly follows a strict set of protocols that he holds valuable above all else, the System Administrator wants to behave more like a human. He alters his speech patterns to be more conversational, and his creativity comes through in his attempt at creating a face using limited approved visual building blocks, i.e. the company logo, and in his idea of circumventing the strict protocols adhered to by the Caretaker by reclassifying ARID. Yet, in these more presentational human qualities, there is a failure. Recordings of systems information break in during his conversations. He cannot deliver this information with his own voice, and while he can get around the straightforward readings of certain protocols, he must paint between the lines. He is, in the end, as much a slave to them as the Caretaker is.

But what is most important is why the System Administrator says he tries to emulate humans. Echoing what I think about our own inherent fear of our creations being out of our control, he wishes to placate us. He doesn’t want to be feared, but accepted. If the humans don’t like him, they won’t accept him, and as a result, he will be repurposed or replaced. Despite being the all-seeing overlord in charge of the functions of the facility, he seems helpless. This is proven true when the Caretaker takes advantage of him and reformats him. The System Administrator views humans as powerful tyrants to be appeased, but not feared, much as a child views the adults around them. There is also an element of him wanting respect. He says at one point, “Because. The closer we get to them, the more we get treated like them.” While he recognizes he is not on the level of human beings, he wants to be, and he is bettering himself to reach an ideal, as a child might when growing up into an adult.

Then there is ARID, who seems to have little to say about humans. Her view of humans comes into focus mostly by contrast, through opposing the views of the other two artificial intelligences. She is subservient to humans (mostly to her pilot) and to her programming, but understands that there is a difference between herself and her pilot. Unlike the Caretaker, she can make the distinction between machine and human. And while she seems to be more personable with humans, she does not idealize them like the System Administrator does. Her relation to them, outside the emotions that we feel and may want to place upon her, seems almost a means to an end. She has a goal that involves humans and the boundaries meant to protect humans from her, but she doesn’t seem to possess a full view of them. All of her language is centered on herself. “I must save my pilot.” “I must complete my mission.” “I didn’t make it in time.”

Though, what really differentiates her from the other two is the fact that she becomes conscious. Remember, we aren’t viewing humans through the lens of what machines think of us, but what a human author thinks of us. The story that we are playing out is one of achieving a human-like self-aware consciousness. To do so in the terms of the fiction is to become human in mind, if not in body.

However, before we get to that point, one of the most interesting moments to reflect on is the incident when ARID has to power the water pumps to clear out the lower levels. She is presented, not with a choice, but a single means by which to accomplish her goal. All of the robots that have been serviced and repurposed are hooked up in a bay waiting to be shipped out. She can use their combined power to jumpstart the pump’s generators, frying the robots in the process. There is no choice to be made here. To continue the story, you have to push the button. ARID doesn’t think about the robots. They are of no concern to her. She knows the difference between a human life and a robot life. Tens of thousands of robots for one human: the scales don’t come close in her mind. And they didn’t in mine either.

These robots have no future and no function in the game beyond this end. They will rust in that hangar, turned off and unaware, unthinking, forever. I pressed the button and never thought about its implications. I thought it an odd moment to include, especially given how the System Administrator frames it. He phrases it the way a video game would a moral choice. In one direction lies progress and the commission of “genocide”; in the other, the robots are left untouched, but the pilot is doomed. The tone of his voice makes it feel like another option exists even though it does not. His framing lends gravitas to a prompt that hasn’t earned it. Neither ARID nor I felt the weight of the action there, because I’m human and robots aren’t human.

Humans have the ability to abstract and then categorize information in a way that computers do not. After six decades of work, there still isn’t a search engine that can determine what is in a photograph. The ones we have know to look at the text around a photo for context. More relevantly, we can recognize when something isn’t human even when it comes close to appearing so. In fact, the closer it gets to appearing human, the more we recognize that it isn’t. We call it the uncanny valley. And more direly, we can recognize small, insignificant details about someone who is human, and those same details allow us to mark them as different from us.

We have the amazing ability to apply these categorizations to other people, considering ourselves as “normal” and our behaviors as acceptable. They, however, are not, and, thus, their behavior is “wrong.” We would never perpetrate a crime against “us,” but they are different and aren’t “us.” They are “them.”

Humans are “us” and robots are “them.” I never paused and thought about that during this scene in The Fall. Of course, there is a recognizable difference between these robots and the humans that both worked at the facility in The Fall and those that suffer in real life. For instance, robots can be brought back online. They don’t have individualized thinking minds and can be infinitely replicated. Yet, the System Administrator feels a kinship with his fellow machines. It’s understandable, if philosophically debatable, that I don’t. What’s ARID’s excuse? Is she more human than robot?

ARID is self-centered, bullheaded, hotheaded, othering, rationalizing, violent, murderous, and ultimately a failure. If this is what it takes for a machine to become human, maybe we should be frightened of our own creations, especially those that learn too well. Thus, I wonder if, in a way, humans shouldn’t feel a bit insulted by what The Fall defines as human. We are faulty, we must be appeased, we are destructive, and ultimately we are self-destructive.

But while The Fall was written by human authors and designers, they have done so from the point of view of the machines that they created to tell this story. All three of the principal actors have their views of humans dictated by their own functions. The Caretaker is an administrator and thus views humans through the lens of hierarchy and how they fulfill a function. The System Administrator, who more often than not acts as a community manager, is a communicator dealing with issues, often at a disadvantage, subject to his bosses, and serving as the bulwark against the customers. It’s not hard to see him viewing humans as the powerful in need of placating. Finally, ARID, a military AI, sees humans as destructive and dedicated to a higher duty, the mission. From this perspective, her behavior makes perfect sense.

Yet, with all things ARID, that is not the whole story. This is her view of humans when she followed her programming, when she had programming to follow. Where does she stand now and how does she see us now that she is like us? That is the question.