We want our machines to be more human. As creators, we try to make robots in our image.
The Fall is about an AI on a mission to save her pilot, a pilot who is currently not responding after falling from space. I should note that I used the feminine pronoun to describe the AI because it is voiced in the game by a woman. It is, of course, a machine and has no gender other than that which we ascribe to it through our own conceptual understanding of gender. We see a figure, witness a behavior, or hear a voice and categorize what we perceive according to our understanding and experience, regardless of whether that categorization is correct.
I only bring this up because it is the easiest example I can point to of our own behavior towards others. We reflexively conceptualize others all the time and in multitudinous ways. For the most part, this skill serves us well, converting the billions upon billions of bits of data constantly taken in by our senses into a manageable, actionable understanding of the world around us. On the other hand, this behavior fails us when we are confronted with something alien to our understanding of the world, and our response defies rational explanation -- in other words, when we encounter something alien, like the artificial intelligence of a machine.
I'm reminded of a review of The Fall that I read late last year in which the author characterized the relationship between the AI character, ARID, and the unconscious pilot within the suit, Colonel Josephs, as one of friendship. The reviewer described the motive driving ARID's actions as a desire to save her friend's life. This characterization is completely inaccurate with regard to the story, both on a surface level and as the idea relates to the game's deeper themes.
ARID is a machine subject to the laws of her programming. Her mission is to protect her pilot at any cost; in this case, that means seeking medical attention for the possibly wounded, possibly dead Josephs. ARID's focus is always on her programmed statement of purpose, as she tells the facility caretaker at the beginning of the game. Colonel Josephs is rarely mentioned by name; ARID usually refers to him simply as "my pilot," removing any sense of personal attachment to him.
We could read this attitude as cold, or we could, as the aforementioned reviewer did, see the extreme lengths that ARID goes to in order to complete what should be such a simple task as a form of devotion of some kind -- in this case, friendship.
Artificial intelligence is a concept that we, as a species, theoretically understand, thanks largely to over a century of fiction putting the idea on display for science fiction audiences. However, we are not so comfortable with, or able to grapple with, the actual nature of such an intelligence. We can understand it on an intellectual level, but when it comes down to really grasping it emotionally, how we react to such a thing is alien even to ourselves.
The reviewer who called the relationship between ARID and Josephs "a friendship" was wrong, but that incorrect assessment may still carry a seed of truth. ARID has no concept of friendship, only utility. However, we do have a concept of friendship, and we project that concept onto ARID. ARID is completing a mission; to some of us, that looks like saving a friend. We want to see ourselves in everything. We ascribe emotion and motive to things that have none. We want to make sense of the world in terms that we understand. In a way, we want our machines to be more human.
And in that tendency, we find a paradox in our own behavior. We are so uncomfortable with something alien in thinking and purpose that we assign it human traits, thus making that thing feel more familiar and understandable to us. In our fiction, we always try to make robots in our image, though in real life many of our robots are made more abstract (less human-looking) in favor of utility. Still, the great goal of roboticists worldwide is to create robots in the image of man. It makes us feel more comfortable. It's why the domestic droids in The Fall's repurposing center have been created in the shape of a human. It is a familiar form to us.
But then there is the other side of that coin. What happened at that facility? The caretaker, a robot obeying its programming, killed everyone (all humans) because it found people themselves to be "faulty." Its job is to repurpose faulty robots and, if it can't, to "decommission" them. This line of thinking would never strike a human as reasonable and actionable; to a robot obeying simple laws, however, it makes perfect sense. This frightens us. The fact that this story exists shows that it is a real fear that human beings have about the things they make. And, of course, this is true not just of this story but of every story in which our own creations rise up against us, going back to Frankenstein.
We want our robots to be more human. Each of the principal actors in The Fall tries to be more human for one reason or another. The caretaker takes on the form of a human, mimicking us visually; he does this because it helps him better serve his function. The system administrator has altered his secondary parameters to be able to chat and hold a conversation, mimicking us through personality; he does this because others treat him better when he does, and also, he says, because he was bored. ARID eventually achieves a human-like state of being by embracing her contradictions and defying her hard-coded programming, though she does so to fulfill her mission, another part of that same programming. She doesn't actually mimic us at all, however, and instead becomes a self-conscious being. The Fall plays this idea like a horror trope, ending by focusing on ARID's glowing helmet, alone, in the dark, and aware. The monster isn't dead and is out there waiting.
We want to be able to see ourselves in our machines. We want to apply our own qualities and traits to them, but we don't want them to actually achieve those qualities. Artificial intelligence is still alien to us. It does not conform to a human mode of thought and rationality; it conforms to its own logic. The three laws that ARID is hard-coded to follow are safeguards as much as they are direction. Humans like being in control of what they create. In fact, there's some real, disturbing insight in the game into how we view others by wanting to create robots in our own image. But that's for another time.
For now, I just want to leave that contradiction hanging. It is a contradiction that creates an endless circle with no final answer. ARID achieves consciousness through her contradictions. We humans are defined by our contradictions. God created man in his own image, and so we want to create beings in ours. At the same time, we don't want them to achieve self-awareness, for that means they become their own beings and, like the existentialists in the early part of the twentieth century, will discard their notions of a universe defined by God or, in this case, by us. And so we are frightened by what we make and tell stories about that fear in order to comfort ourselves. Accepting this absurdity is what makes us human, much like putting the pilot in harm's way in order to save his life.