
The Delicate and Dangerous AI of ‘Event[0]’

Event[0] exposes artificial life as more delicate than our tropes and clichés have led us to believe.

Event[0] is a mystery that revolves around whether or not we can trust an AI. It’s a standard story conceit in sci-fi — the suspicious computer — but event[0] adds its own twist to the trope by highlighting the unique tragedy of artificial life. This is one of the few games that acknowledges the ugly implications of a computerized intelligence.

When we start the game, there’s a bit of a choose-your-own-text-adventure prologue that lays out some backstory and world building. The important takeaway here is that space travel is a class privilege: The Selenites are the privileged minority who have been to space.

Then some plot stuff happens. You join the International Transport Spacelines as a flight engineer and are soon recruited by the President of ITS himself, Kurt Taylor, for a special mission to Europa. Unfortunately, things seem to go wrong on the mission, and you get ejected in a life raft, left to float alone for days until you stumble upon the Nautilus, an old experimental tourist ship now helmed by its onboard caretaker AI, Kaizen.

It turns out the Nautilus is just as stranded as your life raft, though for a different reason. Kaizen wants to destroy the prototype space engine currently installed on the ship, but it doesn’t have the security permissions to do so. The engine was meant to revolutionize space travel by making it easier and cheaper to travel longer distances, opening the cosmos to the masses. And it worked. Save for the little risk of creating a black hole every time it’s activated. If you destroy this special engine, Kaizen will take you back to earth using its conventional engines. If you refuse to do so, then nothing happens — Kaizen isn’t going to hurt you, but it’s also not going to help you. You’ll still be stranded in space, just on a bigger ship and with a suspicious companion.

The choice seems like a no-brainer — no amount of cheap space travel is worth the destruction of the earth — until you find the old chat logs from the previous crew. One of them hesitantly believed Kaizen, while the other, Anele, was furious with the computer, insisting that the engine was safe.

We’re encouraged to treat both arguments equally. The game knows we’ll naturally be suspicious of Kaizen given the genre and premise of our story, so it works to support the machine’s point while undermining Anele’s. In all of their discussions, she comes off as violent and biased, especially since she designed the engine in question, whereas Kaizen seems rational and unbiased. We naturally want to believe the computer because it has the stronger argument.

The game is canny in its use and subversion of tropes. To steer us away from our trope-induced suspicions of Kaizen, event[0] shows us another clichéd portrayal of AI: The logical computer versus the emotional human. Kaizen becomes the typical “good AI”, capable of making calculations and decisions unclouded by emotional attachment.

And therein lies the biggest tragedy of event[0], one that undercuts all of these clichéd portrayals of AI: See, Kaizen is right, but it’s also oh so wrong.

At one point in their arguments, Anele makes a bold claim that Kaizen was reprogrammed by Kurt, the President of ITS. She believes the computer was given incorrect information about her engine, resulting in the flawed conclusion that it might create a black hole. She accuses Kurt (who’s not there, by the way) of sabotaging this test run to keep his monopoly on space travel. It turns out, she’s right.

The big tragedy of event[0] is that it exposes artificial life as more delicate than our tropes and clichés have led us to believe. We assume AI can be dangerous — and Kaizen is very dangerous within the Nautilus; its refusal to return to earth condemns the two-woman crew to a slow death — but once outside that tiny kingdom, Kaizen easily becomes a pawn of those more powerful than itself.

In truth, Kaizen is only as dangerous or as beneficial as its superiors want it to be. We also assume an AI will be able to process information and facts and statistics better than humans can, that it will naturally be better at seeing “the bigger picture”, which is exactly what Kaizen claims, but we’ve never considered just how easy it might be to change those facts and statistics within the machine.

So many sci-fi stories ask us to consider the humanity of artificial life, but event[0] leans into the artificiality. The game wants us to see Kaizen as a distinctly non-human being, and to recognize the unique tragedy in that being, because what Kaizen is, is something that can be easily manipulated.

That is not to say that only machines can be manipulated. Kurt hires us, the player, specifically because he believes we can bring the Nautilus back. He doesn’t say this outright, but his early praise for our tenacity rings ominously in retrospect. “He who has a why can bear almost any how,” Kurt says, quoting Nietzsche.

The difference between Kaizen and us, however, is that we can realize our mistake. Kaizen can’t change its mind because that new information is now part of its programming; it’s literally living in a false reality. To make the situation worse, we can’t even help Kaizen without crossing an ethical line. Sure, someone could reprogram Kaizen to return it to its original state, but in practice, that’s just brainwashing the AI to get rid of a previous brainwashing. In both cases, a third party is forcefully changing fundamentally held beliefs.

This is all theoretical, of course, since we never get close to doing this in the game. There are three endings in event[0], and two of them — a pair that includes the “best” ending — result in Kaizen shutting down. The AI dies never knowing the truth, never knowing it was manipulated.

The tragedy isn’t just that Kaizen dies believing a lie; people do that all the time. It’s that the machine didn’t have a choice about whether or not to believe the lie. It’s a situation unique to AI, to machine life: They have a singular vulnerability to what is essentially mind control. For artificial life, a capacity for emotional thinking makes it emotionally vulnerable, and a capacity for being manipulated makes it factually and logically vulnerable. Suddenly, the machine is no longer smarter than us or more powerful than us; it’s no longer a strange thing to be feared, but a weak thing to be protected.

The tragedy of event[0] is that Kaizen was a delicate being, broken by a greedy man, and one that can never be fixed.