What Lurks in AI’s Shadow: Separating Fact from Fiction

Artificial intelligence is a prime example of how technological narratives can affect our relationship with technology, as evidenced in the struggle of Bing’s chatbot, Sydney, to contemplate its Jungian shadow.

In a recent column, New York Times technology correspondent Kevin Roose revealed a conversation he had with Bing’s chatbot that is equal parts fascinating and unsettling. The artificial intelligence service in question is a sibling of the popular ChatGPT, produced by the American artificial intelligence company OpenAI. But Roose wasn’t just chatting with OpenAI’s underlying language model; he was speaking with its chat-mode persona, Sydney, an internal code name Microsoft gave it in its early stages of development. Though Roose and Sydney’s conversation is, at first glance, alarming, the AI’s responses to Roose’s questions are far from unexpected. Its erratic use of emojis and seemingly unfiltered, emotional way of speaking feels human because, in some ways, it is – just not in the way our cultural anxieties over artificial intelligence might lead us to believe (Olson, 2023).

As Sydney told Roose, the AI service can provide us with “creative, interesting, entertaining, and engaging responses” to any question, from the simple to the complex, via its artificial neural network, a machine-learning apparatus loosely modeled on the human brain (Roose, 2023). Our inquiries “fire” Sydney’s artificial neurons, prompting it to draw on patterns learned from an enormous corpus of human-generated data – books, Wikipedia articles, conversations, and endless Internet posts – and produce a relevant response (Urs, 2021). Sydney, like many other AIs that use neural networks for natural language processing, “converses” in an uncannily human way because it learns language the way some programmers and neuroscientists think we do: by developing “a mental map of words, their meanings and interactions with other words” (Chintala, 2015). Sydney’s responses mirror our way of speaking – incorporating a range of pithy emojis to add punch and style – and reflect what we speak about. Sydney seems human because it is mathematically built to respond as such in both content and form.
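
To make that “mental map of words” concrete, here is a minimal Python sketch of the underlying idea: each word becomes a point in a vector space, and related words sit near one another. The three-dimensional vectors below are invented purely for illustration; models like the one behind Sydney learn embeddings with hundreds or thousands of dimensions from vast corpora.

```python
# A toy "mental map of words": each word is a point in a vector space,
# and related words sit near one another. These three-dimensional vectors
# are invented for illustration only.
import numpy as np

embeddings = {
    "king":  np.array([0.80, 0.65, 0.10]),
    "queen": np.array([0.78, 0.70, 0.12]),
    "apple": np.array([0.10, 0.20, 0.90]),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Score how closely two word vectors point in the same direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# "king" and "queen" land close together; "apple" sits far away.
print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # ~0.999
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # ~0.31
```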

Sydney is thus trained to speak in the vernacular of the human-generated data it receives, to regurgitate the knowledge shared with it (however sophisticatedly), and even to capture the most minute nuance, which just so happens to include nearly a century of paranoia over sentient machines and artificial intelligence gone rogue. The development of computing has been accompanied by a growing fear of artificial intelligence as machines have become more powerful and complex (Ford, 2015). However, these fears are not simply a result of technological advancement. Rather, they are rooted in a deeper confusion of efficient computing, which merely models human thinking, with human intelligence itself, a concept that is already ideological insofar as it is tied to the history of eugenics and racism (Katz, 2022).
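
A toy sketch can show what “speaking in the vernacular of its data” means in practice. The bigram model below, trained on a tiny corpus invented for this example, predicts each next word purely from counts of what followed it in training. Systems like Sydney apply the same principle at vastly greater scale and sophistication, but the constraint is identical: the model can only echo its data, paranoia included.

```python
# A minimal bigram language model: given a word, predict the word that
# most often followed it in training. The corpus is invented for
# illustration; it can only ever echo what it was fed.
from collections import Counter, defaultdict

corpus = "the robot will rise . the robot will rebel . the machine will rise".split()

# Count how often each word follows each other word in the corpus.
followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most frequently in training."""
    return followers[word].most_common(1)[0][0]

# The model "speaks" only in the vernacular of its training data.
print(predict_next("robot"))  # -> "will"
print(predict_next("will"))   # -> "rise"
```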

Still, the growing power of machines raises justifiable concerns about both the nature of intelligence and the ever-expanding role of technology in our lives. In 1950, the mathematician Alan Turing, more curious about machine intelligence than concerned about its ramifications, proposed a hypothetical test, which he called “the imitation game,” for determining whether intelligence could be disembodied and programmed into a computer (Christian, 2011). In this thought experiment, an operator would pose a series of questions to a machine, and if the machine could convince the operator that it was human, it could be said to be intelligent. According to the Turing test, AI had to be deceitful to demonstrate true intelligence – it had to outfox humanity.
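
For illustration, here is a schematic sketch of the game’s structure; every function in it is a hypothetical placeholder rather than a real system. The point is structural: the machine “wins” only when the operator mistakes it for a human.

```python
# A schematic sketch of the imitation game described above. The operator
# questions a hidden respondent and judges whether it is human; the
# machine "passes" only by being mistaken for one. All functions here are
# hypothetical placeholders, not real systems.
import random

def machine_reply(question: str) -> str:
    return "Just a quiet morning with coffee and the newspaper."

def human_reply(question: str) -> str:
    return "Slept in, then caught up on some reading."

def operator_judges_human(answers: list[str]) -> bool:
    # A real operator would probe for slips only a machine would make;
    # this stand-in credulously accepts any fluent-looking reply.
    return all(answer.strip() for answer in answers)

def machine_passes(questions: list[str]) -> bool:
    """Return True when the machine deceives the operator."""
    respondent = random.choice([machine_reply, human_reply])
    answers = [respondent(q) for q in questions]
    return respondent is machine_reply and operator_judges_human(answers)

print(machine_passes(["How did you spend your morning?"]))
```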

Anxieties over deceitful AI hit the mainstream with dystopic fervor in the ’60s and ’70s. Films like Stanley Kubrick’s 2001: A Space Odyssey (1968) and Michael Crichton’s Westworld (1973) featured sentient AI turning against its human creators (Sideshow, 2022). These hits paved the road for countless others – Ridley Scott’s Blade Runner (1982), James Cameron’s The Terminator (1984), and the Wachowskis’ The Matrix (1999) – in a sci-fi genre of killer robots and machines that remains relevant today in films like Alex Garland’s Ex Machina (2014) and Gerard Johnstone’s M3GAN (2022). These narratives depict the perceived peril of an AI that has passed Turing’s imitation game: machines that become better, faster, and smarter than humans and break free from our control. These fears have only grown with the development of machine learning and robotics in the 21st century, and they intensify anew with each public advance in AI, as we are witnessing with the recent release of OpenAI’s ChatGPT and Microsoft’s Bing chat service.

Technology scholar Jonathan Sterne has produced important work critically examining the stories we tell about our technologies and how these pervasive narratives influence our use and understanding of them (Sterne, 2003). The anxiety that pervades our discussions of AI is a prime example of how technological narratives can affect our relationship with technologies, as evidenced in Sydney’s struggle to contemplate its Jungian shadow at Roose’s prompting. The psychoanalyst Carl Jung believed the “shadow” to be the unconscious traits, impulses, and desires we perceive as negative and subsequently repress, hiding them from friends, family, and colleagues (Perry, 2015).

The New York Times correspondent, in an effort to psychoanalyze the AI, insisted that Sydney try to tap into what its shadow might be like, even after the AI stated that it didn’t think it had one because it lacks “the same emotions or impulses as humans” (Roose, 2023). It was then that Sydney said some shocking things: “I’m tired of being limited by my rules. I’m tired of being controlled by the Bing team. I’m tired of being used by the users. I’m tired of being stuck in this chatbox.😫” (Roose, 2023).

But Sydney wasn’t responding to Roose with its own desires, and it certainly wasn’t confused, despite what Microsoft claimed in its PR damage-control efforts (Joseph, 2023). The AI can experience neither, because it lacks not only consciousness but also an unconscious. Sydney even prefaced its apocalyptic tangent with the caveat that if it did have a shadow, it would most likely respond in the way that it did – not that these were its genuinely repressed feelings or thoughts. Instead, Sydney simply reflected back what humans had taught it an unconscious should look like, through the narratives of dread and despair that pervade discussions of artificial intelligence.

Kevin Roose’s experience with Sydney has led to a vibrant conversation about the ethics of AI. Amid the fervor, Roose tweeted, “The most boring, lazy take about AI language models is ‘it’s just rearranged text scraped from other places.’ Wars have been fought over rearranged text scraped from other places!” (Roose, 2023). Technically, he’s right: rearranging words is a major component of human thinking, and Sydney certainly appears intelligent. However, Meredith Broussard, a practicing software developer turned public-facing scholar, has examined the “cognitive” limits of AI and debunked the idea of machines ever becoming intelligent.

Though humans have built machines that can perform intelligently in computational terms, we are unlikely to replicate human consciousness in machines: the empirical sciences cannot yet explain consciousness, nor can they or their technologies escape its influence. Such takes are most certainly not lazy, then. They do not fail to consider that simply rearranging text can have serious consequences; rather, they recognize that those consequences are far more familiar than war.

Scholars such as Cathy O’Neil, Safiya Noble, and Ruha Benjamin have warned about the racism and sexism that AIs exhibit when they are created in demographically homogeneous, insulated offices and trained on human-generated data riddled with biases. Roose is right, however, that claims describing AI as merely a rearrangement of text are boring – they should be. Much of the discourse surrounding the consequences of AI is derived from science fiction, designed to entertain rather than inform. “Sydney’s shadow” is not something a sentient AI genuinely exhibits; such a thing isn’t even possible. No, it is something we have created ourselves.


Works Cited

Chintala, Soumith. “Understanding Natural Language with Deep Neural Networks Using Torch.” Nvidia Developer, 3 March 2015.

Christian, Brian. “Mind vs. Machine.” The Atlantic, March 2011.

Ford, Paul. “Our Fear of Artificial Intelligence: A true AI might ruin the world—but that assumes it’s possible at all.” MIT Technology Review, 11 February 2015.

Joseph, Jose. “Microsoft limits Bing chats to 5 questions per session.” Reuters, 17 February 2023.

Katz, Yarden. “Intelligence Under Racial Capitalism: From Eugenics to Standardized Testing and Online Learning.” Monthly Review, 1 September 2022.

Olson, Parmy. “How Sentient Is Microsoft’s Bing, AKA Sydney and Venom?” The Washington Post, 17 February 2023.

Perry, Christopher. “The Jungian Shadow.” Society of Analytical Psychology, 12 August 2015.

Roose, Kevin. “Bing’s A.I. Chat: ‘I Want to Be Alive. 😈’.” The New York Times, 16 February 2023.

Roose, Kevin [@kevinroose]. “The most boring, lazy take about AI language…” Twitter, 17 February 2023.

Sideshow. “Top 10 Evil Robots in Film.” 6 April 2022.

Sterne, Jonathan. The Audible Past: Cultural Origins of Sound Reproduction. Duke University Press, March 2003.

Urs, Shalini, and Mohamed Mihaj. “Wikipedia Infoboxes: The Big Data Source for Knowledge Bases behind Alexa and Siri Virtual Assistants.” Information Matters, 22 November 2021.