To thrive in the contemporary tech workplace, humans must learn to succeed in an environment of paradox. Recent leaps in the power and sophistication of artificial intelligence—headlined by GPT-4 and the LLM explosion—have many companies reconfiguring their strategies for the future; at the individual level, these leaps may have people reconfiguring their career plans and searching for the tools that will allow them to survive in an altered workplace. But it is not easy: the business of predicting the future is rife with disagreement and contradiction. Many good ideas exist, but they often conflict with or outright oppose one another.
Humans have encountered paradox—to give ourselves a Merriam-Webster baseline, something that has “seemingly contradictory qualities or phases”—since well before we knew what to name it. But technological developments, especially within artificial intelligence, have brought new paradoxes upon us and given old paradoxes new forms. Among the strategies for navigating this novel environment, some separate themselves from the noise by taking today’s inherent paradoxes into account. By acknowledging and even embodying the contradictions of the professional environment in tech, these approaches can help us succeed within the workplace and develop the tools to contextualize it properly within our own worldview. Any individual learning strategy for adapting to technology’s uncertain evolution should build upon the uniquely human skills needed to cope with paradox.
In his 2017 book Robot-Proof, Northeastern University president Joseph E. Aoun describes an educational curriculum called Humanics that is designed to prepare students for the workplace of the future. Anticipating a labor environment populated by increasingly sophisticated computing technologies that can perform certain jobs more efficiently and effectively than humans, Aoun writes (italics original),
We need a new model of learning that enables learners to understand the highly technological world around them and that simultaneously allows them to transcend it by nurturing the mental and intellectual qualities that are unique to humans—namely, their capacity for creativity and mental flexibility. We can call this model humanics.
Simultaneous understanding and transcendence: multiplicity is baked into the premise of Humanics from the beginning. It combines the nurturing of uniquely human skills—critical thinking, creativity, empathy—with the development of the skills necessary to interact effectively with machines, which still need consistent human input. Robot-Proof is understandably conceptual in nature; it is written for a wide audience of students, prospective students, their parents, other educators, and trustees, as well as the leaders of the businesses with which Northeastern places thousands of students on co-op each year. Its second chapter is titled “Views from the C-Suite.” Though focused on higher education in particular, Robot-Proof joins a growing canon of grand-scope, high-level nonfiction books, such as Nick Bostrom’s Superintelligence (2014), Henry A. Kissinger et al.’s The Age of AI (2021), and Stuart Russell’s Human Compatible (2019), that discuss how technology and artificial intelligence are changing our world.
What might be missing, then, is a sense of the individual experience of the billions of humans who will interact with AI over the coming decade. What will be felt, seen, said, and worried about by the students that Aoun’s Northeastern and thousands of other universities graduate? How about members of the current labor force, many of whom are in their third or fourth decades of careers that have already spanned incredible technological shifts and tech-driven economic upheaval? If the future of work is interaction with intelligent machine counterparts, workers’ cognitive and emotional experiences will undergo a seismic shift. It is therefore worth examining what it feels like to be a human in the age of humanics.
The age of Humanics will be an age of paradox. Beyond the multiplicity of understanding/transcendence, there is an apparent contradiction baked into the very idea of Humanics: its value proposition is based on a mechanistic view of organizations where the value of humans is measured by their ability to contribute to the organization’s total output, yet that contribution exists because of our ability to move beyond mechanistic thinking. In her book God, Human, Animal, Machine, the essayist Meghan O’Gieblyn discusses related paradoxes present in the field of artificial intelligence. O’Gieblyn begins to tackle technology’s creation and perpetuation of paradox by pointing out how the development of artificial intelligence has required the removal of “aspects of nature that it could not systematically explain.” If we can’t turn it into data, it’s worthless to the algorithm.
Technology has progressed in part because we have been able to remove all mystery from the universe of its perception. We humans deal with ambiguities and filter them out so our computers can perform better. Although this has effected rapid progress, technology remains incapable of answering many of the questions that dominated human thought long before AI came into the picture: What is consciousness? Does free will exist? Is there a God? The lessons learned and skills developed in exploring these questions, often in the context of our individual experiences with phenomena such as love, natural beauty, and joy, are partly responsible for the faculties that allow humans to remain valuable to the 21st-century enterprise.
An important pillar of Humanics is exploring these questions more deeply. Yet in order to determine what is valuable to the 21st-century enterprise, we must engage in cold calculation, the sort of act that requires no understanding of love or joy whatsoever. Thus, a human-centric exploration of Humanics precipitates a cycle of contradictions: our supra-mechanistic thinking allows us to be an effective cog in the greater machine, which by its nature perpetuates a mechanical world, which can only be improved with the help of our supra-mechanistic thinking, and so on. “This central impasse of science,” as O’Gieblyn calls it, creates an incredibly confounding, and often troubling, experience for the actual humans placed within the enterprise itself.
God, Human, Animal, Machine spends many of its pages on technology-driven paradox. O’Gieblyn is guided by the question of why our advances in information technology, data, and artificial intelligence seem to return us to spiritual metaphors that predate them by millennia. She writes:
Today artificial intelligence and information technologies have absorbed many of the questions that were once taken up by theologians and philosophers: the mind’s relationship to the body, the question of free will, the possibility of immortality. These are old problems, and although they now appear in different guises and go by different names, they persist in conversations about digital technologies much like those dead metaphors that still lurk in the syntax of contemporary speech. All the eternal questions have become engineering problems.
A reality observed by most thoughtful people who become interested in artificial intelligence is that one cannot explore the field for very long before encountering the Big Questions about what it means to be human. These Questions lurk just below the topmost layer. And it is precisely the skills required to deal with them that Humanics develops. By “absorbing…questions,” O’Gieblyn does not mean that artificial intelligence is itself asking about the meaning of life; rather, that in the course of their work within AI, practitioners are forced to ask themselves and each other what it means to be alive and human. As Aoun puts it, “A fundamental aspect of human literacy involves the ethical quandaries raised by intelligent machines.”
In the Humanics-optimized enterprise we will be forced to address the Big Questions, consciously or subconsciously, many times a day. The very act of asking, “Could a machine do this better than I could?”—the operative question of Humanics—is to ask an individually scaled version of many of the Big Questions. This question can be answered in binary language, the preferred language of computers. Thus, the Humanics-optimized enterprise organizes itself according to principles that require little Humanics to understand. The apparent contradiction appears again. Yet for the humans in this enterprise to perform at a sufficiently high level, they will need to understand why the workplace is changing the way it is, and to cope with those changes on an intellectual, emotional, and even spiritual level. For that, a Humanics education is necessary.
We can reaffirm what it means to be human by taking pride in our ability to navigate the contradictions of our age, our society, and our jobs. Doing so enhances the faculties that will be in highest demand in the workplace in the coming years. In an article for the BBC’s Worklife titled “Why the ‘Paradox Mindset’ Is the Key to Success,” authors Loizos Heracleous and David Robson find that “companies and institutions that embrace paradoxical strategies tend to outperform their competitors.” They draw on workplace studies, studies in psychology, and real-life examples such as the physicists Albert Einstein, who investigated the paradox that objects can be simultaneously at rest and in motion, and Niels Bohr, who embraced the idea that light behaves as both a wave and a particle.
Likewise, a data engineer in today’s enterprise may need to investigate the apparent paradox that she may know better than an algorithm capable of processing unimaginable quantities of data. She’ll need to know how, why, and when to trust herself more than a machine. Her investigation may lead to a critical override that saves her company millions, or an insight that gives it an important competitive advantage.
Contributing to today’s enterprises therefore requires us to seek opportunity within contradiction. One pillar of Humanics, the development of uniquely human skills, gives us the tools to do so. The other pillar, learning how to effectively interact with computers, embodies those very contradictions we will need to navigate. It may be a painful process, but many of the skills we hone in Humanics are skills we have been developing for millennia.
O’Gieblyn writes in God, Human, Animal, Machine, “Bohr believed that whenever we encountered a paradox, it was a sign that we were hitting on something true and real.” If we adopt this attitude, the human-machine workplace becomes an environment that aids us in the seemingly eternal quest of understanding what it means to be human.
Aoun, Joseph E. Robot-Proof: Higher Education in the Age of Artificial Intelligence. The MIT Press, 2017.
O’Gieblyn, Meghan. God, Human, Animal, Machine: Technology, Metaphor, and the Search for Meaning. Doubleday, 2021.
Heracleous, Loizos, and David Robson. “Why the ‘Paradox Mindset’ Is the Key to Success.” BBC Worklife, 12 November 2020.