
We, Robots: Staying Human in the Age of Big Data

Can technology solve all of our problems? Curtis White urges us to remember that we've been deluded by technology—and seductive stories—before.

Excerpted from We, Robots: Staying Human in the Age of Big Data by Curtis White with permission from Melville House. Copyright © Curtis White. All rights reserved. No part of this excerpt may be reprinted, reproduced, posted on another website or distributed by any means without the written permission of the publisher.

The Criticism of No Criticism

In this culture, we are asked to live through stories that make no sense but that we are not allowed to criticize—unless the criticism itself confirms the stories.

Take Nicholas Carr’s recent book The Glass Cage: Automation and Us, a detailed critique of our over-dependence on Cowen’s intelligent machines. A good part of Carr’s critique is pragmatic: the computers we depend on are not as safe or productive as we have been led to think they are—in large part because the human attendants to the computer’s work (Cowen’s freestylers) are “deskilled” and have become complacent. Carr provides multiple examples of the dangers of our growing dependence on computers in the airline industry (where some pilots have forgotten how to fly, especially in crisis situations), in medicine (where doctors have lost the ability to diagnose), and in architecture (where architects no longer know how to draw). (Carr doesn’t mention the most ominous use of AI: autonomous weapons like Britain’s “fire and forget” Brimstone missiles. Will these military innovations breed a generation of soldiers who can’t shoot straight?)

While Carr is rightly concerned with the consequences of our digital dependencies, he does not come close to calling for the abandonment of an economy based on computers. Rather, he is asking for a correction. He doesn’t condemn computers, or automation, or freestyling; he simply reminds us that we should use digital power as a tool and not be displaced by it. It is a position that Cowen would very likely agree with. Carr simply calls for “wisdom” and, to use an engineer’s term, recalibration. A Luddite he’s not.

Which isn’t to say that Carr lacks sympathy for the Luddites, for there is more substance to his critique than concern with safety. For Carr, the deskilling of labor through computer automation is not only inefficient and unsafe, it is also dehumanizing. Carr makes frequent appeal to familiar ethical concepts like “freedom”—“all too often, automation frees us from that which makes us feel free”—and “humanity”— “automation confronts us with the most important question of all: what does human being mean?” At one point, Carr seems to answer this question by saying, “We are, after all, creatures of the earth.” This means that we are not just the dematerialized phantoms that AI seeks; we are embodied in a particular world:

Getting to know a place takes effort, but it ends in fulfillment and in knowledge. It provides a sense of personal accomplishment and autonomy, and it also provides a sense of belonging, a feeling of being at home in a place rather than passing through it.

Invoking Karl Marx, Carr complains that “in case after case, we’ve seen that as machines become more sophisticated, the work left to people becomes less so.” He worries that “when automation distances us from our work, when it gets between us and the world, it erases the artistry from our lives.”

That does sound bad. But there’s something odd about these assertions—or rather, something missing. Clearly, Carr’s conclusions are a product of the Western humanist tradition, which took up Christian ethics, secularized them in Kant’s “categorical imperative,” enlarged them through Romanticism’s call to freedom, gave them political force through socialism, and brought them to full flower after World War II in the work of leftist humanists like Theodor Adorno, Herbert Marcuse, Paul Goodman, Theodore Roszak, George W. S. Trow, Michel Foucault, Slavoj Žižek, Chris Hedges, and countless more that any half-competent English grad student could instantly name.

This tradition makes it possible for Carr to invoke certain ethical values and have them seem familiar and acceptable, but the tradition itself is not present in this book nor, it would appear, in Carr’s mind. Without that explicit acknowledgment, Carr’s ethical claims exist, as Trow put it, “in the context of no context.” “The motif,” Trow wrote, “is history used in the service of the force of no-history.” And Carr provides dehistoricized criticism in the service of no criticism.

It’s not that Carr does not provide reasons, or evidence, for his misgivings about technology. In fact, it is in these reasons that his meaning—his intention—is most naked. To support his humanist critique, Carr appeals not to philosophy but to science. He appeals to “research” and “studies,” words that he uses dozens and dozens of times in just one short chapter.

Researchers at the venerable RAND Corporation … detailed analysis … the RAND study … RAND research … recent published studies … the research that has been used … “a large majority of the recent studies” … existing research … strong empirical support … research that failed to find … one study, published in the journal Health … the researchers argue … a study of primary-care physicians … a recent study of the shift from paper to electronic records … in a study conducted at a Veterans Administration clinic … in another study—conducted at a large health maintenance organization—researchers found that … a study said that electronic record keeping …

To be fair, Carr is critical of the RAND research, but he seems to believe that the only way of countering it is through counterstudies and research and not through an intellectual grounding in the history of ideas. That would appear to be verboten. The problem for Carr’s position is that there is no empirical research and no clinical study that can show why we should care about the loss of “artistry” in our lives. That evidence is elsewhere.

The reason that the Western humanist tradition—with its explicit antipathy for social regimentation in capitalist economies—is not in Carr’s book has not only to do with Carr. Our culture’s implicit but strongly regulatory understanding is this: you may use that history and those ideas if you are an academic or if you write for a low-circulation left-leaning magazine or press, but you may not use that history or those ideas in a book intended for the general public, even when the book’s outlook is dependent on that history. You may criticize only in a way that either directly or indirectly confirms the legitimacy of the ruling techno-capitalist order. This “regulation” does not need to be stated so long as it is thoroughly internalized by writers and editors.

The irony here is that while Carr assumes that “research” and “studies” provide the best way to make this argument, or any argument, the kind of science he depends on is itself utterly dependent on a truly breathtaking world of as-ifs, of fictions. Carr presents to us not only made-up sciences but even made-up scientists, newly minted and factory sealed, in particular the “human-factor expert.” These researchers have knowledge of the best kind—expert knowledge—of, obviously, “human factors.” This noble field is proficient in the creation of neologisms and buzzwords like:

EXPERIENCE SAMPLING

MISWANTING

DESKILLING

SKILL FADE

AUTOMATION ADDICTION

COMPUTER FUNCTIONALITY

DEGENERATION EFFECT

SUBSTITUTION MYTH

AUTOMATION COMPLACENCY

AUTOMATION BIAS

JUDGMENT DEFICIT

PROCEDURALIZATION

AUTOMATION PARADOX

INTEROPERABILITY

DESKILLING OUTCOMES

ALERT FATIGUE

PEOPLE ANALYTICS

DATA FUNDAMENTALISM

ad infinitum.

I’m feeling a little proceduralized, deskilled, fatigued, and lacking in functionality just from putting this list together.

But let’s put the pseudoscientific jargon aside and return to Carr’s fundamental question: what human thing is it that the ills of computer automation deprive us of? What knowledge and what skills are we “creatures of the earth” being denied? Carr writes:

Knowledge involves more than looking stuff up; it requires the encoding of facts and experiences in personal memory. To truly know something, you have to weave it into your neural circuitry.

As this passage reveals, incredibly, Carr’s human objections (as opposed to his technical objections) to what intelligent machines are doing to us are also based in science, neuroscience, a discipline whose strong tendency is to think of the brain as a machine: a “circuitry” into which “facts” are “encoded,” in Carr’s words. As for the source of this ethic, Carr tells us “ergonomists are our metaphysicians” or, he emphasizes, “should be.”

Take that, Theodor Adorno.

In his concluding chapter, Carr makes an effort to move away from science. He calls the reader’s attention to a poem by Robert Frost in which there is a line that he is “always coming back to”: “The fact is the sweetest dream that labor knows,” from the poem “Mowing.”

For Carr, this line is evocative of a certain hands-in-the-dirt knowledge and ethic. It is an example of how we are “embodied in a particular world.” It’s a Tolstoyan perspective up to a certain point.

He’s a farmer, a man doing a hard job on a still, hot summer day … His mind is on his work—the bodily rhythm of the cutting, the weight of the tool in his hands, the stalks piling up around him… The work is the truth.

But then Carr writes,

We rarely look to poetry anymore, but here we see how a poet’s scrutiny of the world can be more subtle and discerning than a scientist’s. Frost understood the meaning of what we now call “flow” and the essence of what we now call “embodied cognition” long before psychologists and neurobiologists delivered the empirical evidence.

Of course, if Carr’s position were truly Tolstoyan, his concluding appeal would not be to “empirical evidence” but to the way that the poem brings together, per Hesiod, the “works and days” of the farmer. Or he would invoke Virgil’s Eclogues, or what Tolstoy invoked: a radical understanding of the meaning of religious faith. In “A Confession,” Tolstoy wrote, “True religion is that relationship, in accordance with reason and knowledge which man establishes with the infinite world around him, and which binds his life to that infinity and guides his actions.” Now, that is an apt way of talking about Frost’s poem. Or consider how the art critic John Berger talks about how a song inhabits the body of the singer: “It finds its place in the body’s guts—in the head of a drum, in the belly of a violin, in the torso or loins of a singer and listener.”

Instead of this, Carr attempts to imagine that the work of the poet can be “embodied” by joining with the work of the neuroscientist, an odd quest on which he does not travel alone. In 2007, a fellow science journalist, Jonah Lehrer, published Proust Was a Neuroscientist, in which he argued that the modern insights of neuroscience had been discovered earlier by artists like Proust.

Unfortunately, yoking the poet to the neurobiologist requires an awkward logic that must go something like this: Robert Frost has a powerful experience while working on a farm; he writes a poem that captures that moment of labor; he comes to understand that “love … [lays] the swale in rows.” So far, so good. But next we must make a leap of faith: the process through which the experience became a poem is the same as “what we now call” embodied cognition; and embodied cognition is the neural process of encoding work/poem in neural circuitry. All of which is fine so long as you don’t mind overlooking what Frost explicitly urges you to consider—love. The farmer didn’t lay the swales and the poet didn’t lay the swales and embodied cognition sure as shit didn’t lay the swales; love did.

Does Carr think that love is also coded in neural circuitry? Is that what we are now to call love—encoding? We don’t know what Carr thinks because he simply ignores the presence of the word (not what you’re supposed to do when reading a poem). But for Frost, love is not the consequence of work or poem and it certainly isn’t the result of a neural circuit. Love is not a witness to the labor; it is what asks to be witnessed. Frost wants the poem to open out onto the question of love; Carr wants to close off the poem by equating it with neural embodiment.

It is not at all the case, of course, that neuroscientists are on board with Carr’s way of thinking. Science journalists like Carr and Lehrer are far more likely to indulge in metaphysical speculations about the identity of poetry and neuroscience than actual neuroscientists are. In 2014, New York Times science reporter James Gorman wrote an instructive article about submitting to an MRI with the thought that he might see something of his “self” in the image.

Philosophers might say that my desire and disappointment [he didn’t see his “self”] are all the result of a basic, and pretty dumb, misunderstanding. The “me” I hoped to glimpse might emerge from the physical brain, but it is a different category from an actual brain region or pattern …

But I think that the scientists at Washington University and I are actually interested in something far less. They want clear indications of what structures and activities are associated with differences in personality or mental health. They want reliable, detailed information on what is normal in a brain, for entirely practical purposes.

In other words, neuroscientists don’t think that they have “delivered empirical evidence” about the transcendental experience Frost is providing us (and it is explicitly transcendental: the ordinary act of mowing is transcended through the action of love and an act of the imagination; love is transcendental because it is the condition that made the poet’s experience possible). Certain overexcited journalists might think so, but most neuroscientists don’t. For neuroscientists, the poem is in a “different category” of experience.

And that is a telling point for a criticism that wants to criticize technology in the name of human interests but then reduces those interests to whatever can be shown through technical research and studies. Such a criticism defends humanity by excluding it from consideration.

Carr’s book is like the triple full-page ad that appeared in the October 27, 2014, New York Times. On the first blue-green page, large white text appears over a human eye reflecting a ceiling of fluorescent lights. It reads:

TECHNOLOGY CAN
SAVE US ALL.
PROVIDED IT DOESN’T KILL US FIRST.

The exponential proliferation of mobile devices, social media, cloud technologies and the staggering amounts of data they generate have transformed the way we live and work. In fact, 61 percent of companies report that the majority of their people use smart devices for everything from email to project management to content creation.

While all of these advancements have improved our lives and provided us with greater opportunities for innovation than ever before, they have also accelerated the rise of an entirely new problem to contend with: unprecedented and crippling complexity.

The world may be getting smarter, but it hasn’t gotten any easier.

The ad gets scarier:

accomplishing less … growth slowing … declining … an intractable issue of our time … an epidemic … far ranging … too complicated … health issues … stress … information overload … suffering … enormous cost … escalating costs … an impediment to growth … time wasted … unproductive activities …

Sounds awful, no? But SAP has the answer: “While technology is clearly contributing to the problem it also holds the solution—a different kind of solution built on the idea that sophisticated technology doesn’t have to be complicated technology.”

Okay, so what should I do?

You, reader, should “run simple” because “if we simplify everything, we can do anything.”

That sounds great! But what should I do?

“We invite you to read more at sap.com/runsimple.”

“Okay,” you might say, “this is getting complicated. Can’t I just buy something?”

“Yes!”

SAP has entered a “cloud pact” with IBM to sell cloud-based business apps to corporations, so for the moment they have lots of money to buy big splashy three-page ads in the Times.

The point is not complexity and it is not simplicity. The point is selling you, business leader, something you probably didn’t know you needed. It’s not criticism, it’s an advertisement, and so, in a more “complicated” way, is The Glass Cage.


Curtis White is the author of, most recently, the acclaimed The Science Delusion: Asking the Big Questions in a Culture of Easy Answers, and of the international bestseller The Middle Mind: Why Americans Don’t Think for Themselves.
