The Big Switch: the National Robotics Week Exclusive with Jonathan Mahood
Cartoonist Jonathan Mahood's Bleeker is a genuine cultural flashpoint, embodying the struggle between print and digital, and dealing with the larger issue of why we are beginning to resemble our machines. No wonder Bleeker is the mascot for National Robotics Week…
Cartoonist Jonathan Mahood's Bleeker is a genuine cultural flashpoint. It wrestles with issues of comedy-storytelling, attempting to bridge situation comedy with gag cartoons. It embodies the struggle between print and digital (Mahood having, in a savvy move, opted for print after securing digital). And it attempts to honestly deal with the larger issue of why, uncannily so, we are beginning to resemble our machines. No wonder Bleeker is the mascot for National Robotics Week.
Maybe the hard-edged, tongue-in-cheek tone of faux noir narration is the best way in:
Like many a youth in revolt, I found my way into the importance of Alan Turing through Douglas Hofstadter's Gödel, Escher, Bach: An Eternal Golden Braid. There's no shame in admitting that here, within the recesses of a respected publication like PopMatters. The very respectability that PopMatters has accumulated over time is more than enough to encourage the perception of my wayward youth as more of a childhood indulgence, an almost necessary flirtation with the science of the mind. Moreover, more and more of us are coming out of the woodwork every day. Perfectly respectable writers are now more prepared than ever to claim an interest in Hofstadter and in GEB. But this isn't a story about misspent, wayward childhoods. Nor is it a story about how one work has come to practically dominate an entire field, and has become the cultural shorthand for understanding the problems of the science of mind. This story, at least in its beginning, is about that raw, almost-palpable sense of having been cheated that can only come from understanding the Turing Test.
In all honesty though, the Turing Test still feels like a swindle. In his 1950 paper "Computing Machinery and Intelligence", which appeared in the scientific journal Mind, Turing betrays a limited cultural context, the same cultural milieu that ushered in the limitations of Orwell's 1984. Imagine a game between a man, a woman, and an interrogator, Turing begins… And then he launches into describing a paranoid scenario where the man and woman are both aware of each other's identities, but the interrogator (secreted away in a position where the man and woman cannot be observed, and can be interacted with only through a crude form of text messaging) is not. What if the man were replaced with a machine? Could the interrogator still possibly work out the human from the machine?
Some twenty-two pages later, and this is the part that feels like a cheap swindle, Turing concludes that if the interrogator believes the machine to be intelligent, the machine must be said to be artificially intelligent. Machine AI is really a question of delusion projected onto the world. In possibly the most partial, non-democratic way, the arbiter of AI is anyone who finds themselves in the unlikely, paranoid position of "interrogator". Imagine endless loops of playing Deal or No Deal, but not awash in the radiant joy of playing as the contestant; rather, imagine yourself as the banker, busily trying to hold onto ever-decreasing amounts of money until some unlikely schlub strikes it lucky. The Turing Test is secretly the vast, grand horror-show of popculture; one where arbitrary opinion becomes an intractable categorization of the world.
National Robotics Week faces Turing's legacy head-on. Whatever intellectual benefit can be drawn from the Turing Test (and there is much, though the Test has equally become a kind of ideological cage limiting more cutting-edge work), the sterile, paranoid confines of the Test dissociate us from the true emotional yield of robots and robotics--that they come from us, that they express our innate desires and ambitions. Far from sterile or paranoid, robots are of the same cultural momentum as cartoons like Mickey Mouse or Archie or Superman. National Robotics Week's canny move to introduce Jonathan Mahood's Bleeker, the Rechargeable Dog as its official mascot demonstrates a keen understanding of the sociocultural impact of robots and robotics.
A far cry from the idea of machines as optimum objects somehow alien to a human environment (as seems to have been the dominant view in 1950), National Robotics Week focuses on the vast and largely naturalistic way in which we are all already entwined with robotics. To wit, the National Robotics Week website emphasizes exactly this:
"National Robotics Week is organized by an Advisory Council (see our Partners) which recognizes robotics technology as a pillar of 21st century American innovation, highlights its growing importance in a wide variety of application areas, and emphasizes its ability to inspire technology education. Robotics is positioned to fuel a broad array of next-generation products and applications in fields as diverse as manufacturing, health-care, national defense and security, agriculture and transportation. At the same time, robotics is proving to be uniquely adept at enabling students of all ages to learn important science, technology, engineering and math (STEM) concepts and at inspiring them to pursue careers in STEM-related fields. Robotics Week is a week-long series of events and activities aimed at increasing public awareness of the growing importance of "robo-technology" and the tremendous social and cultural impact that it will have on the future of the United States."
It's not an either-robots-or-us mentality. Quite the opposite. And as a cartoonist and humorist working successfully across both print and digital media, Jonathan Mahood embodies an even grander complexity.
"It's like the music you grew up with," Jonathan says to me on the eve of National Robotics Week. We're talking about his influences, but Jonathan blends this discussion in seamlessly with the state of webcomics at the time he began in the medium. "It's the music you hit through high school and through university and stuff like that. And it kind of sticks with you for a while. I think for comics it's the same way. For me it was like the Far Side, Calvin & Hobbes. And then of course as a little kid it would have been Charlie Brown and that kind of stuff. I kind of felt when I hit webcomics I was already the old guy, because when I went through university I did a Bachelor of Fine Arts. And at that time, the guys in the School of Design were starting to work on computers, but many of them were still text-editing with the blocks and stuff. And for me, that was the time of the switch, because I'd been reading newspaper comics for all that time, and then you hit the web and it was open. I think had I probably been about ten years younger, this would have been something that all my buddies would have been doing. We'd have been designing websites and all that kind of stuff on our own. I think that the guys that grew up with webcomics, who grew up in that webcomics world, I think that the way they approach the comics is probably really different than the old school newspaper comics. As far as the transitioning in terms of the culture…well I don't know, maybe it's harder for them too. Maybe it's harder for them to transition into newspaper comics and rein it in after they've had that open space."
It's the moments of self-doubt, the moments of uncertainty, and Jonathan's willingness to express these moments out loud and bring an audience into them, that speak to his inner creative process. It's a creative process that just pops, and a creative mind that leaps out at you. When I ask him about characterization, the core cultural and popcultural issue around the perception of robots, Jonathan gives an answer that can really only be labeled as visionary. He equates the problem of the cultural embrace of robots to a problem of the adaptiveness of genre in comedic storytelling. He begins, "Yeah that's interesting, isn't it? Obviously the robots are supposed to support and serve the humans. Initially when I started, the two lead characters Skip [the human] and Bleeker were kind of even. Now, definitely, I think of Bleeker more as the primary lead. Sometimes I'll drop Skip out of the strip altogether. I guess what I find more interesting is the robot characters trying to function in a world that's not normally theirs. The robot's trying to function in a human's world, and to maintain relations, and to figure it all out. And as more robots have been added to the comic strip I find it's more fun to write all these characters with their various functions and whatever they were built to do or how they malfunction, functioning in this world. So I think that's been a big change for me. It's certainly been a focus I enjoy more."
Robots trying to function in a human world.
Jonathan's insight is to tap the genre of situational comedy and blend it with the mode of the gag comic strip. It's a profound move, and certainly one that recasts our understanding of our association with our machines as akin to our association with our media, what Marshall McLuhan called "the extensions of man". Working through the philosophical issues around our relations with our machines and reaching the conclusion that they're somehow similar to our associations with our media is a far-reaching insight. And one that's echoed in the earlier meditations of Steven Shaviro.
On the last day of 2007, Shaviro, as much reviewing David Levy's Love & Sex with Robots as interrogating the book's underlying assumptions, writes on his blog:
So, the paradox of Levy’s account is that 1) he insists on the indistinguishability of human beings and (suitably technologically advanced) robots, while 2) at the same time he praises robots on the grounds that they are infinitely programmable, that they can be guaranteed never to have desires that differ from what their owners want, and that “you don’t have to buy [a robot] endless meals or drinks, take it to the movies or on vacation to romantic but expensive destinations. It will expect nothing from you, no long-term (or even short-term) emotional returns, unless you have chosen it to be programmed to do so” (p.211).
How do we explain this curious doubleness? How can robots be both rational subjects, and infinitely manipulable objects? How can they both possess an intelligence and sensibility at least equal to that of human beings, and retain the status of commodities? Or, as Levy himself somewhat naively puts it, “today, most of us disapprove of cultures where a man can buy a bride or otherwise acquire one without taking into account her wishes. Will our children and their children similarly disapprove of marrying a robot purchased at the local store or over the Internet? Or will the fact that the robot can be set to fall in virtual love with its owner make this practice universally acceptable?” (p. 305).
I think the answer is that this doubleness is not unique to robots; it is something that applies to human beings as well, in the hypercommodified consumer society that we live in. (By “we”, I mean the privileged portion of humankind, those of us who can afford to buy computers today, and will be able to afford to buy sexbots tomorrow — but this “we” really is, in a sense, universal, since it is the model that all human beings are supposed to aspire to). We ourselves are as much commodities as we are sovereign subjects; we ourselves are (or will be) infinitely programmable (through genetic and neurobiological technologies to come), not in spite of, but precisely because of, our status as “rational utility maximizers” entering the “marketplace.” This is already implicit in the “scientific” studies about “human nature” that Levy so frequently cites. The very idea that we can name, in an enumerated list, the particular qualities that we want in a robot lover, depends upon the fact that we already conceive of ourselves as being defined by such a list of enumerable qualities. The economists’ idea that we bring a series of hierarchically organized desires into the marketplace similarly preassumes such a quantifiable bundle of discrete items.
Or, to quote Levy again: “Some would argue that robot emotions cannot be ‘real’ because they have been designed and programmed into the robots. But is this very different from how emotions work in people? We have hormones, we have neurons, and we are ‘wired’ in a way that creates our emotions. Robots will merely be wired differently, with electronics and software replacing hormones and neurons. But the results will be very similar, if not indistinguishable” (p.122). This is not an argument about actual biological causation, but precisely a recipe for manipulation and control. The robots Levy imagines are made in our image, precisely because we are already in process of being made over in theirs.
Will the emotions of our machines, once machines eventually have them, be constructed in silicon in the same way our own impulses, tastes, and opinions are constructed biologically, by way of genes and neurons and electrical impulses? Or, in a deeper sense, will we begin a process of preconstructing robot values and behavioral modes based on nothing more than our own construction by way of the media? As with many things, for right now, the answer isn't as important as posing the question. Jonathan Mahood's Bleeker, and indeed all of National Robotics Week, is focused on wresting issues of robotics and questions of machine intelligence free from the sterile paranoia of days gone by, and ushering these issues into the actual 21st century.
It's not that we need the problems to become easier to solve, it's that we need our thinking to be at a higher level, one that correctly and assiduously weighs sociocultural impact as much as intellectual yield. Jonathan Mahood and National Robotics Week both understand this implicitly. And if anything, it's this crucial move in our thinking that can be labeled with the same term that Jonathan himself is so fond of using--"the Big Switch."