When Worlds Collide

Are video games art or science? It’s unlikely to be a question that preys very much on the minds of either players or non-players of the manifold digital diversions currently available on consoles, via the Internet or in the arcades. Neither the diehard nor the casual player of Prince of Persia, The Sims or the latest iteration of Time Crisis need concern themselves with such an apparently abstract question, since it doesn’t impact on their immediate engagement in the virtual milieu. At least not if the game is any good. Similarly, for the non-player, regardless of whether they are beguiled, intrigued, bemused, or appalled by what the medium offers, or whether in fact they’re excluded from the medium or simply non-committal about it, on which side of the science-art binarism we choose to place games is simply not seen as relevant to a discussion of the medium’s pros and cons. For players and non-players alike, games are purely and simply one thing: entertainment.

In 1956 the British writer CP Snow first used the expression “The Two Cultures” to characterize the split between the arts and the sciences. Snow was encapsulating a state of affairs whose roots stretch back through Romantic thinking to the Enlightenment. Since few of us would classify ourselves as polymaths, equally adept in the worlds of physics and literature, biology and cinema, we could easily conclude that the Two Cultures divide continues unabated, at least at the level of the distinctly non-Renaissance individual. Except that we could also argue that in the 20th century, art and science became, through their means of production and distribution, inextricably linked. In fact, we might even see the video game as the ultimate expression of what happens when these two worlds collide.

Art in the age of mechanical reproduction, to invoke the famous line of another 20th-century thinker, Walter Benjamin, surely requires the artist to be both artist and scientist. That as players we see the product of the collision between art and science, rather than the constituent elements that contribute to it, is a tribute both to the adaptability of the medium and to our own adaptability as users. As we’ve become more technically adept, games have become easier to use and more redolent of the kind of leisure activity we’re familiar with from other forms of entertainment.

Any survey of the existing marketplace reveals a surfeit of games that look for all the world like interactive movies: big-budget thrillers, horrors, or adventures that utilize many of the techniques and tropes familiar from the flickering and dancing images available on the cave walls of your local multiplex. Beyond those games that simply look like films, such as the recent Splinter Cell: Pandora Tomorrow and Manhunt, there are those games actually based on cinema releases. The merchandising onslaught for big-budget Hollywood movies invariably includes, amidst the themed bathroom soap, specially repackaged baked beans, and iconic mouse mats, a licensed video game of some sort. My shelves are packed with polygon iterations of Hollywood celluloid, some of which exhibit strong and fruitful genetic connections with their movie parents, like Spider-Man, Rocky and The Lord of the Rings: The Two Towers. Alternatively, many, many movie licenses-turned-games feel like the predictable outcome of kissin’ cousins taking their tactility one step too far: that lamentable six-toed runt of a game Enter the Matrix is a recent and particularly distressing warning to those tempted by incestuous cross-media breeding. Watch the gene pool, folks.

Whether a game is a successful reinterpretation of an existing cinematic work or instead employs techniques we recognize from film, contemporary games are guilty by association with movies, the pre-eminent mass entertainment form (along with TV) of this and the last century. But whereas movies are sometimes considered art, seldom is the same claim made for games. Neither do we think of video game players in the same way we might think of cinema buffs, jazz fans, or regular connoisseurs of art galleries. But if the wider culture’s image of a game player largely fails to acknowledge that art appreciation might have some part in the pastime, other stereotypes hold fast.

Historically, game players have often been characterized by non-gamers (and, indeed, by gamers themselves) as geeks, bereft of the social skills necessary for maintaining real relationships with real people in the real world. According to the stereotype, gamers are male adolescents, often spectacle-wearing and overweight, and invariably possess a penchant for heavy metal music. This caricature may sound extreme, but anecdotally speaking, I’m not convinced that this particular view of gamers has subsided, despite what the facts of game consumption might otherwise tell us.

Set aside the small detail that the increasingly surreal nature of the violent reality happening around us arguably makes greater involvement in the “real” world a fairly unedifying proposition. Consider instead that the stereotypical construction of the nerdy male adolescent game player runs counter to the facts: as the British academic James Newman highlights in his comprehensive new book, succinctly entitled Videogames and published by Routledge (March 2004), the IDSA (Interactive Digital Software Association) puts the average age of a game player at 28 years old.

And game players aren’t always male. In the past, media reports would periodically note the popularity of games as diverse as Pacman and Doom amongst female players in duly patronizing, astonished tones. But female usage of games is more varied and widespread than the mainstream media generally leads us to believe. Dr Kathryn Wright, in her article Video Gaming: Myths and Facts, available at the website WomenGamers.com, indicates that “. . . 43% of PC gamers and 35% of console gamers are women.” If we keep going at this rate, the male science-geek caricature of the proto-Mr Magoo with the Darkness fixation should crumble in, say, the next 50 years or so. Why, then, does the stereotype still obtain?

My suspicion is that it’s a hangover from the past. Older game players, those of us who earned our stripes trying to load often reluctant, sometimes downright petulant games from cassette tape into the teeny weeny RAM of our struggling Commodore 64, ZX Spectrum or Atari 800, saw the science that underpinned our game experiences far more clearly than most contemporary game players ever get to. We could even type in programs from game magazines that, on rare occasions, actually transformed into working, albeit extremely simple, games. Game players of yore really did fit the Dungeons and Dragons-playing uber-nerd M.O., because we were the ones with access to the machines, and we had the time to learn the idiosyncrasies of the software and hardware. Crucially, in Britain at least, we were also the ones who benefited from an educational system in which science and computing education was still heavily targeted at males.

Contemporary console games are presented in ultra user-friendly terms, involving only the insertion of a glittering silver disk into the machine’s waiting maw, thereby rendering gameplay an easy, consistently smooth activity. A contemporary game user generally waits a limited time before the machine yields up some more or less well-realized virtual environment, which then might or might not dazzle in terms of gameplay and graphical and audio sophistication. Forget having to be subjected to the flickering loading screens and noises that made the run-up to gameplay on first-generation home computers reminiscent of KGB interrogation procedures. As a British Prime Minister once opined, “You’ve never had it so good”.

Ask makers of games if they think games are science or art, and they might well answer both, perhaps also insisting on the primacy of “commerce” in the definition. Gone are the days when games were created by one, two, or three individuals in their laboratories, bedrooms, or garages. Contemporary games are team efforts, and they command budgets that would be the envy of independent film-makers, if not quite Hollywood. Take a look at the job descriptions on offer at the leading industry watering holes, and you soon see the level of specialization now required by game developers. A quick survey of vacancies at Gamasutra.com, GameJobs.com, or in the pages of magazines like the British publication Edge reveals manifold kinds of game jobs. But despite the contention that arts and sciences disciplines are of equal importance in the creation of video games, the dominance of science is very much the reality of this industry, and it is evident as soon as you read the prerequisites in the job descriptions.

Aside from the positions requiring high-level knowledge of various programming languages, design positions also require knowledge of specific design software. This is not an industry that takes well to the idea of an artistic muse separate from technological savvy: if you’ve got mathematics or science on your side, you’ll do well. Without these skills, it’s a trickier proposition altogether; not impossible, but trickier.

Do The Two Cultures really exist? Video games suggest not: we may be altogether more Renaissance than we give ourselves credit for. On the one hand we have video game players: art lovers who aren’t allowed to say it and science buffs who don’t realize it. On the other hand we have the creators of games: mathematicians and scientists who are really artists, and artists who are really scientists.