
Auto-Tune In or Out?

Editor’s Note: This article originally ran 19 October 2015. We are re-running some of our best music features this week during SXSW.

What a fuss people make about fidelity!

— Oscar Wilde

The development of modern pop music is inextricably linked to the development of music technology. Over the last 50 to 60 years, pop and rock musicians have learned to create, perform and record music using these technologies: the technologies of new musical instruments like electric guitars and analogue synthesisers, and the technologies of recording, of splicing, multi-tracking, and overdubbing. Yet some pop music fans and commentators hold on to the notion that artists who use these technologies, especially the pitch-correction technology auto-tune, may not be real or authentic musicians.

This is not actually a new phenomenon. The rise of the synthesiser in the ’70s, for example, was first seen as a challenge to musicianship, especially when the technology got into the hands of those who weren’t obviously keyboard virtuosos. This time, however, the detractors seem more vociferous in their dislike.

Those who would doubt the influence of music technology on the development of pop music need to remember that rock music would not be possible without the invention of the electric guitar, nor electronic dance music without purely electronic instruments. The voice can be understood in this context as just one more instrument that contributes to a musical performance. What makes the human voice so important that it shouldn’t be subject to some of the effects that all the other instruments are subject to?

I would argue that real musicianship must include an understanding of music technology and the creative opportunities it offers, including auto-tune.

Sound, Music, and Acoustics

You may have heard this question: does a leaf falling in the forest make a sound if there is no one there to hear it? The question is designed to illustrate that sound is the result of waves passing through the air and reaching the human ear; it points to the physical properties of sound, the requirement of a source, a medium, and a receptor. So the answer to the question about the falling leaf is yes, provided someone has left audio recording equipment in the forest.

Music is a particular organisation of the air molecules that pass from a sound source to the human ear. Some composers and musicians seem to be better at organising those air molecules than others.

Andy Hildebrand wasn’t recording leaves in the forest but sounds beneath the surface of the earth when he discovered how to auto-correct the pitch of a sound. Hildebrand worked for Exxon and was involved in oil exploration, one aspect of which is measuring the seismic response to vibrations sent through the earth. Cleaning up and adjusting the recorded data was required to determine whether oil had been found. Bear in mind that Hildebrand was also an accomplished flautist who knew a great deal about music; he saw the opportunity and repurposed his auto-correction technology.
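To make the idea concrete, here is a minimal sketch in Python (with numpy) of autocorrelation-based pitch detection, the family of technique that links seismic analysis to vocal tuning. It is only an illustration under my own assumptions; the function name and parameters are invented, and Hildebrand’s actual algorithm was far more sophisticated.

```python
import numpy as np

def estimate_pitch(frame, sample_rate, fmin=80.0, fmax=1000.0):
    """Estimate the fundamental frequency of a mono audio frame by
    finding the lag at which the signal best correlates with itself."""
    frame = frame - np.mean(frame)              # remove any DC offset
    corr = np.correlate(frame, frame, mode="full")
    corr = corr[len(corr) // 2:]                # keep non-negative lags

    # Only search lags corresponding to plausible vocal pitches.
    min_lag = int(sample_rate / fmax)
    max_lag = int(sample_rate / fmin)
    best_lag = min_lag + np.argmax(corr[min_lag:max_lag])
    return sample_rate / best_lag               # frequency in Hz

# Sanity check: a pure 440 Hz sine should come back as roughly 440 Hz.
sr = 44100
t = np.arange(2048) / sr
print(round(estimate_pitch(np.sin(2 * np.pi * 440.0 * t), sr), 1))  # ~441.0
```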

Hildebrand’s auto-correcting invention, then, is based on the science of acoustics. In music, we might associate acoustics with the environment in which that music is performed. We might contrast the outdoor arena with the concert hall or the recording studio. We are aware, of course, of the different ambiences of these environments and the effect they have on what we hear.

Acoustics are also relevant to the environment in which music is recorded. Some recording studios have a better sound than others. This was true of Motown, for example, where the physical properties of the sound studio were important to the creation of the Motown sound.

Live and Recorded Music

Yet this distinction between live performance and recorded music is the fundamental divide in our understanding of the importance of music. Early recordings of classical music were designed to create the illusion of a concert hall setting. The idea was to bring the orchestra into your front room, to experience as closely as possible what it was like to hear the music live. The same was said of jazz: recordings should have minimal enhancement and be as ‘realistic’ as possible.

Yet we know in practice how unrealistic this idea actually is. A recording made in a studio will never sound the same as a performance in the concert hall. Recorded sounds are affected by acoustics and by the use of amplification technologies in different environments. The recorded music is different from the live music: whilst the intention may be to record the music, the effect is to change it.

As Mark Katz suggests in his great book on audio recording, Capturing Sound, we have to accept that all recorded sound and music is mediated by being recorded. Stravinsky may have described records as “chiefly useful as a mirror”, but in reality there is a whole gamut of technological manipulation that means recordings are anything but mirrors of the original sounds. Katz calls this the “phonographic effect”, and it manifests itself in many different ways.

Phonographic Effect

Recorded music offers some advantages over live music. Firstly, it’s repeatable and, depending on the quality of the playback device, will play back as recorded. Secondly, recorded music is portable: the musicians are no longer visible or indeed required, and the music can be experienced in quite different locations.

Thirdly, the medium that records the sound has an influence. Where live music may continue for some time, recorded music imposes limitations. An early example of this effect was Stravinsky himself, who tailored his ‘Serenade for Piano’ of 1925 to fit onto a 78-rpm record. The hegemony of the three-minute pop song remains to this day, yet it only exists because that was the recording time available on the original 78-rpm records. It is now simply the expectation of the listener and the radio DJ that songs will last about three minutes. In this respect, the technology has influenced the form of the music. Albums were originally collections of 78s, most often 10-12 songs collected together and released as a set. In these modern days of digital downloads and streaming services, why does the album form persist? Because of the influence of music technology on how we appreciate the form of the music.

The Impact of Recorded Music on Aesthetics

It was Walter Benjamin’s notion that recorded sound and its mechanical reproduction change the nature of musical aesthetics. Before recorded sound, every performance was unique; after recorded sound, every playback is the same. The variations and mistakes are lost.

The process of recording includes multi-tracking, mixing, splicing, sonic effects, and other basic techniques that have no counterpart in the “natural” live environment in which music was performed and experienced before recording. This shows how malleable recorded music can be.

Hildebrand himself, in answer to the question of whether auto-tune is evil, has said:

There is nothing natural about recorded music. Recorded music is a composite of sounds that may or may not have happened in real time. An effect is always achieved, even a purely live recording is a distortion and paraphrasing of an acoustic event.

But let’s consider some of the other effects we take for granted on recordings. Splicing means that different recorded takes of a singer or player can be combined into a single recording. Overdubbing means that a new track can be recorded over an existing one. It is closely related to multi-tracking, which means recordings of separate musical instruments can be held on different tracks and mixed with each other. This is what allowed artists like The Beach Boys and The Beatles to create complex musical arrangements like “Pet Sounds” and “Sgt Pepper” in the ’60s.
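As a toy illustration of what overdubbing and multi-tracking amount to in the digital domain, here is a short Python sketch, again with numpy; the notes and gain values are arbitrary choices of mine, and a real mix involves far more than a weighted sum.

```python
import numpy as np

sr = 44100                    # CD-quality sample rate
t = np.arange(sr) / sr        # one second of sample times

# Two parts recorded separately, each held on its own track.
bass = 0.4 * np.sin(2 * np.pi * 110.0 * t)     # A2, recorded first
melody = 0.3 * np.sin(2 * np.pi * 440.0 * t)   # A4, overdubbed later

# At its simplest, mixing multi-tracked recordings is a weighted sum
# of the individual tracks into one composite signal.
mix = bass + melody
assert np.max(np.abs(mix)) <= 1.0              # stay below clipping
```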

The rise of the synthesiser from the late ’60s into the ’70s heralded a whole new understanding of what might be possible with recorded music. The synthesiser meant that music could have a purely electronic source. Nor can we ignore the twin influences of African American music culture and European electronic music on the development of pop music after the ’70s: ambient music, house music, hip hop, trance, and so on.

To comprehend electronic music, it is essential to understand the technology on which it is created and performed. Without that technology, much of this music would not exist or have any meaning. Musicianship, in this case, must embrace an understanding of the possibilities of music technology.

Virgil Moorefield outlines in his book The Producer as Composer how this relationship between musician and recording technology ultimately leads to Brian Eno’s idea of the recording studio as musical instrument and the music producer as the main composer. Examples include the work of Brian Wilson with The Beach Boys, George Martin with The Beatles, Brian Eno with Bowie and U2, and Martin Hannett with Joy Division. The works of these producers and artists show how completely removed recorded music is from live performance, and that recorded music is the art form.

The phonographic effect, though, is not limited to how producers and engineers manipulate electronic sounds and music so that the recorded music is quite different from live music. It also works on how listeners understand and consume music. Recorded music is how we understand popular music, how we appreciate it. The reputation of The Beatles is built on their recorded works, not recordings of their live work; they didn’t play live at all for their last three years. It is the aesthetics of recorded music that shapes our appreciation of pop music, and it is all subject to technological manipulation of different sorts. One of those forms of manipulation is voice processing.

Voice Processing and Auto-Tune

John Lennon may not be the first singer who comes to mind when considering voice processing, but like many artists he had serious worries about the quality of his singing and wanted to sound better on record. That is, it’s not that Lennon couldn’t sing; it’s just that he wanted his voice to sound better when recorded. One example is the overdriven vocal on ‘I Am the Walrus’. Perhaps a better example in this context is ‘Working Class Hero’, which was produced by Phil Spector: one verse of the recorded version comes from the original demo that Lennon made at home. The magic of tape editing meant that the versions were seamlessly spliced together to give the impression of a single performance.
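A digital version of that razor-blade edit is easy to picture. The hypothetical function below, sketched in Python with numpy, joins two aligned takes with a short linear crossfade so the edit point is inaudible; the fade length is an illustrative choice, not any editor’s actual default.

```python
import numpy as np

def splice_takes(take_a, take_b, cut, fade=256):
    """Use the start of take_a and the end of take_b, joined at sample
    `cut` with a linear crossfade: the digital equivalent of a diagonal
    razor cut across tape. Assumes equal-length, time-aligned takes."""
    ramp_out = np.linspace(1.0, 0.0, fade)     # take_a fades out...
    ramp_in = 1.0 - ramp_out                   # ...as take_b fades in
    blend = take_a[cut:cut + fade] * ramp_out + take_b[cut:cut + fade] * ramp_in
    return np.concatenate([take_a[:cut], blend, take_b[cut + fade:]])
```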

Impression is the operative word here. How many takes are needed to get the best performance out of a vocalist? Producers and engineers will tell you that almost any recorded performance on an album consists of the best of any number of takes, edited together to provide the illusion of a single performance. And that illusion is the main source of our appreciation of the music.

It’s surprising that there’s so much dislike for auto-tune when it has been around for so long. Cher’s ‘Believe’ was a big hit record that featured auto-tune, and that was 1998. Other artists have been successful since, using auto-tune in a creative fashion: 2Pac, Michael Jackson, Daft Punk, and Kanye have all used it as a creative effect. There are a few great examples I can also point to. Would Len’s ‘Steal My Sunshine’ or Hellogoodbye’s ‘Here (In Your Arms)’ have the same emotional impact without those bright, auto-tune-enhanced vocal lines? I think not.

The anti-auto-tuners suggest that there is something inauthentic about this particular method of voice processing. But auto-tune is simply a computer-based effect; is anything produced on a computer not real? Sampling older music should surely be a more insidious practice than using auto-tune, yet look at how much amazing music has been produced from computer-based sampling.
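The effect itself is also mathematically mundane. Here is a bare-bones Python sketch of the conceptual heart of pitch correction: snapping a detected frequency to the nearest equal-tempered semitone. This is my own illustration, not Antares’ actual algorithm; the real thing also controls how quickly the pitch is pulled toward the target, and pushing that retune speed to its extreme is what produces the robotic ‘Believe’ sound.

```python
import numpy as np

A4 = 440.0  # reference tuning, in Hz

def nearest_semitone(freq_hz):
    """Quantise a detected frequency to the nearest note of the
    equal-tempered scale by round-tripping through MIDI numbers."""
    midi = 69 + 12 * np.log2(freq_hz / A4)       # frequency -> MIDI number
    return A4 * 2 ** ((np.round(midi) - 69) / 12)

print(nearest_semitone(450.0))   # ~440.0: a slightly sharp A4, pulled down
print(nearest_semitone(460.0))   # ~466.2: closer to A#4, so pulled up
```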

Pitchy

X Factor, The Voice, Pop Idol, and the other talent shows are essentially singing contests, and they elevate singing over creativity. Perhaps it’s the constant references to performers being “pitchy”, rather than any recognition that mistakes are part of performance, that has confused audiences as well as contestants. I would tell Randy Jackson to go and listen to Marc Almond singing on Soft Cell’s ‘Say Hello, Wave Goodbye’ and tell me pitchiness doesn’t make the performance and the song.

So is a pitch-perfect, note-perfect live reproduction of the recording what modern audiences desire? Is that why there is so much lip-synching in live performance? What’s the difference, anyway? The whole point of live performance is its uniqueness, which includes the acoustic environment and the possibility of errors.

Pop music is essentially about great music, not just great singing. Musicianship is important, yet the electronic production and treatment of musical instruments has made much pop music possible; the voice is one more instrument. Auto-tune opens creative opportunities — Cher, Hellogoodbye, Len, Kanye, T-Pain — why not use them more?

Marcus Smith MA (Pop Cult) is a part-time writer on film and music. One time musician in the ’80s, long time IT Manager, Associate Lecturer for the Open University in the UK. Very interested in the meaning of digital culture. @mgvsmith