The Sounds of Now

Tristan Murail and Sounding Stasis

by Chadwick Jenkins

1 July 2008

What happens to the ear when it receives musical sound? Do we hear "our" music as music and the rest as noise?

What happens to the ear when it receives musical sound? That is, how does the ear differentiate between musical sound and sound in general? Is it a purely cultural concern? Do we hear “our” music as music and the rest as noise? Or would we recognize something as music even if it were not like “our” music (whatever that music might be)? Is it purely an aesthetic question? Do I choose to listen to certain things as music? When I listen to raindrops striking the windowsill, do I choose to listen to these sounds as music—thereby wresting them from the world of mere sounds and making them music merely by virtue of listening to them as music? Or is there a difference between listening to something as music and listening to music? Finally, is it possible that there is something about the physical makeup of our bodies (and, in particular, our faculty of hearing) that constrains what sounds we will accept as musical? In other words, is the physicality of our ability to hear constitutive of music as opposed to the ear being simply the passive recipient of music?

The interest in the connection between the hearing faculty of the human being and the composition of music dates back to pre-Socratic thought. In the discourse surrounding the other arts, concern with the interaction between the senses and the art form has never attained the level of scrutiny and contention that it has achieved in discussions of music. Although poetry is undoubtedly concerned with the sensuous nature of the sounding words and rhythms, it typically utilizes these resources in order to project an image or a series of images that the auditor is asked to contemplate. Even painting generally employs color (which does directly impact the faculty of vision without the mediation of a concept) in order to project an ideal notion of form that the viewer is asked to consider. Such projection calls a concept into play; it asks for what Kant would call a determinant judgment—that is, an application of a concept that we use as a screen through which to understand the artwork. Part of the meaning of the painting is lost if we don’t recognize it as a portrait of Salome and thus subsume it beneath our understanding of her story.

Tristan Murail

Gondwana; Désintégrations; Time and Again

(Disques Montaigne)
US: 18 May 2004
UK: 1 Mar 2004

(Notice that this kind of judgment does not play such a decisive role in abstract painting nor in poetry that eschews sense in favor of pure sound. It is interesting to note, however, that in both cases, the progenitors of those approaches claimed that they were bringing their arts closer to the state of pure music!)

Music has always borne a closer relationship to its material. Some would argue that music (without the assistance of text) is incapable of presenting a concept at all, that all meaning in music is “purely musical meaning” and derives from the interaction of pitches and rhythms. Thus it is inherently non-conceptual (whether that implies that musical meaning is beneath or above conceptual meaning depends on the person making the argument). Even those who prefer to grant music the possibility of conceptual meaning on some level would be hard-pressed not to agree that it does not bring determinant judgment to bear in the manner of representational painting.

Without a clear conceptual paradigm against which one might judge the relative worth of musical production, many writers have turned to the apparatus of hearing itself as a means of determining value. Pythagoras and his many intellectual descendants claimed that musical relationships derived from simple numerical ratios—indeed only what are called multiple (e.g. 3:1) and superparticular (e.g. 4:3) ratios, using only the numbers one through four. Other ratios might be used in music, but they were not considered to be consonant and therefore were not musical in the strictest sense. The basic foundation of this notion was that the universe was constituted of these simple mathematical relationships. Inasmuch as a human being was the microcosmos to the macrocosmos of the universe, the same simple relationships that explained the universe also accounted for the combination of body and soul, the parts of the body, the parts of the soul, and so on. Since like is attracted to like and since the consonances represented the harmony that guaranteed the coherence and stability of the universe, we naturally heard consonant sounds (defined in this sense) as musical and other sounds as noise.
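The Pythagorean restriction can be made concrete with a short sketch. The following Python snippet (my own illustration, not drawn from any historical source) enumerates the ratios built from the numbers one through four and keeps only the multiple and superparticular ones:

```python
from fractions import Fraction

def pythagorean_consonances(limit=4):
    """Ratios the Pythagoreans accepted as consonant: multiple ratios
    (n:1) and superparticular ratios ((n+1):n), built only from the
    numbers 1 through `limit`."""
    found = set()
    for a in range(2, limit + 1):
        for b in range(1, a):
            r = Fraction(a, b)            # reduces 4:2 to 2:1 automatically
            if r.denominator == 1:        # multiple ratio, e.g. 3:1
                found.add(r)
            elif r.numerator - r.denominator == 1:  # superparticular, e.g. 4:3
                found.add(r)
    return sorted(found)

consonances = pythagorean_consonances()
# yields the fourth (4:3), fifth (3:2), octave (2:1),
# octave-plus-fifth (3:1), and double octave (4:1)
```

Every ratio outside this short list—however useful in practice—fell outside the strict definition of the consonant, and hence of the musical.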

With the advent of polyphony in the Middle Ages and its further refinement in the Renaissance, the issue of dissonance became increasingly critical. Dissonance was often viewed as something necessary for good composition and yet was not essential to the notion of music itself. That is, music was defined in the Aristotelian manner as essentially consonant and contained dissonance as a non-essential accident (again in the Aristotelian sense, meaning an attribute that is not part of the essence of the thing). Accordingly, dissonance was to be treated with the utmost care. One had to prepare and resolve dissonance so that its presence would nearly go unnoticed. This was explained through recourse to the Aristotelian understanding of sense perception. Consonant intervals blend together to create a single object while dissonant intervals refuse to do so. They simply do not harmonize and are therefore less musical—not simply because of the nature of their ratios but, more importantly, because of the way in which our faculty of hearing processes intervals with relatively complex versus relatively simple ratios.

The most thoroughgoing attempt to relate musical aesthetics to the physical facts of aural perception was Hermann Helmholtz’s treatise On the Sensations of Tone as a Physiological Basis for the Theory of Music. Helmholtz’s principal achievement was in demonstrating that the manner in which the ear transformed vibrations into the sensation of tones had a profound impact upon the way that we understand musical sound. One of the most important aspects of this demonstration was his discussion of “roughness” to account for the sensation of dissonance. The ear recognizes consonance as relatively smooth and stable because the constituent tones within a consonance do not give rise to interference patterns, or beating. Dissonance, on the other hand, is perceived as relatively unpleasant and unstable because it does cause interference patterns and is therefore comparatively rough.
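Helmholtz’s notion of roughness can be illustrated with a little arithmetic: two pure tones beat at a rate equal to the difference of their frequencies, and a dissonant interval between complex tones puts many partials close enough together to beat audibly. The sketch below is an illustration of the principle only—the 20 Hz proximity threshold is an arbitrary choice of mine, not a figure of Helmholtz’s:

```python
def beat_rate(f1, f2):
    """Beat (amplitude-fluctuation) rate between two pure tones, in Hz."""
    return abs(f1 - f2)

def partial_beats(fund1, fund2, n_partials=6, threshold=20):
    """For two complex tones, list the beat rates between every pair of
    partials close enough to interfere (within `threshold` Hz)."""
    p1 = [fund1 * k for k in range(1, n_partials + 1)]
    p2 = [fund2 * k for k in range(1, n_partials + 1)]
    return sorted(beat_rate(a, b) for a in p1 for b in p2
                  if 0 < abs(a - b) <= threshold)

# A just fifth (220 Hz against 330 Hz): coinciding partials produce no beats.
# Mistune the fifth slightly (220 Hz against 335 Hz) and beating appears.
smooth = partial_beats(220, 330)
rough = partial_beats(220, 335)
```

On this picture, consonance is literally smoothness: the partials of the two tones either coincide or stay well apart, so no interference pattern disturbs the ear.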

Throughout the history that I have here so inelegantly telescoped, there have been reactions against the notion that music finds its foundation within the physiology of the ear. Indeed, many composers and writers have insisted that music has an obligation to move beyond those boundaries that have been attributed to (some might say “blamed” on) our physical limitations. Different motivations for such an endeavor have been cited—such as the need to better represent the panoply of emotion, or the desire to articulate a greater range of formal possibilities—but the most common justification is that music as an art cannot and should not remain static. It must continue to progress.

It was, in part, this insistence upon the necessary evolution of music that contributed to the eschewal of tonality altogether in the early 20th century and the concomitant loss of a ready audience for contemporary composition. This glorification of aesthetic evolution has recently been dubbed “Schoenberg’s Error” in a 1991 monograph by William Thomson. Thomson argues that Schoenberg (seen as the father of modern music, the prime mover behind the embrace of atonality, and the inventor of the 12-tone method of composition) simply breached the bounds of what could properly constitute music by dismissing music’s natural basis in tonality. The tonal system, according to this point of view, reflects something inherent in our makeup and therefore is indispensable in any attempt to create a musically sound composition.

To criticize Schoenberg in this manner is to completely lose sight of the utopian zeal behind his music. The point was not simply to capitulate to what one is comfortable hearing but rather to challenge the listener to go forward, to embrace what one ought to be able to hear, to expand one’s abilities to hear finer relationships, deeper levels of coherence. Nevertheless, Thomson’s criticism of Schoenberg’s 12-tone music is by no means the only one. A far more interesting response came not from a critic but rather from a group of mostly French composers that emerged in the 1970s. The work produced by these composers came to be dubbed “spectral music”.

The spectral composers employ far too many diverse techniques and approaches to constitute anything resembling a “school” of composition, but they tend to share one abiding concern: the re-investigation of the natural basis of sound—not necessarily as a justification of the sound of music but rather as a compositional resource for the further exploration of sonic possibilities. These composers tend to make compositional decisions based on an analysis of sound spectra (hence, spectral music). That is, they return to the very same phenomenon that fascinated other seekers after the “natural” in music (including Rameau and Helmholtz): the overtone series.

Every pitched sound gives rise not only to the fundamental pitch but also to a host of higher pitches that result from its vibrations and can be calculated as whole-number multiples of the fundamental frequency. As an example, take A110. This nomenclature signifies a string (or column of air, etc.) that vibrates at 110 cycles per second. When a string vibrates at that frequency, we hear it as the A contained in the lower region of the piano (for instance). However, when a string vibrates at 110 cycles per second, it simultaneously vibrates at twice that speed, three times that speed, four times, etc. This gives rise to the series: 110; 220; 330; 440; 550; 660; etc. But each of these numbers corresponds to a pitch—hence 110 (A); 220 (a); 330 (e’); 440 (a’); 550 (c#’’); 660 (e’’). This is as far as Rameau and others tended to go. In this way, they were able to derive the major triad and claim that consonance and tonal music were natural and therefore the proper foundation of music as a whole.
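The arithmetic of the series is easy to verify. This sketch computes the first six partials of A110 and labels each with the nearest equal-tempered note name—keeping in mind that harmonic partials such as 550 Hz only approximate their tempered neighbors (the equal-tempered c#’’ is about 554.4 Hz):

```python
import math

# Note names ascending from A, so index 0 corresponds to A itself.
NAMES = ["A", "A#", "B", "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#"]

def harmonic_series(fundamental, n=6):
    """First n partials of a fundamental frequency, in Hz."""
    return [fundamental * k for k in range(1, n + 1)]

def nearest_note(freq, ref=440.0):
    """Name of the equal-tempered pitch class nearest to freq (A440 tuning)."""
    semitones = round(12 * math.log2(freq / ref))
    return NAMES[semitones % 12]

labels = [nearest_note(f) for f in harmonic_series(110)]
# 110→A, 220→A, 330→E, 440→A, 550→C#, 660→E: the notes of the A major triad
```

The first six partials spell out the major triad—precisely the result that let Rameau and his successors claim a natural basis for tonal harmony.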

The spectral composers, however, go much farther—into the reaches of the overtone series where things get very complicated, very quickly. They are able to analyze the upper reaches of the overtone series and the specific spectra of various instruments by employing computer analysis. This analysis is then used as a basis upon which to create a composition. Furthermore, these composers are interested in how different instruments manifest different spectra by emphasizing certain overtones more than others. The specific spectrum of an instrument contributes to its timbre (or tone color). Thus we hear a clarinet as a clarinet in part because it has the intriguing characteristic of suppressing every other overtone (its even-numbered partials are notably weak). The pristine purity of the flute owes at least some of its clarity to the emphasis on the first overtone (the octave above the fundamental).
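The point about instrumental spectra can be caricatured in a few lines. This toy sketch (an invented amplitude profile, not measured data—real spectra vary with register and dynamics) contrasts a generic full spectrum with a clarinet-like one in which the even-numbered partials are suppressed:

```python
def spectrum_amplitudes(kind, n_partials=8):
    """Toy amplitude profiles: a 'full' spectrum with a simple 1/k rolloff
    versus a 'clarinet' profile that zeroes out the even partials.
    Purely illustrative; not derived from any instrument measurement."""
    amps = []
    for k in range(1, n_partials + 1):
        a = 1.0 / k                      # generic rolloff with partial number
        if kind == "clarinet" and k % 2 == 0:
            a = 0.0                      # suppress even-numbered partials
        amps.append(a)
    return amps

full = spectrum_amplitudes("full", 4)        # all four partials present
clarinet = spectrum_amplitudes("clarinet", 4)  # every other partial silenced
```

Two tones with the same fundamental but these two profiles would be heard as the same pitch with strikingly different colors—which is just the timbral difference the spectralists turn into harmony.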

In many cases, spectral composers will take the spectrum generated by a specific sound source and use a portion of it as a harmonic resource. Because of their interest in the upper reaches of the overtone series, these foundational harmonies are often relatively dissonant (hence, this is not a return to tonality). This transference of spectrum to harmony means that what had been part of the timbral makeup of a specific sound (sometimes even a sound specific to a given range of a particular instrument) becomes a sonority—a chord. If taken as a justification for the “naturalness” of music or of this music, this is a fatal error. It is not, however, an isolated error of this type. Henry Cowell famously equated pitch and rhythm. He suggested that since a given pitch was simply a frequency that could be calculated as a particular number of cycles of vibration per second (remember A110), then that implied that pitch could be subsumed beneath rhythm. Therefore, since 5:4 is a consonant interval (specifically, a major third), then a rhythm that juxtaposed five beats against four ought also to be considered a rhythmic consonance, despite the fact that western music has typically treated such juxtapositions as relatively rare and special events. Similarly, the upper reaches of the overtone series, while they may indeed contribute to the timbre of the fundamental sound, are not to be equated with a natural foundation for musical expression. This is a confusion of categories, at best.
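Cowell’s equation of pitch and rhythm amounts to simple arithmetic: a 5:4 frequency ratio and a five-against-four polyrhythm both repeat their combined pattern exactly once per common cycle. A sketch of the rhythmic side (my own illustration of the analogy, not Cowell’s notation):

```python
from math import gcd

def polyrhythm_grid(a, b):
    """Onset times, in subdivisions of one common cycle, for an
    a-against-b polyrhythm -- the rhythmic analogue of the frequency
    ratio a:b in Cowell's proposal."""
    cycle = a * b // gcd(a, b)                    # least common multiple
    voice_a = [cycle // a * i for i in range(a)]  # a evenly spaced onsets
    voice_b = [cycle // b * i for i in range(b)]  # b evenly spaced onsets
    return voice_a, voice_b

# Five against four: the voices coincide only at the start of each cycle,
# just as a 5:4 frequency ratio completes its combined waveform once per cycle.
va, vb = polyrhythm_grid(5, 4)
```

The two voices share only the downbeat—one coincidence per twenty subdivisions—which is exactly why Western practice hears 5-against-4 as a special event rather than a “consonance”, whatever the analogy to the major third may suggest.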

However, the spectral composers need not rely upon such flimsy argument to make their case. As is true of any compositional resource, what is important is not the manner of justification but rather the aesthetic result of the compositional process. In this sense, the best justification for spectral music is the overwhelming quality of the music produced. A case in point is the compositional output of Tristan Murail.

Murail has produced a great deal of music and I would in no way reduce his output to a few limited techniques. His catalogue of works contains several pieces that do not employ the spectral approach (a personal favorite that falls into this non-spectral category is Vampyr!, written for electric guitar). Indeed, Murail himself has seemed somewhat uncomfortable with the label “spectral music” in relation to his compositions and I would grant that his suspicion of the term transcends a mere composerly discomfort with labels. Nevertheless, even those pieces that are not, strictly speaking, spectral demonstrate an overriding concern with timbral effects and the (usually slow) exploration of a sonority over an extended period of time. Perhaps this is what Murail meant when he defined the spectral aesthetic (regardless of any specific sets of techniques) as the belief that “music is ultimately sound evolving in time”.

Tristan Murail

However, Gondwana, his 1980 work for orchestra, often cited as one of the purest manifestations of the spectral approach, strikes me not so much as a study of music’s evolution over time but rather as a reconfiguration of our understanding of what constitutes the temporal aspect of music altogether. In fact, what moves me about this music is precisely that it does not feel like an evolutionary process in the manner of, for instance, Schoenberg’s notion of developing variation—where we follow the gradual changes in a set of motives and/or harmonies. Gondwana, despite the descriptions of other critics and Murail’s assertions to the contrary, does not plot a trajectory of motion through time (at least not in my hearing of the piece) the way the majority of musical works do. Rather this piece (along with other works by Murail from the surrounding years) captures a sense of stasis more successfully, and far more intriguingly, than compositions in other styles manage to do. Thus this music operates in defiance of one of the most well-worn notions concerning music (a notion that Murail obviously endorses): that music is by its very nature not static; it is the unfolding of sound across time, and when it comes to represent stasis, it does so through motion. This is, of course, precisely why Kierkegaard used music as the embodiment of his notion of immediacy—it is always in motion, always becoming; it never attains any semblance of permanence.

Now it is true, in the trivial sense, that Gondwana involves roughly 17 minutes of orchestral instruments performing sonorities, trills, scalar passages, and so on. Furthermore, I am far from attempting to suggest that the piece lacks coherence. Of course, if the work were simply a concatenation of various fragments of sound, lacking all coherence, then it could not possibly resemble stasis inasmuch as one thing would be constantly replaced with another, creating, perhaps not an evolution of sound, but certainly a trajectory that would make the listener aware of the temporality of the music at hand. Moreover, in a less trivial aesthetic sense, the first sounds of Gondwana indeed connect to the final sounds of the work. This should strike any reader, at first glance, as a description of music unfolding or evolving over time.

But that is not how the piece sounds—at least not to me. And the reason for the music’s sense of stasis is not so much the choice of sonority as it is Murail’s use of the sonority. Throughout Gondwana, the sonorities employed are not simply the result of the surface articulations but rather they seem to be an ever-present background configuration that hovers (almost audibly) behind all of the surface articulations. Thus the surface articulations do not give rise to the sonorities so much as the sonorities serve as the condition of the possibility of the various surface articulations.

Let us employ a visual analogy to clarify my understanding of this music. Imagine a rather large spider web, stretched out over an expanse of space on the edge of a wooded area where the sun comes through in shards of light broken up by the intertwining of branches and leaves. When you stand in a certain relation to the spider web, you cannot see it at all but if a breeze should happen to stir then a certain portion of it will appear momentarily within a beam of light and then a different portion of the web appears. As various bits of leaves fall toward the ground, some of them stick to the surface of the web and while you cannot see the web itself, you know where it is based on these hovering fragments. Early in the morning, a thin glaze of morning dew sticks to the surface and again articulates those portions of the underlying design that interact with the light.

This is how I imagine the achievement of Murail’s Gondwana. Those sonorities are always already there—almost, but not quite, palpable at all times. The music isn’t evolving any more than the spider web (which I imagine not to be under continual construction) evolves. Rather each sonority serves as an ever-present background that is sounded out (sometimes in part, sometimes as a whole) through the surface articulations of the music. This, of course, does not mean that every sound we hear is a part of the underlying sonority. Just as the web is at times articulated by extrinsic material (such as the fragments of leaves, in my image), so the sonority can be articulated from its interactions with extrinsic musical material. But this is not development—at least not in any recognizable sense. It is a wonderful representation (perhaps even a demonstration) of musical stasis. Each gesture sends a ripple through the underlying web, revealing more and more of its formal structure. Notice that the formal structure here belongs not to the piece but rather to the sonority itself.

I do not find this to be a “natural” form of music despite the reliance upon natural acoustic phenomena (such as the overtone series) as compositional resources. Indeed, at least under my description, this is in many ways a very unnatural approach to musical understanding and yet it is precisely in its rarefied strangeness that this music (and this piece, in particular) strikes me as one of the most fascinating and important achievements of the late 20th century.
