The Reverb Beard

Something that I find rather curious is that many of the reverb pioneers sported some seriously impressive beards. Christopher Moore has posted a few beard-heavy pictures on his website (http://www.sevenwoodsaudio.com). Here’s my favorite:

From left to right, you have Christopher Moore (Ursa Major reverbs, AKG ADR 68K), Anthony Agnello (Eventide, Princeton Digital), Wolfgang Schwarz (or Wolfgang Buchleitner, not sure of the name, but the Quantec guy) and David Griesinger (designer of the Lexicon reverb algorithms). An amazing amount of reverb knowledge in one place, and rocking beards that rival ZZ Top, assuming that you put the four beards together to form one super beard like some sort of beard Voltron.

Another picture tosses in Barry Blesser (EMT-250), sporting a scholarly pipe and an even more scholarly beard:

Nowadays, I use the term “Reverb Beard” (or “Reverbskägg” in Swedish) to refer to people who develop reverberation algorithms, or to describe the state of people in the middle of the design process for reverb algorithms. Feel free to use this meme.

Note: I’ve tried to grow the reverb beard before, but it either comes out red, which makes me look like Kris Kringle in “Santa Claus is Coming To Town,” or greyish-red, which makes me look and feel old. So the “reverb beard” is more of a mental state.

EDIT: Chris Randall and Adam Schabtach, of Audio Damage fame, both pointed out to me that the mighty beards of the reverb pioneers were first mentioned on the Music Thing blog in 2004:

http://musicthing.blogspot.com/2004/12/beards-and-music-technology-pioneers.html

Caves, megaliths, and reverbs in the prehistoric world

I have a confession to make: I was an Anthropology major. I took some courses at CCRMA as an undergraduate, but my degree was in Anthropology, with a focus in archaeology. Instead of studying the types of things that make up my work nowadays, like electrical engineering and computer science, I spent my time learning about the relationship of environment to culture, hunter-gatherers from an ecological perspective, the societies of Central and South America, the interaction between the fur trade and religion in Subarctic Canada, stuff like that. My favorite book from that era (or, at the very least, the book with the best title):

So it should come as no surprise that I am fascinated by the sounds of ancient buildings, caves, and other prehistoric constructions and dwellings. The study of ancient acoustics, or archaeoacoustics, covers a variety of sonic phenomena of the prehistoric world, from research into early musical instruments such as bone flutes and percussion instruments, to the possibility that grooves in pottery could have recorded sounds from thousands of years ago.

A number of books have been published on the subject of archaeoacoustics. Paul Devereux, in “Stone Age Soundtracks: The Acoustic Archaeology of Ancient Sites,” provides a high-level summary of the different theories. A recent publication, “Archaeoacoustics,” edited by Chris Scarre and Graeme Lawson, collects a number of articles with a more scholarly bent. Barry Blesser and Linda-Ruth Salter provide an overview of the topic in “Spaces Speak, Are You Listening?” As far as web resources go, the Acoustics and Music of British Prehistory Research Network provides an excellent bibliography and set of links to current research projects.

There are a lot of cool theories that have been posited in the last few decades in the archaeoacoustic field. David Lubman has described how handclaps reflecting off of the staircase of a Chichen Itza pyramid are transformed into a chirping sound very close to the call of the quetzal. Aaron Watson and David Keating have conducted experiments in burial mounds throughout the British Isles, and found that the chambers tend to have a Helmholtz resonance in the 1 Hz to 7 Hz range. Watson and Keating experimented with drumming at a tempo that matched the frequency of the Helmholtz resonances, and have argued that the resulting infrasonics caused subjective effects in listeners, such as elevated pulse and breathing rates. Robert Jahn and Paul Devereux have found that many chambered megalithic tombs had strong resonance frequencies in the 95 to 120 Hz range, which corresponds with the low baritone range of the human voice, and that exposure to this frequency could cause changes in brain activity that correspond to meditative and trance states.
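To get a feel for why a large chamber with a narrow entrance passage resonates at such low frequencies, you can plug numbers into the classic Helmholtz resonator formula. The chamber dimensions below are entirely hypothetical, chosen only to illustrate the order of magnitude:

```python
import math

def helmholtz_frequency(speed_of_sound, neck_area, cavity_volume, neck_length):
    """Resonance frequency (Hz) of an ideal Helmholtz resonator:
    f = (c / (2*pi)) * sqrt(A / (V * L))."""
    return (speed_of_sound / (2 * math.pi)) * math.sqrt(
        neck_area / (cavity_volume * neck_length)
    )

# Hypothetical burial-chamber dimensions: a ~40 m^3 chamber reached
# through a ~1 m^2 passage roughly 5 m long, at c = 343 m/s.
f = helmholtz_frequency(343.0, 1.0, 40.0, 5.0)
print(round(f, 1))  # prints 3.9 -- squarely in the infrasonic range
```

Real chambers are irregular and lossy, so the ideal formula is only a rough guide, but it shows how easily monument-sized cavities land in the single-digit Hz range that Watson and Keating describe.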

Some of the theories, in my opinion, fall under the category of “just-so stories.” Great ideas, cool to think about, and absolutely unprovable. Without the use of a time machine, we have no idea what type of music was being played 20,000 years ago. It is safe to assume that people in the Paleolithic era were reacting to their sonic environment, as the caves and megaliths that ceremonies were (presumably) performed in have quite striking sonic characteristics. Beyond that, if there is no evidence of musical instruments on the site, there is very little evidence of what sorts of sounds were being made within these environments, all talk of resonances and infrasound aside.

While the evidence for what sounds were produced in prehistoric sites is often scant, there is strong evidence of awareness of striking acoustic characteristics in prehistoric times. Iegor Reznikoff has studied the location of Paleolithic art in European caves, and has found a strong correlation between the presence of art or distinctive markings in a given location, and the quality of the resonance in those locations. The resonance was most marked in niches or recesses that were decorated, and Reznikoff argues that it is inconceivable that the person(s) decorating these spaces would not have noticed the striking sonic quality of the space. Steven Waller has found a similar degree of correlation between the placement of rock art and the distinctiveness of the echoes within those locations. It may not be possible to know what sounds were being made thousands of years ago, but there is a fair amount of evidence that our ancestors had strong preferences about where these sounds were made.

In the next blog post, we’ll skip ahead a few thousand years, to discuss recent research conducted on the acoustics of a South American ceremonial site, and how the sonics of that site may have factored into societal control.

Modulation in reverbs: reality and unreality

The use of modulation in digital reverbs dates back to the first commercial digital reverberators. The EMT250 used an enormous amount of modulation, to the point where it sounded like a chorus unit. Lexicon’s 224 reverberator incorporated what they called “chorus” into the algorithms, working along principles not dissimilar to the string ensembles in use at the time. The Ursa Major Space Station was based around an unstable feedback arrangement that relied upon randomization to achieve longer decay times without self-oscillating.

Recently, Barry Blesser has written about randomization in his book, “Spaces Speak, Are You Listening?” Blesser argues that thermal variations in most real-world acoustic spaces result in small variations in the speed of sound within those spaces. Multiply this by several orders of reflections, and the result is an acoustic space that is naturally time varying. Blesser goes on to argue that random time variation in algorithmic reverbs emulates the realities of an acoustic space more accurately than time-invariant convolution reverbs.
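Blesser’s point is easy to make concrete with a back-of-the-envelope calculation, using the standard approximation for the speed of sound in air as a function of temperature (the path length and temperatures below are arbitrary illustrative values):

```python
import math

def speed_of_sound(temp_c):
    """Approximate speed of sound in dry air (m/s) at temp_c degrees Celsius."""
    return 331.3 * math.sqrt(1.0 + temp_c / 273.15)

# Arrival-time shift for a 30 m reflection path when a thermal
# current warms the air along that path by a tenth of a degree:
path_m = 30.0
shift_s = path_m / speed_of_sound(20.0) - path_m / speed_of_sound(20.1)
print(shift_s * 1e6)  # on the order of 15 microseconds
```

A shift of tens of microseconds per reflection is inaudible in isolation, but compounded over hundreds of reflection orders it makes the hall measurably time-variant, which is exactly the behavior Blesser says random modulation is emulating.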

Blesser makes a convincing argument, but I am not convinced that the heavy amounts of delay modulation used in the older reverbs make for a more “realistic” space. The randomization in the older algorithms does a nice job of masking the periodic artifacts that can be found when using a small amount of delay memory. However, the depth of modulation used in the old units goes far beyond what can be heard in any “real world” acoustic space. The thermal currents in a symphony hall will result in a slight spread of frequencies as the sound decays, but will not create the extreme chorusing and detuning found in the EMT250, or in the Lexicon algorithms with high levels of Chorus.
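The distinction comes down to a single parameter: modulation depth. Here is a minimal sketch of a modulated, interpolated delay read — my own illustration, not the algorithm of any unit mentioned here, with a slow sine standing in for the smoothed random sources the hardware used:

```python
import math
import random

class ModulatedDelay:
    """Delay line with a modulated, linearly interpolated read tap."""

    def __init__(self, max_len, base_delay, depth, rate, sample_rate=48000):
        self.buf = [0.0] * max_len
        self.write = 0
        self.base = base_delay      # nominal delay in samples
        self.depth = depth          # modulation depth in samples
        self.phase = random.uniform(0.0, 2.0 * math.pi)
        self.inc = 2.0 * math.pi * rate / sample_rate
        self.max_len = max_len

    def process(self, x):
        self.buf[self.write] = x
        # Slow sinusoid standing in for a smoothed random modulator.
        self.phase = (self.phase + self.inc) % (2.0 * math.pi)
        delay = self.base + self.depth * math.sin(self.phase)
        read = (self.write - delay) % self.max_len
        i0 = int(read)
        frac = read - i0
        i1 = (i0 + 1) % self.max_len
        y = (1.0 - frac) * self.buf[i0] + frac * self.buf[i1]
        self.write = (self.write + 1) % self.max_len
        return y

# Depth of a fraction of a sample ~ subtle "thermal" drift;
# depth of tens of samples ~ audible EMT250-style chorusing.
d = ModulatedDelay(max_len=4800, base_delay=1000, depth=0.5, rate=0.7)
out = [d.process(1.0 if n == 0 else 0.0) for n in range(2000)]
```

With `depth=0.5` the read tap wanders within half a sample of its nominal position, which only smears the phase slightly; crank the depth to tens of samples and the same structure produces the pitch wobble of the early units.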

Having said that, I would argue that the strength of algorithmic reverbs lies not in emulating “real” acoustic spaces, but in creating new acoustic spaces that never existed before. Blesser recently said that the marketing angle of the EMT250 was to reproduce the sound of a concert hall, but later describes the EMT250 in terms of a “pure effect world.” The early digital reverbs, in the hands of sonic innovators such as Brian Eno and Daniel Lanois, were quickly put towards the goal of generating an unreal ambience, where sounds hang in space, slowly evolving and modulating. Listen to Brian Eno’s work with Harold Budd, on “The Plateaux of Mirror,” to hear the long ambiences and heavy chorusing of the EMT250 in action. A later generation of ambient artists made heavy use of the modulated reverb algorithms in such boxes as the Alesis Quadraverb to create sheets of sound that bear little resemblance to any acoustic space found on earth.

Creating these washy, chorused, “spacey” reverbs has been a pursuit of mine since 1999. My early Csound work explored relatively simple feedback delay networks with randomly modulated delay lengths, in order to achieve huge reverb decays that turn any input signal into “spectral plasma” (a term lifted from Christopher Moore, the Ursa Major reverb designer). With my more recent work, I have tried to strike a balance between realistic reverberation and the unrealistic sounds of the early digital units. The plate algorithms in Eos are an attempt to emulate the natural exponential decay of a metal plate, but were also inspired by my understanding of the EMT250. The Superhall algorithm in Eos was not attempting to emulate any “natural” space, but rather the classic early digital hall algorithms, with heavy randomization, nonlinear build of the initial reverberation decay, and the possibility of obtaining near infinite decays. The “real” world continues to be a source of inspiration for my algorithms, but I find myself more attracted to the unreal side.
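For readers curious what “a simple feedback delay network with randomly modulated delay lengths” looks like in practice, here is a toy sketch — not my Csound code or anything shipped in Eos, just the general structure, with crude per-sample integer tap jitter standing in for smooth random modulation:

```python
import random

def fdn_reverb(x, delays=(1031, 1327, 1523, 1877), feedback=0.85, mod_depth=3):
    """Toy 4-line feedback delay network: mutually prime delay lengths,
    an orthonormal (Hadamard/2) feedback matrix, and randomly jittered taps."""
    bufs = [[0.0] * (d + mod_depth + 1) for d in delays]
    idx = [0, 0, 0, 0]
    out = []
    for sample in x:
        taps = []
        for i, dly in enumerate(delays):
            jitter = random.randint(0, mod_depth)  # crude random tap modulation
            taps.append(bufs[i][(idx[i] - dly - jitter) % len(bufs[i])])
        a, b, c, d4 = taps
        # 4x4 Hadamard matrix, scaled by 0.5 so it is energy-preserving.
        mixed = [a + b + c + d4, a - b + c - d4, a + b - c - d4, a - b - c + d4]
        for i in range(4):
            bufs[i][idx[i]] = sample + feedback * 0.5 * mixed[i]
            idx[i] = (idx[i] + 1) % len(bufs[i])
        out.append(sum(taps) * 0.25)
    return out
```

With `feedback` below 1.0 the energy-preserving matrix guarantees a stable exponential decay; pushing it toward 1.0 gives the near-infinite washes described above, with the jitter breaking up the metallic periodicity that a static network of this size would exhibit.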