Stability through Time Variation: Ursa Major Space Station

In 1978, Christopher Moore’s company, Ursa Major, released the Ursa Major Space Station:

The Space Station, or SST-282, was described as a “reverberation effect.” It could apparently get reverb times of up to 3.5 seconds. This may not seem like a particularly long time by modern standards, but it was a huge achievement given the architecture that was used. In the SST-282, the reverb effect was obtained by using a single delay line, with 15 output taps from the delay buffer summed and used for feedback, and an additional 8 taps used to monitor the delay line. Multitap delay lines such as this, where several taps are summed and used for feedback, can quickly reach a high reflection density. However, they are notoriously unstable: under conventional circumstances, the maximum allowable feedback gain is equal to 1 divided by the number of output taps. Yet Moore was able to achieve a significantly higher feedback gain. How?
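
To make the stability problem concrete, here is a minimal sketch of this kind of structure (my own illustration; the function name, tap spacings, and parameter values are hypothetical, not the SST-282’s). With static taps, the worst-case loop gain is the feedback gain times the number of taps, which is where the 1/N limit comes from:

```python
import numpy as np

def multitap_feedback_delay(x, tap_delays, feedback_gain, buffer_len=65536):
    """Single delay line with several taps summed and fed back to the input.
    With static taps, a safe bound is feedback_gain < 1 / len(tap_delays),
    since the worst-case loop gain is feedback_gain * len(tap_delays)."""
    buf = np.zeros(buffer_len)
    out = np.zeros(len(x))
    write = 0
    for n, xn in enumerate(x):
        # sum the feedback taps (hypothetical spacings, not the SST-282's)
        fb = sum(buf[(write - d) % buffer_len] for d in tap_delays)
        out[n] = fb
        # write the input plus the scaled feedback back into the line
        buf[write] = xn + feedback_gain * fb
        write = (write + 1) % buffer_len
    return out

# With 15 feedback taps, conventional stability requires a gain below 1/15 ≈ 0.067.
taps = [149, 211, 293, 401, 503, 613, 709, 809, 907, 1009, 1103, 1201, 1301, 1409, 1499]
```

A gain that low gives only a short decay, which is the wall this architecture runs into if nothing else is done.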

Fortunately for geeks like myself, Moore extensively documented the process he used (which puts him in my DSP Hero list). The basic diagram of the algorithm is right there on the front panel, and Moore also described the algorithm in a patent. The key diagram from the patent:

The basic idea is that the taps that are summed and used for feedback are modulated. In the patent, Moore describes the clever modulation process used, as well as the tap spacings. By moving the feedback taps back and forth, Moore was able to get a much higher feedback gain before instability, which results in a longer decay time.
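The sketch below adds slow random motion to the feedback tap positions, in the spirit of the idea but not the letter of the patent (the step-and-glide modulation, interpolation, and all numbers here are stand-ins, not Moore’s actual process). Moving the taps decorrelates the feedback sum, which is what allows the gain to be pushed past the static 1/N bound:

```python
import numpy as np

def modulated_multitap_delay(x, tap_delays, feedback_gain, mod_depth=32.0,
                             mod_rate_hz=1.0, sr=48000, buffer_len=65536, seed=0):
    """Like the static version, but each feedback tap position wanders slowly.
    The random-target modulation and linear interpolation used here are
    stand-ins, not the SST-282's actual scheme."""
    rng = np.random.default_rng(seed)
    buf = np.zeros(buffer_len)
    out = np.zeros(len(x))
    write = 0
    offsets = np.zeros(len(tap_delays))                 # current offset per tap
    targets = rng.uniform(-mod_depth, mod_depth, len(tap_delays))
    hop = max(1, int(sr / mod_rate_hz))                 # how often to pick new targets
    smooth = 20.0 / sr                                  # one-pole glide toward targets
    for n, xn in enumerate(x):
        if n % hop == 0:
            targets = rng.uniform(-mod_depth, mod_depth, len(tap_delays))
        offsets += smooth * (targets - offsets)
        fb = 0.0
        for d, off in zip(tap_delays, offsets):
            # read each moving tap with linear interpolation
            pos = (write - d - off) % buffer_len
            i0 = int(pos)
            frac = pos - i0
            fb += (1.0 - frac) * buf[i0] + frac * buf[(i0 + 1) % buffer_len]
        out[n] = fb
        buf[write] = xn + feedback_gain * fb
        write = (write + 1) % buffer_len
    return out
```

How far the gain can be pushed before things blow up depends heavily on the tap spacings, the modulation depth, and the modulation rate.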

I built my own version of the SST-282 back in 2001 or so, using a program called SynthBuilder. I found that by modulating the taps as Moore describes in his patent, I was able to get about a 3X increase in feedback gain before things started getting too weird. Mind you, they got pretty weird anyway. The SST-282 simulation could get a reverb sound, but it sounded like it was full of spooky voices at high feedback settings. Very cool stuff.

Christopher Moore used a similar topology for the later Stargate reverb, but with a longer delay buffer. By doubling the delay buffer size, the maximum reverb time before instability is also doubled. Apparently the Stargate used a somewhat different randomization scheme as well – see below.
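The doubling follows from the idealized single-loop decay relationship (ignoring the multitap details): each pass around a loop of length D seconds attenuates the signal by 20·log10(g) dB, so at a fixed feedback gain the decay time is proportional to the loop length. A quick sketch:

```python
import math

def rt60_seconds(loop_delay_s, loop_gain):
    """Idealized decay time of a single feedback loop: the time for the signal
    to fall by 60 dB, given the gain applied on each pass around the loop."""
    return 60.0 * loop_delay_s / (-20.0 * math.log10(loop_gain))

print(rt60_seconds(0.100, 0.8))  # ~3.1 seconds
print(rt60_seconds(0.200, 0.8))  # ~6.2 seconds: double the delay, double the decay
```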

Moore has recently described some of the issues that the original SST-282 had:

…I had not been able to tame the various flaws I could hear in the Space Station. These included spectral smearing (due to the wandering feedback taps), modulation noise (delay taps were simply picked up and moved 62uS with no smoothing), and the inability to get a really distant sound due to the fact that the Audition Delay taps by design picked up the dry source as early reflections as well as the dense later reverberation.

Later Christopher Moore designs, such as the 8×32 Reverb, the AKG ADR-68K, and a number of algorithms designed for Kurzweil, made use of stable reverberation algorithms. However, the Space Station’s method of obtaining stability through time variation resulted in a distinctive sound that is still useful to this day. The original Space Station algorithm was turned into the SST-206, a compact hardware version of the SST-282, and Eventide has released a plugin that uses the SST algorithm.

UPDATE: Chris Moore, in a comment on this post, points out that the randomization scheme used in the Stargate was considerably different than the Space Station:

You are right about the 323 as far as you go. The StarGate has a wonderful sound (thanks go to Charles Andersion for great support during the long and arduous tuning process), due to the invention of a different way to change delays. Without giving away the store (because I may revisit the design one day and have some really cool ideas to tame those moving delays altogether). Anyway, the 323 has no pitch smearing, no Doppler shift, and almost no modulation noise. No free lunch: it has a tremolo sound.

I had always thought that the Stargate was similar to the Space Station, except for a larger amount of memory used – I stand corrected.

Modulation in reverbs: reality and unreality

The use of modulation in digital reverbs dates back to the first commercial digital reverberators. The EMT250 used an enormous amount of modulation, to the point where it sounded like a chorus unit. Lexicon’s 224 reverberator incorporated what they called “chorus” into the algorithms, working along principles not dissimilar to the string ensembles in use at the time. The Ursa Major Space Station was based around an unstable feedback arrangement that relied upon randomization to achieve longer decay times without self-oscillating.

Recently, Barry Blesser has written about randomization in his book, “Spaces Speak, Are You Listening?” Blesser argues that thermal variations in most real-world acoustic spaces result in small variations in the speed of sound within those spaces. Multiply this by several orders of reflections, and the result is an acoustic space that is naturally time varying. Blesser goes on to argue that random time variation in algorithmic reverbs emulates the realities of an acoustic space more accurately than time-invariant convolution reverbs.
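
A back-of-the-envelope calculation (my own numbers, not Blesser’s) gives a feel for the scale involved: the speed of sound in air is roughly 331 + 0.6·T m/s, so a temperature wobble of a degree shifts the arrival time of even a long reflection path by only a fraction of a millisecond:

```python
def speed_of_sound_m_s(temp_c):
    """Approximate speed of sound in dry air, in meters per second."""
    return 331.3 + 0.606 * temp_c

path_m = 60.0  # an illustrative long reflection path in a large hall
delta_ms = (path_m / speed_of_sound_m_s(20.0) - path_m / speed_of_sound_m_s(21.0)) * 1000.0
print(delta_ms)  # roughly 0.3 ms of drift for a one-degree change
```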

Blesser makes a convincing argument, but I am not convinced that the heavy amounts of delay modulation used in the older reverbs make for a more “realistic” space. The randomization in the older algorithms does a nice job of masking the periodic artifacts that can be found when using a small amount of delay memory. However, the depth of modulation used in the old units goes far beyond what can be heard in any “real world” acoustic space. The thermal currents in a symphony hall will result in a slight spread of frequencies as the sound decays, but will not create the extreme chorusing and detuning found in the EMT250, or in the Lexicon algorithms with high levels of Chorus.
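
Compare that with what delay modulation does in these units. When a delay length changes over time, the moving read position Doppler-shifts the output; the instantaneous pitch ratio is 1 minus the rate of change of the delay. For a hypothetical 5 ms sinusoidal sweep at 1 Hz (my numbers, not measured from any particular box), the peak detuning already exceeds half a semitone:

```python
import math

def peak_detune_cents(mod_depth_ms, mod_rate_hz):
    """Peak detuning of a delay modulated as d(t) = depth * sin(2*pi*rate*t).
    The instantaneous pitch ratio is 1 - d'(t), whose extreme slope is
    2*pi*rate*depth."""
    peak_ratio = 1.0 + 2.0 * math.pi * mod_rate_hz * (mod_depth_ms / 1000.0)
    return 1200.0 * math.log2(peak_ratio)

print(peak_detune_cents(5.0, 1.0))  # ~54 cents at the steepest point of the sweep
```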

Having said that, I would argue that the strength of algorithmic reverbs is not in emulating “real” acoustic spaces, but in creating new acoustic spaces that never existed before. Blesser recently said that the marketing angle of the EMT250 was to reproduce the sound of a concert hall, but later described the EMT250 in terms of a “pure effect world.” The early digital reverbs, in the hands of sonic innovators such as Brian Eno and Daniel Lanois, were quickly put towards the goal of generating an unreal ambience, where sounds hang in space, slowly evolving and modulating. Listen to Brian Eno’s work with Harold Budd, on “The Plateaux of Mirror,” to hear the long ambiences and heavy chorusing of the EMT250 in action. A later generation of ambient artists made heavy use of the modulated reverb algorithms in such boxes as the Alesis Quadraverb to create sheets of sound that bear little resemblance to any acoustic space found on earth.

Creating these washy, chorused, “spacey” reverbs has been a pursuit of mine since 1999. My early Csound work explored relatively simple feedback delay networks, with randomly modulated delay lengths, in order to achieve huge reverb decays that turn any input signal into “spectral plasma” (a term lifted from Christopher Moore, the Ursa Major reverb designer). With my more recent work, I have tried to strike a balance between realistic reverberation and the unrealistic sounds of the early digital units. The plate algorithms in Eos are an attempt to emulate the natural exponential decay of a metal plate, but were also inspired by my understanding of the EMT250. The Superhall algorithm in Eos was not attempting to emulate any “natural” space, but rather the classic early digital hall algorithms, with heavy randomization, nonlinear buildup of the initial reverberation, and the possibility of obtaining near-infinite decays. The “real” world continues to be a source of inspiration for my algorithms, but I find myself more attracted to the unreal side.
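
For the curious, here is a toy sketch in the spirit of those early experiments (my own simplification in Python rather than Csound, with illustrative, untuned values; it is not the Eos code): a small feedback delay network whose delay lengths are slowly and randomly modulated, with a lossless Householder feedback matrix and a gain that sets the decay:

```python
import numpy as np

def modulated_fdn(x, delays=(1031, 1327, 1523, 1871), gain=0.97,
                  mod_depth=8.0, sr=48000, seed=0):
    """Toy 4-line feedback delay network with randomly modulated delay lengths.
    The feedback matrix is a lossless Householder reflection; 'gain' sets the
    decay, and the slow random modulation smears the otherwise-static modes.
    All parameter values are illustrative, not tuned."""
    rng = np.random.default_rng(seed)
    N = len(delays)
    A = np.eye(N) - (2.0 / N) * np.ones((N, N))   # Householder feedback matrix
    buf_len = max(delays) + 64
    bufs = np.zeros((N, buf_len))
    write = 0
    offsets = np.zeros(N)
    targets = rng.uniform(-mod_depth, mod_depth, N)
    hop = sr // 4          # pick new random targets every quarter second
    smooth = 50.0 / sr     # one-pole glide toward the targets
    out = np.zeros(len(x))
    for n, xn in enumerate(x):
        if n % hop == 0:
            targets = rng.uniform(-mod_depth, mod_depth, N)
        offsets += smooth * (targets - offsets)
        v = np.zeros(N)
        for i in range(N):
            # read each line at its randomly modulated length, interpolating
            pos = (write - delays[i] - offsets[i]) % buf_len
            i0 = int(pos)
            frac = pos - i0
            v[i] = (1 - frac) * bufs[i, i0] + frac * bufs[i, (i0 + 1) % buf_len]
        out[n] = v.sum()
        bufs[:, write] = xn + gain * (A @ v)   # mix, scale, and write back
        write = (write + 1) % buf_len
    return out
```

Push the gain toward 1.0 and the random motion of the delay lengths helps keep the long tail from settling into obvious periodic ringing, at the cost of exactly the kind of chorusing discussed above.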