Supported by the Danish Composers Society, I recently acquired a compact 10-channel speaker setup for composing (and performing!) multi-channel concerts...
The system consists of 10 small active studio monitors, each with a 5" Kevlar woofer and a 1" dome tweeter. The monitors are bi-amped, with a 40 W amp for the woofer and a 30 W amp for the tweeter. It's not a big, powerful system, but it's effective and precise, and well suited for smaller venues of 60-90 m².
I have composed music for 4, 5, 12 and 16 channels before, and it's really something different from a regular stereo setup. But the problem is often that you don't get much time with the multi-channel system. You often have to compose blind, guess how it will sound, and then try to fix the worst misjudgments during a short rehearsal before the concert. Even more unfortunately, you never have time to experiment and explore the compositional possibilities of the system: for instance, using sample delays to create advanced phasing effects, or algorithmic ways of panning, diffusing, distributing and moving the sounds. So that is basically the reason for this project: to actually be able to work with the setup and compose with it instead of for it.
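To make the sample-delay idea a bit more concrete, here is a minimal C++ sketch of the principle (the real thing lives in a maxMSP patch; channel count and delay values here are placeholders). The same mono signal is written to all channels, each offset by a few samples, so the wavefronts interfere differently everywhere in the room.

```cpp
// Per-channel sample delays, the building block of the phasing effects.
// Channel count and delay values are placeholders for illustration.
#include <vector>
#include <cstddef>

const int kChannels = 10;

// Copy one mono buffer to 10 output buffers, offset per channel
// (delays assumed non-negative).
void spreadWithSampleDelays(const std::vector<float>& mono,
                            std::vector<std::vector<float>>& out,
                            const int delays[kChannels]) {
    out.resize(kChannels);
    for (int ch = 0; ch < kChannels; ++ch) {
        out[ch].assign(mono.size(), 0.0f);
        for (std::size_t i = delays[ch]; i < mono.size(); ++i)
            out[ch][i] = mono[i - delays[ch]];
    }
}
```

With delays of just a handful of samples (well under a millisecond at 44.1 kHz), the result is comb-filter-like coloration that changes as you move around the room, rather than audible echoes.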
Sound localization
Sound localization is a big field of research, and these are just a few key concepts. When we localize a sound source, we use a combination of three main cues.
ILD (Interaural Level Difference)
A sound source will always be closer to one ear than the other (unless the source is on-axis). Since sound level decays with distance, the level perceived by the ear the wavefront hits first will be higher than the level at the other ear. The head also shadows some of the higher frequencies on their way to the far ear. This level difference is used to determine the direction of a sound source.
ITD (Interaural Time Difference)
When a sound's wavefront approaches the ears, it reaches one ear slightly before the other, which creates a phase difference. The brain detects this phase difference and uses it to calculate the direction. We can detect an ITD on the order of tens of microseconds, which gives a directional precision down to about 2 degrees under the right circumstances.
ILD becomes close to useless at wavelengths longer than about four times the ear distance, because the head isn't big enough to shadow the wavefront at the far ear. ITD becomes useless at wavelengths shorter than about twice the ear distance, because the phase difference gets so big that the brain can't tell whether it is 1/4 of a wavelength, 5/4, 9/4 or more.
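To put rough numbers on those limits, here is a small back-of-the-envelope calculation, assuming an ear spacing of about 21 cm and a speed of sound of 343 m/s:

```cpp
// Rough numbers for the ITD/ILD limits above. The assumed ear spacing
// and the simple path-difference model (which ignores diffraction
// around the head) make these ballpark figures only.
#include <cstdio>

int main() {
    const double d = 0.21;   // ear-to-ear distance in metres (assumption)
    const double c = 343.0;  // speed of sound in m/s

    double itd_max    = d / c;          // source fully to one side: ~612 us
    double f_ild_low  = c / (4.0 * d);  // lambda > 4d: ILD weak below ~408 Hz
    double f_itd_high = c / (2.0 * d);  // lambda < 2d: ITD ambiguous above ~817 Hz

    std::printf("max ITD: %.0f microseconds\n", itd_max * 1e6);
    std::printf("ILD weak below ~%.0f Hz, ITD ambiguous above ~%.0f Hz\n",
                f_ild_low, f_itd_high);
    return 0;
}
```

In other words, ITD mostly covers the low end, ILD the high end, and there is a murky overlap region in between.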
HRTF (Head Related Transfer Function)
The head-related transfer function is the frequency "coloration" that occurs when a wavefront hits the head, torso, pinnae, hair etc. from a certain angle. The coloration differs depending on the angle, so by registering which frequencies are boosted and which are attenuated, the brain learns the directional fingerprint embedded in the sounds it perceives.
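As a sketch of how such a fingerprint is applied in practice (for instance when rendering binaural audio over headphones), here is a minimal convolution of a mono source with a head-related impulse response (HRIR) pair. The filter taps below are made-up placeholders; real ones would come from a measured set such as the MIT KEMAR responses and are typically a few hundred samples long.

```cpp
// Binaural rendering sketch: convolve a mono signal with a measured
// HRIR pair for one direction. The taps here are made-up placeholders.
#include <vector>
#include <cstddef>

std::vector<float> convolve(const std::vector<float>& x,
                            const std::vector<float>& h) {
    std::vector<float> y(x.size() + h.size() - 1, 0.0f);
    for (std::size_t i = 0; i < x.size(); ++i)
        for (std::size_t j = 0; j < h.size(); ++j)
            y[i + j] += x[i] * h[j];
    return y;
}

int main() {
    std::vector<float> mono(1024, 0.0f);
    mono[0] = 1.0f;                                 // a click: easy to localize
    std::vector<float> hrirL = {0.9f, 0.3f, -0.1f}; // placeholder taps
    std::vector<float> hrirR = {0.4f, 0.5f,  0.2f}; // placeholder taps
    auto left  = convolve(mono, hrirL);             // the per-angle differences
    auto right = convolve(mono, hrirR);             // in these filters are the
    return 0;                                       // directional fingerprint
}
```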
Since localizing sound sources is one of the core competences of the auditory system, it is of course interesting to work with it in an artistic way. Deciding to go with 10 speakers was more a practical and financial decision than an artistic one, but it still radically expands the possibilities over a stereo or a 5.1 setup. And I found that it is just enough speakers to make acceptably smooth circular panning. But if I could have 1000 speakers spherically distributed around the audience, I would prefer that :-)
Sound material
The first thing I did when I had the system set up was to record and test a lot of different sounds, both synthetic and acoustic, and see how they behaved when diffused and panned in different gestural movements and abrupt patterns. Some sounds were recorded in an old gym hall with a lot of reverb. Stuff was thrown, hit, bowed, strung, spun, etc., to make rich, vibrant, gestural sounds.
Other sounds were made from very simple basic synthetic waveforms (sines, saws, squares). A rather obvious finding is that sounds with energy distributed over a broad spectrum are much easier to locate than sounds with narrow spectral energy. A sine wave, for instance, which by definition only has energy at one frequency, is very hard to localize. Almost all the spectral cues are gone, so you only have vague cues from ILD and/or ITD, depending on the frequency.
On the other hand, short broad spectrum bursts are quite easy and fast to locate.
The patch below is a 10-channel panning interface that can be automated for different predictable or unpredictable patterns.
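The patch itself is a maxMSP affair, so I can't paste it here as text, but the core of the circular panning amounts to something like this sketch: pairwise constant-power panning between the two speakers adjacent to the source angle, assuming the 10 speakers are evenly spaced on a circle.

```cpp
// Core math of a 10-channel circular panner (a sketch, not the patch):
// constant-power crossfade between the two speakers nearest the source.
#include <array>
#include <cmath>
#include <cstdio>

constexpr int kSpeakers = 10;
constexpr double kPi = 3.14159265358979323846;

// Fill 'gains' for a source at 'angle' radians (0..2*pi around the ring).
void circularPan(double angle, std::array<double, kSpeakers>& gains) {
    gains.fill(0.0);
    const double spacing = 2.0 * kPi / kSpeakers;    // 36 degrees per pair
    double a = std::fmod(angle, 2.0 * kPi);
    if (a < 0) a += 2.0 * kPi;
    int lower = static_cast<int>(a / spacing) % kSpeakers;
    int upper = (lower + 1) % kSpeakers;
    double frac = (a - lower * spacing) / spacing;   // position between pair
    gains[lower] = std::cos(frac * kPi / 2.0);       // constant power:
    gains[upper] = std::sin(frac * kPi / 2.0);       // g1^2 + g2^2 = 1
}

int main() {
    std::array<double, kSpeakers> g;
    circularPan(kPi / 4.0, g);                       // source at 45 degrees
    for (double v : g) std::printf("%.2f ", v);
    std::printf("\n");
    return 0;
}
```

Automating the angle with a slow ramp gives the smooth circular movements; feeding it random values gives the abrupt patterns.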
To be continued...
19 October 2014
29 January 2014
Roland CR-68 external trigger mod
I just finished this mod on the Roland CR-68 drum machine. I bought it several years ago and always liked the sound of it, but never really used it, since it only features preprogrammed rhythms.
But now it has external trigger inputs for nine of its eleven analog voices.
The mod is based on Andrew Kilpatrick's cool MIDI solution for the machine, and he has been very helpful with this much simpler mod.
The trigger signal for each voice is simply a 5 V pulse of 15-20 ms. Andrew Kilpatrick suggests connecting each trigger through a 47k resistor. So far I haven't done that, since I need to test the implementation with different setups. I tested it with an Arduino board (which outputs 5 V on its digital pins), but I also need to test whether I can trigger the sounds with my MOTU audio interface (with DC-coupled outputs for CV control) and my PAIA MIDI-to-CV converter.
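For reference, something along these lines is enough to test a voice from an Arduino. Pin number and repetition rate are arbitrary test values; the 47k series resistor would sit between the pin and the trigger jack.

```cpp
// Arduino test sketch: fire a 15 ms, 5 V trigger pulse twice a second.
const int kTriggerPin = 2;           // arbitrary pin for testing
const unsigned long kPulseMs = 15;   // 15-20 ms per the mod notes

void setup() {
  pinMode(kTriggerPin, OUTPUT);
  digitalWrite(kTriggerPin, LOW);
}

void loop() {
  digitalWrite(kTriggerPin, HIGH);   // 5 V pulse starts
  delay(kPulseMs);                   // hold for 15 ms
  digitalWrite(kTriggerPin, LOW);    // pulse ends
  delay(500);                        // wait, then trigger again
}
```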
Anyway, here's pics!
Here's the row of the nine extra trigger inputs, underneath the five existing in/outputs.
From the left:
-Bass Drum
-Snare Drum
-Rim Shot
-Low Conga
-Low Bongo
-Hi Hat
-Maracas
-Cymbal
-Claves
(there are two more voices, Cow Bell and Hi Bongo, but for now I couldn't fit their trigger input jacks properly)
24 September 2010
Sonic Boom
I googled sonic boom and found an article on the subject. It is quite good and easy to understand. Among other things, it explains that the sonic boom only happens as an aircraft hits the exact speed of sound. However, at the end of the article, this proclamation is made:
"Our ships are beyond the speed of your supersonic planes, from the moment they determine to move. Its as simple as that, we skip the sonic boom period."
Then I saw it was from ZetaTalk.com :-)
"Our ships are beyond the speed of your supersonic planes, from the moment they determine to move. Its as simple as that, we skip the sonic boom period."
Then I saw it was from ZetaTalk.com:-)
22 September 2010
From today I'm 28
10 September 2010
Live at "Train"
18 August 2010
Acoustic Occurences Live at Trinitatis Kirke in Copenhagen
27 July 2010
Acoustic events
Here I am building and playing around with 9 pcs. of the final design in the apartment.
An Arduino Mega board works as the interface for communicating with the solenoids from maxMSP.
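The Arduino side amounts to something like the sketch below: maxMSP sends one byte per hit over serial (via the serial object), the byte selects a solenoid, and the corresponding pin is pulsed. Pin mapping, baud rate and pulse length here are assumptions for illustration, and in practice each pin drives its solenoid coil through a transistor with a flyback diode, since an output pin can't supply the coil current directly.

```cpp
// Arduino Mega sketch (illustrative): pulse one of nine solenoids per
// serial byte. Pins, baud rate and pulse length are assumed values.
const int kSolenoidPins[9] = {2, 3, 4, 5, 6, 7, 8, 9, 10};
const unsigned long kPulseMs = 20;   // long enough to knock the object,
                                     // short enough to spare the coil

void setup() {
  Serial.begin(115200);
  for (int i = 0; i < 9; ++i) {
    pinMode(kSolenoidPins[i], OUTPUT);
    digitalWrite(kSolenoidPins[i], LOW);
  }
}

void loop() {
  if (Serial.available() > 0) {
    int idx = Serial.read();         // one byte = which solenoid to hit
    if (idx >= 0 && idx < 9) {
      digitalWrite(kSolenoidPins[idx], HIGH);
      delay(kPulseMs);
      digitalWrite(kSolenoidPins[idx], LOW);
    }
  }
}
```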
And here, the first concert at Musikhuset Aarhus on July 24, 2010, in the café area where people (mostly 60+) enjoy salmon and expensive cake. It is a wonderful place to play concerts, and the reverb of the big foyer really worked its magic on the quite crappy objects the solenoids knocked on: cheap wine glasses, metal brackets and an old cookie jar.