Infrasound, Motion Capture and Choreography at the HopBarn
[for listening to the audio in this post, use headphones, or speakers which have a good bass response]
Man, infrasound is sexy. Infrasound is sound which happens at incredibly low frequencies – at pitches below our ears' and brains' ability to hear them. At the lowest end of audibility, there are sounds we can basically only feel (click here to listen to some simple sine tones at frequencies down to 20Hz, the bottom of our range, which only high-end headphones and very large subwoofers can reproduce). You've felt this when a big lorry goes past – casting a blanket of silence over everything else with sound we can't exactly hear (cf. Old Tom) – or in a club, when your ribcage starts shaking to the beat.
Low tones are great for understanding the physicality of sound. Owing to their large size, the soundwaves will cancel out or reinforce each other when they meet in a space, meaning there are pockets of physical space in a room where the sound nearly disappears (called nodes) and others where it's dramatically reinforced. We were exploring this during a residency at the HopBarn a few weeks ago with sound artist Angie Atmadjaja, who specialises in psychoacoustic phenomena. It's difficult to get across how weird this is. I had heard of nodes and cancellation in theory before then, but being there in the space was the first time I'd experienced it. There's something really special and present about just walking, listening, reacting in a room – and because of the somatic effect low frequencies have, you're listening with your entire body.
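If you want a feel for the maths behind those pockets, here's a toy Python sketch – not the HopBarn setup, just two imaginary in-phase speakers a few metres apart, both playing 52Hz. Summing their waves at points along the line between them shows where the sound cancels and where it piles up. All the numbers (speaker spacing, the listening positions) are made up for illustration.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s at room temperature
FREQ = 52.0             # Hz
WAVELENGTH = SPEED_OF_SOUND / FREQ  # ~6.6 m -- big waves, big pockets

def amplitude_at(x, spacing=5.0):
    """Peak amplitude of two summed unit waves, x metres from speaker A."""
    d_a, d_b = x, spacing - x
    # Each wave's phase at the listener depends on the distance travelled.
    phase_a = 2 * math.pi * d_a / WAVELENGTH
    phase_b = 2 * math.pi * d_b / WAVELENGTH
    # Sum of two unit sinusoids: peak amplitude is 2|cos(delta_phase / 2)|.
    return abs(2 * math.cos((phase_a - phase_b) / 2))

for x in [0.5, 1.0, 1.5, 2.0, 2.5]:
    print(f"{x:.1f} m from speaker A: relative level {amplitude_at(x):.2f}")
```

At the exact midpoint both waves arrive in phase and reinforce; step towards one speaker and the level drops as the path difference approaches half a wavelength. At 52Hz that half-wavelength is over three metres, which is why the pockets are big enough to walk through.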
So we made a dance.
At its core, this was an exercise to integrate the properties of infrasound with some of the other things important in Me & My Whale: liveness, agency and the body. It resulted in a duet between me and Mook with a fairly simple score – to react – but one which threw up lots of interesting complexities, at least if you're into that sort of thing.
At some point, I’ll dedicate a post to talking about gesture control – here, we’re using some soft-hacked gaming controllers (hereafter called the Magic Gloves) which are essentially two gloves on strings coming out of a box. I can get the position data from each of the gloves and use it to do anything.
One of the main resonant frequencies of the space (52Hz – which is also a very important frequency for this project, more on that soon) was used to tune two simple sine tone generators. Then, in Max/MSP, frequency and phase parameters were mapped onto the Magic Gloves so that distance and angle would subtly change the sound – nudging the frequency up by a fraction of a Hz, moving its phase by a gnat's wing. Each glove affects its own sine wave generator, which is routed to its own speaker. This is important because it means neither sine wave affects the other in the software, or in the speaker box, keeping all of the wave interaction in the space.
Our score was to react – to the sounds we were hearing, and to each other's presence. That's not particularly special, but with the added layer that the tones interact with each other in the space, it meant we had another performer with us – the room itself. As we moved through the space reacting to sound, and through our gestures changing it, what we heard changed as we passed through the room's nodes or moved the position of our heads. It actually turns into a really meditative blend between deep listening and a game. Another cool thing was the way this exercise sonified proxemics – argh, sorry – made real our physical and social distance through sound: when we were near each other, the difference between our two generated tones was very small, and our individual hearing of the sounds in the space was similar; but when we were on different sides of the room, or at different heights, our tones were phasing like crazy, and we would have heard very different things.
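That near/far effect comes down to beating: two close tones sum into a single tone that swells and fades at the difference between their frequencies. A quick sketch, with made-up example values rather than anything measured from the performance:

```python
# Two pure tones close in pitch "beat" at the difference frequency --
# a slow swell when we stood together, frantic phasing when apart.

def beat_frequency(f1, f2):
    """Perceived beats per second between two pure tones."""
    return abs(f1 - f2)

close_together = beat_frequency(52.00, 52.05)  # 0.05 Hz: one slow
                                               # swell every 20 seconds
far_apart = beat_frequency(51.6, 52.4)         # 0.8 Hz: nearly one per second
print(close_together, far_apart)
```

So the room effectively turned the distance between our bodies into tempo.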
There’s a lot of choreography scored from sound, and in my work as a sound designer, I’ve seen how important music is in the devising process, but there’s something really powerful (maybe in just a wanky way, but powerful anyway) about two people who are sometimes following the same directions and are sometimes completely against each other, but are mainly unaware of when the other one is and isn’t. This dance doesn’t really work as a performance in a sit-in-the-dark-and-judge sort of way; it’s definitely a piece to do rather than watch – by the way, the sound the camera captured has very little to do with what we actually heard, which is pretty neat. As an exercise, though, it’s given us some room for thought and experimentation on the interaction between body, dynamic movement and environment. I’m really excited about the formal implications it has – what happens to our control over material when we perform fluid scores? how can the score change depending on where it’s observed from? what happens to your body when it learns it’s being tracked? what influence does the watching of action have on the action, and how can that be sonified? and what the hell does this have to do with whales?
~ ~ ~
Many thanks to Angie & Jon at the HopBarn for their support.
Infrasound – any sound at a frequency too low for humans to hear (below about 20Hz).
Sine tone – the simplest synthesised sound, and one that can’t occur in nature. It’s basically a beep – very exciting.
Hertz (Hz) – the unit of frequency: a measure of how high or low a sound is.
Psychoacoustics – the study of the psychological and physiological perception of sound.
Max/MSP – a programming environment that is used by nerds like me because you can plug anything into anything else and make sound from it.