The Magic Gloves of Destiny

The whale moves in a sea of sound:
Shrimps snap, plankton seethes,
Fish croak, gulp, drum their air-bladders,
And are scrutinised by echo-location,
A light massage of sound
Touching the skin.

– Whale Nation, Heathcote Williams

Mook in the Magic Gloves of Destiny at our very first show on our very first day of development, at Live Art Bistro, as part of Come Find Us.

Contents

  1. Controlling Sound with Gesture
  2. The Magic Gloves of Destiny
  3. The Nitty Gritty
  4. Embodying Sound in Me & My Whale
  5. Inspirations

Controlling Sound with Gesture

A lot of digital music is performed by a frowning face behind a glowing Apple logo. We have no problem with transparency as a thing, but we intend to think about the scenography like theatre makers, just as we think of the text as sound artists. And a lot of our separate practices, and a lot of the ideas that have gone into Me & My Whale, have to do with the body as object/subject/host.

Motion capture is a technique for measuring and tracking position and movement digitally – it’s used a lot in commercial films (Gollum) and gaming (Kinect, Wii). Before dedicated motion-tracking hardware, it was done with ordinary cameras – and you can still find pieces made with webcams, laptop cameras, and I’m sure there are some Periscope ones too. The most common approach I’ve seen in digital art – installations, performances, exhibitions – is the Kinect, because its software gives you a skeleton. That means you get a really clear interpretation of where a person’s body is, so you can map sound onto very specific body parts and their individual quality of movement.

the Vitruvian Kinect skeleton

Well, anyway, we’re not gonna learn how to programme a kinect because effort. So, instead, we’re using these babies:

THE MAGIC GLOVES OF DESTINY


aka the Gametrak by Elliot Myers. The name was coined by Powder Keg during their show BEARS, which incidentally was where Mook & I met. It’s a piece of equipment that came with a golf simulator from the early noughties; I was shown it by Tom Mudd from Goldsmiths. It’s a box with two strings on ball joints, with a glove at the end of each string. It measures each hand’s position relative to the box as three continuous numbers – with the box on the floor: left-to-right (X), forward-to-back (Y) and up-down (Z) – six numbers in all. In the game, it’s used to measure your golf swing:

remember to square your shoulders
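
Outside of the golf, the box just shows up as a generic USB game controller, so you can poke at the raw numbers from pretty much anything. Here’s a minimal Python sketch using pygame – the six-axis layout, and which axes belong to which hand, are assumptions on my part rather than gospel:

```python
# Minimal sketch: reading the Magic Gloves (Gametrak) as a generic USB joystick.
# Assumes the box enumerates as a standard game controller with six axes
# (three per hand) and that it's the first controller plugged in.
import time
import pygame

pygame.init()
pygame.joystick.init()
gametrak = pygame.joystick.Joystick(0)   # first connected controller
gametrak.init()

while True:
    pygame.event.pump()                  # let pygame refresh the device state
    left = [gametrak.get_axis(i) for i in range(3)]      # assumed: left hand X, Y, Z
    right = [gametrak.get_axis(i) for i in range(3, 6)]  # assumed: right hand X, Y, Z
    print(f"L {left}  R {right}")        # pygame normalises each axis to -1.0..1.0
    time.sleep(0.05)
```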

They run through Max/MSP, which is the controlling software used in Me & My Whale, and can be used in loads of different ways, which we’ll go into further down. I haven’t often seen them used well elsewhere – they’re used really nicely in the dance film Oblique Theorem, but apart from that the function of the gloves in other people’s pieces tends to be hidden. The really nice thing about these controllers is that it can be very clear how they function – the strings are bright orange, there is a clear tension in them, and when you move, you hear a sound. That makes it much easier to start building a language between your body and the technology, and in turn makes it simpler for an audience to listen to that language. At the same time, because they’re not well known (they sold terribly) they don’t carry the cultural baggage of a PlayStation joystick – where you can’t stop thinking of it as a tool for gaming (something used consciously by, for instance, MizKai, who DJs chiptune with a Game Boy). They’re both something people have no prior knowledge of and something easy for people to get, allowing us to join body semiotics, proxemics and technology without looking wanky as fuck.

More complex technologies, like the Kinect, don’t have this. They’re the equivalent of the frowning face behind an Apple logo, and unless they’re twinned with live projection mapping, they’re about as interesting. And although there has been some amazing work done with them which I draw on (see inspirations), I prefer the lame black box with the bright orange strings.

The two of us doing our Tone Dance at the HopBarn, using the Magic Gloves to affect low frequency resonance. Read up on that here.

 

The Nitty Gritty: Ways of Using the Magic Gloves

Well, if you’ve got this far without being completely bored out of your mind, well done. So, on to level two.

The simplest, and most obvious, way to use them is to simulate a theremin: one hand controls the pitch of a sine tone, the other controls its volume. That way you can make some nice sweeping notes, and it can be quite melodic (Over the Rainbow on theremin). But to be honest, that’s dull as hell, and there’s already a theremin, it’s called a theremin.
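
For reference, though, every later mapping builds on this shape, so here’s what the theremin version boils down to – a minimal Python sketch rather than the actual Max patch, with the glove values assumed to be normalised to 0..1 and the frequency range picked arbitrarily:

```python
# Theremin-style direct mapping: one hand's height sets pitch, the other's sets volume.
import numpy as np

def theremin_map(pitch_axis, volume_axis, low=110.0, high=1760.0):
    """Map two control values in 0..1 to (frequency in Hz, amplitude in 0..1)."""
    # Exponential pitch mapping so equal hand movements feel like equal musical intervals.
    freq = low * (high / low) ** pitch_axis
    amp = volume_axis ** 2          # squared so quiet gestures stay genuinely quiet
    return freq, amp

def render(pitch_axis, volume_axis, duration=0.5, sr=44100):
    """Render a short sine burst for a given pair of hand positions."""
    freq, amp = theremin_map(pitch_axis, volume_axis)
    t = np.arange(int(duration * sr)) / sr
    return amp * np.sin(2 * np.pi * freq * t)

buf = render(0.5, 0.75)                  # one hand halfway up, the other three-quarters open
print(theremin_map(0.5, 0.75), buf.shape)   # -> (440.0, 0.5625) (22050,)
```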

I used these recently when doing live sound for a dance piece for Ekata Theatre, which I’ll go into now because it’s my blog and I can do what I want.

Unbelonger (2017): Direct Mapping

The Nitty:

The piece had a lot of improvised moments, and was itself devised, so I felt the best way to go about my sound design was to play it live, rather than play it back over QLab, which is how most theatre/dance sound is triggered. Using the air marimba instrument, I played background themes, reactive twiddles and a simple waltz.

The Gritty:

I ran this off Ableton, with MIDI sent from Max/MSP via loopMIDI; Max picked up the Magic Gloves through a [hi] object. I placed a threshold roughly 10 degrees off centre on the X axis (left-to-right) for each glove – when my hand passes that threshold towards the middle, it triggers a MIDI note-on message. The velocity (representing force of impulse) depends on how far back I start the swing (this doesn’t work perfectly yet, as you might be able to tell). The pitch of the MIDI note is determined by the height (Z axis), with the left hand sitting an octave or so lower, as it does on a real marimba or piano. I forced a scale onto the outgoing notes, though – based on my favourite chord (C#m with added 7/9/13, since you ask) – or it would be stupidly difficult to find the right notes. I also added a dead zone at the very bottom of the Z axis range so it only sends MIDI notes once I’ve pulled the glove up a bit; otherwise it would trigger while I’m still getting into it.
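
If the above reads like soup, here’s the same logic as a rough Python sketch – not the actual patch; the axes are assumed normalised to 0..1 and all the numbers are illustrative:

```python
# Sketch of the air-marimba logic: crossing an X threshold fires a note, height picks
# the pitch from a forced scale, and how far back the swing started sets the velocity.
SCALE = [1, 3, 4, 8, 10, 11]      # C#m with added 7/9/13 as pitch classes: C#, D#, E, G#, A#, B
X_THRESHOLD = 0.45                # crossing inwards past this fires a note
Z_DEAD_ZONE = 0.1                 # below this height, never trigger

class AirMarimbaHand:
    def __init__(self, base_octave):
        self.base_octave = base_octave
        self.prev_x = 1.0          # assume the hand starts out to the side
        self.swing_start = 1.0     # furthest-out X seen since the last note

    def quantise(self, z):
        """Map height (0..1) onto two octaves of the scale."""
        octave, degree = divmod(int(z * len(SCALE) * 2), len(SCALE))
        return 12 * (self.base_octave + octave) + SCALE[degree]

    def update(self, x, z):
        """Call every frame with the glove position; returns (note, velocity) or None."""
        note = None
        if z > Z_DEAD_ZONE and self.prev_x > X_THRESHOLD >= x:
            # Velocity depends on how far out the swing began.
            velocity = min(127, int(127 * (self.swing_start - X_THRESHOLD) / (1 - X_THRESHOLD)))
            note = (self.quantise(z), max(1, velocity))
            self.swing_start = x
        self.swing_start = max(self.swing_start, x)
        self.prev_x = x
        return note

left = AirMarimbaHand(base_octave=3)   # left hand an octave or so lower
right = AirMarimbaHand(base_octave=4)
print(right.update(0.8, 0.5))          # hand out to the side: no note
print(right.update(0.3, 0.5))          # hand swings past the threshold: note fires
```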

Now that you completely understand the principle, you can see how I use the same mechanics with other sounds in different ways. In the video below, I also use forward-backward movement (the Y axis) to bend the pitch of a synthesised voice.

Gonna introduce some terminology for you here. This is known as direct mapping – that means: A results in B; changing A results in B changing. In the example above, moving the gloves further from the ground changes the pitch, and that’s all, and that’s that. Mapping control A to parameter B, control C to parameter D, etc.

But we also have convergent mapping, in which many controls affect one parameter and divergent mapping, where one control affects many parameters. This figure explains it better than I will – it’s using the example of a person playing a wind instrument:

mapping.png
from A NIME Reader: Fifteen Years of New Interfaces for Musical Expression, p341.

Real acoustic instruments are, as the book says, a ‘web of interconnections’ (ibid). And if we’re going to be making some instruments, they’ve got to be as complex and interrelated as real ones. And that means drawing on musical as well as technical sensitivity.
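
To pin the jargon down, here’s a toy Python sketch of the three mapping types, using the same wind-instrument example as the figure – the control names and weightings are made up purely for illustration:

```python
# Toy illustration of direct, divergent and convergent mapping.
def direct(breath_pressure):
    # direct: one control -> one parameter
    return {"loudness": breath_pressure}

def divergent(breath_pressure):
    # divergent: one control -> many parameters (pressure also brightens the tone)
    return {"loudness": breath_pressure, "brightness": breath_pressure ** 1.5}

def convergent(breath_pressure, lip_tension):
    # convergent: many controls -> one parameter (pitch depends on both)
    return {"pitch_bend": 0.7 * lip_tension + 0.3 * breath_pressure}

print(convergent(breath_pressure=0.6, lip_tension=0.2))   # -> roughly {'pitch_bend': 0.32}
```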

Poem of the Body (2016): divergent convergence

The first time I used the Magic Gloves was with choreographer and dancer Deliah Seefluth in our collaboration Motion.Captured. The piece was called the Poem Of The Body, and was based on text by Fred M-G. This piece is super complex and it would be boring and irrelevant to go into all of it here, so I’ll just highlight the bits that relate to mapping and instrument design.

The Nitty:

Deliah and I recorded our voices speaking Fred’s poem. Those recordings were then broken into a series of samples, one for each line. Deliah was strung between two sets of Magic Gloves – four strings in total, attached to each of her ankles and wrists. She then moved in specific ways, choreographed both fixed and improvised, to ‘find’ the text by moving through space. But for each line, the parameters for how the words came out changed – sometimes they sounded every time she moved, sometimes she had to move very quickly to get any sense, sometimes they triggered automatically and it was her movement that stopped them – and the effect was a fight against the medium to be understood. It was all in the attempt.

The Gritty:

Ooookay. This was run entirely in Max/MSP. Each of the samples was loaded into a [polybuffer~] object. The X, Y and Z axes of the four Gloves were divided into ten ‘zones’. Moving through the zones would send a [bang] message (ie a trigger). When sufficient triggers had been received for a given axis, a [groove~] object would play a fragment of the sample. The more the sample had been found, the longer each fragment and the lower the randomisation of its starting point (and therefore the more understandable the text becomes). So as Deliah moves through the zones, the amount of glitch in the sample reduces – though this was sometimes flipped on its head, so that the more still she remained, the more understandable the sample. Still with me? A timer was set to move the piece on to the next sample; while it ran, Deliah would move and try to trigger the broken spoken word. Each separate line had its own set of parameters, which in turn were affected by the way that Deliah moved.
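
If that’s hard to picture, here’s a very stripped-back Python sketch of the zone-trigger idea for a single line and a single axis – the real thing is a Max patch running across four gloves and three axes, and every number here is illustrative:

```python
# One line of the poem: movement through zones gradually 'finds' the sample.
import random

class ZoneLine:
    def __init__(self, sample_length, triggers_needed=4):
        self.sample_length = sample_length   # length of this line's recording, in seconds
        self.triggers_needed = triggers_needed
        self.triggers = 0
        self.last_zone = None

    def zone(self, axis_value):
        """Divide a 0..1 axis into ten zones."""
        return min(9, int(axis_value * 10))

    def move(self, axis_value):
        """Call with the current axis value; returns a (start, length) fragment to play, or None."""
        z = self.zone(axis_value)
        if z == self.last_zone:
            return None                      # no zone boundary crossed, no bang
        self.last_zone = z
        self.triggers += 1
        if self.triggers < self.triggers_needed:
            return None
        # The more the line has been 'found', the longer the fragment and the less
        # randomised its start point -- so the text becomes more intelligible.
        found = min(1.0, self.triggers / (self.triggers_needed * 4))
        length = self.sample_length * (0.1 + 0.9 * found)
        start = random.uniform(0, (self.sample_length - length) * (1 - found))
        return (start, length)

line = ZoneLine(sample_length=3.0)
for x in [0.05, 0.18, 0.33, 0.52, 0.74, 0.9]:
    print(line.move(x))
```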

The divergent mapping – interpreting the body’s movement through the zones of the space into the multiple modifying parameters within the patch – works alongside the convergent mapping – which runs from the combination of axes/triggers over time to change sample length – to generate understanding. This is a far deeper look at mapping and instrument design than the air marimba – even though the air marimba allows for musical complexity, it doesn’t allow for formal and conceptual complexity. Although difficult to understand and explain, the Magic Gloves in the Poem Of The Body presented something alive and dynamic. For one thing, there’s an active dialogue, a seeking to understand each other, between the technology and the body, that comes across in the making and the performance. And also, it takes ages to start understanding it – it will behave in strange ways, ways that I couldn’t predict or really understand, even though I made the damn thing. In our only full-length performance at Theatre N16 (sadly soon to close), it got all shy and barely triggered.

Procedural Feedback; or: jesus christ, there’s more

Fully aware that you’ve clicked away somewhere more shiny and with fewer words, I’ll plough on into the void, I’ll go down with this ship, I’ll not put my hands up and surrender.

As you might have already been able to tell, what I like is to make something that could be really useful and beautiful and then fuck with it in unpredictable ways. One of the ways to do this is what I arrogantly call procedural feedback. Acoustic feedback – that horrendous squeal you’ve heard at your dad’s welcome-to-middle-age gig – is caused by a microphone picking up its own signal through a speaker: the speaker plays what the microphone hears, the microphone hears what the speaker plays, and round and round it goes. Procedural feedback is the idea of using this principle at a deeper level. An example from Me & My Whale is what’s known as the Tonesplit module. This is the bit that helps create the subaquatic soundscape at the top of the piece:

The patch takes a single input, let’s say a singing voice, delays it, and applies pitch shift to the delays (in the sample above, the pitch always shifts down). The louder the incoming voice, the more extreme the pitch shifts; the softer the incoming voice, the closer to normal they stay. The really cool thing – and I can say it’s cool, because I’m the only one who will ever read this – is that the rate at which the pitch shift changes can itself change: for instance, by tracking the mean amplitude at which the pitch shift happens and adapting it to be more or less extreme. Yeah? Let’s say –

  1. You shout into the microphone that goes into the Tonesplit module
  2. Because it’s loud, the Tonesplit module shifts your voice down about three octaves
  3. But as you keep shouting, the amount by which it shifts lessens as the average loudness increases.
  4. Meaning you need to work even harder to make it more extreme.

It’s a direct mapping form of procedural feedback – change into change – but it already has quite a nice narrative to it, and you can expand that out as much as you want. In the original Me & My Whale, the final section of the piece was generated entirely out of the actions of the first two sections, with the parameters for each of the sound objects and pieces of equipment, and the creation of text and score, in a constant state of fluid change.
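
Here’s a deliberately tiny Python sketch of that shouting scenario – not the actual Tonesplit patch, just its adaptive core, with made-up numbers:

```python
# The louder you are, the further down the delayed copy is pitched -- but a running
# average of your loudness pushes back, so sustained shouting gradually tames the shift.
class Tonesplit:
    def __init__(self, max_shift_semitones=-36, smoothing=0.95):
        self.max_shift = max_shift_semitones   # about three octaves down at full force
        self.smoothing = smoothing
        self.mean_amp = 0.0                    # running average of input loudness

    def process(self, amplitude):
        """amplitude in 0..1; returns the pitch shift (in semitones) for the delayed copy."""
        # Procedural feedback: the measurement adapts the thing being measured.
        self.mean_amp = self.smoothing * self.mean_amp + (1 - self.smoothing) * amplitude
        # How much louder you are than your own recent average decides the shift.
        excess = max(0.0, amplitude - self.mean_amp)
        return self.max_shift * excess

ts = Tonesplit()
for frame in range(5):
    print(round(ts.process(0.9), 2))   # keep shouting: the shift creeps back towards zero
```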

The Magic Gloves are also a very good way of measuring change of state, and impacting on it. I’ve already shared the way that the parameters in Poem Of The Body adapt to and struggle against the performer’s body. Because the measurements that the computer picks up are continuous – from 0 to 65536 (don’t ask me why) – they can easily be interpreted and fed into themselves. We could make a system to measure speed, whose measurements impact the measurement of speed. Or a system that detects certain gestures, whose detection of gestures changes the type of gestures it detects. And these recursive systems can be present in all of the different instruments, sound objects and patches within the performance.
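
The speed example might make that clearer – here’s a toy Python version of a recursive measurement, where the detector’s own output changes how sensitive it is (every number here is hypothetical):

```python
# A speed detector that feeds back into itself: fast movement dulls it,
# stillness slowly re-sharpens it.
class RecursiveSpeed:
    def __init__(self):
        self.prev = 0.0
        self.sensitivity = 1.0

    def measure(self, position):
        speed = abs(position - self.prev) * self.sensitivity
        self.prev = position
        # The measurement changes the measuring.
        self.sensitivity = max(0.1, self.sensitivity - 0.5 * speed + 0.01)
        return speed

s = RecursiveSpeed()
print([round(s.measure(p), 3) for p in [0.0, 0.4, 0.8, 0.8, 0.8]])
```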

When you follow this further, and compound different processes together, it puts the technology in a state of functional flux, compromising its ability to support our storytelling. Eventually, the feedback generated by this flux leads to a liquid meta-narrative in which everything is affected by everything else, changing gesture into tactility: connecting listening, sound-making and touching into one sensation, just like it is in the sea.

argh

ok

 

Embodying Sound in Me & My Whale

So hopefully the Parry Gripp hamster meant you stopped scrolling past all the wordy bits and I’ve got your attention again. hi.

Me & My Whale is about the control of choice. How in control of our own voice and body are we? What does it mean to have your pattern changed by an outside source? When we impose our own voice and body on somebody or something else, how does that change their voice and body?

One way we’ve done this is with a sort of duet in which the two of us control one of our voices together.

In the video above, Mook’s voice is being pitch shifted and delayed depending on how I move the Magic Gloves. It’s rough as hell, and doesn’t always come out right, but this is how it works: when the Magic Gloves go past 90 degrees to the front (ie when I move them forward), they duplicate Mook’s voice. The higher I move them, the higher Mook’s voice sounds, and the further out to the side each hand goes, the more delays occur. It’s a little hard to tell from the video as there’s a bunch of other things going on, so just take our word for it if you have to.
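
As a rough Python sketch of that mapping, per hand – working from the description above rather than the actual patch, so the thresholds and ranges are mine:

```python
# Pushing forward past a threshold switches a duplicated voice on; height sets its
# pitch shift; sideways spread sets how many delay taps it gets. Axes assumed 0..1.
def hand_to_voice(forward, height, sideways, forward_threshold=0.5):
    if forward < forward_threshold:
        return None                          # hand not pushed forward: no duplicate voice
    pitch_shift = 24 * (height - 0.5)        # up to +/- an octave; higher hand, higher voice
    delay_taps = int(sideways * 6)           # further out to the side, more repeats
    return {"pitch_shift_semitones": round(pitch_shift, 1), "delay_taps": delay_taps}

print(hand_to_voice(forward=0.8, height=0.9, sideways=0.3))   # duplicate voice, shifted up
print(hand_to_voice(forward=0.2, height=0.9, sideways=0.3))   # not past the threshold: nothing
```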

So what’s happening here? Well, the power to create is held between the two of us. If I don’t move, Mook’s voice stays the same; if Mook doesn’t sing, my movements don’t do anything. So the way to start creating harmonies, resonance, assonance and all the other tasty things we like about sound is to work together. It’s very easy to make it sound shit, as you can probably hear, and it’s only sometimes that the good stuff comes through. As a man working in digital sound, particularly in sampling and live manipulation of voice, I have to be vigilant about the degree of control I take over another person’s voice (I addressed this, btw, in an acousmatic piece of mine, Flüchtlinge), and shared control is something which interests me in a piece where the (highly gendered) assumption is always that Xavier’s the one who can use the equipment and Mook’s the one who can act. Voice, and adapting voice, can be a very powerful thing, even when (or especially when) you make someone sound like a chipmunk.

A New Development: ‘Finding in Space’

This is something that I’m working on at the moment, so I have no media to show. Finding in Space is something we’ve been playing around with since our residency at the Royal Exchange. In technical terms, it’s a bit of a reverse of the way the Magic Gloves have been used up to this point, so if you’ve only just gotten the hang of it, well, sorry. It’s a bit like virtual reality, but instead of using sight to map objects in space, we use sound. Just like WHALES.

Imagine an empty room. Within this room are invisible shapes – spheres, cuboids, whatever – attached to the walls, coming out the floor, hanging in mid-air. Let’s say there’s two of them. I’ve made you a picture.

a sphere and a cuboid

As you touch the outside of them, they make a sound – let’s say a soft orgasmic groan. We’ve found the boundary – I’ll come back to that in a bit. Then you go through the boundary and, as you get closer to its centre, the orgasmic groan turns into a field recording of the Gloucester Cheese Rolling Festival. The closer you are to the boundary, the more orgasmic the sound, and the closer to the middle, the more … cheese-rolly. By listening-feeling your way through space you perceive the object’s density. Now, remember, you can’t see the object, meaning the understanding of its size, weight and importance is communicated through sound. Many cetaceans use echolocation, like bats, to hear their way around objects and animals in the water. It’s so acute, though, that they can even tell the material that the object is made of. This is the same principle as seismic surveys.
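
The sphere version of that is simple enough to sketch – here it is in Python, with the two sounds stood in for by a pair of gains, and the positions, radius and crossfade curve all made up for illustration:

```python
# Where you are relative to the invisible object sets the crossfade between
# the 'boundary' sound and the 'centre' sound.
import math

def sphere_mix(hand, centre, radius):
    """Return (boundary_gain, centre_gain) for a hand position, or None if outside the object."""
    d = math.dist(hand, centre)
    if d > radius:
        return None                      # outside: silence, we haven't found it yet
    depth = 1 - d / radius               # 0 at the skin of the sphere, 1 at its heart
    return round(1 - depth, 2), round(depth, 2)   # groan fades out as the centre sound fades in

centre, radius = (0.0, 1.5, 0.0), 0.5
print(sphere_mix((0.0, 1.95, 0.0), centre, radius))   # just inside the skin: mostly groan
print(sphere_mix((0.0, 1.55, 0.0), centre, radius))   # near the heart: mostly cheese-rolling
```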

In the patch, the boundary is the limits of the object. I was thinking of making a calculation based on the distance from the boundary to build an echolocation module – you send out a sound, as a whale sends a click, and as you hear your sound come back, you start to work out where the object is. Then I went to the Natural History Museum and sighed, because someone has already made an echolocation game, and it was weird and difficult and not very fun. So it’s not as easy as it seems. Still, maybe it’s something some gallery will fund me to do at some point. Anyway, there’s something interesting about how we could use the density to shape the space: either an evolving sound, or maybe the same sound just getting clearer. Or a sound which gets so loud that we don’t want to move further in, meaning we shape the space with our resistance.

(It’s worth pointing out, btw, that this idea was unwittingly hijacked from Jan Lee & Tim Murray-Browne, who are basically me and my collaborators but grown up. They shared something very similar to this in a talk I went to last year at Hackney Hackspace. I think that was done on a Kinect, though, and way more complicated coding-wise but hey, nothing’s original and you gotta start somewhere, right?)

My main problem at the moment is that, although it’s pretty easy to map a sphere in a space (you give it a point and then say +/- 10cm or whatever in each direction), I’m stuck on how to map even a cube, let alone a more complex shape. Eventually (sneak peek) the idea is to map a whale in freefall, which the submarine captain finds on her travels and whose innards she removes. For this, instead of having to think in complex mathematics, which is so not my speciality, I’m thinking of using the machine learning software Wekinator, developed by Rebecca Fiebrink, whom I met at Goldsmiths. It would let me map some points in space and then interpolate between them beautifully in some magical way that only Rebecca knows. I’ve meant to use it for ages but have never found a good excuse.
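
For what it’s worth, the cube isn’t too awful as long as it lines up with the room – here’s a Python sketch of ‘how far inside an axis-aligned box am I’, with invented coordinates. Anything rotated or whale-shaped is where it stops being this easy, which is where Wekinator comes in:

```python
# Depth inside an axis-aligned box (a box lined up with the room's walls).
def box_depth(hand, box_min, box_max):
    """0 = outside or on the skin; grows towards the middle (distance to the nearest face)."""
    depths = []
    for p, lo, hi in zip(hand, box_min, box_max):
        if p < lo or p > hi:
            return 0.0                   # outside on this axis means outside the box
        depths.append(min(p - lo, hi - p))
    return min(depths)

print(box_depth((0.5, 1.0, 0.5), (0, 0, 0), (1, 2, 1)))   # dead centre: 0.5 inside
print(box_depth((0.1, 1.0, 0.5), (0, 0, 0), (1, 2, 1)))   # near one wall of the box: 0.1 inside
print(box_depth((1.5, 1.0, 0.5), (0, 0, 0), (1, 2, 1)))   # outside: 0.0
```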

We don’t know quite yet when and how we’ll use this technique – the little we did at the Exchange didn’t make it into the sharing at Live Art Bistro in December. When we were at the Exchange, we played around with finding script in the air – recordings of our voices would be placed in randomised positions around the space, and, Magic Gloves in hand, we would have to find them to carry on the story. Sometimes it would take about five minutes of frustration before finding anything. But, again, I love the attempt.

 

Inspirations/ Further Reading/ Go Away

My brain hurts far too much to sum all of this up, so I’m going to leave you with a bunch of interesting things related to motion capture and mapping, which I’ll update gradually as we stumble on more. We’re always hungry to find out more, so please send things our way that we haven’t mentioned. Thanks for hanging around. We’ll be announcing our date for the Manchester sharing in the next week or so, so keep them peeled.

Video

The duo I mentioned earlier – this is a really nice example of their technical and expressive scope.

After I saw this, I basically decided to change artistic direction because this guy had pretty much done what I was on a path to doing.

A role model of mine, Laetitia Sonami, is basically the best. She’s best known (I think) for this, a glove which detects hand and finger movement.

Getting into the realms of wtf, Atau Tanaka sonifies the electrical impulses created by flexing his muscles.

Words

Andy Hunt & Ross Kirk: Mapping Strategies for Musical Performance (2000) [PDF]
a fairly decent and not mind-numbingly dull introduction to mapping in music. It’s from quite a while ago, but still quite helpful for starting to understand more modern concepts.

CataRT: Real-Time Corpus-Based Concatenative Synthesis [Weblink]
Concatenative synthesis, not just the winner of the hardest two words to say in the English language, is a way to map one set of sounds onto another set of sounds by defining how similar they are to each other. CataRT physically maps this out onto a 2D square – I haven’t seen anybody use motion capture to trigger or manipulate it, but that would seem like a fun feedback game.

 

Tone Dance from Video
long may we reign

 

Xavier
praise welcome: xvelastin@gmail.com.


It’s all about that (sea) bass

Infrasound, Motion Capture and Choreography at the HopBarn

[for listening to the audio in this post, use headphones, or speakers which have a good bass response]

Spectro Bubbles
Different ways of visualising low frequencies layered over each other: using Audacity, Sonic Visualiser and Jitter via Max/MSP and mastered in GIMP.

 

Man, infrasound is sexy. Infrasound is sound at incredibly low frequencies – pitches lower than our ears and brains can register. At the lowest end of audibility, there are sounds that we can basically only feel (click here to listen to some simple sine tones at frequencies down to 20 Hz, the bottom of our range, which only high-end headphones and very large subwoofers can reproduce). You’ve felt this when a big lorry goes past – casting a blanket of silence over everything else with sound we can’t exactly hear (cf. Old Tom) – or in a club, when your ribcage starts shaking to the beat.

 

Low tones are great for understanding the physicality of sound. Because their wavelengths are so long, the sound waves cancel out or reinforce each other when they meet in a space, meaning there are pockets of physical space in a room where the sound changes dramatically (the quiet spots are called nodes). We were exploring this when in residency at the HopBarn a few weeks ago with sound artist Angie Atmadjaja, who specialises in psychoacoustic phenomena. It’s difficult to get across how weird this is. I had heard of nodes and cancellation in theory before then, but being there in the space was the first time I’d experienced it. There’s something really special and present about just walking, listening, reacting in a room – and because of the somatic effect low frequencies have, you’re listening with your entire body.
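
The maths behind those pockets is just two waves adding up – a tiny Python illustration, with the speed of sound and 52 Hz (a frequency that comes up again below) plugged in:

```python
# Two 52 Hz waves arriving with different path lengths either reinforce or cancel,
# depending on where in the room you stand.
import numpy as np

def combined_level(path_difference_m, freq=52.0, speed_of_sound=343.0):
    """Peak level of two equal sine waves whose paths differ by path_difference_m metres."""
    phase = 2 * np.pi * freq * path_difference_m / speed_of_sound
    return abs(np.cos(phase / 2)) * 2          # 2 = fully reinforced, 0 = a node

print(round(combined_level(0.0), 2))               # same distance from both sources: 2.0
print(round(combined_level(343.0 / 52.0 / 2), 2))  # half a wavelength's difference: 0.0, a node
```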

 

So we made a dance.

At its core, this was an exercise to integrate the properties of infrasound with some of the other things important in Me & My Whale: liveness, agency and the body. It resulted in a duet between me and Mook with a fairly simple score – to react – but which brought on lots of pretty interesting complexities, at least interesting if you’re into that sort of thing.

 

Method
At some point, I’ll dedicate a post to talking about gesture control – here, we’re using some soft-hacked gaming controllers (hereafter called the Magic Gloves) which are essentially two gloves on strings coming out of a box. I can get the position data from each of the gloves and use it to do anything.

One of the main resonant frequencies of the space (52 Hz – which is also a very important frequency for this project, more on that soon) was used to create two simple sine tone generators. Then, in Max/MSP, frequency and phase parameters were mapped onto the Magic Gloves so that distance and angle would subtly change the sound – bringing it up by a fraction of a hertz, moving its phase by a gnat’s wing. Each glove affects its own sine wave generator, which goes to its own speaker. This is important because it means the sine waves never affect each other in the software or inside a speaker cabinet, keeping all of the wave interaction in the space.

 

Patch - Gametrak Spreading
The business end of the Max/MSP patch: two [cycle~] objects making sine waves, with the inputs at the top of the picture coming from the movements of the Magic Gloves.
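
For the non-Max readers, here’s the same idea sketched offline in Python – two sine generators around 52 Hz, one per channel, each nudged by its own glove; the axis scaling is illustrative, not the patch’s real numbers:

```python
# Two detuned sine generators, one glove and one channel each; the waves only
# ever meet in the room, never in the software or a speaker cabinet.
import numpy as np

def tone_pair(left_glove, right_glove, base=52.0, duration=2.0, sr=44100):
    """Each glove is (distance, angle) in 0..1; returns a stereo buffer, one glove per channel."""
    t = np.arange(int(duration * sr)) / sr
    channels = []
    for distance, angle in (left_glove, right_glove):
        freq = base + 0.5 * distance           # at most half a hertz of detune
        phase = 2 * np.pi * 0.05 * angle       # a gnat's wing of phase offset
        channels.append(0.5 * np.sin(2 * np.pi * freq * t + phase))
    return np.stack(channels, axis=1)          # shape (samples, 2)

buffer = tone_pair(left_glove=(0.2, 0.1), right_glove=(0.8, 0.6))
print(buffer.shape)                            # (88200, 2)
```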

 

Choreography
Our score was to react – to the sounds we were hearing, and to each other’s presence. That’s not particularly special, but with the added layer that the tones interact with each other in the space, it meant we had another performer with us – the room itself. As we moved through the space reacting to sound, and through our gestures changing the sound, what we heard changed as we passed through room nodes or moved the position of our heads. It actually turns into a really meditative blend between deep listening and a game. Another cool thing was the way this exercise sonified proxemics – argh, sorry – made real our physical and social distance through sound: when we were near each other, the difference between our two generated tones was very small, and our individual hearing of the sounds in the space was similar; but when we were on different sides of the room, or at different heights, our tones were phasing like crazy, and we would have heard very different things.

Tone Dance from Video
Still from the video of Mook on the left and me on the right: I’ve used the Magic Gloves in performance for a while now, and I’m very grateful to Tom Mudd from Goldsmiths for first showing them to me.

 

Reflections
There’s a lot of choreography scored from sound, and in my work as a sound designer I’ve seen how important music is in the devising process, but there’s something really powerful (maybe in just a wanky way, but powerful anyway) about two people who are sometimes following the same directions and are sometimes completely against each other, but are mainly unaware of when the other one is and isn’t. This dance doesn’t really work as a performance in a sit-in-the-dark-and-judge sort of way – it’s definitely a piece to do rather than watch. By the way, the sound the camera captured has very little to do with what we actually heard, which is pretty neat. As an exercise, though, it’s given us some room for thought and experimentation on the interaction between body, dynamic movement and environment. I’m really excited about the formal implications it has – what happens to our control over material when we perform fluid scores? how can the score change depending on where it’s observed from? what happens to your body when it learns it’s being tracked? what influence does the watching of an action have on the action, and how can that be sonified? and what the hell does this have to do with whales?

 

Xavier

 

~ ~ ~

Many thanks to Angie & Jon at the HopBarn for their support.

Glossary
Infrasound: any sound at a frequency too low for humans to hear (below roughly 20 Hz).
Sine tone: the simplest synthesised sound, and one that can’t occur in nature. It’s basically a beep – very exciting.
Hertz (Hz) – the unit of frequency, ie how high or low a sound is.
Psychoacoustics – the study of the psychological and physiological perception of sound.
Max/MSP – a programming environment that is used by nerds like me because you can plug anything into anything else and make sound from it.