The Magic Gloves of Destiny

The whale moves in a sea of sound:
Shrimps snap, plankton seethes,
Fish croak, gulp, drum their air-bladders,
And are scrutinised by echo-location,
A light massage of sound
Touching the skin.

– Whale Nation, Heathcote Williams

cropped-mook-mocap1.jpg
Mook in the Magic Gloves of Destiny at our very first show on our very first day of development, at Live Art Bistro, as part of Come Find Us.

Contents

  1. Controlling Sound with Gesture
  2. The Magic Gloves of Destiny
  3. The Nitty Gritty
  4. Embodying Sound in Me & My Whale
  5. Inspirations

Controlling Sound with Gesture

A lot of digital music is performed by a frowning face behind a glowing apple logo. We have no problem with that in itself, but we intend to think about the scenography like theatre makers, just as we think about the text like sound artists. And a lot of our separate practices, and a lot of the ideas that have gone into Me & My Whale, have to do with the body as object/subject/host.

Motion capture is a technique for measuring and tracking position and movement digitally – it’s used a lot in commercial film (Gollum) and gaming (Kinect, Wii). Before dedicated motion-tracking hardware, it was done with cameras – and you can still find pieces made with webcams and laptop cameras, and I’m sure there are some Periscope ones too. The most common approach I’ve seen in digital art – installations, performances, exhibitions – is the Kinect, because its software gives you a skeleton. That means you have a really clear interpretation of where a person’s body is, so you can attach sound to very specific body parts and their individual quality of movement.

vitruvian-kinect
the Vitruvian Kinect skeleton

Well, anyway, we’re not gonna learn how to programme a Kinect because effort. So, instead, we’re using these babies:

THE MAGIC GLOVES OF DESTINY

gametrak

aka the Gametrak by Elliot Myers. The name was coined by Powder Keg during their show BEARS, which incidentally was where Mook & I met. It’s a piece of equipment that came with a golf simulator from the early noughties; I was shown it by Tom Mudd at Goldsmiths. It’s a box with strings on ball joints, with gloves at the end. It measures the position of each hand relative to the box as six continuous numbers (if the box is on the floor: left-to-right, forward-to-back and up-down for each hand). In the game, it’s used to measure your golf swing:
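If you like to think in code, this is roughly the data you get out of them – a Python sketch rather than anything official, with names of my own invention:

```python
from dataclasses import dataclass

@dataclass
class GametrakFrame:
    """One reading from the Magic Gloves: three axes per hand.
    (My own naming - the device itself just reports raw HID values.)"""
    left_x: float   # left-to-right
    left_y: float   # forward-to-back
    left_z: float   # up-down (how far the string is pulled out)
    right_x: float
    right_y: float
    right_z: float

def normalise(raw: int) -> float:
    """Squash a raw 16-bit value (0-65535) into the range 0..1."""
    return raw / 65535
```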

jerrygolf2
remember to square your shoulders

They run through Max/MSP, which is the controlling software used in Me & My Whale, and can be used in loads of different ways, which we’ll go into further down. I haven’t seen them used well much elsewhere – they’re used really nicely in the dance film Oblique Theorem, but apart from that, the function of the gloves in other people’s pieces tends to be hidden. The really nice thing about these controllers is that it can be very clear how they function – the strings are bright orange, there is a clear tension in them, and when you move, you hear a sound. That makes it much easier to start building a language between your body and the technology, which in turn makes it simpler for an audience to listen to that language.

At the same time, because they’re not well known (they sold terribly), they don’t carry the cultural baggage of a PlayStation joystick, where you can’t stop thinking of it as a tool for gaming (which is used consciously by, for instance, MizKai, who DJs chiptune with a Game Boy). They’re both something people have no prior knowledge of and something easy for people to get, allowing us to join body semiotics, proxemics and technology without looking wanky as fuck.

More complex technologies, like the Kinect, don’t have this. They’re the equivalent of the frowning face behind an apple logo, and unless they’re twinned with live projection mapping, they’re about as interesting. And although there has been some amazing work done with the Kinect which I draw on (see inspirations), I prefer the lame black box with the bright orange strings.

The two of us doing our Tone Dance at the HopBarn, using the Magic Gloves to affect low frequency resonance. Read up on that here.

 

The Nitty Gritty: Ways of Using the Magic Gloves

Well, if you’ve got this far without being completely bored out of your mind, well done. So, on to level two.

The simplest, and most obvious, way to use them is to simulate a theremin: one hand controls the pitch of a sine tone, the other controls its volume. That way you can make some nice sweeping notes, and it can be quite melodic (‘Over the Rainbow’ on theremin). But to be honest, that’s dull as hell, and there’s already a theremin – it’s called a theremin.
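In code, that mapping is almost nothing – a Python sketch, assuming the glove heights arrive normalised to 0..1:

```python
def theremin_map(pitch_hand_z: float, volume_hand_z: float) -> tuple[float, float]:
    """Direct mapping, theremin-style: one hand's height sets the
    frequency of a sine tone, the other hand's height sets its volume."""
    freq = 220.0 * 2 ** (pitch_hand_z * 3)  # sweep three octaves up from A3
    amp = volume_hand_z ** 2                # squared so the fade-out feels gentler
    return freq, amp
```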

I used these recently when doing live sound for a dance piece for Ekata Theatre, which I’ll go into now because it’s my blog and I can do what I want.

Unbelonger (2017): Direct Mapping

The Nitty:

The piece had a lot of improvised moments, and was itself devised, so I felt the best way to go about my sound design was to play it live, rather than play it back through QLab, which is how most theatre/dance sound is triggered. Using the air marimba instrument, I played background themes, reactive twiddles and a simple waltz.

The Gritty:

I ran this off Ableton, sending MIDI via loopMIDI from Max/MSP, which in turn was connected to a [hi] object picking up the Magic Gloves. I placed a threshold roughly 10 degrees off centre on the X axis (left-to-right) for each of the gloves – when my hand passes that threshold into the middle, it triggers a MIDI note-on message. The velocity (representing force of impulse) is affected by how far back I start the swing (this doesn’t work perfectly yet, as you might be able to tell). The pitch of the MIDI note is determined by the height (the Z axis), with the left hand an octave or so lower, as it is on a real marimba or piano. I forced a scale onto the outgoing notes, though – based on my favourite chord (C#m with added 7/9/13, since you ask) – or it would be stupidly difficult to find the right notes. I also added a dead zone at the very bottom of the Z axis range, so it only sends MIDI notes once I’ve pulled the string up a bit; otherwise it would trigger while I’m getting into the gloves.
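For the code-minded, the trigger logic boils down to something like this – a Python stand-in for the Max patch, with illustrative numbers rather than the real ones:

```python
SCALE = [0, 2, 3, 7, 9, 10]  # C#m with added 7/9/13, as semitones above C#

def quantise_to_scale(z: float, base_note: int = 49) -> int:
    """Force normalised height (0..1) onto the scale. base_note 49 = C#3;
    the left hand would get a lower base_note, like on a real marimba."""
    step = int(z * 3 * len(SCALE))            # three octaves' worth of steps
    octave, degree = divmod(step, len(SCALE))
    return base_note + 12 * octave + SCALE[degree % len(SCALE)]

class AirMarimbaHand:
    THRESHOLD = 0.1   # roughly 10 degrees off centre on the X axis
    DEAD_ZONE = 0.05  # ignore notes at the very bottom of the Z range

    def __init__(self, sign: int = 1):
        self.sign = sign        # +1 for one hand, -1 for the other
        self.prev_x = 0.0
        self.swing_start = 0.0  # furthest-back point of the current swing

    def update(self, x: float, z: float):
        """Feed in each new (x, z) reading; returns (midi_note, velocity)
        at the moment the hand crosses the threshold into the middle,
        otherwise None."""
        note_on = None
        outside = self.sign * x > self.THRESHOLD
        was_outside = self.sign * self.prev_x > self.THRESHOLD
        if outside:
            self.swing_start = max(self.swing_start, self.sign * x)
        if was_outside and not outside and z > self.DEAD_ZONE:
            velocity = min(127, int(self.swing_start * 200))  # bigger swing, harder hit
            note_on = (quantise_to_scale(z), velocity)
        if not outside:
            self.swing_start = 0.0
        self.prev_x = x
        return note_on
```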

Now that you completely understand the principle, you can see how I use the same mechanics with other sounds in different ways. In the video below, I also use forward-and-backward movement (the Y axis) to bend the pitch of a synthesised voice.

Gonna introduce some terminology for you here. This is known as direct mapping – that means: A results in B; changing A results in B changing. In the example above, moving the gloves further away from the ground changes the pitch, and that’s all, and that’s that. Mapping control A to parameter B, control C to parameter D, etc.

But we also have convergent mapping, in which many controls affect one parameter, and divergent mapping, where one control affects many parameters. This figure explains it better than I will – it uses the example of a person playing a wind instrument:

mapping.png
from A NIME Reader: Fifteen Years of New Interfaces for Musical Expression, p341.

Real acoustic instruments are, as the book says, a ‘web of interconnections’ (ibid). And if we’re going to be making instruments, they’ve got to be as complex and interrelated as real ones. And that means drawing on musical as well as technical sensitivity.
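To make the figure concrete, here’s its wind-instrument example as a toy Python function – my paraphrase of the diagram, not anything from the book:

```python
def wind_instrument(breath: float, lip_tension: float, fingering: int) -> dict:
    """Toy model of the figure's web of interconnections.
    Divergent: breath alone pushes on both loudness and brightness.
    Convergent: pitch is settled between fingering and lip tension."""
    loudness = breath                                     # divergent target one
    brightness = 0.3 + 0.7 * breath                       # divergent target two
    base_freq = 261.63 * 2 ** (fingering / 12)            # fingering picks the note
    pitch = base_freq * (1 + 0.02 * (lip_tension - 0.5))  # the lips bend it
    return {"loudness": loudness, "brightness": brightness, "pitch": pitch}
```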

Poem of the Body (2016): divergent convergence

The first time I used the Magic Gloves was with choreographer and dancer Deliah Seefluth in our collaboration Motion.Captured. The piece was called The Poem Of The Body, and was based on text by Fred M-G. This piece is super complex and it would be boring and irrelevant to go into all of it here, so I’ll just highlight the bits that relate to mapping and instrument design.

The Nitty:

Deliah and I recorded our voices speaking Fred’s poem. Those recordings were then broken into a series of samples, one per line. Deliah was attached between two sets of Magic Gloves – four strings, one to each of her ankles and wrists. She then moved in specific ways, a mix of fixed choreography and improvisation, to ‘find’ the text by moving through space. But for each line, the parameters for how the words came out changed – sometimes they sounded every time she moved, sometimes she had to move very quickly to get any sense, sometimes they triggered automatically and it was her movement that stopped them – and the effect was a fighting against the medium to be able to be understood. It was all in the attempt.

The Gritty:

Ooookay. This was run entirely off Max/MSP. Each of the samples was loaded into a [polybuffer~] object. The X, Y and Z axes of the four Gloves were divided into ten ‘zones’; moving into a new zone would send a [bang] message (ie a trigger). When sufficient triggers had been received for a given axis, a [groove~] object would play a fragment of the sample. The more fragments were found, the longer each fragment got and the lower the randomisation of its starting point (and therefore the more understandable the text became). So as Deliah moved through the zones, the amount of glitch in the sample reduced – though this was sometimes flipped on its head, so the stiller she remained, the more understandable the sample. Still with me? A timer moved things on to the next sample, during which time Deliah would move and try to trigger the broken spoken word. Each separate line had its own set of parameters, which in turn were affected by the way Deliah moved.
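If that’s a wall of words, here’s the core of it as a Python sketch – the real thing lives in Max objects, so treat this as the logic, not the implementation:

```python
import random

def zone(axis_value: float) -> int:
    """Which of the ten zones a normalised axis reading falls into;
    a change of zone is the [bang] in the Max patch."""
    return min(9, int(axis_value * 10))

class LineState:
    """One line of the poem. Triggers accumulate; each one lengthens
    the fragment played back and narrows the random start point, so
    the text becomes more intelligible the more it is 'found'."""
    def __init__(self, sample_length_ms: float, triggers_needed: int = 10):
        self.sample_length = sample_length_ms
        self.triggers_needed = triggers_needed
        self.found = 0

    def on_zone_crossing(self):
        self.found = min(self.found + 1, self.triggers_needed)

    def next_fragment(self) -> tuple[float, float]:
        """Returns (start_ms, length_ms) for the next [groove~] playback."""
        progress = self.found / self.triggers_needed
        length = self.sample_length * (0.1 + 0.9 * progress)
        start = random.uniform(0, (1 - progress) * (self.sample_length - length))
        return start, length
```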

The divergent mapping – interpreting the body’s movement through the zones of the space into the multiple modifying parameters within the patch – works alongside the convergent mapping – which runs from the combination of axes/triggers over time to change sample length – to generate understanding. This is a far deeper look at mapping and instrument design than the air marimba – even though the air marimba allows for musical complexity, it doesn’t allow for formal and conceptual complexity. Although difficult to understand and explain, the Magic Gloves in the Poem Of The Body presented something alive and dynamic. For one thing, there’s an active dialogue, a seeking to understand each other, between the technology and the body, that comes across in the making and the performance. And also, it takes ages to start understanding it – it will behave in strange ways, ways that I couldn’t predict or really understand, even though I made the damn thing. In our only full-length performance at Theatre N16 (sadly soon to close), it got all shy and barely triggered.

Procedural Feedback; or: jesus christ, there’s more

Fully aware that you’ve clicked away somewhere more shiny and with fewer words, I’ll plough on into the void, I’ll go down with this ship, I’ll not put my hands up and surrender.

As you might already have been able to tell, what I like is to make something that could be really useful and beautiful, and then fuck with it in unpredictable ways. One way to do this is what I arrogantly call procedural feedback. Acoustic feedback – that horrendous squeal you’ve heard at your dad’s welcome-to-middle-age gig – happens when a microphone picks up its own signal from a speaker, which amplifies it again, round and round: a feedback loop. Procedural feedback is the idea of using this principle at a deeper level. An example from Me & My Whale is what’s known as the Tonesplit module. This is the bit that helps create the subaquatic soundscape at the top of the piece:

The patch takes a single input – let’s say a singing voice – delays it, and applies pitch shift to the delays (in the sample above, the pitch always shifts down). The louder the incoming voice, the more extreme the pitch shifts; the softer the incoming voice, the closer to normal they stay. The really cool thing – and I can say it’s cool, because I’m the only one who will ever read this – is that the rate at which the pitch shift changes can itself change: for instance, by picking up the mean amplitude where the pitch shift happens and adapting it to be more or less extreme. Yeah? Let’s say (there’s a code sketch after this list, if that helps) –

  1. You shout into the microphone that goes into the Tonesplit module.
  2. Because it’s loud, the Tonesplit module shifts your pitch down about three octaves.
  3. But as you keep shouting, the average loudness increases, and the amount by which it shifts lessens.
  4. Meaning you need to work even harder to make it more extreme.
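A minimal sketch of that loop in Python – the numbers are made up, the shape of the logic is the point:

```python
class Tonesplit:
    """Sketch of the Tonesplit idea: the depth of the downward pitch
    shift tracks loudness, while a slow running average of loudness
    erodes the effect - the procedural feedback."""
    def __init__(self):
        self.mean_amp = 0.0

    def shift_semitones(self, amp: float) -> float:
        """amp is the incoming level, 0..1; returns a downward shift."""
        self.mean_amp = 0.99 * self.mean_amp + 0.01 * amp  # slow memory of loudness
        depth = max(0.0, amp - self.mean_amp)  # a loud history raises the bar
        return -36.0 * depth                   # up to roughly three octaves down
```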

It’s a direct mapping form of procedural feedback – change into change – but it already has quite a nice narrative to it, and you can expand that out as much as you want. In the original Me & My Whale, the final section of the piece was completely generated out of the actions of the first two sections, with the parameters of every sound object and piece of equipment, and the creation of text and score, in a constant state of fluid change.

The Magic Gloves are also a very good way of measuring change of state, and acting on it. I’ve already shared the way the parameters in The Poem Of The Body adapt to and struggle against the performer’s body. Because the measurements picked up by the computer are continuous – from 0 to 65,535 (that’s 16 bits, since you asked not to ask) – they can easily be interpreted and fed into themselves. We could make a system to measure speed, whose measurements impact the measurement of speed. Or a system that detects certain gestures, whose detection of gestures changes the type of gestures it detects. And these recursive systems can be present in all of the different instruments, sound objects and patches within the performance.
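The speed example, sketched (one axis, Python again, illustrative constants):

```python
class RecursiveSpeedometer:
    """A toy recursive system: the measured speed feeds back into how
    speed is measured, by adjusting its own smoothing coefficient."""
    def __init__(self):
        self.speed = 0.0
        self.prev_pos = 0.0
        self.smoothing = 0.5

    def update(self, position: float) -> float:
        raw = abs(position - self.prev_pos)
        self.prev_pos = position
        # the measurement...
        self.speed = self.smoothing * self.speed + (1 - self.smoothing) * raw
        # ...impacts the measuring: fast movement makes the meter more
        # sluggish, stillness lets it become twitchy again
        self.smoothing = min(0.99, 0.5 + 5 * self.speed)
        return self.speed
```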

When you follow this further, and compound different processes together, it puts the technology in a state of functional flux, compromising its ability to support our storytelling. Eventually, the feedback generated by this flux leads to a liquid meta-narrative in which everything is affected by everything else, changing gesture into tactility: connecting listening, sound-making and touching into one sensation, just like it is in the sea.

argh

ok

 

Embodying Sound in Me & My Whale

So hopefully the Parry Gripp hamster meant you stopped scrolling down past all the wordy bits and I’ve got your attention again. hi.

Me & My Whale is about the control of choice. How in control of our own voice and body are we? What does it mean to have your pattern changed by an outside source? When we impose our own voice and body on somebody or something else, how does that change their voice and body?

One way we’ve done this is a sort of duet in which one of us controls the other’s voice.

In the above, Mook’s voice is being pitch shifted and delayed depending on how I move the Magic Gloves. This is rough as hell, and doesn’t always come out right, but this is how it works: when the Magic Gloves go past the 90-degree angle to the front (ie when I move them forward), they duplicate Mook’s voice. The higher I move them, the higher Mook’s voice sounds, and the further out to the side each hand goes, the more delays occur. It’s a little hard to tell from the video as there’s a bunch of other things going on, so just take our word for it if you have to.
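The rough shape of that mapping, in Python – the constants are my guesses at what’s in the patch, not the patch itself:

```python
def duo_voice_map(forward: float, height: float, side: float):
    """One hand of the shared-voice mapping. All inputs normalised 0..1;
    forward = the Y axis (0.5 is the 90-degree point), side = distance
    out from centre on the X axis."""
    duplicate_on = forward > 0.5        # past 90 degrees: the voice doubles
    pitch_shift = height * 24.0         # higher hand, higher copy (in semitones)
    delay_taps = int(side * 8)          # further out, more delays
    return duplicate_on, pitch_shift, delay_taps
```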

So what’s happening here? Well, the power to create is held between the two of us. If I don’t move, Mook’s voice stays the same; if Mook doesn’t sing, my movements don’t do anything. So the way to start creating harmonies, resonance, assonance and all the other tasty things we like about sound is to work together. It’s very easy to make it sound shit, as you can probably hear, and it’s only sometimes that the good stuff comes through. As a man working in digital sound, particularly in sampling and live manipulation of voice, I have to stay vigilant about the degree of control I have over another’s voice (I addressed this, btw, in an acousmatic piece of mine, Flüchtlinge), and shared control is something which interests me in a piece where the (highly gendered) assumption is always that Xavier’s the one who can use the equipment and Mook’s the one who can act. Voice, and adapting voice, can be a very powerful thing, even when (or especially when) you make someone sound like a chipmunk.

A New Development: ‘Finding in Space’

This is something I’m working on at the moment, so I have no media to show. Finding in Space is something we’ve been playing around with since our residency at the Royal Exchange. In technical terms, it’s a bit of a reverse of the way the Magic Gloves have been used up to this point, so if you’ve only just gotten the hang of it, well, sorry. It’s a bit like virtual reality, but instead of using sight to map objects in space, we use sound. Just like WHALES.

Imagine an empty room. Within this room are invisible shapes – spheres, cuboids, whatever – attached to the walls, coming out the floor, hanging in mid-air. Let’s say there’s two of them. I’ve made you a picture.

a sphere and a cuboid

As you touch the outside of them, they make a sound – let’s say a soft orgasmic groan. We’ve found the boundary – I’ll come back to that in a bit. Then you go through the boundary and, as you get closer to the object’s centre, the orgasmic groan turns into a field recording of the Gloucester Cheese Rolling Festival. The closer you are to the boundary, the more orgasmic the sound; the closer to the middle, the more … cheese-rolly. By listening-feeling your way through space, you perceive the object’s density. Now, remember, you can’t see the object, meaning your understanding of its size, weight and importance is communicated through sound. Many cetaceans use echolocation, like bats, to hear their way around objects and animals in the water. It’s so acute that they can even tell what material an object is made of – the same principle as seismic surveys.
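The crossfade is simple once you can measure the distance from the hand to the object’s centre – a Python sketch, with ‘groan’ and ‘cheese’ standing in for whichever two sounds get loaded:

```python
import math

def sphere_mix(hand: tuple, centre: tuple, radius: float) -> tuple[float, float]:
    """Returns (groan_level, cheese_level) for a hand position and an
    invisible sphere: all groan on the boundary, all cheese at the
    centre, silence outside."""
    d = math.dist(hand, centre)
    if d > radius:
        return 0.0, 0.0              # outside the object: nothing
    depth = 1 - d / radius           # 0 at the boundary, 1 at the centre
    return 1 - depth, depth          # linear; equal-power would be smoother
```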

In the patch, the boundary is the limit of the object. I was thinking of making a calculation based on the distance from the boundary to build an echolocation module – you send out a sound, as a whale sends a click, and as you hear your sound come back, you start to work out where the object is. Then I went to the Natural History Museum and sighed, because someone had already made an echolocation game, and it was weird and difficult and not very fun. So it’s not as easy as it seems. Still, maybe it’s something some gallery will fund me to do at some point. Anyway, there’s something interesting about how we could use the density to shape the space: either an evolving sound, or maybe the same sound just getting clearer. Or a sound which gets so loud that we don’t want to move further in, meaning we shape the space with our resistance.

(It’s worth pointing out, btw, that this idea was unwittingly hijacked from Jan Lee & Tim Murray-Browne, who are basically me and my collaborators but grown up. They shared something very similar to this in a talk I went to last year at Hackney Hackspace. I think that was done on a Kinect, though, and way more complicated coding-wise but hey, nothing’s original and you gotta start somewhere, right?)

My main problem at the moment is that, although it’s pretty easy to map a sphere in a space (you give it a centre point and say ‘within 10cm of it’ or whatever), I’m stuck on how to map even a cube – at least, one that isn’t politely lined up with the axes – let alone a more complex shape. Eventually (sneak peek) the idea is to map a whale in freefall, which the submarine captain finds on her travels and whose innards she removes. For that, instead of having to think in complex mathematics, which is so not my speciality, I’m thinking of using the machine learning software Wekinator, developed by Rebecca Fiebrink, whom I met at Goldsmiths. It would let me map some points in space and then interpolate between them beautifully in some magical way that only Rebecca knows. I’ve been meaning to use it for ages but have never found a good excuse.
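For what it’s worth, the easy cases look like this in Python – it’s rotation and genuinely irregular shapes (like a whale) where this breaks down and Wekinator starts to earn its keep:

```python
import math

def in_sphere(p: tuple, centre: tuple, radius: float) -> bool:
    """Sphere: a centre point and a radius."""
    return math.dist(p, centre) <= radius

def in_box(p: tuple, corner_lo: tuple, corner_hi: tuple) -> bool:
    """Axis-aligned cuboid: a min corner and a max corner. Rotate it,
    though, and you're suddenly doing proper matrix maths."""
    return all(lo <= v <= hi for v, lo, hi in zip(p, corner_lo, corner_hi))
```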

We don’t know quite yet when and how we’ll use this technique – the little we did at the Exchange didn’t make it into the sharing at Live Art Bistro in December. When we were at the Exchange, we played around with finding script in the air – recordings of our voices would be in randomised positions around the space and, Magic Gloves in hand, we would have to find them to carry on the story. Sometimes it would take about five minutes of frustration before finding anything. But, again, I love the attempt.

 

Inspirations/ Further Reading/ Go Away

My brain hurts far too much to sum all of this up, so I’m going to leave you with a bunch of interesting things related to motion capture and mapping, which I’ll update gradually as we stumble on more. We’re always hungry to find out more, so please send things our way that we haven’t mentioned. Thanks for hanging around. We’ll be announcing the date for the Manchester sharing in the next week or so, so keep them peeled.

Video

The duo I mentioned earlier – this is a really nice example of their technical and expressive scope.

After I saw this, I basically decided to change artistic direction because this guy had pretty much done what I was on a path to doing.

A role model of mine, Laetitia Sonami is basically the best. She’s best known (I think) for this, a glove which detects hand and finger movement.

Getting into the realms of wtf, Atau Tanaka sonifies the electrical impulses created by flexing his muscles.

Words

Andy Hunt & Ross Kirk: Mapping Strategies for Musical Performance (2000) [PDF]
a fairly decent and not mind-numbingly dull introduction to mapping in music. It’s from quite a while ago, but still quite helpful for starting to understand more modern concepts.

CataRT: Real-Time Corpus-Based Concatenative Synthesis [Weblink]
Concatenative synthesis – not just the winner of the hardest two words to say in the English language – is a way to map one set of sounds onto another by defining how similar they are to each other. CataRT physically maps this out onto a 2D square – I haven’t seen anybody use motion capture to trigger or manipulate it, but that seems like a fun feedback game.

 

Tone Dance video
long may we reign

 

Xavier
praise welcome: xvelastin@gmail.com.


We Shared A Whale

So we did our sharing at Live Art Bistro last Saturday! The performance came out at just under an hour (managed with some brutal cutting) and apart from some moments of horrendous feedback, it actually worked out. It was really rough around the edges, obviously, but it was fun and, after a long and stressful day setting it up without a run beforehand, it was such a relief that it went well.

atlantis singing.png

More people than we expected turned up, and we had an interesting chat after, where people gave their feedback. The things that came up were:

  • People didn’t understand the nuances of the story, and when we explained it to them they really liked it – but they would have liked it clearer in performance;
  • Although we did a bit of playing with speaker placement, some more advanced spatialisation would be good;
  • The relationship between ourselves and the technology was unclear – what relation do we have to them as objects?;
  • The sound blended well between “noisy” and “beautiful”;
  • Some engineering tech notes – the on-stage monitors were pushed far too hard and were apparently smoking (they’re okay, by the way, thank Whale);
  • People loved how the whale’s voice was actually the captain’s voice being manipulated;
  • Sightlines in some parts were an issue;
  • There was a lot of ‘what was this piece about?’;
  • Some structural issues with the ending (which I won’t spoil here!) led to a bit of confusion about what actually happens.

All in all, good feedback. I have a list of about a million things to work on aside from that. But the two big things that came up for me during the work in progress performance were music and narrative. I’ll talk a bit about them now, because that’s what blogs are meant to do.

Music
One of the things I really liked from this was our variety-show-style singing of the theme song, written by our composition advisor Anna Clock.

surfingthrough
The Me & My Whale theme song

Our singing isn’t perfect – I have a musical ear but a shit voice; Mook has a great voice but finds pitching some of the whole-tone intervals tricky – and I think we need some proper vocal effects on the microphones so it doesn’t sound so dry. But aside from that, people really responded to it. I’ve also been humming it to myself pretty much constantly since then. When we do it well, I’ll put a recording up for your listening pleasure.

Sometimes, when something is just good or fun, it doesn’t need a huge amount of justification. I know it’s unclear why we’re doing this (see below for why I don’t care), but I like the idea that we as narrators get so carried away with the story that we choose to do a song and dance about it. We break from our roles – either as weird arbiters of the ocean or as the submarine captain herself – to do something that is, to be honest, way more like how Mook and I play around generally: silly, a bit creepy, and desperately grabbing the spotlight. One of the things we noticed was that we’re both pretty playful people making work that’s really cynical and depressing, and it’s nice that we get to show a bit of fun. I’m currently working on a lounge jazz piano version.

 

Narrative

judging the dome.png
Finding the Dome, the protective bubble that surrounds the city of whales. Our captain steps through it, while remarking on the nature of surface tension. It’s related to the brine lakes of the deep ocean, and to species dysmorphia (“a second skin”). Moving through it means entering a different consistency, changing our performative roles. Anyone get that?

Right. My personal attitude to narrative is a bit confused. I do like how this iteration of Me & My Whale tells a story a lot more, and I do like the story. I know it’s something that Mook and Anna want to push. It’s way more accessible, it’s something an audience can grab onto even with complicated/unfamiliar images and sounds, and it doesn’t have to come at the expense of complexity or content as long as it’s done cleverly. Maybe it’s because I’m not so confident in my writing that I can’t see how it can be done in conjunction with hidden, complex generative processes. It needs to be calculated very carefully, because at the end of the day, if you watched a hundred monkeys trying to write Macbeth you’d just end up with a faceful of poo. I think my problem is that I find it more interesting to enact the attempt to tell a story, especially when it’s a good one, because that way we’re constantly fighting and trying, literally battling the medium itself. One way we could do this is to apply a score to the performance – generated maybe by the audience, by cetacean migration patterns, by the resonance in the room, through a written score, or by the pattern of cables in the space. However we do it, it would be partly within our control and partly the result of leakage from the performance event itself. I think it opens some really exciting possibilities to look at agency and choice itself – something that can be mapped onto the consequences of stealing voice from people, ideologies and nature.

dav
Two form scores I wrote during rehearsals in October – each colour could mean a separate narrative, or a way to tell each narrative. Our journey through them is the performance event.

As an absolute nerd, I actually don’t really care much for the idea of understanding everything in one sitting – I’d rather the moments, images and sounds stay around in the audience’s brain, so people can come to a more gradual understanding of what we’re trying to say. One of my more complex pieces with choreographer Deliah Seefluth, The Poem Of The Body, had a small set of materials (broken spoken word) triggered by gesture with motion capture controllers, whose parameters were adaptive, randomised and difficult to predict. Part of the performance was in actually finding the poem itself, and it wasn’t something you could just memorise and replicate. I think the obsession with making pieces with Snapchat-length narratives and Instagram-level depth has a lot to do with a neoliberal idea of monetising our attention. Maybe it’s just me, but the experiences that have really moved me in an intellectual capacity have involved coming back to them over a long period of time in my own head, or re-watching/re-listening. Like reading Foucault or Maria Mies, or listening to SOPHIE, Death Grips or Debussy’s Préludes for the first time: it was only over a long period of active and inactive engagement that they actually became meaningful. I don’t think theatre people are like that very much, not really. It’s worth mentioning that of the feedback I’ve received since the very first Whale (done at Goldsmiths in September 2016), the only people who cared about understanding the narrative right away were people from a traditional theatre background. Even if you go all John Cage later, you still want that quick fulfilment, I guess. I’m a bastard.

At the moment, it’s a play with good sound design. I have a feeling that’s where it’s going to go, and that’s not bad at all, it’s just different from how I started out. Anyway, it made total and complete sense to make a good story up for this sharing – & having Mook on board has meant the submarine captain is someone you actually like, as opposed to my writing, which is just horrendous. We’re still in the pretty early stages of our devising together and I’m really excited to find out where it might go next.

Xavier

piloting
Piloting Resonance: check this post for what this is about