Whale Spotted In The Heart Of Capitalism

THAR SHE BLOWS

The whale surfaces in The Blessed Capital for one night only.

Saturday 22nd June at 7.30pm.

The Vaults – SE1 7NN – under Waterloo station.

Tickets £6/4 from thevaults.london/me-and-my-whale

Attend on facebook (virtual life is real life): facebook.com/events/2156483034412685/

 

The whales are on tour!

FUTURE DATES


Saturday 23 February 2019
8pm – £5 / £4
Star & Shadow Cinema, Newcastle NE2 1BB
Facebook Event: facebook.com/events/344950066296558/
Tickets via BrownPaperTickets: whale-sns.brownpapertickets.com

Sat 16 & Sun 17 March 2019
7.45pm – £TBA
Theatre Delicatessen, Sheffield
Facebook Event: facebook.com/events/299059044058588/
Tickets via BrownPaperTickets: whale-deli.brownpapertickets.com

PAST DATES


Wednesday 10 October 2018
7.30pm – £10 / £7.50
stage@leeds, Leeds, University of Leeds LS2 9JT
Facebook Event: bit.ly/weloveblowholes
Tickets via stage@leeds: bit.ly/iloveplankton

Sat 20 & Sun 21 October 2018
7pm – £7 / £5
Partisan, 19 Cheetham Hill Road M4 4FY
Facebook Event: bit.ly/welovekrill
Tickets via BrownPaperTickets: bit.ly/belugasforever

More to follow …

 

The Past & The Future

The R&D is over! We had an excellent sharing at Partisan in Manchester.

Just a quick shout out to those who have been part of the Whale's journey so far, and a word on where it will go next. Me & My Whale has been in development, in one form or another, since August 2016.

Thanks To (chronologically):

Alice Rose Parr (aka the original whale)
Goldsmiths University, especially the Sound Practice Research Group
Patricia Alessandrini
John Drever
Luisa Amorim & Graça
Hugh Aynsley
Zoë Gusy-Sprague
Amanda Tooke
Goldsmiths Drama Department, for the spaces
The audience from the first performance at Goldsmiths
Zoe Czavda Redo
Katie Reid & Chris Jena
Christian
By Other Means

EngineRoom
Morley College
Camilo Salazar
Daniel James Ross

Hannah Mook
Dimitra Maltsaki
Anna Clock
Lawrence Upton
Matthew Frener
Live Art Bistro
The HopBarn
The Royal Exchange Theatre
Arts Council England
East Street Arts
CHUNK
Partisan Collective
Z-Arts

Jan Lee & Tim Murray-Browne, for letting me steal an idea

 

The Future

Now we've developed the piece, we're going to try and find a way to show it in lots of interesting spaces. Anyone with ideas or links, hit us up via the contact page. Particularly if you own a submarine.

Not sure what will happen to this website.

Love & Whales

Xavier


Bring On The Whale!


Very happy to share that we have confirmed our slot for our end of R&D sharing at Partisan in Manchester! Mook and I will be presenting what we’ve been working on since September. After that we will be planning what 2018 – aka The Year Of The Whale – will look like. If you’ve been following us, we’d love to hear what you think, to shape the future. But let’s make a piece first.

Tickets can be reserved from meandmywhale.eventbrite.com, or you can pay on the door. Entrance is £4; no-one turned away for lack of funds.

If you’re that way inclined, head over to our event page on facebook: Me & My Whale ~ A Sound Play, and please do share it out!

Details

Sunday 4th February 2018 A.D.
Doors @ 7pm
Partisan Basement
M4 4FY (map)

Please note the venue is not wheelchair accessible.

See you soon xx

 

Bonus: some experiments with a contact mic and metal

 

The Magic Gloves of Destiny

The whale moves in a sea of sound:
Shrimps snap, plankton seethes,
Fish croak, gulp, drum their air-bladders,
And are scrutinised by echo-location,
A light massage of sound
Touching the skin.

– Whale Nation, Heathcote Williams

Mook in the Magic Gloves of Destiny at our very first show on our very first day of development, at Live Art Bistro, as part of Come Find Us.

Contents

  1. Controlling Sound with Gesture
  2. The Magic Gloves of Destiny
  3. The Nitty Gritty
  4. Embodying Sound in Me & My Whale
  5. Inspirations

Controlling Sound with Gesture

A lot of digital music is performed by a frowning face behind a glowing apple logo. We have no problem with transparency as a thing, but we intend to think about the scenography like theatre makers, just as we think of the text as sound artists. And a lot of our separate practices, and a lot of the ideas that have gone into Me & My Whale, have to do with the body as object/subject/host.

Motion capture is a technique to measure and track position and movement digitally – it's used a lot in commercial films (gollum) and gaming (kinect, wii). Before advanced motion-tracking technology, it was done with cameras – and you can still find stuff done with webcams and laptop cameras, and I'm sure there's some periscope ones too. The most common way that I've seen in digital art – installations, performances, exhibitions – tends to be using a kinect, because its software gives you a skeleton. That means you have a really clear interpretation of where a person's body is, meaning you can adapt sound to very specific body parts and their individual quality of movement.

the vitruvian kinect skeleton

Well, anyway, we’re not gonna learn how to programme a kinect because effort. So, instead, we’re using these babies:

THE MAGIC GLOVES OF DESTINY

gametrak

aka the Gametrak by Elliot Myers. The name was coined by Powder Keg during their show BEARS, which incidentally was where Mook & I met. It's a piece of equipment that comes with a golf simulator from the early noughties. I was shown it by Tom Mudd from Goldsmiths. It's a box with strings on a ball joint with gloves at the end. It measures each hand's position relative to the box as six continuous numbers (three per hand: left-to-right, forward-to-back and up-down, if the box is on the floor). In the game, it's used to measure your golf swing:

remember to square your shoulders

They run through Max/MSP, which is the controlling software used in Me & My Whale, and can be used in loads of different ways, which we'll go into further down. I haven't often seen them used well elsewhere – they're used really nicely in the dance film Oblique Theorem, but apart from that, the function of the gloves in other people's pieces tends to be really hidden. The really nice thing about using these controllers is that it can be very clear how they function – the strings are bright orange, there is a clear tension in them, and when you move, you hear a sound. That makes it much easier to start building a language between your body and the technology, and in turn makes it simpler for an audience to listen to that language. At the same time, because they're not well known (they sold terribly), they don't have the cultural baggage of a playstation joystick – where you can't stop thinking of it as a tool for gaming (which is used consciously by, for instance, MizKai, who DJs chiptune with a gameboy). It's both something that people have no prior knowledge of and also something easy for people to get, allowing us to join body semiotics, proxemics and technology without looking wanky as fuck.

More complex technologies, like the Kinect, don’t have this. They’re the equivalent of the frowning face behind an apple logo, and unless they’re twinned with live projection mapping, they’re basically as interesting. And, although there has been some amazing work done with it which I draw on (see inspirations), I prefer the lame black box with the bright orange strings.

The two of us doing our Tone Dance at the HopBarn, using the Magic Gloves to affect low frequency resonance. Read up on that here.

 

The Nitty Gritty: Ways of Using the Magic Gloves

Well, if you’ve got this far without being completely bored out of your mind, well done. So, on to level two.

The simplest, and most obvious, way to use them is to simulate a theremin. That is, using one hand to control the pitch of a sine tone, and the other to control its volume. That way, you can make some nice sweeping notes, and it can be quite melodic (over the rainbow on theremin). But to be honest, that’s dull as hell, and there’s already a theremin, it’s called a theremin.
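(If you like seeing these things written down: here's the theremin mapping as a few lines of Python rather than Max/MSP. The 110hz base note and the three-octave sweep are my invention, purely for illustration.)

```python
def theremin_map(pitch_hand_z, volume_hand_z, z_max=65536.0):
    """Map two raw glove heights (0..z_max) to (frequency in hz, amplitude 0..1)."""
    pitch_norm = pitch_hand_z / z_max          # 0.0 at the floor, 1.0 at full stretch
    freq = 110.0 * 2 ** (pitch_norm * 3)       # sweep ~3 octaves up from 110hz
    amp = volume_hand_z / z_max
    return freq, amp

print(theremin_map(32768, 65536))   # halfway up: ~311hz, at full volume
```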

I used these recently when doing live sound for a dance piece for Ekata Theatre, which I’ll go into now because it’s my blog and I can do what I want.

Unbelonger (2017): Direct Mapping

The Nitty:

The piece had a lot of improvised moments, and was itself devised, so I felt that the best way to go about my sound design was to play it live, rather than play it back over Qlab, which is how most theatre/dance sound is triggered. Using the air marimba instrument, I played background themes, reactive twiddles and a simple waltz.

The Gritty:

I ran this off Ableton, sending midi via loopMIDI from Max/MSP, which in turn was connected to a [hi] object picking up the Magic Gloves. I placed a threshold roughly 10 degrees off centre (on the X axis, left-to-right) for each of the gloves – when my hand passes that threshold into the middle, it triggers a midi note-on message. The velocity (representing force of impulse) is affected by how far back I start the swing (this doesn't work perfectly yet, as you might be able to tell). The pitch of the midi note is determined by the height (Z axis), with the left hand being an octave or so lower, as it is on a real marimba or piano. I forced a scale onto the outgoing notes, though – based on my favourite chord (C#m with added 7/9/13, since you ask) – or it would be stupidly difficult to find the right notes. I added a dead zone at the very bottom of the Z axis range so it would only send midi notes when I've pulled it up a bit, else it would trigger while I'm getting into it.
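If that paragraph made your eyes glaze over, here's the same logic sketched in Python. To be clear: the real thing is a Max/MSP patch throwing midi at Ableton, and every number below (the threshold, the dead zone, the scale, the octave ranges) is illustrative rather than what I actually used.

```python
SCALE = [1, 3, 4, 6, 8, 9, 11]   # stand-in scale degrees in semitones, not my actual chord

def quantise(note):
    """Snap a raw midi note number to the nearest degree of SCALE."""
    octave, degree = divmod(note, 12)
    nearest = min(SCALE, key=lambda d: abs(d - degree))
    return octave * 12 + nearest

def strike(prev_x, x, z, hand, threshold=0.5, dead_zone=0.1):
    """Fire a (note, velocity) pair when the hand swings across the X threshold
    towards the centre; otherwise return None. Positions are normalised 0..1."""
    if z < dead_zone:                  # hand hanging at the bottom: ignore
        return None
    if not (prev_x < threshold <= x):  # only trigger on the inward crossing
        return None
    velocity = min(127, int((threshold - prev_x) * 2 * 127))  # bigger swing, harder hit
    note = 36 + int(z * 36)            # height picks the pitch over three octaves
    if hand == "left":
        note -= 12                     # left hand an octave lower, like a real marimba
    return quantise(note), velocity

print(strike(prev_x=0.2, x=0.55, z=0.7, hand="right"))   # -> (61, 76)
```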

Now that you completely understand the principle, you can see how I use it with other sounds in different ways, but with the same mechanics. In the video below, I also use forward-backward (the Y axis) to bend the pitch of a synthesised voice.

Gonna introduce some terminology for you here. This is known as direct mapping – that means: A results in B; changing A results in B changing. In the example above, moving the gloves further away from the ground changes the pitch, and that's all, and that's that. Mapping control A to parameter B, control C to parameter D, etc.

But we also have convergent mapping, in which many controls affect one parameter, and divergent mapping, where one control affects many parameters. This figure explains it better than I will – it uses the example of a person playing a wind instrument:

from A NIME Reader: Fifteen Years of New Interfaces for Musical Expression, p341.

Real acoustic instruments are, as the book says, a 'web of interconnections' (ibid). And if we're going to be making some instruments, they've got to be as complex and interrelated as real ones. And that means drawing on musical as well as technical sensitivity.

Poem of the Body (2016): divergent convergence

The first time I used the Magic Gloves was with choreographer and dancer Deliah Seefluth in our collaboration Motion.Captured. The piece was called the Poem Of The Body, and was based on text by Fred M-G. This piece is super complex and it would be boring and irrelevant to go into all of it here, so I'll just highlight the bits that relate to mapping and instrument design.

The Nitty:

Deliah and I recorded our voices speaking Fred's poem. Those recordings were then broken into a series of samples, one for each line. Deliah was strung between two sets of Magic Gloves – four strings in all, attached to each of her ankles and wrists. Deliah then moved in specific ways, choreographed both fixed and improvised, to 'find' the text by moving through space. But for each line, the parameters for how the words came out changed – sometimes they sounded every time she moved, sometimes she had to move very quickly to get any sense, sometimes they triggered automatically and it was her movement that stopped them – and the effect was of fighting against the medium to be understood. It was all in the attempt.

The Gritty:

Ooookay. This was completely run off Max/MSP. So each of the samples was loaded into a [polybuffer~] object. The X, Y and Z axes of the four Gloves were divided into ten 'zones'. Moving through the zones would send a [bang] message (ie a trigger). When sufficient triggers had been received for a given axis, a [groove~] object would play a fragment of the sample. The more triggers were received, the longer each fragment became and the lower the randomisation of its starting point (and therefore the more understandable the text became). So as Deliah moved through the zones, the amount of glitch in the sample reduced – this was sometimes flipped on its head, meaning the more still she remained, the more understandable the sample. Still with me? A timer was set to move the sample on to the next one, during which time Deliah would move and try to trigger the broken spoken word. Each separate line had its own set of parameters, which in turn were affected by the way that Deliah moved.
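For the very curious, here's a toy version of that triggering logic in Python. The real patch is all [polybuffer~] and [groove~] objects in Max/MSP; this only mimics the zones-and-triggers behaviour, and the constants (ten zones, twenty triggers to fully 'find' a line) are made up.

```python
import random

ZONES = 10

def zone(value, maximum=65536):
    """Which of the ten zones a raw axis reading falls in."""
    return min(ZONES - 1, int(value / maximum * ZONES))

class LineFinder:
    """One spoken line being 'found': each zone crossing is a trigger, and the
    more triggers accumulate, the longer and less randomised the fragment."""

    def __init__(self, sample_seconds):
        self.sample_seconds = sample_seconds
        self.triggers = 0
        self.last_zone = None

    def update(self, axis_value):
        """Feed one axis reading; return a (start, length) playback window in
        seconds when a zone boundary is crossed, else None."""
        z = zone(axis_value)
        if z == self.last_zone:
            return None
        self.last_zone = z
        self.triggers += 1
        progress = min(1.0, self.triggers / 20)     # 20 triggers = fully found
        length = 0.05 + progress * (self.sample_seconds - 0.05)
        start = random.uniform(0, (1 - progress) * self.sample_seconds)
        return start, length

finder = LineFinder(sample_seconds=3.0)
for reading in [1000, 9000, 22000, 40000, 64000]:
    print(finder.update(reading))
```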

The divergent mapping – interpreting the body’s movement through the zones of the space into the multiple modifying parameters within the patch – works alongside the convergent mapping – which runs from the combination of axes/triggers over time to change sample length – to generate understanding. This is a far deeper look at mapping and instrument design than the air marimba – even though the air marimba allows for musical complexity, it doesn’t allow for formal and conceptual complexity. Although difficult to understand and explain, the Magic Gloves in the Poem Of The Body presented something alive and dynamic. For one thing, there’s an active dialogue, a seeking to understand each other, between the technology and the body, that comes across in the making and the performance. And also, it takes ages to start understanding it – it will behave in strange ways, ways that I couldn’t predict or really understand, even though I made the damn thing. In our only full-length performance at Theatre N16 (sadly soon to close), it got all shy and barely triggered.

Procedural Feedback; or: jesus christ, there’s more

Fully aware that you’ve clicked away somewhere more shiny and with fewer words, I’ll plough on into the void, I’ll go down with this ship, I’ll not put my hands up and surrender.

As you might have already been able to tell, what I like is to make something that could be really useful and beautiful and then fuck with it in unpredictable ways. One of the ways to do this is what I arrogantly call procedural feedback. Acoustic feedback, which is that horrendous sound you've heard at your dad's welcome-to-middle-age gig, is caused by a microphone picking up its own output through a speaker: the speaker replays what the microphone hears, and you get a feedback loop. Now, procedural feedback is the idea of using this principle at a deeper level. An example from Me & My Whale is what's known as the Tonesplit module. This is the bit that helps create the subaquatic soundscape at the top of the piece:

The patch takes a single input, let's say a singing voice, delays it, and applies pitch shift to the delays (in the sample above, the pitch always shifts down). The louder the incoming voice, the more extreme the pitch shifts; the softer the incoming voice, the closer to normal it stays. The really cool thing – and I can say it's cool, because I'm the only one who will ever read this – is that the rate at which the pitch shift changes can itself change. For instance, by picking up the mean amplitude where the pitch shift happens and adapting it to be more or less extreme. Yeah? Let's say –

  1. You shout into the microphone that goes into the Tonesplit module
  2. Because it's loud, the Tonesplit module shifts your pitch down about three octaves
  3. But as you keep shouting, the amount by which it shifts lessens as the average loudness increases
  4. Meaning you need to work even harder to make it more extreme.
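In code, that loop might look something like this minimal Python sketch – all the constants are invented, and the real Tonesplit is a Max/MSP patch, not this:

```python
class Tonesplit:
    """The louder you are, the further down the pitch shifts - but the running
    average loudness eats into the effect, so you have to push ever harder."""

    def __init__(self):
        self.mean_amp = 0.0

    def shift_for(self, amplitude):
        """Pitch shift in semitones (negative = down) for one amplitude reading, 0..1."""
        self.mean_amp = 0.95 * self.mean_amp + 0.05 * amplitude   # slow running average
        excess = max(0.0, amplitude - self.mean_amp)              # loudness above the mean
        return -36.0 * excess                                     # up to ~3 octaves down

ts = Tonesplit()
for amp in [0.9] * 10:                    # keep shouting at the same level...
    print(round(ts.shift_for(amp), 1))    # ...and the shift creeps back towards zero
```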

It's a direct mapping form of procedural feedback – change into change – but it already has quite a nice narrative to it, and you can expand that out as much as you want. In the original Me & My Whale, the final section of the piece was completely generated out of the actions of the first two sections, with the parameters for each of the sound objects and equipment, and the creation of text and score, being in a constant state of fluid change.

The Magic Gloves are also a very good way of measuring change of state, and impacting on it. I've already shared the way that the parameters in Poem Of The Body adapt to and struggle against the performer's body. Because the measurements that are picked up by the computer are continuous – from 0 to 65536 (that's 2^16 – don't ask me why) – they can be easily interpreted and fed into themselves. We could make a system to measure speed, whose measurements impact the measurement of speed. Or a system that detects certain gestures, whose detection of gestures changes the type of gestures it detects. And these recursive systems can be present in all of the different instruments, sound objects and patches within the performance.
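Here's one such recursive system sketched in Python, just to make the idea concrete – a speed estimate whose own value changes how the speed gets measured. The numbers are arbitrary.

```python
class SelfAwareSpeed:
    """A speed estimate whose smoothing depends on the estimate itself:
    measurement affecting measurement."""

    def __init__(self):
        self.speed = 0.0
        self.prev = None

    def update(self, position):
        if self.prev is not None:
            raw = abs(position - self.prev)
            # the faster we already think we're going, the jumpier the
            # estimate is allowed to become
            smoothing = 0.9 - min(0.8, self.speed * 0.001)
            self.speed = smoothing * self.speed + (1 - smoothing) * raw
        self.prev = position
        return self.speed

tracker = SelfAwareSpeed()
for pos in [0, 200, 900, 2000, 2100]:     # raw axis readings
    print(round(tracker.update(pos), 1))
```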

When you follow this further, and compound different processes together, it puts the technology in a state of functional flux, compromising its ability to support our storytelling. Eventually, the feedback generated by this flux leads to a liquid meta-narrative in which everything is affected by everything else, changing gesture into tactility: connecting listening, sound-making and touching into one sensation, just like it is in the sea.

argh

ok

 

Embodying Sound in Me & My Whale

So hopefully the parry gripp hamster meant you stopped scrolling down past all the wordy bits and I’ve got your attention again. hi.

Me & My Whale is about the control of choice. How in control of our own voice and body are we? What does it mean to have your pattern changed by an outside source? When we impose our own voice and body on somebody or something else, how does that change their voice and body?

One way we've done this is a sort of duet, with one of us controlling the other's voice.

In the above, Mook's voice is being pitch shifted and delayed depending on how I move the Magic Gloves. This is rough as hell, and doesn't always come out right, but this is how it works: when the Magic Gloves go past 90 degrees to the front (ie when I move them forward), they duplicate Mook's voice. Once they're forward, the higher I raise them, the higher Mook's voice sounds, and the further out to the side each hand goes, the more delays occur. It's a little hard to tell from the video as there's a bunch of other things going on, so just take our word for it if you have to.
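In case the video really is too chaotic, here's the mapping written out as a Python sketch – the thresholds and ranges are mine, for illustration, not the patch's actual values:

```python
def duet_map(forward, height, side):
    """All inputs are normalised 0..1 glove positions. Returns None while the
    glove is behind the forward threshold, else (semitones up, delay taps)."""
    if forward < 0.5:                 # glove not yet past 90 degrees: do nothing
        return None
    semitones_up = height * 12        # the higher the hand, the higher the voice copy
    delay_taps = 1 + int(side * 5)    # the further out to the side, the more delays
    return semitones_up, delay_taps

print(duet_map(forward=0.8, height=0.5, side=0.9))   # -> (6.0, 5)
```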

So what's happening here? Well, the power to create is held between the two of us. If I don't move, Mook's voice stays the same; if Mook doesn't sing, my movements don't do anything. So the way to start creating harmonies, resonance, assonance and all the other tasty things we like about sound is to work together. It's very easy to make it sound shit, as you can probably hear, and it's only at certain moments that the good stuff comes through. As a man working in digital sound, particularly in sampling and live manipulation of voice, I have to be vigilant about the degree of control I take over another's voice (I addressed this, btw, in an acousmatic piece of mine, Flüchtlinge), and having a shared control is something which interests me in a piece where the (highly gendered) assumption is always that Xavier's the one who can use the equipment and Mook's the one who can act. Voice, and adapting voice, can be a very powerful thing, even when (or especially when) you make someone sound like a chipmunk.

A New Development: ‘Finding in Space’

This is something that I’m working on at the moment, so I have no media to show. Finding in Space is something we’ve been playing around with since our residency at the royal exchange. In technical terms, it’s a bit of a reverse of the way the Magic Gloves have been used up to this point, so if you’ve only just gotten the hang of it, well, sorry. It’s a bit like virtual reality, but instead of using sight to map objects in space, we use sound. Just like WHALES.

Imagine an empty room. Within this room are invisible shapes – spheres, cuboids, whatever – attached to the walls, coming out of the floor, hanging in mid-air. Let's say there's two of them. I've made you a picture.

a sphere and a cuboid

As you touch the outside of them, they make a sound – let’s say a soft orgasmic groan. We’ve found the boundary – I’ll come back to that in a bit. Then you go through the boundary and, as you get closer to its centre, the orgasmic groan turns into a field recording of the Gloucester Cheese Rolling Festival. The closer you are to the boundary, the more orgasmic the sound, and the closer to the middle, the more … cheese-rolly. By listening-feeling your way through space you perceive the object’s density. Now, remember, you can’t see the object, meaning the understanding of its size, weight and importance is communicated through sound. Many cetaceans use echolocation, like bats, to hear their way around objects and animals in the water. It’s so acute, though, that they can even tell the material that the object is made of. This is the same principle as seismic surveys.
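Sketched in code (Python here, with invented sounds and numbers – the real thing would live in Max/MSP), the density idea is just a crossfade driven by your distance from the object's centre:

```python
import math

def density_mix(hand, centre, radius):
    """Return (boundary_level, inside_level) for a hand position in metres,
    or (0, 0) when the hand is outside the object entirely."""
    d = math.dist(hand, centre)
    if d > radius:
        return 0.0, 0.0                # not touching the object: silence
    depth = 1.0 - d / radius           # 0 at the skin, 1 at the centre
    return 1.0 - depth, depth          # crossfade: groan -> cheese rolling

# hand well past the skin: more cheese-rolling than groan by now
print(density_mix(hand=(1.0, 1.0, 1.2), centre=(1.0, 1.0, 1.0), radius=0.5))
```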

In the patch, the boundary is the limits of the object. I was thinking of making a calculation based on the distance from the boundary to make an echolocation module – you send out a sound, as a whale sends a click, and then as you hear your sound come back, you start to work out where it is. Then I went to the natural history museum and sighed, because someone had already done an echolocation game, and it was weird and difficult and not very fun. So it's not as easy as it seems. Still, maybe it's something that some gallery will fund me to do at some point. Anyway, there's something interesting about how we could use the density to shape the space: either an evolving sound, or maybe the same sound just getting clearer. Or a sound which gets so loud that we don't want to move further in, meaning we shape the space with our resistance.

(It’s worth pointing out, btw, that this idea was unwittingly hijacked from Jan Lee & Tim Murray-Browne, who are basically me and my collaborators but grown up. They shared something very similar to this in a talk I went to last year at Hackney Hackspace. I think that was done on a Kinect, though, and way more complicated coding-wise but hey, nothing’s original and you gotta start somewhere, right?)

My main problem at the moment is that, although it's pretty easy to map a sphere in a space (you give it a point and then say +/- 10cm or whatever in each direction), I'm stuck on how to map even a cube, let alone a more complex shape. Eventually (sneak peek) the idea is to map a whale in freefall, which the submarine captain finds on her travels and whose innards she removes. For this, instead of having to think in complex mathematics, which is so not my speciality, I'm thinking of using the machine learning software Wekinator, developed by Rebecca Fiebrink, whom I met at Goldsmiths. It would let me map some points in space and then interpolate between them beautifully in some magical way that only Rebecca knows. I've meant to use it for ages but have never found a good excuse.
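For the record, here's the easy end of the problem in Python – the sphere, plus a box so long as it stays axis-aligned (ie it never rotates). Everything lumpier than that, whale included, is where something like Wekinator would have to take over:

```python
import math

def in_sphere(p, centre, radius):
    """Is point p inside a sphere? (The +/- 10cm idea, done properly.)"""
    return math.dist(p, centre) <= radius

def in_box(p, corner_min, corner_max):
    """Is point p inside an axis-aligned box? One comparison per axis."""
    return all(lo <= x <= hi for x, lo, hi in zip(p, corner_min, corner_max))

print(in_sphere((0, 0, 0.1), (0, 0, 0), 0.15))        # True
print(in_box((0.5, 0.2, 0.9), (0, 0, 0), (1, 1, 1)))  # True
```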

We don't know quite yet when and how we'll use this technique – the little we did at the exchange didn't make it into the sharing at live art bistro in december. When we were at the exchange, we played around with finding script in the air – recordings of our voices would be in randomised positions around the space, and, Magic Gloves in hand, we would have to find them to carry on the story. Sometimes it would take about five minutes of frustration before finding anything. But, again, I love the attempt.

 

Inspirations/ Further Reading/ Go Away

My brain hurts far too much to sum all of this up, so I'm going to leave you with a bunch of interesting things related to motion capture and mapping, which I'll update gradually as we stumble on more things. We're always hungry to find out more, so please send things our way that we haven't mentioned. Thanks for hanging around. We'll be announcing our date for the Manchester sharing in the next week or so, so keep them peeled.

Video

The duo that I mentioned earlier – this is a really nice example of their technical and expressive scope.

After I saw this, I basically decided to change artistic direction because this guy had pretty much done what I was on a path to doing.

A role model of mine, Laetitia Sonami is basically the best. She’s best known (I think) for this, a glove which detects hand and finger movement.

Getting into the realms of wtf, Atau Tanaka is sonifying the neuron impulses created by him flexing his muscles.

Words

Andy Hunt & Ross Kirk: Mapping Strategies for Musical Performance (2000) [PDF]
a fairly decent and not mind-numbingly dull introduction to mapping in music. It's from quite a while ago, but still quite helpful for starting to understand more modern concepts.

CataRT: Real-Time Corpus-Based Concatenative Synthesis [Weblink]
Concatenative synthesis, not just the winner of the hardest two words to say in the english language, is a way to map one set of sounds to another set of sounds by defining how similar they are to each other. CataRT physically maps this out onto a 2D square – I haven’t seen anybody use motion capture to trigger this off/manipulate it, but that would seem like a fun feedback game.
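To give a flavour of the principle (this is a toy, nothing like CataRT's actual code): describe each grain of sound by a couple of features, then pick whichever grain sits nearest your target point on the 2D square.

```python
import math

corpus = {                     # invented grains: name -> (pitch, loudness), both 0..1
    "creak": (0.2, 0.3),
    "whine": (0.8, 0.4),
    "boom":  (0.1, 0.9),
}

def nearest_grain(target):
    """Pick the grain whose features sit closest to the target point."""
    return min(corpus, key=lambda name: math.dist(corpus[name], target))

print(nearest_grain((0.15, 0.8)))   # -> 'boom'
```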

 

long may we reign

 

Xavier
praise welcome: xvelastin@gmail.com.

We Shared A Whale

So we did our sharing at Live Art Bistro last Saturday! The performance came out at just under an hour (managed with some brutal cutting) and apart from some moments of horrendous feedback, it actually worked out. It was really rough around the edges, obviously, but it was fun and, after a long and stressful day setting it up without a run beforehand, it was such a relief that it went well.


More people than we expected turned up, and we had an interesting chat after, where people gave their feedback. The things that came up were:

  • People didn't understand the nuances of the story, and when we explained it to them they really liked it – but they would have liked it clearer in performance;
  • Although we did a bit of playing with speaker placement, some more advanced spatialisation would be good;
  • The relationship between ourselves and the technology was unclear – what relation do we have to them as objects?;
  • The sound blended between "noisy" and "beautiful" well;
  • Some engineering tech notes – the on-stage monitors were pushed far too hard and were apparently smoking (they're okay by the way, thank Whale);
  • People loved how the whale's voice was actually the captain's voice being manipulated;
  • Sightlines in some parts were an issue;
  • There was a lot of 'what was this piece about?';
  • Some structural issues with the ending (which I won't spoil here!) led to a bit of confusion as to what actually happens.

All in all, good feedback. I have a list of about a million things to work on aside from that. But the two big things that came up for me during the work in progress performance were music and narrative. I’ll talk a bit about them now, because that’s what blogs are meant to do.

Music
One of the things I really liked about this was our variety-show-style singing of the theme song, written by our composition advisor Anna Clock.

The Me & My Whale theme song

Our singing isn’t perfect – I have a musical ear but a shit voice, Mook has a great voice but finds pitching some of the whole-tone intervals tricky – and I think we need some proper vocal effects on the microphones so it doesn’t sound so dry, but aside from that, people really responded to it. I’ve also been humming it to myself pretty much constantly since then. When we do it well, I’ll put a recording up for your listening pleasure.

Sometimes, when something is just good or fun it doesn’t need a huge amount of justification. I know it’s unclear why we’re doing this (see below for why I don’t care), but I like the idea that us as narrators get carried away with the story so much that we choose to do a song and dance about it. We break from either our voices as weird arbiters of the ocean or the submarine captain herself to do something that is, to be honest, way more like how Mook and I play around generally – silly, a bit creepy, and desperately grabbing the spotlight. One of the things we noticed was that we’re both pretty playful people making work that’s really cynical and depressing and it’s nice that we get to show a bit of fun. I’m currently working on a lounge jazz piano version.

 

Narrative

Finding the Dome, the protective bubble that surrounds the city of whales. Our captain steps through it, while remarking on the nature of surface tension. It’s related to the brine lakes of the deep ocean, and to species dysmorphia (“a second skin”). Moving through it means entering a different consistency, changing our performative roles. Anyone get that?

Right. My personal attitude to narrative is a bit confused. I do like how this iteration of Me & My Whale is telling a story a lot more, and I do like the story. I know it's something that Mook and Anna want to push. It's way more accessible, it's something an audience can grab onto even with complicated/unfamiliar images and sounds, and it doesn't have to come at the compromise of complexity or content as long as it's done cleverly. Maybe it's because I'm not confident enough in my writing that I can't yet see how it can be done in conjunction with hidden, complex generative processes. It needs to be calculated very carefully because at the end of the day, if you watched a hundred monkeys trying to write macbeth you'd just end up with a faceful of poo. I think my problem is that I find it more interesting to enact the attempt to tell a story, especially when it's a good one, because that way we're constantly fighting and trying, literally battling the medium itself. One way we could do this is to apply a score to the performance, generated maybe by the audience, by cetacean migration patterns, by the resonance in the room, through a written score or by the pattern of cables in the space. However we do it, it would be partly within our control and partly a result of leakage from the performance event itself. I think it gives some really exciting possibilities to look at agency and choice itself – something that can be mapped onto the consequences of stealing voice from people, ideologies and nature.

Two form scores I wrote during rehearsals in October – each colour could mean a separate narrative, or way to tell each narrative. Our journey through them is the performance event.

As an absolute nerd, I actually don't really care much for the idea of understanding everything in one sitting – I'd rather the moments, images and sounds stayed around in the audience's brain, so people can come to a more gradual understanding of what we're trying to say. One of my more complex pieces with choreographer Deliah Seefluth, The Poem Of The Body, had a small set of materials (broken spoken word) triggered using gesture with motion capture controllers, whose parameters were adaptive, randomised, and difficult to predict. Part of the performance was in actually finding the poem itself; it wasn't something that you could just memorise and replicate. I think the obsession with making pieces with snapchat-length narratives and instagram-level depth is directly to do with a neoliberal idea of monetising our attention. Maybe it's just me, but the experiences that have really moved me in an intellectual capacity have involved coming back to them over a long period of time in my own head, or re-watching/listening. Like reading Foucault or Maria Mies, or listening to SOPHIE, Death Grips or Debussy's Preludes for the first time, it was only over a long time of active and inactive engagement that they actually became meaningful. I don't think theatre people are like that very much, not really. It's worth mentioning that of the feedback I've received since the very first Whale (done at Goldsmiths in Sep 2016), the only people who cared about understanding narrative right away were people of traditional theatre background. Even if you go all john cage later, you still want that quick fulfilment, I guess. I'm a bastard.

At the moment, it’s a play with good sound design. I have a feeling that that’s where it’s going to go, and that’s not bad at all, it’s just different from how I started out. Anyway, it made total and complete sense to make a good story up for this sharing – & having Mook on board has meant the submarine captain is someone you actually like, as opposed to my writing which is just horrendous. We’re still in the pretty early stages of our devising together and I’m really excited to find out where it might go next.

Xavier

Piloting Resonance: check this post for what this is about

Preparing for the Work In Progress

So we’ve been hard at work preparing for the work in progress this Saturday!

Xavier’s been busy making patches:

Envirojoy patch
THE MODULE MATRIX

Mook’s been busy writing:


Using edits and feedback from our remote advisors Lawrence & Matthew, and with the amazing help of our advisors Anna and Dimitra in the rehearsal space over the past week, we’ve been building up a performance for Saturday’s show:


Me & My Whale is a project that, like its subject, is an absolute leviathan. By aiming to incorporate different approaches to composing, devising and performing, and by couching it all inside a cross-temporal and generative framework, we're trying to create a new form of expression. For now, we're telling a story – about the lonely submarine captain who falls in love with a whale. That's what we'll enact through body, sound, image, text and form on Saturday. If you can, come down!

Me & My Whale :: Work in Progress (Leeds)

Some Sound:

SubAquatic Soundscape: One of the defining aspects of the sound design is that each scene's sound is built up by manipulating the previous scene's sound, or by building new soundscapes using a mixture of microphones, hydrophones, human-computer instruments and synthesis. This is an extract from one of the hydrophones setting up the submarine acoustic.

ToneHarm: a patch which detects the pitch of an incoming voice and then matches it with six sine tones, creating a sort of echoey delayey followy feeling. Needs work to make it nicer, but I like the difference between the soft, thin sine tones and a rich, harmonic-filled voice.
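If a sketch helps: here's the rough shape of the ToneHarm idea in Python. In the patch the pitch is detected live from the microphone; here it's just handed in, and the six intervals are invented for illustration.

```python
import math

def toneharm(detected_hz, seconds=1.0, sr=44100):
    """Lay six sine tones against a detected vocal pitch and mix them down."""
    partials = [detected_hz * r for r in (1.0, 1.5, 2.0, 2.5, 3.0, 4.0)]
    n = int(seconds * sr)
    return [
        sum(math.sin(2 * math.pi * f * t / sr) for f in partials) / len(partials)
        for t in range(n)
    ]

samples = toneharm(220.0, seconds=0.01)   # a hundredth of a second around 220hz
print(len(samples), round(samples[1], 4))
```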

Bring on the Whale!

xx

Spectrograms

Spectrograms of baleen whale vocalisations

a spectrogram is a visual representation of sound – it shows you how loud each pitch in a sound is over time. it's at the same time mechanical, algorithmic, inorganic, and perceptive, intuitive, beautiful.
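for the curious, here's roughly what a spectrogram program is doing under the hood, sketched in Python with numpy: chop the sound into short overlapping windows, fourier-transform each one, and stack the magnitudes up as columns of an image.

```python
import numpy as np

def spectrogram(samples, window=1024, hop=256):
    """Return a 2D array: rows are frequency bins, columns are moments in time."""
    win = np.hanning(window)                  # fade each chunk in and out
    frames = [
        np.abs(np.fft.rfft(samples[i:i + window] * win))
        for i in range(0, len(samples) - window, hop)
    ]
    return np.array(frames).T

sr = 8000
t = np.arange(sr) / sr
spec = spectrogram(np.sin(2 * np.pi * 440 * t))   # one second of a 440hz tone
print(spec.shape)                                 # (frequency bins, time frames)
```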

Blue Whale Sunset
“Blue Whale Sunset” – the hum of a blue whale song. This one’s fundamental (lowest, main) frequency is around 40hz, which is pretty much the highest they go. Their songs can go down to 10hz, which is well below human hearing.
Seen via Sonic Visualiser: www.sonicvisualiser.org

it's a bit like those whalesong relaxation videos that are so popular online – the songs repitched and relayered, and the multitude of clicks, chirps, blows and tail kicks that aquatic mammals produce omitted, to suit our conceptions of beauty.

Minke Whale Wheat Field
Minke Whale Wheat Field: Spectrogram of the clicks of a minke whale. Each vertical red line you can see in the image is one of the clicks, and the horizontal axis is time. Clicks are mainly used for echolocation. Whales – like dolphins and bats – are able to build up an incredibly detailed picture of their environment, giving them a description of objects' sizes, distances and even density. By scanning each other's brains, in conjunction with cetaceans' enlarged amygdalas, they can literally hear emotion.
Again, using Sonic Visualiser: www.sonicvisualiser.org

remember that god’s chorus of crickets track that makes the rounds every few years? it falsely claims to use otherwise unprocessed slowed down cricket sounds, when in reality there’s a choir and heavy processing happening, in order to promote a religious branding.

we alter the world around us for ourselves, and most of the time it's our curiosity and observation that actually make these changes happen – like how a blue whale can look like a sunset, or a minke whale like a wheat field, and how seismic airguns, tunnelling for oil offshore, silence them both

Read & Listen: Seismic Airguns at Ocean Conservation Research

It’s all about that (sea) bass

Infrasound, Motion Capture and Choreography at the HopBarn

[for listening to the audio in this post, use headphones, or speakers which have a good bass response]

Different ways of visualising low frequencies layered over each other: using Audacity, Sonic Visualiser and Jitter via Max/MSP and mastered in GIMP.

 

Man, infrasound is sexy. Infrasound is sound which happens at incredibly low frequencies – at pitches below our ears' and brains' ability to hear them. At the lowest end of audibility, there are sounds that we can basically only feel (click here to listen to some simple sine tones at frequencies down to 20hz, the bottom of our range, which only high-end headphones and very large subwoofers can produce). You've felt this when a big lorry goes past – casting a blanket of silence over everything else with sound we can't exactly hear (cf. Old Tom) – or in a club, when your ribcage starts shaking to the beat.
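(If you fancy rolling your own: here's a little Python script, standard library only, that writes five seconds of a 25hz sine tone to a WAV file. Whether you hear it, feel it, or get nothing at all depends entirely on your speakers.)

```python
import math, struct, wave

SR, HZ, SECS = 44100, 25.0, 5   # sample rate, frequency, duration

# build the raw 16-bit samples for one long sine wave at HZ
frames = b"".join(
    struct.pack("<h", int(30000 * math.sin(2 * math.pi * HZ * n / SR)))
    for n in range(SR * SECS)
)

with wave.open("low_tone.wav", "w") as f:
    f.setnchannels(1)       # mono
    f.setsampwidth(2)       # 16-bit samples
    f.setframerate(SR)
    f.writeframes(frames)
```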

 

Low tones are great for understanding the physicality of sound. Owing to their large wavelengths, the soundwaves will cancel out or reinforce each other when they meet in a space, meaning there are pockets of physical space in a room where the sound changes dramatically (the cancellation points are called nodes). We were exploring this during a residency at the HopBarn a few weeks ago with sound artist Angie Atmadjaja, who specialises in psychoacoustic phenomena. It's difficult to get across how weird this is. I had heard of nodes and cancellation in theory before then, but being there in the space was the first time I'd experienced it. There's something really special and present about just walking, listening, reacting in a room – and because of the somatic effect low frequencies have, you're listening with your entire body.

 

So we made a dance.

At its core, this was an exercise to integrate the properties of infrasound with some of the other things important in Me & My Whale: liveness, agency and the body. It resulted in a duet between me and Mook with a fairly simple score – to react – but which brought on lots of pretty interesting complexities, at least interesting if you’re into that sort of thing.

 

Method
At some point, I’ll dedicate a post to talking about gesture control – here, we’re using some soft-hacked gaming controllers (hereafter called the Magic Gloves) which are essentially two gloves on strings coming out of a box. I can get the position data from each of the gloves and use it to do anything.

One of the main resonant frequencies of the space (52hz – which is also a very important frequency for this project, more on that soon) was used to create two simple sine tone generators. Then, in Max/MSP, frequency and phase parameters were mapped onto the Magic Gloves so that distance and angle would subtly change the sound – bringing it up by a fraction of a hz, moving its phase by a gnat's wing. Each glove affects its own sine wave generator, which goes into its own independent speaker. This is important because it means neither sine wave affects the other in the software, or in the speaker box, keeping all of the wave interaction in the space.
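As a sketch of that mapping in Python (the real version is two cycle~ objects in Max/MSP, and the scaling factors here are invented): each glove nudges only its own generator, so any beating between the two tones happens in the air, not in the software.

```python
import math

BASE_HZ = 52.0   # the room's resonant frequency we tuned to

def glove_to_params(distance, angle):
    """Invented mapping: distance nudges frequency by fractions of a hz,
    angle nudges phase by a gnat's wing. Inputs are normalised 0..1."""
    return distance * 0.5, angle * 0.1      # (detune in hz, phase in radians)

def oscillator(detune_hz, phase, t):
    """One sample of one speaker's sine wave at time t (in seconds)."""
    return math.sin(2 * math.pi * (BASE_HZ + detune_hz) * t + phase)

left_detune, left_phase = glove_to_params(distance=0.4, angle=0.2)
right_detune, right_phase = glove_to_params(distance=0.9, angle=0.7)
t = 1.0   # one second in
print(oscillator(left_detune, left_phase, t))
print(oscillator(right_detune, right_phase, t))
```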

 

The business end of the Max/MSP patch: two cycle~ objects making sine waves, with the inputs at the top of the picture being the movements of the Magic Gloves.

 

Choreography
Our score was to react – to the sounds we were hearing, and to each other's presence. That's not particularly special, but with the added layer that the tones interact with each other in the space, it meant we had another performer with us – the room itself. As we moved through the space reacting to sound, and through our gestures changed the sound, what we heard changed as we passed through room nodes or moved the position of our heads. It actually turns into a really meditative blend between deep listening and a game. Another cool thing was the way this exercise sonified proxemics – argh, sorry – made real our physical and social distance through sound: when we were near each other, the difference between our two generated tones was very small, and our individual hearing of the sounds in the space was similar; but when we were on different sides of the room, or at different heights, our tones were phasing like crazy, and we would have heard very different things.
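(To hang a rough number on that phasing – my arithmetic, not something we measured on the day: two tones beat at the difference of their frequencies, so a generator sitting at 52.0hz against one nudged up to 52.3hz swells and fades about once every three seconds (52.3 - 52.0 = 0.3hz, ie one beat every 1/0.3 ≈ 3.3 seconds), while tones a few hz apart flutter several times a second.)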

Still from the video of Mook on the left and me on the right: I’ve used the Magic Gloves in performance for a while now, and I’m very grateful to Tom Mudd from Goldsmiths for first showing them to me.

 

Reflections
There's a lot of choreography scored from sound, and in my work as a sound designer I've seen how important music is in the devising process, but there's something really powerful (maybe in just a wanky way, but powerful anyway) about two people who are sometimes following the same directions and are sometimes completely against each other, but are mainly unaware of when the other one is and isn't. This dance doesn't really work as a performance in a sit-in-the-dark-and-judge sort of way; it's definitely a piece to do rather than watch – by the way, the sound the camera captured has very little to do with what we actually heard, which is pretty neat. As an exercise, though, it's given us some room for thought and experimentation on the interaction between body, dynamic movement and environment. I'm really excited about the formal implications it has – what happens to our control over material when we perform fluid scores? how can the score change depending on where it's observed from? what happens to your body when it learns it's being tracked? what influences does the watching of action have on the action, and how can that be sonified? and what the hell does this have to do with whales?

 

Xavier

 

~ ~ ~

Many thanks to Angie & Jon at the HopBarn for their support.

Glossary
Infrasound: any sound pitched lower than humans can hear.
Sine tone: the simplest synthesised sound, and one that can't occur in nature. It's basically a beep – very exciting.
Hertz (hz): the measurement of frequency – how high or low a sound is.
Psychoacoustics: the study of the psychological and physiological perception of sound.
Max/MSP: a programming environment that is used by nerds like me because you can plug anything into anything else and make sound from it.

 

The Beginning

Me & My Whale is a performance project which incorporates expanded approaches to text, theatre, music and programming to ask questions about the modern ownership of the natural world and our impending apocalypse.

It follows a military submarine captain at a crucial point in a bloody future war. Her ship is equipped with an untested acoustic weapon, which she has been ordered to detonate over the enemy capital. However, at the point when she must surface, she falls in love with a whale. Caught between her duty and her amygdala, we see many futures – generated procedurally through a performance-based interaction with reactive, live programming.

Originally written for Xavier’s (who?) dissertation for a masters in music, new life has been breathed into the project with the help of Mook (who?) and the support of Live Art Bistro, The HopBarn, The Royal Exchange Theatre and Arts Council England, who are, frankly, mad.


This will be a straightforward blog, for anyone interested to see what we’ve been doing, and for its future life in 2018 (aka the Year of the Whale). It is my fond hope that one day it will take its place alongside Napoleon’s war diaries and the memoirs of Julius Caesar.