Thursday, July 26
Abject Leader presented three performances in what was my standout act of the night. The Australian duo of Sally Golding and Joel Stern combined a range of digital and analogue hardware to stage three mini-shows: Bloodless Landscape, The Gospel According to Johnny's Ghost, and Henri's Hallucinations across Time and Space. Joel provided the soundtrack, running sound from a laptop through live effects, with the occasional surprise coming from a trumpet he played at random intervals. Sally frenetically worked a group of five 16mm projectors, loading film, skewing and reflecting it off mirrors, and even tinting it via coloured filters rotating on a household fan. The performance was magical and surprising - frames drifting off into corners of the room, buzzing synths mirroring onscreen insects - it was expanded cinema at its best.
Next up, Stella Brennan showed South Pacific, a long-researched (18 months, she mentioned) digital video work exploring the legacy of WWII on the region, especially on island culture. Much of the video plays out as you see below - blown-up shots of ultrasounds, ocean waves, and bombs, with typewritten text animating across the bottom. More poetry or commentary than any video work I've seen recently, for me it was mixed. Each word and phrase was plucked carefully, delivering a strong statement which was at times poignant (describing how efforts to replant war runways failed) and funny (riffing off Pacific myths of steel guitars and khakis), but always retained a personal naturalness ("I'm going to land now"). But the visual side let me down: why use video for a poetic, text-based piece? As with a computer or a mobile phone screen, text isn't particularly suited to the medium. Doable, but not easy.
The next act started mysteriously. The audience came back after the interval to find the chairs and mattresses moved, creating a diagonal aisle down the middle of the space. Loren Chasse, a San Francisco artist, proceeded to unfurl a giant paper banner down this aisle, then scatter a collection of rocks, pebbles, and sand down it, recording the process with a MiniDisc. Loren then moved to creating sound with stones while the recording played back. A colleague (unnamed in the programme) manipulated ferns, twigs, and other NZ flora on an overhead projector, mirroring the sound Loren created. These two phases were repeated again, with variations - simple shadow-making reflecting live sound creation. The piece was interesting, but I was so surprised they didn't do the obvious that I became a little disappointed. Why not use concrete, physical actions - like tossing stones down on the ground - to create a live soundtrack? Sand washing down the paper would have been beautiful with effects applied, or as a trigger for other instruments. Instead, the record/playback process was slightly laborious, and the overlong piece - after the main action occurred - grew tiring.
Finally we were up to Matt Brennan's Cardboard Cinema, an intriguing piece that was meant to finish the programme. It never occurred. Due to technical difficulties or equipment breakages, Matt was stuck. Through some tremendous last-minute effort he submitted a short DV piece titled something like 'Matt's great movie'. Consisting of a barrage of echoing yells, blasted trumpet, and cymbal crashes, the video was fun, and funny. Matt appears in face masks - beating drumkits, running down stairs, and playing lead guitar riffs next to a toilet. A heroic effort and an easy end to the night, although it would have been interesting to see his billed act - where "audio visual collages emanate from cardboard machines".
Sunday, July 22
In the early seventies, experimental German band Kraftwerk became known for building their own synthesisers. Robert Moog developed his transistorised modular version from 1964 onwards. And while not matching the innovation of these pioneers, I just built a simple Moog-type synth in five minutes. And I'm not really a programmer.
And that, it seems, is the point. Increasingly, artists, musicians, and those working in the space between media and technology want something that provides some 'under the hood' control without the learning curve. Enter the rise of the graphical programming language - a visual approach that treats media as a series of connections - a slightly geekier version of the classic path from guitar to distortion pedal to amplifier.
Max/MSP, an early prototype for this concept, has been used for over 15 years by artists, educators, and musicians like Jonny Greenwood of Radiohead and Richard D. James (Aphex Twin). Still a popular and well-supported tool, it has become commercialised, with original coder Miller Puckette going on to develop a very similar, open-source alternative, Pure Data.
But while creating your own sound is easy, the buzz wears off quickly when you realise what you're listening to - a very artificial-sounding sine tone, less interesting than a telephone ring. Pure Data and Max become interesting when their connections extend outside themselves, to samplers, audio triggers, or further software. How about an installation triggered by a footstep, with a constantly unique soundtrack played by a string section? Webcam input hooked into PD, which randomly selects part of a score and outputs to orchestral VST (virtual studio technology) plugins.
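For the curious, that underwhelming sine tone takes only a few lines outside Pd too. A minimal Python sketch of what a Pd [osc~] object computes - the frequency, duration, and filename here are my own choices, not from any particular patch:

```python
import math
import struct
import wave

RATE = 44100     # samples per second (CD quality)
FREQ = 440.0     # A4 - a classic test tone
SECONDS = 2

# Generate raw sine samples - the digital equivalent of Pd's [osc~ 440]
samples = [math.sin(2 * math.pi * FREQ * n / RATE)
           for n in range(RATE * SECONDS)]

# Write them out as a 16-bit mono WAV file
with wave.open("sine.wav", "w") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(RATE)
    f.writeframes(b"".join(
        struct.pack("<h", int(s * 32767)) for s in samples))
```

Play sine.wav and you'll hear exactly the tone described above - which is why the patching only gets interesting once it connects to something else.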
vvvv follows the same approach, substituting frames, RGB channels, and rendering commands for audio signals. While it's Windows-only, the software has quickly produced some fascinating pieces. Seelenlose Automaten outputs a series of MIDI commands to both audio and video at the same time. For example, one note outputs a hi-hat sound to audio and a 'rotate everything left' to the visual 3D model. The result is a perfectly synced generative composition.
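To make the Seelenlose Automaten idea concrete, here's a minimal sketch - plain Python rather than vvvv, with a note-to-action mapping I've invented for illustration - of one MIDI-style event driving two outputs at once, so the audio and visuals can never drift apart:

```python
# Hypothetical note-to-action mapping, assumed for illustration only
NOTE_MAP = {
    42: ("hi-hat", "rotate everything left"),
    36: ("kick", "zoom in"),
}

def dispatch(note):
    """Return the (audio, visual) pair a single note triggers."""
    audio, visual = NOTE_MAP.get(note, ("silence", "hold"))
    return audio, visual

print(dispatch(42))  # ('hi-hat', 'rotate everything left')
```

Because both responses come from the same event, the sync is structural rather than something you have to maintain by hand.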
Wednesday, July 11
A shriek goes up from the crowd of kids. It's a sweltering midsummer day in downtown Chicago, but this group isn't feeling it. The moment they've been anticipating has arrived. A giant face projected on the column in front of them begins morphing, from sly smile to a puckered blowing expression. Then the surprise - a column of water bursts from the giant's mouth, sending a spray cascading over the group.
The Crown Fountain - two large columns and the ultra-shallow pool between them - is the creation of Jaume Plensa, one of the many commissioned public pieces of Millennium Park. Inside, the columns are actually very dry, with a core housing electronics for projection and timing, and a steel framework strengthening the thousands of clear glass bricks. Jaume chose a cross-section of 1000 Chicagoans, then filmed a set facial sequence - smiling, serious, then blowing. The footage is slowed down drastically and cycles through people after the dramatic fountain blow - a spout composed mostly of air to minimise its impact.
Closer to home, the most similar work is Kentaro Yamada's portrait series, shown at Window in 2006. Kentaro filmed a selection of friends with a range of facial expressions. Compared to Crown Fountain, the portraits are much more interactive. When visitors blow into a connected microphone, the normally stone-faced portrait shifts - laughs, looks askew, or simply changes. Interactive - but uncontrollable. The work frustrates any planned manipulation of the system through its simple input device (mic volume) and responses which don't quite match up. The resulting control/uncontrol tension provides an edge and interest which the spectacular but passive Crown Fountain lacks.
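The blow-to-respond mechanic is simple to sketch. A hedged guess at the logic - the threshold value and response list are mine, not Kentaro's - shows where the uncontrollability comes from: the viewer decides whether the portrait reacts, never how.

```python
import random

# Hypothetical values, assumed for illustration only
BLOW_THRESHOLD = 0.6  # normalised mic volume, 0.0 to 1.0
RESPONSES = ["laughs", "looks askew", "changes expression"]

def react(volume, rng=random):
    """Shift the portrait only above the threshold - and even then,
    the viewer can't choose which response they get."""
    if volume < BLOW_THRESHOLD:
        return "stone-faced"
    return rng.choice(RESPONSES)
```

The single scalar input (volume) carries no information about which response the viewer wants, so the mismatch between intent and outcome is built in.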