Sunday, July 22
In the early seventies, the experimental German band Kraftwerk became known for building their own synthesisers. Robert Moog had been developing his transistorised modular synthesiser from 1964 onwards. And while not matching the innovation of these pioneers, I just built a simple Moog-style synth in five minutes. And I'm not really a programmer.
And that, it seems, is the point. Increasingly, artists, musicians, and those working in the space between media and technology want something that provides some 'under the hood' control without the learning curve. Enter the rise of the graphical programming language - a visual approach that treats media as a series of connections - a slightly geekier version of the classic path from guitar to distortion pedal to amplifier.
Max/MSP, an early example of this concept, has been used for over 15 years by artists, educators, and musicians like Jonny Greenwood of Radiohead and Richard D. James (Aphex Twin). Still a popular and well-supported tool, it has become commercialised, with its original author, Miller Puckette, going on to develop a very similar open-source alternative, Pure Data.
But while creating your own sound is easy, the buzz wears off quickly when you realise what you're listening to - a very artificial-sounding sine tone, less interesting than a telephone ring. Pure Data and Max become interesting when their connections extend beyond themselves, to samplers, audio triggers, or other software. How about an installation triggered by a footstep, with a constantly unique soundtrack played by a string section? Webcam input hooked into Pure Data, which randomly selects part of a score and outputs to orchestral VST (Virtual Studio Technology) plugins.
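The logic of that hypothetical installation can be sketched in plain Python - a rough analogue of the Pure Data patch, not the patch itself. The motion threshold, the frame format, and the score fragments below are all invented for illustration:

```python
import random

# Hypothetical score: each fragment is a list of MIDI note numbers
# that would be passed on to an orchestral VST plugin.
SCORE_FRAGMENTS = [
    [60, 64, 67],   # C major triad
    [62, 65, 69],   # D minor triad
    [59, 62, 67],   # G major, first inversion
]

def motion_detected(prev_frame, frame, threshold=30):
    """Crude webcam-style trigger: compare two greyscale frames
    (here just lists of pixel intensities) and fire when the total
    change exceeds a threshold - standing in for a footstep."""
    diff = sum(abs(a - b) for a, b in zip(prev_frame, frame))
    return diff > threshold

def on_trigger():
    """Randomly select part of the score, as the patch would."""
    return random.choice(SCORE_FRAGMENTS)

# Simulated camera input: a still scene, then a footstep.
still = [10, 10, 10, 10]
step = [10, 80, 80, 10]

if motion_detected(still, step):
    notes = on_trigger()
    print("send to VST:", notes)
```

The point, as in the patch, is that the score never plays the same way twice: each trigger draws a fresh fragment.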
vvvv follows the same approach, substituting frames, RGB channels, and rendering commands for audio signals. While it's Windows-only, the software has quickly produced some fascinating pieces. Seelenlose Automaten outputs a series of MIDI commands to audio and video at the same time. For example, one note triggers a hi-hat sound in the audio and a 'rotate everything left' command on the 3D model. The result is a perfectly synced generative composition.
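That one-note-drives-both-media idea is simple to sketch: a single MIDI note number fans out to an audio event and a rendering command. The mapping table below is invented for illustration - it is not the actual Seelenlose Automaten patch:

```python
# Hypothetical mapping from a MIDI note number to a pair of
# (audio event, visual command), in the spirit of the piece:
# one message drives sound and image simultaneously.
NOTE_MAP = {
    42: ("hi-hat", "rotate everything left"),
    36: ("kick", "rotate everything right"),
    38: ("snare", "flash background"),
}

def dispatch(note):
    """Fan one MIDI note out to both renderers, so audio and
    video stay in sync by construction."""
    audio_event, visual_command = NOTE_MAP[note]
    return audio_event, visual_command

sound, visual = dispatch(42)
print(sound, "/", visual)  # the hi-hat and the rotation fire together
```

Because both outputs come from the same message, there is nothing to drift out of time: the sync is a property of the wiring, not of careful scheduling.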