For the past few days, I’ve toyed with the Kinect sensor. In order to use the sensor, I downloaded the freenect library, as well as a freenect wrapper for both Processing (courtesy of Daniel Shiffman) and Max (courtesy of Jean-Marc Pelletier). After a bit of experimentation, I started working with the jit.freenect.grab object for Jitter.
By combining the jit.freenect.grab object with some computer vision processors (OpenCV – also thanks to JMP), I built a quick & dirty Kinect-controlled synthesizer in Max. The X/Y position of your hands (or head, or legs) determines pitch and amplitude, and the Z position toggles sound events on and off.
I’m thinking of this interface as a kind of virtual curtain. Put your hand through the curtain, and you make sounds. Move your hands around, and the sounds change accordingly. Pull them back behind the curtain, and the sounds stop. It’s a pretty simple mapping, and I plan to incorporate more CV tracking to influence sound generation (direction, # of features, optical flow), and to build more complex sound objects. Right now, it’s just 16 voices of sine waves, but I’m thinking of using granular textures as well as virtual objects that will make sounds when “hit.” (Working title of this project is “Pay No Attention,” as in “pay no attention to the man behind the curtain” from The Wizard of Oz.)
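The actual patch lives in Max/Jitter, but the curtain mapping itself is simple enough to sketch in a few lines of Python. This is purely illustrative: the names (`CURTAIN_Z`, `map_hand`), the particular axis assignments (X to pitch, Y to amplitude), and the frequency range are my own assumptions, not details from the patch.

```python
# Illustrative sketch of the "virtual curtain" mapping (assumptions:
# normalized 0..1 coordinates, X -> pitch, Y -> amplitude, Z gates sound).

CURTAIN_Z = 0.5  # depth threshold: closer than this = "through the curtain"

def map_hand(x, y, z, f_lo=110.0, f_hi=880.0):
    """Map one tracked point (e.g. a hand) to a sine-voice setting.

    x -> frequency, interpolated linearly between f_lo and f_hi
    y -> amplitude (0..1)
    z -> gate: the voice sounds only while the hand is in front
         of the curtain; pulling it back silences the voice
    """
    gate = z < CURTAIN_Z
    freq = f_lo + x * (f_hi - f_lo)
    amp = y if gate else 0.0
    return freq, amp, gate
```

With 16 sine voices, each tracked feature would get its own `map_hand` call per frame; a hand at the center of the image (`x = 0.5`) pushed through the curtain lands at 495 Hz under these assumed bounds.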
I’ll post a video of what this looks like after the holiday break. I need to spend some time thinking about how to give this interface a non-trivial mapping, and the kind of “virtual instrument” I want to compose for. And after talking with Brygg Ullmer, I’m now thinking that this may be something that could be adapted for our laptop orchestra, the LOLs.
The good news is that I’ve had the time to become totally engrossed in working on this. The bad news is that I lose all sense of time when that happens. What is certain is that I have way, way too much fun at my job.