
The Studio of Eric Valosin

Saturday, September 21, 2013

Aferro Collaboration Progress

Making some major headway with the Kinect hacking project for Newark's Open Doors festival! That's a good thing, because possibilities are being stirred up for us to reprise this (a 2.0 version) with Aferro Gallery in January, and it may even travel to a gallery in Philly if it works well as a maiden voyage. In the process of collaborating on this piece, Marc D'Agusto and I have cooked up some really interesting ideas for future projects as well, and it looks like we may get a chance to let them spread their wings a bit!

After working my way through 300 or so pages of text and tutorials in Greg Borenstein's amazing book Making Things See in the last couple weeks, I've begun to get a pretty strong handle on Processing and the SimpleOpenNI library for interfacing with the Xbox Kinect.

Having built countless great (and novel) tutorial projects like an interactive air drum kit, a hand-tracking paint application, and a disco song that gets triggered by striking the Saturday Night Fever pose, I'm now working out some test sketches of the individual components of our actual installation.

For instance, I know we'll need to "green screen" out some portions of our image and create overlays, so I built a sketch to practice creating dynamic alpha masks using some images I had lying around. Just for kicks, the mandala form (from my Hyalo installation) is overlaid on a screenshot from my blog and moves with the cursor to prove the transparency is dynamic (a stripped-down version of the sketch follows the images below). The second image messes with displaying and overlaying 2D images in a 3D space.


Overlaying a motion background video on a still image

Same, but translated and rotated in 3D space
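For the curious, the heart of that alpha-mask test boils down to something like the sketch below. This is a stripped-down illustration rather than the actual sketch, and the image file names are just placeholders.

PImage bg, overlay, alphaMask;

void setup() {
  size(1024, 768);
  bg = loadImage("blogScreenshot.png");      // still image used as the backdrop
  overlay = loadImage("mandala.png");        // image to overlay on top of it
  alphaMask = loadImage("mandalaMask.png");  // grayscale mask: white = opaque, black = transparent
  overlay.mask(alphaMask);                   // bake the mask into the overlay's alpha channel
}

void draw() {
  image(bg, 0, 0, width, height);
  // the masked overlay follows the cursor, proving the transparency is "live"
  image(overlay, mouseX - overlay.width/2, mouseY - overlay.height/2);
}

The 3D version in the second image is the same idea, just rendered with P3D and run through translate() and rotateY() before drawing the overlay.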

The next big mountain to climb was applying that masking technique to the Kinect's depth image in order to "green screen" out the background and selectively limit the displayed image to just the foreground users (or the middle ground, as needed). Then I had to resize and relocate that image so that, depending on where a user is in the space, they get displayed in different parts of the scene. (For now that scene is just a red background.)
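To give a sense of how that works, the core of the depth-threshold test is roughly the sketch below. It assumes SimpleOpenNI's depthMap() and depthImage() calls, and the threshold values and the cutout's placement are placeholder numbers, not the installation's real settings.

import SimpleOpenNI.*;

SimpleOpenNI kinect;
int minDepth = 500;   // nearest distance to keep, in millimeters
int maxDepth = 1500;  // farthest distance to keep, in millimeters

void setup() {
  size(640, 480);
  kinect = new SimpleOpenNI(this);
  kinect.enableDepth();
}

void draw() {
  kinect.update();
  background(255, 0, 0);                  // stand-in "scene": a plain red background

  int[] depthValues = kinect.depthMap();  // per-pixel depth in millimeters
  PImage depthImg = kinect.depthImage();  // grayscale depth image
  PImage cutout = createImage(kinect.depthWidth(), kinect.depthHeight(), ARGB);

  depthImg.loadPixels();
  cutout.loadPixels();
  for (int i = 0; i < depthValues.length; i++) {
    if (depthValues[i] > minDepth && depthValues[i] < maxDepth) {
      cutout.pixels[i] = depthImg.pixels[i];  // keep pixels inside the depth band
    } else {
      cutout.pixels[i] = color(0, 0);         // everything else goes fully transparent
    }
  }
  cutout.updatePixels();

  // resize and relocate the cutout so the user shows up in a different part of the scene
  image(cutout, 160, 120, 320, 240);
}

Because the cutout carries its own alpha channel, drawing it smaller and off-center over the red background is all it takes to "relocate" whoever is standing in that depth band.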


As you can see, only the foreground elements are displayed here. Tonight I'll be experimenting with applying this code to the middle-ground elements as well and having them show up elsewhere.
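My guess is that extension will mostly mean factoring the threshold loop into a helper that can pull out any depth band, then drawing each band into its own region of the scene, roughly like this (the depth ranges and screen positions are placeholders):

import SimpleOpenNI.*;

SimpleOpenNI kinect;

void setup() {
  size(640, 480);
  kinect = new SimpleOpenNI(this);
  kinect.enableDepth();
}

// build a transparent cutout of everything between minDepth and maxDepth (millimeters)
PImage bandCutout(int minDepth, int maxDepth) {
  int[] depthValues = kinect.depthMap();
  PImage depthImg = kinect.depthImage();
  PImage band = createImage(kinect.depthWidth(), kinect.depthHeight(), ARGB);
  depthImg.loadPixels();
  band.loadPixels();
  for (int i = 0; i < depthValues.length; i++) {
    boolean inBand = depthValues[i] > minDepth && depthValues[i] < maxDepth;
    band.pixels[i] = inBand ? depthImg.pixels[i] : color(0, 0);
  }
  band.updatePixels();
  return band;
}

void draw() {
  kinect.update();
  background(255, 0, 0);
  image(bandCutout(500, 1500), 0, 240, 320, 240);    // foreground users, lower left
  image(bandCutout(1500, 2500), 320, 0, 320, 240);   // middle-ground elements, upper right
}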

I've also got some testing to do to determine the best way to go about achieving what I managed to get here. For example (if this means anything to you, you probably already know the answer), this test uses a threshold applied to the depthMap array (the list of depth information for the given Kinect scene, accurate to the millimeter) to create an alpha mask that shows only a certain range of distances from the Kinect, which is then applied to the depthImage itself (the grayscale image whose values correspond to depth). However, I still want to try tracking users and isolating them with the Center of Mass function, as well as the newly added auto-calibration feature for skeleton tracking.
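For comparison, the user-tracking route would lean on the user map and center of mass instead of a manual threshold. The sketch below is a rough guess at what that looks like, assuming the newer SimpleOpenNI user API (the no-pose enableUser() auto-calibration, userMap(), and getCoM()) rather than the older pose-calibration calls:

import SimpleOpenNI.*;

SimpleOpenNI kinect;

void setup() {
  size(640, 480);
  kinect = new SimpleOpenNI(this);
  kinect.enableDepth();
  kinect.enableUser();  // auto-calibrating user tracking, no calibration pose needed
}

void draw() {
  kinect.update();
  background(255, 0, 0);

  // userMap() labels every depth pixel with a user ID (0 = no user),
  // so it can stand in for the hand-rolled depth threshold as an alpha mask
  int[] userMap = kinect.userMap();
  PImage depthImg = kinect.depthImage();
  PImage cutout = createImage(kinect.depthWidth(), kinect.depthHeight(), ARGB);
  depthImg.loadPixels();
  cutout.loadPixels();
  for (int i = 0; i < userMap.length; i++) {
    cutout.pixels[i] = (userMap[i] != 0) ? depthImg.pixels[i] : color(0, 0);
  }
  cutout.updatePixels();
  image(cutout, 0, 0);

  // each tracked user's center of mass, converted to screen coordinates,
  // gives a single point for deciding where in the scene that person belongs
  int[] users = kinect.getUsers();
  for (int i = 0; i < users.length; i++) {
    PVector com = new PVector();
    if (kinect.getCoM(users[i], com)) {
      PVector screenPos = new PVector();
      kinect.convertRealWorldToProjective(com, screenPos);
      fill(0, 255, 0);
      noStroke();
      ellipse(screenPos.x, screenPos.y, 10, 10);
    }
  }
}

The appeal is that the user map already isolates each person no matter how far away they stand, and the center of mass gives one point per user to hang the repositioning logic on.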

Finally, I'll be playing with the aesthetics of it all, working on getting the shadows and inverse shadow effects outlined in Marc's original proposal.

Then all that has to happen is cobbling all the test pieces together with the actual imagery we'll be using, and we'll have ourselves a project!

Click here for the next post about this project (3/4) >>
