I've been experimenting with merging the interactive computer programming I've used in projects like Luma and Venae Cavae with my projection negation technique from UnKnowledge or Triptych. Here's my first foray:
Above you see a board on my wall with a painted blue square on it. It's mounted on a stack of milk crates (these are big-budget experiments, I assure you) that also houses a Kinect sensor.
Now ordinarily I would project onto it a still image with colors I've calibrated in Photoshop to negate out the painted imagery (using complementary blends of additive and subtractive color on a matching gray background). The user's shadow then reveals the painting by obstructing the projection.
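(For the curious: conceptually the negating color is just the per-channel complement of the paint around that gray, something like the toy calculation below. In practice I tune it by eye in Photoshop against the actual paint and projector, so treat these numbers as made up.)

// Toy version of the complement idea (hypothetical values; the real colors
// are tuned by eye against the actual paint and projector).
color paint      = color(40, 90, 200);   // the painted blue (made up here)
color targetGray = color(128);           // the neutral gray background

// Per channel, pick a projected color so paint and projection average out to the gray.
color projected = color(
  constrain(2 * red(targetGray)   - red(paint),   0, 255),
  constrain(2 * green(targetGray) - green(paint), 0, 255),
  constrain(2 * blue(targetGray)  - blue(paint),  0, 255)
);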
However, this time I wrote a program in Processing that allows me to control the location and rotation of that projected image (in this case an orange square) by waving my right hand in real space. The square's location corresponds to my hand's x and y position, while my hand's z position (how close or far it is from the Kinect) controls the rotation of the shape. This way I can manipulate the occurrence of the negation with my hand!
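The core of the mapping looks roughly like this (sketched here with SimpleOpenNI for the skeleton tracking, though any Kinect library that gives you joint positions would do; the real program does more, and the ranges in the map() calls are placeholders you'd tune to your space):

import SimpleOpenNI.*;

SimpleOpenNI kinect;

void setup() {
  size(1024, 768);
  kinect = new SimpleOpenNI(this);
  kinect.enableDepth();
  kinect.enableUser();               // skeleton tracking (SKEL_PROFILE_ALL in older SimpleOpenNI versions)
  rectMode(CENTER);
  noStroke();
}

void draw() {
  kinect.update();
  background(128);                   // neutral gray field (matched to the painted background)

  // Assumes user 1 has calibrated via the tracking pose mentioned later
  if (kinect.isTrackingSkeleton(1)) {
    PVector hand = new PVector();
    kinect.getJointPositionSkeleton(1, SimpleOpenNI.SKEL_RIGHT_HAND, hand);

    // Real-world hand coordinates are in millimeters; ranges here are placeholders.
    // Hand x/y -> the square's position on screen.
    float x = map(hand.x, -600, 600, 0, width);
    float y = map(hand.y, -500, 500, height, 0);
    // Hand z (distance from the Kinect) -> the square's rotation.
    float angle = map(hand.z, 500, 2500, 0, TWO_PI);

    pushMatrix();
    translate(x, y);
    rotate(angle);
    fill(255, 128, 0);               // stand-in for the calibrated orange
    rect(0, 0, 200, 200);
    popMatrix();
  }
}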
Here's a video (you'll have to take my word for it that the color negation is seamless in person - the camera lens picks up the reflected colors differently than our eyes do, so the blended color reads a bit off-tint in the video. The grays are a truer match in person.)
Not sure yet where this will take me, but I imagine a large installation in which a person manipulates the projected image with their body, causing the negation, while simultaneously standing in front of the projector, concealing that negation and revealing the paint, in a convoluted, partially self-defeating way. The next trick will be getting it to work with multiple viewers (right now it relies on user calibration in a specific pose to detect the user's hand...)
We'll see what comes out of this, but I essentially just wanted to see if it could be done...
It can.