MERRY CHRISTMAS!
(to those who celebrated)
&
HAPPY NEW YEAR!
(forthcoming)
Well, finals have come and gone, and another semester is at a close. This one was far more comfortable (though no less rigorous) than the last - not to say that the prior semester was without success, but I made my way through this one with far more confidence, thanks to a much stronger conceptual framework undergirding an equally strong body of work (at least I think so).
In my last post I talked a bit about the most recent QR code mandala drawings that I included in my final crit, and so I wanted to take this opportunity to talk (at what regrettably seems to be enormous length...) about the other half of my final crit, which consisted of my first experiment with interactive video! [cue trumpet fanfare]
The Backstory:
It occurred to me that interactivity and bodily connection to the digital would become a crucial part of my artistic quest for some sort of postmodern cyber-mysticism. For one, all spirituality is participatory, not merely observational. Secondly, there's something sacred, as Marcel Mauss points out, in the bodily techne, or techniques (equated to technology [also techne]) that allow us to engage in unifying the mind and body in mystical contemplation: a theology of the body as a technological mediator for the divine - a techno-theo-logy of the body, if you will. In light of that, there's something powerful in the ability of art to make people move.
So, I began constructing a stained glass/mandala video piece that would sense the viewer's presence and respond according to his/her movements. The main component is an arched stained glass window (a combination of a mandala temple form at the top and a cruciform cathedral layout at the bottom). Given that the semiotic cosmology of such an image is essentially vertical (heaven = up; earth = down) and therefore Platonic (upper focal point of mandala = good/ideal; lower outskirts = bad/derived), the viewer's presence would trigger crossfades in the video that would literally flip that on its head (and in many other directions). Essentially, I want the viewer to experience a 'beyond' that is not above or elsewhere, but a 'beyond' that is within and throughout.
The Experience:
So the viewer starts with this:
When they approach the video, they see themselves silhouetted within the window.
When their head enters the space within the center of the mandala form at the top, it triggers a cruciform inversion of the imagery.
If the viewer raises his/her arms to match the cross shape, then on the way up (at 45° from the bottom) they trigger a diagonal spread of the image.
Spreading his/her arms out to 90° triggers two floating, rotating mandala forms (the red you see was chroma-keyed out to make a transparent background).
The Catch:
Of course, one of the problems with this sort of project, as I pointed out in my rant on ethics, is its tendency toward a 'master-slave' dynamic; a "look what I can make it do," one-trick-pony sort of relationship of viewer to viewed. Theologically and ethically there are a lot of issues here (if cyberspace is truly a viable venue for divinity then it can't be entirely controlled by the viewer; a true participation has to be a back and forth, not a one-way interaction; etc.).
So I built in a fail-safe mechanism. Should the viewer decide to 'play' too much, the build-up of triggered changes eventually amounts to a progressive dimming of the screen until it goes entirely black and refuses the viewer any further interaction. Only if the viewer stands still for 15 seconds does the imagery restore itself to its former functionality!
Here's a video of it all in action during a test run...
Then I just had to build a rear projection screen for it, which would serve as a false wall (hiding all the equipment behind it)... I chose PVC piping for its lightness and modularity:
And then stretched satin over it (which entailed many trips to fabric stores, experiments with swatches, sewing inadequately sized pieces together, and a great deal of frustration...)
And finally put it all in place behind some of the school's moveable walls:
Behind these...
...was this!
Voila!
How it Actually Works:
Ok, so here's the nitty-gritty, which quite honestly might be even more boring than all the stuff prior, but it may prove worthwhile for some cooperative Platonic reader out there...
All semester I'd taken up learning a piece of programming software: Cycling '74's Max 6, with MSP and Jitter. Essentially, its object-based visual interface allows you to create your own programs (called patchers) without needing to be familiar with traditional text-based programming languages. Each "object" you create is given a function. Information is then passed from one object to another, via patch cables, being transformed in some way by each object's designated function. Max objects deal primarily with mathematical data, MSP with sound, and Jitter with video. By stringing these objects together you can essentially make something that does just about whatever you want, to just about any type of input, ranging from something as simple as an alarm clock to an entire DJ mixer.
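For anyone more at home in text-based code, here's a very loose Python analogy of that object-and-patch-cable idea (purely illustrative - nothing here is actual Max code or its API):

```python
# A loose Python analogy for Max's dataflow idea (not actual Max code):
# each "object" transforms what it receives and passes the result on
# through its "patch cable" to the next object downstream.

class MaxObjectAnalogy:
    """Hypothetical stand-in for a Max object: one function, some outlets."""
    def __init__(self, function):
        self.function = function
        self.outlets = []          # objects downstream of this one

    def connect(self, other):      # the "patch cable"
        self.outlets.append(other)

    def receive(self, data):       # data arrives at the inlet...
        result = self.function(data)
        for obj in self.outlets:   # ...and the result flows onward
            obj.receive(result)

# e.g. a tiny chain that doubles a number and then prints it
double = MaxObjectAnalogy(lambda x: x * 2)
printer = MaxObjectAnalogy(lambda x: print(x) or x)
double.connect(printer)
double.receive(21)                 # prints 42
```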
Here's the basic layout of my patcher:
At the top you see a "loadbang" object that sets up all the pieces that need to be loaded when the patcher is opened. Next to it is a trigger button that sends a "bang" message (a "do something" signal, roughly a "1" in binary) to turn on the machine, which then has 4 parts:
Part 1 (top left corner of patcher)
I hacked a PS3 Eye camera to work with my computer as a webcam (easy to do, and the PS3 Eye outperforms most other webcams that cost two to even four times as much! Check this site for more info).
After tampering with the levels and turning off the auto-exposure, the camera - aimed at the viewer - takes a black-and-white, high-contrast, live video feed.
pretend there's somebody standing in the frame...
Ordinarily this would feed directly into Max, but I had some issues with lag (note: Max prefers FireWire cameras, not USB), so I built a workaround: opening the video in the macam driver app's window and taking a video screen grab of that window in Max. Max then uses that silhouetted video of the viewer in the final mix and also sends that video information to be dissected by Part 2.
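For illustration's sake, here's a rough Python/OpenCV sketch of what that capture-and-threshold step amounts to (the actual piece does it with the PS3 Eye, macam, and a screen grab inside Max; the camera index and threshold value below are just assumptions):

```python
# Rough Python/OpenCV equivalent of Part 1 (illustration only; the real
# piece uses a PS3 Eye, the macam driver, and a screen grab into Max).
import cv2

cap = cv2.VideoCapture(0)              # assumes the webcam is device 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # hard threshold: bright background -> white, shadowed viewer -> black
    _, silhouette = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY)
    cv2.imshow("silhouette", silhouette)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```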
Part 2 (colored boxes in patcher)
Each video file that I wanted to crossfade in based on the viewer's motion is represented in one of the colored boxes above. Each box contains sub-patchers that do two things. First (shown below in the left-hand sub-patcher window), it isolates a pixel in the original black-and-white video feed of the viewer. This pixel represents a location in the space occupied by the viewer. By default it is a white pixel because of the high contrast. When part of the viewer's body obstructs this pixel, it turns black.
Whenever that pixel changes color, the second sub-patcher (on the right) uses that change to trigger a crossfade with the given video. Each video is assigned a different pixel location, so depending on where the viewer moves, different videos will crossfade in and out.
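In other words, each colored box boils down to "watch one pixel, fire when it flips." Here's a small Python sketch of that logic (illustrative only - the pixel coordinates and the start_crossfade callbacks are hypothetical stand-ins, and silhouette is assumed to be a thresholded grayscale frame like the one from Part 1):

```python
# Sketch of the Part 2 idea: watch one pixel of the silhouette feed and
# fire a trigger whenever it flips between white and black.

class PixelTrigger:
    def __init__(self, x, y, start_crossfade):
        self.x, self.y = x, y
        self.start_crossfade = start_crossfade
        self.was_white = True            # white by default (high contrast)

    def update(self, silhouette):
        is_white = silhouette[self.y, self.x] > 127
        if is_white != self.was_white:   # the pixel changed color: "bang"
            self.start_crossfade()
        self.was_white = is_white

# one trigger per video, each watching a different spot in the frame
# (coordinates and callbacks here are made up for the example)
triggers = [
    PixelTrigger(320, 60,  lambda: print("crossfade: cruciform inversion")),
    PixelTrigger(160, 240, lambda: print("crossfade: diagonal spread")),
    PixelTrigger(480, 240, lambda: print("crossfade: rotating mandalas")),
]

def process_frame(silhouette):
    for t in triggers:
        t.update(silhouette)
```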
Part 3
These crossfades are then woven together and overlaid with the viewer's silhouette video to produce the video that will ultimately be output to the projector.
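As a rough illustration, that weaving-and-overlay step amounts to something like the following (a Python/NumPy sketch, assuming float frames in 0..1; the blend order and the 0.8 silhouette darkening are arbitrary stand-ins for what the patcher actually does):

```python
# Rough sketch of the Part 3 compositing step: blend each video layer in
# by its current crossfade amount, then overlay the viewer's silhouette.
import numpy as np

def composite(layers, fade_amounts, silhouette):
    """layers: list of H x W x 3 float frames in 0..1;
    fade_amounts: one 0..1 crossfade weight per layer;
    silhouette: H x W mask of the viewer (1 where the viewer stands)."""
    out = np.zeros_like(layers[0])
    # blend each video layer in by its current crossfade amount
    for frame, fade in zip(layers, fade_amounts):
        out = out * (1.0 - fade) + frame * fade
    # overlay the viewer's silhouette by darkening the composite there
    out = out * (1.0 - silhouette[..., None] * 0.8)
    return np.clip(out, 0.0, 1.0)
```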
Part 4
On its way out to the projector the video passes through one final sub-patcher, the fail-safe dimmer.
Besides triggering a crossfade, each bang message produced when an isolated pixel changes color also gets fed into this sub-patcher. It ignores the first 40 bangs (allowing the viewer to mess with the piece for a while without immediately getting confused and frustrated by a constantly dimming screen). After that 40th bang, however, every subsequent bang amounts to a 10% decrease in the brightness of the final image, until the brightness is reduced all the way to 0.
At that point a timer is started. When the timer counts up to 15 seconds (20 in the above image, before I adjusted it), the brightness is restored to 100% and the bang count is reset to 0. However, until it reaches 15, every bang resets the timer, so it will never reach 15 unless no bangs are sent at all; i.e., 15 seconds of complete stillness.
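Put as a sketch, the fail-safe logic looks something like this (a Python stand-in for the sub-patcher, using the numbers from above - 40 grace bangs, 10% per bang, 15 seconds of stillness; the class and method names are mine, not Max's):

```python
# Sketch of the Part 4 fail-safe: ignore the first 40 bangs, dim 10% per
# bang after that, and restore full brightness only once the screen is
# fully black and 15 seconds pass with no bangs at all.
import time

class FailSafeDimmer:
    GRACE_BANGS = 40        # bangs ignored before the dimming kicks in
    DIM_STEP_PCT = 10       # percent of brightness lost per bang after that
    STILLNESS_SECS = 15     # stillness required before the image restores

    def __init__(self):
        self.bang_count = 0
        self.brightness_pct = 100
        self.last_bang_time = time.monotonic()

    def on_bang(self):
        """Called each time an isolated pixel changes color."""
        self.last_bang_time = time.monotonic()   # any bang resets the timer
        self.bang_count += 1
        if self.bang_count > self.GRACE_BANGS:
            self.brightness_pct = max(0, self.brightness_pct - self.DIM_STEP_PCT)

    def tick(self):
        """Called every frame; once fully black, stillness restores the image."""
        still_for = time.monotonic() - self.last_bang_time
        if self.brightness_pct == 0 and still_for >= self.STILLNESS_SECS:
            self.brightness_pct = 100
            self.bang_count = 0
        return self.brightness_pct / 100.0       # scale the output frame by this
```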
And once again, all this is then rear-projected onto the screen in front of the viewer, so that they watch the results of their actions/interactions play out in front of them!
The Aftermath:
With all said and done, I have to say I'm quite pleased with the outcome for a first try at such a project. However, I've learned a LOT of things that might be done differently (and better) in the future.
For one, the isolated-pixel technique banks on a high-contrast video. In order for the viewer to stand out against the background, the wall behind him/her has to be well lit, but the viewer left in shadow. Though functional, this has the side effect of providing enough light to wash out the projection screen to some degree. I know there are other ways to track motion with this sort of technology, and I'll be looking into those (still getting the silhouette imagery of the viewer, however, is another story...).
Secondly, I have much to learn about preserving the frame rate/quality of the videos within Max. I managed to avoid some lag by loading all the videos to RAM upon startup, but there is still a bit of lag and degradation of quality.
Third, the screen itself would, of course, be entirely unacceptable for anything other than a school final with a ridiculous 30-minute install time limit (yep). Ideally I'd love to find properly sized material with a tad more clarity. But I'm content with what I had for what it was.
Finally, there is always much to be excavated in the way of concept and formalism of the imagery itself, but that I can save for other projects.
I'd love to tweak some of this stuff before settling on this as a finished piece, but it proved to be a promising adventure into the world of interactive video!
Ok, I'm done talking. I swear.