
The Studio of Eric Valosin

Saturday, October 15, 2016

"Oooh, Ahhh, OOOA:" The Word that Speaks Itself (Objectless Oriented Program)

Sound is a fundamental part of the mystical experience, from the cosmic Om to the subatomic vibrations of matter, to the Logos of Biblical spoken creation, to the oral tradition of Quranic verse, to the noetic unity of a band finding the pocket.

I'd been trying to figure out how to incorporate sound into my work for years, but could never quite find a way that didn't seem contrived or precious.

However, this past spring I was invited by Oculus Art Collaborative to join them in an experimental sound exhibition at Gallery Aferro, and it seemed the perfect challenge to finally dig in and figure it out.

The show was to be Object Oriented Ontological Action (O.O.O.A.), an experimental, participatory exhibition of artist-made sound objects.
The call was for "sound objects" that viewers could interact with to collaboratively create music within an "anarchic interactive public performance inspired by John Zorn’s COBRA and object-oriented ontological philosophies by Yale architecture professor Mark Foster Gage."

I thought about my usual wont toward immateriality, and what sort of postmodern, relational, mystical sound object I might be able to contribute.

Object Oriented Objectlessness


Phenomenologically speaking, Object Oriented Ontology aims to redirect our attention away from function and back to the tool itself, which, as Martin Heidegger points out in Being and Time, is usually invisible, veiled behind its function, until such time as it breaks and stops functioning and we stop taking the object's objectness for granted. In that sense, this exhibition was about the anarchic creation of function flowing out of the exploration of the object as object. Following Zorn's practice, it also lays bare the underlying arbitrariness of rule structures and guidelines, inevitably denying order while attempting to follow it, and producing a new order of its own in the process. Our attention is drawn to the invisible object, and to the meta-framework we create around it.

It therefore also has something to do with making "invisible form" visible. Well, this seemed like a perfectly mystical place to start.

But it just so happens there is also a technological parallel. There's a style of programming in which functions get bundled up into "objects" that carry them out (likewise shifting attention from the function to the object that performs it). These are aptly named Object Oriented Programming Languages. Java is one of them, and Java just happens to be the basis of the Processing development environment, the very tool I've used for nearly all of my interactive new media projects.
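
To make the idea concrete, here's a toy illustration (not code from the project): in an object-oriented language the behavior lives inside the object, so you end up handling "a sound object" rather than a bare function.

    // Toy illustration only: the behavior (chanting) is bundled inside the object,
    // so attention shifts from the function to the object that carries it.
    class SoundObject {
      String phrase;

      SoundObject(String phrase) {
        this.phrase = phrase;
      }

      void chant() {
        println("chanting: " + phrase);  // stand-in for real speech synthesis
      }
    }

    // usage: new SoundObject("oooa").chant();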

The road seemed clearly laid out before me at this point: I'd use an Object Oriented Program to create an Objectless Object that would be used in accordance with an Object Oriented Ontological experience!

My Objectless Sound Object


My proposal to Oculus was thus:

The Word that Speaks Itself (Objectless Oriented Program) is an objectless object that uses computer vision to track the viewer's hand motions within a defined space to control playback of the "object" chanting its own code. Two hands allow for two voices in harmony, and the motion/placement of the hands controls pitch, volume, and tempo. In that way the viewer becomes the conductor and the instrument. Fittingly, the project is created using Java, which is an "object oriented programming language." 
I'm interested, as you know, in the thingness of nothingness, and interactivity within a digitally mediated meditation, and I think this project would fit very well within the theme of the show. The title is taken from a Meister Eckhart sermon reflecting on the paradox of God being the Logos - the "Word" - and yet every word by nature needing to have been spoken. Thus God becomes the word that speaks itself.
To accomplish this, I essentially broke down the project into a set of variables tied to intuitive musical conducting motions. The position of the hands on the Y axis (high or low) would be tied to pitch, the X axis (out to the sides or near the body) to volume, and the Z axis (stretched out in front or pulled back near the body) to tempo.
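
In Processing terms, that mapping looks roughly like the sketch below. The names handX, handY, and handZ are placeholders for whatever the computer vision tracking reports for one hand (the tracking itself isn't shown), and the ranges are illustrative rather than the values I actually settled on.

    // Schematic only: handX/handY/handZ stand in for one tracked hand's position.
    float pitchHz, volume, wordsPerMinute;

    void updateVoiceParams(float handX, float handY, float handZ) {
      // Y axis: the higher the hand (smaller screen y), the higher the pitch
      pitchHz = map(handY, height, 0, 110, 440);
      // X axis: the farther out from the body's center line, the louder
      volume = map(abs(handX - width/2), 0, width/2, 0.2, 1.0);
      // Z axis: the farther the hand stretches forward (normalized 0-1 depth), the faster the chant
      wordsPerMinute = map(handZ, 0, 1, 60, 180);
    }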

The result would be like some sort of full-body, anthropomorphized, digital theremin that chants its own code like a self-generative, polyphonic monk!

Text-To-Speech

The first challenge was to get this thing to talk in the first place. If worst came to worst, I knew I could record every necessary phoneme and then use those variables to trigger playback of a bazillion little .mp3 clips. But considering every device I own has some sort of text-to-speech accessibility option, I figured there must be a better way.

That's when I stumbled upon the FreeTTS library. A developer named Guru (whose website is currently unavailable) wrote a convenient Processing wrapper for this Java library that let me have a synthesized voice read whatever text I fed into my Processing sketch, and even let me change a variable for pitch.

Here's a basic example sketch that uses threading to create two-voice harmony:
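
What follows is a minimal sketch of the idea rather than the exact code: it calls FreeTTS directly (with the freetts jars dropped into the sketch's "code" folder) instead of going through Guru's wrapper, and uses "kevin16," the stock FreeTTS voice.

    // Each voice speaks on its own thread so the two lines overlap into a rough harmony.
    import com.sun.speech.freetts.Voice;
    import com.sun.speech.freetts.VoiceManager;

    void setup() {
      chant("the word that speaks itself", 110);  // lower voice, around A2
      chant("the word that speaks itself", 165);  // upper voice, roughly a fifth above
    }

    // speak one phrase at a given baseline pitch (in Hz) on its own thread
    void chant(final String text, final float pitchHz) {
      new Thread(new Runnable() {
        public void run() {
          Voice v = VoiceManager.getInstance().getVoice("kevin16");  // stock FreeTTS voice
          v.allocate();
          v.setPitch(pitchHz);
          v.speak(text);
          v.deallocate();
        }
      }).start();
    }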



Tuning the Invisible Synthesizer

The problem quickly became apparent when I tried to map the Y axis to pitch frequencies within a given vocal range. I knew A440 to be concert tuning, so I googled from there. However, though I've been a musician all my life, it had somehow never registered that there are multiple tunings for our Western 12-tone scale. As I tried to mathematically divide the range of frequencies I found into equal steps, they did not land on the true pitches, and the octaves didn't line up. The problem looked something like this:


I had to shift away from what turned out to be the Pythagorean scale and instead move to an Equal Tempered scale, so that the interval ratios between adjacent pitches would all be equal and the octaves would line up.
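
Concretely, the fix was to step up the scale multiplicatively instead of slicing the frequency range into equal chunks. Here's a sketch of that mapping, with illustrative names and ranges rather than my actual variables:

    // Equal temperament: each semitone multiplies the frequency by 2^(1/12),
    // so twelve steps give exactly a doubling (an octave).
    float pitchFromY(float handY) {
      int rangeSemitones = 24;                                   // two octaves of usable range
      int step = int(map(handY, height, 0, 0, rangeSemitones));  // low hand = low pitch
      return 220.0f * pow(2, step / 12.0f);                      // 220 Hz (A3) as the floor
    }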

Here's a site that describes the discrepancy pretty elegantly, and here perhaps more practically, and here's a good history of how this all developed in the first place. If you're having fun geeking out on this stuff like I was, follow the links in the first site and check out the books he recommends.

Once I had the pitches mapped to the screen, I could produce prototypes like this:



With a little more refining, I replaced the spoken filler text with excerpts of my program's own source code, and got some promising results.



Waxing "Poet"-ic

Eventually I found the Gurulib wrapper too confining. What it offered in simplicity, it lacked in versatility: there were no controls for volume, tempo, or voice selection, and all the coding happened behind the scenes somewhere in the library's files, where it couldn't easily be manipulated. FreeTTS's API documentation showed that all of this should be possible, but every attempt to reach those functions through the wrapper was a dead end.

So I took to the internet and discovered this dream come true buried in the comments of a forum thread: A homemade wrapper by the poster Kos that puts the heavy lifting in the hands of a class called "Basnik."

Basnik - as I'm sure we all know - is the Slovak word for "Poet."

It was much more transparent in its setup and made far more of FreeTTS's capabilities available. I was able to dig into the FreeTTS API documentation and give the class a major facelift to access all the variables and create all the flexibility I needed.

Click here for my revised Basnik class. (code examples included)
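
For a rough sense of its shape, here's a stripped-down illustration of what such a class exposes (simplified; the linked file above is the real thing). The setters all correspond to methods on FreeTTS's Voice class.

    // Simplified illustration of the wrapper's shape: it holds a FreeTTS Voice and
    // surfaces the controls the original wrapper kept hidden.
    import com.sun.speech.freetts.Voice;
    import com.sun.speech.freetts.VoiceManager;

    class Basnik {
      Voice voice;

      Basnik(String voiceName) {                                 // e.g. "kevin16"
        voice = VoiceManager.getInstance().getVoice(voiceName);
        voice.allocate();
      }

      void setPitch(float hz)     { voice.setPitch(hz); }        // baseline pitch in Hz
      void setRate(float wpm)     { voice.setRate(wpm); }        // tempo, in words per minute
      void setVolume(float level) { voice.setVolume(level); }    // 0.0 to 1.0
      void speak(String text)     { voice.speak(text); }         // blocks until finished
      void dispose()              { voice.deallocate(); }
    }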

That allowed me to create more sophisticated prototypes like this one:


With more tweaking to the aesthetics and user interface, I was ready to bring this to the public. Continue to my next post for the unveiling!


