H E R E + N O W interactive and time-based datascape painting and instrument
​
This project attempts to respond to the differences between modernist and post-modernist styles as described by Clement Greenberg and Michael Fried respectively. Formally, the product is an animated graphic output and digital notes that are modified in response to the viewer's movements.

One primary parameter is time, or 'now.' This is the clock and calendar, constant across space. The other parameter, 'here,' is a video camera input which provides a random value set, adjusting the experience depending on where it is set up and what is happening there.

The result is an interactive holographic projection, controlled partly by the artist, partly by the viewer, and partly by time. The moment is reflected to the viewer in a pictorial object, creating an active compositional dialogue. Both internal and external relationships define the forms and tell stories in endlessly cycling, unique generations.




interactive parametric definition



1 writer_sets data path
​

2 video_parameter
The first necessary input is a reading of 'here,' the place where the audience is when seeing this. This is accomplished by streaming data from a video camera. The video resolution is set low to keep the output as smooth as possible. The program has virtual eyes, and it is able to react to what it sees.
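
As an illustration of the logic only, here is a minimal Python sketch of that reading, using OpenCV as a stand-in for the video component inside the definition; the camera index and the grid size are assumptions, not the installed setup.

    import cv2

    def read_here(grid_x=6, grid_y=6):
        # grab a single frame from the default camera
        cap = cv2.VideoCapture(0)
        ok, frame = cap.read()
        cap.release()
        if not ok:
            return None
        # reduce it to a handful of pixels so the output stays smooth
        return cv2.resize(frame, (grid_x, grid_y))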

3 time_parameter
The second data stream is the current time and date, updated every second. This provides the mechanism with an ever-changing, non-repeating index. Additionally, it is already expressed numerically, making it immediately available for mathematical applications.

4 time_reduction
Divide the time readout into its constituent parts.
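
A minimal sketch of steps 3 and 4 in Python, reading the clock and splitting it into its parts; the actual definition does this with native components rather than a script.

    from datetime import datetime

    def time_parts():
        t = datetime.now()           # updated every second in the definition
        return {
            "second": t.second,      # 0-59
            "minute": t.minute,      # 0-59
            "hour":   t.hour,        # 0-23
            "day":    t.day,         # 1-31
            "month":  t.month,       # 1-12
            "year":   t.year,
        }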

5 time_complexification
Mathematically transform combinations of numbers drawn from the time index, creating new single output values that depend on multiple time values as input.

5a time_sums
Create sums of two incoming time values (e.g. second plus day, minute plus hour, etc.). These values draw from all the numbers from the second to the year and so repeat each century. This range of permutation makes it effectively infinite to the individual observer even before the video parameter is added. A person would have to wait one hundred years to see the same combination, and the scene would of course look different on video at that point anyway.
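
Sketched in Python, assuming the time_parts() helper from the sketch above, the sums look roughly like this:

    from itertools import combinations

    def time_sums(parts):
        # every pair of time values becomes one new number,
        # e.g. second plus day, minute plus hour
        return {a + "+" + b: parts[a] + parts[b]
                for a, b in combinations(sorted(parts), 2)}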

5b time_remapped domains
The numbers are all remapped into 0-1 domains at one point so they can be translated in downstream applications. Some time values can potentially equal zero (hour, minute, second). Dispatch patterns are set up that make 0 = 1, because a value of 0 would stop the geometry production.
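
A rough Python sketch of the remapping; the per-value ranges are assumptions, and the `or 1` stands in for the dispatch pattern that turns a 0 into a 1.

    RANGES = {"second": 59, "minute": 59, "hour": 23,
              "day": 31, "month": 12}

    def remap(parts):
        out = {}
        for name, top in RANGES.items():
            value = parts[name] or 1        # 0 would stop the geometry, so 0 = 1
            out[name] = value / float(top)  # uniform 0-1 domain
        return out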

6 video_grids
The video resolution is reset every minute in both the x and y directions. I have left only five options, the integers 4 through 8, for the video output. The video is remapped into 36 pixels, in 3 rows of 12, for the audio output, constantly allowing the full range of 3 octaves.
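
A sketch of step 6 in Python; how the minute is mapped onto the five options is an assumption, while the fixed 3 x 12 audio grid is as described.

    from datetime import datetime

    def video_grids():
        minute = datetime.now().minute
        display = 4 + minute % 5   # resolution snaps to one of the integers 4-8
        audio = (3, 12)            # 36 cells, one per note across three octaves
        return display, audio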

7 output_color mixer
The color is always in flux. The text and the linework are set to gradients. The hue of the surface is set by a material created through RGB channel mixers. Blue is the theme color, an homage to Yves Klein, implicating the infinite quality of the concept. Red and green are mixed in as well, allowing the surface hue to visit its neighboring shades of purple and jade. The colors fade away in generations, bright to dark, solid to transparent.
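
Sketched loosely in Python: a blue-dominant mix whose red and green channels drift with two remapped time values, and whose brightness fades as a generation ages. The channel weights are illustrative, not the exact mixer settings.

    def surface_color(red_t, green_t, age):
        # age runs 0.0 (fresh) to 1.0 (about to disappear)
        fade = max(0.0, 1.0 - age)     # bright to dark across a generation
        r = 0.5 * red_t * fade         # more red pushes the hue toward purple
        g = 0.5 * green_t * fade       # more green pushes it toward jade
        b = 1.0 * fade                 # blue stays the theme channel
        return (r, g, b)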

8 output_surface and line
The vector map is made into points that form lines, which are lofted into surfaces. The lines are set to different degrees, 1 and 2. Sometimes the resolution of the grid only allows degree 1 in both directions, resulting in straight-sided compositions.
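
Assuming a Python scripting component inside the modeling software, this step could be sketched as below; the degree rule simply follows from the fact that a degree-2 curve needs at least three control points.

    import rhinoscriptsyntax as rs

    def loft_rows(rows, preferred_degree=2):
        curves = []
        for pts in rows:   # each row of grid points becomes one line
            degree = preferred_degree if len(pts) > preferred_degree else 1
            curves.append(rs.AddCurve(pts, degree))
        return rs.AddLoftSrf(curves)   # loft the lines into a surface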

9 output_text
The text is all set up and spaced by hand; a subdivided line provides the frame. Any installed font can be used with the Squid plug-in; this one is ISO CT. The text is made into surface objects.

10 output_preview geometry
This is everything you see, or more exactly, it is every surface and line in its finished material form. The text is incidentally also made of surface objects. All of the visualization data is funneled into these few functions, where it is translated into the projected geometry. The output is the conceptual space inside the software, the place where ideas are molded. In other words, to make this work, the modeling software is required to be open, and the operator needs to zoom into the workspace where I made it. When all the parameters are correctly plugged in, this turns on and goes by itself. It is as though I am sitting there re-drawing it furiously, tirelessly, forever, and I can sit at any number of places at once.
When it turns off, time continues to modify the composition: an extension of my own aura across space and time. It can only be observed by looking into the world inside the tool. This is of course all separate from, but parallel to, the sound output.

11 output_sound
duration/
The duration of the notes adjusts to produce shorter or longer tones.
The value shifts every minute.
sensor/
The vector map is broken into its 36 pieces, and each is measured. The list of measurements shifts each minute.
sensitivity/
The lighting in the room and the abilities of the camera in use will be variable. This parameter needs to be adjusted by hand and is the only slider in this definition.

12 output_notes
There are three octaves set up around middle C. The notes are measured in hertz, arranged in linear order like a keyboard. They are all turned off by default until the vector map reads enough motion in a certain pixel, activating a dispatch to sound the corresponding tone.
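
A sketch of the note table and the trigger rule in Python, assuming equal temperament around middle C (about 261.63 Hz) and the 36 motion measurements from the sensor step; the actual dispatch lives in the definition itself.

    MIDDLE_C = 261.63  # Hz

    def note_frequencies(count=36):
        # three octaves of semitones centred on middle C, in keyboard order
        return [MIDDLE_C * 2 ** ((n - count // 2) / 12.0) for n in range(count)]

    def active_notes(motion_cells, sensitivity):
        # a note sounds only when its cell moves more than the hand-set threshold
        freqs = note_frequencies(len(motion_cells))
        return [f for f, m in zip(freqs, motion_cells) if m > sensitivity]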






​
face capture test screens / group 8
color tiles and ribbons






​
body capture test screens / group 8
3D circles and lantern cells

​
holographic object / tracers / clock readout
12 whole seconds (multiple frames between)

​
*animations and video for this project in progress / please check back later.
​