TACTILE
CONTROL
OF VIRTUAL
ELEMENTS


AARON SIEGEL
SPRING 2005
ART 106: HUMAN MACHINE INTERFACE
CADRE LABORATORY FOR NEW MEDIA


How can tactile control of virtual elements be leveraged for navigating virtual spaces, and how can it create a more cohesive connection between user interaction and system response?

PROJECT PLAN:

This project is an experiment in creating a system of sensory feedback to aid the user in interacting with virtual elements. By incorporating computer vision tracking and touch sensing technology, the system transparently reacts to the user's touches. The tactile response the user feels when touching the screen, in addition to the reaction of the system's visualization, gives the user the perception that the virtual element has the realistic attributes of a tangible object.

The devices that comprise the system are an LCD projector, a webcam connected to the computer vision system (written in either Processing or Python), and a special projection screen constructed of cotton and metallic silk organza, which is connected to a capacitance sensor on a BASIC Stamp 2-SX microcontroller.

The screen will be constructed with 1" diameter wooden dowels at each end to weight the screen and keep it taut while installed. The screen should ideally hang at a 30 degree angle across two strands of taut aircraft cable screwed into each side of the installation space. The LCD projector will then hang perpendicular to the screen above the space, and the webcam will sit perpendicular to the screen underneath, catching all the projected images and cast shadows. The webcam feeds its input to a computer vision program which tracks the highest point of all the charcoal gray to black spots on the screen (the user's shadow).
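
The tracking step can be sketched in Python (a hypothetical stand-in for the Processing version; the frame is modeled as a 2-D list of grayscale values, and the darkness threshold is an assumed parameter, not a measured one):

```python
def track_shadow(frame, threshold=60):
    """Return (x, y) of the topmost dark pixel in a grayscale frame.

    frame: 2-D list of rows (row 0 is the top of the screen), each
    pixel an intensity 0-255. Pixels at or below `threshold` count as
    shadow (charcoal gray to black). Returns None if no shadow is
    present.
    """
    for y, row in enumerate(frame):          # scan top-down
        for x, value in enumerate(row):
            if value <= threshold:           # first dark pixel is the topmost
                return (x, y)
    return None

# A tiny synthetic frame: all white except one dark spot at (2, 1).
frame = [[255] * 4 for _ in range(3)]
frame[1][2] = 10
print(track_shadow(frame))  # -> (2, 1)
```

A real implementation would read frames from the webcam and likely average or blur them first to reject projector noise, but the topmost-dark-pixel scan is the core of the index-point idea.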

In a traditional shadow tracking application, one would wave their arms in front of the projector to modify the state of the visualization. In this case, the screen itself acts as a switch, turning the visualization's reaction to the shadows on and off. The screen carries a low current of electricity that completes a circuit when grounded by the touch of a fingertip. This doubling-up of cause and effect gives the user a distinct sense of tactile and tangible control, rather than virtual or remote control.
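
One way to model this doubled-up interaction is a small gate: the tracked shadow point only drives the visualization while the capacitance sensor reports a touch. A minimal sketch (the class and its interface are illustrative assumptions, not the actual firmware protocol between the Stamp and the host):

```python
class TouchGate:
    """Pass shadow positions through only while the screen is touched."""

    def __init__(self):
        self.touching = False

    def update(self, touch_state, shadow_point):
        """touch_state: True when the capacitance circuit is grounded by
        a fingertip; shadow_point: (x, y) from the vision tracker, or
        None. Returns the point only while touched, otherwise None."""
        self.touching = touch_state
        return shadow_point if self.touching and shadow_point else None

gate = TouchGate()
print(gate.update(False, (120, 40)))  # -> None (shadow alone does nothing)
print(gate.update(True, (120, 40)))   # -> (120, 40) (touch closes the circuit)
```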

The system can be installed in any manner that orients the user to use the topmost point of the cast shadow as their index point (similar to cursors, where people expect to click with the tip of the arrow rather than its base). Downward projection is always a space-saving possibility, as long as the user approaches the screen from only one side. Since the electrical connection is made on rolled-up fabric, the screen can be draped further to create a smaller surface, and the images on the projector and webcam can be scaled accordingly. Instead of hanging it from aircraft cables, one could also lay it across a glass table and connect the fabric to the capacitance sensor with an alligator clip. No matter what, this project requires a prepared installation.

USER INTERFACE:

The most interesting interaction events I foresee this project containing are simple touch (analogous to a cursor double-click), grab and drag (clicking and moving an object around the screen), and translational physics (releasing your connection with the object and letting your kinetic energy virtually pass on to it). I aim to create an interface that utilizes all of these interaction events to their fullest ability.
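
The translational-physics event amounts to sampling the drag velocity at the moment of release and letting the object coast with damping. A hypothetical sketch of that behavior (the class name, damping constant, and update scheme are assumptions for illustration):

```python
class DraggableObject:
    """Virtual object that inherits the user's velocity on release."""

    def __init__(self, x, y):
        self.x, self.y = x, y
        self.vx = self.vy = 0.0

    def drag_to(self, x, y, dt):
        """Follow the fingertip; record the velocity for later release."""
        self.vx, self.vy = (x - self.x) / dt, (y - self.y) / dt
        self.x, self.y = x, y

    def release_step(self, dt, damping=0.9):
        """After release, coast on the inherited velocity, slowing down."""
        self.x += self.vx * dt
        self.y += self.vy * dt
        self.vx *= damping
        self.vy *= damping

obj = DraggableObject(0, 0)
obj.drag_to(10, 0, dt=0.1)   # dragged right at 100 px/s
obj.release_step(dt=0.1)     # the object keeps moving after release
print(round(obj.x, 1))       # -> 20.0
```

The per-step damping factor is what makes the object eventually come to rest instead of sliding forever.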

I'm also very interested in the spatial relationship of interactive objects in virtual space, as well as the benefits of recreating immediate tangible object simulation without the limitation of a fixed number of tangible objects (and their obvious limitation of abiding by the laws of physics).

It's with these goals in mind that I want to create a file system navigation structure using abstracted representations of directories and files as colored circles, triangles, and squares. The user will be able to collapse and expand directories, as well as create, move, and destroy directories and files. The user will interact with each virtual object by tapping it twice to expand or collapse it, and by pressing and dragging it to nest it inside other directories. A drag that results in no action taken will elastically spring the virtual object back to its original place.
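
The elastic spring-back on a failed drag can be modeled as a damped spring pulling the object toward its home position. A one-dimensional sketch with assumed constants (stiffness and damping values here are illustrative, not tuned for the installation):

```python
def spring_back(pos, home, vel, dt, k=20.0, damping=4.0):
    """One step of a damped spring pulling `pos` toward `home`.

    Returns (new_pos, new_vel). k is the spring stiffness; damping
    bleeds off velocity so the object settles at its home position
    rather than oscillating forever.
    """
    accel = k * (home - pos) - damping * vel
    vel += accel * dt          # semi-implicit Euler integration
    pos += vel * dt
    return pos, vel

# An object dropped 100 px from its slot springs back over time.
pos, vel = 100.0, 0.0
for _ in range(200):           # 200 steps of 20 ms = 4 seconds
    pos, vel = spring_back(pos, home=0.0, vel=vel, dt=0.02)
print(abs(pos) < 1.0)          # -> True (settled back at its original place)
```

The same step would run on both x and y each frame, giving the "elastic" feel when a drop target rejects the object.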

In an extended project, I'd like to port the computer vision aspect to Python and do all the visualization in OpenGL using slut. This would allow for rotation of 3D environments and the representation of object structures as easily referenceable sprites, as well as improving the overall performance and capabilities.

PROGRESS:

Shadow Tracking Computer Vision [FUNCTIONAL]
    processing code v0.3
    processing code v0.4 (p87)
    processing code v0.5 (p87)
    python code v0.2
User Interface [95%]
    simple serial input/output
    simple serial input
    simple 3d cube control
    slightly complex 3d cube control
    almost complex 3d cube control
    so-close-to-complex 3d cube control
    cube tracking 0.1
    cube tracking 0.2
    cube tracking 0.3
    cube tracking 0.4
BASIC Stamp 2-SX [FUNCTIONAL]
    sensor debug LED code
Capacitance Sensor [95%]
    QT160 Integrated Circuit
    Component Grab Bags
Projection Scroll [FUNCTIONAL]
    Metallic Silk Organza
    White Projection Surface
    Wooden Dowels

SENSOR CONSTRUCTION:

[construction photos]

SCREEN CONSTRUCTION:

[construction photos]

SOFTWARE CONSTRUCTION: