Michael Fox presents a paper on gestures with Kinect and Arduino, with Sho Ikuta and Allyn Polancic, at the ACADIA conference San Francisco: Synthetic Digital Ecologies 2012.
Using a hacked Kinect to control architectural space with gestures.
Project by Michael Fox, Rodrigo Gonzalez, Sho Ikuta, and Allyn Polancic.
Presented at the ACADIA 2012 conference.
The intent of this project is to create a catalogue of gestures for remotely controlling dynamic architectural space. This research takes an essential first step towards giving the field of architecture a role in setting the agenda for such control. The project proceeded in four stages: 1) research into gestural control, 2) creation of an initial catalogue of spatial architectural gestures, 3) real-world testing and evaluation, and 4) refinement of the spatial architectural gestures. In creating a vocabulary for controlling dynamic architectural environments, the research builds on the current state of the art of gestural control, which exists in the integrated touch- and gesture-based languages of mobile and media interfaces. The next step was to outline architecturally specific dynamic situational activities as a means of explicitly understanding the potential to build gestural control into the systems that make up architectural space. A proposed vocabulary was then built on the cross-referenced validity of existing intuitive gestural languages as applied to architectural situations. This proposed gestural vocabulary was tested against user-generated gestures in the following areas: frequency of “invention,” learnability, memorability, performability, efficiency, and opportunity for error. Testing was carried out in a test-cell environment with numerous kinetic architectural elements and a Microsoft Kinect sensor tracking the gestures of the test subjects. We conclude that the manipulation of physical building components, and of physical space itself, is better suited to gestural control by its users than to control via device, speech, cognition, or other means.
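To give a flavour of how a Kinect-tracked gesture can drive a kinetic architectural element, here is a minimal sketch in Python. It is illustrative only: the joint structure, thresholds, gesture names, and command bytes are all assumptions for this example, not details from the paper. A real pipeline would read skeleton joints from the Kinect SDK and write the command bytes to an Arduino over serial.

```python
# Hypothetical sketch: classify a simple "raise"/"lower" hand gesture from
# Kinect-style joint positions and map it to a one-byte command that an
# Arduino driving a kinetic panel might read over serial.
# All names here (Joint, classify_gesture, GESTURE_COMMANDS) are invented
# for illustration and do not come from the project itself.

from dataclasses import dataclass


@dataclass
class Joint:
    """A tracked skeleton joint in metres (Kinect-style camera space)."""
    x: float
    y: float  # height
    z: float  # distance from sensor


def classify_gesture(hand: Joint, shoulder: Joint, threshold: float = 0.2) -> str:
    """Rule-based classification: hand well above the shoulder means 'raise',
    well below means 'lower', otherwise 'idle'."""
    dy = hand.y - shoulder.y
    if dy > threshold:
        return "raise"
    if dy < -threshold:
        return "lower"
    return "idle"


# Map each recognized gesture to a command byte for the microcontroller.
GESTURE_COMMANDS = {"raise": b"U", "lower": b"D", "idle": b""}

if __name__ == "__main__":
    hand = Joint(0.1, 1.8, 2.0)
    shoulder = Joint(0.0, 1.4, 2.0)
    gesture = classify_gesture(hand, shoulder)
    print(gesture, GESTURE_COMMANDS[gesture])
```

In a live setup, the command byte would be sent to the Arduino (for example with a serial library), and the threshold and joint choices would be tuned during the real-world testing stage described above.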
In the future it will be possible, if not commonplace, to embed architecture with interfaces that allow users to interact with their environments, and we believe that gestural language is the most powerful means of control because it enables real physical interactions.