Embedded Interactions is a four-week intensive studio in which students design and build prototypes that combine working systems, Wizard-of-Oz techniques, and video simulations.
With the advent of wearables and the Internet of Things, and with increasingly accurate voice, gestural, computer vision, and brain-computer interfaces, our interactions are transitioning from point-and-click, multi-touch, and typing to talking, gesturing, behaving, and even thinking. Interactions are also shifting from working with a single device to simultaneously conversing with multiple devices.
This creates new Interaction Design challenges. Instead of using obvious “interfaces,” these interactions are embedded within our contexts. And rather than commanding, we “converse” with autonomous, seemingly intelligent wearables, implants, smart objects, rooms, cars, and more. These embedded interactions put the emphasis on the context created rather than on the device, and may eschew the conventions of HCI, user experience, and interaction design. This new ecology of things opens up possibilities for interactions between systems in which the person is only one part, moving beyond user-centered design to a new form of design centered on the ecology or milieu created by the active participation of people and devices.
This kind of interaction (and the related Conversational User Interface (http://www.wired.com/2013/03/conversational-user-interface/) or CUI) presents a new set of affordances and qualities for interactions, as well as challenges for designers as they adapt to this new approach. And as everyday computational systems move from computers and phones to wearables, smart objects and environments, what are the implications for design? What are the new design patterns? How does the character of interaction change when there is no screen to look at or touch? What new uses will embedded interaction create?
Faculty: Phil van Allen