As light as your footsteps: altering walking sounds to change perceived body weight, emotional state and gait.

UNIVERSITY COLLEGE LONDON, 2015

An ever more sedentary lifestyle is a serious problem in our society. Enhancing people's exercise adherence through technology remains an important research challenge. We propose a novel approach for a system supporting walking that draws from basic findings in neuroscience research. Our shoe-based prototype senses a person's footsteps and alters in real time the frequency spectra of the sound they produce while walking. The resulting sounds are consistent with those produced by either a lighter or a heavier body. Our user study showed that modified walking sounds change one's own perceived body weight and lead to a related gait pattern. In particular, augmenting the high frequencies of the sound leads to the perception of having a thinner body, enhances the motivation for physical activity, and induces a more dynamic swing and a shorter heel strike. We discuss the opportunities and questions our findings open.
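
To make the signal-processing idea concrete, here is a minimal sketch (not the authors' implementation) of boosting a footstep recording's high-frequency band, as in the "lighter body" condition; the function name, cutoff, and gain are hypothetical placeholders.

```python
# Illustrative sketch: emphasize high frequencies of a footstep sound.
import numpy as np
from scipy.signal import butter, lfilter

def lighten_footstep(samples: np.ndarray, fs: int = 44100,
                     cutoff_hz: float = 1000.0, gain: float = 2.0) -> np.ndarray:
    """Boost frequencies above `cutoff_hz`; parameter values are hypothetical."""
    b, a = butter(2, cutoff_hz, btype="high", fs=fs)  # 2nd-order high-pass
    highs = lfilter(b, a, samples)                    # isolate the high band
    out = samples + (gain - 1.0) * highs              # add amplified highs back
    return out / max(1e-9, float(np.max(np.abs(out))))  # normalize to [-1, 1]
```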

Tajadura-Jiménez, A., Basia, M., Deroy, O., Fairhurst, M., Marquardt, N. and Bianchi-Berthouze, N. (2015)
As light as your footsteps: altering walking sounds to change perceived body weight, emotional state and gait.
In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI '15). ACM, New York, NY, USA, 2943-2952.

 

WatchConnect:
A Toolkit for Prototyping Smartwatch-Centric Cross-Device Applications.

UNIVERSITY COLLEGE LONDON, 2015

People increasingly use smartwatches in tandem with other devices such as their mobile phones. This allows novel cross-device applications using the watch as both input device and miniature output display. However, despite the increasing availability of smartwatches, prototyping cross-device watch applications remains a challenging task. Developers are often limited in the applications they can explore because most available toolkits provide only limited access to different types of input sensors for cross-device interactions. To address this problem, we introduce WatchConnect, a toolkit for rapidly prototyping cross-device applications and interaction techniques with smartwatches. The toolkit provides developers with (1) an extendable hardware platform that emulates a smartwatch, (2) a user interface framework that integrates with an existing UI builder, and (3) a rich set of input and output events based on a range of built-in sensor mappings. We evaluate the toolkit's capabilities with seven interaction techniques and applications.
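
The following is a hypothetical Python sketch of the kind of event-driven programming model such a toolkit exposes; the `WatchEmulator`, `on`, and `emit` names are illustrative, not WatchConnect's actual API (which is a .NET framework).

```python
# Illustrative event bus: sensor events from the emulated watch hardware
# are dispatched to application handlers.
from collections import defaultdict
from typing import Callable

class WatchEmulator:
    def __init__(self):
        self._handlers = defaultdict(list)

    def on(self, event: str, handler: Callable[..., None]) -> None:
        """Register a handler, e.g. for 'bezel_rotate' or 'swipe_left'."""
        self._handlers[event].append(handler)

    def emit(self, event: str, **data) -> None:
        """Called by the hardware platform when a built-in sensor fires."""
        for handler in self._handlers[event]:
            handler(**data)

watch = WatchEmulator()
watch.on("swipe_left", lambda **d: print("push content to phone", d))
watch.emit("swipe_left", velocity=1.8)  # simulated sensor input
```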

Houben, S., Marquardt, N. (2015)
WatchConnect: A Toolkit for Prototyping Smartwatch-Centric Cross-Device Applications.
In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI '15). ACM, New York, NY, USA, 1247-1256.

 

Proxemic Flow:
Dynamic Peripheral Floor Visualizations for Revealing and Mediating Large Surface Interactions.

UNIVERSITY COLLEGE LONDON, 2015

Interactive large surfaces have recently become commonplace for interactions in public settings. The fact that people can engage with them, and the spectrum of possible interactions, however, often remain invisible and can be confusing or ambiguous to passersby. In this paper, we explore the design of dynamic peripheral floor visualizations for revealing and mediating large surface interactions. Extending earlier work on interactive illuminated floors, we introduce a novel approach for leveraging floor displays in a secondary, assisting role to aid users in interacting with the primary display. We illustrate a series of visualizations with the illuminated floor of the Proxemic Flow system. In particular, we contribute a design space for peripheral floor visualizations that (a) provides peripheral information about tracking fidelity with personal halos, (b) makes interaction zones and borders explicit for easy opt-in and opt-out, and (c) gives cues inviting spatial movement or suggesting possible next interaction steps through wave, trail, and footstep animations. We demonstrate our proposed techniques in the context of a large surface application and discuss important design considerations for assistive floor visualizations.
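
As a rough illustration of the "personal halo" idea, the sketch below maps tracking fidelity to a halo colour on an LED floor grid; the `grid.set_tile` API and the thresholds are hypothetical, not Proxemic Flow's implementation.

```python
# Illustrative sketch: a halo around the tracked person whose colour
# encodes how well the system is currently tracking them.
def fidelity_color(confidence: float) -> tuple:
    """Green = tracked well, yellow = marginal, red = about to be lost."""
    if confidence > 0.8:
        return (0, 255, 0)
    if confidence > 0.4:
        return (255, 200, 0)
    return (255, 0, 0)

def draw_halo(grid, x: int, y: int, confidence: float, radius: int = 2):
    """Light floor tiles in a ring around the tracked position."""
    color = fidelity_color(confidence)
    for dx in range(-radius, radius + 1):
        for dy in range(-radius, radius + 1):
            # Keep tiles near the circle of the given radius.
            if abs(dx * dx + dy * dy - radius * radius) <= radius:
                grid.set_tile(x + dx, y + dy, color)  # hypothetical grid API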

Vermeulen, J., Luyten, K., Coninx, K., Marquardt, N., and Bird, J. (2015)
Proxemic Flow: Dynamic Peripheral Floor Visualizations for Revealing and Mediating Large Surface Interactions.
In Proceedings of INTERACT (4) 2015, pp. 264-281.

 

The Design of Slow-Motion Feedback

UNIVERSITY COLLEGE LONDON, 2014

The misalignment between the timeframe of systems and that of their users can cause problems, especially when the system relies on implicit interaction. It makes it hard for users to understand what is happening and leaves them little chance to intervene. This paper introduces the design concept of slow-motion feedback, which can help to address this issue. A definition is provided, together with an overview of existing applications of this technique.
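
A minimal sketch of the concept, under assumed names: feedback for an implicit action is stretched over several seconds so the user can notice it and still intervene before the system commits. The paper defines the design concept, not this API.

```python
# Illustrative sketch of slow-motion feedback with an intervention window.
import time

def slow_motion_commit(action, duration_s: float = 3.0,
                       cancelled=lambda: False,
                       on_progress=lambda p: None, steps: int = 30) -> bool:
    """Animate progress toward `action`; abort if the user intervenes."""
    for i in range(steps):
        if cancelled():               # the user noticed and stepped in
            on_progress(0.0)
            return False
        on_progress((i + 1) / steps)  # visible, gradual feedback
        time.sleep(duration_s / steps)
    action()                          # commit only after the window passes
    return True
```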

Vermeulen, J., Luyten, K., Coninx, K., Marquardt, N. (2014)
The Design of Slow-Motion Feedback.
In Proceedings of the 2014 ACM Conference on Designing Interactive Systems - DIS '14. ACM, New York, NY, USA, 267-270.

 

HuddleLamp: Spatially-Aware Mobile Displays for Ad-hoc Around-the-Table Collaboration

UNIVERSITY COLLEGE LONDON, 2014

We present HuddleLamp, a desk lamp with an integrated RGB-D camera that precisely tracks the movements and positions of mobile displays and hands on a table. This enables a new breed of spatially-aware multi-user and multi-device applications for around-the-table collaboration without an interactive tabletop. At any time, users can add or remove displays and reconfigure them in space in an ad-hoc manner without needing to install any software or attach markers. Additionally, hands are tracked to detect interactions above and between displays, enabling fluent cross-device interactions. We contribute a novel hybrid sensing approach that uses RGB and depth data to increase tracking quality, and a technical evaluation of its capabilities and limitations. To enable installation-free ad-hoc collaboration, we also introduce a web-based architecture and JavaScript API for future HuddleLamp applications. Finally, we demonstrate the resulting design space using five examples of cross-device interaction techniques.
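
The sketch below illustrates the hybrid-sensing idea in spirit only: detections from the RGB channel (good at finding bright screen rectangles) are corroborated and refined by depth detections (robust to lighting). This is not HuddleLamp's actual code; the data format and threshold are assumptions.

```python
# Illustrative sketch: fuse (x, y, confidence) detections from two channels.
def fuse_detections(rgb_dets, depth_dets, max_dist_px: float = 40.0):
    """Merge RGB and depth detections into one list of device positions."""
    fused = []
    for rx, ry, rc in rgb_dets:
        # Find the nearest depth detection that corroborates this screen.
        match = min(depth_dets,
                    key=lambda d: (d[0] - rx) ** 2 + (d[1] - ry) ** 2,
                    default=None)
        if match and (match[0] - rx) ** 2 + (match[1] - ry) ** 2 < max_dist_px ** 2:
            # Weighted average, favouring the more confident channel.
            w = rc / (rc + match[2])
            fused.append((w * rx + (1 - w) * match[0],
                          w * ry + (1 - w) * match[1]))
        else:
            fused.append((rx, ry))  # RGB-only fallback
    return fused
```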

Rädle, R., Jetter, H.-C., Marquardt, N., Reiterer, H., Rogers, Y. (2014)
HuddleLamp: Spatially-Aware Mobile Displays for Ad-hoc Around-the-Table Collaboration
In Proceedings of the Ninth ACM International Conference on Interactive Tabletops and Surfaces (ITS '14). ACM, New York, NY, USA, 45-54.

 

GroupTogether:
Cross-Device Interaction via Micro-mobility and F-formations

MICROSOFT RESEARCH REDMOND, 2012

GroupTogether is a system that explores cross-device interaction using two sociological constructs. First, F-formations concern the distance and relative body orientation among multiple users, which indicate when and how people position themselves as a group. Second, micro-mobility describes how people orient and tilt devices towards one another to promote fine-grained sharing during co-present collaboration. We sense these constructs using: (a) a pair of overhead Kinect depth cameras to sense small groups of people, (b) low-power 8GHz band radio modules to establish the identity, presence, and coarse-grained relative locations of devices, and (c) accelerometers to detect tilting of slate devices. The resulting system supports fluid, minimally disruptive techniques for co-located collaboration by leveraging the proxemics of people as well as the proxemics of devices.
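
As a geometric illustration of the F-formation construct, the sketch below tests whether two tracked people face a shared space within a given distance; the thresholds are hypothetical, not GroupTogether's actual values.

```python
# Illustrative sketch: do two people form an F-formation?
import math

def in_f_formation(p1, p2, facing1_rad, facing2_rad,
                   max_dist_m: float = 2.0,
                   max_angle_rad: float = math.pi / 4) -> bool:
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    if math.hypot(dx, dy) > max_dist_m:
        return False
    # Each person should roughly face the other (a shared o-space).
    to_other = math.atan2(dy, dx)
    err1 = abs((facing1_rad - to_other + math.pi) % (2 * math.pi) - math.pi)
    err2 = abs((facing2_rad - (to_other + math.pi) + math.pi)
               % (2 * math.pi) - math.pi)
    return err1 < max_angle_rad and err2 < max_angle_rad
```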

Marquardt, N., Hinckley, K. and Greenberg, S. (2012)
Cross-Device Interaction via Micro-mobility and F-formations.
In ACM Symposium on User Interface Software and Technology - UIST 2012. (Cambridge, MA, USA), ACM Press, pages 13-22, October 7-10.

 

Gradual Engagement between Digital Devices as a Function of Proximity:
From Awareness to Progressive Reveal to Information Transfer

UNIVERSITY OF CALGARY, GROUPLAB
MICROSOFT RESEARCH REDMOND, 2011-2012

The increasing number of digital devices in our environment enriches how we interact with digital content. Yet, cross-device information transfer - which should be a common operation - is surprisingly difficult. One has to know which devices can communicate, what information they contain, and how information can be exchanged. To mitigate this problem, we formulate the gradual engagement design pattern that generalizes prior work in proxemic interactions and informs future system designs. The pattern describes how we can design device interfaces to gradually engage the user by disclosing connectivity and information exchange capabilities as a function of inter-device proximity. These capabilities flow across three stages: (1) awareness of device presence/connectivity, (2) reveal of exchangeable content, and (3) interaction methods for transferring content between devices tuned to particular distances and device capabilities. We illustrate how we can apply this pattern to design, and show how existing and novel interaction techniques for cross-device transfers can be integrated to flow across its various stages. We explore how techniques differ between personal and semi-public devices, and how the pattern supports interaction of multiple users.
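
A minimal sketch of the pattern's three stages as a function of inter-device distance; the thresholds are placeholders, not values from the paper.

```python
# Illustrative sketch: map proximity to the gradual-engagement stage.
def engagement_stage(distance_m: float) -> str:
    if distance_m > 3.0:
        return "awareness"  # stage 1: show device presence/connectivity only
    if distance_m > 1.0:
        return "reveal"     # stage 2: preview exchangeable content
    return "transfer"       # stage 3: enable techniques to move content
```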

Marquardt, N., Ballendat, T., Boring, S., Greenberg, S. and Hinckley, K. (2012)
Gradual Engagement between Digital Devices as a Function of Proximity: From Awareness to Progressive Reveal to Information Transfer.
In Proceedings of Interactive Tabletops and Surfaces - ACM ITS 2012 (Boston, USA), ACM Press, 10 pages, November 11-14.

 

The Proximity Toolkit:
Prototyping Proxemic Interactions in Ubiquitous Computing Ecologies

UNIVERSITY OF CALGARY, GROUPLAB, 2010-2011

People naturally understand and use proxemic relationships (e.g., their distance and orientation towards others) in everyday situations. However, few ubiquitous computing (ubicomp) systems interpret such proxemic relationships to mediate interaction (proxemic interaction). A technical problem is that developers find it challenging and tedious to access proxemic information from sensors. Our Proximity Toolkit solves this problem. It simplifies the exploration of interaction techniques by supplying fine-grained proxemic information between people, portable devices, large interactive surfaces, and other non-digital objects in a room-sized environment. The toolkit offers three key features. (1) It facilitates rapid prototyping of proxemic-aware systems by supplying developers with the orientation, distance, motion, identity, and location information between entities. (2) It includes various tools, such as a visual monitoring tool, that allow developers to visually observe, record and explore proxemic relationships in 3D space. (3) Its flexible architecture separates sensing hardware from the proxemic data model derived from these sensors, which means that a variety of sensing technologies can be substituted or combined to derive proxemic information. We illustrate the versatility of the toolkit with proxemic-aware systems built by students.
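
The following is a hypothetical Python analogue of the kind of relation-pair events the toolkit supplies (the real toolkit is a .NET library; these class and method names are illustrative stand-ins).

```python
# Illustrative sketch: subscribe to proxemic updates between two entities.
class RelationPair:
    """Hypothetical stand-in for a toolkit relation-pair abstraction."""
    def __init__(self, a: str, b: str, space):
        self.handlers = []
        space.subscribe(a, b, self._on_update)  # `space` tracks both entities

    def on_move(self, handler):
        """handler(distance, orientation, identity_a, identity_b)."""
        self.handlers.append(handler)

    def _on_update(self, distance, orientation, identity_a, identity_b):
        for h in self.handlers:
            h(distance, orientation, identity_a, identity_b)

# Usage idea: react when a person approaches the wall display.
# pair = RelationPair("person", "smartboard", space)
# pair.on_move(lambda d, o, a, b: display.show_preview() if d < 1.5 else None)
```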

Marquardt, N., Diaz-Marino, R., Boring, S. and Greenberg, S. (2011)
The Proximity Toolkit: Prototyping Proxemic Interactions in Ubiquitous Computing Ecologies.
In ACM Symposium on User Interface Software and Technology - UIST 2011. (Santa Barbara, CA, USA), ACM Press, 11 pages, October 16-18. Includes video figure, duration 4:19.

 

The Continuous Interaction Space: Interaction Techniques Unifying Touch and Gesture On and Above a Digital Surface

UNIVERSITY OF CALGARY, GROUPLAB, 2010-2011

The rising popularity of digital table surfaces has spawned considerable interest in new interaction techniques. Most interactions fall into one of two modalities: 1) direct touch and multi-touch (by hand and by tangibles) directly on the surface, and 2) hand gestures above the surface. The limitation is that these two modalities ignore the rich interaction space between them. To move beyond this limitation, we first contribute a unification of these discrete interaction modalities called the continuous interaction space. The idea is that many interaction techniques can be developed that go beyond these two modalities, where they can leverage the space between them. That is, we believe that the underlying system should treat the space on and above the surface as a continuum, where a person can use touch, gestures, and tangibles anywhere in the space and naturally move between them. Our second contribution illustrates this, where we introduce a variety of interaction categories that exploit the space between these modalities. For example, with our Extended Continuous Gestures category, a person can start an interaction with a direct touch and drag, then naturally lift off the surface and continue their drag with a hand gesture over the surface. For each interaction category, we implement an example (or use prior work) that illustrates how that technique can be applied. In summary, our primary contribution is to broaden the design space of interaction techniques for digital surfaces, where we populate the continuous interaction space both with concepts and examples that emerge from considering this space as a continuum.
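
The sketch below illustrates the continuum idea under assumed height bands: input is classified by the hand's height above the surface, and a drag started with direct touch survives lift-off instead of being cancelled. All thresholds and names are hypothetical.

```python
# Illustrative sketch: treat on- and above-surface input as one continuum.
def classify_input(height_mm: float) -> str:
    if height_mm <= 0.0:
        return "touch"         # direct contact with the surface
    if height_mm < 150.0:
        return "near-gesture"  # e.g. continuing a drag just after lift-off
    return "far-gesture"       # coarse hand motions high above the surface

def update_drag(state: dict, height_mm: float, x: float, y: float) -> dict:
    """Extended continuous gesture: a drag keeps tracking the hand
    after lift-off rather than ending at the moment of lift."""
    mode = classify_input(height_mm)
    if state.get("dragging") and mode in ("touch", "near-gesture"):
        state["pos"] = (x, y)       # the drag continues above the surface
    elif state.get("dragging"):
        state["dragging"] = False   # too far above: end the gesture
    return state
```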

Marquardt, N., Jota, R., Greenberg, S. and Jorge, J. (2011)
The Continuous Interaction Space: Interaction Techniques Unifying Touch and Gesture On and Above a Digital Surface.
In Proceedings of the 13th IFIP TC13 Conference on Human Computer Interaction - INTERACT 2011. (Lisbon, Portugal), 16 pages, September 5-9.

 

Designing User-, Hand-, and Handpart-Aware Tabletop Interactions with the TOUCHID Toolkit

UNIVERSITY OF CALGARY, GROUPLAB, 2011

Recent work in multi-touch tabletop interaction introduced many novel techniques that let people manipulate digital content through touch. Yet most only detect touch blobs. This ignores richer interactions that would be possible if we could identify (1) which hand, (2) which part of the hand, (3) which side of the hand, and (4) which person is actually touching the surface. Fiduciary-tagged gloves were previously introduced as a simple but reliable technique for providing this information. The problem is that its low-level programming model hinders the way developers could rapidly explore new kinds of user- and handpart-aware interactions. We contribute the TOUCHID toolkit to solve this problem. It allows rapid prototyping of expressive multi-touch interactions that exploit the aforementioned characteristics of touch input. TOUCHID provides an easy-to-use event-driven API. It also provides higher-level tools that facilitate development: a glove configurator to rapidly associate particular glove parts to handparts; and a posture configurator and gesture configurator for registering new hand postures and gestures for the toolkit to recognize. We illustrate TOUCHID's expressiveness by showing how we developed a suite of techniques (which we consider a secondary contribution) that exploits knowledge of which handpart is touching the surface.
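
A hypothetical sketch of a handpart-aware touch event in the spirit of the toolkit's event-driven API (the real toolkit is .NET; the mapping table and names here are illustrative).

```python
# Illustrative sketch: turn an anonymous touch blob into a rich event.
GLOVE_MAP = {12: ("alice", "right", "index_tip"),  # fiducial id -> handpart
             17: ("alice", "right", "palm"),
             23: ("bob",   "left",  "knuckle")}

def on_touch(fiducial_id: int, x: float, y: float, dispatch):
    user, hand, part = GLOVE_MAP.get(fiducial_id, ("unknown",) * 3)
    # Dispatch user, hand, and handpart instead of a bare touch point.
    dispatch({"user": user, "hand": hand, "part": part, "pos": (x, y)})

# e.g. an application binds palm touches to "erase", index tips to "draw".
```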

Marquardt, N., Kiemer, J., Ledo, D., Boring, S. and Greenberg, S. (2011)
Designing User-, Hand-, and Handpart-Aware Tabletop Interactions with the TOUCHID Toolkit.
In ACM International Conference on Interactive Tabletops and Surfaces - ITS 2011. (Kobe, Japan), ACM Press, 10 pages, November 13-16.

 

The Fat Thumb:
Using the Thumb's Contact Size for Single-Handed Mobile Interaction.

UNIVERSITY OF CALGARY, GROUPLAB, 2011

Modern mobile devices allow a rich set of multi-finger interactions that combine modes into a single fluid act, for example, one finger for panning blending into a two-finger pinch gesture for zooming. Such gestures require the use of both hands: one holding the device while the other is interacting. While on the go, however, only one hand may be available to both hold the device and interact with it. This mostly limits interaction to a single touch (i.e., the thumb), forcing users to switch between input modes explicitly. In this paper, we contribute the Fat Thumb interaction technique, which uses the thumb's contact size as a form of simulated pressure. This adds a degree of freedom, which can be used, for example, to integrate panning and zooming into a single interaction. Contact size determines the mode (i.e., panning with a small size, zooming with a large one), while thumb movement performs the selected mode. We discuss nuances of the Fat Thumb based on the thumb's limited operational range and motor skills when that hand holds the device. We compared Fat Thumb to three alternative techniques, where people had to pan and zoom to a predefined region on a map. Participants performed fastest, with the fewest strokes, using Fat Thumb.
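
A minimal sketch of the core mapping: contact size selects the mode while thumb movement performs it. The threshold values and the hysteresis band are placeholders; the paper determines suitable values empirically.

```python
# Illustrative sketch: mode switching driven by the thumb's contact size.
def fat_thumb_mode(contact_size_mm2: float, current: str,
                   enter_zoom: float = 110.0, exit_zoom: float = 90.0) -> str:
    """Hysteresis keeps the mode stable while the contact size jitters."""
    if current == "pan" and contact_size_mm2 > enter_zoom:
        return "zoom"   # pressing the thumb flat enlarges the contact
    if current == "zoom" and contact_size_mm2 < exit_zoom:
        return "pan"    # back to the fingertip: small contact again
    return current
```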

Boring, S., Ledo, D., Chen, X., Marquardt, N., Tang, A., Greenberg, S. (2012)
The Fat Thumb: Using the Thumb's Contact Size for Single-Handed Mobile Interaction.
In Proceedings of the ACM Conference on Mobile HCI - MobileHCI 2012. ACM Press, pages 39-48, September 21-24, 2012.

 

Proxemic Interaction:
Designing for a Proximity and Orientation-Aware Environment

UNIVERSITY OF CALGARY, GROUPLAB, 2009-2010

In the everyday world, much of what we do is dictated by how we interpret spatial relationships, or proxemics. What is surprising is how little proxemics are used to mediate people’s interactions with surrounding digital devices. We imagine proxemic interaction as devices with fine-grained knowledge of nearby people and other devices – their position, identity, movement, and orientation – and how such knowledge can be exploited to design interaction techniques. In particular, we show how proxemics can: regulate implicit and explicit interaction; trigger such interactions by continuous movement or by movement of people and devices in and out of discrete proxemic regions; mediate simultaneous interaction of multiple people; and interpret and exploit people’s directed attention to other people and objects. We illustrate these concepts through an interactive media player running on a vertical surface that reacts to the approach, identity, movement and orientation of people and their personal devices.

Ballendat, T., Marquardt, N. and Greenberg, S. (2010)
Proxemic Interaction: Designing for a Proximity and Orientation-Aware Environment.
In Proceedings of the ACM Conference on Interactive Tabletops and Surfaces - ACM ITS 2010. (Saarbruecken, Germany), ACM Press, 10 pages, November 7-10.

 

Applying Proxemics to Mediate People’s Interaction with Devices in Ubiquitous Computing Ecologies

UNIVERSITY OF CALGARY, GROUPLAB, 2009-2010

Through vastly increasing availability of digital devices in people’s everyday life, ubiquitous computing (ubicomp) ecologies are emerging. An important challenge here is the design of adequate techniques that facilitate people’s interaction with these ubicomp devices. In my research, I explore how the knowledge of people’s and devices’ spatial relationships – called proxemics – can be applied to interaction design. I introduce concepts of proxemic interactions that consider fine-grained information of proxemics to mediate people’s interactions with digital devices, such as large digital surfaces or portable personal devices. In particular, my work considers four dimensions that are essential to determine basic proxemic relationships of people and devices: position, orientation, movement, and identity. I outline my previous and current work towards a framework of proxemic interaction, the design of adequate development tools, and the implementation and evaluation of applications that illustrate concepts of proxemic interactions.

Marquardt, N. and Greenberg, S. (2010)
Applying Proxemics to Mediate People’s Interaction with Devices in Ubiquitous Computing Ecologies.
In Doctoral Symposium at ACM Conference on Interactive Tabletops and Surfaces - ITS 2010. (Saarbruecken, Germany), 4 pages, November 7-10.

 

What Caused that Touch? Expressive Interaction with a Surface through Fiduciary-Tagged Gloves

UNIVERSITY OF CALGARY, GROUPLAB, 2010

The hand has incredible potential as an expressive input device. Yet most touch technologies imprecisely recognize limited hand parts (if at all), usually by inferring the hand part from the touch shapes. We introduce the fiduciary-tagged glove as a reliable, inexpensive, and very expressive way to gather input about: (a) many parts of a hand (fingertips, knuckles, palms, sides, backs of the hand), and (b) to discriminate between one person's or multiple people's hands. Examples illustrate the interaction power gained by being able to identify and exploit these various hand parts.

Marquardt, N., Kiemer, J. and Greenberg, S. (2010)
What Caused That Touch? Expressive Interaction with a Surface through Fiduciary-Tagged Gloves.
In Proceedings of the ACM Conference on Interactive Tabletops and Surfaces - ACM ITS 2010. (Saarbruecken, Germany), ACM Press, 4 pages plus video, November 7-10.

 

Rethinking RFID:
Awareness and Control For Interaction With RFID Systems

UNIVERSITY OF CALGARY, GROUPLAB,
MICROSOFT RESEARCH CAMBRIDGE, 2009-2010

People now routinely carry radio frequency identification (RFID) tags – in passports, driver’s licenses, credit cards, and other identifying cards – where nearby RFID readers can access privacy-sensitive information on these tags. The problem is that people are often unaware of security and privacy risks associated with RFID, likely because the technology remains largely invisible and uncontrollable for the individual. To mitigate this problem, we introduce a collection of novel yet simple and inexpensive tag designs. Our tags provide reader awareness, where people get visual, audible, or tactile feedback as tags come into the range of RFID readers. Our tags also provide information control, where people can allow or disallow access to the information stored on the tag by how they touch, orient, move, press, or illuminate the tag.
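
A minimal sketch of the information-control idea, assuming hypothetical sensor inputs: the tag answers a reader only while the user's physical action (tilting or pressing) grants access. The actual tag designs implement this in hardware.

```python
# Illustrative sketch: a tag stays silent unless explicitly enabled.
from typing import Optional

def respond_to_reader(tilted_up: bool, pressed: bool,
                      payload: bytes) -> Optional[bytes]:
    if tilted_up or pressed:  # an explicit user action grants access
        return payload
    return None               # otherwise the tag does not reveal its data
```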

Marquardt, N., Talor, A., Villar, N. and Greenberg, S. (2010)
Rethinking RFID: Awareness and Control For Interaction With RFID Systems.
In Proceedings of the ACM Conference on Human Factors in Computing Systems - ACM CHI 2010. ACM Press, 10 pages, April.

Marquardt, N., Talor, A., Villar, N. and Greenberg, S. (2010)
Visible and Controllable RFID Tags.
In Video Showcase, DVD Proceedings of the ACM Conference on Human Factors in Computing Systems - ACM CHI 2010. ACM Press, 6 pages, April 10-15. Video and paper, demonstrated live at CHI.

Marquardt, N. and Taylor, A. (2009)
RFID Reader Detector and Tilt-Sensitive RFID Tags.
In DIY for CHI: Methods, Communities, and Values of Reuse and Customization. (Workshop held at the ACM CHI 2009 Conference, Boston, Mass.), (Buechley, L., Paulos, E., Rosner, D., Williams, A., Ed.), April 5.

Jain, A., Marquardt, N. and Taylor, A. (2008)
Near-Future RFID.
In Proceedings of Ethnographic Praxis in Industry Conference - EPIC. American Anthropology Association, pages 332-333. Artifact submission (similar to Demonstration).

 

Revealing the Invisible:
Visualizing the Location and Event Flow of Distributed Physical Devices

UNIVERSITY OF CALGARY, GROUPLAB, 2009-2010

Distributed physical user interfaces comprise networked sensors, actuators and other devices attached to a variety of computers in different locations. Developing such systems is no easy task. It is hard to track the location and status of component devices, even harder to understand, validate, test and debug how events are transmitted between devices, and hardest yet to see if the overall system behaves correctly. Our Visual Environment Explorer supports developers of these systems by visualizing the location and status of individual and/or aggregate devices, and the event flow between them. It visualizes the current event flow between devices as they are received and transmitted, as well as the event history. Events are displayable at various levels of detail. The visualization also shows the activity of active applications that use these physical devices. The tool is highly interactive: developers can explore system behavior through spatial navigation, zooming, multiple simultaneous views, event filtering and details-on-demand, and through time-dependent semantic zooming.

Marquardt, N., Gross, T., Carpendale, S. and Greenberg, S. (2010)
Revealing the Invisible: Visualizing the Location and Event Flow of Distributed Physical Devices.
In Proceedings of the Fourth International Conference on Tangible, Embedded and Embodied Interaction - TEI 2010. (Cambridge, MA, USA), ACM Press, 8 pages, January 25-27.

 

The Haptic Tabletop Puck:
Tactile Feedback for Interactive Tabletops

UNIVERSITY OF CALGARY, GROUPLAB, 2009

In everyday life, our interactions with objects on real tables include how our fingertips feel those objects. In comparison, current digital interactive tables present a uniform touch surface that feels the same, regardless of what it presents visually. In this paper, we explore how tactile interaction can be used with digital tabletop surfaces. We present a simple and inexpensive device – the Haptic Tabletop Puck – that incorporates dynamic, interactive haptics into tabletop interaction. We created several applications that explore tactile feedback in the areas of haptic information visualization, haptic graphical interfaces, and computer-supported collaboration. In particular, we focus on how a person may interact with the friction, height, texture and malleability of digital objects.
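
As a rough sketch of the haptic mapping, the code below drives the puck's rod height and brake from whatever digital object lies beneath it; the `puck` and `surface` APIs and the scaling are hypothetical.

```python
# Illustrative sketch: map the digital object under the puck to haptics.
def update_puck(puck, surface, x: float, y: float) -> None:
    obj = surface.object_at(x, y)            # what the puck rests on
    puck.set_rod_height(obj.height * 0.01)   # e.g. terrain elevation
    puck.set_brake(obj.friction)             # felt resistance when dragging
```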

Marquardt, N., Nacenta, M., Young, J., Carpendale, S., Greenberg, S. and Sharlin, E. (2009)
The Haptic Tabletop Puck: Tactile Feedback for Interactive Tabletops.
In ITS 2009: Proceedings of the ACM International Conference on Interactive Tabletops and Surfaces, November 23-25, 2009, Banff, Alberta, Canada.

Marquardt, N., Nacenta, M., Young, J., Carpendale, S., Greenberg, S. and Sharlin, E. (2009)
The Haptic Tabletop Puck: The Video.
In DVD Proceedings of Interactive Tabletops and Surfaces - ITS 2009. (Banff, Canada), ACM Press, November 23-25.

 

The Continuous Interaction Space:
Integrating Gestures Above a Surface with Direct Touch

UNIVERSITY OF CALGARY, GROUPLAB, 2009

The advent of touch-sensitive and camera-based digital surfaces has spawned considerable development in two types of hand-based interaction techniques. In particular, people can interact: 1) directly on the surface via direct touch, or 2) above the surface via hand motions. While both types have value on their own, we believe much more potent interactions are achievable by unifying interaction techniques across this space. That is, the underlying system should treat this space as a continuum, where a person can naturally move from gestures over the surface to touches directly on it and back again. We illustrate by example, where we unify actions such as selecting, grabbing, moving, reaching, and lifting across this continuum of space.

Marquardt, N., Jota, R., Greenberg, S. and Jorge, J. (2009)
The Continuous Interaction Space: Integrating Gestures Above a Surface with Direct Touch.
Research report 2009-925-04, Department of Computer Science, University of Calgary, Calgary, Alberta, Canada, April.

 

Situated Messages for Asynchronous Human-Robot Interaction

UNIVERSITY OF CALGARY, GROUPLAB, 2008/2009

An ongoing issue in human-robot interaction (HRI) is how people and robots communicate with one another. While there is considerable work in real-time human-robot communication, fairly little has been done in the asynchronous realm. Our approach, which we call situated messages, lets humans and robots asynchronously exchange information by placing physical tokens – each representing a simple message – in meaningful physical locations of their shared environment. Using knowledge of the robot's routines, a person can place a message token at a location, typically to redirect the robot's behavior at that location. When the robot passes near that location, it detects the message and reacts accordingly. Similarly, robots can themselves place tokens at specific locations for people to read. Situated messages thus leverage embodied interaction, where token placement exploits the everyday practices and routines of both people and robots. We describe our working prototype, introduce application scenarios, explore message categories and usage patterns, and suggest future directions.
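
A minimal sketch of the robot-side behaviour under assumed names: on reaching a waypoint, the robot checks for nearby message tokens and redirects its routine accordingly. The token semantics and robot API are hypothetical.

```python
# Illustrative sketch: react to situated message tokens along a route.
def on_waypoint(robot, tokens, radius_m: float = 0.5) -> None:
    for token in tokens:
        if robot.distance_to(token.pos) < radius_m:  # token detected nearby
            if token.message == "skip_room":
                robot.skip_current_room()
            elif token.message == "clean_here":
                robot.queue_spot_clean(token.pos)
```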

Marquardt, N., Young, J., Sharlin, E. and Greenberg, S. (2009)
Situated Messages for Asynchronous Human-Robot Interaction.
In Adjunct Proc. Human Robot Interaction (Late Breaking Abstracts) - HRI 2009. (San Diego, California), 2 pages plus poster, March 11-13.

 

Prototyping Distributed Physical User Interfaces

BAUHAUS-UNIVERSITY WEIMAR,
UNIVERSITY OF CALGARY, GROUPLAB, 2007/2008

This project continued the research on prototyping distributed physical user interfaces that began with the Shared Phidgets toolkit. The developed toolkit integrates distributed sensors and actuators, and provides easy-to-use programming strategies for developers to build their envisioned interactive systems. The runtime platform of the toolkit hides the complexity of hardware integration and network synchronisation. The implemented developer library as well as the introduced programming strategies address developers with diverse development skills. Appliance case studies illustrate the applicability of the toolkit and the provided utilities to support the rapid prototyping process.

Marquardt, N. (2008)
Developer Toolkit and Utilities for Rapidly Prototyping Distributed Physical User Interfaces,
Diploma Thesis (MSc), Cooperative Media Lab, Bauhaus-University Weimar, Germany.

 

Utilities for Controlling, Testing, and Debugging of Distributed Information Appliances

BAUHAUS-UNIVERSITY WEIMAR,
UNIVERSITY OF CALGARY, GROUPLAB, 2007/2008

To support the testing, debugging, and deployment of distributed physical user interfaces, a collection of development utilities allows the monitoring and control of all connected components at runtime. For instance, visualisations can be used to explore the distributed hardware components and the built appliances in their geographical context. These utilities give insight into the internal communication processes of the distributed infrastructure. Furthermore, the simulation utilities facilitate the testing and debugging of the developed appliances.

Marquardt, N. (2008)
Developer Toolkit and Utilities for Rapidly Prototyping Distributed Physical User Interfaces,
Diploma Thesis (MSc), Cooperative Media Lab, Bauhaus-University Weimar, Germany.

 

Tangible Interfaces and Remote Awareness

MICROSOFT RESEARCH CAMBRIDGE, 2006

The objective of this research project was the prototype development of tangible user interfaces that help provide awareness between remote collaborators. The research furthermore included a system prototype that could be used as a surrogate/proxy for remotely located people (using video and audio streaming, and extending the device with various tangible controls, sensors, and actuators).

A second part of the research explored how digital media (such as videos and photos) can be made 'tangible', so that exploring this digital content becomes more intuitive for people. Various prototypes were built to illustrate the concept of 'tangible digital media'.

 

Shared Phidgets

UNIVERSITY OF CALGARY, GROUPLAB, 2005/2006

The Shared Phidgets toolkit provides a library and tools that enable the rapid prototyping of physical user interfaces with remotely located devices. The Phidgets can be accessed and controlled over the network, while all the network-specific technology is hidden from the application developer.
.NET components and interface skins facilitate development, and the included tools (such as the server, connector, and controlling and observing applications) let the programmer easily oversee and control the Phidget components.
The toolkit uses a distributed Model-View-Controller (dMVC) design pattern to represent every device, so that data associated with the model is easily queried and manipulated. The programmer can therefore also access the model directly (implemented as a shared dictionary) and read and write entries in the shared dictionary.
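
To make the dMVC idea concrete, here is a sketch with an in-memory stand-in for the shared dictionary: writing a key drives a (possibly remote) actuator, and subscribers observe model changes. The class and key paths are illustrative, not the toolkit's actual .NET API.

```python
# Illustrative sketch: device state as entries in a shared dictionary.
class SharedDictionary(dict):
    """In-memory stand-in showing the programming model, not the networking."""
    def __init__(self):
        super().__init__()
        self.subscribers = []

    def subscribe(self, prefix, callback):
        self.subscribers.append((prefix, callback))

    def __setitem__(self, key, value):
        super().__setitem__(key, value)
        for prefix, callback in self.subscribers:  # notify model observers
            if key.startswith(prefix):
                callback(key, value)

model = SharedDictionary()
model.subscribe("/servo/", lambda k, v: print("move hardware:", k, v))
model["/servo/0/position"] = 120  # writing the model drives the actuator
```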

Marquardt, N. and Greenberg, S. (2007)
Shared Phidgets: A Toolkit for Rapidly Prototyping Distributed Physical User Interfaces.
In TEI 2007: Proceedings of the 1st international conference on Tangible and embedded interaction (February 15-17, Baton Rouge, Louisiana, USA), ACM Press, pp. 13-20.

 

Collaboration Bus

BAUHAUS UNIVERSITY WEIMAR, COOPERATIVE MEDIA LAB, 2004/2005

The CollaborationBus application is a graphical editor that provides abstractions from base technology and thereby allows a broad range of users to configure ubiquitous computing environments. By composing pipelines, users can easily specify how information flows from selected sensors, through optional filters that process the sensor data, to actuators that change the system behaviour according to the users' wishes. Users can compose pipelines for both home and work environments. An integrated sharing mechanism allows them to share their own compositions, and to reuse and build upon others' compositions. Real-time visualisations help them understand how information flows through their pipelines.
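
The sketch below shows the sensor-filter-actuator pipeline idea as plain callables; the editor in the paper composes such pipelines graphically, and the names and example values here are illustrative.

```python
# Illustrative sketch: compose a pipeline from source, filters, and sink.
def pipeline(source, filters, sink):
    def run():
        value = source()
        for f in filters:          # optional processing steps
            value = f(value)
        sink(value)                # actuator reacts to the result
    return run

# e.g. dim the lights when the room gets loud:
noisy = pipeline(source=lambda: 74,                        # microphone, in dB
                 filters=[lambda db: db > 65],             # threshold filter
                 sink=lambda loud: print("dim lights" if loud else "ok"))
noisy()
```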

Gross, T. and Marquardt, N. (2007)
CollaborationBus: An Editor for the Easy Configuration of Complex Ubiquitous Computing Environments.
In Proceedings of the Fifteenth Euromicro Conference on Parallel, Distributed, and Network-Based Processing - PDP 2007 (Feb. 7-9, Naples, Italy). IEEE Computer Society Press, Los Alamitos, CA.

 

SensBase: Ubiquitous Computing Middleware

BAUHAUS UNIVERSITY WEIMAR, COOPERATIVE MEDIA LAB, 2004/2005

Sens-ation is an open and generic service-oriented platform that provides powerful, yet easy-to-use, tools to software developers who want to build context-aware, sensor-based infrastructures. The service-oriented paradigm of Sens-ation enables standardised communication within individual infrastructures, between infrastructures and their sensors, and also among distributed infrastructures. On the whole, Sens-ation facilitates development by allowing developers to concentrate on the semantics of their infrastructures and to develop innovative concepts and implementations of context-aware systems.

Gross, T., Egla, T. and Marquardt, N. (2006)
Sens-ation: A Service-Oriented Platform for Developing Sensor-Based Infrastructures.
International Journal of Internet Protocol Technology (IJIPT) 1, 3 (2006). pp. 159-167. (ISSN Online: 1743-8217, ISSN Print: 1743-8209).

 

Swarm Intelligence

BAUHAUS UNIVERSITY WEIMAR, VIRTUAL REALITY LAB, 2004

The basic principle of swarm intelligence algorithms is to divide complex calculations among multiple simple agents. These algorithms are inspired by observations of real ant swarms, which solve complex tasks through simple local behaviour and activities. With this swarm simulation implementation, various SI algorithms can be used to cluster data sets and to display the virtual swarm environment in a two- or three-dimensional visualization. The applied algorithms are based on the research of V. Ramos, J. Handl, M. Dorigo, E. Bonabeau, E. D. Lumer, B. Faieta and other researchers of the swarm intelligence community.

Furthermore, there are various extensions to the original systems, such as dynamic pheromone evaporation, object switching while carrying another object, dynamic picking and dropping probabilities, varying border conditions, and changes of the world dimensions in two and three dimensions, as well as solutions for some problems concerning orientation and boundary conditions.

For the visualization of the swarms' environment we used two-dimensional worlds as well as varying three-dimensional worlds (e.g. cube, dish and tube). We analyzed the swarms' sorting behaviour in these environments, and evaluated the main swarm features: flexibility, robustness, decentralized organization and self-organization. The self-organization of the swarm is based on the feedback that each agent can transmit by changing the swarm's environment (e.g. picking or dropping an object) or by depositing pheromones. More details about our research can be found in the Developer Documentation (only available in German).
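
To illustrate the pick/drop rule underlying Lumer-Faieta style ant clustering (one of the algorithm families cited above): an agent tends to pick up items that are dissimilar to their local neighbourhood and to drop them where similar items already cluster. The constants k1 and k2 are the usual tuning parameters; the values below are placeholders.

```python
# Illustrative sketch: Lumer-Faieta pick/drop probabilities, where
# `local_similarity` in [0, 1] measures how similar the carried/observed
# item is to items in the agent's neighbourhood.
def pick_probability(local_similarity: float, k1: float = 0.1) -> float:
    """High when the item does not fit its surroundings."""
    return (k1 / (k1 + local_similarity)) ** 2

def drop_probability(local_similarity: float, k2: float = 0.15) -> float:
    """High when the surroundings already contain similar items."""
    return (local_similarity / (k2 + local_similarity)) ** 2
```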