Research Mission

The predominant way of interacting with today's computing systems does not do justice to us as human beings. It reduces Human-Computer Interaction to looking through, and touching, a rectangular, flat and rigid window that separates us from the digital world rather than integrating it deeply into our physical world.


How can we overcome this barrier? Our vision is that user interfaces merge seamlessly with the physical world around us. They will leverage our entire surrounding space, as opposed to just a small window into the digital world, and make use of the physical objects around us. They will take advantage of our physical skills: we are good at expressive physical interactions and fine motor movements, and we have a strong sense of spatial location and arrangement. We believe that such embodied interfaces result in more effective, expressive and engaging interactions – both in personal and collaborative use.


To contribute to this vision, we develop and study novel devices, user interfaces and interactions in the following areas, which encompass the diverse characteristics of Embodied Interaction at its different scales:


  • Responsive objects: We develop user interfaces and interactions for everyday objects that respond to how people interact with them, by means of sensor skins, displays and actuators. For instance, interactive paper interfaces seamlessly couple the advantages of traditional paper with those of computing.
  • Expressive physical computing devices: We invent novel handheld computing devices that go beyond multi-touch input and offer more expressive physical interactions. For instance, we develop deformable and resizable tablets and smartphones that offer novel, rich and effective ways of interacting with digital media.
  • Interactive surfaces and spaces: We augment surfaces and spaces with sensors and displays to seamlessly couple them with computing. This allows people to leverage spatial cognition for interaction with digital media – at the level of tables, rooms or entire architectural spaces.
  • Augmented human: Going beyond augmenting inert objects and spaces, the human body itself can ultimately become the user interface. This enables an even more seamless fusion of human capabilities and computer interfaces, augmenting human senses and benefiting people with special needs.

Integrating the physical with the virtual involves many technical challenges. We focus on the following ones:


  • Printed electronics and printed sensors: To realize the vision of sensor skins that cover responsive objects and interactive surfaces, we develop novel printed sensors that are extremely thin, flexible, inexpensive and scalable in size.
  • Flexible displays: To design novel embodied interactions and radically depart from existing notions of displays, we use, simulate and develop flexible displays. This enables having displays at unconventional locations, on deformable objects and surfaces, and even on the human body.
  • Tracking of objects and people: To enable tangible, bodily and spatial interactions, we develop and use advanced methods for tracking objects and people in physical space, e.g. using depth cameras.
  • Smart projection: To augment objects, surfaces and spaces with visual output, we develop and use solutions for projecting information in real time onto moveable and deformable objects and surfaces.

By doing so, we aim to contribute to a future that bridges the physical-digital divide and enables rich interaction with digital media right where we humans live: in the physical world.



The Human-Computer Interaction group, led by Prof. Dr. Jürgen Steimle, is part of the Department of Computer Science at Saarland University and a member of the DFG Cluster of Excellence “Multimodal Computing and Interaction” at Saarland University.


Article about our group in the Max Planck Research magazine [PDF English | PDF German].