Scene2Hap

Scene2Hap: Generating Scene-Wide Haptics for VR from Scene Context with Multimodal LLMs


Abstract

Haptic feedback contributes to immersive virtual reality (VR) experiences. However, designing such feedback at scale for all objects within a VR scene remains time-consuming. We present Scene2Hap, an LLM-centered system that automatically designs object-level vibrotactile feedback for entire VR scenes based on the objects’ semantic attributes and physical context. Scene2Hap employs a multimodal large language model to estimate each object’s semantics and physical context, including its material properties and vibration behavior, from multimodal information in the VR scene. These estimated attributes are then used to generate or retrieve audio signals, subsequently converted into plausible vibrotactile signals. For more realistic spatial haptic rendering, Scene2Hap estimates vibration propagation and attenuation from vibration sources to neighboring objects, considering the estimated material properties and spatial relationships of virtual objects in the scene. Three user studies confirm that Scene2Hap successfully estimates the vibration-related semantics and physical context of VR scenes and produces realistic vibrotactile signals.
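To make the pipeline concrete, the sketch below illustrates two of the stages described above: band-limiting a generated or retrieved audio signal to a tactile frequency range, and attenuating the resulting vibrotactile signal for neighboring objects based on distance and a material-dependent coefficient. This is an illustrative Python sketch only, not the authors' implementation; the band limits, the exponential decay model, and the coefficient values are assumptions rather than details taken from the paper.

# Illustrative sketch: audio-to-vibrotactile conversion and distance/material-based
# attenuation. Tactile band (40-400 Hz), exponential decay, and coefficients are
# assumed for demonstration, not drawn from Scene2Hap itself.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def audio_to_vibrotactile(audio: np.ndarray, sr: int,
                          low_hz: float = 40.0, high_hz: float = 400.0) -> np.ndarray:
    """Band-limit an audio waveform to a tactile frequency range and normalize it."""
    sos = butter(4, [low_hz, high_hz], btype="bandpass", fs=sr, output="sos")
    vib = sosfiltfilt(sos, audio)
    peak = np.max(np.abs(vib))
    return vib / peak if peak > 0 else vib

def propagate(vib: np.ndarray, distance_m: float, alpha_per_m: float) -> np.ndarray:
    """Attenuate a vibrotactile signal with distance using exponential decay."""
    return vib * np.exp(-alpha_per_m * distance_m)

if __name__ == "__main__":
    sr = 8000
    t = np.linspace(0, 1.0, sr, endpoint=False)
    # Toy "audio" for a vibration source: a tactile-band tone plus a high-frequency component.
    audio = np.sin(2 * np.pi * 180 * t) + 0.3 * np.sin(2 * np.pi * 1200 * t)
    vib = audio_to_vibrotactile(audio, sr)
    # Hypothetical neighboring objects: (name, distance from source in m, attenuation coefficient).
    neighbors = [("wooden table", 0.5, 1.2), ("metal shelf", 1.5, 0.4)]
    for name, dist, alpha in neighbors:
        out = propagate(vib, dist, alpha)
        print(f"{name}: peak amplitude {np.max(np.abs(out)):.3f}")

In practice, the per-object attenuation could be driven by the material properties and spatial relationships that the multimodal LLM estimates from the scene, as described in the abstract; the exponential model above simply stands in for whichever propagation model is used.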

Video

Publication

Arata Jingu, Easa AliAbbasi, Sara Safaee, Paul Strohmeier, Jürgen Steimle
In ACM Conference on Human Factors in Computing Systems (CHI ’26).

[PDF]