Touch is harder to simulate than vision in virtual reality (VR). Many VR systems rely on passive haptic proxies—real-world objects that users hold or touch while interacting with virtual content—because physical boundaries and tactile cues help stabilize control and spatial perception. However, today’s passive haptic proxy solutions often operate under a one-object-one-model paradigm: each virtual object requires a dedicated physical prop and careful alignment, which increases fabrication cost, storage and transport burden, and deployment complexity. These barriers become even more acute in constrained settings such as astronaut training for cabin and extravehicular operations, where space, weight and setup time are all limited.
To address these limitations, a team led by researchers at Donghua University reports a fabric topological haptic proxy (FTHP) that aims to replace many dedicated props with a single reconfigurable textile interface. The work combines cross-disciplinary expertise in textile materials, interaction and configuration design, sensing and signal acquisition, and machine learning, with additional support from industry partners for fabric design and process implementation. The team integrates origami-inspired topological structures and sensing fibers into one piece of cloth, enabling it to switch among multiple functional states and support diverse VR interactions.
A key challenge for using fabrics as VR interfaces is controllability. Fabrics are naturally soft and highly deformable; this flexibility is attractive for portability and comfort, but it also introduces ambiguity. The same target shape can be achieved by many different folding paths, and the fabric can collapse, twist or relax during grabbing and manipulation. Such uncontrolled deformation leads to unstable tactile boundaries and ambiguous sensing signals, making reliable state recognition difficult.
FTHP tackles this problem by using “topology-guided” deformation. The researchers design folding geometry and topological constraints—such as preset fold lines, connectivity and rigid–soft partitioning—so that the fabric’s vast continuous deformation space converges into a small set of discrete, predictable and repeatable configurations. In practical terms, this yields clearer face–edge–corner references after folding, reduces state ambiguity and enables standardized interaction procedures that are easier to learn and reproduce.
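One way to picture this convergence is as a small graph of legal configurations, where the preset fold lines permit only a few transitions out of each shape. The Python sketch below is purely illustrative; the configuration names and transitions are hypothetical, not taken from the study.

```python
# Illustrative only: the discrete configuration space that preset fold
# lines are designed to enforce, modeled as a directed graph.
# Node names and transitions are hypothetical, not from the paper.
FOLD_GRAPH = {
    "flat":      ["half_fold"],
    "half_fold": ["flat", "cube", "prism"],
    "cube":      ["half_fold"],
    "prism":     ["half_fold"],
}

def reachable(src: str, dst: str) -> bool:
    """Depth-first check that dst can be reached from src by
    following only the permitted fold transitions."""
    seen, stack = set(), [src]
    while stack:
        node = stack.pop()
        if node == dst:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(FOLD_GRAPH.get(node, []))
    return False
```

Because every path must pass through the permitted transitions, the fabric cannot wander into ambiguous intermediate shapes: repeatability falls out of the structure rather than user skill.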
Building on this controllable mechanics, the team proposes a three-state interaction framework that turns shape into interaction semantics. The same piece of cloth operates in (1) a Flat state, functioning as a touchable 2D input surface suitable for panels, menus, sliders and continuous control; (2) a Folded state, where the cloth is folded into simple 3D geometric controllers such as cubes or prisms that provide more explicit boundaries for clicking, sliding, grasping and rotating; and (3) a Transforming state, where the folding process itself encodes commands, allowing mode or tool switching through a predefined folding sequence. This “one cloth, many states” framework reduces the need to swap props and recalibrate alignment when switching tasks, offering a more scalable route to multi-task VR interaction.
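To make the three-state framework concrete, the following minimal Python sketch models it as a state-action vocabulary. The state names come from the article; the specific action labels and the `is_valid` helper are illustrative assumptions rather than the authors’ implementation.

```python
from enum import Enum, auto

class ProxyState(Enum):
    """The three functional states of the fabric proxy."""
    FLAT = auto()          # 2D touch surface: panels, menus, sliders
    FOLDED = auto()        # 3D controller: click, slide, grasp, rotate
    TRANSFORMING = auto()  # the folding sequence itself encodes a command

# Hypothetical mapping from each state to the actions it supports.
INTERACTION_VOCABULARY = {
    ProxyState.FLAT: {"tap", "swipe", "slide"},
    ProxyState.FOLDED: {"click", "slide", "grasp", "rotate"},
    ProxyState.TRANSFORMING: {"fold_sequence"},
}

def is_valid(state: ProxyState, action: str) -> bool:
    """Check that a decoded action is meaningful in the current state."""
    return action in INTERACTION_VOCABULARY[state]
```

Framed this way, switching tasks means moving to another state of the same cloth rather than picking up another prop, which is exactly the scalability argument the authors make.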
To make the textile interface truly interactive, the system must sense both action and state. FTHP embeds sensing fibers into the fabric to capture touch and deformation signals without relying on external cameras or rigid sensor arrays, which can limit deployment or compromise flexibility. The multi-channel signals provide touch cues in the Flat state, capture operation differences along specific faces or edges in the Folded state, and serve as time-sequenced folding triggers for command activation in the Transforming state. A lightweight learning model then decodes these signals to reliably identify both the state and the action.
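As a hedged illustration of how “state plus action” decoding might be organized, the sketch below segments a seven-channel recording into fixed-length windows and encodes each (state, action) pair as a single joint class index for a downstream classifier. The channel count matches the prototype; the window length, stride and label encoding are assumptions for illustration.

```python
import numpy as np

NUM_CHANNELS = 7  # matches the seven-channel FTHP prototype

def sliding_windows(signal: np.ndarray, win: int = 128, stride: int = 32):
    """Cut a (channels, time) recording into fixed-length windows.

    Each window becomes one input sample for the downstream decoder.
    Window length and stride are illustrative choices.
    """
    assert signal.shape[0] == NUM_CHANNELS
    for start in range(0, signal.shape[1] - win + 1, stride):
        yield signal[:, start:start + win]

def joint_label(state_id: int, action_id: int, num_actions: int) -> int:
    """Encode (state, action) as one class index so a single classifier
    can output 'state plus action' jointly."""
    return state_id * num_actions + action_id

# Example: a 10-second recording at 100 Hz -> (7, 1000) array.
rec = np.zeros((NUM_CHANNELS, 1000))
windows = list(sliding_windows(rec))  # 28 windows of shape (7, 128)
```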
In demonstrations, the system pairs a seven-channel FTHP prototype with a convolutional neural network (CNN) decoder for multi-mode interaction recognition and VR control. The reported action recognition accuracy is about 92.4%, supporting diverse interactions across the three states. The authors highlight that the foldable, low-burden and portable nature of the fabric proxy makes it promising for constrained-space operation training and human–machine interaction scenarios, including astronaut-oriented virtual manipulation training tasks.
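The article does not detail the CNN architecture, so the following is a generic sketch of a lightweight 1D CNN decoder for seven-channel signal windows, written in PyTorch; the layer sizes, kernel widths and class count are assumptions, not the authors’ network.

```python
import torch
import torch.nn as nn

class FTHPDecoder(nn.Module):
    """Lightweight 1D CNN over 7-channel sensor windows.

    Only the seven-channel input matches the reported prototype;
    layer sizes and the class count are illustrative.
    """
    def __init__(self, num_classes: int = 12):  # assumed class count
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(7, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # pool over time -> fixed-size vector
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 7 channels, time)
        return self.classifier(self.features(x).squeeze(-1))

# Example: classify one 128-sample window of 7-channel signals.
logits = FTHPDecoder()(torch.randn(1, 7, 128))
```

In practice such a decoder would be trained on labeled windows produced by a segmentation step like the one sketched above, with each output class standing for a joint (state, action) pair.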
By integrating topology-guided mechanics with embedded sensing and machine learning, FTHP suggests a pathway from “one object, one model” toward “one cloth, many tasks”. Beyond VR training, the authors note that the same design philosophy could inspire portable and scalable immersive interfaces for broader applications where easy deployment and reusability matter.
The experimental study is published in National Science Review.