{"id":1542,"date":"2025-08-27T11:24:45","date_gmt":"2025-08-27T11:24:45","guid":{"rendered":"https:\/\/muc2025.mensch-und-computer.de\/?page_id=1542"},"modified":"2025-08-27T13:12:42","modified_gmt":"2025-08-27T13:12:42","slug":"demos","status":"publish","type":"page","link":"https:\/\/muc2025.mensch-und-computer.de\/en\/programme\/demos\/","title":{"rendered":"Demos"},"content":{"rendered":"\n<p>In cooperation with Mensch und Computer 2025, the XR-INTERACTION network will showcase cutting-edge demonstrators at the intersection of Extended Reality (XR) and Artificial Intelligence (AI) \u2013 from immersive environments and realistic avatars to interactive, AI-powered applications. As a knowledge-transfer network of more than 60 companies and research institutions, XR-INTERACTION develops scalable XR technologies for socio-cultural applications, enabling new forms of digital presence and collaboration. We warmly invite all conference participants to visit our demos and experience the future of XR and AI first-hand.<\/p>\n\n\n\n<p><strong>(be)greifbar \u2013 Ein physisches Interaktionsobjekt zur Vermittlung abstrakter Zusammenh\u00e4nge<\/strong><br>Margit Rosenburg<br>Hochschule Merseburg, Germany<\/p>\n\n\n\n<p><strong>WorkoutBuddy: AI-Based Virtual Fitness Coach for Home Workouts<\/strong><br>David Schwarz (1,2), Tobias Breitenauer (1,2), Daniel Matt (2)<br>1 University of Augsburg, Germany;<br>2 Technical University of Applied Sciences Augsburg, Germany<\/p>\n\n\n\n<p><strong>BreathClip: A Wearable Respiration Sensor for Interaction Design<\/strong><br>Iddo Wald (1), Amber Maimon (2,3), Shiyao Zhang (1), Rainer Malaka (1)<br>1 University of Bremen, Germany;<br>2 University of Haifa, Israel;<br>3 Ben Gurion University, Israel<\/p>\n\n\n\n<p><strong>HugSense: Exploring the Sensing Capabilities of Inflatables<\/strong><br>Klaus Stephan, Maximilian Eibl, Albrecht Kurze<br>TU Chemnitz, Germany<\/p>\n\n\n\n<p><strong>Worldhat: A Humanoid Social Robot Interpreter for Multilingual 
Dyadic Conversations<\/strong><br>Sandra M\u00fcller, Martin Feick, Alexander M\u00e4dche<br>Karlsruhe Institute of Technology, Germany<\/p>\n\n\n\n<p><strong>Breathe Me Up, Scotty: A Virtual Reality Free-Fall Experience to Explore Breathing Rate as a Measure and Interaction Modality in Stressful Situations<\/strong><br>Niklas Pf\u00fctzenreuter (1,3), Daniel Zielasko (2), Uwe Gruenefeld (3)<br>1 University of Duisburg-Essen, Germany;<br>2 Trier University, Germany;<br>3 GENERIO, Germany<\/p>\n\n\n\n<p><strong>CUTIE: A human-in-the-loop interface for the generation of personalised and contextualised image captions<\/strong><br>Aliki Anagnostopoulou (1,2), Sara-Jane Bittner (1), Lavanya Govindaraju (1), Hasan Md Tusfiqur Alam (1), Daniel Sonntag (1,2)<br>1 DFKI Deutsches Forschungszentrum f\u00fcr K\u00fcnstliche Intelligenz, Germany;<br>2 Carl-von-Ossietzky Universit\u00e4t Oldenburg, Applied Artificial Intelligence, Germany<\/p>\n\n\n\n<p><strong>Demonstrating BREEZE-VR: A Gamified Virtual Reality Biofeedback Breathing Training to Strengthen Mental Resilience and Reduce Acute Stress<\/strong><br>Tobias Kowatsch (1,2,3), Lola Jo Ackermann (2), Helen Galliker (2), Yanick Xavier Lukic (4)<br>1 Institute for Implementation Science in Health Care, University of Z\u00fcrich, Z\u00fcrich, Switzerland;<br>2 School of Medicine, University of St.Gallen, St.Gallen, Switzerland;<br>3 Department of Management, Technology, and Economics, Eidgen\u00f6ssische Technische Hochschule Z\u00fcrich, Z\u00fcrich, Switzerland;<br>4 Institute of Computer Science, School of Engineering, Zurich University of Applied Sciences, Winterthur, Switzerland<\/p>\n\n\n\n<p><strong>DoggoRoomie \u2013 Examining Zoomorphic Robot Interactions For Promoting Active Behavior In A Comfortable Setting<\/strong><br>Anika Bork, Christopher Kr\u00f6ger, Ivana \u017demberi, Lars Hurrelbrink, Srujana Madam Sampangiramu, Yuliya Litvin, Rachel Ringe, Bastian D\u00e4nekas, Rainer 
Malaka<br>University of Bremen, Germany<\/p>\n\n\n\n<p><strong>Integrating Human Feedback in VR \u2013 A Human-in-the-Loop Approach to Real-Time Gesture Recognition<\/strong><br>Mathias Haimerl (1,2), Andreas Riener (1)<br>1 Human-Computer Interaction Group (HCIG), Faculty of Computer Science, Technische Hochschule Ingolstadt, Germany;<br>2 Johannes Kepler Universit\u00e4t Linz, Austria<\/p>\n\n\n\n<p><strong>Sensorkit: A Toolkit for Simple Sensor Data in Research, Education and Beyond<\/strong><br>Albrecht Kurze, Christin Reuter, Andy B\u00f6rner<br>TU Chemnitz, Germany<\/p>\n\n\n\n<p><strong>Setting the Stage for Collaboration: A Multi-View Table for Touch and Tangible Map Interaction<\/strong><br>Erich Querner (1,2), Philipp Ewerling (1), Martin Christof Kindsm\u00fcller (2)<br>1 Interactive Scape GmbH, Germany;<br>2 Technische Hochschule Brandenburg, Germany<\/p>\n\n\n\n<p><strong>Whack-a-Pattern: Fighting Deceptive Patterns With a Hammer<\/strong><br>Ren\u00e9 Sch\u00e4fer, Lennart Becker, Adrian Wagner, Kevin Fiedler, Paul Miles Preuschoff, Sophie Hahn, Jan Borchers<br>RWTH Aachen University, Germany<\/p>\n\n\n\n<p><strong>Write Again(st) the Machine. 
Reanimating a GDR-Era Typewriter as a Reflective Interface for Human-AI Dialogue<\/strong><br>Karola K\u00f6pferl, Albrecht Kurze<br>TU Chemnitz, Germany<\/p>\n","protected":false},"excerpt":{"rendered":"<p>In cooperation with Mensch und Computer 2025, the XR-INTERACTION network will showcase cutting-edge demonstrators at the intersection of Extended Reality (XR) and [&hellip;]<\/p>\n","protected":false},"author":6,"featured_media":0,"parent":927,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-1542","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/muc2025.mensch-und-computer.de\/en\/wp-json\/wp\/v2\/pages\/1542","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/muc2025.mensch-und-computer.de\/en\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/muc2025.mensch-und-computer.de\/en\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/muc2025.mensch-und-computer.de\/en\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/muc2025.mensch-und-computer.de\/en\/wp-json\/wp\/v2\/comments?post=1542"}],"version-history":[{"count":4,"href":"https:\/\/muc2025.mensch-und-computer.de\/en\/wp-json\/wp\/v2\/pages\/1542\/revisions"}],"predecessor-version":[{"id":1559,"href":"https:\/\/muc2025.mensch-und-computer.de\/en\/wp-json\/wp\/v2\/pages\/1542\/revisions\/1559"}],"up":[{"embeddable":true,"href":"https:\/\/muc2025.mensch-und-computer.de\/en\/wp-json\/wp\/v2\/pages\/927"}],"wp:attachment":[{"href":"https:\/\/muc2025.mensch-und-computer.de\/en\/wp-json\/wp\/v2\/media?parent=1542"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}