Prof. Dr. Thies Pfeiffer

Address

University of Applied Sciences Emden/Leer, Constantiaplatz 4, 26723 Emden

Contact Information

Email: thies.pfeiffer@hs-emden-leer.de

Prof. Dr. rer. nat. Thies Pfeiffer

Professor of Human-Computer Interaction

University of Applied Sciences Emden/Leer, Faculty of Technology, Department of Electrical Engineering and Information Technology

Thies Pfeiffer’s research interests include human-machine interaction with a strong focus on gaze and gesture, augmented and virtual reality, as well as immersive simulations. In particular, he is interested in applying mixed-reality technologies and advanced user interfaces in the domains of learning and education, assistance and training, as well as prototyping and user research.

Publications

  • K. Skyba and T. Pfeiffer, “Towards natural language understanding for intuitive interactions in XR using large language models,” in Proceedings of the Workshop Virtuelle & Erweiterte Realität 2024, Gesellschaft für Informatik e.V., 2024. doi:10.18420/vrar2024_0021
    [BibTeX] [Abstract] [Download PDF]
    This paper presents a voice assistance system for extended reality (XR) applications based on large language models (LLMs). The aim is to create an intuitive and natural interface between users and virtual environments that goes beyond traditional, predefined voice commands. An architecture is presented that integrates LLMs as embodied agents in XR environments and utilizes their natural language understanding and contextual reasoning capabilities. The system interprets complex spatial instructions and translates them into concrete actions in the virtual environment. The performance of the system is evaluated in XR scenarios including object manipulation, navigation and complex spatial transformations. The results show promising performance in simple tasks, but also reveal challenges in processing complex spatial concepts. This work contributes to the improvement of user interaction in XR environments and opens up new possibilities for the integration of LLMs in XR environments.
    @incollection{SkybaPfeiffer2024,
    author = "Skyba, Kevin and Pfeiffer, Thies",
    title = "Towards natural language understanding for intuitive interactions in XR using large language models",
    booktitle = {Proceedings of the Workshop Virtuelle \& Erweiterte Realität 2024},
    year = 2024,
    doi = "10.18420/vrar2024_0021",
    howpublished = "GI VR / AR Workshop",
    abstract = {This paper presents a voice assistance system for extended reality (XR) applications based on large language models (LLMs). The aim is to create an intuitive and natural interface between users and virtual environments that goes beyond traditional, predefined voice commands. An architecture is presented that integrates LLMs as embodied agents in XR environments and utilizes their natural language understanding and contextual reasoning capabilities. The system interprets complex spatial instructions and translates them into concrete actions in the virtual environment. The performance of the system is evaluated in XR scenarios including object manipulation, navigation and complex spatial transformations. The results show promising performance in simple tasks, but also reveal challenges in processing complex spatial concepts. This work contributes to the improvement of user interaction in XR environments and opens up new possibilities for the integration of LLMs in XR environments.},
    publisher = "Gesellschaft für Informatik e.V.",
    url={https://dl.gi.de/items/af77e572-225b-4125-8e2d-5201e795f045},
    }
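    A minimal illustration of the general idea behind such LLM-driven XR interaction (this is an editorial sketch, not the system described in the paper): a free-form spatial instruction is wrapped in a prompt together with the scene inventory, and the model's reply is parsed into a structured action the XR runtime could execute. The helper call_llm is a hypothetical placeholder for whatever model backend is used.
    # Illustrative sketch only (not the paper's implementation): mapping a free-form
    # spatial instruction to a structured XR action via an LLM.
    import json

    ACTION_SCHEMA = (
        'Respond with JSON only, e.g. '
        '{"action": "move", "object": "red cube", "target": "next to the table"}'
    )

    def call_llm(prompt: str) -> str:
        """Hypothetical placeholder: send `prompt` to an LLM backend, return its raw reply."""
        raise NotImplementedError("wire this to your model of choice")

    def instruction_to_action(utterance: str, scene_objects: list[str]) -> dict:
        """Turn a spoken/typed instruction into a machine-readable action description."""
        prompt = (
            f"You control objects in an XR scene: {', '.join(scene_objects)}.\n"
            f"User instruction: {utterance}\n{ACTION_SCHEMA}"
        )
        reply = call_llm(prompt)
        try:
            return json.loads(reply)          # structured action for the XR runtime
        except json.JSONDecodeError:
            return {"action": "clarify"}      # fall back to asking the user again

    # Example call (raises NotImplementedError until call_llm is wired up):
    # instruction_to_action("put the red cube next to the table", ["red cube", "table"])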
  • L. Eidloth, C. Atzenbeck, and T. Pfeiffer, “Stepping into the Unknown: Immersive Spatial Hypertext,” in Proceedings of HUMAN '24, New York, NY, USA, 2024. doi:10.1145/3679058.3688632
    [BibTeX] [Abstract] [Download PDF]
    Traditional spatial hypertext systems, predominantly limited to two-dimensional (2D) interfaces, offer limited support for addressing long debated inherent problems such as orientation difficulties and navigation in large information spaces. In this context, we present opportunities from interdisciplinary fields such as immersive analytics (IA) and embodied cognition that may mitigate some of these challenges. However, while some research has explored the extension of spatial hypertext to three dimensions, there is a lack of discussion on recent advances in virtual reality technologies and related fields, and their potential impact on immersive spatial hypertext systems. This paper addresses this gap by exploring the integration of immersive technologies into spatial hypertext systems, proposing a novel approach to enhance user engagement and comprehension through three-dimensional (3D) environments and multisensory interaction.
    @inproceedings{10.1145/3679058.3688632,
    author = {Eidloth, Lisa and Atzenbeck, Claus and Pfeiffer, Thies},
    title = {Stepping into the Unknown: Immersive Spatial Hypertext},
    year = {2024},
    isbn = {979-8-4007-1120-6},
    publisher = {Association for Computing Machinery},
    address = {New York, NY, USA},
    url = {https://doi.org/10.1145/3679058.3688632},
    doi = {10.1145/3679058.3688632},
    abstract = {Traditional spatial hypertext systems, predominantly limited to two-dimensional (2D) interfaces, offer limited support for addressing long debated inherent problems such as orientation difficulties and navigation in large information spaces. In this context, we present opportunities from interdisciplinary fields such as immersive analytics (IA) and embodied cognition that may mitigate some of these challenges. However, while some research has explored the extension of spatial hypertext to three dimensions, there is a lack of discussion on recent advances in virtual reality technologies and related fields, and their potential impact on immersive spatial hypertext systems. This paper addresses this gap by exploring the integration of immersive technologies into spatial hypertext systems, proposing a novel approach to enhance user engagement and comprehension through three-dimensional (3D) environments and multisensory interaction.},
    numpages = {6},
    keywords = {hypertext, spatial hypertext, knowledge, extended reality, virtual reality, immersiveness, information exploration},
    location = {Poznań, Poland},
    series = {HUMAN '24}
    }
  • Y. Tehreem, T. Pfeiffer, and S. Wachsmuth, “A Hybrid Collaboration Design for a Large Scale Virtual Reality Training Environment to Fulfil the Belongingness Needs of Maslow’s Theory,” in Virtual Reality and Mixed Reality, Cham, 2025, p. 259–281. doi:10.1007/978-3-031-78593-1_15
    [BibTeX] [Abstract]
    This work is part of a line of research to systematically investigate, how virtual reality (VR) trainings can be designed to be pragmatically effective (e.g. scalable) while satisfying human needs. To this ends, the design is guided inter alia by human motivational theory and in particular Maslow’s hierarchy of needs (MHN). The study at hand focused on the third level of MHN, covering the need for belongingness. Considering a classroom-sized VR setup, it appears obvious, that multi-user implementations may be effective in creating a sense of belongingness. However, increasing sizes of training areas, sense of competition, distractions or mutual influence impose challenges that need to be overcome. The study evaluates a new concept for a diminished multi-user approach, where only selected elements, that are supportive for belongingness, are synchronized, while disturbing elements are filtered. Results show, that the design, which is applicable for large environments, indeed increased communication, collaboration and awareness, without affecting comfort or distraction compared to single-user simulations.
    @InProceedings{10.1007/978-3-031-78593-1_15,
    author="Tehreem, Yusra and Pfeiffer, Thies and Wachsmuth, Sven",
    editor="Reyes-Lecuona, Arcadio and Zachmann, Gabriel and Bordegoni, Monica and Chen, Jian and Karaseitanidis, Giannis and Pagani, Alain and Bourdot, Patrick",
    title="A Hybrid Collaboration Design for a Large Scale Virtual Reality Training Environment to Fulfil the Belongingness Needs of Maslow's Theory",
    booktitle="Virtual Reality and Mixed Reality",
    year="2025",
    publisher="Springer Nature Switzerland",
    address="Cham",
    pages="259--281",
    abstract="This work is part of a line of research to systematically investigate, how virtual reality (VR) trainings can be designed to be pragmatically effective (e.g. scalable) while satisfying human needs. To this ends, the design is guided inter alia by human motivational theory and in particular Maslow's hierarchy of needs (MHN). The study at hand focused on the third level of MHN, covering the need for belongingness. Considering a classroom-sized VR setup, it appears obvious, that multi-user implementations may be effective in creating a sense of belongingness. However, increasing sizes of training areas, sense of competition, distractions or mutual influence impose challenges that need to be overcome. The study evaluates a new concept for a diminished multi-user approach, where only selected elements, that are supportive for belongingness, are synchronized, while disturbing elements are filtered. Results show, that the design, which is applicable for large environments, indeed increased communication, collaboration and awareness, without affecting comfort or distraction compared to single-user simulations.",
    isbn="978-3-031-78593-1",
    doi="https://doi.org/10.1007/978-3-031-78593-1_15"
    }
  • T. Weiss, J. Pfeiffer, and T. Pfeiffer, “Early Bird – Predict healthy product choices in virtual commerce,” in ECIS 2024 Proceedings, 2024.
    [BibTeX] [Abstract] [Download PDF]
    Due to advances in extended reality technology, an increasing number of head-mounted displays are equipped with eye trackers. These sensors allow to predict customers’ preferences on-the-fly. Such information can serve as features for recommender systems. We propose to treat eye tracking data as time series and utilize a deep time series classifier for inference. Our evaluation investigates possibly early predictions about customer preferences for healthy products in a virtual reality environment. The results, that are based on data from a large-scale laboratory experiment, demonstrate superior performance of the time series classifier, compared to a shallow gradient boosting baseline. They indicate a trade-off between prediction quality and how early this prediction is made. Overall, our study suggests that eye tracking and time series classification are valuable avenues for research and practice. Adaptive (shopping) assistants and recommendations based on artificial intelligence and bio sensors seem to be in close vicinity.
    @inproceedings{weiss2024early,
    title={Early Bird - Predict healthy product choices in virtual commerce},
    author={Weiss, Tobias and Pfeiffer, Jella and Pfeiffer, Thies},
    booktitle = {ECIS 2024 Proceedings},
    year = {2024},
    abstract = {Due to advances in extended reality technology, an increasing number of head-mounted displays are equipped with eye trackers. These sensors allow to predict customers’ preferences on-the-fly. Such information can serve as features for recommender systems. We propose to treat eye tracking data as time series and utilize a deep time series classifier for inference. Our evaluation investigates possibly early predictions about customer preferences for healthy products in a virtual reality environment. The results, that are based on data from a large-scale laboratory experiment, demonstrate superior performance of the time series classifier, compared to a shallow gradient boosting baseline. They indicate a trade-off between prediction quality and how early this prediction is made. Overall, our study suggests that eye tracking and time series classification are valuable avenues for research and practice. Adaptive (shopping) assistants and recommendations based on artificial intelligence and bio sensors seem to be in close vicinity.},
    url = {https://aisel.aisnet.org/ecis2024/track09_coghbis/track09_coghbis/3/}
    }
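    As a rough sketch of the shallow baseline idea mentioned in this abstract (not the authors' pipeline): a variable-length gaze trace is collapsed into a few summary features and fed to a gradient-boosting classifier. The data, labels and feature choices below are synthetic stand-ins for illustration only.
    # Minimal sketch of a shallow gradient-boosting baseline on gaze data
    # (editorial illustration; data and features are made up).
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    def summarise(gaze: np.ndarray) -> np.ndarray:
        """Collapse a (timesteps, 2) gaze trace into simple summary statistics."""
        return np.concatenate([gaze.mean(axis=0), gaze.std(axis=0), [len(gaze)]])

    # Synthetic stand-in data: 200 trials, variable-length 2-D gaze traces,
    # binary label standing in for "chose the healthy product".
    traces = [rng.normal(size=(rng.integers(50, 300), 2)) for _ in range(200)]
    labels = rng.integers(0, 2, size=200)

    X = np.stack([summarise(t) for t in traces])
    X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)

    clf = GradientBoostingClassifier().fit(X_train, y_train)
    print("baseline accuracy:", clf.score(X_test, y_test))  # ~chance on random data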
  • J. Dyrna, M. Arnold, H. Fischer, T. Pfeiffer, L. Meyer, and T. Köhler, “Virtual Reality-gestützte Rollenspiele für die berufliche Bildung: Drei Anwendungsbeispiele und ein Implementierungsleitfaden,” in Handbuch E-Learning, A. Hohenstein and K. Wilbers, Eds., Fachverlag Deutscher Wirtschaftsdienst, 2024, vol. 107. Ergänzungslieferung.
    [BibTeX] [Abstract]
    In Virtual Reality (VR)-Umgebungen können Auszubildende, Umzuschulende und Weiterzubildende vergleichsweise kostengünstig, risikoarm und methodisch vielfältig – wie zum Beispiel in Form von virtuellen Rollenspielen – wichtige Handlungs- und Sozialkompetenzen für ihre aktuelle oder zukünftige berufliche Tätigkeit trainieren. Mit dem vorliegenden Beitrag möchten wir Organisationen der beruflichen Bildung, wie zum Beispiel Bildungsträger oder Betriebe, in diese Thematik einführen. Dazu geben wir zunächst einen Überblick über die Potenziale und Forschungsergebnisse zur bildungsbezogenen Nutzung von VR-Technologie sowie speziell zu VR-gestützten Rollenspielen. Zur Veranschaulichung ihrer Einsatzmöglichkeiten beschreiben wir anschließend drei Anwendungsfälle aus den Bereichen Tourismus, Immobilienwirtschaft und Gesundheit. Um Organisationen eine Unterstützung bei der Erwägung und Implementierung von VR-gestützten Lernszenarien bereitzustellen, erarbeiten wir abschließend einen Handlungsleitfaden mit sechs Schritten.
    @incollection{Pfeiffer2024a,
    title = {Virtual Reality-gestützte Rollenspiele für die berufliche Bildung: Drei Anwendungsbeispiele und ein Implementierungsleitfaden},
    author = {Dyrna, Jonathan and Arnold, Maik and Fischer, Helge and Pfeiffer, Thies and Meyer, Leonard and Köhler, Thomas},
    year = 2024,
    booktitle = {Handbuch E-Learning},
    publisher = {Fachverlag Deutscher Wirtschaftsdienst},
    volume = {107. Ergänzungslieferung},
    number = {6.57},
    isbn = {978-3-87156-298-3},
    editor = {Hohenstein, Andreas and Wilbers, Karl},
    abstract = {In Virtual Reality (VR)-Umgebungen können Auszubildende, Umzuschulende und Weiterzubildende vergleichsweise kostengünstig, risikoarm und methodisch vielfältig – wie zum Beispiel in Form von virtuellen Rollenspielen – wichtige Handlungs- und Sozialkompetenzen für ihre aktuelle oder zukünftige berufliche Tätigkeit trainieren. Mit dem vorliegenden Beitrag möchten wir Organisationen der beruflichen Bildung, wie zum Beispiel Bildungsträger oder Betriebe, in diese Thematik einführen. Dazu geben wir zunächst einen Überblick über die Potenziale und Forschungsergebnisse zur bildungsbezogenen Nutzung von VR-Technologie sowie speziell zu VR-gestützten Rollenspielen. Zur Veranschaulichung ihrer Einsatzmöglichkeiten beschreiben wir anschließend drei Anwendungsfälle aus den Bereichen Tourismus, Immobilienwirtschaft und Gesundheit. Um Organisationen eine Unterstützung bei der Erwägung und Implementierung von VR-gestützten Lernszenarien bereitzustellen, erarbeiten wir abschließend einen Handlungsleitfaden mit sechs Schritten.}
    }
  • P. Mavrogiorgou, P. Böhme, M. Kramer, S. Vanscheidt, T. Schoppa, V. Hooge, N. Lüdike, T. Pfeiffer, and G. Juckel, “Virtuelle Realität in der Lehre mit psychisch kranken Patientenavataren,” Der Nervenarzt, vol. 95, iss. 3, pp. 247-253, 2024. doi:10.1007/s00115-024-01610-y
    [BibTeX] [Abstract] [Download PDF]
    Ärztliche Interaktions- und Explorationstechniken sind die wichtigsten Werkzeuge, die Medizinstudierende im Fach Psychiatrie und Psychotherapie zu erwerben haben. Die aktuell verfügbaren modernen digitalen Technologien wie Virtual Reality (VR) können als wichtige Ergänzungen zu einer Verbesserung der Vermittlung psychiatrisch-psychopathologischer Lerninhalte sowie Diagnosestellung beitragen.
    @article{Mavrogiorgou2024,
    author={Mavrogiorgou, Paraskevi
    and B{\"o}hme, Pierre
    and Kramer, Marco
    and Vanscheidt, Simon
    and Schoppa, Thomas
    and Hooge, Vitalij
    and L{\"u}dike, Nico
    and Pfeiffer, Thies
    and Juckel, Georg},
    title={Virtuelle Realit{\"a}t in der Lehre mit psychisch kranken Patientenavataren},
    journal={Der Nervenarzt},
    year={2024},
    month={Mar},
    day={01},
    volume={95},
    number={3},
    pages={247-253},
    abstract={{\"A}rztliche Interaktions- und Explorationstechniken sind die wichtigsten Werkzeuge, die Medizinstudierende im Fach Psychiatrie und Psychotherapie zu erwerben haben. Die aktuell verf{\"u}gbaren modernen digitalen Technologien wie Virtual Reality (VR) k{\"o}nnen als wichtige Erg{\"a}nzungen zu einer Verbesserung der Vermittlung psychiatrisch-psychopathologischer Lerninhalte sowie Diagnosestellung beitragen.},
    issn={1433-0407},
    doi={10.1007/s00115-024-01610-y},
    url={https://doi.org/10.1007/s00115-024-01610-y}
    }
  • T. Potempa, I. Stroucken, T. Uelzen, I. Johannsen, M. Sandner, B. Kendelbacher, M. Müller, Y. Lonkai, S. Lamanov, K. Mecke, J. Blandfort, I. Belaiba, S. Azer, M. Haupt, K. Kölln, M. Sohn, M. Rauschenberger, T. Pfeiffer, R. Ahlbrecht, J. Rempe, T. Mehring, L. Weber, E. Bertram, D. Gouverneur, and S. Föste, Lehr- und LernOrte vernetzen (Zwischenergebnisse des Futur.A-Projekts), T. Potempa and I. Stroucken, Eds., 2023. doi:10.26271/opus-1670
    [BibTeX] [Abstract] [Download PDF]
    Das anwendungsbezogene Lernen hat für Studierende an Fachhochschulen eine hohe Bedeutung. Es übt nicht nur den Theorie-Praxis-Transfer in den angewandten Wissenschaften, sondern fördert unter persönlicher Anleitung auch motorische sowie interaktive Kompetenzentwicklung und nicht zuletzt die Motivation. Die Erweiterung von Lernumgebungen durch Digitalisierung solcher Lehrveranstaltungen erfordert eine besondere technische und didaktische Qualität, um Lernerfolge trotz der Nicht-Präsenz bestmöglich zu gewährleisten. Sie bietet jedoch auch neue Möglichkeiten für interdisziplinäres Lehren und Lernen. Das Futur.A-Projekt (Future Skills. Apllied) ist ein Verbundprojekt von sechs nieder-sächsischen Fachhochschulen und fokussiert im Teilprojekt 3 die Entwicklung, Erprobung und curriculare Verankerung digital gestützten anwendungsbezogenen Lernens. Erste Ergebnisse aus diesem Teilprojekt werden in diesem Band vorgestellt und beschrieben. Beispielhafte Lernorte sind Labore sowie Lehrveranstaltungen, in denen komplexe Themen mit starkem Anwendungsbezug behandelt werden.
    @book{PotempaStrouckenUelzenetal.2023,
    author = {Thomas Potempa and Ilona Stroucken and Thorsten Uelzen and Ingo Johannsen and Marvin Sandner and Bj{\"o}rn Kendelbacher and Michael M{\"u}ller and Yannick Lonkai and Sergej Lamanov and Kai Mecke and Julia Blandfort and Iheb Belaiba and Sebastian Azer and Matthias Haupt and Kristina K{\"o}lln and Martin Sohn and Maria Rauschenberger and Thies Pfeiffer and Regina Ahlbrecht and Julia Rempe and Tanja Mehring and Lars Weber and Erik Bertram and Dirk Gouverneur and Sebastian F{\"o}ste},
    title = {Lehr- und LernOrte vernetzen (Zwischenergebnisse des Futur.A-Projekts)},
    editor = {Thomas Potempa and Ilona Stroucken},
    doi = {10.26271/opus-1670},
    url={https://opus.ostfalia.de/frontdoor/index/index/docId/1670},
    abstract={Das anwendungsbezogene Lernen hat für Studierende an Fachhochschulen eine hohe Bedeutung. Es übt nicht nur den Theorie-Praxis-Transfer in den angewandten Wissenschaften, sondern fördert unter persönlicher Anleitung auch motorische sowie interaktive Kompetenzentwicklung und nicht zuletzt die Motivation. Die Erweiterung von Lernumgebungen durch Digitalisierung solcher Lehrveranstaltungen erfordert eine besondere technische und didaktische Qualität, um Lernerfolge trotz der Nicht-Präsenz bestmöglich zu gewährleisten. Sie bietet jedoch auch neue Möglichkeiten für interdisziplinäres Lehren und Lernen. Das Futur.A-Projekt (Future Skills. Apllied) ist ein Verbundprojekt von sechs nieder-sächsischen Fachhochschulen und fokussiert im Teilprojekt 3 die Entwicklung, Erprobung und curriculare Verankerung digital gestützten anwendungsbezogenen Lernens. Erste Ergebnisse aus diesem Teilprojekt werden in diesem Band vorgestellt und beschrieben. Beispielhafte Lernorte sind Labore sowie Lehrveranstaltungen, in denen komplexe Themen mit starkem Anwendungsbezug behandelt werden.},
    year = {2023},
    }
  • T. Pfeiffer, “The Homunculus in the Metaverse: Is Virtual Reality Prepared for Our Seven Senses?,” NIM Marketing Intelligence Review, vol. 15, iss. 2, p. 36–41, 2023. doi:10.2478/nimmir-2023-0015
    [BibTeX] [Abstract] [Download PDF]
    In millions of years, evolution has shaped our human bodies and brains for sensing, acting and thinking in the physical or actual world. To probe, handle and assess food – a necessity for survival -, hands and lips played a key role for tactile sensing. The highest density of receptors can be found in our lips and fingertips, despite their small physical size. Consequently, the proportion of the brain dedicated to tactile sensing is relatively large. As a play of thought, let’s assume that the relevance of our individual senses for acting in the actual world corresponds to the proportions of the brain dedicated to their processing. What if we could visualize these proportions in an intuitively accessible way? Meet Penfield’s homunculus, a deformed human shape in which the size of body parts is changed to match those proportions. It seems, evolution has prepared us well for the physical world – but how so for the metaverse?
    @article{Pfeiffer2023NIM,
    author = {Thies Pfeiffer},
    doi = {10.2478/nimmir-2023-0015},
    url = {https://doi.org/10.2478/nimmir-2023-0015},
    title = {The Homunculus in the Metaverse: Is Virtual Reality Prepared for Our Seven Senses?},
    journal = {NIM Marketing Intelligence Review},
    number = {2},
    volume = {15},
    year = {2023},
    pages = {36--41},
    abstract = {In millions of years, evolution has shaped our human bodies and brains for sensing, acting and thinking in the physical or actual world. To probe, handle and assess food - a necessity for survival -, hands and lips played a key role for tactile sensing. The highest density of receptors can be found in our lips and fingertips, despite their small physical size. Consequently, the proportion of the brain dedicated to tactile sensing is relatively large. As a play of thought, let's assume that the relevance of our individual senses for acting in the actual world corresponds to the proportions of the brain dedicated to their processing. What if we could visualize these proportions in an intuitively accessible way? Meet Penfield's homunculus, a deformed human shape in which the size of body parts is changed to match those proportions. It seems, evolution has prepared us well for the physical world – but how so for the metaverse?}
    }
  • J. Blattgerste, J. Behrends, and T. Pfeiffer, “TrainAR: An Open-Source Visual Scripting-Based Authoring Tool for Procedural Mobile Augmented Reality Trainings,” Information, vol. 14, iss. 4, 2023. doi:10.3390/info14040219
    [BibTeX] [Abstract] [Download PDF]
    Mobile Augmented Reality (AR) is a promising technology for educational purposes. It allows for interactive, engaging, and spatially independent learning. While the didactic benefits of AR have been well studied in recent years and commodity smartphones already come with AR capabilities, concepts and tools for a scalable deployment of AR are still missing. The proposed solution TrainAR combines an interaction concept, a didactic framework and an authoring tool for procedural AR training applications for smartphones. The contribution of this paper is the open-source visual scripting-based authoring tool of TrainAR in the form of a Unity Editor extension. With this approach, TrainAR allows non-programmer domain experts to create (“author”) their own procedural AR trainings by offering a customized editor, while at any time programmers may decide to utilize Unity’s full capabilities. Furthermore, utility and usability evaluations of several already developed TrainAR trainings (combined n = 317) show that TrainAR trainings provide utility in several contexts and are usable by the target groups. A systematic usability evaluation of the TrainAR Authoring Tool (n = 30) shows that it would be usable by non-programmer domain experts, though the learning curve depends on the media competency of the authors.
    @Article{Blattgerste2023TrainAR,
    AUTHOR = {Blattgerste, Jonas and Behrends, Jan and Pfeiffer, Thies},
    TITLE = {TrainAR: An Open-Source Visual Scripting-Based Authoring Tool for Procedural Mobile Augmented Reality Trainings},
    JOURNAL = {Information},
    VOLUME = {14},
    YEAR = {2023},
    NUMBER = {4},
    ARTICLE-NUMBER = {219},
    URL = {https://mixality.de/wp-content/uploads/2023/04/Blattgerste2023TrainAR.pdf},
    ISSN = {2078-2489},
    ABSTRACT = {Mobile Augmented Reality (AR) is a promising technology for educational purposes. It allows for interactive, engaging, and spatially independent learning. While the didactic benefits of AR have been well studied in recent years and commodity smartphones already come with AR capabilities, concepts and tools for a scalable deployment of AR are still missing. The proposed solution TrainAR combines an interaction concept, a didactic framework and an authoring tool for procedural AR training applications for smartphones. The contribution of this paper is the open-source visual scripting-based authoring tool of TrainAR in the form of a Unity Editor extension. With this approach, TrainAR allows non-programmer domain experts to create (“author”) their own procedural AR trainings by offering a customized editor, while at any time programmers may decide to utilize Unity’s full capabilities. Furthermore, utility and usability evaluations of several already developed TrainAR trainings (combined n = 317) show that TrainAR trainings provide utility in several contexts and are usable by the target groups. A systematic usability evaluation of the TrainAR Authoring Tool (n = 30) shows that it would be usable by non-programmer domain experts, though the learning curve depends on the media competency of the authors.},
    DOI = {10.3390/info14040219}
    }
  • U. Hejna, C. Hainke, T. Pfeiffer, and S. Seeling, “Mehrbenutzer-VR-Anwendungen für ein rollenbasiertes Falltraining: Ein explorativer Einsatz im Kontext der Pflegeausbildung,” MedienPädagogik: Zeitschrift für Theorie und Praxis der Medienbildung, vol. 51, p. 314–344, 2023.
    [BibTeX] [Abstract] [Download PDF]
    Da Gruppenarbeit die Kompetenz- und Qualifikationsentwicklung bei Lernenden fördern kann, wird sie zur Stärkung beruflicher Handlungs-, Personal- und Methodenkompetenz eingesetzt. Die zunehmende Digitalisierung bringt jedoch Herausforderungen für Gruppenarbeit im Online-Format mit sich. Zurzeit werden für deren Umsetzung häufig Videokonferenztools verwendet. Dabei steigt mit der Entwicklung von Consumer-freundlicher VR-Hardware das Interesse an Virtual Reality (VR) in der Bildung, da Motivation und Engagement der Lernenden gesteigert werden können, ressourcenschonende Lehre möglich ist und seltene oder gefährliche Situationen beliebig oft wiederholt und eingeübt werden können. Trotz der Vorteile, die das Lernen mit VR mit sich bringt, wird das Medium in der Lehre selten verwendet. Gründe dafür könnten die aufwendige Einarbeitung in die noch neue Technologie sein sowie die Einbindung ins Curriculum ohne eine etablierte didaktische Grundlage mitzudenken. Um die Einbindung zu erleichtern, soll in diesem Beitrag ein Implementierungsbeispiel für eine Mehrpersonen-VR-Anwendung vorgestellt und Ergebnisse einer ersten Erprobung im hochschulischen Lehrkontext aufgeführt werden. Die Anwendung wurde verwendet, um Pflege-Studierenden eine Möglichkeit zur multiperspektivischen Besprechung eines Fallbeispiels zu geben. Während das Feedback der Studierenden zur Nutzung der Anwendung überwiegend positiv ausfiel, zeigt der erhöhte Implementierungsaufwand die Notwendigkeit eines Implementierungs- und (Fach)-Didaktik-Konzeptes, um den Aufwand für den Einsatz von VR in der Lehre zu minimieren.
    @article{hejna2023mehrbenutzer,
    title={Mehrbenutzer-VR-Anwendungen f{\"u}r ein rollenbasiertes Falltraining: Ein explorativer Einsatz im Kontext der Pflegeausbildung},
    author={Hejna, Urszula and Hainke, Carolin and Pfeiffer, Thies and Seeling, Stefanie},
    journal={MedienP{\"a}dagogik: Zeitschrift f{\"u}r Theorie und Praxis der Medienbildung},
    volume={51},
    pages={314--344},
    year={2023},
    abstract={Da Gruppenarbeit die Kompetenz- und Qualifikationsentwicklung bei Lernenden fördern kann, wird sie zur Stärkung beruflicher Handlungs-, Personal- und Methodenkompetenz eingesetzt. Die zunehmende Digitalisierung bringt jedoch Herausforderungen für Gruppenarbeit im Online-Format mit sich. Zurzeit werden für deren Umsetzung häufig Videokonferenztools verwendet. Dabei steigt mit der Entwicklung von Consumer-freundlicher VR-Hardware das Interesse an Virtual Reality (VR) in der Bildung, da Motivation und Engagement der Lernenden gesteigert werden können, ressourcenschonende Lehre möglich ist und seltene oder gefährliche Situationen beliebig oft wiederholt und eingeübt werden können. Trotz der Vorteile, die das Lernen mit VR mit sich bringt, wird das Medium in der Lehre selten verwendet. Gründe dafür könnten die aufwendige Einarbeitung in die noch neue Technologie sein sowie die Einbindung ins Curriculum ohne eine etablierte didaktische Grundlage mitzudenken. Um die Einbindung zu erleichtern, soll in diesem Beitrag ein Implementierungsbeispiel für eine Mehrpersonen-VR-Anwendung vorgestellt und Ergebnisse einer ersten Erprobung im hochschulischen Lehrkontext aufgeführt werden. Die Anwendung wurde verwendet, um Pflege-Studierenden eine Möglichkeit zur multiperspektivischen Besprechung eines Fallbeispiels zu geben. Während das Feedback der Studierenden zur Nutzung der Anwendung überwiegend positiv ausfiel, zeigt der erhöhte Implementierungsaufwand die Notwendigkeit eines Implementierungs- und (Fach)-Didaktik-Konzeptes, um den Aufwand für den Einsatz von VR in der Lehre zu minimieren.},
    url={https://www.medienpaed.com/article/view/1594/1239}
    }
  • K. Vogel, A. Bernloehr, C. Lewa, J. Blattgerste, M. Joswig, T. Schäfer, T. Pfeiffer, and N. H. Bauer, “Augmented Reality based training for student midwives (Heb@AR) – what kind of support do teachers need?,” 6. Internationale Konferenz Deutsche Gesellschaft für Hebammenwissenschaft (DGHWi e.V.), 2022.
    [BibTeX] [Download PDF]
    @article{Vogel202Augmented,
    title={Augmented Reality based training for student midwives (Heb@AR) – what kind of support do teachers need?},
    author={Vogel, Kristina and Bernloehr, Annette and Lewa, Carmen and Blattgerste, Jonas and Joswig, Matthias and Sch{\"a}fer, Thorsten and Pfeiffer, Thies and Bauer, Nicola H.},
    journal={6. Internationale Konferenz Deutsche Gesellschaft f{\"u}r Hebammenwissenschaft (DGHWi e.V.)},
    year={2022},
    url = {https://mixality.de/wp-content/uploads/2023/03/Vogel2022Support.pdf}
    }
  • C. Lewa, M. Joswig, T. Willmeroth, K. Luksch, J. Blattgerste, T. Pfeiffer, A. Bernloehr, N. H. Bauer, and T. Schäfer, “Notfallszenarien transformiert in eine Augmented Reality Lehr-/Lernbegleitung für ein realitätsnahes Training in der hochschulischen Hebammenausbildung,” Jahrestagung der Gesellschaft für Medizinische Ausbildung (GMA), 2022.
    [BibTeX] [Download PDF]
    @article{lewa2022notfallszenarien,
    title={Notfallszenarien transformiert in eine Augmented Reality Lehr-/Lernbegleitung f{\"u}r ein realit{\"a}tsnahes Training in der hochschulischen Hebammenausbildung},
    author={Lewa, Carmen and Joswig, Matthias and Willmeroth, Tabea and Luksch, Kristina and Blattgerste, Jonas and Pfeiffer, Thies and Bernloehr, Annette and Bauer, Nicola H. and Sch{\"a}fer, Thorsten},
    year={2022},
    journal={Jahrestagung der Gesellschaft f{\"u}r Medizinische Ausbildung (GMA)},
    url = {https://mixality.de/wp-content/uploads/2023/03/Lewa2022Notfall.pdf}
    }
  • J. Blattgerste, J. Franssen, M. Arztmann, and T. Pfeiffer, “Motivational benefits and usability of a handheld Augmented Reality game for anatomy learning,” in 2022 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), 2022.
    [BibTeX] [Download PDF]
    @inproceedings{Blattgerste2022Motivational,
    title={Motivational benefits and usability of a handheld Augmented Reality game for anatomy learning},
    author={Blattgerste, Jonas and Franssen, Jannik and Arztmann, Michaela and Pfeiffer, Thies},
    booktitle={2022 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)},
    year={2022},
    organization={IEEE},
    url = {https://mixality.de/wp-content/uploads/2022/12/Blattgerste2022Motivational.pdf}
    }
  • J. Blattgerste and T. Pfeiffer, “TrainAR: Ein Augmented Reality Training Autorensystem,” in Wettbewerbsband AVRiL 2022, Bonn, 2022, pp. 40-45. doi:10.18420/avril2022_06
    [BibTeX] [Download PDF]
    @inproceedings{Blattgerste2022TrainAR,
    author = {Blattgerste, Jonas AND Pfeiffer, Thies},
    title = {TrainAR: Ein Augmented Reality Training Autorensystem},
    booktitle = {Wettbewerbsband AVRiL 2022},
    year = {2022},
    editor = {Söbke, Heinrich AND Zender, Raphael} ,
    pages = {40-45} ,
    doi = {10.18420/avril2022_06},
    publisher = {Gesellschaft für Informatik e.V.},
    address = {Bonn},
    url = {https://mixality.de/wp-content/uploads/2022/12/Blattgerste2022TrainAR.pdf}
    }
  • J. Blattgerste, C. Lewa, K. Vogel, T. Willmeroth, S. Janßen, J. Franssen, J. Behrends, M. Joswig, T. Schäfer, N. H. Bauer, A. Bernloehr, and T. Pfeiffer, “Die Heb@AR App – Eine Android & iOS App mit Augmented Reality Trainings für selbstbestimmtes und curriculares Lernen in der hochschulischen Hebammenausbildung,” in Wettbewerbsband AVRiL 2022, Bonn, 2022, pp. 4-9. doi:10.18420/avril2022_01
    [BibTeX] [Download PDF]
    @inproceedings{Blattgerste2022HebARAVRiL,
    author = {Blattgerste, Jonas AND Lewa , Carmen AND Vogel, Kristina AND Willmeroth, Tabea AND Janßen, Sven AND Franssen, Jannik AND Behrends, Jan AND Joswig, Matthias AND Schäfer, Thorsten AND Bauer, Nicola H. AND Bernloehr, Annette AND Pfeiffer, Thies},
    title = {Die Heb@AR App - Eine Android \& iOS App mit Augmented Reality Trainings für selbstbestimmtes und curriculares Lernen in der hochschulischen Hebammenausbildung},
    booktitle = {Wettbewerbsband AVRiL 2022},
    year = {2022},
    editor = {Söbke, Heinrich AND Zender, Raphael} ,
    pages = {4-9} ,
    doi = {10.18420/avril2022_01},
    publisher = {Gesellschaft für Informatik e.V.},
    address = {Bonn},
    url = {https://mixality.de/wp-content/uploads/2022/12/Blattgerste2022HebARAVRiL.pdf}
    }
  • Y. Tehreem, S. G. Fracaro, T. Gallagher, R. Toyoda, K. Bernaerts, J. Glassey, F. R. Abegão, S. Wachsmuth, M. Wilk, and T. Pfeiffer, “May I Remain Seated: A Pilot Study on the Impact of Reducing Room-Scale Trainings to Seated Conditions for Long Procedural Virtual Reality Trainings,” in 2022 8th International Conference on Virtual Reality (ICVR), 2022, pp. 62-71. doi:10.1109/ICVR55215.2022.9848222
    [BibTeX] [Abstract] [Download PDF]
    Although modern consumer level head-mounted-displays of today provide high-quality room scale tracking, and thus support a high level of immersion and presence, there are application contexts in which constraining oneself to seated set-ups is necessary. Classroom sized training groups are one highly relevant example. However, what is lost when constraining cybernauts to a stationary seated physical space? What is the impact on immersion, presence, cybersickness and what implications does this have on training success? Can a careful design for seated virtual reality (VR) amend some of these aspects? In this line of research, the study provides data on a comparison between standing and seated long (50–60 min) procedural VR training sessions of chemical operators in a realistic and lengthy chemical procedure (combination of digital and physical actions) inside a large 3-floor virtual chemical plant. Besides, a VR training framework based on Maslow’s hierarchy of needs (MHN) is also proposed to systematically analyze the needs in VR environments. In the first of a series of studies, the physiological and safety needs of MHN are evaluated among seated and standing groups in the form of cybersickness, usability and user experience. The results (n=32, real personnel of a chemical plant) show no statistically significant differences among seated and standing groups. There were low levels of cybersickness along with good scores of usability and user experience for both conditions. From these results, it can be implied that the seated condition does not impose significant problems that might hinder its application in classroom training. A follow-up study with a larger sample will provide a more detailed analysis on differences in experienced presence and learning success.
    @inproceedings{Tehreem2022MayIRemainSeated,
    author={Tehreem, Yusra and Fracaro, Sofia Garcia and Gallagher, Timothy and Toyoda, Ryo and Bernaerts, Kristel and Glassey, Jarka and Abeg{\~{a}}o, Fernando Russo and Wachsmuth, Sven and Wilk, Michael and Pfeiffer, Thies},
    booktitle={2022 8th International Conference on Virtual Reality (ICVR)},
    title={May I Remain Seated: A Pilot Study on the Impact of Reducing Room-Scale Trainings to Seated Conditions for Long Procedural Virtual Reality Trainings},
    abstract={Although modern consumer level head-mounted-displays of today provide high-quality room scale tracking, and thus support a high level of immersion and presence, there are application contexts in which constraining oneself to seated set-ups is necessary. Classroom sized training groups are one highly relevant example. However, what is lost when constraining cybernauts to a stationary seated physical space? What is the impact on immersion, presence, cybersickness and what implications does this have on training success? Can a careful design for seated virtual reality (VR) amend some of these aspects? In this line of research, the study provides data on a comparison between standing and seated long (50–60 min) procedural VR training sessions of chemical operators in a realistic and lengthy chemical procedure (combination of digital and physical actions) inside a large 3-floor virtual chemical plant. Besides, a VR training framework based on Maslow's hierarchy of needs (MHN) is also proposed to systematically analyze the needs in VR environments. In the first of a series of studies, the physiological and safety needs of MHN are evaluated among seated and standing groups in the form of cybersickness, usability and user experience. The results (n=32, real personnel of a chemical plant) show no statistically significant differences among seated and standing groups. There were low levels of cybersickness along with good scores of usability and user experience for both conditions. From these results, it can be implied that the seated condition does not impose significant problems that might hinder its application in classroom training. A follow-up study with a larger sample will provide a more detailed analysis on differences in experienced presence and learning success.},
    year={2022},
    pages={62-71},
    url= {https://mixality.de/wp-content/uploads/2022/08/Tehreem2022ICVR.pdf},
    doi={10.1109/ICVR55215.2022.9848222}
    }
  • J. Blattgerste, J. Behrends, and T. Pfeiffer, “A Web-Based Analysis Toolkit for the System Usability Scale,” in Proceedings of the 12th ACM International Conference on PErvasive Technologies Related to Assistive Environments, 2022.
    [BibTeX] [Download PDF]
    @inproceedings{Blattgerste2022SUS,
    author = {Blattgerste, Jonas and Behrends, Jan and Pfeiffer, Thies},
    booktitle = {Proceedings of the 12th ACM International Conference on PErvasive Technologies Related to Assistive Environments},
    location = {Corfu, Greece},
    title = {A Web-Based Analysis Toolkit for the System Usability Scale},
    url = {https://mixality.de/wp-content/uploads/2022/07/Blattgerste2022SUS.pdf},
    year = {2022}
    }
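    For context, the System Usability Scale score that such a toolkit analyses is computed with the standard formula: ten Likert items (1–5), odd items contribute (r − 1), even items (5 − r), and the sum is scaled by 2.5 onto a 0–100 range. The snippet below is an editorial illustration of that formula, not code from the toolkit or the paper.
    # Standard SUS scoring (illustrative sketch, not the toolkit's code).
    def sus_score(responses: list[int]) -> float:
        """Compute the System Usability Scale score for one participant."""
        if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
            raise ValueError("expected ten responses on a 1-5 scale")
        contributions = [
            (r - 1) if i % 2 == 0 else (5 - r)   # items 1,3,5,... are positive-toned
            for i, r in enumerate(responses)
        ]
        return 2.5 * sum(contributions)

    print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))  # -> 85.0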
  • J. Blattgerste, K. Vogel, C. Lewa, T. Willmeroth, M. Joswig, T. Schäfer, N. H. Bauer, A. Bernloehr, and T. Pfeiffer, “The Heb@AR App – Five Augmented Reality Trainings for Self-Directed Learning in Academic Midwifery Education,” in DELFI 2022 – Die 20. Fachtagung Bildungstechnologien der Gesellschaft für Informatik eV, 2022.
    [BibTeX] [Download PDF]
    @inproceedings{Blattgerste2022HebARApp,
    author = {Blattgerste, Jonas and Vogel, Kristina and Lewa, Carmen and Willmeroth, Tabea and Joswig, Matthias and Schäfer, Thorsten and Bauer, Nicola H. and Bernloehr, Annette and Pfeiffer, Thies},
    booktitle = {DELFI 2022 – Die 20. Fachtagung Bildungstechnologien der Gesellschaft für Informatik eV},
    location = {Karlsruhe, Germany},
    title = {The Heb@AR App – Five Augmented Reality Trainings for Self-Directed Learning in Academic Midwifery Education},
    url = {https://mixality.de/wp-content/uploads/2022/07/Blattgerste2022DELFI.pdf},
    year = {2022}
    }
  • M. Bartolles, A. Kamin, L. Meyer, and T. Pfeiffer, “VR-basierte Digital Reusable Learning Objects: Ein interdisziplinäres Fortbildungskonzept für Bildungspersonal in der Pflegebildung,” MedienPädagogik: Zeitschrift für Theorie und Praxis der Medienbildung, vol. 47, iss. AR/VR – Part 1, p. 138–156, 2022. doi:10.21240/mpaed/47/2022.04.07.X
    [BibTeX] [Abstract] [Download PDF]
    Seit einigen Jahren finden VR-Technologien Einzug in die Gesundheitsberufe. Häufig wird der Einsatz durch Forschungsprojekte begleitet, eine mediendidaktische Einbettung oder lerntheoretische Begründung steht hier jedoch zumeist aus. Hinzu kommt, dass Schulungsmassnahmen zur Nutzung von VR-Technologien bisweilen den Fokus auf die Vermittlung von instrumentell-qualifikatorischen Bedienfähigkeiten legen. Der nachfolgende Artikel zeigt auf, wie das Modell des Technological Pedagogical Content Knowledge (TPACK) von Mishra und Koehler (2006) als Grundlage für die Konzeption interdisziplinär entworfener, sowohl fachwissenschaftlich und -didaktisch als auch medienpädagogisch begründeter Fortbildungsmassnahmen für Bildungspersonal in der Pflege genutzt werden kann. Neben der interdisziplinären Entwicklung der Fortbildungsmassnahme wird ein innovativer Ansatz zur niederschwelligen und praxisorientierten Erstellung und Nutzung von 360°-VR-Szenarien in der Pflegeausbildung vorgestellt.
    @article{Bartolles_Kamin_Meyer_Pfeiffer_2022,
    title={VR-basierte Digital Reusable Learning Objects: Ein interdisziplinäres Fortbildungskonzept für Bildungspersonal in der Pflegebildung},
    volume={47},
    url={https://www.medienpaed.com/article/view/1358},
    DOI={10.21240/mpaed/47/2022.04.07.X},
    abstract={Seit einigen Jahren finden VR-Technologien Einzug in die Gesundheitsberufe. Häufig wird der Einsatz durch Forschungsprojekte begleitet, eine mediendidaktische Einbettung oder lerntheoretische Begründung steht hier jedoch zumeist aus. Hinzu kommt, dass Schulungsmassnahmen zur Nutzung von VR-Technologien bisweilen den Fokus auf die Vermittlung von instrumentell-qualifikatorischen Bedienfähigkeiten legen. Der nachfolgende Artikel zeigt auf, wie das Modell des Technological Pedagogical Content Knowledge (TPACK) von Mishra und Koehler (2006) als Grundlage für die Konzeption interdisziplinär entworfener, sowohl fachwissenschaftlich und -didaktisch als auch medienpädagogisch begründeter Fortbildungsmassnahmen für Bildungspersonal in der Pflege genutzt werden kann. Neben der interdisziplinären Entwicklung der Fortbildungsmassnahme wird ein innovativer Ansatz zur niederschwelligen und praxisorientierten Erstellung und Nutzung von 360°-VR-Szenarien in der Pflegeausbildung vorgestellt.},
    number={AR/VR - Part 1},
    journal={MedienPädagogik: Zeitschrift für Theorie und Praxis der Medienbildung},
    author={Bartolles, Maureen and Kamin, Anna-Maria and Meyer, Leonard and Pfeiffer, Thies},
    year={2022},
    month={Apr.},
    pages={138–156} }
  • U. Hejna, C. Hainke, S. Seeling, and T. Pfeiffer, “Welche Merkmale zeigt eine vollimmersive Mehrpersonen-VR-Simulation im Vergleich zum Einsatz von Videokonferenzsoftware in Gruppenarbeitsprozessen?,” MedienPädagogik: Zeitschrift für Theorie und Praxis der Medienbildung, vol. 47, iss. AR/VR – Part 1, p. 220–245, 2022. doi:10.21240/mpaed/47/2022.04.11.X
    [BibTeX] [Abstract] [Download PDF]
    Der Einsatz von vollimmersiven VR-Lernumgebungen fördert bei Lernenden die individuellen Fähigkeiten und ihr Vorwissen. Konkrete Lerneffekte und Integrationskonzepte sind jedoch noch nicht ausreichend untersucht. Im Rahmen eines vom BMBF geförderten Forschungsprojektes soll mit diesem Beitrag deshalb der Frage nachgegangen werden: Welche didaktisch-gestalterischen sowie kommunikativ-interaktiven Unterschiede zeigen vollimmersive virtuelle Lernumgebungen gegenüber dem Einsatz von Videokonferenzsoftware im Kontext der Gruppenarbeit? Das Ziel ist es, den Einsatz von Multiplayer-VR-Szenarien der Nutzung von Videokonferenztools für Gruppenarbeitsprozesse im Rahmen der Fallarbeit gegenüberzustellen und deren Vor- und Nachteile aufzuzeigen. Die Ergebnisse zeigen, dass sich für Gruppenarbeitsprozesse in beiden Formaten Vor- und Nachteile finden lassen. Die Umsetzung des Konzeptes der Fallarbeit fällt jedoch in beiden Formaten positiv aus. Folglich ist der Erfolg einer Gruppenarbeit von der konzeptionellen Einbindung der Methode in den Lehrkontext abhängig, sodass die Form der Umsetzung vorwiegend Einfluss auf die Performanz nimmt. Zukünftig gilt es, konkrete Implementierungskonzepte für den Einsatz von VR-Anwendungen in der Lehre zu entwickeln und zu erproben.
    @article{Hejna_Hainke_Seeling_Pfeiffer_2022,
    title={Welche Merkmale zeigt eine vollimmersive Mehrpersonen-VR-Simulation im Vergleich zum Einsatz von Videokonferenzsoftware in Gruppenarbeitsprozessen?},
    volume={47},
    url={https://www.medienpaed.com/article/view/1363},
    DOI={10.21240/mpaed/47/2022.04.11.X},
    abstract={Der Einsatz von vollimmersiven VR-Lernumgebungen fördert bei Lernenden die individuellen Fähigkeiten und ihr Vorwissen. Konkrete Lerneffekte und Integrationskonzepte sind jedoch noch nicht ausreichend untersucht. Im Rahmen eines vom BMBF geförderten Forschungsprojektes soll mit diesem Beitrag deshalb der Frage nachgegangen werden: Welche didaktisch-gestalterischen sowie kommunikativ-interaktiven Unterschiede zeigen vollimmersive virtuelle Lernumgebungen gegenüber dem Einsatz von Videokonferenzsoftware im Kontext der Gruppenarbeit? Das Ziel ist es, den Einsatz von Multiplayer-VR-Szenarien der Nutzung von Videokonferenztools für Gruppenarbeitsprozesse im Rahmen der Fallarbeit gegenüberzustellen und deren Vor- und Nachteile aufzuzeigen. Die Ergebnisse zeigen, dass sich für Gruppenarbeitsprozesse in beiden Formaten Vor- und Nachteile finden lassen. Die Umsetzung des Konzeptes der Fallarbeit fällt jedoch in beiden Formaten positiv aus. Folglich ist der Erfolg einer Gruppenarbeit von der konzeptionellen Einbindung der Methode in den Lehrkontext abhängig, sodass die Form der Umsetzung vorwiegend Einfluss auf die Performanz nimmt. Zukünftig gilt es, konkrete Implementierungskonzepte für den Einsatz von VR-Anwendungen in der Lehre zu entwickeln und zu erproben.},
    number={AR/VR - Part 1},
    journal={MedienPädagogik: Zeitschrift für Theorie und Praxis der Medienbildung},
    author={Hejna, Urszula and Hainke, Carolin and Seeling, Stefanie and Pfeiffer, Thies},
    year={2022},
    month={Apr.},
    pages={220–245}
    }
  • M. Buhr, T. Pfeiffer, D. Reiners, C. Cruz-Neira, and B. Jung, “Real-Time Aspects of VR Systems,” in Virtual and Augmented Reality (VR/AR), Springer, 2022, p. 245–289. doi:10.1007/978-3-030-79062-2_7
    [BibTeX] [Download PDF]
    @incollection{buhr2022real,
    title={Real-Time Aspects of VR Systems},
    author={Buhr, Mathias and Pfeiffer, Thies and Reiners, Dirk and Cruz-Neira, Carolina and Jung, Bernhard},
    booktitle={Virtual and Augmented Reality (VR/AR)},
    pages={245--289},
    year={2022},
    publisher={Springer},
    url="https://link.springer.com/chapter/10.1007/978-3-030-79062-2_7",
    doi="https://doi.org/10.1007/978-3-030-79062-2_7"
    }
  • J. L. Domínguez Alfaro, S. Gantois, J. Blattgerste, R. De Croon, K. Verbert, T. Pfeiffer, and P. Van Puyvelde, “Mobile Augmented Reality Laboratory for Learning Acid–Base Titration,” Journal of Chemical Education, 2022. doi:10.1021/acs.jchemed.1c00894
    [BibTeX] [Abstract] [Download PDF]
    Traditionally, laboratory practice aims to establish schemas learned by students in theoretical courses through concrete experiences. However, access to laboratories might not always be available to students. Therefore, it is advantageous to diversify the tools that students could use to train practical skills. This technology report describes the design, development, and first testing of a mobile augmented reality application that enables a hands-on learning experience of a titration experiment. Additionally, it presents the extension of the TrainAR framework for chemical education through the implementation of specific domain features, i.e., logbook, graph, and practical oriented hints. To test the application, 15 participants were recruited from five different high schools and two universities in Belgium. The findings reflect that the MAR Lab app was well-received by the users. In addition, they valued the design elements (e.g., logbook and multiple-choice questions), and the system has “good” usability (SUS score 72.8, SD = 14.0). Nevertheless, the usability and learners’ experience can be improved by tackling technical problems, providing more explicit instructions for subtasks, and modifying certain features. Therefore, future development will concentrate on improving upon these shortcomings, adding additional levels to target a larger audience, and evaluating the improvements’ effects with more participants.
    @article{doi:10.1021/acs.jchemed.1c00894,
    author = {Domínguez Alfaro, Jessica Lizeth and Gantois, Stefanie and Blattgerste, Jonas and De Croon, Robin and Verbert, Katrien and Pfeiffer, Thies and Van Puyvelde, Peter},
    title = {Mobile Augmented Reality Laboratory for Learning Acid–Base Titration},
    journal = {Journal of Chemical Education},
    year = {2022},
    doi = {10.1021/acs.jchemed.1c00894},
    abstract={Traditionally, laboratory practice aims to establish schemas learned by students in theoretical courses through concrete experiences. However, access to laboratories might not always be available to students. Therefore, it is advantageous to diversify the tools that students could use to train practical skills. This technology report describes the design, development, and first testing of a mobile augmented reality application that enables a hands-on learning experience of a titration experiment. Additionally, it presents the extension of the TrainAR framework for chemical education through the implementation of specific domain features, i.e., logbook, graph, and practical oriented hints. To test the application, 15 participants were recruited from five different high schools and two universities in Belgium. The findings reflect that the MAR Lab app was well-received by the users. In addition, they valued the design elements (e.g., logbook and multiple-choice questions), and the system has “good” usability (SUS score 72.8, SD = 14.0). Nevertheless, the usability and learners’ experience can be improved by tackling technical problems, providing more explicit instructions for subtasks, and modifying certain features. Therefore, future development will concentrate on improving upon these shortcomings, adding additional levels to target a larger audience, and evaluating the improvements’ effects with more participants.},
    URL = {https://doi.org/10.1021/acs.jchemed.1c00894},
    eprint = {https://doi.org/10.1021/acs.jchemed.1c00894}
    }
  • P. Mavrogiorgou, P. Böhme, V. Hooge, T. Pfeiffer, and G. Juckel, “Virtual reality in teaching of psychiatry and psychotherapy at medical school,” Der Nervenarzt, p. 1–7, 2021. doi:10.1007/s00115-021-01227-5
    [BibTeX] [Abstract]
    Die seit nunmehr über einem Jahr die Menschheit geißelnde Corona-Pandemie scheint den digitalen und medientechnischen Fortschritt in Windeseile zu beflügeln. Der Einsatz und Gebrauch neuer Medien vereinfacht unser tägliches Miteinander, zumal das Virus eine unsichtbare und nicht minder lebensgefährliche Barriere zwischen uns schafft. Vor allem im Kontext unserer Arbeit mit Menschen, die Hilfe bedürfen, und dem dazu überaus notwendigen kollegialen Austausch von Informationen stellen die technischen Möglichkeiten im Sinne einer „positiven Technologie“ gerade in dieser Zeit der sozialen Distanzhaltung einen immensen Benefit dar [18]. Dies wiederum motiviert, die bereits vorhandenen Technologien noch effektiver zu nutzen und weiter in der Entwicklung voranzutreiben. Vor allem im Bereich der Medizin, hier speziell der Psychiatrie und Psychotherapie, spielt die Nutzung von Technik und Medien, wie z. B. die verschiedenen videotechnologischen Methoden zur Aufnahme und Darstellung psychopathologischer Verhaltensänderungen, aber auch zu Lehrzwecken, schon lange eine große Rolle [5]. Daher verwundert es nicht, dass in diesem Bereich schon seit einer Reihe von Jahren auch neue technische Verfahren, wie die VR(„virtual reality“)-Technologie, zur Diagnostik und Therapie psychischer Störungen genutzt werden [2, 17]. Trotzdem muss man einschränkend festhalten, dass bis dato die VR-Technologie sowohl klinisch, wissenschaftlich und als auch als Lehr- und Lernmethode für Studierende der Medizin sowie Facharztkandidaten keine flächendeckende oder gar etablierte Vorgehensweise darstellt [13, 14]. In der folgenden Arbeit soll daher die VR-Technologie hinsichtlich ihrer bisherigen und zukünftigen Einsatzmöglichkeiten vor allem im Fachgebiet von Psychiatrie und Psychotherapie am Beispiel des Bochumer Avatar-Explorationsprojektes („AVEX“) als eine nützliche Möglichkeit in der Lehre von Medizinstudierenden, aber auch in der Fort- und Weiterbildung von Weiterbildungskandidaten und sonst in der Psychiatrie Tätigen näher dargestellt werden.
    @article{mavrogiorgou2021virtual,
    title={Virtual reality in teaching of psychiatry and psychotherapy at medical school},
    author={Mavrogiorgou, Paraskevi and B{\"o}hme, Pierre and Hooge, Vitalij and Pfeiffer, Thies and Juckel, Georg},
    journal={Der Nervenarzt},
    year = {2021},
    pages={1--7},
    abstract={Die seit nunmehr über einem Jahr die Menschheit geißelnde Corona-Pandemie scheint den digitalen und medientechnischen Fortschritt in Windeseile zu beflügeln. Der Einsatz und Gebrauch neuer Medien vereinfacht unser tägliches Miteinander, zumal das Virus eine unsichtbare und nicht minder lebensgefährliche Barriere zwischen uns schafft. Vor allem im Kontext unserer Arbeit mit Menschen, die Hilfe bedürfen, und dem dazu überaus notwendigen kollegialen Austausch von Informationen stellen die technischen Möglichkeiten im Sinne einer „positiven Technologie“ gerade in dieser Zeit der sozialen Distanzhaltung einen immensen Benefit dar [18]. Dies wiederum motiviert, die bereits vorhandenen Technologien noch effektiver zu nutzen und weiter in der Entwicklung voranzutreiben. Vor allem im Bereich der Medizin, hier speziell der Psychiatrie und Psychotherapie, spielt die Nutzung von Technik und Medien, wie z. B. die verschiedenen videotechnologischen Methoden zur Aufnahme und Darstellung psychopathologischer Verhaltensänderungen, aber auch zu Lehrzwecken, schon lange eine große Rolle [5]. Daher verwundert es nicht, dass in diesem Bereich schon seit einer Reihe von Jahren auch neue technische Verfahren, wie die VR(„virtual reality“)-Technologie, zur Diagnostik und Therapie psychischer Störungen genutzt werden [2, 17]. Trotzdem muss man einschränkend festhalten, dass bis dato die VR-Technologie sowohl klinisch, wissenschaftlich und als auch als Lehr- und Lernmethode für Studierende der Medizin sowie Facharztkandidaten keine flächendeckende oder gar etablierte Vorgehensweise darstellt [13, 14]. In der folgenden Arbeit soll daher die VR-Technologie hinsichtlich ihrer bisherigen und zukünftigen Einsatzmöglichkeiten vor allem im Fachgebiet von Psychiatrie und Psychotherapie am Beispiel des Bochumer Avatar-Explorationsprojektes („AVEX“) als eine nützliche Möglichkeit in der Lehre von Medizinstudierenden, aber auch in der Fort- und Weiterbildung von Weiterbildungskandidaten und sonst in der Psychiatrie Tätigen näher dargestellt werden.},
    doi={10.1007/s00115-021-01227-5}
    }
  • L. Stubbemann, R. Refflinghaus, and T. Pfeiffer, “Eye-Tracking zur Kundenanforderungsvalidierung im Produktentwicklungsprozess,” in Qualitätsmanagement in den 20er Jahren – Trends und Perspektiven, Berlin, Heidelberg, 2021, p. 146–165. doi:10.1007/978-3-662-63243-7_8
    [BibTeX] [Abstract] [Download PDF]
    High-quality products can be achieved by developing product characteristics in line with customer requirements. To validate the conformity of products with these requirements during the product development process, objective biometric methods such as eye tracking are increasingly being used. This paper therefore gives an overview of the use of eye tracking in the experimental validation of product designs. Building on this, a concept for eye-tracking-supported validation of customer requirements is presented and tested in a first feasibility study. Based on the findings, it is shown how the precision, robustness and richness of customer requirement analyses can be increased through the use of eye tracking. This research thus lays a foundation for more objective and less laborious customer requirement analyses, paving the way towards increased customer co-creation and quality-oriented product development for consumer products as well.
    @InProceedings{10.1007/978-3-662-63243-7_8,
    author="Stubbemann, Lena
    and Refflinghaus, Robert
    and Pfeiffer, Thies",
    editor="Leyendecker, Bert",
    title="Eye-Tracking zur Kundenanforderungsvalidierung im Produktentwicklungsprozess",
    booktitle="Qualitätsmanagement in den 20er Jahren - Trends und Perspektiven",
    year="2021",
    publisher="Springer Berlin Heidelberg",
    address="Berlin, Heidelberg",
    pages="146--165",
    abstract="Qualitativ hochwertige Produkte können durch die Entwicklung von Produktmerkmalen in Übereinstimmung mit den Kundenwünschen erreicht werden. Um die Konformität von Produkten mit den Anforderungen während des Produktentwicklungsprozesses zu validieren, werden zunehmend objektive biometrische Verfahren wie Eye-Tracking eingesetzt. Das vorliegende Papier gibt daher einen Überblick über den Einsatz von Eye-Tracking in der experimentellen Validierung von Produktdesigns. Darauf aufbauend wird ein Konzept zur Eye-Tracking-unterstützten Kundenanforderungsvalidierung vorgestellt und anhand einer ersten Machbarkeitsstudie überprüft. Anhand der Erkenntnisse wird dargelegt, wie die Präzision, Belastbarkeit und Reichhaltigkeit von Kundenanforderungsanalysen durch den Einsatz von Eye-Tracking erhöht werden können. Die Forschungsarbeiten legen damit einen Grundstein für objektivere und aufwandsärmere Kundenanforderungsanalysen. Sie ebnen so den Weg hin zu einer verstärkten Customer-Co-Creation und einer qualitätsorientierten Produktentwicklung auch für Konsumprodukte.",
    isbn="978-3-662-63243-7",
    doi="https://doi.org/10.1007/978-3-662-63243-7_8",
    url="https://link.springer.com/chapter/10.1007%2F978-3-662-63243-7_8"
    }
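    The entry above maps gaze data onto requirement-relevant product features. As a minimal illustration (not the authors' tooling), the following Python sketch aggregates fixation dwell time per area of interest (AOI) on a stimulus image; the AOI names, coordinates and fixation data are invented for the example.

    # Illustrative sketch: dwell time per area of interest (AOI).
    # AOIs and fixations are hypothetical; a real study would export them
    # from an eye tracker and tie each AOI to a customer requirement.
    from collections import defaultdict
    from typing import Dict, List, Tuple

    # AOIs as axis-aligned boxes on the stimulus image: (x_min, y_min, x_max, y_max) in pixels
    AOIS: Dict[str, Tuple[int, int, int, int]] = {
        "handle": (100, 200, 260, 420),
        "display": (300, 80, 520, 220),
    }

    def aoi_of(x: float, y: float) -> str:
        for name, (x0, y0, x1, y1) in AOIS.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                return name
        return "none"

    def dwell_per_aoi(fixations: List[Tuple[float, float, float]]) -> Dict[str, float]:
        """fixations: one (x_px, y_px, duration_ms) triple per fixation."""
        dwell: Dict[str, float] = defaultdict(float)
        for x, y, dur in fixations:
            dwell[aoi_of(x, y)] += dur
        return dict(dwell)

    print(dwell_per_aoi([(150, 300, 240.0), (400, 120, 310.0), (150, 310, 180.0)]))
    # -> {'handle': 420.0, 'display': 310.0}

    Features that accumulate little dwell time despite being tied to stated customer requirements would then be candidates for closer inspection in the validation step.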
  • M. Meißner, J. Pfeiffer, C. Peukert, H. Dietrich, and T. Pfeiffer, “How virtual reality affects consumer choice,” Journal of Business Research, vol. 117, pp. 219-231, 2020. doi:10.1016/j.jbusres.2020.06.004
    [BibTeX] [Abstract] [Download PDF]
    With high-immersive virtual reality (VR) systems approaching mass markets, companies are seeking to better understand how consumers behave when shopping in VR. A key feature of high-immersive VR environments is that they can create a strong illusion of reality to the senses, which could substantially change consumer choice behavior compared to online shopping. We compare consumer choice from virtual shelves in two environments: (i) a high-immersive VR environment using a head-mounted display and hand-held controllers with (ii) a low-immersive environment showing products as rotatable 3-D models on a desktop computer screen. We use an incentive-aligned choice experiment to investigate how immersion affects consumer choice. Our investigation comprises three key choice characteristics: variety-seeking, price-sensitivity, and satisfaction with the choice made. The empirical results provide evidence that consumers in high-immersive VR choose a larger variety of products and are less price-sensitive. Choice satisfaction, however, did not increase in high-immersive VR.
    @article{MEINER2020219,
    title = {How virtual reality affects consumer choice},
    journal = {Journal of Business Research},
    volume = {117},
    pages = {219-231},
    year = {2020},
    issn = {0148-2963},
    doi = {https://doi.org/10.1016/j.jbusres.2020.06.004},
    url = {https://www.sciencedirect.com/science/article/pii/S0148296320303684},
    author = {Martin Meißner and Jella Pfeiffer and Christian Peukert and Holger Dietrich and Thies Pfeiffer},
    keywords = {Virtual reality, Variety-seeking, Price-sensitivity, Satisfaction, Conjoint analysis},
    abstract = {With high-immersive virtual reality (VR) systems approaching mass markets, companies are seeking to better understand how consumers behave when shopping in VR. A key feature of high-immersive VR environments is that they can create a strong illusion of reality to the senses, which could substantially change consumer choice behavior compared to online shopping. We compare consumer choice from virtual shelves in two environments: (i) a high-immersive VR environment using a head-mounted display and hand-held controllers with (ii) a low-immersive environment showing products as rotatable 3-D models on a desktop computer screen. We use an incentive-aligned choice experiment to investigate how immersion affects consumer choice. Our investigation comprises three key choice characteristics: variety-seeking, price-sensitivity, and satisfaction with the choice made. The empirical results provide evidence that consumers in high-immersive VR choose a larger variety of products and are less price-sensitive. Choice satisfaction, however, did not increase in high-immersive VR.}
    }
  • J. Fegert, J. Pfeiffer, P. Reitzer, T. Götz, A. Hariharan, N. Pfeiffer-Leßmann, P. Renner, T. Pfeiffer, and C. Weinhardt, “Ich sehe was, was du auch siehst: Über die Möglichkeiten von Augmented und Virtual Reality für die digitale Beteiligung von Bürger:innen in der Bau- und Stadtplanung,” HMD Praxis der Wirtschaftsinformatik, 2021. doi:10.1365/s40702-021-00772-6
    [BibTeX] [Abstract] [Download PDF]
    Digital government opens up opportunities to critically reflect on administrative and governmental processes and to rethink them accordingly. Whereas citizen participation processes were subject to numerous hurdles in the past, e-participation makes it possible to combine them with modern technologies that enable low-threshold involvement. The research project Take Part, funded by the German Federal Ministry of Education and Research, investigates innovative forms of citizen participation in urban and construction planning using augmented and virtual reality (AR and VR). The main aim is to create new incentives, to motivate citizens to participate and thereby to reduce the potential for conflict around construction projects. With the app developed within Take Part, citizens can discuss construction projects, give feedback or vote on them while the object of participation is presented to them vividly in AR and VR. At the same time, initiators can use a participation ecosystem to configure the participation process for a given construction project by combining and configuring existing modules and purchasing suitable services, such as 3D modelling. This article presents the concrete technological developments (including outdoor AR tracking and spatially anchored discussions) as well as the participation ecosystem (service development and execution platform). The challenge of developing an e-participation app that integrates different interaction concepts while offering a convincing user experience is also addressed. Finally, the potential of such a solution for digital participation in local government is discussed, especially with regard to users' increased imagination and motivation to participate, and placed in the context of the Covid-19 pandemic.
    @Article{takepart2021,
    author={Jonas Fegert and Jella Pfeiffer and Pauline Reitzer and Tobias Götz and Anuja Hariharan and Nadine Pfeiffer-Leßmann and Patrick Renner and Thies Pfeiffer and Christof Weinhardt},
    title={Ich sehe was, was du auch siehst: Über die Möglichkeiten von Augmented und Virtual Reality für die digitale Beteiligung von Bürger:innen in der Bau- und Stadtplanung},
    journal={HMD Praxis der Wirtschaftsinformatik},
    doi={10.1365/s40702-021-00772-6},
    year={2021},
    abstract={Digital Government eröffnet Möglichkeiten, Verwaltungs- und Regierungsprozesse kritisch zu reflektieren und sie entsprechend neu zu denken. Oblagen Bürgerbeteiligungsprozesse in der Vergangenheit zahlreichen Hürden, bietet die e-Partizipation Möglichkeiten, sie mit modernen Technologien zu verbinden, die eine niedrigschwellige Teilhabe ermöglichen. In dem Forschungsprojekt Take Part, gefördert durch das Bundesministerium für Bildung und Forschung, werden innovative Formen der Beteiligung von Bürger:innen in der Stadt- und Bauplanung mithilfe von Augmented und Virtual Reality (AR und VR) erforscht. Dabei geht es vor allem darum, neue Anreize zu schaffen, Bürger:innen zur Beteiligung zu motivieren und durch diese das Konfliktpotential um Bauprojekte zu reduzieren. Mithilfe der innerhalb von Take Part entwickelten App können Bürger:innen Bauvorhaben diskutieren, Feedback geben oder über sie abstimmen während sie dabei den Beteiligungsgegenstand anschaulich in AR und VR präsentiert bekommen. Zugleich können auch Initiator:innen mithilfe eines Partizipationsökosystems die Beteiligung im jeweiligen Bauvorhaben konfigurieren, indem sie vorhandene Module kombinieren und konfigurieren und passende Dienstleistungen, wie beispielsweise 3D Modellierungen, einkaufen. In diesem Beitrag sollen die konkreten technologischen Entwicklungen (u.a. Outdoor-AR-Tracking und räumlich verankerte Diskussionen), sowie das Partizipationsökosystem (Dienstentwicklungs- und Ausführungsplattform) vorgestellt werden. Auf die Herausforderung, eine e-Partizipations App zu entwickeln, die die die Möglichkeit bietet, verschiedene Interaktionskonzepte ineinander zu integrieren und gleichzeitig eine überzeugende User-Experience bietet, soll ebenfalls eingegangen werden. Anschließend wird das Potenzial einer solchen Lösung für die digitale Mitbestimmung in lokaler Verwaltung vor allem in Bezug auf gesteigerte Vorstellungskraft und Motivation zur Teilhabe für Nutzer:innen diskutiert und in den Kontext der Covid-19 Pandemie gesetzt.},
    url = {https://mixality.de/wp-content/uploads/2021/08/2021_-_Fegert_et_al-2021-HMD_Praxis_der_Wirtschaftsinformatik.pdf}
    }
  • C. R. Scotto, A. Moscatelli, T. Pfeiffer, and M. O. Ernst, “Visual pursuit biases tactile velocity perception,” Journal of Neurophysiology, 2021. doi:10.1152/jn.00541.2020
    [BibTeX] [Abstract] [Download PDF]
    During a smooth pursuit eye movement of a target stimulus, a briefly flashed stationary background appears to move in the opposite direction as the eye’s motion ― an effect known as the Filehne illusion. Similar illusions occur in audition, in the vestibular system, and in touch. Recently, we found that the movement of a surface perceived from tactile slip was biased if this surface was sensed with the hand. This suggests a common process of motion perception between the eye and the hand. In the present study, we further assessed the interplay between these effectors by investigating a novel paradigm that associated an eye pursuit with a tactile motion over the skin of the fingertip. We showed that smooth pursuit eye movements can bias the perceived direction of motion in touch. Similarly to the classical report from the Filehne illusion in vision, a static tactile surface was perceived as moving rightward with a leftward pursuit eye movement, and vice versa. However, this time the direction of surface motion was perceived from touch. The biasing effects of eye pursuit on tactile motion were modulated by the reliability of the tactile and visual estimates, as predicted by a Bayesian model of motion perception. Overall, these results support a modality- and effector-independent process with common representations for motion perception.
    @Article{Scotto2021,
    author={Cecile R. Scotto and Alessandro Moscatelli and Thies Pfeiffer and Marc O. Ernst},
    title={Visual pursuit biases tactile velocity perception},
    year={2021},
    journal = {Journal of Neurophysiology},
    URL = {https://journals.physiology.org/doi/abs/10.1152/jn.00541.2020},
    DOI = {10.1152/jn.00541.2020},
    abstract = {During a smooth pursuit eye movement of a target stimulus, a briefly flashed stationary background appears to move in the opposite direction as the eye’s motion ― an effect known as the Filehne illusion. Similar illusions occur in audition, in the vestibular system, and in touch. Recently, we found that the movement of a surface perceived from tactile slip was biased if this surface was sensed with the hand. This suggests a common process of motion perception between the eye and the hand. In the present study, we further assessed the interplay between these effectors by investigating a novel paradigm that associated an eye pursuit with a tactile motion over the skin of the fingertip. We showed that smooth pursuit eye movements can bias the perceived direction of motion in touch. Similarly to the classical report from the Filehne illusion in vision, a static tactile surface was perceived as moving rightward with a leftward pursuit eye movement, and vice versa. However, this time the direction of surface motion was perceived from touch. The biasing effects of eye pursuit on tactile motion were modulated by the reliability of the tactile and visual estimates, as predicted by a Bayesian model of motion perception. Overall, these results support a modality- and effector-independent process with common representations for motion perception.}
    }
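    The “Bayesian model of motion perception” mentioned in the entry above is commonly formalized as reliability-weighted cue combination; the following is the standard form of that model (a sketch for orientation, not necessarily the exact formulation fitted in the paper):

    \hat{v} = w_T \, \hat{v}_T + w_E \, \hat{v}_E, \qquad w_T = \frac{1/\sigma_T^2}{1/\sigma_T^2 + 1/\sigma_E^2}, \qquad w_E = 1 - w_T

    Here \hat{v}_T is the velocity estimate derived from tactile slip, \hat{v}_E the estimate associated with the pursuit eye movement, and \sigma_T^2, \sigma_E^2 their respective variances: the noisier (less reliable) the tactile estimate, the smaller its weight w_T and the stronger the bias induced by the concurrent pursuit, consistent with the reliability-dependent modulation reported above.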
  • C. Hainke and T. Pfeiffer, “Eye Movements in VR Training: Expertise Measurement and its Meaning for Adaptive Chess Training,” in Advances in Usability, User Experience, Wearable and Assistive Technology, 2021, p. 123–130.
    [BibTeX] [Abstract] [Download PDF]
    Observing behavior can provide insights into mental states and cognitive processes. In the context of expertise measurement, research is taking advantage of analyzing gaze-based data to understand those underlying processes. The movements of the eyes represent search strategies in problem-solving scenarios, which can be used to distinguish a novice from a person that is more experienced with the topic and the solving process. Applications such as learning environments can be improved by taking expertise into account, as expertise-related instructions and design-decisions could then be adjusted to the learner’s individual needs. In the following, we will discuss the meaning of expertise and instruction design for learning in more detail. Prior work has shown that different groups of expertise can be distinguished using eye movements. We verify this for virtual reality-based chess trainings by presenting a study on chess problem solving in VR. Based on these findings, we discuss suggestions for the implementation of adaptive learning applications on the example of chess.
    @InProceedings{hainkepfeifferahfe21,
    author={Hainke, Carolin and Pfeiffer, Thies},
    title={{E}ye {M}ovements in {VR} {T}raining: {E}xpertise {M}easurement and its {M}eaning for {A}daptive {C}hess {T}raining},
    booktitle={Advances in Usability, User Experience, Wearable and Assistive Technology},
    year={2021},
    month={07},
    pages={123--130},
    publisher={{S}pringer {I}nternational {P}ublishing},
    abstract={Observing behavior can provide insights into mental states and cognitive processes. In the context of expertise measurement, research is taking advantage of analyzing gaze-based data to understand those underlying processes. The movements of the eyes represent search strategies in problem-solving scenarios, which can be used to distinguish a novice from a person that is more experienced with the topic and the solving process. Applications such as learning environments can be improved by taking expertise into account, as expertise-related instructions and design-decisions could then be adjusted to the learner’s individual needs. In the following, we will discuss the meaning of expertise and instruction design for learning in more detail. Prior work has shown that different groups of expertise can be distinguished using eye movements. We verify this for virtual reality-based chess trainings by presenting a study on chess problem solving in VR. Based on these findings, we discuss suggestions for the implementation of adaptive learning applications on the example of chess.},
    url = {https://mixality.de/wp-content/uploads/2021/08/Hainke-Pfeiffer2021_Chapter_EyeMovementsInVRTrainingExpert.pdf}
    }
  • J. Blattgerste, K. Luksch, C. Lewa, and T. Pfeiffer, “TrainAR: A Scalable Interaction Concept and Didactic Framework for Procedural Trainings Using Handheld Augmented Reality,” Multimodal Technologies and Interaction, vol. 5, iss. 7, 2021. doi:10.3390/mti5070030
    [BibTeX] [Abstract] [Download PDF]
    The potential of Augmented Reality (AR) for educational and training purposes is well known. While large-scale deployments of head-mounted AR headsets remain challenging due to technical limitations and cost factors, advances in mobile devices and tracking solutions introduce handheld AR devices as a powerful, broadly available alternative, yet with some restrictions. One of the current limitations of AR training applications on handheld AR devices is that most offer rather static experiences, only providing descriptive knowledge with little interactivity. Holistic concepts for the coverage of procedural knowledge are largely missing. The contribution of this paper is twofold. We propose a scalable interaction concept for handheld AR devices with an accompanied didactic framework for procedural training tasks called TrainAR. Then, we implement TrainAR for a training scenario in academics for the context of midwifery and explain the educational theories behind our framework and how to apply it for procedural training tasks. We evaluate and subsequently improve the concept based on three formative usability studies (n = 24), where explicitness, redundant feedback mechanisms and onboarding were identified as major success factors. Finally, we conclude by discussing derived implications for improvements and ongoing and future work.
    @Article{mti5070030,
    AUTHOR = {Blattgerste, Jonas and Luksch, Kristina and Lewa, Carmen and Pfeiffer, Thies},
    TITLE = {Train{AR}: {A} {S}calable {I}nteraction {C}oncept and {D}idactic {F}ramework for {P}rocedural {T}rainings {U}sing {H}andheld {A}ugmented {R}eality},
    JOURNAL = {{M}ultimodal {T}echnologies and {I}nteraction},
    VOLUME = {5},
    YEAR = {2021},
    NUMBER = {7},
    ARTICLE-NUMBER = {30},
    URL = {https://www.mdpi.com/2414-4088/5/7/30},
    ISSN = {2414-4088},
    ABSTRACT = {The potential of Augmented Reality (AR) for educational and training purposes is well known. While large-scale deployments of head-mounted AR headsets remain challenging due to technical limitations and cost factors, advances in mobile devices and tracking solutions introduce handheld AR devices as a powerful, broadly available alternative, yet with some restrictions. One of the current limitations of AR training applications on handheld AR devices is that most offer rather static experiences, only providing descriptive knowledge with little interactivity. Holistic concepts for the coverage of procedural knowledge are largely missing. The contribution of this paper is twofold. We propose a scalabe interaction concept for handheld AR devices with an accompanied didactic framework for procedural training tasks called TrainAR. Then, we implement TrainAR for a training scenario in academics for the context of midwifery and explain the educational theories behind our framework and how to apply it for procedural training tasks. We evaluate and subsequently improve the concept based on three formative usability studies (n = 24), where explicitness, redundant feedback mechanisms and onboarding were identified as major success factors. Finally, we conclude by discussing derived implications for improvements and ongoing and future work.},
    DOI = {10.3390/mti5070030}
    }
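    As a loose illustration of the procedural-training idea in the entry above (explicitly not TrainAR's implementation), a procedural task can be modelled as an ordered list of steps with redundant textual feedback on incorrect actions; the step names and action identifiers below are hypothetical.

    # Illustrative sketch, not TrainAR's code: a procedural training as an
    # ordered sequence of steps with feedback on incorrect actions.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Step:
        instruction: str        # what the trainee should do next
        expected_action: str    # action id reported by the AR interaction layer

    @dataclass
    class ProceduralTraining:
        steps: List[Step]
        current: int = 0
        errors: int = 0

        def perform(self, action: str) -> str:
            step = self.steps[self.current]
            if action != step.expected_action:
                self.errors += 1
                return "try again: " + step.instruction     # redundant textual feedback
            self.current += 1
            if self.current == len(self.steps):
                return "training complete ({} errors)".format(self.errors)
            return "correct, next: " + self.steps[self.current].instruction

    training = ProceduralTraining(steps=[
        Step("Disinfect your hands", "use_disinfectant"),   # hypothetical steps
        Step("Prepare the syringe", "grab_syringe"),
    ])
    print(training.perform("grab_syringe"))      # -> try again: Disinfect your hands
    print(training.perform("use_disinfectant"))  # -> correct, next: Prepare the syringe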
  • S. Garcia Fracaro, P. Chan, T. Gallagher, Y. Tehreem, R. Toyoda, K. Bernaerts, J. Glassey, T. Pfeiffer, B. Slof, S. Wachsmuth, and M. Wilk, “Towards Design Guidelines for Virtual Reality Training for the Chemical Industry,” Education for Chemical Engineers, 2021. doi:10.1016/j.ece.2021.01.014
    [BibTeX] [Abstract] [Download PDF]
    Operator training in the chemical industry is important because of the potentially hazardous nature of procedures and the way operators’ mistakes can have serious consequences on process operation and safety. Currently, operator training is facing some challenges, such as high costs, safety limitations and time constraints. Also, there have been some indications of a lack of engagement of employees during mandatory training. Immersive technologies can provide solutions to these challenges. Specifically, virtual reality (VR) has the potential to improve the way chemical operators experience training sessions, increasing motivation, virtually exposing operators to unsafe situations, and reducing classroom training time. In this paper, we present research being conducted to develop a virtual reality training solution as part of the EU Horizon 2020 CHARMING Project, a project focusing on the education of current and future chemical industry stakeholders. This paper includes the design principles for a virtual reality training environment including the features that enhance the effectiveness of virtual reality training such as game-based learning elements, learning analytics, and assessment methods. This work can assist those interested in exploring the potential of virtual reality training environments in the chemical industry from a multidisciplinary perspective.
    @article{GARCIAFRACARO2021,
    title = {{T}owards {D}esign {G}uidelines for {V}irtual {R}eality {T}raining for the {C}hemical {I}ndustry},
    journal = {{E}ducation for {C}hemical {E}ngineers},
    year = {2021},
    issn = {1749-7728},
    doi = {https://doi.org/10.1016/j.ece.2021.01.014},
    url = {https://www.sciencedirect.com/science/article/pii/S1749772821000142},
    author = {Sofia {Garcia Fracaro} and Philippe Chan and Timothy Gallagher and Yusra Tehreem and Ryo Toyoda and Kristel Bernaerts and Jarka Glassey and Thies Pfeiffer and Bert Slof and Sven Wachsmuth and Michael Wilk},
    keywords = {Virtual Reality, Chemical industry, Operator training, Learning analytics, Gamebased learning, assessment},
    abstract = {Operator training in the chemical industry is important because of the potentially hazardous nature of procedures and the way operators' mistakes can have serious consequences on process operation and safety. Currently, operator training is facing some challenges, such as high costs, safety limitations and time constraints. Also, there have been some indications of a lack of engagement of employees during mandatory training. Immersive technologies can provide solutions to these challenges. Specifically, virtual reality (VR) has the potential to improve the way chemical operators experience training sessions, increasing motivation, virtually exposing operators to unsafe situations, and reducing classroom training time. In this paper, we present research being conducted to develop a virtual reality training solution as part of the EU Horizon 2020 CHARMING Project, a project focusing on the education of current and future chemical industry stakeholders. This paper includes the design principles for a virtual reality training environment including the features that enhance the effectiveness of virtual reality training such as game-based learning elements, learning analytics, and assessment methods. This work can assist those interested in exploring the potential of virtual reality training environments in the chemical industry from a multidisciplinary perspective.}
    }
  • A. M. Monteiro and T. Pfeiffer, “Virtual Reality in Second Language Acquisition Research: A Case on Amazon Sumerian.” 2020, pp. 125-128. doi:10.33965/icedutech2020_202002R018
    [BibTeX] [Abstract] [Download PDF]
    Virtual reality (VR) has gained increasing academic attention in recent years, and a possible reason for that might be its spread-out applications across different sectors of life. From the advent of the WebVR 1.0 API (application program interface), released in 2016, it has become easier for developers, without extensive knowledge of programming and modeling of 3D objects, to build and host applications that can be accessed anywhere by a minimum setup of devices. The development of WebVR, now continued as WebXR, is, therefore, especially relevant for research on education and teaching since experiments in VR had required not only expertise in the computer science domain but were also dependent on state-of-the-art hardware, which could have been limiting aspects to researchers and teachers. This paper presents the result of a project conducted at CITEC (Cluster of Excellence Cognitive Interaction Technology), Bielefeld University, Germany, which intended to teach English for a specific purpose in a VR environment using Amazon Sumerian, a web-based service. Contributions and limitations of this project are also discussed.
    @inproceedings{monteiro20,
    author = {Monteiro, Ana Maria and Pfeiffer, Thies},
    year = {2020},
    month = {02},
    pages = {125-128},
    title = {Virtual Reality in Second Language Acquisition Research: A Case on Amazon Sumerian},
    doi = {10.33965/icedutech2020_202002R018},
    url = {http://www.iadisportal.org/digital-library/virtual-reality-in-second-language-acquisition-research-a-case-on-amazon-sumerian},
    keywords = {Virtual Reality, Second Language Acquisition, WebVR},
    abstract = {Virtual reality (VR) has gained increasing academic attention in recent years, and a possible reason for that might be its spread-out applications across different sectors of life. From the advent of the WebVR 1.0 API (application program interface), released in 2016, it has become easier for developers, without extensive knowledge of programming and modeling of 3D objects, to build and host applications that can be accessed anywhere by a minimum setup of devices. The development of WebVR, now continued as WebXR, is, therefore, especially relevant for research on education and teaching since experiments in VR had required not only expertise in the computer science domain but were also dependent on state-of-the-art hardware, which could have been limiting aspects to researchers and teachers. This paper presents the result of a project conducted at CITEC (Cluster of Excellence Cognitive Interaction Technology), Bielefeld University, Germany, which intended to teach English for a specific purpose in a VR environment using Amazon Sumerian, a web-based service. Contributions and limitations of this project are also discussed.}
    }
  • E. Lampen, J. Lehwald, and T. Pfeiffer, “A Context-Aware Assistance Framework for Implicit Interaction with an Augmented Human,” in Virtual, Augmented and Mixed Reality. Industrial and Everyday Life Applications, Cham, 2020, p. 91–110.
    [BibTeX] [Abstract]
    The automotive industry is currently facing massive challenges. Shorter product life cycles together with mass customization lead to a high complexity for manual assembly tasks. This induces the need for effective manual assembly assistances which guide the worker faultlessly through different assembly steps while simultaneously decrease their completion time and cognitive load. While in the literature a simulation-based assistance visualizing an augmented digital human was proposed, it lacks the ability to incorporate knowledge about the context of an assembly scenario through arbitrary sensor data. Within this paper, a general framework for the modular acquisition, interpretation and management of context is presented. Furthermore, a novel context-aware assistance application in augmented reality is introduced which enhances the previously proposed simulation-based assistance method by several context-aware features. Finally, a preliminary study (N = 6) is conducted to give a first insight into the effectiveness of context-awareness for the simulation-based assistance with respect to subjective perception criteria. The results suggest that the user experience is improved by context-awareness in general and the developed context-aware features were overall perceived as useful in terms of error, time and cognitive load reduction as well as motivational increase. However, the developed software architecture offers potential for improvement and future research considering performance parameters is mandatory.
    @inproceedings{10.1007/978-3-030-49698-2_7,
    author={Lampen, Eva and Lehwald, Jannes and Pfeiffer, Thies},
    editor={Chen, Jessie Y. C. and Fragomeni, Gino},
    title={A Context-Aware Assistance Framework for Implicit Interaction with an Augmented Human},
    booktitle={Virtual, Augmented and Mixed Reality. Industrial and Everyday Life Applications},
    year={2020},
    publisher={Springer International Publishing},
    address={Cham},
    pages={91--110},
    abstract={The automotive industry is currently facing massive challenges. Shorter product life cycles together with mass customization lead to a high complexity for manual assembly tasks. This induces the need for effective manual assembly assistances which guide the worker faultlessly through different assembly steps while simultaneously decrease their completion time and cognitive load. While in the literature a simulation-based assistance visualizing an augmented digital human was proposed, it lacks the ability to incorporate knowledge about the context of an assembly scenario through arbitrary sensor data. Within this paper, a general framework for the modular acquisition, interpretation and management of context is presented. Furthermore, a novel context-aware assistance application in augmented reality is introduced which enhances the previously proposed simulation-based assistance method by several context-aware features. Finally, a preliminary study (N = 6) is conducted to give a first insight into the effectiveness of context-awareness for the simulation-based assistance with respect to subjective perception criteria. The results suggest that the user experience is improved by context-awareness in general and the developed context-aware features were overall perceived as useful in terms of error, time and cognitive load reduction as well as motivational increase. However, the developed software architecture offers potential for improvement and future research considering performance parameters is mandatory.},
    isbn={978-3-030-49698-2}
    }
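    The entry above outlines a modular pipeline of context acquisition, interpretation and management feeding an assistance application. A minimal Python sketch of such a pipeline is given below; the provider, the sensor value and the rule are invented for illustration and are not taken from the paper.

    # Illustrative sketch, not the paper's architecture: modular context
    # acquisition, interpretation and management for an assembly assistant.
    from dataclasses import dataclass, field
    from typing import Callable, Dict, List, Protocol

    class ContextProvider(Protocol):
        name: str
        def read(self) -> Dict[str, float]: ...        # raw sensor values

    @dataclass
    class HandTrackerStub:
        name: str = "hand_tracker"
        def read(self) -> Dict[str, float]:
            return {"hand_to_part_distance_m": 0.12}   # hypothetical sensor value

    @dataclass
    class ContextManager:
        providers: List[ContextProvider]
        rules: List[Callable[[Dict[str, float]], str]] = field(default_factory=list)

        def update(self) -> List[str]:
            context: Dict[str, float] = {}
            for p in self.providers:
                context.update(p.read())               # acquisition
            return [msg for rule in self.rules         # interpretation ->
                    if (msg := rule(context))]         # assistance events

    # Example rule: slow the augmented human down while the worker's hand is far from the part
    def pacing_rule(ctx: Dict[str, float]) -> str:
        return "slow_down_avatar" if ctx.get("hand_to_part_distance_m", 0.0) > 0.1 else ""

    manager = ContextManager(providers=[HandTrackerStub()], rules=[pacing_rule])
    print(manager.update())   # -> ['slow_down_avatar']

    New sensors or assistance features would be added by registering further providers and rules, which is the modularity the framework description above emphasises.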
  • J. Pfeiffer, T. Pfeiffer, M. Meißner, and E. Weiß, “Eye-Tracking-Based Classification of Information Search Behavior Using Machine Learning: Evidence from Experiments in Physical Shops and Virtual Reality Shopping Environments,” Information Systems Research, 2020. doi:10.1287/isre.2019.0907
    [BibTeX] [Abstract] [Download PDF]
    How can we tailor assistance systems, such as recommender systems or decision support systems, to consumers’ individual shopping motives? How can companies unobtrusively identify shopping motives without explicit user input? We demonstrate that eye movement data allow building reliable prediction models for identifying goal-directed and exploratory shopping motives. Our approach is validated in a real supermarket and in an immersive virtual reality supermarket. Several managerial implications of using gaze-based classification of information search behavior are discussed: First, the advent of virtual shopping environments makes using our approach straightforward as eye movement data are readily available in next-generation virtual reality devices. Virtual environments can be adapted to individual needs once shopping motives are identified and can be used to generate more emotionally engaging customer experiences. Second, identifying exploratory behavior offers opportunities for marketers to adapt marketing communication and interaction processes. Personalizing the shopping experience and profiling customers’ needs based on eye movement data promises to further increase conversion rates and customer satisfaction. Third, eye movement-based recommender systems do not need to interrupt consumers and thus do not take away attention from the purchase process. Finally, our paper outlines the technological basis of our approach and discusses the practical relevance of individual predictors.
    @article{pfeiffer2020eyetracking,
    author = {Pfeiffer, Jella and Pfeiffer, Thies and Meißner, Martin and Weiß, Elisa},
    title = {Eye-Tracking-Based Classification of Information Search Behavior Using Machine Learning: Evidence from Experiments in Physical Shops and Virtual Reality Shopping Environments},
    journal = {Information Systems Research},
    year = {2020},
    doi = {10.1287/isre.2019.0907},
    URL = {https://doi.org/10.1287/isre.2019.0907},
    eprint = {https://doi.org/10.1287/isre.2019.0907},
    abstract = { How can we tailor assistance systems, such as recommender systems or decision support systems, to consumers’ individual shopping motives? How can companies unobtrusively identify shopping motives without explicit user input? We demonstrate that eye movement data allow building reliable prediction models for identifying goal-directed and exploratory shopping motives. Our approach is validated in a real supermarket and in an immersive virtual reality supermarket. Several managerial implications of using gaze-based classification of information search behavior are discussed: First, the advent of virtual shopping environments makes using our approach straightforward as eye movement data are readily available in next-generation virtual reality devices. Virtual environments can be adapted to individual needs once shopping motives are identified and can be used to generate more emotionally engaging customer experiences. Second, identifying exploratory behavior offers opportunities for marketers to adapt marketing communication and interaction processes. Personalizing the shopping experience and profiling customers’ needs based on eye movement data promises to further increase conversion rates and customer satisfaction. Third, eye movement-based recommender systems do not need to interrupt consumers and thus do not take away attention from the purchase process. Finally, our paper outlines the technological basis of our approach and discusses the practical relevance of individual predictors. }
    }
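    The entry above builds prediction models from eye-movement data to separate goal-directed from exploratory shopping. The sketch below shows the general shape of such a pipeline on synthetic data; the specific features and the random-forest classifier are illustrative assumptions, not the feature set or model reported in the paper.

    # Illustrative sketch, not the authors' pipeline: classifying goal-directed vs.
    # exploratory search from simple per-trial gaze features with scikit-learn.
    # The features, the synthetic data and the classifier choice are assumptions.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    def synthetic_trial(goal_directed: bool):
        """One shopping trial as a list of (fixation_duration_ms, product_id) pairs."""
        n = rng.integers(20, 60)
        products = rng.integers(0, 10 if goal_directed else 40, size=n)   # assumed: narrower search when goal-directed
        durations = rng.normal(300 if goal_directed else 220, 50, size=n).clip(80)
        return list(zip(durations, products))

    def gaze_features(trial):
        durations = np.array([d for d, _ in trial])
        products = [p for _, p in trial]
        return [len(trial),                        # number of fixations
                durations.mean(),                  # mean fixation duration
                durations.sum(),                   # total dwell time
                len(set(products)),                # distinct products fixated
                len(set(products)) / len(trial)]   # breadth of search

    trials = [synthetic_trial(g) for g in [True] * 50 + [False] * 50]
    y = np.array([1] * 50 + [0] * 50)              # 1 = goal-directed, 0 = exploratory
    X = np.array([gaze_features(t) for t in trials])

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())

    In a VR shop the same kind of feature vector could be computed online from the headset's eye tracker, which is what makes the unobtrusive identification of shopping motives described above feasible.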
  • E. Lampen, J. Lehwald, and T. Pfeiffer, “Virtual Humans in AR: Evaluation of Presentation Concepts in an Industrial Assistance Use Case,” in Proceedings of the 26th ACM Symposium on Virtual Reality Software and Technology, New York, NY, USA, 2020. doi:10.1145/3385956.3418974
    [BibTeX] [Abstract] [Download PDF]
    Embedding virtual humans in educational settings enables the transfer of the approved concepts of learning by observation and imitation of experts to extended reality scenarios. Whilst various presentation concepts of virtual humans for learning have been investigated in sports and rehabilitation, little is known regarding industrial use cases. In prior work on manual assembly, Lampen et al. [21] show that three-dimensional (3D) registered virtual humans can provide assistance as effective as state-of-the-art HMD-based AR approaches. We extend this work by conducting a comparative user study (N=30) to verify implementation costs of assistive behavior features and 3D registration. The results reveal that the basic concept of a 3D registered virtual human is limited and comparable to a two-dimensional screen aligned presentation. However, by incorporating additional assistive behaviors, the 3D assistance concept is enhanced and shows significant advantages in terms of cognitive savings and reduced errors. Thus, it can be concluded, that this presentation concept is valuable in situations where time is less crucial, e.g. in learning scenarios or during complex tasks.
    @inproceedings{10.1145/3385956.3418974,
    author = {Lampen, Eva and Lehwald, Jannes and Pfeiffer, Thies},
    title = {Virtual Humans in AR: Evaluation of Presentation Concepts in an Industrial Assistance Use Case},
    year = {2020},
    isbn = {9781450376198},
    publisher = {Association for Computing Machinery},
    address = {New York, NY, USA},
    url = {https://doi.org/10.1145/3385956.3418974},
    doi = {10.1145/3385956.3418974},
    abstract = {Embedding virtual humans in educational settings enables the transfer of the approved concepts of learning by observation and imitation of experts to extended reality scenarios. Whilst various presentation concepts of virtual humans for learning have been investigated in sports and rehabilitation, little is known regarding industrial use cases. In prior work on manual assembly, Lampen et al. [21] show that three-dimensional (3D) registered virtual humans can provide assistance as effective as state-of-the-art HMD-based AR approaches. We extend this work by conducting a comparative user study (N=30) to verify implementation costs of assistive behavior features and 3D registration. The results reveal that the basic concept of a 3D registered virtual human is limited and comparable to a two-dimensional screen aligned presentation. However, by incorporating additional assistive behaviors, the 3D assistance concept is enhanced and shows significant advantages in terms of cognitive savings and reduced errors. Thus, it can be concluded, that this presentation concept is valuable in situations where time is less crucial, e.g. in learning scenarios or during complex tasks.},
    booktitle = {Proceedings of the 26th ACM Symposium on Virtual Reality Software and Technology},
    articleno = {31},
    numpages = {5},
    keywords = {Virtual Human, Expert-Based Learning, Augmented Reality},
    location = {Virtual Event, Canada},
    series = {VRST '20}
    }
  • J. Blattgerste, P. Renner, and T. Pfeiffer, “Authorable Augmented Reality Instructions for Assistance and Training in Work Environments,” in Proceedings of the 18th International Conference on Mobile and Ubiquitous Multimedia, New York, NY, USA, 2019. doi:10.1145/3365610.3365646
    [BibTeX] [Abstract] [Download PDF]
    Augmented Reality (AR) is a promising technology for assistance and training in work environments, as it can provide instructions and feedback contextualised. Not only, but especially impaired workers can benefit from this technology. While previous work mostly focused on using AR to assist or train specific predefined tasks, “general purpose” AR applications, that can be used to intuitively author new tasks at run-time, are widely missing. The contribution of this work is twofold: First we develop an AR authoring tool on the Microsoft HoloLens in combination with a Smartphone as an additional controller following considerations based on related work, guidelines and focus group interviews. Then, we evaluate the usability of the authoring tool itself and the produced AR instructions on a qualitative level in realistic scenarios and gather feedback. As the results reveal a positive reception, we discuss authorable AR as a viable form of AR assistance or training in work environments.
    @inproceedings{blattgerste2019authorable,
    author = {Blattgerste, Jonas and Renner, Patrick and Pfeiffer, Thies},
    title = {Authorable Augmented Reality Instructions for Assistance and Training in Work Environments},
    year = {2019},
    isbn = {9781450376242},
    publisher = {Association for Computing Machinery},
    address = {New York, NY, USA},
    doi = {10.1145/3365610.3365646},
    booktitle = {Proceedings of the 18th International Conference on Mobile and Ubiquitous Multimedia},
    articleno = {34},
    numpages = {11},
    keywords = {training, cognitive impairments, augmented reality, annotation, mixed reality, authoring, assistance},
    location = {Pisa, Italy},
    series = {MUM 19},
    url = {https://mixality.de/wp-content/uploads/2020/07/blattgerste2019authorable.pdf},
    abstract = {Augmented Reality (AR) is a promising technology for assistance and training in work environments, as it can provide instructions and feedback contextualised. Not only, but especially impaired workers can benefit from this technology. While previous work mostly focused on using AR to assist or train specific predefined tasks, "general purpose" AR applications, that can be used to intuitively author new tasks at run-time, are widely missing. The contribution of this work is twofold: First we develop an AR authoring tool on the Microsoft HoloLens in combination with a Smartphone as an additional controller following considerations based on related work, guidelines and focus group interviews. Then, we evaluate the usability of the authoring tool itself and the produced AR instructions on a qualitative level in realistic scenarios and gather feedback. As the results reveal a positive reception, we discuss authorable AR as a viable form of AR assistance or training in work environments.}
    }
  • J. Blattgerste and T. Pfeiffer, “Promptly Authored Augmented Reality Instructions Can Be Sufficient to Enable Cognitively Impaired Workers,” in GI VR / AR Workshop 2020, 2020.
    [BibTeX] [Abstract] [Download PDF]
    The benefits of contextualising information through Augmented Reality (AR) instructions to assist cognitively impaired workers are well known, but most findings are based on AR instructions carefully designed for predefined standard tasks. Previous findings indicate that the modality and quality of provided AR instructions have a significant impact on the provided benefits. The emergence of commercial products providing tools for instructors to promptly author their own AR instructions elicits the question, whether instructions created through those are sufficient to support cognitively impaired workers. This paper explores this question through a qualitative study using an AR authoring tool to create AR instructions for a task that none out of 10 participants was able to complete previously. Using promptly authored instructions, however, most were able to complete the task. Additionally, they reported good usability and gave qualitative feedback indicating they would like to use comparable AR instructions more often.
    @inproceedings{blattgerste2020prompty,
    title={Promptly Authored Augmented Reality Instructions Can Be Sufficient to Enable Cognitively Impaired Workers},
    author={Blattgerste, Jonas and Pfeiffer, Thies},
    booktitle={{GI VR / AR Workshop 2020}},
    year={2020},
    url = {https://mixality.de/wp-content/uploads/2020/07/blattgerste2020prompty.pdf},
    abstract = {The benefits of contextualising information through Augmented Reality (AR) instructions to assist cognitively impaired workers are well known, but most findings are based on AR instructions carefully designed for predefined standard tasks. Previous findings indicate that the modality and quality of provided AR instructions have a significant impact on the provided benefits. The emergence of commercial products providing tools for instructors to promptly author their own AR instructions elicits the question, whether instructions created through those are sufficient to support cognitively impaired workers. This paper explores this question through a qualitative study using an AR authoring tool to create AR instructions for a task that none out of 10 participants was able to complete previously. Using promptly authored instructions, however, most were able to complete the task. Additionally, they reported good usability and gave qualitative feedback indicating they would like to use comparable AR instructions more often.}
    }
  • P. Renner and T. Pfeiffer, “AR-Glasses-Based Attention Guiding for Complex Environments – Requirements, Classification and Evaluation,” in Proceedings of the 13th ACM International Conference on PErvasive Technologies Related to Assistive Environments, New York, NY, USA, 2020. doi:10.1145/3389189.3389198
    [BibTeX] [Abstract] [Download PDF]
    Augmented Reality (AR) based assistance has a huge potential in the context of Industry 4.0: AR links digital information to physical objects and processes in a mobile and, in the case of AR glasses, hands-free way. In most companies, order-picking is still done using paper lists. With the rapid development of AR hardware during the last years, the interest in digitizing picking processes using AR rises. AR-based guiding for picking tasks can reduce the time needed for visual search and reduce errors, such as wrongly picked items or false placements. Choosing the best guiding technique is a non-trivial task: Different environments bring their own inherent constraints and requirements. In the literature, many kinds of guiding techniques were proposed, but the majority of techniques were only compared to non-AR picking assistance. To reveal advantages and disadvantages of AR-based guiding techniques, the contribution of this paper is three-fold: First, an analysis of tasks and environments reveals requirements and constraints on attention guiding techniques which are condensed to a taxonomy of attention guiding techniques. Second, guiding techniques covering a range of approaches from the literature are evaluated in a large-scale picking environment with a focus on task performance and on factors as the users’ feeling of autonomy and ergonomics. Finally, a 3D path-based guiding technique supporting multiple goals simultaneously in complex environments is proposed.
    @inproceedings{renner2020AR,
    author = {Renner, Patrick and Pfeiffer, Thies},
    title = {{AR-Glasses-Based Attention Guiding for Complex Environments - Requirements, Classification and Evaluation}},
    year = {2020},
    isbn = {9781450377737},
    publisher = {Association for Computing Machinery},
    address = {New York, NY, USA},
    doi = {10.1145/3389189.3389198},
    booktitle = {Proceedings of the 13th ACM International Conference on PErvasive Technologies Related to Assistive Environments},
    articleno = {31},
    numpages = {10},
    location = {Corfu, Greece},
    series = {PETRA ’20},
    url = {https://mixality.de/wp-content/uploads/2020/07/renner2020AR.pdf},
    abstract = {Augmented Reality (AR) based assistance has a huge potential in the context of Industry 4.0: AR links digital information to physical objects and processes in a mobile and, in the case of AR glasses, hands-free way. In most companies, order-picking is still done using paper lists. With the rapid development of AR hardware during the last years, the interest in digitizing picking processes using AR rises. AR-based guiding for picking tasks can reduce the time needed for visual search and reduce errors, such as wrongly picked items or false placements. Choosing the best guiding technique is a non-trivial task: Different environments bring their own inherent constraints and requirements. In the literature, many kinds of guiding techniques were proposed, but the majority of techniques were only compared to non-AR picking assistance. To reveal advantages and disadvantages of AR-based guiding techniques, the contribution of this paper is three-fold: First, an analysis of tasks and environments reveals requirements and constraints on attention guiding techniques which are condensed to a taxonomy of attention guiding techniques. Second, guiding techniques covering a range of approaches from the literature are evaluated in a large-scale picking environment with a focus on task performance and on factors as the users' feeling of autonomy and ergonomics. Finally, a 3D path-based guiding technique supporting multiple goals simultaneously in complex environments is proposed.}
    }
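    As a simplified illustration of the multi-goal, path-based guiding mentioned in the entry above (not the technique evaluated in the paper), the sketch below orders several pick goals into one route by greedy nearest-neighbour selection and samples waypoints that an AR overlay could render as a guiding line; all positions are hypothetical.

    # Illustrative sketch: order multiple pick goals into a single guiding path
    # and sample evenly spaced waypoints for rendering in AR glasses.
    import math
    from typing import List, Tuple

    Vec3 = Tuple[float, float, float]

    def order_goals(start: Vec3, goals: List[Vec3]) -> List[Vec3]:
        """Greedy nearest-neighbour ordering of the remaining pick locations."""
        remaining, ordered, pos = list(goals), [], start
        while remaining:
            nxt = min(remaining, key=lambda g: math.dist(pos, g))
            remaining.remove(nxt)
            ordered.append(nxt)
            pos = nxt
        return ordered

    def waypoints(path: List[Vec3], step: float = 0.5) -> List[Vec3]:
        """Points along the polyline, roughly every `step` metres."""
        pts: List[Vec3] = []
        for a, b in zip(path, path[1:]):
            n = max(1, int(math.dist(a, b) / step))
            pts += [tuple(a[i] + (b[i] - a[i]) * t / n for i in range(3)) for t in range(n)]
        return pts + [path[-1]]

    head = (0.0, 1.7, 0.0)                                    # user head position in metres (hypothetical)
    picks = [(4.0, 1.2, 1.0), (1.0, 0.5, 3.0), (6.0, 1.0, 5.0)]
    route = [head] + order_goals(head, picks)
    print(waypoints(route)[:3])                               # first waypoints of the guiding line

    A production system would of course route around shelves and re-plan as the user moves; the point here is only that a single path can serve several goals at once, which is the property the taxonomy above highlights.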
  • J. Blattgerste, K. Luksch, C. Lewa, M. Kunzendorf, N. H. Bauer, A. Bernloehr, M. Joswig, T. Schäfer, and T. Pfeiffer, “Project Heb@AR: Exploring handheld Augmented Reality training to supplement academic midwifery education,” in DELFI 2020 – Die 18. Fachtagung Bildungstechnologien der Gesellschaft für Informatik e.V., Bonn, 2020, pp. 103-108.
    [BibTeX] [Abstract] [Download PDF]
    Augmented Reality (AR) promises great potential for training applications as it allows to provide the trainee with instructions and feedback that is contextualized. In recent years, AR reached a state of technical feasibility that not only allows for larger, long term evaluations, but also for explorations of its application to specific training use cases. In the BMBF funded project Heb@AR, the utilization of handheld AR as a supplementary tool for the practical training in academic midwifery education is explored. Specifically, how and where AR can be used most effectively in this context, how acceptability and accessibility for tutors and trainees can be ensured and how well emergency situations can be simulated using the technology. In this paper an overview of the Heb@AR project is provided, the goals of the project are stated and the project’s research questions are discussed from a technical perspective. Furthermore, insights into the current state and the development process of the first AR training prototype are provided: The preparation of a tocolytic injection.
    @inproceedings{blattgerste2020hebar,
    author = {Blattgerste, Jonas AND Luksch, Kristina AND Lewa, Carmen AND Kunzendorf, Martina AND Bauer, Nicola H. AND Bernloehr, Annette AND Joswig, Matthias AND Schäfer, Thorsten AND Pfeiffer, Thies},
    title = {Project Heb@AR: Exploring handheld Augmented Reality training to supplement academic midwifery education},
    booktitle = {DELFI 2020 – Die 18. Fachtagung Bildungstechnologien der Gesellschaft für Informatik e.V.},
    year = {2020},
    editor = {Zender, Raphael AND Ifenthaler, Dirk AND Leonhardt, Thiemo AND Schumacher, Clara},
    pages = { 103-108 },
    publisher = {Gesellschaft für Informatik e.V.},
    address = {Bonn},
    url = {https://dl.gi.de/bitstream/handle/20.500.12116/34147/103%20DELFI2020_paper_79.pdf?sequence=1&isAllowed=y},
    abstract = {Augmented Reality (AR) promises great potential for training applications as it allows to provide the trainee with instructions and feedback that is contextualized. In recent years, AR reached a state of technical feasibility that not only allows for larger, long term evaluations, but also for explorations of its application to specific training use cases. In the BMBF funded project Heb@AR, the utilization of handheld AR as a supplementary tool for the practical training in academic midwifery education is explored. Specifically, how and where AR can be used most effectively in this context, how acceptability and accessibility for tutors and trainees can be ensured and how well emergency situations can be simulated using the technology. In this paper an overview of the Heb@AR project is provided, the goals of the project are stated and the project’s research questions are discussed from a technical perspective. Furthermore, insights into the current state and the development process of the first AR training prototype are provided: The preparation of a tocolytic injection.}
    }
  • C. Hainke and T. Pfeiffer, “Adapting virtual trainings of applied skills to cognitive processes in medical and health care education within the DiViFaG project,” in DELFI 2020 – Die 18. Fachtagung Bildungstechnologien der Gesellschaft für Informatik e.V., Bonn, 2020, pp. 355-356.
    [BibTeX] [Abstract] [Download PDF]
    The use of virtual reality technology in education rises in popularity, especially in professions that include the training of practical skills. By offering the possibility to repeatedly practice and apply skills in controllable environments, VR training can help to improve the education process. The training simulations that are going to be developed within this project will make use of the high controllability by evaluating behavioral data as well as gaze-based data during the training process. This analysis can reveal insights in the user’s mental states and offers the opportunity of autonomous training adaption.
    @inproceedings{hainke2020adapting,
    author = {Hainke, Carolin AND Pfeiffer, Thies},
    title = {Adapting virtual trainings of applied skills to cognitive processes in medical and health care education within the DiViFaG project},
    booktitle = {DELFI 2020 – Die 18. Fachtagung Bildungstechnologien der Gesellschaft für Informatik e.V.},
    year = {2020},
    editor = {Zender, Raphael AND Ifenthaler, Dirk AND Leonhardt, Thiemo AND Schumacher, Clara},
    pages = { 355-356 },
    publisher = {Gesellschaft für Informatik e.V.},
    address = {Bonn},
    url = {https://dl.gi.de/bitstream/handle/20.500.12116/34184/355%20DELFI2020_paper_85.pdf?sequence=1&isAllowed=y},
    abstract = {The use of virtual reality technology in education rises in popularity, especially in professions that include the training of practical skills. By offering the possibility to repeatedly practice and apply skills in controllable environments, VR training can help to improve the education process. The training simulations that are going to be developed within this project will make use of the high controllability by evaluating behavioral data as well as gaze-based data during the training process. This analysis can reveal insights in the user’s mental states and offers the opportunity of autonomous training adaption.}
    }
  • L. Meyer and T. Pfeiffer, “Comparing Virtual Reality and Screen-based Training Simulations in Terms of Learning and Recalling Declarative Knowledge,” in DELFI 2020 – Die 18. Fachtagung Bildungstechnologien der Gesellschaft für Informatik e.V., Bonn, 2020, pp. 55-66.
    [BibTeX] [Abstract] [Download PDF]
    This paper discusses how much the more realistic user interaction in a life-sized fully immersive VR Training is a benefit for acquiring declarative knowledge compared to the same training via a screen-based first-person application. Two groups performed a nursing training scenario in immersive VR and on a tablet. A third group learned the necessary steps using a classic text-picture-manual (TP group). Afterwards all three groups had to perform a recall test with repeated measurement (one week). The results showed no significant differences between VR training and tablet training. In the first test shortly after completion of the training both training simulation conditions were worse than the TP group. In the long-term test, however, the knowledge loss of the TP group was significantly higher than that of the two simulation groups. Ultimately, VR training in this study design proved to be as efficient as training on a tablet for declarative knowledge acquisition. Nevertheless, it is possible that acquired procedural knowledge distinguishes VR training from the screen-based application.
    @inproceedings{meyer2020comparing,
    author = {Meyer, Leonard AND Pfeiffer, Thies},
    title = {Comparing Virtual Reality and Screen-based Training Simulations in Terms of Learning and Recalling Declarative Knowledge},
    booktitle = {DELFI 2020 – Die 18. Fachtagung Bildungstechnologien der Gesellschaft für Informatik e.V.},
    year = {2020},
    editor = {Zender, Raphael AND Ifenthaler, Dirk AND Leonhardt, Thiemo AND Schumacher, Clara},
    pages = { 55-66 },
    publisher = {Gesellschaft für Informatik e.V.},
    address = {Bonn},
    url = {https://dl.gi.de/bitstream/handle/20.500.12116/34204/055%20DELFI2020_paper_92.pdf?sequence=1&isAllowed=y},
    abstract = {This paper discusses how much the more realistic user interaction in a life-sized fully immersive VR Training is a benefit for acquiring declarative knowledge compared to the same training via a screen-based first-person application. Two groups performed a nursing training scenario in immersive VR and on a tablet. A third group learned the necessary steps using a classic text-picture-manual (TP group). Afterwards all three groups had to perform a recall test with repeated measurement (one week). The results showed no significant differences between VR training and tablet training. In the first test shortly after completion of the training both training simulation conditions were worse than the TP group. In the long-term test, however, the knowledge loss of the TP group was significantly higher than that of the two simulation groups. Ultimately, VR training in this study design proved to be as efficient as training on a tablet for declarative knowledge acquisition. Nevertheless, it is possible that acquired procedural knowledge distinguishes VR training from the screen-based application.}
    }
  • Y. Tehreem and T. Pfeiffer, “Immersive Virtual Reality Training for the Operation of Chemical Reactors,” in DELFI 2020 – Die 18. Fachtagung Bildungstechnologien der Gesellschaft für Informatik e.V., Bonn, 2020, pp. 359-360.
    [BibTeX] [Abstract] [Download PDF]
    This paper discusses virtual reality (VR) training for chemical operators on hazardous or costly operations of chemical plants. To this end, a prototypical training scenario is developed which will be deployed to industrial partners and evaluated regarding efficiency and effectiveness. In this paper, the current version of the prototype is presented, which allows life-sized training in a virtual simulation of a chemical reactor. Building on this prototype scenario, means for measuring performance, providing feedback, and guiding users through VR-based trainings are explored and evaluated, aiming at an optimized transfer of knowledge from the virtual to the real world. This work is embedded in the Marie-Skłodowska-Curie Innovative Training Network CHARMING, in which 15 PhD candidates from six European countries are cooperating.
    @inproceedings{tehreem2020immersive,
    author = {Tehreem, Yusra AND Pfeiffer, Thies},
    title = {Immersive Virtual Reality Training for the Operation of Chemical Reactors},
    booktitle = {DELFI 2020 – Die 18. Fachtagung Bildungstechnologien der Gesellschaft für Informatik e.V.},
    year = {2020},
    editor = {Zender, Raphael AND Ifenthaler, Dirk AND Leonhardt, Thiemo AND Schumacher, Clara},
    pages = {359--360},
    publisher = {Gesellschaft für Informatik e.V.},
    address = {Bonn},
    url = {https://dl.gi.de/bitstream/handle/20.500.12116/34186/359%20DELFI2020_paper_81.pdf?sequence=1&isAllowed=y},
    abstract = {This paper discusses virtual reality (VR) training for chemical operators on hazardous or costly operations of chemical plants. To this end, a prototypical training scenario is developed which will be deployed to industrial partners and evaluated regarding efficiency and effectiveness. In this paper, the current version of the prototype is presented, which allows life-sized training in a virtual simulation of a chemical reactor. Building on this prototype scenario, means for measuring performance, providing feedback, and guiding users through VR-based trainings are explored and evaluated, aiming at an optimized transfer of knowledge from the virtual to the real world. This work is embedded in the Marie-Skłodowska-Curie Innovative Training Network CHARMING, in which 15 PhD candidates from six European countries are cooperating.}
    }
  • D. Mardanbegi and T. Pfeiffer, “EyeMRTK: A Toolkit for Developing Eye Gaze Interactive Applications in Virtual and Augmented Reality,” in Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications (ETRA ’19), 2019, p. 76:1–76:5. doi:10.1145/3317956.3318155
    [BibTeX] [Abstract] [Download PDF]
    For head-mounted displays, as they are used in mixed reality applications, eye gaze seems to be a natural interaction modality. EyeMRTK provides building blocks for eye gaze interaction in virtual and augmented reality. Based on a hardware abstraction layer, it allows interaction researchers and developers to focus on their interaction concepts, while enabling them to evaluate their ideas on all supported systems. In addition to that, the toolkit provides a simulation layer for debugging purposes, which speeds up prototyping during development on the desktop.
    @inproceedings{2937153,
    abstract = {For head-mounted displays, as they are used in mixed reality applications, eye gaze seems to be a natural interaction modality. EyeMRTK provides building blocks for eye gaze interaction in virtual and augmented reality. Based on a hardware abstraction layer, it allows interaction researchers and developers to focus on their interaction concepts, while enabling them to evaluate their ideas on all supported systems. In addition to that, the toolkit provides a simulation layer for debugging purposes, which speeds up prototyping during development on the desktop.},
    author = {Mardanbegi, Diako and Pfeiffer, Thies},
    booktitle = {Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications (ETRA '19)},
    isbn = {978-1-4503-6709-7},
    keywords = {eye tracking, gaze interaction, unity, virtual reality},
    pages = {76:1--76:5},
    publisher = {ACM},
    title = {{EyeMRTK: A Toolkit for Developing Eye Gaze Interactive Applications in Virtual and Augmented Reality}},
    url = {https://pub.uni-bielefeld.de/record/2937153},
    doi = {10.1145/3317956.3318155},
    year = {2019},
    }
  • E. Lampen, J. Teuber, F. Gaisbauer, T. Bär, T. Pfeiffer, and S. Wachsmuth, “Combining Simulation and Augmented Reality Methods for Enhanced Worker Assistance in Manual Assembly,” Procedia CIRP, vol. 81, p. 588–593, 2019. doi:10.1016/j.procir.2019.03.160
    [BibTeX] [Abstract] [Download PDF]
    Due to mass customization, product variety has increased steeply in the automotive industry, entailing an increase in workers’ cognitive load during manual assembly tasks. Although worker assistance methods for cognitive automation already exist, they prove insufficient in terms of usability and achieved time savings. Given the rising importance of simulation towards autonomous production planning, a novel approach is proposed that uses human simulation data in the context of worker assistance methods to alleviate cognitive load during manual assembly tasks. Within this paper, a new concept for augmented reality-based worker assistance is presented. Additionally, a comparative user study (N=24) was conducted to evaluate a prototypical implementation of the concept against conventional worker assistance methods. The results illustrate the potential of the novel approach to conserve cognitive resources and to induce performance improvements. The implementation provided stable information presentation during the entire experiment. However, given how recent the approach is, further development and research are required, concerning performance adaptations and investigations of its effectiveness.
    @article{2937152,
    abstract = {Due to mass customization, product variety has increased steeply in the automotive industry, entailing an increase in workers’ cognitive load during manual assembly tasks. Although worker assistance methods for cognitive automation already exist, they prove insufficient in terms of usability and achieved time savings. Given the rising importance of simulation towards autonomous production planning, a novel approach is proposed that uses human simulation data in the context of worker assistance methods to alleviate cognitive load during manual assembly tasks. Within this paper, a new concept for augmented reality-based worker assistance is presented. Additionally, a comparative user study (N=24) was conducted to evaluate a prototypical implementation of the concept against conventional worker assistance methods. The results illustrate the potential of the novel approach to conserve cognitive resources and to induce performance improvements. The implementation provided stable information presentation during the entire experiment. However, given how recent the approach is, further development and research are required, concerning performance adaptations and investigations of its effectiveness.},
    author = {Lampen, Eva and Teuber, Jonas and Gaisbauer, Felix and Bär, Thomas and Pfeiffer, Thies and Wachsmuth, Sven},
    issn = {2212-8271},
    journal = {Procedia CIRP},
    keywords = {Virtual Reality, Augmented Reality, Manual Assembly},
    pages = {588--593},
    publisher = {Elsevier},
    title = {{Combining Simulation and Augmented Reality Methods for Enhanced Worker Assistance in Manual Assembly}},
    url = {https://pub.uni-bielefeld.de/record/2937152},
    doi = {10.1016/j.procir.2019.03.160},
    volume = {81},
    year = {2019},
    }
  • C. Peukert, J. Pfeiffer, M. Meißner, T. Pfeiffer, and C. Weinhardt, “Shopping in Virtual Reality Stores. The Influence of Immersion on System Adoption,” Journal of Management Information Systems, vol. 36, iss. 3, p. 1–34, 2019. doi:10.1080/07421222.2019.1628889
    [BibTeX] [Abstract] [Download PDF]
    Companies have the opportunity to better engage potential customers by presenting products to them in a highly immersive virtual reality (VR) shopping environment. However, a minimal amount is known about why and whether customers will adopt such fully immersive shopping environments. We therefore develop and experimentally validate a theoretical model, which explains how immersion affects adoption. The participants experienced the environment by using a head-mounted display (high immersion) or by viewing product models in 3D on a desktop (low immersion). We find that immersion does not affect the users’ intention to reuse the shopping environment, because two paths cancel each other out: Highly immersive shopping environments positively influence a hedonic path through telepresence, but surprisingly, they negatively influence a utilitarian path through product diagnosticity. We can explain this effect via low readability of product information in the VR environment and expect VR’s full potential to develop when the technology is further advanced. Our study contributes to literature on immersive systems and IS adoption research by introducing a research model for the adoption of VR shopping environments. A key practical implication of our study is that system designers need to pay special attention to the current state of technology when designing VR applications.
    Video contributions accompanying the paper: “High vs. Low Immersion” (example footage from the two conditions) and “High Immersion vs. Physical Reality” (a side-by-side comparison of the highly immersive setup and the physical reality setup).
    @article{2934590,
    abstract = {Companies have the opportunity to better engage potential customers by presenting products to them in a highly immersive virtual reality (VR) shopping environment. However, a minimal amount is known about why and whether customers will adopt such fully immersive shopping environments. We therefore develop and experimentally validate a theoretical model, which explains how immersion affects adoption. The participants experienced the environment by using a head-mounted display (high immersion) or by viewing product models in 3D on a desktop (low immersion). We find that immersion does not affect the users’ intention to reuse the shopping environment, because two paths cancel each other out: Highly immersive shopping environments positively influence a hedonic path through telepresence, but surprisingly, they negatively influence a utilitarian path through product diagnosticity. We can explain this effect via low readability of product information in the VR environment and expect VR’s full potential to develop when the technology is further advanced. Our study contributes to literature on immersive systems and IS adoption research by introducing a research model for the adoption of VR shopping environments. A key practical implication of our study is that system designers need to pay special attention to the current state of technology when designing VR applications.},
    author = {Peukert, Christian and Pfeiffer, Jella and Meißner, Martin and Pfeiffer, Thies and Weinhardt, Christof},
    issn = {0742-1222},
    journal = {Journal of Management Information Systems},
    number = {3},
    pages = {1--34},
    publisher = {Taylor & Francis},
    title = {{Shopping in Virtual Reality Stores. The Influence of Immersion on System Adoption}},
    url = {https://pub.uni-bielefeld.de/record/2934590},
    doi = {10.1080/07421222.2019.1628889},
    volume = {36},
    year = {2019},
    }
  • L. Christoforakos, S. Tretter, S. Diefenbach, S. Bibi, M. Fröhner, K. Kohler, D. Madden, T. Marx, T. Pfeiffer, N. Pfeiffer-Leßmann, and N. Valkanova, “Potential and Challenges of Prototyping in Product Development and Innovation,” i-com, vol. 18, iss. 2, p. 179–187, 2019. doi:10.1515/icom-2019-0010
    [BibTeX] [Abstract] [Download PDF]
    Prototyping represents an established, essential method of product development and innovation, widely accepted across the industry. Obviously, the use of prototypes, i. e., simple representations of a product in development, in order to explore, communicate and evaluate the product idea, can provide many benefits. From a business perspective, a central advantage lies in cost-efficient testing. Consequently, the idea to “fail early”, and to continuously rethink and optimize design decisions before cost-consuming implementations, lies at the heart of prototyping. Still, taking a closer look at prototyping in practice, many organizations do not live up to this ideal. In fact, there are several typical misunderstandings and unsatisfying outcomes regarding the effective use of prototypes (e. g. Christoforakos & Diefenbach [3]; Diefenbach, Chien, Lenz, & Hassenzahl [4]). For example, although prominent literature repeatedly underlines the importance of the fit between a prototyping method or tool and its underlying research question and purpose (e. g. Schneider [7]), practitioners often seem to lack reflection and structure regarding their choice of prototyping approaches. Instead, the used prototypes often simply rest on organizational routines. As a result, prototypes can fail their purpose and might not contribute to the initial research question or aim of prototyping. Furthermore, the varying interests of different stakeholders within the prototyping process are often not considered with much detail either. According to Blomkvist and Holmlid [1], stakeholders of prototyping can be broadly categorized in colleagues (i. e. team members involved in the process of product development), clients (i. e. clients, whom the product is being developed for or potential new clients to be acquired) users (i. e. potential users of the final product). Each of these stakeholders employ different purposes of prototyping due to their distinct responsibilities within the process of product development. Moreover, they can hold different expectations regarding the prototyping process, and thus, have different preferences for certain methods or tools. Yet, the substantial role of stakeholders in the appropriate choice of prototyping approach and methods is often overlooked.
    @article{2936802,
    abstract = {Prototyping represents an established, essential method of product development and innovation, widely accepted across the industry. Obviously, the use of prototypes, i. e., simple representations of a product in development, in order to explore, communicate and evaluate the product idea, can provide many benefits. From a business perspective, a central advantage lies in cost-efficient testing. Consequently, the idea to “fail early”, and to continuously rethink and optimize design decisions before cost-consuming implementations, lies at the heart of prototyping. Still, taking a closer look at prototyping in practice, many organizations do not live up to this ideal. In fact, there are several typical misunderstandings and unsatisfying outcomes regarding the effective use of prototypes (e. g. Christoforakos & Diefenbach [3]; Diefenbach, Chien, Lenz, & Hassenzahl [4]). For example, although prominent literature repeatedly underlines the importance of the fit between a prototyping method or tool and its underlying research question and purpose (e. g. Schneider [7]), practitioners often seem to lack reflection and structure regarding their choice of prototyping approaches. Instead, the used prototypes often simply rest on organizational routines. As a result, prototypes can fail their purpose and might not contribute to the initial research question or aim of prototyping. Furthermore, the varying interests of different stakeholders within the prototyping process are often not considered with much detail either. According to Blomkvist and Holmlid [1], stakeholders of prototyping can be broadly categorized in colleagues (i. e. team members involved in the process of product development), clients (i. e. clients, whom the product is being developed for or potential new clients to be acquired) users (i. e. potential users of the final product). Each of these stakeholders employ different purposes of prototyping due to their distinct responsibilities within the process of product development. Moreover, they can hold different expectations regarding the prototyping process, and thus, have different preferences for certain methods or tools. Yet, the substantial role of stakeholders in the appropriate choice of prototyping approach and methods is often overlooked.},
    author = {Christoforakos, Lara and Tretter, Stefan and Diefenbach, Sarah and Bibi, Sven-Anwar and Fröhner, Moritz and Kohler, Kirstin and Madden, Dominick and Marx, Tobias and Pfeiffer, Thies and Pfeiffer-Leßmann, Nadine and Valkanova, Nina},
    issn = {2196-6826},
    journal = {i-com},
    number = {2},
    pages = {179--187},
    publisher = {de Gruyter},
    title = {{Potential and Challenges of Prototyping in Product Development and Innovation}},
    url = {https://pub.uni-bielefeld.de/record/2936802},
    doi = {10.1515/icom-2019-0010},
    volume = {18},
    year = {2019},
    }
  • J. Blattgerste, P. Renner, and T. Pfeiffer, “Augmented Reality Action Assistance and Learning for Cognitively Impaired People. A Systematic Literature Review,” in The 12th PErvasive Technologies Related to Assistive Environments Conference (PETRA ’19), 2019. doi:10.1145/3316782.3316789
    [BibTeX] [Abstract] [Download PDF]
    Augmented reality (AR) is a promising tool for many situations in which assistance is needed, as it allows for instructions and feedback to be contextualized. While research and development in this area have been primarily driven by industry, AR could also have a huge impact on those who need assistance the most: cognitively impaired people of all ages. In recent years some primary research on applying AR for action assistance and learning in the context of this target group has been conducted. However, the research field is sparsely covered and contributions are hard to categorize. An overview of the current state of research is missing. We contribute to filling this gap by providing a systematic literature review covering 52 publications. We describe the often rather technical publications on an abstract level and quantitatively assess their usage purpose, the targeted age group and the type of AR device used. Additionally, we provide insights on the current challenges and chances of AR learning and action assistance for people with cognitive impairments. We discuss trends in the research field, including potential future work for researchers to focus on.
    @inproceedings{2934446,
    abstract = {Augmented reality (AR) is a promising tool for many situations in which assistance is needed, as it allows for instructions and feedback to be contextualized. While research and development in this area have been primarily driven by industry, AR could also have a huge impact on those who need assistance the most: cognitively impaired people of all ages. In recent years some primary research on applying AR for action assistance and learning in the context of this target group has been conducted. However, the research field is sparsely covered and contributions are hard to categorize. An overview of the current state of research is missing. We contribute to filling this gap by providing a systematic literature review covering 52 publications. We describe the often rather technical publications on an abstract level and quantitatively assess their usage purpose, the targeted age group and the type of AR device used. Additionally, we provide insights on the current challenges and chances of AR learning and action assistance for people with cognitive impairments. We discuss trends in the research field, including potential future work for researchers to focus on.},
    author = {Blattgerste, Jonas and Renner, Patrick and Pfeiffer, Thies},
    booktitle = {The 12th PErvasive Technologies Related to Assistive Environments Conference (PETRA ’19)},
    location = {Rhodes, Greece},
    publisher = {ACM},
    title = {{Augmented Reality Action Assistance and Learning for Cognitively Impaired People. A Systematic Literature Review}},
    url = {https://nbn-resolving.org/urn:nbn:de:0070-pub-29344462, https://pub.uni-bielefeld.de/record/2934446},
    doi = {10.1145/3316782.3316789},
    year = {2019},
    }
  • M. Meißner, J. Pfeiffer, T. Pfeiffer, and H. Oppewal, “Combining virtual reality and mobile eye tracking to provide a naturalistic experimental environment for shopper research,” Journal of Business Research, vol. 100, p. 445–458, 2019. doi:10.1016/j.jbusres.2017.09.028
    [BibTeX] [Abstract] [Download PDF]
    Technological advances in eye tracking methodology have made it possible to unobtrusively measure consumer visual attention during the shopping process. Mobile eye tracking in field settings however has several limitations, including a highly cumbersome data coding process. In addition, field settings allow only limited control of important interfering variables. The present paper argues that virtual reality can provide an alternative setting that combines the benefits of mobile eye tracking with the flexibility and control provided by lab experiments. The paper first reviews key advantages of different eye tracking technologies as available for desktop, natural and virtual environments. It then explains how combining virtual reality settings with eye tracking provides a unique opportunity for shopper research in particular regarding the use of augmented reality to provide shopper assistance.
    @article{2914094,
    abstract = {Technological advances in eye tracking methodology have made it possible to unobtrusively measure consumer visual attention during the shopping process. Mobile eye tracking in field settings however has several limitations, including a highly cumbersome data coding process. In addition, field settings allow only limited control of important interfering variables. The present paper argues that virtual reality can provide an alternative setting that combines the benefits of mobile eye tracking with the flexibility and control provided by lab experiments. The paper first reviews key advantages of different eye tracking technologies as available for desktop, natural and virtual environments. It then explains how combining virtual reality settings with eye tracking provides a unique opportunity for shopper research in particular regarding the use of augmented reality to provide shopper assistance.},
    author = {Meißner, Martin and Pfeiffer, Jella and Pfeiffer, Thies and Oppewal, Harmen},
    issn = {0148-2963},
    journal = {Journal of Business Research},
    keywords = {Eye tracking, Visual attention, Virtual reality, Augmented reality, Assistance system, Shopper behavior, CLF_RESEARCH_HIGHLIGHT},
    pages = {445--458},
    publisher = {Elsevier BV},
    title = {{Combining virtual reality and mobile eye tracking to provide a naturalistic experimental environment for shopper research}},
    url = {https://pub.uni-bielefeld.de/record/2914094},
    doi = {10.1016/j.jbusres.2017.09.028},
    volume = {100},
    year = {2019},
    }
  • M. Andersen, T. Pfeiffer, S. Müller, and U. Schjoedt, “Agency detection in predictive minds. A virtual reality study,” Religion, Brain & Behavior, vol. 9, iss. 1, p. 52–64, 2019. doi:10.1080/2153599x.2017.1378709
    [BibTeX] [Abstract] [Download PDF]
    Since its inception, explaining the cognitive foundations governing sensory experiences of supernatural agents has been a central topic in the cognitive science of religion. Following recent developments in perceptual psychology, this preregistered study examines the effects of expectations and sensory reliability on agency detection. Participants were instructed to detect beings in a virtual forest. Results reveal that participants expecting a high probability of encountering an agent in the forest are much more likely to make false detections than participants expecting a low probability of such encounters. Furthermore, low sensory reliability increases the false detection rate compared to high sensory reliability, but this effect is much smaller than the effect of expectations. While previous accounts of agency detection have speculated that false detections of agents may give rise to or strengthen religious beliefs, our results suggest that the reverse direction of causality may also be true. Religious teachings may first produce expectations in believers, which in turn elicit false detections of agents. These experiences may subsequently work to confirm the teachings and narratives upon which the values of a given culture are built.
    @article{2914550,
    abstract = {Since its inception, explaining the cognitive foundations governing sensory experiences of supernatural agents has been a central topic in the cognitive science of religion. Following recent developments in perceptual psychology, this preregistered study examines the effects of expectations and sensory reliability on agency detection. Participants were instructed to detect beings in a virtual forest. Results reveal that participants expecting a high probability of encountering an agent in the forest are much more likely to make false detections than participants expecting a low probability of such encounters. Furthermore, low sensory reliability increases the false detection rate compared to high sensory reliability, but this effect is much smaller than the effect of expectations. While previous accounts of agency detection have speculated that false detections of agents may give rise to or strengthen religious beliefs, our results suggest that the reverse direction of causality may also be true. Religious teachings may first produce expectations in believers, which in turn elicit false detections of agents. These experiences may subsequently work to confirm the teachings and narratives upon which the values of a given culture are built.},
    author = {Andersen, Marc and Pfeiffer, Thies and Müller, Sebastian and Schjoedt, Uffe},
    issn = {2153-5981},
    journal = {Religion, Brain & Behavior},
    keywords = {CLF_RESEARCH_HIGHLIGHT},
    number = {1},
    pages = {52--64},
    publisher = {Routledge},
    title = {{Agency detection in predictive minds. A virtual reality study}},
    url = {https://pub.uni-bielefeld.de/record/2914550},
    doi = {10.1080/2153599x.2017.1378709},
    volume = {9},
    year = {2019},
    }
  • P. Renner, F. Lier, F. Friese, T. Pfeiffer, and S. Wachsmuth, “WYSIWICD: What You See is What I Can Do,” in HRI ’18 Companion: 2018 ACM/IEEE International Conference on Human-Robot Interaction Companion, 2018. doi:10.1145/3173386.3177032
    [BibTeX] [Download PDF]
    @inproceedings{2916801,
    author = {Renner, Patrick and Lier, Florian and Friese, Felix and Pfeiffer, Thies and Wachsmuth, Sven},
    booktitle = {HRI '18 Companion: 2018 ACM/IEEE International Conference on Human-Robot Interaction Companion},
    keywords = {Augmented Reality, Natural Interfaces, Sensor Fusion},
    location = {Chicago},
    publisher = {ACM},
    title = {{WYSIWICD: What You See is What I Can Do}},
    url = {https://nbn-resolving.org/urn:nbn:de:0070-pub-29168017, https://pub.uni-bielefeld.de/record/2916801},
    doi = {10.1145/3173386.3177032},
    year = {2018},
    }
  • N. Mitev, P. Renner, T. Pfeiffer, and M. Staudte, “Towards efficient human–machine collaboration. Effects of gaze-driven feedback and engagement on performance,” Cognitive Research: Principles and Implications, vol. 3, iss. 3, 2018. doi:10.1186/s41235-018-0148-x
    [BibTeX] [Abstract] [Download PDF]
    Referential success is crucial for collaborative task-solving in shared environments. In face-to-face interactions, humans, therefore, exploit speech, gesture, and gaze to identify a specific object. We investigate if and how the gaze behavior of a human interaction partner can be used by a gaze-aware assistance system to improve referential success. Specifically, our system describes objects in the real world to a human listener using on-the-fly speech generation. It continuously interprets listener gaze and implements alternative strategies to react to this implicit feedback. We used this system to investigate an optimal strategy for task performance: providing an unambiguous, longer instruction right from the beginning, or starting with a shorter, yet ambiguous instruction. Further, the system provides gaze-driven feedback, which could be either underspecified (“No, not that one!”) or contrastive (“Further left!”). As expected, our results show that ambiguous instructions followed by underspecified feedback are not beneficial for task performance, whereas contrastive feedback results in faster interactions. Interestingly, this approach even outperforms unambiguous instructions (manipulation between subjects). However, when the system alternates between underspecified and contrastive feedback to initially ambiguous descriptions in an interleaved manner (within subjects), task performance is similar for both approaches. This suggests that listeners engage more intensely with the system when they can expect it to be cooperative. This, rather than the actual informativity of the spoken feedback, may determine the efficiency of information uptake and performance.
    @article{2932893,
    abstract = {Referential success is crucial for collaborative task-solving in shared environments. In face-to-face interactions, humans, therefore, exploit speech, gesture, and gaze to identify a specific object. We investigate if and how the gaze behavior of a human interaction partner can be used by a gaze-aware assistance system to improve referential success. Specifically, our system describes objects in the real world to a human listener using on-the-fly speech generation. It continuously interprets listener gaze and implements alternative strategies to react to this implicit feedback. We used this system to investigate an optimal strategy for task performance: providing an unambiguous, longer instruction right from the beginning, or starting with a shorter, yet ambiguous instruction. Further, the system provides gaze-driven feedback, which could be either underspecified (“No, not that one!”) or contrastive (“Further left!”). As expected, our results show that ambiguous instructions followed by underspecified feedback are not beneficial for task performance, whereas contrastive feedback results in faster interactions. Interestingly, this approach even outperforms unambiguous instructions (manipulation between subjects). However, when the system alternates between underspecified and contrastive feedback to initially ambiguous descriptions in an interleaved manner (within subjects), task performance is similar for both approaches. This suggests that listeners engage more intensely with the system when they can expect it to be cooperative. This, rather than the actual informativity of the spoken feedback, may determine the efficiency of information uptake and performance.},
    author = {Mitev, Nikolina and Renner, Patrick and Pfeiffer, Thies and Staudte, Maria},
    issn = {2365-7464},
    journal = {Cognitive Research: Principles and Implications},
    keywords = {Human–computer interaction, Natural language generation, Listener gaze, Referential success, Multimodal systems},
    number = {3},
    publisher = {Springer Nature},
    title = {{Towards efficient human–machine collaboration. Effects of gaze-driven feedback and engagement on performance}},
    url = {https://pub.uni-bielefeld.de/record/2932893},
    doi = {10.1186/s41235-018-0148-x},
    volume = {3},
    year = {2018},
    }
  • F. Summann, T. Pfeiffer, and M. Preis, “Kooperative Entwicklung digitaler Services an Hochschulbibliotheken,” Bibliotheksdienst, vol. 52, iss. 8, p. 595–609, 2018. doi:10.1515/bd-2018-0070
    [BibTeX] [Download PDF]
    @article{2930431,
    author = {Summann, Friedrich and Pfeiffer, Thies and Preis, Matthias},
    issn = {2194-9646},
    journal = {Bibliotheksdienst},
    keywords = {Virtuelle Forschungsumgebung, Virtuelle Realität},
    number = {8},
    pages = {595--609},
    publisher = {de Gruyter },
    title = {{Kooperative Entwicklung digitaler Services an Hochschulbibliotheken}},
    url = {https://pub.uni-bielefeld.de/record/2930431},
    doi = {10.1515/bd-2018-0070},
    volume = {52},
    year = {2018},
    }
  • M. Andersen, K. L. Nielbo, U. Schjoedt, T. Pfeiffer, A. Roepstorff, and J. Sørensen, “Predictive minds in Ouija board sessions,” Phenomenology and the Cognitive Sciences, vol. 18, iss. 3, p. 577–588, 2018. doi:10.1007/s11097-018-9585-8
    [BibTeX] [Abstract] [Download PDF]
    Ouija board sessions are illustrious examples of how subjective feelings of control, the Sense of Agency (SoA), can be manipulated in real-life settings. We present findings from a field experiment at a paranormal conference, where Ouija enthusiasts were equipped with eye trackers while using the Ouija board. Our results show that participants have a significantly lower probability of visually predicting letters in a Ouija board session compared to a condition in which they are instructed to deliberately spell out words with the Ouija board planchette. Our results also show that Ouija board believers report lower SoA compared to sceptical participants. These results support previous research which claims that low sense of agency is caused by a combination of retrospective inference and an inhibition of predictive processes. Our results show that users in Ouija board sessions become increasingly better at predicting letters as responses unfold over time, and that meaningful responses from the Ouija board can only be accounted for when considering interactions that go on at the participant-pair level. These results suggest that meaningful responses from the Ouija board may be an emergent property of interacting and predicting minds that increasingly impose structure on initially random events in Ouija sessions.
    @article{2921413,
    abstract = {Ouija board sessions are illustrious examples of how subjective feelings of control, the Sense of Agency (SoA), can be manipulated in real-life settings. We present findings from a field experiment at a paranormal conference, where Ouija enthusiasts were equipped with eye trackers while using the Ouija board. Our results show that participants have a significantly lower probability of visually predicting letters in a Ouija board session compared to a condition in which they are instructed to deliberately spell out words with the Ouija board planchette. Our results also show that Ouija board believers report lower SoA compared to sceptical participants. These results support previous research which claims that low sense of agency is caused by a combination of retrospective inference and an inhibition of predictive processes. Our results show that users in Ouija board sessions become increasingly better at predicting letters as responses unfold over time, and that meaningful responses from the Ouija board can only be accounted for when considering interactions that go on at the participant-pair level. These results suggest that meaningful responses from the Ouija board may be an emergent property of interacting and predicting minds that increasingly impose structure on initially random events in Ouija sessions.},
    author = {Andersen, Marc and Nielbo, Kristoffer L. and Schjoedt, Uffe and Pfeiffer, Thies and Roepstorff, Andreas and Sørensen, Jesper},
    issn = {1572-8676},
    journal = {Phenomenology and the Cognitive Sciences},
    number = {3},
    pages = {577--588},
    publisher = {Springer Nature},
    title = {{Predictive minds in Ouija board sessions}},
    url = {https://pub.uni-bielefeld.de/record/2921413},
    doi = {10.1007/s11097-018-9585-8},
    volume = {18},
    year = {2018},
    }
  • T. Pfeiffer, C. Hainke, L. Meyer, M. Fruhner, and M. Niebling, “Virtual SkillsLab – Trainingsanwendung zur Infusionsvorbereitung (Wettbewerbssieger) ,” in DeLFI Workshops 2018. Proceedings der Pre-Conference-Workshops der 16. E-Learning Fachtagung Informatik co-located with 16th e-Learning Conference of the German Computer Society (DeLFI 2018), Frankfurt, Germany, September 10, 2018, 2018.
    [BibTeX] [Abstract] [Download PDF]
    Nursing education has a great need for practical training. SkillsLabs, physical replicas of real working environments at the teaching institutions, offer the unique opportunity to acquire practical knowledge in close proximity to and with direct involvement of the instructors, and to interlink it with the theoretical education. Often, however, the necessary resources (rooms, equipment) are not available to a sufficient extent. Virtual SkillsLabs can partly cover this need and build a bridge between theory and practice. This contribution presents such an implementation with different levels of expansion.
    @inproceedings{2932685,
    abstract = {In der Ausbildung in der Pflege gibt es großen Bedarf an praktischem Training. SkillsLabs, physikalische Nachbauten von realen Arbeitsräumen an den Lehrstätten, bieten die einzigartige Möglichkeit, praktisches Wissen in direkter Nähe und in direkter Einbindung mit den Lehrenden zu erarbeiten und mit der theoretischen Ausbildung zu verzahnen. Häufig stehen jedoch die notwendigen Ressourcen (Räume, Arbeitsmittel) nicht in ausreichendem Maße zur Verfügung.
    Virtuelle SkillsLabs können hier den Bedarf zum Teil abdecken und eine Brücke zwischen Theorie und Praxis bilden. Im Beitrag wird eine solche Umsetzung mit verschiedenen Ausbaustufen vorgestellt.},
    author = {Pfeiffer, Thies and Hainke, Carolin and Meyer, Leonard and Fruhner, Maik and Niebling, Moritz},
    booktitle = {DeLFI Workshops 2018. Proceedings der Pre-Conference-Workshops der 16. E-Learning Fachtagung Informatik co-located with 16th e-Learning Conference of the German Computer Society (DeLFI 2018), Frankfurt, Germany, September 10, 2018},
    editor = {Schiffner, Daniel},
    issn = {1613-0073},
    keywords = {Virtual Skills Lab, Virtuelle Realität},
    title = {{Virtual SkillsLab - Trainingsanwendung zur Infusionsvorbereitung (Wettbewerbssieger) }},
    url = {https://pub.uni-bielefeld.de/record/2932685},
    volume = {2250},
    year = {2018},
    }
  • P. Agethen, V. Subramanian Sekar, F. Gaisbauer, T. Pfeiffer, M. Otto, and E. Rukzio, “Behavior Analysis of Human Locomotion in Real World and Virtual Reality for Manufacturing Industry,” ACM Transactions on Applied Perception (TAP), vol. 15, iss. 3, 2018. doi:10.1145/3230648
    [BibTeX] [Abstract] [Download PDF]
    With the rise of immersive visualization techniques, many domains within the manufacturing industry are increasingly validating production processes in virtual reality (VR). The validity of the results gathered in such simulations, however, is widely unknown – in particular with regard to human locomotion behavior. To bridge this gap, this paper presents an experiment, analyzing the behavioral disparity between human locomotion being performed without any equipment and in immersive virtual reality while wearing a head-mounted display (HMD). The presented study (n = 30) is split up in three sections and covers linear walking, non-linear walking and obstacle avoidance. Special care has been given to design the experiment so that findings are generally valid and can be applied to a wide range of domains beyond the manufacturing industry. The findings provide novel insights into the effect of immersive virtual reality on specific gait parameters. In total, a comprehensive sample of 18.09 km is analyzed. The results reveal that the HMD had a medium effect (up to 13%) on walking velocity, on non-linear walking towards an oriented target and on clearance distance. The overall-differences are modeled using multiple regression models, thus allowing the general usage within various domains. Summarizing, it can be concluded that VR can be used to analyze and plan human locomotion, however, specific details may have to be adjusted in order to transfer findings to the real world.
    @article{2921256,
    abstract = {With the rise of immersive visualization techniques, many domains within the manufacturing industry are increasingly validating production processes in virtual reality (VR). The validity of the results gathered in such simulations, however, is widely unknown - in particular with regard to human locomotion behavior. To bridge this gap, this paper presents an experiment, analyzing the behavioral disparity between human locomotion being performed without any equipment and in immersive virtual reality while wearing a head-mounted display (HMD). The presented study (n = 30) is split up in three sections and covers linear walking, non-linear walking and obstacle avoidance. Special care has been given to design the experiment so that findings are generally valid and can be applied to a wide range of domains beyond the manufacturing industry. The findings provide novel insights into the effect of immersive virtual reality on specific gait parameters. In total, a comprehensive sample of 18.09 km is analyzed. The results reveal that the HMD had a medium effect (up to 13%) on walking velocity, on non-linear walking towards an oriented target and on clearance distance. The overall-differences are modeled using multiple regression models, thus allowing the general usage within various domains. Summarizing, it can be concluded that VR can be used to analyze and plan human locomotion, however, specific details may have to be adjusted in order to transfer findings to the real world.},
    author = {Agethen, Philipp and Subramanian Sekar, Viswa and Gaisbauer, Felix and Pfeiffer, Thies and Otto, Michael and Rukzio, Enrico},
    issn = {1544-3558},
    journal = {ACM Transactions on Applied Perception (TAP)},
    keywords = {CLF_RESEARCH_HIGHLIGHT},
    number = {3},
    publisher = {ACM},
    title = {{Behavior Analysis of Human Locomotion in Real World and Virtual Reality for Manufacturing Industry}},
    url = {https://nbn-resolving.org/urn:nbn:de:0070-pub-29212563, https://pub.uni-bielefeld.de/record/2921256},
    doi = {10.1145/3230648},
    volume = {15},
    year = {2018},
    }
  • S. Meyer zu Borgsen, P. Renner, F. Lier, T. Pfeiffer, and S. Wachsmuth, “Improving Human-Robot Handover Research by Mixed Reality Techniques,” in VAM-HRI 2018. The Inaugural International Workshop on Virtual, Augmented and Mixed Reality for Human-Robot Interaction. Proceedings, 2018. doi:10.4119/unibi/2919957
    [BibTeX] [Download PDF]
    @inproceedings{2919957,
    author = {Meyer zu Borgsen, Sebastian and Renner, Patrick and Lier, Florian and Pfeiffer, Thies and Wachsmuth, Sven},
    booktitle = {VAM-HRI 2018. The Inaugural International Workshop on Virtual, Augmented and Mixed Reality for Human-Robot Interaction. Proceedings},
    location = {Chicago},
    title = {{Improving Human-Robot Handover Research by Mixed Reality Techniques}},
    url = {https://nbn-resolving.org/urn:nbn:de:0070-pub-29199579, https://pub.uni-bielefeld.de/record/2919957},
    doi = {10.4119/unibi/2919957},
    year = {2018},
    }
  • P. Agethen, M. Link, F. Gaisbauer, T. Pfeiffer, and E. Rukzio, “Counterbalancing virtual reality induced temporal disparities of human locomotion for the manufacturing industry,” in Proceedings of the 11th Annual International Conference on Motion, Interaction, and Games – MIG ’18, 2018. doi:10.1145/3274247.3274517
    [BibTeX] [Download PDF]
    @inproceedings{2932220,
    author = {Agethen, Philipp and Link, Max and Gaisbauer, Felix and Pfeiffer, Thies and Rukzio, Enrico},
    booktitle = {Proceedings of the 11th Annual International Conference on Motion, Interaction, and Games - MIG '18},
    isbn = {978-1-4503-6015-9},
    publisher = {ACM Press},
    title = {{Counterbalancing virtual reality induced temporal disparities of human locomotion for the manufacturing industry}},
    url = {https://pub.uni-bielefeld.de/record/2932220},
    doi = {10.1145/3274247.3274517},
    year = {2018},
    }
  • T. Pfeiffer and P. Renner, “Quantifying the interplay of gaze and gesture in deixis using an experimental-simulative approach,” in Eye-tracking in Interaction. Studies on the role of eye gaze in dialogue, G. Brône and B. Oben, Eds., John Benjamins Publishing Company, 2018, vol. 10, p. 109–138. doi:10.1075/ais.10.06pfe
    [BibTeX] [Download PDF]
    @inbook{2931842,
    author = {Pfeiffer, Thies and Renner, Patrick},
    booktitle = {Eye-tracking in Interaction. Studies on the role of eye gaze in dialogue},
    editor = {Brône, Geert and Oben, Bert},
    isbn = {9789027201522},
    pages = {109--138},
    publisher = {John Benjamins Publishing Company},
    title = {{Quantifying the interplay of gaze and gesture in deixis using an experimental-simulative approach}},
    url = {https://pub.uni-bielefeld.de/record/2931842},
    doi = {10.1075/ais.10.06pfe},
    volume = {10},
    year = {2018},
    }
  • J. Blattgerste, P. Renner, and T. Pfeiffer, “Advantages of Eye-Gaze over Head-Gaze-Based Selection in Virtual and Augmented Reality under Varying Field of Views,” in COGAIN ’18. Proceedings of the Symposium on Communication by Gaze Interaction, 2018. doi:10.1145/3206343.3206349
    [BibTeX] [Abstract] [Download PDF]
    The current best practice for hands-free selection using Virtual and Augmented Reality (VR/AR) head-mounted displays is to use head-gaze for aiming and dwell-time or clicking for triggering the selection. There is an observable trend for new VR and AR devices to come with integrated eye-tracking units to improve rendering, to provide means for attention analysis or for social interactions. Eye-gaze has been successfully used for human-computer interaction in other domains, primarily on desktop computers. In VR/AR systems, aiming via eye-gaze could be significantly faster and less exhausting than via head-gaze. To evaluate benefits of eye-gaze-based interaction methods in VR and AR, we compared aiming via head-gaze and aiming via eye-gaze. We show that eye-gaze outperforms head-gaze in terms of speed, task load, required head movement and user preference. We furthermore show that the advantages of eye-gaze further increase with larger FOV sizes.
    @inproceedings{2919602,
    abstract = {The current best practice for hands-free selection using Virtual and Augmented Reality (VR/AR) head-mounted displays is to use head-gaze for aiming and dwell-time or clicking for triggering the selection. There is an observable trend for new VR and AR devices to come with integrated eye-tracking units to improve rendering, to provide means for attention analysis or for social interactions. Eye-gaze has been successfully used for human-computer interaction in other domains, primarily on desktop computers. In VR/AR systems, aiming via eye-gaze could be significantly faster and less exhausting than via head-gaze.
    To evaluate benefits of eye-gaze-based interaction methods in VR and AR, we compared aiming via head-gaze and aiming via eye-gaze. We show that eye-gaze outperforms head-gaze in terms of speed, task load, required head movement and user preference. We furthermore show that the advantages of eye-gaze further increase with larger FOV sizes.},
    author = {Blattgerste, Jonas and Renner, Patrick and Pfeiffer, Thies},
    booktitle = {COGAIN '18. Proceedings of the Symposium on Communication by Gaze Interaction},
    isbn = {978-1-4503-5790-6},
    keywords = {Augmented Reality, Virtual Reality, Assistance Systems, Head-Mounted Displays, Eye-Tracking, Field of View, Human Computer Interaction},
    location = {Warsaw, Poland},
    publisher = {ACM},
    title = {{Advantages of Eye-Gaze over Head-Gaze-Based Selection in Virtual and Augmented Reality under Varying Field of Views}},
    url = {https://nbn-resolving.org/urn:nbn:de:0070-pub-29196024, https://pub.uni-bielefeld.de/record/2919602},
    doi = {10.1145/3206343.3206349},
    year = {2018},
    }
  • N. Mitev, P. Renner, T. Pfeiffer, and M. Staudte, “Using Listener Gaze to Refer in Installments Benefits Understanding,” in Proceedings of the 40th Annual Conference of the Cognitive Science Society, 2018.
    [BibTeX] [Download PDF]
    @inproceedings{2930542,
    author = {Mitev, Nikolina and Renner, Patrick and Pfeiffer, Thies and Staudte, Maria},
    booktitle = {Proceedings of the 40th Annual Conference of the Cognitive Science Society},
    location = {Madison, Wisconsin, USA},
    title = {{Using Listener Gaze to Refer in Installments Benefits Understanding}},
    url = {https://pub.uni-bielefeld.de/record/2930542},
    year = {2018},
    }
  • T. Pfeiffer and N. Pfeiffer-Leßmann, “Virtual Prototyping of Mixed Reality Interfaces with Internet of Things (IoT) Connectivity,” i-com, vol. 17, iss. 2, p. 179–186, 2018. doi:10.1515/icom-2018-0025
    [BibTeX] [Abstract] [Download PDF]
    One key aspect of the Internet of Things (IoT) is that human-machine interfaces are disentangled from the physicality of the devices. This provides designers with more freedom, but also may lead to more abstract interfaces, as they lack the natural context created by the presence of the machine. Mixed Reality (MR), on the other hand, is a key technology that enables designers to create user interfaces anywhere, either linked to a physical context (augmented reality, AR) or embedded in a virtual context (virtual reality, VR). Especially today, designing MR interfaces is a challenge, as there is not yet a common design language nor a set of standard functionalities or patterns. In addition to that, neither customers nor future users have substantial experience in using MR interfaces.
    @article{2930374,
    abstract = {One key aspect of the Internet of Things (IoT) is that human-machine interfaces are disentangled from the physicality of the devices. This provides designers with more freedom, but also may lead to more abstract interfaces, as they lack the natural context created by the presence of the machine. Mixed Reality (MR), on the other hand, is a key technology that enables designers to create user interfaces anywhere, either linked to a physical context (augmented reality, AR) or embedded in a virtual context (virtual reality, VR). Especially today, designing MR interfaces is a challenge, as there is not yet a common design language nor a set of standard functionalities or patterns. In addition to that, neither customers nor future users have substantial experience in using MR interfaces.},
    author = {Pfeiffer, Thies and Pfeiffer-Leßmann, Nadine},
    issn = {2196-6826},
    journal = {i-com},
    number = {2},
    pages = {179--186},
    publisher = {Walter de Gruyter GmbH},
    title = {{Virtual Prototyping of Mixed Reality Interfaces with Internet of Things (IoT) Connectivity}},
    url = {https://pub.uni-bielefeld.de/record/2930374},
    doi = {10.1515/icom-2018-0025},
    volume = {17},
    year = {2018},
    }
  • P. Renner, F. Lier, F. Friese, T. Pfeiffer, and S. Wachsmuth, “Facilitating HRI by Mixed Reality Techniques,” in HRI ’18 Companion: 2018 ACM/IEEE International Conference on Human-Robot Interaction Companion, 2018. doi:10.1145/3173386.3177032
    [BibTeX] [Abstract] [Download PDF]
    Mobile robots start to appear in our everyday life, e.g., in shopping malls, airports, nursing homes or warehouses. Often, these robots are operated by non-technical staff with no prior experience/education in robotics. Additionally, as with all new technology, there is a certain reservedness when it comes to accepting robots in our personal space. In this work, we propose making use of state-of-the-art Mixed Reality (MR) technology to facilitate acceptance and interaction with mobile robots. By integrating a Microsoft HoloLens into the robot’s operating space, the MR device can be used to a) visualize the robot’s behavior-state and sensor data, b) visually notify the user about planned/future behavior and possible problems/obstacles of the robot, and c) actively use the device as an additional external sensor source. Moreover, by using the HoloLens, users can operate and interact with the robot without being close to it, as the robot is able to “sense with the users’ eyes”.
    @inproceedings{2916803,
    abstract = {Mobile robots start to appear in our everyday life, e.g., in shopping malls, airports, nursing homes or warehouses. Often, these robots are operated by non-technical staff with no prior experience/education in robotics. Additionally, as with all new technology, there is a certain reservedness when it comes to accepting robots in our personal space. In this work, we propose making use of state-of-the-art Mixed Reality (MR) technology to facilitate acceptance and interaction with mobile robots. By integrating a Microsoft HoloLens into the robot's operating space, the MR device can be used to a) visualize the robot's behavior-state and sensor data, b) visually notify the user about planned/future behavior and possible problems/obstacles of the robot, and c) actively use the device as an additional external sensor source. Moreover, by using the HoloLens, users can operate and interact with the robot without being close to it, as the robot is able to \textit{sense with the users' eyes}.},
    author = {Renner, Patrick and Lier, Florian and Friese, Felix and Pfeiffer, Thies and Wachsmuth, Sven},
    booktitle = {HRI '18 Companion: 2018 ACM/IEEE International Conference on Human-Robot Interaction Companion},
    isbn = {978-1-4503-5615-2/18/03},
    keywords = {Augmented Reality, Natural Interfaces, Sensor Fusion},
    location = {Chicago},
    publisher = {ACM/IEEE},
    title = {{Facilitating HRI by Mixed Reality Techniques}},
    url = {https://nbn-resolving.org/urn:nbn:de:0070-pub-29168035, https://pub.uni-bielefeld.de/record/2916803},
    doi = {10.1145/3173386.3177032},
    year = {2018},
    }
  • P. Renner, J. Blattgerste, and T. Pfeiffer, “A Path-based Attention Guiding Technique for Assembly Environments with Target Occlusions,” in IEEE Virtual Reality 2018, 2018.
    [BibTeX] [Download PDF]
    @inproceedings{2917385,
    author = {Renner, Patrick and Blattgerste, Jonas and Pfeiffer, Thies},
    booktitle = {IEEE Virtual Reality 2018},
    location = {Reutlingen},
    publisher = {IEEE},
    title = {{A Path-based Attention Guiding Technique for Assembly Environments with Target Occlusions}},
    url = {https://nbn-resolving.org/urn:nbn:de:0070-pub-29173854, https://pub.uni-bielefeld.de/record/2917385},
    year = {2018},
    }
  • N. Pfeiffer-Leßmann and T. Pfeiffer, “ExProtoVAR. A Lightweight Tool for Experience-Focused Prototyping of Augmented Reality Applications Using Virtual Reality,” in HCI International 2018 – Posters’ Extended Abstracts, Springer International Publishing, 2018, vol. 851, p. 311–318. doi:10.1007/978-3-319-92279-9_42
    [BibTeX] [Abstract] [Download PDF]
    The exciting thing about augmented reality (AR) as an emerging technology is that there is not yet a common design language, nor a set of standard functionalities or patterns. Users and designers do not have much experience with AR interfaces. When designing AR projects for customers, this is a huge challenge. It is our conviction that prototypes play a crucial role in the design process of AR experiences by capturing the key interactions of AR and delivering early user experiences situated in the relevant context. With ExProtoVAR, we present a lightweight tool to create interactive virtual prototypes of AR applications.
    @inbook{2921194,
    abstract = {The exciting thing about augmented reality (AR) as an emerging technology is that there is not yet a common design language, nor a set of standard functionalities or patterns. Users and designers do not have much experience with AR interfaces. When designing AR projects for customers, this is a huge challenge. It is our conviction that prototypes play a crucial role in the design process of AR experiences by capturing the key interactions of AR and delivering early user experiences situated in the relevant context. With ExProtoVAR, we present a lightweight tool to create interactive virtual prototypes of AR applications.},
    author = {Pfeiffer-Leßmann, Nadine and Pfeiffer, Thies},
    booktitle = {HCI International 2018 – Posters' Extended Abstracts},
    isbn = {978-3-319-92278-2},
    issn = {1865-0929},
    keywords = {Augmented Reality, Erweiterte Realität, Virtual Reality, Virtuelle Realität, Prototyping},
    pages = {311--318},
    publisher = {Springer International Publishing},
    title = {{ExProtoVAR. A Lightweight Tool for Experience-Focused Prototyping of Augmented Reality Applications Using Virtual Reality}},
    url = {https://pub.uni-bielefeld.de/record/2921194},
    doi = {10.1007/978-3-319-92279-9_42},
    volume = {851},
    year = {2018},
    }
  • J. Blattgerste, P. Renner, B. Strenge, and T. Pfeiffer, “In-Situ Instructions Exceed Side-by-Side Instructions in Augmented Reality Assisted Assembly,” in Proceedings of the 11th ACM International Conference on PErvasive Technologies Related to Assistive Environments (PETRA’18), 2018, p. 133–140. doi:10.1145/3197768.3197778
    [BibTeX] [Abstract] [Download PDF]
    Driven by endeavors towards Industry 4.0, there is increasing interest in augmented reality (AR) as an approach for assistance in areas like picking, assembly and maintenance. In this work our focus is on AR-based assistance in manual assembly. The design space for AR instructions in this context includes, e.g., side-by-side, 3D or projected 2D presentations. In previous research, the low quality of the AR devices available at the respective time had a significant impact on performance evaluations. Today, a proper and up-to-date comparison of different presentation approaches is missing. This paper presents an improved 3D in-situ instruction and compares it to previously presented techniques. All instructions are implemented on up-to-date AR hardware, namely the Microsoft HoloLens. To support reproducible research, the comparison is made using a standardized benchmark scenario. The results show, contrary to previous research, that in-situ instructions on state-of-the-art AR glasses outperform side-by-side instructions in terms of errors made, task completion time, and perceived task load.
    @inproceedings{2919601,
    abstract = {Driven by endeavors towards Industry 4.0, there is increasing interest in augmented reality (AR) as an approach for assistance in areas like picking, assembly and maintenance. In this work our focus is on AR-based assistance in manual assembly. The design space for AR instructions in this context includes, e.g., side-by-side, 3D or projected 2D presentations. In previous research, the low quality of the AR devices available at the respective time had a significant impact on performance evaluations. Today, a proper and up-to-date comparison of different presentation approaches is missing. This paper presents an improved 3D in-situ instruction and compares it to previously presented techniques. All instructions are implemented on up-to-date AR hardware, namely the Microsoft HoloLens. To support reproducible research, the comparison is made using a standardized benchmark scenario. The results show, contrary to previous research, that in-situ instructions on state-of-the-art AR glasses outperform side-by-side instructions in terms of errors made, task completion time, and perceived task load.},
    author = {Blattgerste, Jonas and Renner, Patrick and Strenge, Benjamin and Pfeiffer, Thies},
    booktitle = {Proceedings of the 11th ACM International Conference on PErvasive Technologies Related to Assistive Environments (PETRA'18)},
    isbn = {978-1-4503-6390-7},
    keywords = {Augmented Reality, Assistance Systems, Head-Mounted Displays, Smart Glasses, Benchmarking},
    location = {Corfu, Greece},
    pages = {133--140},
    publisher = {ACM},
    title = {{In-Situ Instructions Exceed Side-by-Side Instructions in Augmented Reality Assisted Assembly}},
    url = {https://nbn-resolving.org/urn:nbn:de:0070-pub-29196019, https://pub.uni-bielefeld.de/record/2919601},
    doi = {10.1145/3197768.3197778},
    year = {2018},
    }
  • F. Summann, T. Pfeiffer, J. Hellriegel, S. Wolf, and C. Pietsch, “Virtuelle Realität zur Bereitstellung integrierter Suchumgebungen.” 2017.
    [BibTeX] [Abstract] [Download PDF]
    At the Cluster of Excellence Cognitive Interaction Technology (CITEC) at Bielefeld University, Thies Pfeiffer has been working on virtual reality (VR) since 2013. Building on concrete project collaborations with the university library (publication and research data management), the idea emerged to apply the interaction techniques developed in the lab, which build on the consumer VR hardware newly available in 2016, to suitable scenarios. In the joint discussion, search emerged as an interesting application area: in a concrete search environment (connected via an API), documents should be found quickly and comfortably, and results should be stored and processed further. The conceptual approach is to offer the user intuitively associated metaphors (such as bookshelves or book carts for storing results, or objects for marking queries) in order to visualize processes in a familiar way. The user moves freely through the virtual space and, in contrast to a desktop or mobile device, can also structure the retrieved information spatially. The Bielefeld BASE database (i.e., the Bielefeld Academic Search Engine, by now with more than 100 million indexed documents) was chosen as the search system, primarily because its interface, used by numerous external institutions (CQL-oriented query language with XML- or JSON-compliant result delivery), has proven to be universal and robust while providing extensive functionality. A virtual search environment is planned that realizes retrieval in a search space of online documents and allows the results to be managed intuitively, through result display, sorting, optimization of the search result via search refinement (drill-down based) or query expansion, search history, and reuse of stored results. At the same time, the access and license status is visualized and, where possible, the display of the object itself is integrated.
    @inproceedings{2913127,
    abstract = {Im Exzellenzcluster Kognitive Interaktionstechnologie (CITEC) an der Universität Bielefeld beschäftigt Thies Pfeiffer sich seit 2013 mit der virtuellen Realität (VR). Ausgehend von konkreten Projektkooperationen (Publikations- und Forschungsdatenmanagement) mit der Universitäts-bibliothek ist die Idee entstanden, die im Labor entwickelten Interaktionstechniken, die sich auf in 2016 neu angebotene Konsumer-VR-Hardware stützen, auf geeignete Szenarien anzuwenden. Als interessantes Anwendungsgebiet kristallisierte sich im gemeinsamen Diskurs die Suche heraus: in einer konkreten Suchumgebung (via API angebunden) sollen komfortabel und schnell Dokumente gesucht, Ergebnisse abgelegt und weiterbearbeitet werden können. Konzeptioneller Ansatz ist es, dem Nutzer intuitiv assoziierte Metaphern anzubieten (etwa Bücherregale oder auch Bücherwagen zur Ablage, Gegenstände zur Anfragekennzeichnung), um Vorgänge in vertrauter Weise zu visualisieren. Dabei bewegt sich der Nutzer frei durch den virtuellen Raum und kann die gefundenen Informationen im Gegensatz zum Desktop oder mobilen Endgerät auch räumlich strukturieren. Als Suchsystem wurde die Bielefelder BASE-Datenbank (d.i. Bielefeld Academic Search Engine mit inzwischen mehr als 100 Mill. indexierte Dokumente) ausgewählt, zunächst, weil die von zahlreichen externen Institutionen genutzte Schnittstelle (Anfragesprache CQL-orientiert und XML- oder JSON-konforme Ergebnisübermittlung) sich als universell und robust erwiesen hat und dabei umfangreiche Funktionen bereitstellt. Geplant wird eine virtuelle Suchumgebung, die ein Retrieval in einem Suchraum von Online-Dokumenten realisiert und dabei die Ergebnisse intuitiv verwalten lässt, durch Ergebnisanzeige, Sortierung, Optimierung des Suchergebnisses durch Suchverfeinerung (Drilldown-basiert) oder Anfrageerweiterung, Suchhistorie und Wiederverwendung von abgelegten Ergebnissen. Gleichzeitig wird der Zugriff- und Lizenzstatus visualisiert und, wenn möglich, die Anzeige des Objektes integriert.},
    author = {Summann, Friedrich and Pfeiffer, Thies and Hellriegel, Jens and Wolf, Sebastian and Pietsch, Christian},
    keywords = {Virtual Reality, Bielefeld Academic Search Engine},
    location = {Frankfurt},
    title = {{Virtuelle Realität zur Bereitstellung integrierter Suchumgebungen}},
    url = {https://nbn-resolving.org/urn:nbn:de:0070-pub-29131278, https://pub.uni-bielefeld.de/record/2913127},
    year = {2017},
    }
  • J. Blattgerste, B. Strenge, P. Renner, T. Pfeiffer, and K. Essig, “Comparing Conventional and Augmented Reality Instructions for Manual Assembly Tasks,” in Proceedings of the 10th International Conference on PErvasive Technologies Related to Assistive Environments, 2017, p. 75 – 82. doi:10.1145/3056540.3056547
    [BibTeX] [Abstract] [Download PDF]
    Augmented Reality (AR) gains increased attention as a means to provide assistance for different human activities. Hereby the suitability of AR does not only depend on the respective task, but also to a high degree on the respective device. In a standardized assembly task, we tested AR-based in-situ assistance against conventional pictorial instructions using a smartphone, Microsoft HoloLens and Epson Moverio BT-200 smart glasses as well as paper-based instructions. Participants solved the task fastest using the paper instructions, but made less errors with AR assistance on the Microsoft HoloLens smart glasses than with any other system. Methodically we propose operational definitions of time segments and other optimizations for standardized benchmarking of AR assembly instructions.
    @inproceedings{2909322,
    abstract = {Augmented Reality (AR) gains increased attention as a means to provide assistance for different human activities. Hereby the suitability of AR does not only depend on the respective task, but also to a high degree on the respective device. In a standardized assembly task, we tested AR-based in-situ assistance against conventional pictorial instructions using a smartphone, Microsoft HoloLens and Epson Moverio BT-200 smart glasses as well as paper-based instructions. Participants solved the task fastest using the paper instructions, but made less errors with AR assistance on the Microsoft HoloLens smart glasses than with any other system. Methodically we propose operational definitions of time segments and other optimizations for standardized benchmarking of AR assembly instructions.},
    author = {Blattgerste, Jonas and Strenge, Benjamin and Renner, Patrick and Pfeiffer, Thies and Essig, Kai},
    booktitle = {Proceedings of the 10th International Conference on PErvasive Technologies Related to Assistive Environments},
    isbn = {978-1-4503-5227-7},
    keywords = {Assistance Systems, Head-Mounted Displays, Smartglasses, Benchmarking, CLF_RESEARCH_HIGHLIGHT},
    location = {Island of Rhodes, Greece},
    pages = {75 -- 82},
    publisher = {ACM},
    title = {{Comparing Conventional and Augmented Reality Instructions for Manual Assembly Tasks}},
    url = {https://nbn-resolving.org/urn:nbn:de:0070-pub-29093227, https://pub.uni-bielefeld.de/record/2909322},
    doi = {10.1145/3056540.3056547},
    year = {2017},
    }
  • P. Renner and T. Pfeiffer, “[POSTER] Augmented Reality Assistance in the Central Field-of-View Outperforms Peripheral Displays for Order Picking: Results from a Virtual Reality Simulation Study,” in ISMAR 2017, 2017. doi:10.1109/ISMAR-Adjunct.2017.59
    [BibTeX] [Abstract] [Download PDF]
    One area in which glasses-based augmented reality (AR) is successfully applied in industry is order picking in logistics (pick-by-vision). Here, the almost hands-free operation and the direct integration into the digital workflow provided by augmented reality glasses are direct advantages. A common non-AR guidance technique for order picking is pick-by-light. This is an efficient approach for single users and low numbers of alternative targets. AR glasses have the potential to overcome these limitations. However, making a grounded decision on the specific AR device and the particular guidance techniques to choose for a specific scenario is difficult, given the diversity of device characteristics and the lack of experience with smart glasses in industry at larger scale. The contributions of the paper are twofold. First, we present a virtual reality (VR) simulation approach to ground design decisions for AR-based solutions and apply it to the scenario of order picking. Second, we present results from a simulator study with implemented simulations for monocular and binocular head-mounted displays and compared existing techniques for attention guiding with our own SWave approach and the integration of eye tracking. Our results show clear benefits for the use of pick-by-vision compared to pick-by-light. In addition to that, we can show that binocular AR solutions outperform monocular ones in the attention guiding task.
    @inproceedings{2913138,
    abstract = {One area in which glasses-based augmented reality (AR) is successfully applied in industry is order picking in logistics (pick-by-vision). Here, the almost hands-free operation and the direct integration into the digital workflow provided by augmented reality glasses are direct advantages. A common non-AR guidance technique for order picking is pick-by-light. This is an efficient approach for single users and low numbers of alternative targets. AR glasses have the potential to overcome these limitations. However, making a grounded decision on the specific AR device and the particular guidance techniques to choose for a specific scenario is difficult, given the diversity of device characteristics and the lack of experience with smart glasses in industry at larger scale. The contributions of the paper are twofold. First, we present a virtual reality (VR) simulation approach to ground design decisions for AR-based solutions and apply it to the scenario of order picking. Second, we present results from a simulator study with implemented simulations for monocular and binocular head-mounted displays and compared existing techniques for attention guiding with our own SWave approach and the integration of eye tracking. Our results show clear benefits for the use of pick-by-vision compared to pick-by-light. In addition to that, we can show that binocular AR solutions outperform monocular ones in the attention guiding task.},
    author = {Renner, Patrick and Pfeiffer, Thies},
    booktitle = {ISMAR 2017},
    keywords = {Augmented Reality, Order Picking, Virtual Reality Simulation},
    location = {Nantes},
    publisher = {IEEE},
    title = {{[POSTER] Augmented Reality Assistance in the Central Field-of-View Outperforms Peripheral Displays for Order Picking: Results from a Virtual Reality Simulation Study}},
    url = {https://nbn-resolving.org/urn:nbn:de:0070-pub-29131389, https://pub.uni-bielefeld.de/record/2913138},
    doi = {10.1109/ISMAR-Adjunct.2017.59},
    year = {2017},
    }
  • C. Eghtebas, Y. S. Pai, K. Väänänen, T. Pfeiffer, J. Meyer, and S. Lukosch, “Initial Model of Social Acceptability for Human Augmentation Technologies,” in Amplify Workshop @ CHI 2017, 2017.
    [BibTeX] [Abstract] [Download PDF]
    Academia and industry engage in major efforts to develop technologies for augmenting human senses and activities. Many of these technologies, such as augmented reality (AR) and virtual reality (VR) head mounted displays (HMD), haptic augmentation systems, and exoskeletons can be applied in numerous usage contexts and scenarios. We argue that these technologies may strongly affect the perceptions of and interactions with other people in the social contexts where they are used. The altered interactions may lead to rejection of the augmentations. In this position paper we present a set of potential usage scenarios and an initial model of acceptance of augmentation technologies by users and other people involved in the social context.
    @inproceedings{2913140,
    abstract = {Academia and industry engage in major efforts to develop technologies for augmenting human senses and activities. Many of these technologies, such as augmented reality (AR) and virtual reality (VR) head mounted displays (HMD), haptic augmentation systems, and exoskeletons can be applied in numerous usage contexts and scenarios. We argue that these technologies may strongly affect the perceptions of and interactions with other people in the social contexts where they are used. The altered interactions may lead to rejection of the augmentations. In this position paper we present a set of potential usage scenarios and an initial model of acceptance of augmentation technologies by users and other people involved in the social context.},
    author = {Eghtebas, Chloe and Pai, Yun Suen and Väänänen, Kaisa and Pfeiffer, Thies and Meyer, Joachim and Lukosch, Stephan},
    booktitle = {Amplify Workshop @ CHI 2017},
    title = {{Initial Model of Social Acceptability for Human Augmentation Technologies}},
    url = {https://pub.uni-bielefeld.de/record/2913140},
    year = {2017},
    }
  • P. Renner and T. Pfeiffer, “Eye-Tracking-Based Attention Guidance in Mobile Augmented Reality Assistance Systems,” in Abstracts of the 19th European Conference on Eye Movements, 2017, p. 218. doi:10.16910/jemr.10.6
    [BibTeX] [Download PDF]
    @inproceedings{2917484,
    author = {Renner, Patrick and Pfeiffer, Thies},
    booktitle = {Abstracts of the 19th European Conference on Eye Movements},
    editor = {Radach, Ralph and Deubel, Heiner and Vorstius, Christian and Hofmann, Markus J.},
    issn = {1995-8692},
    location = {Wuppertal},
    number = {6},
    pages = {218},
    title = {{Eye-Tracking-Based Attention Guidance in Mobile Augmented Reality Assistance Systems}},
    url = {https://pub.uni-bielefeld.de/record/2917484},
    doi = {10.16910/jemr.10.6},
    volume = {10},
    year = {2017},
    }
  • P. Renner and T. Pfeiffer, “Evaluation of Attention Guiding Techniques for Augmented Reality-based Assistance in Picking and Assembly Tasks,” in Proceedings of the 22nd International Conference on Intelligent User Interfaces Companion, 2017, p. 89–92. doi:10.1145/3030024.3040987
    [BibTeX] [Download PDF]
    @inproceedings{2908016,
    author = {Renner, Patrick and Pfeiffer, Thies},
    booktitle = {Proceedings of the 22nd International Conference on Intelligent User Interfaces Companion},
    isbn = {978-1-4503-4893-5},
    location = {Limassol},
    pages = {89--92},
    publisher = {ACM},
    title = {{Evaluation of Attention Guiding Techniques for Augmented Reality-based Assistance in Picking and Assembly Tasks}},
    url = {https://pub.uni-bielefeld.de/record/2908016},
    doi = {10.1145/3030024.3040987},
    year = {2017},
    }
  • P. Renner and T. Pfeiffer, “Attention Guiding Techniques using Peripheral Vision and Eye Tracking for Feedback in Augmented-Reality-based Assistance Systems,” in 2017 IEEE Symposium on 3D User Interfaces (3DUI), 2017, p. 186–194. doi:10.1109/3DUI.2017.7893338
    [BibTeX] [Abstract] [Download PDF]
    A limiting factor of current smart glasses-based augmented reality (AR) systems is their small field of view. AR assistance systems designed for tasks such as order picking or manual assembly are supposed to guide the visual attention of the user towards the item that is relevant next. This is a challenging task, as the user may initially be in an arbitrary position and orientation relative to the target. As a result of the small field of view, in most cases the target will initially not be covered by the AR display, even if it is visible to the user. This raises the question of how to design attention guiding for such “off-screen gaze” conditions. The central idea put forward in this paper is to display cues for attention guidance in a way that they can still be followed using peripheral vision. While the eyes’ focus point is beyond the AR display, certain visual cues presented on the display are still detectable by the human. In addition to that, guidance methods that are adaptive to the eye movements of the user are introduced and evaluated. In the frame of a research project on smart glasses-based assistance systems for a manual assembly station, several attention guiding techniques with and without eye tracking have been designed, implemented and tested. As evaluation method simulated AR in a virtual reality HMD setup was used, which supports a repeatable and highly-controlled experimental design.
    @inproceedings{2908162,
    abstract = {A limiting factor of current smart glasses-based augmented reality (AR) systems is their small field of view. AR assistance systems designed for tasks such as order picking or manual assembly are supposed to guide the visual attention of the user towards the item that is relevant next. This is a challenging task, as the user may initially be in an arbitrary position and orientation relative to the target. As a result of the small field of view, in most cases the target will initially not be covered by the AR display, even if it is visible to the user. This raises the question of how to design attention guiding for such “off-screen gaze” conditions. The central idea put forward in this paper is to display cues for attention guidance in a way that they can still be followed using peripheral vision. While the eyes’ focus point is beyond the AR display, certain visual cues presented on the display are still detectable by the human. In addition to that, guidance methods that are adaptive to the eye movements of the user are introduced and evaluated. In the frame of a research project on smart glasses-based assistance systems for a manual assembly station, several attention guiding techniques with and without eye tracking have been designed, implemented and tested. As evaluation method simulated AR in a virtual reality HMD setup was used, which supports a repeatable and highly-controlled experimental design.},
    author = {Renner, Patrick and Pfeiffer, Thies},
    booktitle = {2017 IEEE Symposium on 3D User Interfaces (3DUI)},
    keywords = {Virtuelle Realität, Virtual Reality, Augmented Reality, Erweiterte Realität},
    pages = {186--194},
    publisher = {IEEE},
    title = {{Attention Guiding Techniques using Peripheral Vision and Eye Tracking for Feedback in Augmented-Reality-based Assistance Systems}},
    url = {https://nbn-resolving.org/urn:nbn:de:0070-pub-29081628, https://pub.uni-bielefeld.de/record/2908162},
    doi = {10.1109/3DUI.2017.7893338},
    year = {2017},
    }
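    A purely illustrative sketch of one building block of such peripheral guiding techniques, namely computing where on the border of a small field-of-view display a cue should appear so that it points towards an off-screen target, is given below. This is not code from the paper; all names and the field-of-view values are assumptions.
    # Hypothetical sketch, not from the paper: place a border cue on a small-FOV
    # display so that it points towards an off-screen target.
    import numpy as np

    def border_cue(target_world, head_pos, head_rot, half_fov_deg=(15.0, 8.5)):
        """Return (x, y) in normalized display coordinates [-1, 1], or None if the
        target already lies inside the display area. half_fov_deg is an assumed
        horizontal/vertical half field of view of the AR display."""
        # Transform the target into head coordinates (columns of head_rot: right, up, -forward).
        d = head_rot.T @ (np.asarray(target_world) - np.asarray(head_pos))
        right, up, forward = d[0], d[1], -d[2]
        # Angular offset of the target relative to the current view direction.
        yaw = np.degrees(np.arctan2(right, forward))
        pitch = np.degrees(np.arctan2(up, forward))
        if forward > 0 and abs(yaw) <= half_fov_deg[0] and abs(pitch) <= half_fov_deg[1]:
            return None  # target is visible on the display, no guiding cue needed
        # Clamp to the display border: the cue sits at the edge of the visible area,
        # in the direction in which the head has to be turned.
        v = np.array([yaw / half_fov_deg[0], pitch / half_fov_deg[1]])
        return tuple(v / np.max(np.abs(v)))
    An eye-tracking-adaptive variant, as discussed in the paper, could additionally offset the cue position relative to the measured gaze point instead of the display center.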
  • C. Hainke and T. Pfeiffer, “Messen mentaler Auslastung in einer VR-Umgebung basierend auf Eyetrackingdaten,” in Virtuelle und Erweiterte Realität – 14. Workshop der GI-Fachgruppe VR/AR, 2017, p. 43–54.
    [BibTeX] [Abstract] [Download PDF]
    An attentive assistant proactively delivers relevant information at exactly the right moment. To do so, it must know the current context of action, have the required domain knowledge, and be able to assess the user's current cognitive state. Using a head-mounted display (HMD) with an integrated eye tracker, this work investigates whether cognitive load can be estimated reliably through real-time analysis of pupil size. It is shown that in a VR environment it is possible to relate pupil size to cognitive load with these means, as the pupil dilates under increased load. The approach exploits the fact that the brightness of the environment is primarily determined by the content shown on the HMD display and can therefore easily be determined at runtime.
    @inproceedings{2915064,
    abstract = {Ein aufmerksamer Assistent liefert proaktiv relevante Informationen immer genau zum richtigen Zeitpunkt. Er muss dafür über den aktuellen Handlungskontext und das erforderliche Domänenwissen verfügen und in der Lage sein, die aktuelle kognitive Situation des Nutzers gut einschätzen zu können. Die vorliegende Arbeit untersucht unter Einsatz eines Head-Mounted-Displays (HMDs) mit integriertem Eyetracker, ob die kognitive Belastung über die Echtzeitanalyse der Pupillengröße verlässlich geschätzt werden kann. Es wird gezeigt, dass es in einer VR-Umgebung möglich ist, mit diesen Mitteln die Pupillengröße mit der kognitiven Belastung in Zusammenhang zu setzen, da sie sich bei erhöhter Belastung vergrößert. Ausgenutzt wird dabei, dass die Helligkeit der Umgebung primär durch den Inhalt auf dem HMD-Display bestimmt wird und sich diese damit zur Laufzeit leicht bestimmen lässt.},
    author = {Hainke, Carolin and Pfeiffer, Thies},
    booktitle = {Virtuelle und Erweiterte Realität - 14. Workshop der GI-Fachgruppe VR/AR},
    editor = {Dörner, Ralf and Kruse, Rolf and Mohler, Betty and Weller, René},
    isbn = {978-3-8440-5606-8},
    keywords = {Virtuelle Realität, Mentale Auslastung, Eyetracking, Blickbewegungsmessung, Training},
    location = {Tübingen},
    pages = {43--54},
    publisher = {Shaker Verlag},
    title = {{Messen mentaler Auslastung in einer VR-Umgebung basierend auf Eyetrackingdaten}},
    url = {https://nbn-resolving.org/urn:nbn:de:0070-pub-29150647, https://pub.uni-bielefeld.de/record/2915064},
    year = {2017},
    }
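    The entry above estimates cognitive load from pupil size in VR and exploits the fact that scene brightness is determined by the rendered HMD content and is therefore known at runtime. A minimal, hypothetical sketch of this general idea (not the authors' implementation; the linear brightness model, the names and the window size are assumptions) separates a brightness-driven baseline from a load-related residual:
    # Hypothetical sketch, not from the paper: split pupil diameter into a
    # brightness-driven baseline and a load-related residual.
    import numpy as np

    def fit_light_response(luminance, pupil_diameter):
        """Fit a simple linear pupil(luminance) baseline from a calibration phase
        recorded without additional task load (assumed model)."""
        slope, intercept = np.polyfit(luminance, pupil_diameter, deg=1)
        return slope, intercept

    def load_index(luminance, pupil_diameter, baseline, window=90):
        """Residual dilation (measured minus brightness-predicted diameter),
        smoothed over a sliding window; larger values suggest higher load."""
        slope, intercept = baseline
        residual = np.asarray(pupil_diameter) - (slope * np.asarray(luminance) + intercept)
        return np.convolve(residual, np.ones(window) / window, mode="same")
    The display luminance per frame would be taken from the rendering pipeline; the smoothing window trades responsiveness against robustness to blinks and noise.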
  • T. Pfeiffer, F. Summann, J. Hellriegel, S. Wolf, and C. Pietsch, “Virtuelle Realität zur Bereitstellung integrierter Suchumgebungen,” o-bib. Das offene Bibliotheksjournal, vol. 4, iss. 4, p. 94–107, 2017. doi:10.5282/o-bib/2017H4S94-107
    [BibTeX] [Abstract] [Download PDF]
    The Cluster of Excellence Cognitive Interaction Technology (CITEC) at Bielefeld University has been working on virtual reality (VR) since 2013. Building on concrete project collaborations with the university library (publication and research data management), the idea emerged to use the consumer VR hardware newly available in 2016 to apply the interaction techniques developed in the lab to suitable scenarios in library environments. In the joint discussion, literature search emerged as an interesting application area: the Bielefeld BASE database (i.e., the Bielefeld Academic Search Engine, by now with more than 100 million indexed documents) was chosen as the search system. This choice was made because its API, already used by numerous external institutions, has proven to be universal and robust and provides extensive functionality. Based on CITEC's extensive theoretical and practical experience with VR techniques, a prototype of a virtual search environment was realized that allows retrieval in a search space of online documents. Users can compose their search query exploratively and manage the results intuitively. They are supported by result display, sorting, optimization of the search result via search refinement (drill-down based) or query expansion, and reuse of stored results. At the same time, the access and license status is visualized and a detail view of the object's metadata is integrated.
    @article{2915937,
    abstract = {Das Exzellenzcluster Kognitive Interaktionstechnologie (CITEC) an der Universität Bielefeld beschäftigt sich seit 2013 mit der virtuellen Realität (VR). Ausgehend von konkreten Projektkooperationen (Publikations- und Forschungsdatenmanagement) mit der Universitätsbibliothek ist die Idee entstanden, mit der in 2016 neu angebotenen Konsumer-VR-Hardware die im Labor entwickelten Interaktionstechniken auf geeignete Szenarien im Bereich von bibliothekarischen Umgebungen anzuwenden. Als interessantes Anwendungsgebiet kristallisierte sich im gemeinsamen Diskurs die Literatursuche heraus: Als Suchsystem wurde die Bielefelder BASE-Datenbank (d.i. Bielefeld Academic Search Engine mit inzwischen mehr als 100 Mio. indexierten Dokumenten) ausgewählt. Diese Auswahl erfolgte vor dem Hintergrund, dass sich die von zahlreichen externen Institutionen bereits genutzte API-Schnittstelle als universell und robust erwiesen hat und umfangreiche Funktionen bereitstellt. Auf der Grundlage der umfangreichen theoretischen und praktischen Erfahrungen des CITEC mit VRTechniken wurde der Prototyp für eine virtuelle Suchumgebung realisiert, der ein Retrieval in einem Suchraum von Online-Dokumenten erlaubt. Die Nutzerinnen und Nutzer können die Suchanfrage explorativ zusammenstellen und dabei die Ergebnisse intuitiv verwalten. Unterstützt werden sie dabei durch Ergebnisanzeige, Sortierung, Optimierung des Suchergebnisses mittels Suchverfeinerung (Drilldown-basiert) oder Anfrageerweiterung und Wiederverwendung von abgelegten Ergebnissen. Gleichzeitig wird der Zugriff- und Lizenzstatus visualisiert und die Detailanzeige der Metadaten des Objektes integriert.},
    author = {Pfeiffer, Thies and Summann, Friedrich and Hellriegel, Jens and Wolf, Sebastian and Pietsch, Christian},
    issn = {2363-9814},
    journal = {o-bib. Das offene Bibliotheksjournal},
    keywords = {Virtuelle Realität, Bibliothek, Immersive Wissensräume},
    number = {4},
    pages = {94--107},
    publisher = {VDB},
    title = {{Virtuelle Realität zur Bereitstellung integrierter Suchumgebungen}},
    url = {https://nbn-resolving.org/urn:nbn:de:0070-pub-29159371, https://pub.uni-bielefeld.de/record/2915937},
    doi = {10.5282/o-bib/2017H4S94-107},
    volume = {4},
    year = {2017},
    }
  • I. de Kok, F. Hülsmann, T. Waltemate, C. Frank, J. Hough, T. Pfeiffer, D. Schlangen, T. Schack, M. Botsch, and S. Kopp, “The Intelligent Coaching Space: A Demonstration,” in Intelligent Virtual Agents: 17th International Conference on Intelligent Virtual Agents from August 27th to 30th in Stockholm, Sweden, 2017, p. 105–108. doi:10.1007/978-3-319-67401-8
    [BibTeX] [Download PDF]
    @inproceedings{2913339,
    author = {de Kok, Iwan and Hülsmann, Felix and Waltemate, Thomas and Frank, Cornelia and Hough, Julian and Pfeiffer, Thies and Schlangen, David and Schack, Thomas and Botsch, Mario and Kopp, Stefan},
    booktitle = {Intelligent Virtual Agents: 17th International Conference on Intelligent Virtual Agents from August 27th to 30th in Stockholm, Sweden},
    editor = {Beskow, Jonas and Peters, Christopher and Castellano, Ginevra and O'Sullivan, Carol and Leite, Iolanda and Kopp, Stefan},
    isbn = {978-3-319-67400-1},
    location = {Stockholm, Sweden},
    pages = {105--108},
    publisher = {Springer},
    title = {{The Intelligent Coaching Space: A Demonstration}},
    url = {https://nbn-resolving.org/urn:nbn:de:0070-pub-29133393, https://pub.uni-bielefeld.de/record/2913339},
    doi = {10.1007/978-3-319-67401-8},
    volume = {10498},
    year = {2017},
    }
  • J. Pfeiffer, T. Pfeiffer, A. Greif-Winzrieth, M. Meißner, P. Renner, and C. Weinhardt, “Adapting Human-Computer-Interaction of Attentive Smart Glasses to the Trade-Off Conflict in Purchase Decisions: An Experiment in a Virtual Supermarket,” in AC 2017: Augmented Cognition. Neurocognition and Machine Learning, 2017, p. 219–235. doi:10.1007/978-3-319-58628-1
    [BibTeX] [Abstract] [Download PDF]
    In many everyday purchase decisions, consumers have to trade-off their decisions between alternatives. For example, consumers often have to decide whether to buy the more expensive high quality product or the less expensive product of lower quality. Marketing researchers are especially interested in finding out how consumers make decisions when facing such trade-off conflicts and eye-tracking has been used as a tool to investigate the allocation of attention in such situations. Conflicting decision situations are also particularly interesting for human-computer interaction research because designers may use knowledge about the information acquisition behavior to build assistance systems which can help the user to solve the trade-off conflict. In this paper, we build and test such an assistance system that monitors the user’s information acquisition processes using mobile eye-tracking in the virtual reality. In particular, we test whether and how strongly the trade-off conflict influences how consumers direct their attention to products and features. We find that trade-off conflict, task experience and task involvement significantly influence how much attention products receive. We discuss how this knowledge might be used in the future to build assistance systems in the form of attentive smart glasses.
    @inproceedings{2913139,
    abstract = {In many everyday purchase decisions, consumers have to trade-off their decisions between alternatives. For example, consumers often have to decide whether to buy the more expensive high quality product or the less expensive product of lower quality. Marketing researchers are especially interested in finding out how consumers make decisions when facing such trade-off conflicts and eye-tracking has been used as a tool to investigate the allocation of attention in such situations. Conflicting decision situations are also particularly interesting for human-computer interaction research because designers may use knowledge about the information acquisition behavior to build assistance systems which can help the user to solve the trade-off conflict. In this paper, we build and test such an assistance system that monitors the user's information acquisition processes using mobile eye-tracking in the virtual reality. In particular, we test whether and how strongly the trade-off conflict influences how consumers direct their attention to products and features. We find that trade-off conflict, task experience and task involvement significantly influence how much attention products receive. We discuss how this knowledge might be used in the future to build assistance systems in the form of attentive smart glasses.},
    author = {Pfeiffer, Jella and Pfeiffer, Thies and Greif-Winzrieth, Anke and Meißner, Martin and Renner, Patrick and Weinhardt, Christof},
    booktitle = {AC 2017: Augmented Cognition. Neurocognition and Machine Learning},
    editor = {Schmorrow, Dylan D. and Fidopiastis, Cali M.},
    isbn = {978-3-319-58627-4},
    issn = {0302-9743},
    keywords = {Smart Glasses, Virtual Supermarket, Purchase Decisions},
    location = {Vancouver},
    pages = {219--235},
    publisher = {Springer International Publishing},
    title = {{Adapting Human-Computer-Interaction of Attentive Smart Glasses to the Trade-Off Conflict in Purchase Decisions: An Experiment in a Virtual Supermarket}},
    url = {https://pub.uni-bielefeld.de/record/2913139},
    doi = {10.1007/978-3-319-58628-1},
    volume = {10284},
    year = {2017},
    }
  • L. Meyer and T. Pfeiffer, “Vergleich von Leap Motion Hand-Interaktion mit den HTC-Vive MotionControllern in einer VR-Trainingssimulation für manuelle Arbeiten,” in Virtuelle und Erweiterte Realität – 14. Workshop der GI-Fachgruppe VR/AR, 2017, p. 91–102.
    [BibTeX] [Abstract] [Download PDF]
    This article addresses the question of whether contact-free tracking of natural hand movements in a VR training program can have a significantly positive effect on learning outcome and user experience compared to using controllers. The evaluation environment is a virtual laboratory in which a medical infusion is to be prepared. As control interfaces for the virtual hands, a Leap Motion is used in combination with an HTC Vive, while the comparison group uses the native HTC Vive motion controllers. The study shows that contact-free tracking in VR is received positively by the participants, but not substantially more positively than the use of controllers. With respect to the learning outcome, no statistically significant difference is found under the given test conditions; there is, however, a difference in efficiency, reflected in significantly faster task completion by the participants in the Leap Motion group.
    @inproceedings{2915065,
    abstract = {Dieser Artikel befasst sich mit der Frage, ob durch den Einsatz von kontaktfreiem Tracking natürlicher Handbewegungen in einem VR-Trainingsprogramm ein signifikant positiver Einfluss auf Lerneffekt und Nutzererfahrung erzielt werden kann im Vergleich zur Nutzung von Controllern. Als Evaluationsumgebung dient ein virtuelles Labor, in welchem eine medizinische Infusion vorbereitet werden soll. Als Steuerungsinterface für die virtuellen Hände dienen in Kombination mit einer HTC-Vive eine Leap Motion, sowie die nativen HTC-Vive MotionController in der Vergleichsgruppe. Die Studie ergibt, dass die Nutzung von kontaktfreiem Tracking in der VR durchaus positiv von den Versuchspersonen aufgenommen wird, jedoch nicht wesentlich positiver als die Nutzung der Controller. Bezogen auf den Lerneffekt wird kein statistisch signifikanter Unterschied unter den gegebenen Testbedingungen gefunden, allerdings ein Effizienzunterschied, welcher sich in einer signifikant schnelleren Aufgabenbewältigung der Probanden in der Leap Motion Gruppe äußert.},
    author = {Meyer, Leonard and Pfeiffer, Thies},
    booktitle = {Virtuelle und Erweiterte Realität - 14. Workshop der GI-Fachgruppe VR/AR},
    editor = {Dörner, Ralf and Kruse, Rolf and Mohler, Betty and Weller, René},
    isbn = {978-3-8440-5606-8},
    keywords = {Virtuelle Realität, Handbasierte Interaktion, Training},
    location = {Tübingen},
    pages = {91--102},
    publisher = {Shaker Verlag},
    title = {{Vergleich von Leap Motion Hand-Interaktion mit den HTC-Vive MotionControllern in einer VR-Trainingssimulation für manuelle Arbeiten}},
    url = {https://nbn-resolving.org/urn:nbn:de:0070-pub-29150655, https://pub.uni-bielefeld.de/record/2915065},
    year = {2017},
    }
  • T. Pfeiffer, A. Schmidt, and P. Renner, “Detecting Movement Patterns from Inertial Data of a Mobile Head-Mounted-Display for Navigation via Walking-in-Place,” in IEEE Virtual Reality 2016, 2016, p. 265.
    [BibTeX] [Abstract] [Download PDF]
    While display quality and rendering for Head-Mounted-Displays (HMDs) has increased in quality and performance, the interaction capabilities with these devices are still very limited or relying on expensive technology. Current experiences offered for mobile HMDs often stick to dome-like looking around, automatic or gaze-triggered movement, or flying techniques. We developed an easy to use walking-in-place technique that does not require additional hardware to enable basic navigation, such as walking, running, or jumping, in virtual environments. Our approach is based on the analysis of data from the inertial unit embedded in mobile HMDs. In a first prototype realized for the Samsung Galaxy Gear VR we detect steps and jumps. A user study shows that users novice to virtual reality easily pick up the method. In comparison to a classic input device, using our walking-in-place technique study participants felt more present in the virtual environment and preferred our method for exploration of the virtual world.
    @inproceedings{2900167,
    abstract = {While display quality and rendering for Head-Mounted-Displays (HMDs) has increased in quality and performance, the interaction capabilities with these devices are still very limited or relying on expensive technology. Current experiences offered for mobile HMDs often stick to dome-like looking around, automatic or gaze-triggered movement, or flying techniques. We developed an easy to use walking-in-place technique that does not require additional hardware to enable basic navigation, such as walking, running, or jumping, in virtual environments. Our approach is based on the analysis of data from the inertial unit embedded in mobile HMDs. In a first prototype realized for the Samsung Galaxy Gear VR we detect steps and jumps. A user study shows that users novice to virtual reality easily pick up the method. In comparison to a classic input device, using our walking-in-place technique study participants felt more present in the virtual environment and preferred our method for exploration of the virtual world.},
    author = {Pfeiffer, Thies and Schmidt, Aljoscha and Renner, Patrick},
    booktitle = {IEEE Virtual Reality 2016},
    isbn = {978-1-5090-0837-7},
    keywords = {Virtual Reality, Ultra Mobile Head Mounted Displays, Navigation, Walking-in-place},
    location = {Greenville, South Carolina},
    pages = {265},
    publisher = {IEEE},
    title = {{Detecting Movement Patterns from Inertial Data of a Mobile Head-Mounted-Display for Navigation via Walking-in-Place}},
    url = {https://pub.uni-bielefeld.de/record/2900167},
    year = {2016},
    }
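    The walking-in-place technique above detects steps and jumps from the inertial unit of a mobile HMD. A minimal sketch of step detection by peak picking on the vertical acceleration (purely illustrative; the threshold, sampling rate and names are assumptions, not taken from the paper) could look like this:
    # Hypothetical sketch, not from the paper: count steps as peaks in the
    # gravity-compensated vertical acceleration of the HMD's inertial unit.
    def detect_steps(accel_y, rate_hz=60.0, threshold=1.5, refractory_s=0.25):
        """Return sample indices of detected steps. threshold (m/s^2) and
        refractory_s (minimum time between two steps) are assumed values."""
        steps, last = [], -int(refractory_s * rate_hz)
        for i in range(1, len(accel_y) - 1):
            is_peak = (accel_y[i] > threshold
                       and accel_y[i] >= accel_y[i - 1]
                       and accel_y[i] > accel_y[i + 1])
            if is_peak and i - last >= refractory_s * rate_hz:
                steps.append(i)  # trigger one unit of forward motion per step
                last = i
        return steps
    Each detected step would then advance the user in the virtual environment; jumps could be handled analogously with a higher threshold.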
  • C. Memili, T. Pfeiffer, and P. Renner, “Bestimmung von Nutzerpräferenzen für Augmented-Reality Systeme durch Prototyping in einer Virtual-Reality Simulation,” in Virtuelle und Erweiterte Realität – 13. Workshop der GI-Fachgruppe VR/AR, 2016, p. 25–36.
    [BibTeX] [Abstract] [Download PDF]
    Augmented reality (AR)-capable devices are now penetrating the market. However, there is still far too little experience with the use and design of AR solutions. In particular, questions arise, e.g., about the relevance of field of view or tracking latency for the respective application contexts, and these have to be decided early in a project in order to commit to suitable hardware. To answer such questions, a system was developed that can simulate different AR systems by means of virtual reality. Parameters can thus be varied and investigated systematically while the test environment remains controlled and therefore reproducible. In a study, simulations of an 800 tablet and of the Epson Moverio BT-200 AR glasses, each with three simulated levels of tracking latency, were tested in an information-seeking task in a supermarket. The users' assessments of both the simulated AR systems and the simulation system itself were evaluated.
    @inproceedings{2905704,
    abstract = {Augmented-Reality (AR)-fähige Endgeräte durchdringen mittlerweile den Markt. Es gibt jedoch noch viel zu wenig Erfahrungen mit dem Einsatz und der Gestaltung von AR Lösungen. Insbesondere stellen sich Fragen z.B. nach der Relevanz von Field-of-View oder Latenz beim Tracking für die jeweiligen Anwendungskontexte, die früh im Projekt entschieden werden müssen, um sich auf passende Hardware festzulegen. Zur Beantwortung dieser Fragen wurde ein System entwickelt, welches verschiedene AR-Systeme mittels virtueller Realität simulieren kann. So können Parameter systematisch verändert und untersucht werden, während die Testumgebung kontrolliert und somit reproduzierbar bleibt. In einer Studie wurden Simulationen eines 800-Tablets und der Epson Moverio BT-200 AR-Brille mit jeweils drei simulierten Latenzstufen beim Tracking in einer Aufgabe zur Informationssuche im Supermarkt getestet. Dabei wurden die Einschätzungen der Nutzer sowohl bezüglich der simulierten AR-Systeme, als auch des Simulationssystems selbst evaluiert.},
    author = {Memili, Cem and Pfeiffer, Thies and Renner, Patrick},
    booktitle = {Virtuelle und Erweiterte Realität - 13. Workshop der GI-Fachgruppe VR/AR},
    editor = {Pfeiffer, Thies and Fröhlich, Julia and Kruse, Rolf},
    isbn = {978-3-8440-4718-9},
    keywords = {Virtuelle Realität, Virtual Reality, Augmented Reality, Erweiterte Realität},
    location = {Bielefeld},
    pages = {25--36},
    publisher = {Shaker Verlag},
    title = {{Bestimmung von Nutzerpräferenzen für Augmented-Reality Systeme durch Prototyping in einer Virtual-Reality Simulation}},
    url = {https://pub.uni-bielefeld.de/record/2905704},
    year = {2016},
    }
  • T. Pfeiffer, “Smart Eyewear for Cognitive Interaction Technology,” in Eyewear Computing – Augmenting the Human with Head-Mounted Wearable Assistants, A. Bulling, O. Cakmakci, K. Kunze, and J. M. Regh, Eds., Schloss Dagstuhl – Leibniz-Zentrum für Informatik, Dagstuhl Publishing, Germany, 2016, vol. 6,1, p. 177. doi:10.4230/DagRep.6.1.160
    [BibTeX] [Download PDF]
    @inbook{2904711,
    author = {Pfeiffer, Thies},
    booktitle = {Eyewear Computing - Augmenting the Human with Head-Mounted Wearable Assistants},
    editor = {Bulling, Andreas and Cakmakci, Ozan and Kunze, Kai and Regh, James M.},
    issn = {2192-5283},
    pages = {177},
    publisher = {Schloss Dagstuhl - Leibniz-Zentrum für Informatik, Dagstuhl Publishing, Germany},
    title = {{Smart Eyewear for Cognitive Interaction Technology}},
    url = {https://pub.uni-bielefeld.de/record/2904711},
    doi = {10.4230/DagRep.6.1.160},
    volume = {6,1},
    year = {2016},
    }
  • M. Derksen, L. Zhang, M. Schäfer, D. Schröder, and T. Pfeiffer, “Virtuelles Training in der Krankenpflege: Erste Erfahrungen mit Ultra-mobilen Head-Mounted-Displays,” in Virtuelle und Erweiterte Realität – 13. Workshop der GI-Fachgruppe VR/AR, 2016, p. 137–144.
    [BibTeX] [Abstract] [Download PDF]
    In a cooperative teaching project between a university of applied sciences and a university, an immersive training program was developed for the Samsung Gear VR head-mounted display (HMD). The application example is the preparation of an infusion, as it is taught in nursing education. Motivating factors are cost savings and the provision of flexible, self-directed forms of training for part-time vocational education. The paper describes a first functional prototype that demonstrates feasibility and shows in a user study that the technology is accepted by the target group.
    @inproceedings{2905707,
    abstract = {In einem kooperativen Lehrprojekt zwischen Fachhochschule und Universität wurde ein immersives Trainingsprogramm für das Head-Mounted-Display (HMD) Samsung Gear VR entwickelt. Anwendungsbeispiel ist die Vorbereitung einer Infusion, wie sie im Rahmen der Lehre für die Pflege geschult wird. Motivatoren sind Einsparungen von Kosten, sowie das Bereitstellen von flexiblen, selbstgesteuerten Trainingsformen für die berufsbegleitende Ausbildung. Das Paper beschreibt einen ersten funktionalen Prototypen, der die Machbarkeit demonstriert, und zeigt anhand einer Nutzerstudie, dass die Technologie von der Zielgruppe akzeptiert wird.},
    author = {Derksen, Melanie and Zhang, Le and Schäfer, Marc and Schröder, Dimitri and Pfeiffer, Thies},
    booktitle = {Virtuelle und Erweiterte Realität - 13. Workshop der GI-Fachgruppe VR/AR},
    editor = {Pfeiffer, Thies and Fröhlich, Julia and Kruse, Rolf},
    isbn = {978-3-8440-4718-9},
    keywords = {Virtuelle Realität, Training},
    location = {Bielefeld},
    pages = {137--144},
    publisher = {Shaker Verlag},
    title = {{Virtuelles Training in der Krankenpflege: Erste Erfahrungen mit Ultra-mobilen Head-Mounted-Displays}},
    url = {https://nbn-resolving.org/urn:nbn:de:0070-pub-29057074, https://pub.uni-bielefeld.de/record/2905707},
    year = {2016},
    }
  • F. Hülsmann, T. Waltemate, T. Pfeiffer, C. Frank, T. Schack, M. Botsch, and S. Kopp, “The ICSPACE Platform: A Virtual Reality Setup for Experiments in Motor Learning.” 2016.
    [BibTeX] [Download PDF]
    @inproceedings{2905377,
    author = {Hülsmann, Felix and Waltemate, Thomas and Pfeiffer, Thies and Frank, Cornelia and Schack, Thomas and Botsch, Mario and Kopp, Stefan},
    location = {Tübingen, Germany},
    title = {{The ICSPACE Platform: A Virtual Reality Setup for Experiments in Motor Learning}},
    url = {https://nbn-resolving.org/urn:nbn:de:0070-pub-29053771, https://pub.uni-bielefeld.de/record/2905377},
    year = {2016},
    }
  • C. Frank, I. de Kok, I. Senna, T. Waltemate, F. Hülsmann, T. Pfeiffer, M. O. Ernst, S. Kopp, M. Botsch, and T. Schack, “Latency, sensorimotor feedback and virtual agents: Feedback channels for motor learning using the ICSPACE platform.” 2016.
    [BibTeX] [Download PDF]
    @inproceedings{2905098,
    author = {Frank, Cornelia and de Kok, Iwan and Senna, Irene and Waltemate, Thomas and Hülsmann, Felix and Pfeiffer, Thies and Ernst, Marc O. and Kopp, Stefan and Botsch, Mario and Schack, Thomas},
    location = {Tübingen, Germany},
    title = {{Latency, sensorimotor feedback and virtual agents: Feedback channels for motor learning using the ICSPACE platform}},
    url = {https://pub.uni-bielefeld.de/record/2905098},
    year = {2016},
    }
  • T. Pfeiffer, J. Hellmers, E. Schön, and J. Thomaschewski, “Empowering User Interfaces for the Industry 4.0,” Proceedings of the IEEE, vol. 104, iss. 5: Special Issue on Cyberphysical Systems, p. 986 – 996, 2016. doi:10.1109/JPROC.2015.2508640
    [BibTeX] [Abstract] [Download PDF]
    Industrie 4.0 (English translation: Industry 4.0) stands for functional integration, dynamic reorganization, and resource efficiency. Technical advances in control and communication create infrastructures that handle more and more tasks automatically. As a result, the complexity of today’s and future technical systems is hidden from the user. These advances, however, come with distinct challenges for user interface design. A central question is: how to empower users to understand, monitor, and control the automated processes of Industrie 4.0? Addressing these design challenges requires a full integration of user-centered design (UCD) processes into the development process. This paper discusses flexible but powerful methods for usability and user experience engineering in the context of Industrie 4.0.
    @article{2900166,
    abstract = {Industrie 4.0 (English translation: Industry 4.0) stands for functional integration, dynamic reorganization, and resource efficiency. Technical advances in control and communication create infrastructures that handle more and more tasks automatically. As a result, the complexity of today’s and future technical systems is hidden from the user. These advances, however, come with distinct challenges for user interface design. A central question is: how to empower users to understand, monitor, and control the automated processes of Industrie 4.0? Addressing these design challenges requires a full integration of user-centered design (UCD) processes into the development process. This paper discusses flexible but powerful methods for usability and user experience engineering in the context of Industrie 4.0.},
    author = {Pfeiffer, Thies and Hellmers, Jens and Schön, Eva-Maria and Thomaschewski, Jörg},
    issn = {1558-2256},
    journal = {Proceedings of the IEEE },
    keywords = {user centered design, industry 4.0, eye tracking, agile},
    number = {5: Special Issue on Cyberphysical Systems},
    pages = {986 -- 996},
    publisher = {IEEE},
    title = {{Empowering User Interfaces for the Industry 4.0}},
    url = {https://pub.uni-bielefeld.de/record/2900166},
    doi = {10.1109/JPROC.2015.2508640},
    volume = {104},
    year = {2016},
    }
  • T. Pfeiffer and C. Memili, “Model-based Real-time Visualization of Realistic Three-Dimensional Heat Maps for Mobile Eye Tracking and Eye Tracking in Virtual Reality,” in Proceedings of the Ninth Biennial ACM Symposium on Eye Tracking Research & Applications, 2016, p. 95–102. doi:10.1145/2857491.2857541
    [BibTeX] [Abstract] [Download PDF]
    Heat maps, or more generally, attention maps or saliency maps are an often used technique to visualize eye-tracking data. With heat maps qualitative information about visual processing can be easily visualized and communicated between experts and laymen. They are thus a versatile tool for many disciplines, in particular for usability engineering, and are often used to get a first overview about recorded eye-tracking data. Today, heat maps are typically generated for 2D stimuli that have been presented on a computer display. In such cases the mapping of overt visual attention on the stimulus is rather straight forward and the process is well understood. However, when turning towards mobile eye tracking and eye tracking in 3D virtual environments, the case is much more complicated. In the first part of the paper, we discuss several challenges that have to be considered in 3D environments, such as changing perspectives, multiple viewers, object occlusions, depth of fixations, or dynamically moving objects. In the second part, we present an approach for the generation of 3D heat maps addressing the above mentioned issues while working in real-time. Our visualizations provide high-quality output for multi-perspective eye-tracking recordings of visual attention in 3D environments.
    @inproceedings{2900168,
    abstract = {Heat maps, or more generally, attention maps or saliency maps are an often used technique to visualize eye-tracking data. With heat maps qualitative information about visual processing can be easily visualized and communicated between experts and laymen. They are thus a versatile tool for many disciplines, in particular for usability engineering, and are often used to get a first overview about recorded eye-tracking data. Today, heat maps are typically generated for 2D stimuli that have been presented on a computer display. In such cases the mapping of overt visual attention on the stimulus is rather straight forward and the process is well understood. However, when turning towards mobile eye tracking and eye tracking in 3D virtual environments, the case is much more complicated. In the first part of the paper, we discuss several challenges that have to be considered in 3D environments, such as changing perspectives, multiple viewers, object occlusions, depth of fixations, or dynamically moving objects. In the second part, we present an approach for the generation of 3D heat maps addressing the above mentioned issues while working in real-time. Our visualizations provide high-quality output for multi-perspective eye-tracking recordings of visual attention in 3D environments.},
    author = {Pfeiffer, Thies and Memili, Cem},
    booktitle = {Proceedings of the Ninth Biennial ACM Symposium on Eye Tracking Research & Applications},
    isbn = {978-1-4503-4125-7/16/03},
    keywords = {Eyetracking, Gaze-based Interaction},
    location = {Charleston, SC, USA},
    pages = {95--102},
    publisher = {ACM Press},
    title = {{Model-based Real-time Visualization of Realistic Three-Dimensional Heat Maps for Mobile Eye Tracking and Eye Tracking in Virtual Reality}},
    url = {https://nbn-resolving.org/urn:nbn:de:0070-pub-29001685, https://pub.uni-bielefeld.de/record/2900168},
    doi = {10.1145/2857491.2857541},
    year = {2016},
    }
  • T. Pfeiffer, P. Renner, and N. Pfeiffer-Leßmann, “EyeSee3D 2.0: Model-based Real-time Analysis of Mobile Eye-Tracking in Static and Dynamic Three-Dimensional Scenes,” in Proceedings of the Ninth Biennial ACM Symposium on Eye Tracking Research & Applications, 2016, p. 189–196. doi:10.1145/2857491.2857532
    [BibTeX] [Abstract] [Download PDF]
    With the launch of ultra-portable systems, mobile eye tracking finally has the potential to become mainstream. While eye movements on their own can already be used to identify human activities, such as reading or walking, linking eye movements to objects in the environment provides even deeper insights into human cognitive processing. We present a model-based approach for the identification of fixated objects in three-dimensional environments. For evaluation, we compare the automatic labelling of fixations with those performed by human annotators. In addition to that, we show how the approach can be extended to support moving targets, such as individual limbs or faces of human interaction partners. The approach also scales to studies using multiple mobile eye-tracking systems in parallel. The developed system supports real-time attentive systems that make use of eye tracking as means for indirect or direct human-computer interaction as well as off-line analysis for basic research purposes and usability studies.
    @inproceedings{2900169,
    abstract = {With the launch of ultra-portable systems, mobile eye tracking finally has the potential to become mainstream. While eye movements on their own can already be used to identify human activities, such as reading or walking, linking eye movements to objects in the environment provides even deeper insights into human cognitive processing. We present a model-based approach for the identification of fixated objects in three-dimensional environments. For evaluation, we compare the automatic labelling of fixations with those performed by human annotators. In addition to that, we show how the approach can be extended to support moving targets, such as individual limbs or faces of human interaction partners. The approach also scales to studies using multiple mobile eye-tracking systems in parallel. The developed system supports real-time attentive systems that make use of eye tracking as means for indirect or direct human-computer interaction as well as off-line analysis for basic research purposes and usability studies.},
    author = {Pfeiffer, Thies and Renner, Patrick and Pfeiffer-Leßmann, Nadine},
    booktitle = {Proceedings of the Ninth Biennial ACM Symposium on Eye Tracking Research & Applications},
    isbn = {978-1-4503-4125-7},
    keywords = {Eyetracking, Gaze-based Interaction},
    location = {Charleston, SC, USA},
    pages = {189--196},
    publisher = {ACM Press},
    title = {{EyeSee3D 2.0: Model-based Real-time Analysis of Mobile Eye-Tracking in Static and Dynamic Three-Dimensional Scenes}},
    url = {https://nbn-resolving.org/urn:nbn:de:0070-pub-29001695, https://pub.uni-bielefeld.de/record/2900169},
    doi = {10.1145/2857491.2857532},
    year = {2016},
    }
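At its core, model-based fixation labelling of this kind casts the 3D gaze ray against proxy geometry registered to the real scene. A minimal sketch with sphere proxies (hypothetical object names; not the EyeSee3D code base):

```python
# Illustrative sketch: label a fixation by intersecting the gaze ray with sphere proxies.
import numpy as np

def ray_sphere_t(origin, direction, center, radius):
    """Smallest positive ray parameter t of a unit-direction ray hitting a sphere, or None."""
    oc = origin - center
    b = 2.0 * np.dot(direction, oc)
    c = np.dot(oc, oc) - radius ** 2
    disc = b * b - 4.0 * c
    if disc < 0:
        return None
    t = (-b - np.sqrt(disc)) / 2.0
    return t if t > 0 else None

def label_fixation(eye_pos, gaze_dir, proxies):
    """proxies: dict name -> (center, radius). Returns the nearest hit object or None."""
    gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
    best_name, best_t = None, np.inf
    for name, (center, radius) in proxies.items():
        t = ray_sphere_t(eye_pos, gaze_dir, np.asarray(center, dtype=float), radius)
        if t is not None and t < best_t:
            best_name, best_t = name, t
    return best_name

aois = {"cup": ([0.3, 0.0, 1.0], 0.05), "book": ([-0.2, 0.1, 0.8], 0.10)}
print(label_fixation(np.zeros(3), np.array([0.3, 0.0, 1.0]), aois))  # -> "cup"
```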
  • F. Deppendorf, P. Renner, and T. Pfeiffer, “Taktiles Feedback in Head-Mounted-Displays: Prompting von Orientierungsänderungen in 360° Panoramen,” in Virtuelle und Erweiterte Realität – 13. Workshop der GI-Fachgruppe VR/AR, 2016, p. 65–76.
    [BibTeX] [Abstract] [Download PDF]
    Szenarien, die in der virtuellen Realität abgebildet werden, sind zunehmend komplexer. Da der Nutzer in der virtuellen Welt seinen Bildausschnitt selbständig durch Rotation des Kopfes innerhalb einer 360°-Umgebung wählt, ist es für Entwickler schwierig, auf bestimmte Elemente in seiner Umgebung hinzuweisen. Dies gilt insbesondere, wenn weder die optische noch die akustische Darstellung dafür manipuliert werden soll. Um dieses Problem zu lösen, wurde ein modulares System für taktiles Feedback entwickelt. Ein Basis-Modul, welches über Bluetooth mit dem Host-System (z.B. Smartphone oder PC) gekoppelt werden kann, unterstützt dabei mehrere flexibel anzubringende Vibrationsmotoren, die z.B. an einer Virtual-Reality-Brille befestigt werden können. Im Anwendungsfall könnte so durch gezielte Vibrationen an verschiedenen Stellen des Kopfes ein Nutzer zu einer Orientierungsänderung innerhalb eines 360°-Panoramas bewegt werden. Die Tragfähigkeit des Ansatzes wird im Rahmen einer Nutzerstudie evaluiert. Die Studie zeigt, dass ein alternatives explizites optisches Prompting zwar insbesondere bei der vertikalen Orientierung schneller zu Erfolg führt, über das taktile Prompting jedoch nach kurzer Lernphase fast genauso korrekte Orientierungsänderungen möglich sind.
    @inproceedings{2905706,
    abstract = {Szenarien, die in der virtuellen Realität abgebildet werden, sind zunehmend komplexer. Da der Nutzer in der virtuellen Welt seinen Bildausschnitt selbständig durch Rotation des Kopfes innerhalb einer 360°-Umgebung wählt, ist es für Entwickler schwierig, auf bestimmte Elemente in seiner Umgebung hinzuweisen. Dies gilt insbesondere, wenn weder die optische noch die akustische Darstellung dafür manipuliert werden soll.
    Um dieses Problem zu lösen, wurde ein modulares System für taktiles Feedback entwickelt. Ein Basis-Modul, welches über Bluetooth mit dem Host-System (z.B. Smartphone oder PC) gekoppelt werden kann, unterstützt dabei mehrere flexibel anzubringende Vibrationsmotoren, die z.B. an einer Virtual-Reality-Brille befestigt werden können. Im Anwendungsfall könnte so durch gezielte Vibrationen an verschiedenen Stellen des Kopfes ein Nutzer zu einer Orientierungsänderung innerhalb eines 360°-Panoramas bewegt werden.
    Die Tragfähigkeit des Ansatzes wird im Rahmen einer Nutzerstudie evaluiert. Die Studie zeigt, dass ein alternatives explizites optisches Prompting zwar insbesondere bei der vertikalen Orientierung schneller zu Erfolg führt, über das taktile Prompting jedoch nach kurzer Lernphase fast genauso korrekte Orientierungsänderungen möglich sind.},
    author = {Deppendorf, Fabian and Renner, Patrick and Pfeiffer, Thies},
    booktitle = {Virtuelle und Erweiterte Realität - 13. Workshop der GI-Fachgruppe VR/AR},
    editor = {Pfeiffer, Thies and Fröhlich, Julia and Kruse, Rolf},
    isbn = {978-3-8440-4718-9},
    keywords = {Virtuelle Realität, Mensch-Maschine-Interaktion, HCI},
    location = {Bielefeld},
    pages = {65--76},
    publisher = {Shaker Verlag},
    title = {{Taktiles Feedback in Head-Mounted-Displays: Prompting von Orientierungsänderungen in 360° Panoramen}},
    url = {https://pub.uni-bielefeld.de/record/2905706},
    year = {2016},
    }
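The central decision in such a prompting scheme is which motor to pulse, given the offset between the current head orientation and the target direction in the panorama. A minimal sketch, assuming a four-motor layout (left/right/up/down) and yaw/pitch angles in degrees:

```python
# Illustrative sketch: pick a vibration motor from the angular offset to the target.
def choose_motor(head_yaw, head_pitch, target_yaw, target_pitch, threshold=10.0):
    """Return 'left', 'right', 'up', 'down', or None when the target is roughly centred."""
    dyaw = (target_yaw - head_yaw + 180.0) % 360.0 - 180.0   # signed offset wrapped to (-180, 180]
    dpitch = target_pitch - head_pitch
    if abs(dyaw) < threshold and abs(dpitch) < threshold:
        return None
    if abs(dyaw) >= abs(dpitch):                             # prompt the larger remaining error first
        return "right" if dyaw > 0 else "left"
    return "up" if dpitch > 0 else "down"

print(choose_motor(head_yaw=10, head_pitch=0, target_yaw=200, target_pitch=5))  # -> 'left'
```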
  • T. Pfeiffer, S. K. Feiner, and W. W. Mayol-Cuevas, “Eyewear Computing for Skill Augmentation and Task Guidance,” in Eyewear Computing–Augmenting the Human with Head-Mounted Wearable Assistants, A. Bulling, O. Cakmakci, K. Kunze, and J. M. Rehg, Eds., Schloss Dagstuhl – Leibniz-Zentrum für Informatik, Dagstuhl Publishing, Germany, 2016, vol. 23, p. 199. doi:10.4230/DagRep.6.1.160
    [BibTeX] [Download PDF]
    @inbook{2904710,
    author = {Pfeiffer, Thies and Feiner, Steven K and Mayol-Cuevas, Walterio W},
    booktitle = {Eyewear Computing--Augmenting the Human with Head-Mounted Wearable Assistants},
    editor = {Bulling, Andreas and Cakmakci, Ozan and Kunze, Kai and Rehg, James M.},
    pages = {199},
    publisher = {Schloss Dagstuhl - Leibniz-Zentrum für Informatik, Dagstuhl Publishing, Germany},
    title = {{Eyewear Computing for Skill Augmentation and Task Guidance}},
    url = {https://pub.uni-bielefeld.de/record/2904710},
    doi = {10.4230/DagRep.6.1.160},
    volume = {23},
    year = {2016},
    }
  • A. Lücking, T. Pfeiffer, and H. Rieser, “Pointing and reference reconsidered,” Journal of Pragmatics, vol. 77, p. 56–79, 2015. doi:10.1016/j.pragma.2014.12.013
    [BibTeX] [Abstract] [Download PDF]
    Current semantic theory on indexical expressions claims that demonstratively used indexicals such as this lack a referent-determining meaning but instead rely on an accompanying demonstration act like a pointing gesture. While this view allows to set up a sound logic of demonstratives, the direct-referential role assigned to pointing gestures has never been scrutinized thoroughly in semantics or pragmatics. We investigate the semantics and pragmatics of co-verbal pointing from a foundational perspective combining experiments, statistical investigation, computer simulation and theoretical modeling techniques in a novel manner. We evaluate various referential hypotheses with a corpus of object identification games set up in experiments in which body movement tracking techniques have been extensively used to generate precise pointing measurements. Statistical investigation and computer simulations show that especially distal areas in the pointing domain falsify the semantic direct-referential hypotheses concerning pointing gestures. As an alternative, we propose that reference involving pointing rests on a default inference which we specify using the empirical data. These results raise numerous problems for classical semantics–pragmatics interfaces: we argue for pre-semantic pragmatics in order to account for inferential reference in addition to classical post-semantic Gricean pragmatics.
    @article{2488745,
    abstract = {Current semantic theory on indexical expressions claims that demonstratively used indexicals such as this lack a referent-determining meaning but instead rely on an accompanying demonstration act like a pointing gesture. While this view allows to set up a sound logic of demonstratives, the direct-referential role assigned to pointing gestures has never been scrutinized thoroughly in semantics or pragmatics. We investigate the semantics and pragmatics of co-verbal pointing from a foundational perspective combining experiments, statistical investigation, computer simulation and theoretical modeling techniques in a novel manner. We evaluate various referential hypotheses with a corpus of object identification games set up in experiments in which body movement tracking techniques have been extensively used to generate precise pointing measurements. Statistical investigation and computer simulations show that especially distal areas in the pointing domain falsify the semantic direct-referential hypotheses concerning pointing gestures. As an alternative, we propose that reference involving pointing rests on a default inference which we specify using the empirical data. These results raise numerous problems for classical semantics–pragmatics interfaces: we argue for pre-semantic pragmatics in order to account for inferential reference in addition to classical post-semantic Gricean pragmatics.},
    author = {Lücking, Andy and Pfeiffer, Thies and Rieser, Hannes},
    issn = {0378-2166},
    journal = {Journal of Pragmatics},
    keywords = {Multimodal Communication, Gestures, Pointing, Reference},
    pages = {56--79},
    publisher = {Elsevier},
    title = {{Pointing and reference reconsidered}},
    url = {https://nbn-resolving.org/urn:nbn:de:0070-pub-24887450, https://pub.uni-bielefeld.de/record/2488745},
    doi = {10.1016/j.pragma.2014.12.013},
    volume = {77},
    year = {2015},
    }
  • T. Waltemate, F. Hülsmann, T. Pfeiffer, S. Kopp, and M. Botsch, “Realizing a Low-latency Virtual Reality Environment for Motor Learning,” in Proceedings of the 21st ACM Symposium on Virtual Reality Software and Technology, 2015, p. 139–147. doi:10.1145/2821592.2821607
    [BibTeX] [Abstract] [Download PDF]
    Virtual Reality (VR) has the potential to support motor learning in ways exceeding beyond the possibilities provided by real world environments. New feedback mechanisms can be implemented that support motor learning during the performance of the trainee and afterwards as a performance review. As a consequence, VR environments excel in controlled evaluations, which has been proven in many other application scenarios. However, in the context of motor learning of complex tasks, including full-body movements, questions regarding the main technical parameters of such a system, in particular that of the required maximum latency, have not been addressed in depth. To fill this gap, we propose a set of requirements towards VR systems for motor learning, with a special focus on motion capturing and rendering. We then assess and evaluate state-of-the-art techniques and technologies for motion capturing and rendering, in order to provide data on latencies for different setups. We focus on the end-to-end latency of the overall system, and present an evaluation of an exemplary system that has been developed to meet these requirements.
    @inproceedings{2774601,
    abstract = {Virtual Reality (VR) has the potential to support motor learning in ways exceeding beyond the possibilities provided by real world environments. New feedback mechanisms can be implemented that support motor learning during the performance of the trainee and afterwards as a performance review. As a consequence, VR environments excel in controlled evaluations, which has been proven in many other application scenarios.
    However, in the context of motor learning of complex tasks, including full-body movements, questions regarding the main technical parameters of such a system, in particular that of the required maximum latency, have not been addressed in depth. To fill this gap, we propose a set of requirements towards VR systems for motor learning, with a special focus on motion capturing and rendering. We then assess and evaluate state-of-the-art techniques and technologies for motion capturing and rendering, in order to provide data on latencies for different setups. We focus on the end-to-end latency of the overall system, and present an evaluation of an exemplary system that has been developed to meet these requirements.},
    author = {Waltemate, Thomas and Hülsmann, Felix and Pfeiffer, Thies and Kopp, Stefan and Botsch, Mario},
    booktitle = {Proceedings of the 21st ACM Symposium on Virtual Reality Software and Technology},
    isbn = {978-1-4503-3990-2},
    keywords = {low-latency, motor learning, virtual reality},
    location = {Beijing, China},
    pages = {139--147},
    publisher = {ACM},
    title = {{Realizing a Low-latency Virtual Reality Environment for Motor Learning}},
    url = {https://nbn-resolving.org/urn:nbn:de:0070-pub-27746019, https://pub.uni-bielefeld.de/record/2774601},
    doi = {10.1145/2821592.2821607},
    year = {2015},
    }
  • T. Pfeiffer, C. Memili, and P. Renner, “Capturing and Visualizing Eye Movements in 3D Environments,” in Proceedings of the 2nd International Workshop on Solutions for Automatic Gaze Data Analysis 2015 (SAGA 2015), 2015, p. 11–12. doi:10.2390/biecoll-saga2015_3
    [BibTeX] [Abstract] [Download PDF]
    Visual attention can be a viable source of information to assess human behaviors in many different contexts, from human-computer interaction, over sports or social interactions, to complex working environments, such as to be found in the context of Industry 4.0. In such scenarios in which the user is able to walk around freely, mobile eye-tracking systems are used to record eye movements, which are then mapped onto an ego-perspective video. The analysis of such recordings then requires large efforts for manually annotating the recorded videos on a frame-by-frame basis to label the fixations based on their locations to the target objects present in the video. First, we present a method to record eye movements in 3D scenarios and annotate fixations with corresponding labels for the objects of interest in real-time [2]. For this purpose, we rely on computer-vision methods for the detection of the camera position and orientation in the world. Based on a coarse 3D model of the environment, representing the 3D areas of interest, fixations are mapped to areas of interest. As a result, we can identify the position of the fixation in terms of local object coordinates for each relevant object of interest. Second, we present a method for real-time creation and visualization of heatmaps for 3D objects [1]. Based on a live-streaming of the recorded and analyzed eye movements, our solution renders heatmaps on top of the object surfaces. The resulting visualizations are more realistic than standard 2D heatmaps, in that we consider occlusions, depth of focus and dynamic moving objects. Third, we present a new method which allows us to aggregate fixations on a per object basis, e.g. similar to regions/areas of interest. This allows us to transfer existing methods of analysis to 3D environments. We present examples from a virtual supermarket, a study on social interactions between two humans, examples from real-time gaze mapping on body parts of moving humans and from studying 3D prototypes in a virtual reality environment.
    @inproceedings{2779322,
    abstract = {Visual attention can be a viable source of information to assess human behaviors in many different contexts, from human-computer interaction, over sports or social interactions, to complex working environments, such as to be found in the context of Industry 4.0. In such scenarios in which the user is able to walk around freely, mobile eye-tracking systems are used to record eye movements, which are then mapped onto an ego-perspective video. The analysis of such recordings then requires large efforts for manually annotating the recorded videos on a frame-by-frame basis to label the fixations based on their locations to the target objects present in the video. First, we present a method to record eye movements in 3D scenarios and annotate fixations with corresponding labels for the objects of interest in real-time [2]. For this purpose, we rely on computer-vision methods for the detection of the camera position and orientation in the world. Based on a coarse 3D model of the environment, representing the 3D areas of interest, fixations are mapped to areas of interest. As a result, we can identify the position of the fixation in terms of local object coordinates for each relevant object of interest. Second, we present a method for real-time creation and visualization of heatmaps for 3D objects [1]. Based on a live-streaming of the recorded and analyzed eye movements, our solution renders heatmaps on top of the object surfaces. The resulting visualizations are more realistic than standard 2D heatmaps, in that we consider occlusions, depth of focus and dynamic moving objects. Third, we present a new method which allows us to aggregate fixations on a per object basis, e.g. similar to regions/areas of interest. This allows us to transfer existing methods of analysis to 3D environments. We present examples from a virtual supermarket, a study on social interactions between two humans, examples from real-time gaze mapping on body parts of moving humans and from studying 3D prototypes in a virtual reality environment.},
    author = {Pfeiffer, Thies and Memili, Cem and Renner, Patrick},
    booktitle = {Proceedings of the 2nd International Workshop on Solutions for Automatic Gaze Data Analysis 2015 (SAGA 2015)},
    editor = {Pfeiffer, Thies and Essig, Kai},
    keywords = {eye tracking},
    location = {Bielefeld},
    pages = {11--12},
    publisher = {eCollections Bielefeld University},
    title = {{Capturing and Visualizing Eye Movements in 3D Environments}},
    url = {https://pub.uni-bielefeld.de/record/2779322},
    doi = {10.2390/biecoll-saga2015_3},
    year = {2015},
    }
  • J. Diekmann, P. Renner, and T. Pfeiffer, “Framework zur Evaluation von Trackingbibliotheken mittels gerenderter Videos von Tracking-Targets,” in Virtuelle und Erweiterte Realität – 12. Workshop der GI-Fachgruppe VR/AR, 2015, p. 89–100.
    [BibTeX] [Abstract] [Download PDF]
    Die Erkennung und Verfolgung von Objekten ist seit Jahrzehnten eine wichtige Basis für Anwendungen im Bereich der Erweiterten Realität. Muss aus der Vielzahl an Bibliotheken eine verfügbare ausgewählt werden, fehlt es häufig an Vergleichbarkeit, da es kein standardisiertes Testverfahren gibt. Gleichzeitig ist es für die Entwicklung eigener Verfahren essentiell, das Optimierungspotential genau bestimmen zu können. Die vorliegende Arbeit versucht diese Lücke zu füllen: Mithilfe von systematisch erstellten gerenderten Videos können verschiedene Verfahren und Bibliotheken bezüglich ihrer Genauigkeit und Performanz überprüft werden. Exemplarisch werden die Eigenschaften dreier Trackingbibliotheken miteinander verglichen.
    @inproceedings{2774922,
    abstract = {Die Erkennung und Verfolgung von Objekten ist seit Jahrzehnten eine wichtige Basis für Anwendungen im Bereich der Erweiterten Realität. Muss aus der Vielzahl an Bibliotheken eine verfügbare ausgewählt werden, fehlt es häufig an Vergleichbarkeit, da es kein standardisiertes Testverfahren gibt. Gleichzeitig ist es für die Entwicklung eigener Verfahren essentiell, das Optimierungspotential genau bestimmen zu können. Die vorliegende Arbeit versucht diese Lücke zu füllen: Mithilfe von systematisch erstellten gerenderten Videos können verschiedene Verfahren und Bibliotheken bezüglich ihrer Genauigkeit und Performanz überprüft werden. Exemplarisch werden die Eigenschaften dreier Trackingbibliotheken miteinander verglichen.},
    author = {Diekmann, Jonas and Renner, Patrick and Pfeiffer, Thies},
    booktitle = {Virtuelle und Erweiterte Realität - 12. Workshop der GI-Fachgruppe VR/AR},
    editor = {Hinkenjann, André and Maiero, Jens and Blach, Roland},
    isbn = {978-3-8440-3868-2},
    location = {Bonn},
    pages = {89--100},
    publisher = {Shaker Verlag},
    title = {{Framework zur Evaluation von Trackingbibliotheken mittels gerenderter Videos von Tracking-Targets}},
    url = {https://nbn-resolving.org/urn:nbn:de:0070-pub-27749223, https://pub.uni-bielefeld.de/record/2774922},
    year = {2015},
    }
  • N. Pfeiffer-Leßmann, P. Renner, and T. Pfeiffer, “Analyzing Patterns of Eye Movements in Social Interactions,” in SAGA 2015: 2nd International Workshop on Solutions for Automatic Gaze Data Analysis, 2015, p. 27–29. doi:10.2390/biecoll-saga2015_10
    [BibTeX] [Abstract] [Download PDF]
    Eye gaze plays an important role in human communication. One foundational skill in human social interaction is joint attention which is receiving increased interest in particular in the area of human-agent or human-robot interaction. We are focusing here on patterns of gaze interaction that emerge in the process of establishing joint attention. The approach, however, should be applicable to many other aspects of social communication in which eye gaze plays an important role. Attention has been characterized as an increased awareness [1] and intentionally directed perception [2] and is judged to be crucial for goal-directed behavior. Joint attention can be defined as simultaneously allocating attention to a target as a consequence of attending to each other’s attentional states [3]. In other words: Interlocutors have to deliberatively focus on the same target while being mutually aware of sharing their focus of attention [2, 4].
    @inproceedings{2779314,
    abstract = {Eye gaze plays an important role in human communication. One foundational skill in human social interaction is joint attention which is receiving increased interest in particular in the area of human-agent or human-robot interaction. We are focusing here on patterns of gaze interaction that emerge in the process of establishing joint attention. The approach, however, should be applicable to many other aspects of social communication in which eye gaze plays an important role. Attention has been characterized as an increased awareness [1] and intentionally directed perception [2] and is judged to be crucial for goal-directed behavior. Joint attention can be defined as simultaneously allocating attention to a target as a consequence of attending to each other's attentional states [3]. In other words: Interlocutors have to deliberatively focus on the same target while being mutually aware of sharing their focus of attention [2, 4].},
    author = {Pfeiffer-Leßmann, Nadine and Renner, Patrick and Pfeiffer, Thies},
    booktitle = {SAGA 2015: 2nd International Workshop on Solutions for Automatic Gaze Data Analysis},
    editor = {Pfeiffer, Thies and Essig, Kai},
    keywords = {joint attention, eye tracking},
    location = {Bielefeld},
    pages = {27--29},
    publisher = {eCollections Bielefeld University},
    title = {{Analyzing Patterns of Eye Movements in Social Interactions}},
    url = {https://pub.uni-bielefeld.de/record/2779314},
    doi = {10.2390/biecoll-saga2015_10},
    year = {2015},
    }
  • P. Renner, T. Pfeiffer, and N. Pfeiffer-Leßmann, “Automatic Analysis of a Mobile Dual Eye-Tracking Study on Joint Attention,” Abstracts of the 18th European Conference on Eye Movements, p. 116–116, 2015.
    [BibTeX] [Abstract] [Download PDF]
    Our research aims at cognitive modelling of joint attention for artificial agents, such as virtual agents or robots. With the current study, we are focusing on the observation of interaction patterns of joint attention and their time course. For this, we recorded and analyzed twenty sessions of two interacting participants using two mobile binocular eye-tracking systems. A key contribution of our work addresses methodological aspects of mobile eye-tracking studies with scene camera recordings in general. The standard procedure for the analysis of such gaze videos requires a manual annotation. This time consuming process often exceeds multiple times the duration of the original recordings (e.g. 30 times). This doubles if, as in our case, the gaze of two interlocutors is recorded simultaneously. In our approach, we build upon our EyeSee3D approach for marker-based tracking and registration of the environment and a 3D reconstruction of the relevant stimuli. We extend upon the previous approach in supporting more than one participant and dynamically changing stimuli, here the faces and eyes of the interlocutors. The full analysis of the time course of both interlocutor’s gaze is done in real-time and available for analysis right after the experiment without the requirement for manual annotation.
    @article{2771800,
    abstract = {Our research aims at cognitive modelling of joint attention for artificial agents, such as virtual agents or robots. With the current study, we are focusing on the observation of interaction patterns of joint attention and their time course. For this, we recorded and analyzed twenty sessions of two interacting participants using two mobile binocular eye-tracking systems.
    A key contribution of our work addresses methodological aspects of mobile eye-tracking studies with scene camera recordings in general. The standard procedure for the analysis of such gaze videos requires a manual annotation. This time consuming process often exceeds multiple times the duration of the original recordings (e.g. 30 times). This doubles if, as in our case, the gaze of two interlocutors is recorded simultaneously.
    In our approach, we build upon our EyeSee3D approach for marker-based tracking and registration of the environment and a 3D reconstruction of the relevant stimuli. We extend upon the previous approach in supporting more than one participant and dynamically changing stimuli, here the faces and eyes of the interlocutors. The full analysis of the time course of both interlocutor’s gaze is done in real-time and available for analysis right after the experiment without the requirement for manual annotation.},
    author = {Renner, Patrick and Pfeiffer, Thies and Pfeiffer-Leßmann, Nadine},
    issn = {1995-8692},
    journal = {Abstracts of the 18th European Conference on Eye Movements},
    keywords = {eye tracking, joint attention, 3D},
    pages = {116--116},
    title = {{Automatic Analysis of a Mobile Dual Eye-Tracking Study on Joint Attention}},
    url = {https://pub.uni-bielefeld.de/record/2771800},
    year = {2015},
    }
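Once both gaze streams are automatically labelled with the fixated objects, candidate joint-attention episodes can be found by searching for temporally close fixations on the same target. A small illustrative sketch of that idea (hypothetical data layout, not the study's actual pipeline):

```python
# Illustrative sketch: find moments where two object-labelled gaze streams meet on the same target.
def joint_attention_episodes(gaze_a, gaze_b, max_lag=0.5):
    """gaze_a, gaze_b: lists of (timestamp_s, object_label). Returns (t_a, t_b, label) tuples."""
    episodes = []
    for t_a, obj_a in gaze_a:
        if obj_a is None:
            continue
        for t_b, obj_b in gaze_b:
            if obj_b == obj_a and abs(t_b - t_a) <= max_lag:
                episodes.append((t_a, t_b, obj_a))
                break
    return episodes

a = [(0.0, "map"), (0.4, "partner_face"), (0.9, "map")]
b = [(0.1, "map"), (0.5, "map"), (1.0, "map")]
print(joint_attention_episodes(a, b))  # -> [(0.0, 0.1, 'map'), (0.9, 0.5, 'map')]
```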
  • H. Neumann, P. Renner, and T. Pfeiffer, “Entwicklung und Evaluation eines Kinect v2-basierten Low-Cost Aufprojektionssystems,” in Virtuelle und Erweiterte Realität – 12. Workshop der GI-Fachgruppe VR/AR, 2015, p. 22–33.
    [BibTeX] [Abstract] [Download PDF]
    In diesem Paper wird die Entwicklung und Evaluation eines kostengünstigen interaktiven Aufprojektionssystems auf Basis einer Microsoft Kinect 2 beschrieben. Das Aufprojektionssystem nutzt einen LED-Projektor zur Darstellung von Informationen auf einer ebenen Projektionsfläche. Durch eine Analyse von Infrarot- und Tiefenbild werden Fingerbewegungen erkannt und als Multi-Touch-Events auf Windows-Betriebssystemebene bereitgestellt. Die Tragfähigkeit des Ansatzes wird bezogen auf die erreichbare Genaugkeit und die Nutzbarkeit in einer Nutzerstudie evaluiert. Abschließend werden Designempfehlungen für die Gestaltung von Benutzerschnittstellen mit dem entwickelten interaktiven Aufprojektionssystem formuliert.
    @inproceedings{2774917,
    abstract = {In diesem Paper wird die Entwicklung und Evaluation eines kostengünstigen interaktiven Aufprojektionssystems auf Basis einer Microsoft Kinect 2 beschrieben. Das Aufprojektionssystem nutzt einen LED-Projektor zur Darstellung von Informationen auf einer ebenen Projektionsfläche. Durch eine Analyse von Infrarot- und Tiefenbild werden Fingerbewegungen erkannt und als Multi-Touch-Events auf Windows-Betriebssystemebene bereitgestellt.
    Die Tragfähigkeit des Ansatzes wird bezogen auf die erreichbare Genaugkeit und die Nutzbarkeit in einer Nutzerstudie evaluiert. Abschließend werden Designempfehlungen für die Gestaltung von Benutzerschnittstellen mit dem entwickelten interaktiven Aufprojektionssystem formuliert.},
    author = {Neumann, Henri and Renner, Patrick and Pfeiffer, Thies},
    booktitle = {Virtuelle und Erweiterte Realität - 12. Workshop der GI-Fachgruppe VR/AR},
    editor = {Hinkenjann, André and Maiero, Jens and Blach, Roland},
    isbn = {978-3-8440-3868-2},
    location = {Bonn},
    pages = {22--33},
    publisher = {Shaker Verlag},
    title = {{Entwicklung und Evaluation eines Kinect v2-basierten Low-Cost Aufprojektionssystems}},
    url = {https://pub.uni-bielefeld.de/record/2774917},
    year = {2015},
    }
  • T. Pfeiffer and C. Memili, “GPU-accelerated Attention Map Generation for Dynamic 3D Scenes,” in Proceedings of the IEEE VR 2015, 2015, p. 257–258.
    [BibTeX] [Abstract] [Download PDF]
    Measuring visual attention has become an important tool during product development. Attention maps are important qualitative visualizations to communicate results within the team and to stakeholders. We have developed a GPU-accelerated approach which allows for real-time generation of attention maps for 3D models that can, e.g., be used for on-the-fly visualizations of visual attention distributions and for the generation of heat-map textures for offline high-quality renderings. The presented approach is unique in that it works with monocular and binocular data, respects the depth of focus, can handle moving objects and is ready to be used for selective rendering.
    @inproceedings{2726777,
    abstract = {Measuring visual attention has become an important tool during product development. Attention maps are important qualitative visualizations to communicate results within the team and to stakeholders. We have developed a GPU-accelerated approach which allows for real-time generation of attention maps for 3D models that can, e.g., be used for on-the-fly visualizations of visual attention distributions and for the generation of heat-map textures for offline high-quality renderings. The presented approach is unique in that it works with monocular and binocular data, respects the depth of focus, can handle moving objects and is ready to be used for selective rendering.},
    author = {Pfeiffer, Thies and Memili, Cem},
    booktitle = {Proceedings of the IEEE VR 2015},
    editor = {Höllerer, Tobias and Interrante, Victoria and Lécuyer, Anatole and Swan, II, J. Edward},
    keywords = {Attention Volumes, Gaze-based Interaction, 3D, Eye Tracking},
    pages = {257--258},
    publisher = {IEEE},
    title = {{GPU-accelerated Attention Map Generation for Dynamic 3D Scenes}},
    url = {https://pub.uni-bielefeld.de/record/2726777},
    year = {2015},
    }
  • J. Pfeiffer, T. Pfeiffer, and M. Meißner, “Towards attentive in-store recommender systems,” in Reshaping Society through Analytics, Collaboration, and Decision Support, D. Power and L. Iyer, Eds., Springer International Publishing, 2015, vol. 18, p. 161–173.
    [BibTeX] [Abstract] [Download PDF]
    We present research-in-progress on an attentive in-store mobile recommender system that is integrated into the user’s glasses and worn during purchase decisions. The system makes use of the Attentive Mobile Interactive Cognitive Assistant (AMICA) platform prototype designed as a ubiquitous technology that supports people in their everyday-life. This paper gives a short overview of the technology and presents results from a pre-study in which we collected real-life eye-tracking data during decision processes in a supermarket. The data helps us to characterize and identify the different decision contexts based on differences in the observed attentional processes. AMICA provides eye-tracking data that can be used to classify decision-making behavior in real-time to make a recommendation process context-aware.
    @inbook{2679169,
    abstract = {We present research-in-progress on an attentive in-store mobile recommender system that is integrated into the user’s glasses and worn during purchase decisions. The system makes use of the Attentive Mobile Interactive Cognitive Assistant (AMICA) platform prototype designed as a ubiquitous technology that supports people in their everyday-life. This paper gives a short overview of the technology and presents results from a pre-study in which we collected real-life eye-tracking data during decision processes in a supermarket. The data helps us to characterize and identify the different decision contexts based on differences in the observed attentional processes. AMICA provides eye-tracking data that can be used to classify decision-making behavior in real-time to make a recommendation process context-aware.},
    author = {Pfeiffer, Jella and Pfeiffer, Thies and Meißner, Martin},
    booktitle = {Reshaping Society through Analytics, Collaboration, and Decision Support},
    editor = {Power, Daniel and Iyer, Lakshmi},
    isbn = {978-3-319-11575-7},
    keywords = {Mobile Cognitive Assistance Systems, Information Systems},
    pages = {161--173},
    publisher = {Springer International Publishing},
    title = {{Towards attentive in-store recommender systems}},
    url = {https://pub.uni-bielefeld.de/record/2679169},
    volume = {18},
    year = {2015},
    }
  • P. Renner and T. Pfeiffer, “Online Visual Attention Monitoring for Mobile Assistive Systems,” in SAGA 2015: 2nd International Workshop on Solutions for Automatic Gaze Data Analysis, 2015, p. 14–15. doi:10.2390/biecoll-saga2015_6
    [BibTeX] [Abstract] [Download PDF]
    Every now and then there are situations in which we are not sure how to proceed and thus are seeking for help. For example, choosing the best product out of dozens of different brands in a supermarket can be difficult, especially when following a specific diet. There are, however, also people who have problems with decision making or sequencing actions in everyday life, e.g. because they suffer from dementia. In such situations, it may be welcomed when there is someone around noticing our problem and offering help. In more private situations, e.g. in the bathroom, help in shape of a human being cannot be expected or even is not welcomed. Our research focuses on the design of mobile assistive systems which could assist in everyday life activities. Such a system needs to detect situations of helplessness, identify the interaction context, conclude what would be an appropriate assistance, before finally engaging in interaction with the user.
    @inproceedings{2779330,
    abstract = {Every now and then there are situations in which we are not sure how to proceed and thus are seeking for help. For example, choosing the best product out of dozens of different brands in a supermarket can be difficult, especially when following a specific diet. There are, however, also people who have problems with decision making or sequencing actions in everyday life, e.g. because they suffer from dementia. In such situations, it may be welcomed when there is someone around noticing our problem and offering help. In more private situations, e.g. in the bathroom, help in shape of a human being cannot be expected or even is not welcomed. Our research focuses on the design of mobile assistive systems which could assist in everyday life activities. Such a system needs to detect situations of helplessness, identify the interaction context, conclude what would be an appropriate assistance, before finally engaging in interaction with the user.},
    author = {Renner, Patrick and Pfeiffer, Thies},
    booktitle = {SAGA 2015: 2nd International Workshop on Solutions for Automatic Gaze Data Analysis},
    editor = {Pfeiffer, Thies and Essig, Kai},
    keywords = {eye tracking, gaze-based interaction, ADAMAAS},
    location = {Bielefeld},
    pages = {14--15},
    publisher = {eCollections Bielefeld University},
    title = {{Online Visual Attention Monitoring for Mobile Assistive Systems}},
    url = {https://pub.uni-bielefeld.de/record/2779330},
    doi = {10.2390/biecoll-saga2015_6},
    year = {2015},
    }
  • K. Kurzhals, M. Burch, T. Pfeiffer, and D. Weiskopf, “Eye Tracking in Computer-Based Visualization,” Computing in Science & Engineering, vol. 17, iss. 5, p. 64–71, 2015. doi:10.1109/MCSE.2015.93
    [BibTeX] [Abstract] [Download PDF]
    Eye tracking helps evaluate the quality of data visualization techniques and facilitates advanced interaction techniques for visualization systems.
    @article{2770093,
    abstract = {Eye tracking helps evaluate the quality of data visualization techniques and facilitates advanced interaction techniques for visualization systems.},
    author = {Kurzhals, Kuno and Burch, Michael and Pfeiffer, Thies and Weiskopf, Daniel},
    issn = {1521-9615},
    journal = {Computing in Science & Engineering},
    keywords = {Eyetracking},
    number = {5},
    pages = {64--71},
    publisher = {IEEE},
    title = {{Eye Tracking in Computer-Based Visualization}},
    url = {https://pub.uni-bielefeld.de/record/2770093},
    doi = {10.1109/MCSE.2015.93},
    volume = {17},
    year = {2015},
    }
  • T. Pfeiffer and P. Renner, “EyeSee3D: a low-cost approach for analysing mobile 3D eye tracking data using augmented reality technology,” in Proceedings of the Symposium on Eye Tracking Research and Applications, 2014, p. 195–202. doi:10.1145/2578153.2578183
    [BibTeX] [Abstract] [Download PDF]
    For validly analyzing human visual attention, it is often necessary to proceed from computer-based desktop set-ups to more natural real-world settings. However, the resulting loss of control has to be counterbalanced by increasing participant and/or item count. Together with the effort required to manually annotate the gaze-cursor videos recorded with mobile eye trackers, this renders many studies unfeasible. We tackle this issue by minimizing the need for manual annotation of mobile gaze data. Our approach combines geometric modelling with inexpensive 3D marker tracking to align virtual proxies with the real-world objects. This allows us to classify fixations on objects of interest automatically while supporting a completely free moving participant. The paper presents the EyeSee3D method as well as a comparison of an expensive outside-in (external cameras) and a low-cost inside-out (scene camera) tracking of the eyetracker’s position. The EyeSee3D approach is evaluated comparing the results from automatic and manual classification of fixation targets, which raises old problems of annotation validity in a modern context.
    @inproceedings{2652246,
    abstract = {For validly analyzing human visual attention, it is often necessary to proceed from computer-based desktop set-ups to more natural real-world settings. However, the resulting loss of control has to be counterbalanced by increasing participant and/or item count. Together with the effort required to manually annotate the gaze-cursor videos recorded with mobile eye trackers, this renders many studies unfeasible.
    We tackle this issue by minimizing the need for manual annotation of mobile gaze data. Our approach combines geometric modelling with inexpensive 3D marker tracking to align virtual proxies with the real-world objects. This allows us to classify fixations on objects of interest automatically while supporting a completely free moving participant.
    The paper presents the EyeSee3D method as well as a comparison of an expensive outside-in (external cameras) and a low-cost inside-out (scene camera) tracking of the eyetracker's position. The EyeSee3D approach is evaluated comparing the results from automatic and manual classification of fixation targets, which raises old problems of annotation validity in a modern context.},
    author = {Pfeiffer, Thies and Renner, Patrick},
    booktitle = {Proceedings of the Symposium on Eye Tracking Research and Applications},
    isbn = {978-1-4503-2751-0},
    keywords = {Gaze-based Interaction, Eyetracking, Augmented Reality},
    pages = {195--202},
    publisher = {ACM},
    title = {{EyeSee3D: a low-cost approach for analysing mobile 3D eye tracking data using augmented reality technology}},
    url = {https://nbn-resolving.org/urn:nbn:de:0070-pub-26522467, https://pub.uni-bielefeld.de/record/2652246},
    doi = {10.1145/2578153.2578183},
    year = {2014},
    }
  • J. Pfeiffer, M. Meißner, J. Prosiegel, and T. Pfeiffer, “Classification of Goal-Directed Search and Exploratory Search Using Mobile Eye-Tracking,” in Proceedings of the International Conference on Information Systems 2014 (ICIS 2014), 2014.
    [BibTeX] [Abstract] [Download PDF]
    In this paper, we investigate the visual attention of consumers with the help of mobile eye-tracking technology. We explore attentional differences between goal-directed search and exploratory search used when consumers are purchasing a product at the point-of-sale. The aim of this study is to classify these two search processes based solely on the consumers’ eye movements. Using data from a field experiment in a supermarket, we build a model that learns about consumers’ attentional processes and makes predictions about the search process they used. Our results show that we can correctly classify the search processes used with an accuracy of nearly 70% after just the first nine seconds of the search. Later on in the search process, the accuracy of the classification can reach up to 77%.
    @inproceedings{2696595,
    abstract = {In this paper, we investigate the visual attention of consumers with the help of mobile eye-tracking technology. We explore attentional differences between goal-directed search and exploratory search used when consumers are purchasing a product at the point-of-sale. The aim of this study is to classify these two search processes based solely on the consumers' eye movements. Using data from a field experiment in a supermarket, we build a model that learns about consumers' attentional processes and makes predictions about the search process they used. Our results show that we can correctly classify the search processes used with an accuracy of nearly 70% after just the first nine seconds of the search. Later on in the search process, the accuracy of the classification can reach up to 77%.},
    author = {Pfeiffer, Jella and Meißner, Martin and Prosiegel, Jascha and Pfeiffer, Thies},
    booktitle = {Proceedings of the International Conference on Information Systems 2014 (ICIS 2014)},
    keywords = {Recommendation Systems, Decision Support Systems (DSS), Human Information Behavior, Neuro-IS, Mobile commerce},
    title = {{Classification of Goal-Directed Search and Exploratory Search Using Mobile Eye-Tracking}},
    url = {https://pub.uni-bielefeld.de/record/2696595},
    year = {2014},
    }
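The basic classification idea in the entry above can be illustrated with a toy example: aggregate fixation statistics per search episode and train a standard classifier on them. The sketch below uses synthetic data and hypothetical features; it is not the authors' model or feature set:

```python
# Illustrative sketch: toy classifier for goal-directed vs. exploratory search episodes.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n = 200
# Hypothetical per-episode features: mean fixation duration (s), number of
# products fixated, and proportion of refixations within the first seconds.
goal_directed = np.column_stack([rng.normal(0.30, 0.05, n),
                                 rng.normal(4.0, 1.0, n),
                                 rng.normal(0.6, 0.1, n)])
exploratory = np.column_stack([rng.normal(0.22, 0.05, n),
                               rng.normal(9.0, 2.0, n),
                               rng.normal(0.3, 0.1, n)])
X = np.vstack([goal_directed, exploratory])
y = np.array([1] * n + [0] * n)  # 1 = goal-directed, 0 = exploratory

clf = LogisticRegression(max_iter=1000)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```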
  • P. Renner, T. Pfeiffer, and S. Wachsmuth, “Towards a model for anticipating human gestures in human-robot interactions in shared space,” Cognitive Processing, vol. 15, iss. 1 Supplement, p. 59–60, 2014.
    [BibTeX] [Abstract] [Download PDF]
    Human-robot interaction in shared spaces might benefit from human skills of anticipating movements. We observed human-human interactions in a route planning scenario to identify relevant communication strategies with a focus on hand-eye coordination.
    @article{2682102,
    abstract = {Human-robot interaction in shared spaces might benefit from human skills of anticipating movements. We observed human-human interactions in a route planning scenario to identify relevant communication strategies with a focus on hand-eye coordination.},
    author = {Renner, Patrick and Pfeiffer, Thies and Wachsmuth, Sven},
    issn = {1612-4790},
    journal = {Cognitive Processing},
    keywords = {gestures, eye tracking, robotics},
    number = {1 Supplement},
    pages = {59--60},
    publisher = {Springer Science + Business Media},
    title = {{Towards a model for anticipating human gestures in human-robot interactions in shared space}},
    url = {https://pub.uni-bielefeld.de/record/2682102},
    volume = {15},
    year = {2014},
    }
  • T. Pfeiffer, S. Stellmach, and Y. Sugano, “4th International Workshop on Pervasive Eye Tracking and Mobile Eye-Based Interaction,” in UbiComp’14 Adjunct: The 2014 ACM Conference on Ubiquitous Computing Adjunct Publication, 2014, p. 1085–1092.
    [BibTeX] [Abstract] [Download PDF]
    Previous work on eye tracking and eye-based human-computer interfaces mainly concentrated on making use of the eyes in traditional desktop settings. With the recent growth of interest in smart glass devices and low-cost eye trackers, however, gaze-based techniques for mobile computing is becoming increasingly important. PETMEI 2014 focuses on the pervasive eye tracking paradigm as a trailblazer for mobile eye-based interaction and eye-based context-awareness. We want to stimulate and explore the creativity of these communities with respect to the implications, key research challenges, and new applications for pervasive eye tracking in ubiquitous computing. The long-term goal is to create a strong interdisciplinary research community linking these fields together and to establish the workshop as the premier forum for research on pervasive eye tracking.
    @inproceedings{2685301,
    abstract = {Previous work on eye tracking and eye-based human-computer interfaces mainly concentrated on making use of the eyes in traditional desktop settings. With the recent growth of interest in smart glass devices and low-cost eye trackers, however, gaze-based techniques for mobile computing is becoming increasingly important. PETMEI 2014 focuses on the pervasive eye tracking paradigm as a trailblazer for mobile eye-based interaction and eye-based context-awareness. We want to stimulate and explore the creativity of these communities with respect to the implications, key research challenges, and new applications for pervasive eye tracking in ubiquitous computing. The long-term goal is to create a strong interdisciplinary research community linking these fields together and to establish the workshop as the premier forum for research on pervasive eye tracking.},
    author = {Pfeiffer, Thies and Stellmach, Sophie and Sugano, Yusuke},
    booktitle = {UbiComp'14 Adjunct: The 2014 ACM Conference on Ubiquitous Computing Adjunct Publication},
    isbn = {978-1-4503-3047-3},
    keywords = {Gaze-based Interaction},
    pages = {1085--1092},
    publisher = {ACM},
    title = {{4th International Workshop on Pervasive Eye Tracking and Mobile Eye-Based Interaction}},
    url = {https://pub.uni-bielefeld.de/record/2685301},
    year = {2014},
    }
  • J. Pfeiffer, J. Prosiegel, M. Meißner, and T. Pfeiffer, “Identifying goal-oriented and explorative information search patterns,” in Proceedings of the Gmunden Retreat on NeuroIS 2014, 2014, p. 23–25.
    [BibTeX] [Abstract] [Download PDF]
    One of the latest trends of ubiquitous Information Systems is the use of smartglasses, such as Google Glass or Epson Moverio BT-200 that are connected to the Internet and are augmenting reality with a head-up display. In order to develop recommendation agents (RAs) for the use at the point of sale, researchers have proposed to integrate a portable eye tracking system into such smartglasses (Pfeiffer et al. 2013). This would allow providing the customer with relevant product information and alternative products by making use of the customer’s information acquisition processes recorded during the purchase decision.
    @inproceedings{2679133,
    abstract = {One of the latest trends of ubiquitous Information Systems is the use of smartglasses, such as Google Glass or Epson Moverio BT-200 that are connected to the Internet and are augmenting reality with a head-up display. In order to develop recommendation agents (RAs) for the use at the point of sale, researchers have proposed to integrate a portable eye tracking system into such smartglasses (Pfeiffer et al. 2013). This would allow providing the customer with relevant product information and alternative products by making use of the customer’s information acquisition processes recorded during the purchase decision.},
    author = {Pfeiffer, Jella and Prosiegel, Jascha and Meißner, Martin and Pfeiffer, Thies},
    booktitle = {Proceedings of the Gmunden Retreat on NeuroIS 2014},
    editor = {Davis, Fred and Riedl, René and vom Brocke, Jan and Léger, Pierre-Majorique and Randolph, Adriane},
    keywords = {Mobile Cognitive Assistance Systems, Information Systems},
    location = {Gmunden, Austria},
    pages = {23--25},
    title = {{Identifying goal-oriented and explorative information search patterns}},
    url = {https://nbn-resolving.org/urn:nbn:de:0070-pub-26791339, https://pub.uni-bielefeld.de/record/2679133},
    year = {2014},
    }
  • T. Pfeiffer, P. Renner, and N. Pfeiffer-Leßmann, “Efficient analysis of gaze-behavior in 3D environments,” Cognitive Processing, vol. 15, iss. Suppl. 1, p. S127–S129, 2014.
    [BibTeX] [Abstract] [Download PDF]
    We present an approach to identify the 3D point of regard and the fixated object in real-time based on 2D gaze videos without the need for manual annotation. The approach does not require additional hardware except for the mobile eye tracker. It is currently applicable for scenarios with static target objects and requires an instrumentation of the environment with markers. The system has already been tested in two different studies. Possible applications are visual world paradigms in complex 3D environments, research on visual attention or human-human/human-agent interaction studies.
    @article{2682098,
    abstract = {We present an approach to identify the 3D point of regard and the fixated object in real-time based on 2D gaze videos without the need for manual annotation. The approach does not require additional hardware except for the mobile eye tracker. It is currently applicable for scenarios with static target objects and requires an instrumentation of the environment with markers. The system has already been tested in two different studies. Possible applications are visual world paradigms in complex 3D environments, research on visual attention or human-human/human-agent interaction studies.},
    author = {Pfeiffer, Thies and Renner, Patrick and Pfeiffer-Leßmann, Nadine},
    issn = {1612-4790},
    journal = {Cognitive Processing},
    keywords = {Gaze-based Interaction, Eye Tracking},
    number = {Suppl. 1},
    pages = {S127--S129},
    publisher = {Springer},
    title = {{Efficient analysis of gaze-behavior in 3D environments}},
    url = {https://nbn-resolving.org/urn:nbn:de:0070-pub-26820982, https://pub.uni-bielefeld.de/record/2682098},
    volume = {15},
    year = {2014},
    }
  • P. Renner, T. Pfeiffer, and I. Wachsmuth, “Spatial references with gaze and pointing in shared space of humans and robots,” in Spatial Cognition IX, C. Freksa, B. Nebel, M. Hegarty, and T. Barkowsky, Eds., Springer, 2014, vol. 8684, p. 121–136. doi:10.1007/978-3-319-11215-2_9
    [BibTeX] [Abstract] [Download PDF]
    For solving tasks cooperatively in close interaction with humans, robots need to have timely updated spatial representations. However, perceptual information about the current position of interaction partners is often late. If robots could anticipate the targets of upcoming manual actions, such as pointing gestures, they would have more time to physically react to human movements and could consider prospective space allocations in their planning. Many findings support a close eye-hand coordination in humans which could be used to predict gestures by observing eye gaze. However, effects vary strongly with the context of the interaction. We collect evidence of eye-hand coordination in a natural route planning scenario in which two agents interact over a map on a table. In particular, we are interested if fixations can predict pointing targets and how target distances affect the interlocutor’s pointing behavior. We present an automatic method combining marker tracking and 3D modeling that provides eye and gesture measurements in real-time.
    @inbook{2679177,
    abstract = {For solving tasks cooperatively in close interaction with humans, robots need to have timely updated spatial representations. However, perceptual information about the current position of interaction partners is often late. If robots could anticipate the targets of upcoming manual actions, such as pointing gestures, they would have more time to physically react to human movements and could consider prospective space allocations in their planning. Many findings support a close eye-hand coordination in humans which could be used to predict gestures by observing eye gaze. However, effects vary strongly with the context of the interaction. We collect evidence of eye-hand coordination in a natural route planning scenario in which two agents interact over a map on a table. In particular, we are interested if fixations can predict pointing targets and how target distances affect the interlocutor's pointing behavior. We present an automatic method combining marker tracking and 3D modeling that provides eye and gesture measurements in real-time.},
    author = {Renner, Patrick and Pfeiffer, Thies and Wachsmuth, Ipke},
    booktitle = {Spatial Cognition IX},
    editor = {Freksa, Christian and Nebel, Bernhard and Hegarty, Mary and Barkowsky, Thomas},
    isbn = {978-3-319-11214-5},
    keywords = {gestures, robotics, eye tracking, multimodal interaction},
    pages = {121--136},
    publisher = {Springer},
    title = {{Spatial references with gaze and pointing in shared space of humans and robots}},
    url = {https://nbn-resolving.org/urn:nbn:de:0070-pub-26791778, https://pub.uni-bielefeld.de/record/2679177},
    doi = {10.1007/978-3-319-11215-2_9},
    volume = {8684},
    year = {2014},
    }
  • V. Losing, L. Rottkamp, M. Zeunert, and T. Pfeiffer, “Guiding Visual Search Tasks Using Gaze-Contingent Auditory Feedback,” in UbiComp’14 Adjunct: The 2014 ACM Conference on Ubiquitous Computing Adjunct Publication, 2014, p. 1093–1102.
    [BibTeX] [Abstract] [Download PDF]
    In many applications it is necessary to guide humans’ visual attention towards certain points in the environment. This can be to highlight certain attractions in a touristic application for smart glasses, to signal important events to the driver of a car or to draw the attention of a user of a desktop system to an important message of the user interface. The question we are addressing here is: How can we guide visual attention if we are not able to do it visually? In the presented approach we use gaze-contingent auditory feedback (sonification) to guide visual attention and show that people are able to make use of this guidance to speed up visual search tasks significantly.
    @inproceedings{2685312,
    abstract = {In many applications it is necessary to guide humans' visual attention towards certain points in the environment. This can be to highlight certain attractions in a touristic application for smart glasses, to signal important events to the driver of a car or to draw the attention of a user of a desktop system to an important message of the user interface. The question we are addressing here is: How can we guide visual attention if we are not able to do it visually? In the presented approach we use gaze-contingent auditory feedback (sonification) to guide visual attention and show that people are able to make use of this guidance to speed up visual search tasks significantly.},
    author = {Losing, Viktor and Rottkamp, Lukas and Zeunert, Michael and Pfeiffer, Thies},
    booktitle = {UbiComp'14 Adjunct: The 2014 ACM Conference on Ubiquitous Computing Adjunct Publication},
    isbn = {978-1-4503-3047-3},
    keywords = {Gaze-based Interaction, Visual Search},
    location = {Seattle, WA, USA},
    pages = {1093--1102},
    publisher = {ACM Press},
    title = {{Guiding Visual Search Tasks Using Gaze-Contingent Auditory Feedback}},
    url = {https://pub.uni-bielefeld.de/record/2685312},
    year = {2014},
    }
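The gaze-contingent sonification described in the entry above essentially maps the angular distance between the current gaze point and the hidden target onto audio parameters. A minimal sketch with assumed parameter ranges (not the exact mapping used in the study):

```python
# Illustrative sketch: map gaze-to-target distance onto tone frequency and volume.
import math

def sonification_params(gaze_deg, target_deg, max_angle=40.0, f_min=220.0, f_max=880.0):
    """gaze_deg, target_deg: (x, y) positions in degrees of visual angle.
    Returns (frequency_hz, volume) for the current gaze sample."""
    dist = math.hypot(target_deg[0] - gaze_deg[0], target_deg[1] - gaze_deg[1])
    closeness = max(0.0, 1.0 - dist / max_angle)     # 1 at the target, 0 beyond max_angle
    frequency = f_min + closeness * (f_max - f_min)  # pitch rises towards the target
    volume = 0.2 + 0.8 * closeness                   # keep a faint tone even far from the target
    return frequency, volume

print(sonification_params(gaze_deg=(5.0, 2.0), target_deg=(8.0, 6.0)))  # ~(797.5 Hz, 0.9)
```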
  • M. Huschens, J. Pfeiffer, and T. Pfeiffer, “Important product features of mobile decision support systems for in-store purchase decisions: A user-perspective taking into account different purchase situations,” in Proceedings of the MKWI, 2014.
    [BibTeX] [Abstract] [Download PDF]
    Since the widespread diffusion of mobile devices, like smartphones, mobile decision support systems (MDSS) that provide product information, recommendations or other kind of decision support for in-store purchases have gained momentum. A user-centered design of MDSS requires a choice of features appropriate for the specific decision situation. This paper presents results of a study to identify important features customers expect of an in-store MDSS for electronic devices in different purchase situations. The study has been conducted as an online questionnaire applying a preference measurement technique from marketing science.
    @inproceedings{2666066,
    abstract = {Since the widespread diffusion of mobile devices, like smartphones, mobile decision support systems (MDSS) that provide product information, recommendations or other kind of decision support for in-store purchases have gained momentum. A user-centered design of MDSS requires a choice of features appropriate for the specific decision situation. This paper presents results of a study to identify important features customers expect of an in-store MDSS for electronic devices in different purchase situations. The study has been conducted as an online questionnaire applying a preference measurement technique from marketing science.},
    author = {Huschens, Martin and Pfeiffer, Jella and Pfeiffer, Thies},
    booktitle = {Proceedings of the MKWI},
    keywords = {Mobile Cognitive Assistance Systems, Information Systems},
    title = {{Important product features of mobile decision support systems for in-store purchase decisions: A user-perspective taking into account different purchase situations}},
    url = {https://pub.uni-bielefeld.de/record/2666066},
    year = {2014},
    }
  • P. Renner and T. Pfeiffer, “Model-based acquisition and analysis of multimodal interactions for improving human-robot interaction,” in Proceedings of the Symposium on Eye Tracking Research and Applications, 2014, p. 361–362. doi:10.1145/2578153.2582176
    [BibTeX] [Abstract] [Download PDF]
    For solving complex tasks cooperatively in close interaction with robots, they need to understand natural human communication. To achieve this, robots could benefit from a deeper understanding of the processes that humans use for successful communication. Such skills can be studied by investigating human face-to-face interactions in complex tasks. In our work the focus lies on shared-space interactions in a path planning task and thus 3D gaze directions and hand movements are of particular interest. However, the analysis of gaze and gestures is a time-consuming task: Usually, manual annotation of the eye tracker’s scene camera video is necessary in a frame-by-frame manner. To tackle this issue, based on the EyeSee3D method, an automatic approach for annotating interactions is presented: A combination of geometric modeling and 3D marker tracking serves to align real world stimuli with virtual proxies. This is done based on the scene camera images of the mobile eye tracker alone. In addition to the EyeSee3D approach, face detection is used to automatically detect fixations on the interlocutor. For the acquisition of the gestures, an optical marker tracking system is integrated and fused in the multimodal representation of the communicative situation.
    @inproceedings{2666049,
    abstract = {For solving complex tasks cooperatively in close interaction with robots, they need to understand natural human communication. To achieve this, robots could benefit from a deeper understanding of the processes that humans use for successful communication. Such skills can be studied by investigating human face-to-face interactions in complex tasks. In our work the focus lies on shared-space interactions in a path planning task and thus 3D gaze directions and hand movements are of particular interest. However, the analysis of gaze and gestures is a time-consuming task: Usually, manual annotation of the eye tracker's scene camera video is necessary in a frame-by-frame manner. To tackle this issue, based on the EyeSee3D method, an automatic approach for annotating interactions is presented: A combination of geometric modeling and 3D marker tracking serves to align real world stimuli with virtual proxies. This is done based on the scene camera images of the mobile eye tracker alone. In addition to the EyeSee3D approach, face detection is used to automatically detect fixations on the interlocutor. For the acquisition of the gestures, an optical marker tracking system is integrated and fused in the multimodal representation of the communicative situation.},
    author = {Renner, Patrick and Pfeiffer, Thies},
    booktitle = {Proceedings of the Symposium on Eye Tracking Research and Applications},
    isbn = {978-1-4503-2751-0},
    keywords = {Eyetracking, geometric modelling, motion tracking, Gaze-based Interaction, 3D gaze analysis, Augmented Reality, eye tracking, marker tracking},
    pages = {361--362},
    publisher = {ACM},
    title = {{Model-based acquisition and analysis of multimodal interactions for improving human-robot interaction}},
    url = {https://nbn-resolving.org/urn:nbn:de:0070-pub-26660493, https://pub.uni-bielefeld.de/record/2666049},
    doi = {10.1145/2578153.2582176},
    year = {2014},
    }
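    A rough sketch, under simplifying assumptions, of the model-based mapping idea that the EyeSee3D approach builds on (not the authors’ code): the 2D point of regard from the scene camera is back-projected through a pinhole camera model, whose pose is assumed to be known from marker tracking, and the resulting gaze ray is intersected with spherical proxies of the geometric scene model to decide which object is currently fixated.

        import numpy as np

        def gaze_ray(por_px, camera_matrix, cam_rotation, cam_position):
            """Back-project a 2D point of regard (pixels) into a world-space ray.

            camera_matrix: 3x3 intrinsics of the scene camera; cam_rotation (3x3) and
            cam_position (3,) describe its pose, e.g. obtained from marker tracking.
            """
            pixel = np.array([por_px[0], por_px[1], 1.0])
            direction_cam = np.linalg.inv(camera_matrix) @ pixel   # ray in camera coords
            direction_world = cam_rotation @ direction_cam
            return cam_position, direction_world / np.linalg.norm(direction_world)

        def fixated_object(origin, direction, proxies):
            """Name of the closest spherical proxy hit by the gaze ray, or None.

            proxies: list of (name, center (3,), radius) describing the scene model.
            """
            best_name, best_t = None, np.inf
            for name, center, radius in proxies:
                oc = np.asarray(center) - origin
                t = float(np.dot(oc, direction))                   # closest approach along ray
                if t > 0 and np.linalg.norm(oc - t * direction) <= radius and t < best_t:
                    best_name, best_t = name, t
            return best_name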
  • K. Harmening and T. Pfeiffer, “Location-based online identification of objects in the centre of visual attention using eye tracking,” in Proceedings of the First International Workshop on Solutions for Automatic Gaze-Data Analysis 2013 (SAGA 2013), 2013, p. 38–40. doi:10.2390/biecoll-saga2013_10
    [BibTeX] [Abstract] [Download PDF]
    Modern mobile eye trackers calculate the point-of-regard relatively to the current image obtained by a scene-camera. They show where the wearer of the eye tracker is looking at in this 2D picture, but they fail to provide a link to the object of interest in the environment. To understand the context of the wearer’s current actions, human annotators therefore have to label the recorded fixations manually. This is very time consuming and also prevents an online interactive use in HCI. A popular scenario for mobile eye tracking are supermarkets. Gidlöf et al. (2013) used this scenario to study the visual behaviour in a decision-process. De Beugher et al. (2012) developed an offline approach to automate the analysis of object identification. For usage of mobile eye tracking in an online recommender system (Pfeiffer et al., 2013), that supports the user in a supermarket, it is essential to identify the object of interest immediately. Our work addresses this issue by using location information to speed-up the identification of the fixated object and at the same time making detection results more robust.
    @inproceedings{2632812,
    abstract = {Modern mobile eye trackers calculate the point-of-regard relatively to the current image obtained by a scene-camera. They show where the wearer of the eye tracker is looking at in this 2D picture, but they fail to provide a link to the object of interest in the environment. To understand the context of the wearer’s current actions, human annotators therefore have to label the recorded fixations manually. This is very time consuming and also prevents an online interactive use in HCI. A popular scenario for mobile eye tracking are supermarkets. Gidlöf et al. (2013) used this scenario to study the visual behaviour in a decision-process. De Beugher et al. (2012) developed an offline approach to automate the analysis of object identification. For usage of mobile eye tracking in an online recommender system (Pfeiffer et al., 2013), that supports the user in a supermarket, it is essential to identify the object of interest immediately. Our work addresses this issue by using location information to speed-up the identification of the fixated object and at the same time making detection results more robust.},
    author = {Harmening, Kai and Pfeiffer, Thies},
    booktitle = {Proceedings of the First International Workshop on Solutions for Automatic Gaze-Data Analysis 2013 (SAGA 2013)},
    editor = {Pfeiffer, Thies and Essig, Kai},
    keywords = {Gaze-based Interaction, Mobile Cognitive Assistance Systems},
    location = {Bielefeld},
    pages = {38--40},
    publisher = {Center of Excellence Cognitive Interaction Technology},
    title = {{Location-based online identification of objects in the centre of visual attention using eye tracking}},
    url = {https://pub.uni-bielefeld.de/record/2632812},
    doi = {10.2390/biecoll-saga2013_10},
    year = {2013},
    }
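    A hypothetical sketch of the core idea of using location information to speed up and stabilize the identification of the fixated object (illustrative only; the object positions and the indoor-positioning source are assumptions): before any image-based matching, the set of known objects is reduced to those near the wearer’s current position, and only this small candidate set is handed to the fixation matcher.

        import math

        def candidate_objects(wearer_position, object_positions, max_distance=2.0):
            """Return the subset of known objects close enough to be fixated right now.

            wearer_position: (x, y) position in the store, e.g. from indoor positioning.
            object_positions: dict mapping object id -> (x, y) shelf position.
            max_distance: radius in metres within which objects are plausible targets.
            """
            return {
                obj_id: pos
                for obj_id, pos in object_positions.items()
                if math.dist(wearer_position, pos) <= max_distance
            }

        # The (much smaller) candidate set can then be passed to an image- or
        # marker-based matcher that decides which of these objects the current
        # point-of-regard falls on.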
  • T. Pfeiffer, “Gaze-based assistive technologies,” in Assistive Technologies and Computer Access for Motor Disabilities, G. Kouroupetroglou, Ed., IGI Global, 2013, p. 90–109. doi:10.4018/978-1-4666-4438-0.ch004
    [BibTeX] [Abstract] [Download PDF]
    The eyes play an important role both in perception and communication. Technical interfaces that make use of their versatility can bring significant improvements to those who are unable to speak or to handle selection tasks elsewise such as with their hands, feet, noses or tools handled with the mouth. Using the eyes to enter texts into a computer system, which is called gaze-typing, is the most prominent gaze-based assistive technology. The article reviews the principles of eye movements, presents an overview of current eye-tracking systems, and discusses several approaches to gaze-typing. With the recent advent of mobile eye-tracking systems, gaze-based assistive technology is no longer restricted to interactions with desktop-computers. Gaze-based assistive technology is ready to expand its application into other areas of everyday life. The second part of the article thus discusses the use of gaze-based assistive technology in the household, or “the wild,” outside one’s own four walls.
    @inbook{2564595,
    abstract = {The eyes play an important role both in perception and communication. Technical interfaces that make use of their versatility can bring significant improvements to those who are unable to speak or to handle selection tasks elsewise such as with their hands, feet, noses or tools handled with the mouth. Using the eyes to enter texts into a computer system, which is called gaze-typing, is the most prominent gaze-based assistive technology. The article reviews the principles of eye movements, presents an overview of current eye-tracking systems, and discusses several approaches to gaze-typing. With the recent advent of mobile eye-tracking systems, gaze-based assistive technology is no longer restricted to interactions with desktop-computers. Gaze-based assistive technology is ready to expand its application into other areas of everyday life. The second part of the article thus discusses the use of gaze-based assistive technology in the household, or “the wild,” outside one’s own four walls.},
    author = {Pfeiffer, Thies},
    booktitle = {Assistive Technologies and Computer Access for Motor Disabilities},
    editor = {Kouroupetroglou, Georgios},
    isbn = {9781466644380},
    issn = {2327-9354},
    keywords = {Gaze-based Interaction, Eye Tracking},
    pages = {90--109},
    publisher = {IGI Global},
    title = {{Gaze-based assistive technologies}},
    url = {https://pub.uni-bielefeld.de/record/2564595},
    doi = {10.4018/978-1-4666-4438-0.ch004},
    year = {2013},
    }
  • M. Meißner, J. Pfeiffer, and T. Pfeiffer, “Mobile eyetracking for decision analysis at the point-of-sale: Requirements from the perspectives of marketing research and human-computer interaction,” in Proceedings of the First International Workshop on Solutions for Automatic Gaze-Data Analysis 2013 (SAGA 2013), 2013, p. 10–13. doi:10.2390/biecoll-saga2013_3
    [BibTeX] [Abstract] [Download PDF]
    In a typical grocery-shopping trip consumers are overwhelmed not only by the number of products and brands in the store, but also by other possible distractions like advertisements, other consumers or smartphones. In this environment, attention is the key source for investigating the decision processes of customers. Recent mobile eyetracking systems have opened the gate to a better understanding of instore attention. We present perspectives from the two disciplines marketing research and human-computer interaction and refine methodical and technological requirements for attention analysis at the point-of-sale (POS).
    @inproceedings{2632819,
    abstract = {In a typical grocery-shopping trip consumers are overwhelmed not only by the number of products and brands in the store, but also by other possible distractions like advertisements, other consumers or smartphones. In this environment, attention is the key source for investigating the decision processes of customers. Recent mobile eyetracking systems have opened the gate to a better understanding of instore attention. We present perspectives from the two disciplines marketing research and human-computer interaction and refine methodical and technological requirements for attention analysis at the point-of-sale (POS).},
    author = {Meißner, Martin and Pfeiffer, Jella and Pfeiffer, Thies},
    booktitle = {Proceedings of the First International Workshop on Solutions for Automatic Gaze-Data Analysis 2013 (SAGA 2013)},
    editor = {Pfeiffer, Thies and Essig, Kai},
    keywords = {Gaze-based Interaction, Mobile Cognitive Assistance Systems},
    location = {Bielefeld},
    pages = {10--13},
    publisher = {Center of Excellence Cognitive Interaction Technology},
    title = {{Mobile eyetracking for decision analysis at the point-of-sale: Requirements from the perspectives of marketing research and human-computer interaction}},
    url = {https://pub.uni-bielefeld.de/record/2632819},
    doi = {10.2390/biecoll-saga2013_3},
    year = {2013},
    }
  • T. Dankert, D. Heil, and T. Pfeiffer, “Stereo vision and acuity tests within a virtual reality set-up,” in Virtuelle und Erweiterte Realität – 10. Workshop der GI-Fachgruppe VR/AR, 2013, p. 185–188.
    [BibTeX] [Abstract] [Download PDF]
    The provision of stereo images to facilitate depth perception by stereopsis is one key aspect of many Virtual Reality installations and there are many technical approaches to do so. However, differences in visual capabilities of the user and technical limitations of a specific set-up might restrict the spatial range in which stereopsis can be facilitated. In this paper, we transfer an existent test for stereo vision from the real world to a virtual environment and extend it to measure stereo acuity.
    @inproceedings{2623964,
    abstract = {The provision of stereo images to facilitate depth perception by stereopsis is one key aspect of many Virtual Reality installations and there are many technical approaches to do so. However, differences in visual capabilities of the user and technical limitations of a specific set-up might restrict the spatial range in which stereopsis can be facilitated. In this paper, we transfer an existent test for stereo vision from the real world to a virtual environment and extend it to measure stereo acuity.},
    author = {Dankert, Timo and Heil, Dimitri and Pfeiffer, Thies},
    booktitle = {Virtuelle und Erweiterte Realität - 10. Workshop der GI-Fachgruppe VR/AR},
    editor = {Latoschik, Marc Erich and Staadt, Oliver and Steinicke, Frank},
    isbn = {978-3-8440-2211-7},
    keywords = {Virtual Reality},
    pages = {185--188},
    publisher = {Shaker Verlag},
    title = {{Stereo vision and acuity tests within a virtual reality set-up}},
    url = {https://nbn-resolving.org/urn:nbn:de:0070-pub-26239645, https://pub.uni-bielefeld.de/record/2623964},
    year = {2013},
    }
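    For context on what extending such a test towards stereo acuity involves, the following sketch (not code from the paper) uses the standard geometric approximation relating binocular disparity to depth, eta ≈ a·Δd/d² for interocular distance a, viewing distance d and depth offset Δd, to compute the disparity of a stimulus and the smallest depth difference a viewer with a given acuity threshold could still resolve in a virtual scene.

        import math

        def disparity_arcsec(ipd_m, distance_m, depth_offset_m):
            """Binocular disparity (arcseconds) of a point depth_offset_m behind a
            reference surface at distance_m, using eta = ipd * d_offset / distance^2."""
            eta_rad = ipd_m * depth_offset_m / distance_m ** 2
            return math.degrees(eta_rad) * 3600.0

        def min_resolvable_depth(ipd_m, distance_m, acuity_arcsec):
            """Smallest depth difference (m) resolvable at a given acuity threshold."""
            eta_rad = math.radians(acuity_arcsec / 3600.0)
            return eta_rad * distance_m ** 2 / ipd_m

        # Example: with a 6.3 cm interocular distance and a 20 arcsec threshold,
        # the smallest resolvable depth step at 2 m viewing distance is about 6 mm:
        # min_resolvable_depth(0.063, 2.0, 20.0)  ->  ~0.0062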
  • S. Kousidis, T. Pfeiffer, and D. Schlangen, “MINT.tools: Tools and Adaptors Supporting Acquisition, Annotation and Analysis of Multimodal Corpora,” in Proceedings of Interspeech 2013, 2013.
    [BibTeX] [Download PDF]
    @inproceedings{2581101,
    author = {Kousidis, Spyridon and Pfeiffer, Thies and Schlangen, David},
    booktitle = {Proceedings of Interspeech 2013},
    keywords = {Multimodal Communication},
    location = {Lyon, France},
    publisher = {ISCA},
    title = {{MINT.tools: Tools and Adaptors Supporting Acquisition, Annotation and Analysis of Multimodal Corpora}},
    url = {https://nbn-resolving.org/urn:nbn:de:0070-pub-25811018, https://pub.uni-bielefeld.de/record/2581101},
    year = {2013},
    }
  • E. Dyck, T. Pfeiffer, and M. Botsch, “Evaluation of Surround-View and Self-Rotation in the OCTAVIS VR-System,” in Joint Virtual Reality Conference of EGVE – EuroVR, 2013, p. 1–8. doi:10.2312/EGVE.JVRC13.001-008
    [BibTeX] [Abstract] [Download PDF]
    In this paper we evaluate spatial presence and orientation in the OCTAVIS system, a novel virtual reality platform aimed at training and rehabilitation of visual-spatial cognitive abilities. It consists of eight touch-screen displays surrounding the user, thereby providing a 360° horizontal panorama view. A rotating office chair and a joystick in the armrest serve as input devices to easily navigate through the virtual environment. We conducted a two-step experiment to investigate spatial orientation capabilities with our device. First, we examined whether the extension of the horizontal field of view from 135° (three displays) to 360° (eight displays) has an effect on spatial presence and on the accuracy in a pointing task. Second, driving the full eight screens, we explored the effect of embodied self-rotation using the same measures. In particular we compare navigation by rotating the world while the user is sitting stable to a stable world and a self-rotating user.
    @inproceedings{2647957,
    abstract = {In this paper we evaluate spatial presence and orientation in the OCTAVIS system, a novel virtual reality platform aimed at training and rehabilitation of visual-spatial cognitive abilities. It consists of eight touch-screen displays surrounding the user, thereby providing a 360° horizontal panorama view. A rotating office chair and a joystick in the armrest serve as input devices to easily navigate through the virtual environment. We conducted a two-step experiment to investigate spatial orientation capabilities with our device. First, we examined whether the extension of the horizontal field of view from 135° (three displays) to 360° (eight displays) has an effect on spatial presence and on the accuracy in a pointing task. Second, driving the full eight screens, we explored the effect of embodied self-rotation using the same measures. In particular we compare navigation by rotating the world while the user is sitting stable to a stable world and a self-rotating user.},
    author = {Dyck, Eugen and Pfeiffer, Thies and Botsch, Mario},
    booktitle = {Joint Virtual Reality Conference of EGVE - EuroVR},
    editor = {Mohler, Betty and Raffin, Bruno and Saito, Hideo and Staadt, Oliver},
    isbn = {978-3-905674-47-7},
    issn = {1727-530X},
    keywords = {Virtual Reality},
    pages = {1--8},
    publisher = {Eurographics Association},
    title = {{Evaluation of Surround-View and Self-Rotation in the OCTAVIS VR-System}},
    url = {https://pub.uni-bielefeld.de/record/2647957},
    doi = {10.2312/EGVE.JVRC13.001-008},
    year = {2013},
    }
  • T. Pfeiffer, “Documentation of gestures with data gloves,” in Body-Language-Communication: An International Handbook on Multimodality in Human Interaction, C. Müller, A. Cienki, E. Fricke, S. H. Ladewig, D. McNeill, and S. Teßendorf, Eds., Mouton de Gruyter, 2013, vol. 38/1, p. 868–869. doi:10.1515/9783110261318.868
    [BibTeX] [Abstract] [Download PDF]
    Human hand gestures are very swift and difficult to observe from the (often) distant perspective of a scientific overhearer. Not uncommonly, fingers are occluded by other body parts or context objects and the true hand posture is often only revealed to the addressee. In addition to that, as the hand has many degrees of freedom and the annotation has to cover positions and orientations in a 3D world – which is less accessible from the typical computer-desktop workplace of an annotator than, let’s say, spoken language – the annotation of hand postures is quite expensive and complex. Fortunately, the research on virtual reality technology has brought about data gloves in the first place, which were meant as an interaction device allowing humans to manipulate entities in a virtual world. Since its release, however, many different applications have been found. Data gloves are devices that track most of the joints of the human hand and generate data-sets describing the posture of the hand several times a second. The article reviews different types of data gloves, discusses representation formats and ways to support annotation, and presents best practices for study design using data gloves as recording devices.
    @inbook{2564568,
    abstract = {Human hand gestures are very swift and difficult to observe from the (often) distant perspective of a scientific overhearer. Not uncommonly, fingers are occluded by other body parts or context objects and the true hand posture is often only revealed to the addressee. In addition to that, as the hand has many degrees of freedom and the annotation has to cover positions and orientations in a 3D world – which is less accessible from the typical computer-desktop workplace of an annotator than, let’s say, spoken language – the annotation of hand postures is quite expensive and complex. Fortunately, the research on virtual reality technology has brought about data gloves in the first place, which were meant as an interaction device allowing humans to manipulate entities in a virtual world. Since its release, however, many different applications have been found. Data gloves are devices that track most of the joints of the human hand and generate data-sets describing the posture of the hand several times a second. The article reviews different types of data gloves, discusses representation formats and ways to support annotation, and presents best practices for study design using data gloves as recording devices.},
    author = {Pfeiffer, Thies},
    booktitle = {Body-Language-Communication: An International Handbook on Multimodality in Human Interaction},
    editor = {Müller, Cornelia and Cienki, Alan and Fricke, Ellen and Ladewig, Silva H. and McNeill, David and Teßendorf, Sedinha},
    isbn = {9783110261318},
    keywords = {Multimodal Communication, Multimodal Corpora, Motion Capturing, Data Gloves},
    pages = {868--869},
    publisher = {Mouton de Gruyter},
    title = {{Documentation of gestures with data gloves}},
    url = {https://pub.uni-bielefeld.de/record/2564568},
    doi = {10.1515/9783110261318.868},
    volume = {38/1},
    year = {2013},
    }
  • T. Pfeiffer, F. Hofmann, F. Hahn, H. Rieser, and I. Röpke, “Gesture semantics reconstruction based on motion capturing and complex event processing: a circular shape example,” in Proceedings of the Special Interest Group on Discourse and Dialog (SIGDIAL) 2013 Conference, 2013, p. 270–279.
    [BibTeX] [Abstract] [Download PDF]
    A fundamental problem in manual based gesture semantics reconstruction is the specification of preferred semantic concepts for gesture trajectories. This issue is complicated by problems human raters have annotating fast-paced three dimensional trajectories. Based on a detailed example of a gesticulated circular trajectory, we present a data-driven approach that covers parts of the semantic reconstruction by making use of motion capturing (mocap) technology. In our FA3ME framework we use a complex event processing approach to analyse and annotate multi-modal events. This framework provides grounds for a detailed description of how to get at the semantic concept of circularity observed in the data.
    @inproceedings{2608950,
    abstract = {A fundamental problem in manual based gesture semantics reconstruction is the specification of preferred semantic concepts for gesture trajectories. This issue is complicated by problems human raters have annotating fast-paced three dimensional trajectories. Based on a detailed example of a gesticulated circular trajectory, we present a data-driven approach that covers parts of the semantic reconstruction by making use of motion capturing (mocap) technology. In our FA3ME framework we use a complex event processing approach to analyse and annotate multi-modal events. This framework provides grounds for a detailed description of how to get at the semantic concept of circularity observed in the data.},
    author = {Pfeiffer, Thies and Hofmann, Florian and Hahn, Florian and Rieser, Hannes and Röpke, Insa},
    booktitle = {Proceedings of the Special Interest Group on Discourse and Dialog (SIGDIAL) 2013 Conference},
    editor = {Eskenazi, Maxine and Strube, Michael and Di Eugenio, Barbara and Williams, Jason D.},
    isbn = {978-1-937284-95-4},
    keywords = {Multimodal Communication},
    location = {Metz, France},
    pages = {270--279},
    publisher = {Association for Computational Linguistics},
    title = {{Gesture semantics reconstruction based on motion capturing and complex event processing: a circular shape example}},
    url = {https://pub.uni-bielefeld.de/record/2608950},
    year = {2013},
    }
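    A hypothetical sketch (not the FA3ME implementation) of one simple way to test a motion-captured trajectory for the kind of circularity discussed here: project the 3D samples onto their best-fit plane via PCA and accept the trajectory as roughly circular if the samples keep an almost constant radius around a common centre and sweep close to a full turn.

        import numpy as np

        def is_roughly_circular(points, radius_tolerance=0.15):
            """Heuristic circularity test for a 3D trajectory (N x 3 array of positions)."""
            pts = np.asarray(points, dtype=float)
            pts = pts - pts.mean(axis=0)
            # Best-fit plane: the two principal directions with the largest variance.
            _, _, vt = np.linalg.svd(pts, full_matrices=False)
            planar = pts @ vt[:2].T                     # N x 2 coordinates in the plane
            radii = np.linalg.norm(planar, axis=1)
            radius_ok = radii.std() / radii.mean() < radius_tolerance
            angles = np.unwrap(np.arctan2(planar[:, 1], planar[:, 0]))
            swept = abs(angles[-1] - angles[0])         # total angle swept around the centre
            return radius_ok and swept > 1.8 * np.pi    # close to a full circle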
  • T. Pfeiffer and K. Essig, “Analysis of eye movements in situated natural interactions,” in Book of Abstracts of the 17th European Conference on Eye Movements, 2013, p. 275–275.
    [BibTeX] [Download PDF]
    @inproceedings{2578917,
    author = {Pfeiffer, Thies and Essig, Kai},
    booktitle = {Book of Abstracts of the 17th European Conference on Eye Movements},
    editor = {Holmqvist, Kenneth and Mulvey, F. and Johansson, Roger},
    keywords = {Gaze-based Interaction, Eyetracking},
    location = {Lund, Sweden},
    number = {3},
    pages = {275--275},
    publisher = {Journal of Eye Movement Research},
    title = {{Analysis of eye movements in situated natural interactions}},
    url = {https://nbn-resolving.org/urn:nbn:de:0070-pub-25789176, https://pub.uni-bielefeld.de/record/2578917},
    volume = {6},
    year = {2013},
    }
  • N. Pfeiffer-Leßmann, T. Pfeiffer, and I. Wachsmuth, “A model of joint attention for humans and machines,” in Book of Abstracts of the 17th European Conference on Eye Movements, 2013, p. 152–152.
    [BibTeX] [Download PDF]
    @inproceedings{2578929,
    author = {Pfeiffer-Leßmann, Nadine and Pfeiffer, Thies and Wachsmuth, Ipke},
    booktitle = {Book of Abstracts of the 17th European Conference on Eye Movements},
    editor = {Holmqvist, Kenneth and Mulvey, F. and Johansson, Roger},
    keywords = {Joint Attention},
    location = {Lund, Sweden},
    number = {3},
    pages = {152--152},
    publisher = {Journal of Eye Movement Research},
    title = {{A model of joint attention for humans and machines}},
    url = {https://nbn-resolving.org/urn:nbn:de:0070-pub-25789293, https://pub.uni-bielefeld.de/record/2578929},
    volume = {6},
    year = {2013},
    }
  • T. Pfeiffer, J. Pfeiffer, and M. Meißner, “Mobile recommendation agents making online use of visual attention information at the point of sale,” in Proceedings of the Gmunden Retreat on NeuroIS 2013, 2013, p. 3–3.
    [BibTeX] [Abstract] [Download PDF]
    We aim to utilize online information about visual attention for developing mobile recommendation agents (RAs) for use at the point of sale. Up to now, most RAs are focussed exclusively at personalization in an e-commerce setting. Very little is known, however, about mobile RAs that offer information and assistance at the point of sale based on individual-level feature based preference models (Murray and Häubl 2009). Current attempts provide information about products at the point of sale by manually scanning barcodes or using RFID (Kowatsch et al. 2011, Heijden 2005), e.g. using specific apps for smartphones. We argue that an online access to the current visual attention of the user offers a much larger potential. Integrating mobile eye tracking into ordinary glasses would yield a direct benefit of applying neuroscience methods in the user’s everyday life. First, learning from consumers’ attentional processes over time and adapting recommendations based on this learning allows us to provide very accurate and relevant recommendations, potentially increasing the perceived usefulness. Second, our proposed system needs little explicit user input (no scanning or navigation on screen) making it easy to use. Thus, instead of learning from click behaviour and past customer ratings, as it is the case in the e-commerce setting, the mobile RA learns from eye movements by participating online in every day decision processes. We argue that mobile RAs should be built based on current research in human judgment and decision making (Murray et al. 2010). In our project, we therefore follow a two-step approach: In the empirical basic research stream, we aim to understand the user’s interaction with the product shelf: the actions and patterns of user’s behaviour (eye movements, gestures, approaching a product closer) and their correspondence to the user’s informational needs. In the empirical system development stream, we create prototypes of mobile RAs and test experimentally the factors that influence the user’s adoption. For example, we suggest that a user’s involvement in the process, such as a need for exact nutritional information or for assistance (e.g., reading support for elderly) will influence the user’s intention to use such as system. The experiments are conducted both in our immersive virtual reality supermarket presented in a CAVE, where we can also easily display information to the user and track the eye movement in great accuracy, as well as in real-world supermarkets (see Figure 1), so that the findings can be better generalized to natural decision situations (Gidlöf et al. 2013). In a first pilot study with five randomly chosen participants in a supermarket, we evaluated which sort of mobile RAs consumers favour in order to get a first impression of the user’s acceptance of the technology. Figure 1 shows an excerpt of one consumer’s eye movements during a decision process. First results show long eye cascades and short fixations on many products in situations where users are uncertain and in need for support. Furthermore, we find a surprising acceptance of the technology itself throughout all ages (23 – 61 years). At the same time, consumers express serious fear of being manipulated by such a technology. For that reason, they strongly prefer the information to be provided by trusted third party or shared with family members and friends (see also Murray and Häubl 2009). Our pilot will be followed by a larger field experiment in March in order to learn more about factors that influence the user’s acceptance as well as the eye movement patterns that reflect typical phases of decision processes and indicate the need for support by a RA.
    @inproceedings{2578942,
    abstract = {We aim to utilize online information about visual attention for developing mobile recommendation agents (RAs) for use at the point of sale. Up to now, most RAs are focussed exclusively at personalization in an e-commerce setting. Very little is known, however, about mobile RAs that offer information and assistance at the point of sale based on individual-level feature based preference models (Murray and Häubl 2009). Current attempts provide information about products at the point of sale by manually scanning barcodes or using RFID (Kowatsch et al. 2011, Heijden 2005), e.g. using specific apps for smartphones. We argue that an online access to the current visual attention of the user offers a much larger potential. Integrating mobile eye tracking into ordinary glasses would yield a direct benefit of applying neuroscience methods in the user’s everyday life. First, learning from consumers’ attentional processes over time and adapting recommendations based on this learning allows us to provide very accurate and relevant recommendations, potentially increasing the perceived usefulness. Second, our proposed system needs little explicit user input (no scanning or navigation on screen) making it easy to use. Thus, instead of learning from click behaviour and past customer ratings, as it is the case in the e-commerce setting, the mobile RA learns from eye movements by participating online in every day decision processes. We argue that mobile RAs should be built based on current research in human judgment and decision making (Murray et al. 2010). In our project, we therefore follow a two-step approach: In the empirical basic research stream, we aim to understand the user’s interaction with the product shelf: the actions and patterns of user’s behaviour (eye movements, gestures, approaching a product closer) and their correspondence to the user’s informational needs. In the empirical system development stream, we create prototypes of mobile RAs and test experimentally the factors that influence the user’s adoption. For example, we suggest that a user’s involvement in the process, such as a need for exact nutritional information or for assistance (e.g., reading support for elderly) will influence the user’s intention to use such as system. The experiments are conducted both in our immersive virtual reality supermarket presented in a CAVE, where we can also easily display information to the user and track the eye movement in great accuracy, as well as in real-world supermarkets (see Figure 1), so that the findings can be better generalized to natural decision situations (Gidlöf et al. 2013). In a first pilot study with five randomly chosen participants in a supermarket, we evaluated which sort of mobile RAs consumers favour in order to get a first impression of the user’s acceptance of the technology. Figure 1 shows an excerpt of one consumer’s eye movements during a decision process. First results show long eye cascades and short fixations on many products in situations where users are uncertain and in need for support. Furthermore, we find a surprising acceptance of the technology itself throughout all ages (23 – 61 years). At the same time, consumers express serious fear of being manipulated by such a technology. For that reason, they strongly prefer the information to be provided by trusted third party or shared with family members and friends (see also Murray and Häubl 2009). Our pilot will be followed by a larger field experiment in March in order to learn more about factors that influence the user’s acceptance as well as the eye movement patterns that reflect typical phases of decision processes and indicate the need for support by a RA.},
    author = {Pfeiffer, Thies and Pfeiffer, Jella and Meißner, Martin},
    booktitle = {Proceedings of the Gmunden Retreat on NeuroIS 2013},
    editor = {Davis, Fred and Riedl, René and Jan, vom Brocke and Léger, Pierre-Majorique and Randolph, Adriane},
    keywords = {Mobile Cognitive Assistance Systems, Information Systems},
    location = {Gmunden},
    pages = {3--3},
    title = {{Mobile recommendation agents making online use of visual attention information at the point of sale}},
    url = {https://nbn-resolving.org/urn:nbn:de:0070-pub-25789420, https://pub.uni-bielefeld.de/record/2578942},
    year = {2013},
    }
  • T. Pfeiffer, “Documentation of gestures with motion capture,” in Handbücher zur Sprach- und Kommunikationswissenschaft / Handbooks of Linguistics and Communication Science, C. Müller, A. Cienki, E. Fricke, S. H. Ladewig, D. McNeill, and S. Teßendorf, Eds., Mouton de Gruyter, 2013, vol. 38/1, p. 857–868. doi:10.1515/9783110261318.857
    [BibTeX] [Abstract] [Download PDF]
    For the scientific observation of non-verbal communication behavior, video recordings are the state of the art. However, everyone who has conducted at least one video-based study has probably made the experience, that it is difficult to get the setup right, with respect to image resolution, illumination, perspective, occlusions, etc. And even more effort is needed for the annotation of the data. Frequently, even short interaction sequences may consume weeks or even months of rigorous full-time annotations. One way to overcome some of these issues is the use of motion capturing for assessing (not only) communicative body movements. There are several competing tracking technologies available, each with its own benefits and drawbacks. The article provides an overview of the basic types of tracking systems, presents representation formats and tools for the analysis of motion data, provides pointers to some studies using motion capture and discusses best practices for study design. However, the article also stresses that motion capturing still requires some expertise and is only starting to become mobile and reasonably priced – arguments not to be neglected.
    @inbook{2564547,
    abstract = {For the scientific observation of non-verbal communication behavior, video recordings are the state of the art. However, everyone who has conducted at least one video-based study has probably made the experience, that it is difficult to get the setup right, with respect to image resolution, illumination, perspective, occlusions, etc. And even more effort is needed for the annotation of the data. Frequently, even short interaction sequences may consume weeks or even months of rigorous full-time annotations. One way to overcome some of these issues is the use of motion capturing for assessing (not only) communicative body movements. There are several competing tracking technologies available, each with its own benefits and drawbacks. The article provides an overview of the basic types of tracking systems, presents representation formats and tools for the analysis of motion data, provides pointers to some studies using motion capture and discusses best practices for study design. However, the article also stresses that motion capturing still requires some expertise and is only starting to become mobile and reasonably priced – arguments not to be neglected.},
    author = {Pfeiffer, Thies},
    booktitle = {Handbücher zur Sprach- und Kommunikationswissenschaft / Handbooks of Linguistics and Communication Science},
    editor = {Müller, Cornelia and Cienki, Alan and Fricke, Ellen and Ladewig, Silva H. and McNeill, David and Teßendorf, Sedinha},
    isbn = {9783110261318},
    keywords = {Multimodal Communication, Motion Capturing, Gesture Annotation, Multimodal Corpora},
    pages = {857--868},
    publisher = {Mouton de Gruyter},
    title = {{Documentation of gestures with motion capture}},
    url = {https://pub.uni-bielefeld.de/record/2564547},
    doi = {10.1515/9783110261318.857},
    volume = {38/1},
    year = {2013},
    }
  • P. Renner and T. Pfeiffer, “Studying joint attention and hand-eye coordination in human-human interaction: A model-based approach to an automatic mapping of fixations to target objects,” in Proceedings of the First International Workshop on Solutions for Automatic Gaze-Data Analysis 2013 (SAGA 2013), 2013, p. 28–31. doi:10.2390/biecoll-saga2013_7
    [BibTeX] [Abstract] [Download PDF]
    If robots are to successfully interact in a space shared with humans, they should learn the communicative signals humans use in face-to-face interactions. For example, a robot can consider human presence for grasping decisions using a representation of peripersonal space (Holthaus &Wachsmuth, 2012). During interaction, the eye gaze of the interlocutor plays an important role. Using mechanisms of joint attention, gaze can be used to ground objects during interaction and knowledge about the current goals of the interlocutor are revealed (Imai et al., 2003). Eye movements are also known to precede hand pointing or grasping (Prablanc et al., 1979), which could help robots to predict areas with human activities, e.g. for security reasons. We aim to study patterns of gaze and pointing in interaction space. The human participants’ task is to jointly plan routes on a floor plan. For analysis, it is necessary to find fixations on specific rooms and floors as well as on the interlocutor’s face or hands. Therefore, a model-based approach for automating this mapping was developed. This approach was evaluated using a highly accurate outside-in tracking system as baseline and a newly developed low-cost inside-out marker-based tracking system making use of the eye tracker’s scene camera.
    @inproceedings{2632827,
    abstract = {If robots are to successfully interact in a space shared with humans, they should learn the communicative signals humans use in face-to-face interactions. For example, a robot can consider human presence for grasping decisions using a representation of peripersonal space (Holthaus &Wachsmuth, 2012). During interaction, the eye gaze of the interlocutor plays an important role. Using mechanisms of joint attention, gaze can be used to ground objects during interaction and knowledge about the current goals of the interlocutor are revealed (Imai et al., 2003). Eye movements are also known to precede hand pointing or grasping (Prablanc et al., 1979), which could help robots to predict areas with human activities, e.g. for security reasons. We aim to study patterns of gaze and pointing in interaction space. The human participants’ task is to jointly plan routes on a floor plan. For analysis, it is necessary to find fixations on specific rooms and floors as well as on the interlocutor’s face or hands. Therefore, a model-based approach for automating this mapping was developed. This approach was evaluated using a highly accurate outside-in tracking system as baseline and a newly developed low-cost inside-out marker-based tracking system making use of the eye tracker’s scene camera.},
    author = {Renner, Patrick and Pfeiffer, Thies},
    booktitle = {Proceedings of the First International Workshop on Solutions for Automatic Gaze-Data Analysis 2013 (SAGA 2013)},
    editor = {Pfeiffer, Thies and Essig, Kai},
    keywords = {Gaze-based Interaction},
    location = {Bielefeld},
    pages = {28--31},
    publisher = {Center of Excellence Cognitive Interaction Technology},
    title = {{Studying joint attention and hand-eye coordination in human-human interaction: A model-based approach to an automatic mapping of fixations to target objects}},
    url = {https://pub.uni-bielefeld.de/record/2632827},
    doi = {10.2390/biecoll-saga2013_7},
    year = {2013},
    }
  • M. Orlikowski, R. Bongartz, A. Reddersen, J. Reuter, and T. Pfeiffer, “Springen in der Virtuellen Realität: Analyse von Navigationsformen zur Überwindung von Höhenunterschieden am Beispiel von MinecraftVR,” in Virtuelle und Erweiterte Realität – 10. Workshop der GI-Fachgruppe VR/AR, 2013, p. 193–196.
    [BibTeX] [Abstract] [Download PDF]
    The paper reviews the state of research on overcoming height differences in virtual reality (VR) and discusses, in particular, their use from an egocentric perspective. Using a VR version of the computer game Minecraft as a concrete example, it is shown that existing approaches do not meet the requirements of such applications.
    @inproceedings{2623948,
    abstract = {Das Paper arbeitet den Forschungsstand zur Überwindung von Höhenunterschieden in der Virtuellen Realität (VR) auf und diskutiert insbesondere deren Einsatz in egozentrischer Perspektive. Am konkreten Beispiel einer VR-Version des Computerspiels Minecraft wird herausgestellt, dass bestehende Ansätze den Anforderungen dieser Anwendungen nicht genügen.},
    author = {Orlikowski, Matthias and Bongartz, Richard and Reddersen, Andrea and Reuter, Jana and Pfeiffer, Thies},
    booktitle = {Virtuelle und Erweiterte Realität - 10. Workshop der GI-Fachgruppe VR/AR},
    editor = {Latoschik, Marc Erich and Staadt, Oliver and Steinicke, Frank},
    isbn = {978-3-8440-2211-7},
    keywords = {Virtual Reality, Human-Computer Interaction},
    pages = {193--196},
    publisher = {Shaker Verlag},
    title = {{Springen in der Virtuellen Realität: Analyse von Navigationsformen zur Überwindung von Höhenunterschieden am Beispiel von MinecraftVR}},
    url = {https://nbn-resolving.org/urn:nbn:de:0070-pub-26239489, https://pub.uni-bielefeld.de/record/2623948},
    year = {2013},
    }
  • T. Pfeiffer and I. Wachsmuth, “Multimodale blickbasierte Interaktion,” at – Automatisierungstechnik, vol. 61, iss. 11, p. 770–776, 2013. doi:10.1524/auto.2013.1058
    [BibTeX] [Abstract] [Download PDF]
    Unsere Augen sind für die Wahrnehmung unserer Umwelt wichtig und geben gleichzeitig wertvolle Informationen über unsere Aufmerksamkeit und damit unsere Denkprozesse preis. Wir Menschen nutzen dies ganz natürlich in der alltäglichen Kommunikation. Mit einer echtzeitfähigen Blickbewegungsmessung ausgestattet können auch technische Systeme den Nutzern wichtige Informationen von den Augen ablesen. Der Artikel beschreibt verschiedene Ansätze wie in der Konstruktion, der Instruktion von Robotern oder der Medizin Blickbewegungen nutzbringend eingesetzt werden können. / We use our eyes to perceive our everyday environment. In doing so, we also reveal our current focus of attention and thus allow others to draw insights regarding our internal cognition processes. We humans make use of this dual function of the eyes quite naturally in everyday communication. Using realtime eye tracking, technical systems can learn to read relevant information from their users’ eyes. This article describes approaches to make use of gaze information in construction tasks, the instruction of robots or in medical applications.
    @article{2624384,
    abstract = {Unsere Augen sind für die Wahrnehmung unserer Umwelt wichtig und geben gleichzeitig wertvolle Informationen über unsere Aufmerksamkeit und damit unsere Denkprozesse preis. Wir Menschen nutzen dies ganz natürlich in der alltäglichen Kommunikation. Mit einer echtzeitfähigen Blickbewegungsmessung ausgestattet können auch technische Systeme den Nutzern wichtige Informationen von den Augen ablesen. Der Artikel beschreibt verschiedene Ansätze wie in der Konstruktion, der Instruktion von Robotern oder der Medizin Blickbewegungen nutzbringend eingesetzt werden können. / We use our eyes to perceive our everyday environment. In doing so, we also reveal our current focus of attention and thus allow others to draw insights regarding our internal cognition processes. We humans make use of this dual function of the eyes quite naturally in everyday communication. Using realtime eye tracking, technical systems can learn to read relevant information from their users' eyes. This article describes approaches to make use of gaze information in construction tasks, the instruction of robots or in medical applications.},
    author = {Pfeiffer, Thies and Wachsmuth, Ipke},
    issn = {2196-677X},
    journal = {at - Automatisierungstechnik},
    keywords = {Visual Attention, Multimodal Interaction, Human-Machine Interaction, Eye Tracking, Attentive Interfaces, Visuelle Aufmerksamkeit, Multimodale Interaktion, Mensch-Maschine-Interaktion, Aufmerksame Benutzerschnittstellen, Blickbewegungsmessung, Gaze-based Interaction},
    number = {11},
    pages = {770--776},
    publisher = {Walter de Gruyter GmbH},
    title = {{Multimodale blickbasierte Interaktion}},
    url = {https://pub.uni-bielefeld.de/record/2624384},
    doi = {10.1524/auto.2013.1058},
    volume = {61},
    year = {2013},
    }
  • T. Pfeiffer, “Visuelle Aufmerksamkeit in Virtueller und Erweiterter Realität: Integration und Nutzung im Szenengraphen,” in 11. Paderborner Workshop Augmented and Virtual Reality in der Produktentstehung, 2013, p. 295–307.
    [BibTeX] [Abstract] [Download PDF]
    The synthetic stimulation of visual perception has always been a focus of virtual and augmented reality, and determining the user’s perspective on the three-dimensional world as precisely as possible is one of its core tasks. So far, however, there are only a few exemplary approaches in which the user’s gaze direction, or even the distribution of visual attention in space, is determined more precisely. Making this information available to the application logic would allow existing visualization techniques to be further optimized and new ones to be developed. Beyond that, gaze becomes accessible as an interaction modality. Building on many years of experience with gaze interaction in virtual reality, this article describes components for a scene graph with which gaze-based interactions can be realized easily and along familiar principles.
    @inproceedings{2559884,
    abstract = {Die synthetische Stimulation der visuellen Wahrnehmung ist seit jeher im Fokus von Virtueller und Erweiterter Realität und die möglichst exakte Bestimmung der Nutzerperspektive auf die dreidimensionale Welt eine der Kernaufgaben. Bislang gibt es jedoch nur einige exemplarische Ansätze, in denen die Blickrichtung des Nutzers oder gar die Verteilung der visuellen Aufmerksamkeit im Raum genauer bestimmt wird. Macht man diese Informationen der Anwendungslogik verfügbar, könnten existierende Verfahren zur Visualisierung weiter optimiert und neue Verfahren entwickelt werden. Darüber hinaus erschließen sich damit Blicke als Interaktionsmodalität. Aufbauend auf langjährigen Erfahrungen mit der Blickinteraktion in der Virtuellen Realität beschreibt der Artikel Komponenten für einen Szenengraph, mit dem sich blickbasierte Interaktionen leicht und entlang gewohnter Prinzipien realisieren lassen.},
    author = {Pfeiffer, Thies},
    booktitle = {11. Paderborner Workshop Augmented and Virtual Reality in der Produktentstehung},
    editor = {Gausemeier, Jürgen and Grafe, Michael and Meyer auf der Heide, Friedhelm},
    isbn = {978-3-942647-30-4},
    keywords = {Virtual Reality, Human-Machine Interaction, Visual Attention, Gaze-based Interaction},
    pages = {295--307},
    publisher = {Heinz Nixdorf Institut, Universität Paderborn},
    title = {{Visuelle Aufmerksamkeit in Virtueller und Erweiterter Realität: Integration und Nutzung im Szenengraphen}},
    url = {https://pub.uni-bielefeld.de/record/2559884},
    volume = {311},
    year = {2013},
    }
  • K. Essig, T. Pfeiffer, J. Maycock, and T. Schack, “Attentive Systems: Modern Analysis Techniques for Gaze Movements in Sport Science,” in ISSP 13th World Congress of Sport Psychology – Harmony and Excellence in Sport and Life, 2013, p. 43–46.
    [BibTeX] [Download PDF]
    @inproceedings{2658912,
    author = {Essig, Kai and Pfeiffer, Thies and Maycock, Jonathan and Schack, Thomas},
    booktitle = {ISSP 13th World Congress of Sport Psychology - Harmony and Excellence in Sport and Life},
    editor = {Chi, J.},
    location = {Beijing Sports University, Beijing, China},
    pages = {43--46},
    title = {{Attentive Systems: Modern Analysis Techniques for Gaze Movements in Sport Science}},
    url = {https://pub.uni-bielefeld.de/record/2658912},
    year = {2013},
    }
  • T. Pfeiffer, “Measuring and visualizing attention in space with 3D attention volumes,” in Proceedings of the Symposium on Eye Tracking Research and Applications, 2012, p. 29–36. doi:10.1145/2168556.2168560
    [BibTeX] [Abstract] [Download PDF]
    Knowledge about the point of regard is a major key for the analysis of visual attention in areas such as psycholinguistics, psychology, neurobiology, computer science and human factors. Eye tracking is thus an established methodology in these areas, e.g. for investigating search processes, human communication behavior, product design or human-computer interaction. As eye tracking is a process which depends heavily on technology, the progress of gaze use in these scientific areas is tied to the advancements of eye-tracking technology. It is thus not surprising that in the last decades, research was primarily based on 2D stimuli and rather static scenarios, regarding both content and observer. Only with the advancements in mobile and robust eye-tracking systems, the observer is freed to physically interact in a 3D target scenario. Measuring and analyzing the point of regards in 3D space, however, requires additional techniques for data acquisition and scientific visualization. We describe the process for measuring the 3D point of regard and provide our own implementation of this process, which extends recent approaches of combining eye tracking with motion capturing, including holistic estimations of the 3D point of regard. In addition, we present a refined version of 3D attention volumes for representing and visualizing attention in 3D space.
    @inproceedings{2490021,
    abstract = {Knowledge about the point of regard is a major key for the analysis of visual attention in areas such as psycholinguistics, psychology, neurobiology, computer science and human factors. Eye tracking is thus an established methodology in these areas, e.g. for investigating search processes, human communication behavior, product design or human-computer interaction. As eye tracking is a process which depends heavily on technology, the progress of gaze use in these scientific areas is tied to the advancements of eye-tracking technology. It is thus not surprising that in the last decades, research was primarily based on 2D stimuli and rather static scenarios, regarding both content and observer. Only with the advancements in mobile and robust eye-tracking systems, the observer is freed to physically interact in a 3D target scenario. Measuring and analyzing the point of regards in 3D space, however, requires additional techniques for data acquisition and scientific visualization. We describe the process for measuring the 3D point of regard and provide our own implementation of this process, which extends recent approaches of combining eye tracking with motion capturing, including holistic estimations of the 3D point of regard. In addition, we present a refined version of 3D attention volumes for representing and visualizing attention in 3D space.},
    author = {Pfeiffer, Thies},
    booktitle = {Proceedings of the Symposium on Eye Tracking Research and Applications},
    editor = {Spencer, Stephen N.},
    isbn = {978-1-4503-1225-7},
    keywords = {gaze tracking, visualization, motion tracking, Gaze-based Interaction, visual attention, 3d},
    location = {Santa Barbara, CA, USA},
    pages = {29--36},
    publisher = {Association for Computing Machinery (ACM)},
    title = {{Measuring and visualizing attention in space with 3D attention volumes}},
    url = {https://nbn-resolving.org/urn:nbn:de:0070-pub-24900216, https://pub.uni-bielefeld.de/record/2490021},
    doi = {10.1145/2168556.2168560},
    year = {2012},
    }
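    A minimal sketch, assuming the 3D points of regard have already been estimated (which is the actual measurement problem addressed in this paper), of how such samples could be accumulated into a volumetric attention map: each fixation adds a Gaussian-weighted contribution to a voxel grid, which can then be rendered, for example as iso-surfaces, as a 3D attention volume. Grid extents, resolution and sigma are illustrative parameters.

        import numpy as np

        def attention_volume(points_of_regard, grid_min, grid_max, resolution=64, sigma=0.05):
            """Accumulate 3D points of regard into a voxel grid of attention weights.

            points_of_regard: N x 3 gaze intersection points in world coordinates.
            grid_min, grid_max: opposite corners of the axis-aligned volume of interest.
            sigma: spatial spread (world units) of each sample's Gaussian contribution.
            """
            grid_min = np.asarray(grid_min, dtype=float)
            grid_max = np.asarray(grid_max, dtype=float)
            axes = [np.linspace(grid_min[i], grid_max[i], resolution) for i in range(3)]
            xs, ys, zs = np.meshgrid(*axes, indexing="ij")
            centers = np.stack([xs, ys, zs], axis=-1)            # (res, res, res, 3) voxel centres
            volume = np.zeros((resolution,) * 3)
            for p in np.asarray(points_of_regard, dtype=float):
                sq_dist = np.sum((centers - p) ** 2, axis=-1)
                volume += np.exp(-sq_dist / (2.0 * sigma ** 2))  # Gaussian splat per sample
            return volume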
  • S. Kousidis, T. Pfeiffer, Z. Malisz, P. Wagner, and D. Schlangen, “Evaluating a minimally invasive laboratory architecture for recording multimodal conversational data,” in Proceedings of the Interdisciplinary Workshop on Feedback Behaviors in Dialog, INTERSPEECH2012 Satellite Workshop, 2012, p. 39–42.
    [BibTeX] [Abstract] [Download PDF]
    This paper presents ongoing work on the design, deployment and evaluation of a multimodal data acquisition architecture which utilises minimally invasive motion, head, eye and gaze tracking alongside high-quality audiovisual recording of human interactions. The different data streams are centrally collected and visualised at a single point and in real time by means of integration in a virtual reality (VR) environment. The overall aim of this endeavour is the implementation of a multimodal data acquisition facility for the purpose of studying non-verbal phenomena such as feedback gestures, hand and pointing gestures and multi-modal alignment. In the first part of this work that is described here, a series of tests were performed in order to evaluate the feasibility of tracking feedback head gestures using the proposed architecture.
    @inproceedings{2510937,
    abstract = {This paper presents ongoing work on the design, deployment and evaluation of a multimodal data acquisition architecture which utilises minimally invasive motion, head, eye and gaze tracking alongside high-quality audiovisual recording of human interactions. The different data streams are centrally collected and visualised at a single point and in real time by means of integration in a virtual reality (VR) environment. The overall aim of this endeavour is the implementation of a multimodal data acquisition facility for the purpose of studying non-verbal phenomena such as feedback gestures, hand and pointing gestures and multi-modal alignment. In the first part of this work that is described here, a series of tests were performed in order to evaluate the feasibility of tracking feedback head gestures using the proposed architecture.},
    author = {Kousidis, Spyridon and Pfeiffer, Thies and Malisz, Zofia and Wagner, Petra and Schlangen, David},
    booktitle = {Proceedings of the Interdisciplinary Workshop on Feedback Behaviors in Dialog, INTERSPEECH2012 Satellite Workshop},
    keywords = {Multimodal Communication},
    location = {Stevenson, WA},
    pages = {39--42},
    title = {{Evaluating a minimally invasive laboratory architecture for recording multimodal conversational data.}},
    url = {https://nbn-resolving.org/urn:nbn:de:0070-pub-25109377, https://pub.uni-bielefeld.de/record/2510937},
    year = {2012},
    }
  • N. Pfeiffer-Leßmann, T. Pfeiffer, and I. Wachsmuth, “An operational model of joint attention – Timing of the initiate-act in interactions with a virtual human,” in Proceedings of KogWis 2012, 2012, p. 96–97.
    [BibTeX] [Download PDF]
    @inproceedings{2509771,
    author = {Pfeiffer-Leßmann, Nadine and Pfeiffer, Thies and Wachsmuth, Ipke},
    booktitle = {Proceedings of KogWis 2012},
    editor = {Dörner, Dietrich and Goebel, Rainer and Oaksford, Mike and Pauen, Michael and Stern, Elsbeth},
    isbn = {978-3-86309-100-2},
    keywords = {gaze-based interaction, cognitive modeling, joint attention},
    location = {Bamberg, Germany},
    pages = {96--97},
    publisher = {University of Bamberg Press},
    title = {{An operational model of joint attention - Timing of the initiate-act in interactions with a virtual human}},
    url = {https://nbn-resolving.org/urn:nbn:de:0070-pub-25097711, https://pub.uni-bielefeld.de/record/2509771},
    year = {2012},
    }
  • T. Pfeiffer, “Using virtual reality technology in linguistic research,” in Virtual Reality Short Papers and Posters (VRW), 2012, p. 83–84. doi:10.1109/VR.2012.6180893
    [BibTeX] [Abstract] [Download PDF]
    In this paper, we argue that empirical research on genuine linguistic topics, such as on the production of multimodal utterances in the speaker and the interpretation of the multimodal signals in the interlocutor, can greatly benefit from the use of virtual reality technologies. Established methodologies for research on multimodal interactions, such as the presentation of pre-recorded 2D videos of interaction partners as stimuli and the recording of interaction partners using multiple 2D video cameras have crucial shortcomings regarding ecological validity and the precision of measurements that can be achieved. In addition, these methodologies enforce restrictions on the researcher. The stimuli, for example, are not very interactive and thus not as close to natural interactions as ultimately desired. Also, the analysis of 2D video recordings requires intensive manual annotations, often frame-by-frame, which negatively affects the feasible number of interactions which can be included in a study. The technologies bundled under the term virtual reality offer exciting possibilities for the linguistic researcher: gestures can be tracked without being restricted to fixed perspectives, annotation can be done on large corpora (semi-)automatically and virtual characters can be used to produce specific linguistic stimuli in a repetitive but interactive fashion. Moreover, immersive 3D visualizations can be used to recreate a simulation of the recorded interactions by fusing the raw data with theoretic models to support an iterative data-driven development of linguistic theories. This paper discusses the potential of virtual reality technologies for linguistic research and provides examples for the application of the methodology.
    @inproceedings{2476751,
    abstract = {In this paper, we argue that empirical research on genuine linguistic topics, such as on the production of multimodal utterances in the speaker and the interpretation of the multimodal signals in the interlocutor, can greatly benefit from the use of virtual reality technologies. Established methodologies for research on multimodal interactions, such as the presentation of pre-recorded 2D videos of interaction partners as stimuli and the recording of interaction partners using multiple 2D video cameras have crucial shortcomings regarding ecological validity and the precision of measurements that can be achieved. In addition, these methodologies enforce restrictions on the researcher. The stimuli, for example, are not very interactive and thus not as close to natural interactions as ultimately desired. Also, the analysis of 2D video recordings requires intensive manual annotations, often frame-by-frame, which negatively affects the feasible number of interactions which can be included in a study. The technologies bundled under the term virtual reality offer exciting possibilities for the linguistic researcher: gestures can be tracked without being restricted to fixed perspectives, annotation can be done on large corpora (semi-)automatically and virtual characters can be used to produce specific linguistic stimuli in a repetitive but interactive fashion. Moreover, immersive 3D visualizations can be used to recreate a simulation of the recorded interactions by fusing the raw data with theoretic models to support an iterative data-driven development of linguistic theories. This paper discusses the potential of virtual reality technologies for linguistic research and provides examples for the application of the methodology.},
    author = {Pfeiffer, Thies},
    booktitle = {Virtual Reality Short Papers and Posters (VRW)},
    editor = {Coquillart, Sabine and Feiner, Steven and Kiyokawa, Kiyoshi},
    isbn = {978-1-4673-1247-9},
    keywords = {Linguistics, Motion Capturing, Intelligent Virtual Agents, Multimodal Communication, Virtual Reality},
    location = {Costa Mesa, CA, USA},
    pages = {83--84},
    publisher = {Institute of Electrical and Electronics Engineers (IEEE)},
    title = {{Using virtual reality technology in linguistic research}},
    url = {https://pub.uni-bielefeld.de/record/2476751},
    doi = {10.1109/VR.2012.6180893},
    year = {2012},
    }
  • T. Pfeiffer, “Interaction between Speech and Gesture: Strategies for Pointing to Distant Objects,” in Gestures and Sign Language in Human-Computer Interaction and Embodied Communication, 9th International Gesture Workshop, GW 2011, 2012, p. 238–249. doi:10.1007/978-3-642-34182-3_22
    [BibTeX] [Abstract] [Download PDF]
    Referring to objects using multimodal deictic expressions is an important form of communication. This work addresses the question on how pragmatic factors affect content distribution between the modalities speech and gesture. This is done by analyzing a study on deictic pointing gestures to objects under two conditions: with and without speech. The relevant pragmatic factor was the distance to the referent object. As one main result two strategies were identified which were used by participants to adapt their gestures to the condition. This knowledge can be used, e.g., to improve the naturalness of pointing gestures employed by embodied conversational agents.
    @inproceedings{2497075,
    abstract = {Referring to objects using multimodal deictic expressions is an important form of communication. This work addresses the question on how pragmatic factors affect content distribution between the modalities speech and gesture. This is done by analyzing a study on deictic pointing gestures to objects under two conditions: with and without speech. The relevant pragmatic factor was the distance to the referent object. As one main result two strategies were identified which were used by participants to adapt their gestures to the condition. This knowledge can be used, e.g., to improve the naturalness of pointing gestures employed by embodied conversational agents.},
    author = {Pfeiffer, Thies},
    booktitle = {Gestures and Sign Language in Human-Computer Interaction and Embodied Communication, 9th International Gesture Workshop, GW 2011},
    editor = {Efthimiou, Eleni and Kouroupetroglou, Georgios and Fotinea, Stavroula-Evita},
    isbn = {978-3-642-34181-6},
    issn = {0302-9743},
    keywords = {Multimodal Communication},
    location = {Athens, Greece},
    pages = {238--249},
    publisher = {Springer-Verlag GmbH},
    title = {{Interaction between Speech and Gesture: Strategies for Pointing to Distant Objects}},
    url = {https://nbn-resolving.org/urn:nbn:de:0070-pub-24970755, https://pub.uni-bielefeld.de/record/2497075},
    doi = {10.1007/978-3-642-34182-3_22},
    year = {2012},
    }
  • C. Liguda and T. Pfeiffer, “Modeling math word problems with augmented semantic networks,” in Natural Language Processing and Information Systems/17th International Conference on Applications of Natural Language to Information Systems, 2012, p. 247–252. doi:10.1007/978-3-642-31178-9_29
    [BibTeX] [Abstract] [Download PDF]
    Modern computer-algebra programs are able to solve a wide range of mathematical calculations. However, they are not able to understand and solve math text problems in which the equation is described in terms of natural language instead of mathematical formulas. Interestingly, there are only a few known approaches to solve math word problems algorithmically, and most of them employ models based on frames. To overcome problems with existing models, we propose a model based on augmented semantic networks to represent the mathematical structure behind word problems. This model is implemented in our Solver for Mathematical Text Problems (SoMaTePs) [1], where the math problem is extracted via natural language processing, transformed into mathematical equations and solved by a state-of-the-art computer-algebra program. SoMaTePs is able to understand and solve mathematical text problems from German primary school books and could be extended to other languages by exchanging the language model in the natural language processing module.
    @inproceedings{2497094,
    abstract = {Modern computer-algebra programs are able to solve a wide
    range of mathematical calculations. However, they are not able to understand and solve math text problems in which the equation is described in terms of natural language instead of mathematical formulas. Interestingly, there are only few known approaches to solve math word problems algorithmically and most of employ models based on frames. To overcome problems with existing models, we propose a model based on augmented semantic networks to represent the mathematical structure behind word problems. This model is implemented in our Solver for Mathematical Text Problems (SoMaTePs) [1], where the math problem is extracted via natural language processing, transformed in mathematical equations and solved by a state-of-the-art computer-algebra program. SoMaTePs is able to understand and solve mathematical text problems from German primary school books and could be extended to other languages by exchanging the language model in the natural language processing module.},
    author = {Liguda, Christian and Pfeiffer, Thies},
    booktitle = {Natural Language Processing and Information Systems/17th International Conference on Applications of Natural Language to Information Systems},
    editor = {Bouma, Gosse and Ittoo, Ashwin and Métais, Elisabeth and Wortmann, Hans},
    isbn = {978-3-642-31177-2},
    keywords = {Artificial Intelligence},
    location = {Groningen, Netherlands},
    pages = {247--252},
    publisher = {Springer},
    title = {{Modeling math word problems with augmented semantic networks}},
    url = {https://pub.uni-bielefeld.de/record/2497094},
    doi = {10.1007/978-3-642-31178-9_29},
    volume = {7337},
    year = {2012},
    }
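    As a toy illustration of the modelling idea sketched in the abstract above (and not the actual SoMaTePs implementation), a word problem such as "Anna has 5 apples and gets 3 more" can be represented as a small semantic network of quantity nodes and a relation, from which an equation is generated and handed to a computer-algebra backend. The node and relation names below are invented for the example.

        import sympy

        # quantity nodes of the (invented) semantic network; None marks the unknown
        network = {
            "apples_start": {"value": 5},
            "apples_given": {"value": 3},
            "apples_total": {"value": None},
        }
        # one "combine" relation linking the two known quantities to the unknown
        relations = [("combine", "apples_start", "apples_given", "apples_total")]

        x = sympy.Symbol("x")
        for relation, a, b, target in relations:
            if relation == "combine":
                equation = sympy.Eq(network[a]["value"] + network[b]["value"], x)
                network[target]["value"] = sympy.solve(equation, x)[0]

        print(network["apples_total"]["value"])  # prints 8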
  • T. Pfeiffer, “3D Attention Volumes for Usability Studies in Virtual Reality,” in Proceedings of the IEEE Virtual Reality 2012, 2012, p. 117–118.
    [BibTeX] [Abstract] [Download PDF]
    The time course and the distribution of visual attention are powerful measures for the evaluation of the usability of products. Eye tracking is thus an established method for evaluating websites, software ergonomy or modern cockpits for cars or airplanes. In most cases, however, the point of regard is measured on 2D products. This article presents work that uses an approach to measure the point of regard in 3D to generate 3D Attention Volumes as a qualitative 3D visualization of the distribution of visual attention. This visualization can be used to evaluate the design of virtual products in an immersive 3D setting, similar as heatmaps are used to assess the design of websites.
    @inproceedings{2476755,
    abstract = {The time course and the distribution of visual attention are powerful measures for the evaluation of the usability of products. Eye tracking is thus an established method for evaluating websites, software ergonomy or modern cockpits for cars or airplanes. In most cases, however, the point of regard is measured on 2D products. This article presents work that uses an approach to measure the point of regard in 3D to generate 3D Attention Volumes as a qualitative 3D visualization of the distribution of visual attention. This visualization can be used to evaluate the design of virtual products in an immersive 3D setting, similar as heatmaps are used to assess the design of websites.},
    author = {Pfeiffer, Thies},
    booktitle = {Proceedings of the IEEE Virtual Reality 2012},
    isbn = {978-1-4673-1246-2},
    keywords = {Gaze-based Interaction},
    location = {Orange County, CA, USA},
    pages = {117--118},
    publisher = {IEEE},
    title = {{3D Attention Volumes for Usability Studies in Virtual Reality}},
    url = {https://pub.uni-bielefeld.de/record/2476755},
    year = {2012},
    }
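    The core computation behind a 3D Attention Volume can be pictured as binning measured 3D points of regard into a voxel grid, analogous to a 2D heatmap. The sketch below is only an illustration of that idea; the grid resolution, bounds and simple count-based weighting are arbitrary choices and not taken from the publications above.

        import numpy as np

        def attention_volume(points_of_regard, grid_shape=(32, 32, 32),
                             bounds_min=(-1.0, -1.0, -1.0), bounds_max=(1.0, 1.0, 1.0)):
            """Bin 3D points of regard into a voxel grid of visit counts."""
            volume = np.zeros(grid_shape)
            lo, hi = np.asarray(bounds_min), np.asarray(bounds_max)
            for point in points_of_regard:
                rel = (np.asarray(point) - lo) / (hi - lo)   # normalise to [0, 1)
                if np.all((rel >= 0) & (rel < 1)):
                    index = tuple((rel * np.asarray(grid_shape)).astype(int))
                    volume[index] += 1.0
            return volume

        # synthetic gaze samples clustered around the origin of the tracked space
        samples = np.random.normal(loc=0.0, scale=0.2, size=(1000, 3))
        volume = attention_volume(samples)
        print(volume.sum(), np.unravel_index(volume.argmax(), volume.shape))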
  • T. Pfeiffer, “Towards a Linguistically Motivated Model for Selection in Virtual Reality,” in Proceedings of the IEEE Virtual Reality 2012, 2012, p. 89–90.
    [BibTeX] [Abstract] [Download PDF]
    Swiftness and robustness of natural communication is tied to the redundancy and complementarity found in our multimodal communication. Swiftness and robustness of human-computer interaction (HCI) is also a key to the success of a virtual reality (VR) environment. The interpretation of multimodal interaction signals has therefore been considered a high goal in VR research, e.g. following the visions of Bolt’s put-that-there in 1980. It is our impression that research on user interfaces for VR systems has been focused primarily on finding and evaluating technical solutions and thus followed a technology-oriented approach to HCI. In this article, we argue to complement this by a human-oriented approach based on the observation of human-human interaction. The aim is to find models of human-human interaction that can be used to create user interfaces that feel natural. As the field of Linguistics is dedicated to the observation and modeling of human-human communication, it could be worthwhile to approach natural user interfaces from a linguistic perspective. We expect at least two benefits from following this approach. First, the human-oriented approach substantiates our understanding of natural human interactions. Second, it brings about a new perspective by taking the interaction capabilities of a human addressee into account, which are not often explicitly considered or compared with that of the system. As a consequence of following both approaches to create user interfaces, we expect more general models of human interaction to emerge.
    @inproceedings{2476753,
    abstract = {Swiftness and robustness of natural communication is tied to the redundancy and complementarity found in our multimodal communication. Swiftness and robustness of human-computer interaction (HCI) is also a key to the success of a virtual reality (VR) environment. The interpretation of multimodal interaction signals has therefore been considered a high goal in VR research, e.g. following the visions of Bolt's put-that-there in 1980. It is our impression that research on user interfaces for VR systems has been focused primarily on finding and evaluating technical solutions and thus followed a technology-oriented approach to HCI. In this article, we argue to complement this by a human-oriented approach based on the observation of human-human interaction. The aim is to find models of human-human interaction that can be used to create user interfaces that feel natural. As the field of Linguistics is dedicated to the observation and modeling of human-human communication, it could be worthwhile to approach natural user interfaces from a linguistic perspective. We expect at least two benefits from following this approach. First, the human-oriented approach substantiates our understanding of natural human interactions. Second, it brings about a new perspective by taking the interaction capabilities of a human addressee into account, which are not often explicitly considered or compared with that of the system. As a consequence of following both approaches to create user interfaces, we expect more general models of human interaction to emerge.},
    author = {Pfeiffer, Thies},
    booktitle = {Proceedings of the IEEE Virtual Reality 2012},
    isbn = {978-1-4673-1246-2},
    keywords = {Linguistics, Virtual Reality, Human-Computer Interaction, Deixis, Multimodal Communication},
    location = {Orange County, CA, USA},
    pages = {89--90},
    publisher = {IEEE},
    title = {{Towards a Linguistically Motivated Model for Selection in Virtual Reality}},
    url = {https://pub.uni-bielefeld.de/record/2476753},
    year = {2012},
    }
  • A. Lücking and T. Pfeiffer, “Framing Multimodal Technical Communication. With Focal Points in Speech-Gesture-Integration and Gaze Recognition,” in Handbook of Technical Communication, A. Mehler and L. Romary, Eds., Mouton de Gruyter, 2012, vol. 8, p. 591–644. doi:10.1515/9783110224948.591
    [BibTeX] [Download PDF]
    @inbook{2535691,
    author = {Lücking, Andy and Pfeiffer, Thies},
    booktitle = {Handbook of Technical Communication},
    editor = {Mehler, Alexander and Romary, Laurent},
    isbn = {978-3-11-018834-9},
    keywords = {Multimodal Communication},
    pages = {591--644},
    publisher = {Mouton de Gruyter},
    title = {{Framing Multimodal Technical Communication. With Focal Points in Speech-Gesture-Integration and Gaze Recognition}},
    url = {https://pub.uni-bielefeld.de/record/2535691},
    doi = {10.1515/9783110224948.591},
    volume = {8},
    year = {2012},
    }
  • E. Kim, J. Kim, T. Pfeiffer, I. Wachsmuth, and B. Zhang, “‘Is this right?’ or ‘Is that wrong?’: Evidence from dynamic eye-hand movement in decision making [Abstract],” in Proceedings of the 34th Annual Meeting of the Cognitive Science Society, 2012, p. 2723.
    [BibTeX] [Abstract] [Download PDF]
    Eye tracking and hand motion (or mouse) tracking are complementary techniques to study the dynamics underlying human cognition. Eye tracking provides information about attention, reasoning and mental imagery, but figuring out the dynamics of cognition is hard. On the other hand, hand movement reveals the hidden states of high-level cognition as a continuous trajectory, but the detailed process is difficult to infer. Here, we use both eye and hand tracking while the subject watches a video drama and plays a multimodal memory game (MMG), a memory recall task designed to investigate the mechanism of recalling the contents of dramas. Our experimental results show that eye tracking and mouse tracking provide complementary information on cognitive processes. In particular, we found that, when humans make difficult decisions, they tend to ask ’Is the distractor wrong?’, rather than ’Is the decision right?’.
    @inproceedings{2524430,
    abstract = {Eye tracking and hand motion (or mouse) tracking are complementary techniques to study the dynamics underlying
    human cognition. Eye tracking provides information about attention, reasoning, mental imagery, but figuring out the dynamics of cognition is hard. On the other hand, hand movement reveals the hidden states of high-level cognition as a continuous trajectory, but the detailed process is difficult to infer. Here, we use both eye and hand tracking while the subject watches a video drama and plays a multimodal memory game (MMG), a memory recall task designed to investigate the mechanism of recalling the contents of dramas. Our experimental results show that eye tracking and mouse tacking provide complementary information on cognitive processes. In particular, we found that, when humans make difficult decisions, they tend to ask ’Is the
    distractor wrong?’, rather than ’Is the decision right?’.},
    author = {Kim, Eun-Sol and Kim, Jiseob and Pfeiffer, Thies and Wachsmuth, Ipke and Zhang, Byoung-Tak},
    booktitle = {Proceedings of the 34th Annual Meeting of the Cognitive Science Society},
    editor = {Miyake, Naomi and Peebles, David and Cooper, Richard P.},
    isbn = {978-0-9768318-8-4},
    keywords = {Gaze-based Interaction},
    pages = {2723},
    title = {{‘Is this right?’ or ‘Is that wrong?’: Evidence from dynamic eye-hand movement in decision making [Abstract]}},
    url = {https://pub.uni-bielefeld.de/record/2524430},
    year = {2012},
    }
  • N. Pfeiffer-Leßmann, T. Pfeiffer, and I. Wachsmuth, “An operational model of joint attention – timing of gaze patterns in interactions between humans and a virtual human,” in Proceedings of the 34th Annual Conference of the Cognitive Science Society, 2012, p. 851–856.
    [BibTeX] [Download PDF]
    @inproceedings{2497069,
    author = {Pfeiffer-Leßmann, Nadine and Pfeiffer, Thies and Wachsmuth, Ipke},
    booktitle = {Proceedings of the 34th Annual Conference of the Cognitive Science Society},
    editor = {Miyake, Naomi and Peebles, David and Cooper, Richard P.},
    isbn = {978-0-9768318-8-4},
    keywords = {joint attention, Gaze-based Interaction, social interaction, virtual humans},
    location = {Sapporo, Japan},
    pages = {851--856},
    publisher = {Cognitive Science Society},
    title = {{An operational model of joint attention - timing of gaze patterns in interactions between humans and a virtual human}},
    url = {https://nbn-resolving.org/urn:nbn:de:0070-pub-24970695, https://pub.uni-bielefeld.de/record/2497069},
    year = {2012},
    }
  • K. Essig, T. Pfeiffer, N. Sand, J. Künsemöller, H. Ritter, and T. Schack, “JVideoGazer – Towards an Automatic Annotation of Gaze Videos from Natural Scenes.” 2011, p. 48.
    [BibTeX] [Download PDF]
    @inproceedings{2144155,
    author = {Essig, Kai and Pfeiffer, Thies and Sand, Norbert and Künsemöller, Jörn and Ritter, Helge and Schack, Thomas},
    keywords = {annotation, object recognition, computer vision, Gaze-based Interaction, eye-tracking},
    location = {Marseille},
    number = {3},
    pages = {48},
    title = {{JVideoGazer - Towards an Automatic Annotation of Gaze Videos from Natural Scenes}},
    url = {https://pub.uni-bielefeld.de/record/2144155},
    volume = {4},
    year = {2011},
    }
  • T. Pfeiffer, C. Liguda, I. Wachsmuth, and S. Stein, “Living with a Virtual Agent: Seven Years with an Embodied Conversational Agent at the Heinz Nixdorf MuseumsForum,” in Proceedings of the Re-Thinking Technology in Museums 2011 – Emerging Experiences, 2011, p. 121–131.
    [BibTeX] [Abstract] [Download PDF]
    Since 2004 the virtual agent Max is living at the Heinz Nixdorf MuseumsForum – a computer science museum. He is welcoming and entertaining visitors ten hours a day, six days a week, for seven years. This article brings together the experiences made by the staff of the museum, the scientists who created and maintained the installation, the visitors and the agent himself. It provides insights about the installation’s hard- and software and presents highlights of the agent’s ontogenesis in terms of the features he has gained. A special focus is on the means Max uses to engage with visitors and the features which make him attractive.
    @inproceedings{2144347,
    abstract = {Since 2004 the virtual agent Max is living at the Heinz Nixdorf MuseumsForum – a computer science museum. He is welcoming and entertaining visitors ten hours a day, six days a week, for seven years. This article brings together the experiences made by the staff of the museum, the scientists who created and maintained the installation, the visitors and the agent himself. It provides insights about the installation’s hard- and software and presents highlights of the agent’s ontogenesis in terms of the features he has gained. A special focus is on the means Max uses to engage with visitors and the features which make him attractive.},
    author = {Pfeiffer, Thies and Liguda, Christian and Wachsmuth, Ipke and Stein, Stefan},
    booktitle = {Proceedings of the Re-Thinking Technology in Museums 2011 - Emerging Experiences},
    editor = {Barbieri, Sara and Scott, Katherine and Ciolfi, Luigina},
    isbn = {978-1-905952-31-1},
    keywords = {Embodied Conversational Agent, ECA, Chatterbot, Max, Museum, Artificial Intelligence, Virtual Agent},
    location = {Limerick},
    pages = {121--131},
    publisher = {thinkk creative & the University of Limerick},
    title = {{Living with a Virtual Agent: Seven Years with an Embodied Conversational Agent at the Heinz Nixdorf MuseumsForum}},
    url = {https://nbn-resolving.org/urn:nbn:de:0070-pub-21443472, https://pub.uni-bielefeld.de/record/2144347},
    year = {2011},
    }
  • P. Renner, N. Lüdike, J. Wittrowski, and T. Pfeiffer, “Towards Continuous Gaze-Based Interaction in 3D Environments – Unobtrusive Calibration and Accuracy Monitoring,” in Proceedings of the Workshop Virtuelle & Erweiterte Realität 2011, 2011, p. 13–24.
    [BibTeX] [Abstract] [Download PDF]
    The idea of using gaze as an interaction modality has been put forward by the famous work of Bolt in 1981. In virtual reality (VR), gaze has been used for several means since then: view-dependent optimization of rendering, intelligent information visualization, reference communication in distributed telecommunication settings and object selection. Our own research aims at improving gaze-based interaction methods in general. In this paper, gaze-based interaction is examined in a fast-paced selection task to identify current usability problems of gaze-based interaction and to develop best practices. To this end, an immersive Asteroids-like shooter called Eyesteroids was developed to support a study comparing manual and gaze-based interaction methods. Criteria for the evaluation were interaction performance and user immersion. The results indicate that while both modalities (hand and gaze) work well for the task, manual interaction is easier to use and often more accurate than the implemented gaze-based methods. The reasons are discussed and the best practices as well as options for further improvements of gaze-based interaction methods are presented.
    @inproceedings{2308556,
    abstract = {The idea of using gaze as an interaction modality has been put forward by the famous work of Bolt in 1981. In virtual reality (VR), gaze has been used for several means since then: view-dependent optimization of rendering, intelligent information visualization, reference communication in distributed telecommunication settings and object selection.
    Our own research aims at improving gaze-based interaction methods in general. In this paper, gaze-based interaction is examined in a fast-paced selection task to identify current usability problems of gaze-based interaction and to develop best practices. To this end, an immersive Asteroids-like shooter called Eyesteroids was developed to support a study comparing manual and gaze-based interaction methods. Criteria for the evaluation were interaction performance and user immersion. The results indicate that while both modalities (hand and gaze) work well for the task, manual interaction is easier to use and often more accurate than the implemented gaze-based methods. The reasons are discussed and the best practices as well as options for further improvements of gaze-based interaction methods are presented.},
    author = {Renner, Patrick and Lüdike, Nico and Wittrowski, Jens and Pfeiffer, Thies},
    booktitle = {Proceedings of the Workshop Virtuelle & Erweiterte Realität 2011},
    editor = {Bohn, Christian-A. and Mostafawy, Sina},
    keywords = {Virtual Reality, Gaze-based Interaction},
    pages = {13--24},
    publisher = {Shaker Verlag},
    title = {{Towards Continuous Gaze-Based Interaction in 3D Environments - Unobtrusive Calibration and Accuracy Monitoring}},
    url = {https://nbn-resolving.org/urn:nbn:de:0070-pub-23085561, https://pub.uni-bielefeld.de/record/2308556},
    year = {2011},
    }
  • T. Pfeiffer, Understanding Multimodal Deixis with Gaze and Gesture in Conversational Interfaces, Shaker Verlag, 2011.
    [BibTeX] [Abstract] [Download PDF]
    When humans communicate, we use deictic expressions to refer to objects in our surrounding and put them in the context of our actions. In face to face interaction, we can complement verbal expressions with gestures and, hence, we do not need to be too precise in our verbal protocols. Our interlocutors hear our speaking; see our gestures and they even read our eyes. They interpret our deictic expressions, try to identify the referents and – normally – they will understand. If only machines could do alike. The driving vision behind the research in this thesis are multimodal conversational interfaces where humans are engaged in natural dialogues with computer systems. The embodied conversational agent Max developed in the A.I. group at Bielefeld University is an example of such an interface. Max is already able to produce multimodal deictic expressions using speech, gaze and gestures, but his capabilities to understand humans are not on par. If he was able to resolve multimodal deictic expressions, his understanding of humans would increase and interacting with him would become more natural. Following this vision, we as scientists are confronted with several challenges. First, accurate models for human pointing have to be found. Second, precise data on multimodal interactions has to be collected, integrated and analyzed in order to create these models. This data is multimodal (transcripts, voice and video recordings, annotations) and not directly accessible for analysis (voice and video recordings). Third, technologies have to be developed to support the integration and the analysis of the multimodal data. Fourth, the created models have to be implemented, evaluated and optimized until they allow a natural interaction with the conversational interface. To this ends, this work aims to deepen our knowledge of human non-verbal deixis, specifically of manual and gaze pointing, and to apply this knowledge in conversational interfaces. At the core of the theoretical and empirical investigations of this thesis are models for the interpretation of pointing gestures to objects. These models address the following questions: When are we pointing? Where are we pointing to? Which objects are we pointing at? With respect to these questions, this thesis makes the following three contributions: First, gaze-based interaction technology for 3D environments: Gaze plays an important role in human communication, not only in deictic reference. Yet, technology for gaze interaction is still less developed than technology for manual interaction. In this thesis, we have developed components for real-time tracking of eye movements and of the point of regard in 3D space and integrated them in a framework for Deictic Reference In Virtual Environments (DRIVE). DRIVE provides viable information about human communicative behavior in real-time. This data can be used to investigate and to design processes on higher cognitive levels, such as turn-taking, check-backs, shared attention and resolving deictic reference. Second, data-driven modeling: We answer the theoretical questions about timing, direction, accuracy and dereferential power of pointing by data-driven modeling. As empirical basis for the simulations, we created a substantial corpus with high-precision data from an extensive study on multimodal pointing. Two further studies complemented this effort with substantial data on gaze pointing in 3D. Based on this data, we have developed several models of pointing and successfully created a model for the interpretation of manual pointing that achieves a human-like performance level. Third, new methodologies for research on multimodal deixis in the fields of linguistics and computer science: The experimental-simulative approach to modeling – which we follow in this thesis – requires large collections of heterogeneous data to be recorded, integrated, analyzed and resimulated. To support the researcher in these tasks, we developed the Interactive Augmented Data Explorer (IADE). IADE is an innovative tool for research on multimodal interaction based on virtual reality technology. It allows researchers to literally immerse into multimodal data and interactively explore them in real-time and in virtual space. With IADE we have also extended established approaches for scientific visualization of linguistic data to 3D, which previously existed only for 2D methods of analysis (e.g. video recordings or computer screen experiments). By this means, we extended McNeill’s 2D depiction of the gesture space to gesture space volumes expanding in time and space. Similarly, we created attention volumes, a new way to visualize the distribution of attention in 3D environments.
    @book{2445143,
    abstract = {When humans communicate, we use deictic expressions to refer to objects in our surrounding and put them in the context of our actions. In face to face interaction, we can complement verbal expressions with gestures and, hence, we do not need to be too precise in our verbal protocols. Our interlocutors hear our speaking; see our gestures and they even read our eyes. They interpret our deictic expressions, try to identify the referents and – normally – they will understand. If only machines could do alike.
    The driving vision behind the research in this thesis are multimodal conversational interfaces where humans are engaged in natural dialogues with computer systems. The embodied conversational agent Max developed in the A.I. group at Bielefeld University is an example of such an interface. Max is already able to produce multimodal deictic expressions using speech, gaze and gestures, but his capabilities to understand humans are not on par. If he was able to resolve multimodal deictic expressions, his understanding of humans would increase and interacting with him would become more natural.
    Following this vision, we as scientists are confronted with several challenges. First, accurate models for human pointing have to be found. Second, precise data on multimodal interactions has to be collected, integrated and analyzed in order to create these models. This data is multimodal (transcripts, voice and video recordings, annotations) and not directly accessible for analysis (voice and video recordings). Third, technologies have to be developed to support the integration and the analysis of the multimodal data. Fourth, the created models have to be implemented, evaluated and optimized until they allow a natural interaction with the conversational interface.
    To this ends, this work aims to deepen our knowledge of human non-verbal deixis, specifically of manual and gaze pointing, and to apply this knowledge in conversational interfaces. At the core of the theoretical and empirical investigations of this thesis are models for the interpretation of pointing gestures to objects. These models address the following questions: When are we pointing? Where are we pointing to? Which objects are we pointing at? With respect to these questions, this thesis makes the following three contributions: First, gaze-based interaction technology for 3D environments: Gaze plays an important role in human communication, not only in deictic reference. Yet, technology for gaze interaction is still less developed than technology for manual interaction.
    In this thesis, we have developed components for real-time tracking of eye movements and of the point of regard in 3D space and integrated them in a framework for Deictic Reference In Virtual Environments (DRIVE). DRIVE provides viable information about human communicative behavior in real-time. This data can be used to investigate and to design processes on higher cognitive levels, such as turn-taking, check-backs, shared attention and resolving deictic reference.
    Second, data-driven modeling: We answer the theoretical questions about timing, direction, accuracy and dereferential power of pointing by data-driven modeling.
    As empirical basis for the simulations, we created a substantial corpus with highprecision data from an extensive study on multimodal pointing. Two further studies complemented this effort with substantial data on gaze pointing in 3D. Based on this data, we have developed several models of pointing and successfully created a model for the interpretation of manual pointing that achieves a human-like performance level.
    Third, new methodologies for research on multimodal deixis in the fields of linguistics and computer science: The experimental-simulative approach to modeling – which we follow in this thesis – requires large collections of heterogeneous data to be recorded, integrated, analyzed and resimulated. To support the researcher in these tasks, we developed the Interactive Augmented Data Explorer (IADE). IADE is an innovative tool for research on multimodal interaction based on virtual reality technology. It allows researchers to literally immerse into multimodal data
    and interactively explore them in real-time and in virtual space. With IADE we have also extended established approaches for scientific visualization of linguistic data to 3D, which previously existed only for 2D methods of analysis (e.g. video recordings or computer screen experiments). By this means, we extended Mc-Neill’s 2D depiction of the gesture space to gesture space volumes expanding in time and space. Similarly, we created attention volumes, a new way to visualize the distribution of attention in 3D environments.},
    author = {Pfeiffer, Thies},
    isbn = {978-3-8440-0592-9},
    keywords = {Multimodal Communication, Gaze-based Interaction},
    pages = {217},
    publisher = {Shaker Verlag},
    title = {{Understanding Multimodal Deixis with Gaze and Gesture in Conversational Interfaces}},
    url = {https://pub.uni-bielefeld.de/record/2445143},
    year = {2011},
    }
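    One simple way to picture the interpretation of a manual pointing gesture, as studied in the thesis above, is to cast a ray from the hand and rank candidate objects by their angular distance to that ray, keeping only objects inside a fixed cone. The sketch below illustrates this generic cone-based approach; it is not the model developed in the thesis, and the 15° cone width and object names are invented.

        import numpy as np

        def rank_pointing_targets(origin, direction, objects, cone_deg=15.0):
            """Return (name, angular distance in degrees) for objects inside the cone."""
            direction = np.asarray(direction, dtype=float)
            direction /= np.linalg.norm(direction)
            hits = []
            for name, position in objects.items():
                to_object = np.asarray(position, dtype=float) - np.asarray(origin, dtype=float)
                to_object /= np.linalg.norm(to_object)
                angle = np.degrees(np.arccos(np.clip(direction @ to_object, -1.0, 1.0)))
                if angle <= cone_deg:
                    hits.append((name, angle))
            return sorted(hits, key=lambda hit: hit[1])

        objects = {"mug": (1.0, 0.1, 0.0), "lamp": (0.0, 1.0, 0.0), "book": (1.0, 0.4, 0.1)}
        print(rank_pointing_targets(origin=(0, 0, 0), direction=(1, 0, 0), objects=objects))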
  • C. Liguda and T. Pfeiffer, “A Question Answer System for Math Word Problems,” in First International Workshop on Algorithmic Intelligence, 2011.
    [BibTeX] [Abstract] [Download PDF]
    Solving word problems is an important part of school education in primary as well as in high school. Although the equations given by a word problem could be solved by most computer algebra programs without problems, there are only a few systems that are able to solve word problems. In this paper we present the ongoing work on a system that is able to solve word problems from German primary school math books.
    @inproceedings{2423329,
    abstract = {Solving word problems is an important part in school education in primary as well as in high school. Although, the equations that are given by a word problem could be solved by most computer algebra programs without problems, there are just few systems that are able to solve word problems. In this paper we present the ongoing work on a system, that is able to solve word problems from german primary school math books.},
    author = {Liguda, Christian and Pfeiffer, Thies},
    booktitle = {First International Workshop on Algorithmic Intelligence},
    editor = {Messerschmidt, Hartmut},
    keywords = {Artificial Intelligence, Natural Language Processing, Math Word Problems},
    location = {Berlin},
    title = {{A Question Answer System for Math Word Problems}},
    url = {https://nbn-resolving.org/urn:nbn:de:0070-pub-24233290, https://pub.uni-bielefeld.de/record/2423329},
    year = {2011},
    }
  • T. Pfeiffer, “Interaction between Speech and Gesture: Strategies for Pointing to Distant Objects,” in Gestures in Embodied Communication and Human-Computer Interaction, 9th International Gesture Workshop, GW 2011, 2011, p. 109–112.
    [BibTeX] [Abstract] [Download PDF]
    Referring to objects using multimodal deictic expressions is an important form of communication. This work addresses the question on how content is distributed between the modalities speech and gesture by comparing deictic pointing gestures to objects with and without speech. As a result, two main strategies used by participants to adapt their gestures to the condition were identified. This knowledge can be used, e.g., to improve the naturalness of pointing gestures employed by embodied conversational agents.
    @inproceedings{2144360,
    abstract = {Referring to objects using multimodal deictic expressions is an important form of communication. This work addresses the question on how content is distributed between the modalities speech and gesture by comparing deictic pointing gestures to objects with and without speech. As a result, two main strategies used by participants to adapt their gestures to the condition were identified. This knowledge can be used, e.g., to improve the naturalness of pointing gestures employed by embodied conversational agents.},
    author = {Pfeiffer, Thies},
    booktitle = {Gestures in Embodied Communication and Human-Computer Interaction, 9th International Gesture Workshop, GW 2011},
    editor = {Efthimiou, Eleni and Kouroupetroglou, Georgios},
    isbn = {978-960-466-075-9},
    keywords = {Interaction between Speech and Gesture, Gesture, Speech, Pointing, Multimodal Fusion, Multimodal Communication},
    location = {Athens, Greece},
    pages = {109--112},
    publisher = {National and Kapodistrian University of Athens},
    title = {{Interaction between Speech and Gesture: Strategies for Pointing to Distant Objects}},
    url = {https://pub.uni-bielefeld.de/record/2144360},
    year = {2011},
    }
  • F. Hülsmann, T. Dankert, and T. Pfeiffer, “Comparing gaze-based and manual interaction in a fast-paced gaming task in Virtual Reality,” in Proceedings of the Workshop Virtuelle & Erweiterte Realität 2011, 2011, p. 1–12.
    [BibTeX] [Abstract] [Download PDF]
    The idea of using gaze as an interaction modality has been put forward by the famous work of Bolt in 1981. In virtual reality (VR), gaze has been used for several means since then: view-dependent optimization of rendering, intelligent information visualization, reference communication in distributed telecommunication settings and object selection. Our own research aims at improving gaze-based interaction methods in general. In this paper, gaze-based interaction is examined in a fast-paced selection task to identify current usability problems of gaze-based interaction and to develop best practices. To this end, an immersive Asteroids-like shooter called Eyesteroids was developed to support a study comparing manual and gaze-based interaction methods. Criteria for the evaluation were interaction performance and user immersion. The results indicate that while both modalities (hand and gaze) work well for the task, manual interaction is easier to use and often more accurate than the implemented gaze-based methods. The reasons are discussed and the best practices as well as options for further improvements of gaze-based interaction methods are presented.
    @inproceedings{2308550,
    abstract = {The idea of using gaze as an interaction modality has been put forward by the famous work of Bolt in 1981. In virtual reality (VR), gaze has been used for several means since then: view-dependent optimization of rendering, intelligent information visualization, reference communication in distributed telecommunication settings and object selection.
    Our own research aims at improving gaze-based interaction methods in general. In this paper, gaze-based interaction is examined in a fast-paced selection task to identify current usability problems of gaze-based interaction and to develop best practices. To this end, an immersive Asteroids-like shooter called Eyesteroids was developed to support a study comparing manual and gaze-based interaction methods. Criteria for the evaluation were interaction performance and user immersion. The results indicate that while both modalities (hand and gaze) work well for the task, manual interaction is easier to use and often more accurate than the implemented gaze-based methods. The reasons are discussed and the best practices as well as options for further improvements of gaze-based interaction methods are presented.},
    author = {Hülsmann, Felix and Dankert, Timo and Pfeiffer, Thies},
    booktitle = {Proceedings of the Workshop Virtuelle & Erweiterte Realität 2011},
    editor = {Bohn, Christian-A. and Mostafawy, Sina},
    keywords = {Gaze-based Interaction},
    pages = {1--12},
    publisher = {Shaker Verlag},
    title = {{Comparing gaze-based and manual interaction in a fast-paced gaming task in Virtual Reality}},
    url = {https://nbn-resolving.org/urn:nbn:de:0070-pub-23085509, https://pub.uni-bielefeld.de/record/2308550},
    year = {2011},
    }
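    A common baseline for gaze-based selection in tasks like the one above is dwell time: a target is selected once gaze has rested on it long enough. The snippet below sketches only this baseline; it is not the Eyesteroids implementation, and the 0.5 s dwell threshold and target names are made up.

        class DwellSelector:
            """Select a target once gaze has rested on it for dwell_s seconds."""
            def __init__(self, dwell_s=0.5):
                self.dwell_s = dwell_s
                self.current = None
                self.since = 0.0

            def update(self, gazed_target, t):
                """Feed the currently gazed-at target (or None) with timestamp t."""
                if gazed_target != self.current:
                    self.current, self.since = gazed_target, t
                    return None
                if gazed_target is not None and t - self.since >= self.dwell_s:
                    self.since = t  # restart the dwell so selection does not repeat every frame
                    return gazed_target
                return None

        selector = DwellSelector(dwell_s=0.5)
        for frame, (target, t) in enumerate([("asteroid", 0.0), ("asteroid", 0.3),
                                             ("asteroid", 0.6), (None, 0.7)]):
            if selector.update(target, t):
                print(f"frame {frame}: selected {target}")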
  • T. Pfeiffer and I. Wachsmuth, “Dreidimensionale Erfassung visueller Aufmerksamkeit für Usability-Bewertungen an virtuellen Prototypen,” in 10. Paderborner Workshop Augmented and Virtual Reality in der Produktentstehung, 2011, p. 39–51.
    [BibTeX] [Abstract] [Download PDF]
    Measuring visual attention with eye tracking is an established method for evaluating ergonomics and usability. Its scope, however, is mostly limited to 2D content such as websites, product photos or product videos. Interaction involving movement in three-dimensional space is rarely captured, neither on real objects nor on virtual prototypes. Measuring attention in 3D space would considerably extend the method to cover such cases. This article reviews the current state of research on measuring visual attention in space, working out in particular the difficulties that have to be overcome and pointing out possible solutions. As a focus, three topics are discussed on the basis of our own work: setup and calibration of the systems, determination of the viewed volume, and visualization of attention in space.
    @inproceedings{2144329,
    abstract = {Die Messung visueller Aufmerksamkeit mittels Eye-Tracking ist eine etablierte Methode in der Bewertung von Ergonomie und Usability. Ihr Gegenstandsbereich beschränkt sich jedoch primär auf 2D-Inhalte wie Webseiten, Produktfotos oder –videos. Bewegte Interaktion im dreidimensionalen Raum wird selten erfasst, weder am realen Objekt, noch am virtuellen Prototyp. Mit einer Aufmerksamkeitsmessung im Raum könnte der Gegenstandsbereich um diese Fälle deutlich erweitert werden. Der vorliegende Artikel arbeitet den aktuellen Stand der Forschung zur Messung visueller Aufmerksamkeit im Raum auf. Dabei werden insbesondere die zu bewältigenden Schwierigkeiten herausgearbeitet und Lösungsansätze aufgezeigt. Als Schwerpunkt werden drei Themen an eigenen Arbeiten diskutiert: Aufbau und Kalibrierung der Systeme, Bestimmung des betrachteten Volumens und Visualisierung der Aufmerksamkeit im Raum.},
    author = {Pfeiffer, Thies and Wachsmuth, Ipke},
    booktitle = {10. Paderborner Workshop Augmented and Virtual Reality in der Produktentstehung},
    editor = {Gausemeier, Jürgen and Grafe, Michael and Meyer auf der Heide, Friedhelm},
    isbn = {978-3-942647-14-4},
    keywords = {Gaze-based Interaction},
    number = {295},
    pages = {39--51},
    publisher = {Heinz Nixdorf Institut, Universität Paderborn},
    title = {{Dreidimensionale Erfassung visueller Aufmerksamkeit für Usability-Bewertungen an virtuellen Prototypen}},
    url = {https://pub.uni-bielefeld.de/record/2144329},
    volume = {295},
    year = {2011},
    }
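    Determining the viewed point in space, one of the three topics listed above, is often approached by intersecting the two eye rays: since measured gaze rays rarely meet exactly, the midpoint of their closest approach is taken as the 3D point of regard. The sketch below shows this common geometric construction; it is an illustration, not necessarily the method used in the article.

        import numpy as np

        def point_of_regard(o_left, d_left, o_right, d_right):
            """Midpoint of the closest approach between two gaze rays o + t * d."""
            o1, d1 = np.asarray(o_left, float), np.asarray(d_left, float)
            o2, d2 = np.asarray(o_right, float), np.asarray(d_right, float)
            d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
            w = o1 - o2
            a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
            d, e = d1 @ w, d2 @ w
            denominator = a * c - b * b
            if abs(denominator) < 1e-9:          # near-parallel rays: no stable estimate
                return None
            t1 = (b * e - c * d) / denominator
            t2 = (a * e - b * d) / denominator
            return (o1 + t1 * d1 + o2 + t2 * d2) / 2.0

        # eyes 6 cm apart, both looking at a point roughly one metre ahead
        print(point_of_regard((-0.03, 0, 0), (0.03, 0, 1.0), (0.03, 0, 0), (-0.03, 0, 1.0)))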
  • N. Mattar and T. Pfeiffer, “Interactive 3D graphs for web-based social networking platforms,” International Journal of Computer Information Systems and Industrial Management Applications, vol. 3, p. 427–434, 2011.
    [BibTeX] [Abstract] [Download PDF]
    Social networking platforms (SNPs) are meant to reflect the social relationships of their users. Users typically enter very personal information and should get useful feedback about their social network. They should indeed be empowered to exploit the information they have entered. In reality, however, most SNPs actually hide the structure of the user’s rich social network behind very restricted text-based user interfaces and large parts of the potential information which could be extracted from the entered data lies fallow. This article presents results from a user study showing that 3D visualizations of social graphs can be utilized more effectively – and moreover – are preferred by users compared to traditional text-based interfaces. Subsequently, the article addresses the problem of how to deploy interactive 3D graphical interfaces to large user communities. This is demonstrated on the social graph application FriendGraph3D for Facebook.
    @article{2047078,
    abstract = {Social networking platforms (SNPs) are meant to reflect the social relationships of their users. Users typically enter very personal information and should get useful feedback about their social network. They should indeed be empowered to exploit the information they have entered. In reality, however, most SNPs actually hide the structure of the user’s rich social network behind very restricted text-based user interfaces and large parts of the potential information which could be extracted from the entered data lies fallow. This article presents results from a user study showing that 3D visualizations of social graphs can be utilized more effectively – and moreover – are preferred by users compared to traditional text-based interfaces. Subsequently, the article addresses the problem of how to deploy interactive 3D graphical interfaces to large user communities. This is demonstrated on the social graph app-
    lication FriendGraph3D for Facebook.},
    author = {Mattar, Nikita and Pfeiffer, Thies},
    issn = {2150-7988},
    journal = {International Journal of Computer Information Systems and Industrial Management Applications},
    keywords = {information visualization, interactive graphs, social networks, web technology, 3d graphics},
    pages = {427--434},
    publisher = {MIR Labs},
    title = {{Interactive 3D graphs for web-based social networking platforms}},
    url = {https://pub.uni-bielefeld.de/record/2047078},
    volume = {3},
    year = {2011},
    }
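    Rendering a social graph in 3D, as in the article above, presupposes a spatial layout of the friendship graph. The sketch below shows a tiny force-directed layout in 3D (springs along edges, repulsion between all node pairs); it is purely illustrative, unrelated to the actual FriendGraph3D implementation, and all parameters are arbitrary.

        import numpy as np

        def layout_3d(n_nodes, edges, iterations=200, k=0.3, step=0.02, seed=1):
            """Spread nodes in 3D: repulsion between all pairs, springs along edges."""
            rng = np.random.default_rng(seed)
            pos = rng.uniform(-1.0, 1.0, size=(n_nodes, 3))
            for _ in range(iterations):
                disp = np.zeros_like(pos)
                for i in range(n_nodes):                  # pairwise repulsion
                    delta = pos[i] - pos
                    dist = np.linalg.norm(delta, axis=1) + 1e-6
                    disp[i] += (delta / dist[:, None] * (k * k / dist)[:, None]).sum(axis=0)
                for i, j in edges:                        # spring attraction along edges
                    delta = pos[i] - pos[j]
                    dist = np.linalg.norm(delta) + 1e-6
                    force = delta / dist * (dist * dist / k)
                    disp[i] -= 0.5 * force
                    disp[j] += 0.5 * force
                pos += step * disp / (np.linalg.norm(disp, axis=1, keepdims=True) + 1e-6)
            return pos

        friendships = [(0, 1), (1, 2), (2, 0), (2, 3)]    # a tiny friendship graph
        print(layout_3d(4, friendships).round(2))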
  • T. Pfeiffer, “Understanding multimodal deixis with gaze and gesture in conversational interfaces,” PhD Thesis, 2010.
    [BibTeX] [Abstract] [Download PDF]
    When humans communicate, we use deictic expressions to refer to objects in our surrounding and put them in the context of our actions. In face to face interaction, we can complement verbal expressions with gestures and, hence, we do not need to be too precise in our verbal protocols. Our interlocutors hear our speaking; see our gestures and they even read our eyes. They interpret our deictic expressions, try to identify the referents and – normally – they will understand. If only machines could do alike. The driving vision behind the research in this thesis are multimodal conversational interfaces where humans are engaged in natural dialogues with computer systems. The embodied conversational agent Max developed in the A.I. group at Bielefeld University is an example of such an interface. Max is already able to produce multimodal deictic expressions using speech, gaze and gestures, but his capabilities to understand humans are not on par. If he was able to resolve multimodal deictic expressions, his understanding of humans would increase and interacting with him would become more natural. Following this vision, we as scientists are confronted with several challenges. First, accurate models for human pointing have to be found. Second, precise data on multimodal interactions has to be collected, integrated and analyzed in order to create these models. This data is multimodal (transcripts, voice and video recordings, annotations) and not directly accessible for analysis (voice and video recordings). Third, technologies have to be developed to support the integration and the analysis of the multimodal data. Fourth, the created models have to be implemented, evaluated and optimized until they allow a natural interaction with the conversational interface. To this ends, this work aims to deepen our knowledge of human non-verbal deixis, specifically of manual and gaze pointing, and to apply this knowledge in conversational interfaces. At the core of the theoretical and empirical investigations of this thesis are models for the interpretation of pointing gestures to objects. These models address the following questions: When are we pointing? Where are we pointing to? Which objects are we pointing at? With respect to these questions, this thesis makes the following three contributions: First, gaze-based interaction technology for 3D environments: Gaze plays an important role in human communication, not only in deictic reference. Yet, technology for gaze interaction is still less developed than technology for manual interaction. In this thesis, we have developed components for real-time tracking of eye movements and of the point of regard in 3D space and integrated them in a framework for DRIVE. DRIVE provides viable information about human communicative behavior in real-time. This data can be used to investigate and to design processes on higher cognitive levels, such as turn-taking, check-backs, shared attention and resolving deictic reference. Second, data-driven modeling: We answer the theoretical questions about timing, direction, accuracy and dereferential power of pointing by data-driven modeling. As empirical basis for the simulations, we created a substantial corpus with high-precision data from an extensive study on multimodal pointing. Two further studies complemented this effort with substantial data on gaze pointing in 3D. Based on this data, we have developed several models of pointing and successfully created a model for the interpretation of manual pointing that achieves a human-like performance level. Third, new methodologies for research on multimodal deixis in the fields of linguistics and computer science: The experimental-simulative approach to modeling – which we follow in this thesis – requires large collections of heterogeneous data to be recorded, integrated, analyzed and resimulated. To support the researcher in these tasks, we developed the Interactive Augmented Data Explorer. IADE is an innovative tool for research on multimodal interaction based on virtual reality technology. It allows researchers to literally immerse into multimodal data and interactively explore them in real-time and in virtual space. With IADE we have also extended established approaches for scientific visualization of linguistic data to 3D, which previously existed only for 2D methods of analysis (e.g. video recordings or computer screen experiments). By this means, we extended McNeill’s 2D depiction of the gesture space to gesture space volumes expanding in time and space. Similarly, we created attention volumes, a new way to visualize the distribution of attention in 3D environments.
    @phdthesis{2308111,
    abstract = {When humans communicate, we use deictic expressions to refer to objects in our surrounding and put them in the context of our actions. In face to face interaction, we can complement verbal expressions with gestures and, hence, we do not need to be too precise in our verbal protocols. Our interlocutors hear our speaking; see our gestures and they even read our eyes. They interpret our deictic expressions, try to identify the referents and -- normally -- they will understand. If only machines could do alike.
    The driving vision behind the research in this thesis are multimodal conversational interfaces where humans are engaged in natural dialogues with computer systems. The embodied conversational agent Max developed in the A.I. group at Bielefeld University is an example of such an interface. Max is already able to produce multimodal deictic expressions using speech, gaze and gestures, but his capabilities to understand humans are not on par. If he was able to resolve multimodal deictic expressions, his understanding of humans would increase and interacting with him would become more natural.
    Following this vision, we as scientists are confronted with several challenges. First, accurate models for human pointing have to be found. Second, precise data on multimodal interactions has to be collected, integrated and analyzed in order to create these models. This data is multimodal (transcripts, voice and video recordings, annotations) and not directly accessible for analysis (voice and video recordings). Third, technologies have to be developed to support the integration and the analysis of the multimodal data. Fourth, the created models have to be implemented, evaluated and optimized until they allow a natural interaction with the conversational interface.
    To this ends, this work aims to deepen our knowledge of human non-verbal deixis, specifically of manual and gaze pointing, and to apply this knowledge in conversational interfaces. At the core of the theoretical and empirical investigations of this thesis are models for the interpretation of pointing gestures to objects. These models address the following questions: When are we pointing? Where are we pointing to? Which objects are we pointing at? With respect to these questions, this thesis makes the following three contributions:
    First, gaze-based interaction technology for 3D environments: Gaze plays an important role in human communication, not only in deictic reference. Yet, technology for gaze interaction is still less developed than technology for manual interaction. In this thesis, we have developed components for real-time tracking of eye movements and of the point of regard in 3D space and integrated them in a framework for DRIVE. DRIVE provides viable information about human communicative behavior in real-time. This data can be used to investigate and to design processes on higher cognitive levels, such as turn-taking, check- backs, shared attention and resolving deictic reference.
    Second, data-driven modeling: We answer the theoretical questions about timing, direction, accuracy and dereferential power of pointing by data-driven modeling. As empirical basis for the simulations, we created a substantial corpus with high-precision data from an extensive study on multimodal pointing. Two further studies complemented this effort with substantial data on gaze pointing in 3D. Based on this data, we have developed several models of pointing and successfully created a model for the interpretation of manual pointing that achieves a human-like performance level.
    Third, new methodologies for research on multimodal deixis in the fields of linguistics and computer science: The experimental-simulative approach to modeling -- which we follow in this thesis -- requires large collections of heterogeneous data to be recorded, integrated, analyzed and resimulated. To support the researcher in these tasks, we developed the Interactive Augmented Data Explorer. IADE is an innovative tool for research on multimodal interaction based on virtual reality technology. It allows researchers to literally immerse into multimodal data and interactively explore them in real-time and in virtual space. With IADE we have also extended established approaches for scientific visualization of linguistic data to 3D, which previously existed only for 2D methods of analysis (e.g. video recordings or computer screen experiments). By this means, we extended McNeill's 2D depiction of the gesture space to gesture space volumes expanding in time and space. Similarly, we created attention volumes, a new way to visualize the distribution of attention in 3D environments.},
    author = {Pfeiffer, Thies},
    keywords = {Reference, Gesture, Deixis, Human-Computer Interaction, Mensch-Maschine-Schnittstelle, Lokale Deixis, Blickbewegung, Gaze, Virtuelle Realität, Multimodales System, Referenz , Gestik, Multimodal Communication, Gaze-based Interaction},
    pages = {241},
    publisher = {Universitätsbibliothek},
    title = {{Understanding multimodal deixis with gaze and gesture in conversational interfaces}},
    url = {https://nbn-resolving.org/urn:nbn:de:hbz:361-23081117, https://pub.uni-bielefeld.de/record/2308111},
    year = {2010},
    }
  • T. Pfeiffer, “Object Deixis: Interaction Between Verbal Expressions and Manual Pointing Gestures,” in Proceedings of the KogWis 2010, 2010, p. 221–222.
    [BibTeX] [Abstract] [Download PDF]
    Object deixis is at the core of language and an ideal example of multimodality. Speech, gaze and manual gestures are used by interlocutors to refer to objects in their 3D environment. The interplay of verbal expressions and gestures during deixis is an active research topic in linguistics as well as in human-computer interaction. Previously, we conducted a study on manual pointing during dialogue games using state-of-the-art tracking technologies to record gestures with high spatial precision (Kranstedt, Lücking, Pfeiffer, Rieser and Wachsmuth, 2006). To reveal strategies in manual pointing gestures, we present an analysis of this data with a new visualization technique.
    @inproceedings{1894499,
    abstract = {Object deixis is at the core of language and an ideal example of multimodality. Speech, gaze and manual gestures are used by interlocutors to refer to objects in their 3D environment. The interplay of verbal expressions and gestures during deixis is an active research topic in linguistics as well as in human-computer interaction. Previously, we conducted a study on manual pointing during dialogue games using state-of-the art tracking technologies to record gestures with high spatial precision (Kranstedt, Lücking, Pfeiffer, Rieser and Wachsmuth, 2006), To reveal strategies in manual pointing gestures, we present an analysis of this data with a new visualization technique.},
    author = {Pfeiffer, Thies},
    booktitle = {Proceedings of the KogWis 2010},
    editor = {Haack, Johannes and Wiese, Heike and Abraham, Andreas and Chiarcos, Christian},
    isbn = {978-3-86956-087-8},
    issn = {2190-4545},
    keywords = {Multimodal Communication},
    location = {Potsdam},
    pages = {221--222},
    publisher = {Universitätsverlag Potsdam},
    title = {{Object Deixis: Interaction Between Verbal Expressions and Manual Pointing Gestures}},
    url = {https://pub.uni-bielefeld.de/record/1894499},
    year = {2010},
    }
  • P. Renner, T. Dankert, D. Schneider, N. Mattar, and T. Pfeiffer, “Navigating and selecting in the virtual supermarket: review and update of classic interaction techniques,” in Virtuelle und Erweiterte Realität: 7. Workshop der GI-Fachgruppe VR/AR, 2010, p. 71–82.
    [BibTeX] [Abstract] [Download PDF]
    Classic techniques for navigation and selection such as Image-Plane and World in Miniature have been around for more than 20 years. In the course of a seminar on interaction in virtual reality we reconsidered five methods for navigation and two for selection. These methods were significantly extended by the use of up-to-date hardware such as Fingertracking devices and the Nintendo Wii Balance Board and evaluated in a virtual supermarket scenario. Two user studies, one on experts and one on novices, revealed information on usability and efficiency. As an outcome, the combination of Ray-Casting and Walking in Place turned out to be the fastest.
    @inproceedings{1894436,
    abstract = {Classic techniques for navigation and selection such as Image-Plane and World in Miniature have been around for more than 20 years. In the course of a seminar on interaction in virtual reality we reconsidered five methods for navigation and two for selection. These methods were significantly extended by the use of up-to-date hardware such as Fingertracking devices and the Nintendo Wii Balance Board and evaluated in a virtual supermarket scenario. Two user studies, one on experts and one on novices, revealed information on usability and efficiency. As an outcome, the combination of Ray-Casting and Walking in Place turned out to be the fastest.},
    author = {Renner, Patrick and Dankert, Timo and Schneider, Dorothe and Mattar, Nikita and Pfeiffer, Thies},
    booktitle = {Virtuelle und Erweiterte Realität: 7. Workshop der GI-Fachgruppe VR/AR},
    keywords = {human-machine-interaction},
    location = {Stuttgart},
    pages = {71--82},
    publisher = {Shaker Verlag},
    title = {{Navigating and selecting in the virtual supermarket: review and update of classic interaction techniques}},
    url = {https://nbn-resolving.org/urn:nbn:de:0070-pub-18944367, https://pub.uni-bielefeld.de/record/1894436},
    year = {2010},
    }
  • T. Pfeiffer, “Tracking and Visualizing Visual Attention in Real 3D Space,” in Proceedings of the KogWis 2010, 2010, p. 220–221.
    [BibTeX] [Abstract] [Download PDF]
    Humans perceive, reason and act within a 3D environment. In empirical methods, however, researchers often restrict themselves to 2D, either in using 2D content or relying on 2D recordings for analysis, such as videos or 2D eye movements. Regarding, e.g., multimodal deixis, we address the open question of the morphology of the referential space (Butterworth and Itakura, 2000). For modeling the referential space of gaze pointing, precise knowledge about the target of our participants’ visual attention is crucial. To this end, we developed methods to assess the location of the point of regard, which are outlined here.
    @inproceedings{1894506,
    abstract = {Humans perceive, reason and act within a 3D environment. In empirical methods, however, researchers often restrict themselves to 2D, either in using 2D content or relying on 2D recordings for analysis, such as videos or 2D eye movements. Regarding, e.g., multimodal deixis, we address the open question of the morphology of the referential space (Butterworth and Itakura, 2000), For modeling the referential space of gaze pointing, precise knowledge about the target of our participants’ visual attention is crucial. To this ends, we developed methods to assess the location of the point of regard, which are outlined here.},
    author = {Pfeiffer, Thies},
    booktitle = {Proceedings of the KogWis 2010},
    editor = {Haack, Johannes and Wiese, Heike and Abraham, Andreas and Chiarcos, Christian},
    isbn = {978-3-86956-087-8},
    issn = {2190-4545},
    keywords = {Gaze-based Interaction},
    location = {Potsdam, Germany},
    pages = {220--221},
    publisher = {Universitätsverlag Potsdam},
    title = {{Tracking and Visualizing Visual Attention in Real 3D Space}},
    url = {https://pub.uni-bielefeld.de/record/1894506},
    year = {2010},
    }
  • N. Mattar and T. Pfeiffer, “Relationships in social networks revealed: a facebook app for social graphs in 3D based on X3DOM and WebGL,” in Proceedings of the IADIS International Conference Web Virtual Reality and Three-Dimensional Worlds 2010, 2010, p. 269–276.
    [BibTeX] [Abstract] [Download PDF]
    From the perspective of individual users, social networking platforms (SNPs) are meant to reflect their social relationships. SNPs should provide feedback allowing users to exploit the information they have entered. In reality, however, most SNPs actually hide the rich social network constructed by the users in their databases behind simple user interfaces. These interfaces reduce the complexity of a user’s social network to a text-based list in HTML. This article presents results from a user study showing that 3D visualizations of social graphs can be utilized more effectively – and moreover – are preferred by users compared to traditional text-based interfaces. Subsequently, the article addresses the problem of deployment of rich interfaces. A social graph application for Facebook is presented, demonstrating how WebGL and HTML5/X3D can be used to implement rich social applications based on upcoming web standards.
    @inproceedings{1990994,
    abstract = {From the perspective of individual users, social networking platforms (SNPs) are meant to reflect their social relationships. SNPs should provide feedback allowing users to exploit the information they have entered. In reality, however, most SNPs actually hide the rich social network constructed by the users in their databases behind simple user interfaces. These interfaces reduce the complexity of a user's social network to a text-based list in HTML. This article presents results from a user study showing that 3D visualizations of social graphs can be utilized more effectively – and moreover – are preferred by users compared to traditional text-based interfaces. Subsequently, the article addresses the problem of deployment of rich interfaces. A social graph application for Facebook is presented, demonstrating how WebGL and HTML5/X3D can be used to implement rich social applications based on upcoming web standards.},
    author = {Mattar, Nikita and Pfeiffer, Thies},
    booktitle = {Proceedings of the IADIS International Conference Web Virtual Reality and Three-Dimensional Worlds 2010},
    keywords = {social networks},
    pages = {269--276},
    publisher = {IADIS Press},
    title = {{Relationships in social networks revealed: a facebook app for social graphs in 3D based on X3DOM and WebGL}},
    url = {https://pub.uni-bielefeld.de/record/1990994},
    year = {2010},
    }
  • A. Bluhm, J. Eickmeyer, T. Feith, N. Mattar, and T. Pfeiffer, “Exploration von sozialen Netzwerken im 3D Raum am Beispiel von SONAR für Last.fm,” in Virtuelle und Erweiterte Realität: 6. Workshop der GI-Fachgruppe VR/AR, 2009, p. 269–280.
    [BibTeX] [Abstract] [Download PDF]
    Online social networking platforms have long since moved beyond managing simple acquaintance relationships and are built on increasingly rich data models. Presenting and exploring these networks is a major challenge for developers, if it is not to become one for the users as well. In a student project, the third dimension was made usable for visualizing the complex social network of Last.fm. With the resulting application SoNAR, the network can be explored interactively and intuitively, both on the desktop and immersed in a three-sided CAVE. Different relations of the social graph can be explored in parallel, so that connections between individuals can be grasped intuitively. A search function allows the flexible composition of different start nodes for the exploration.
    @inproceedings{1894550,
    abstract = {Die Portale für soziale Netzwerke im Internet gehen mittlerweile deutlich über die Verwaltung einfacher Bekanntschaftsbeziehungen hinaus. Ihnen liegen immer reichhaltigere Datenmodelle zu Grunde. Darstellung und Exploration dieser Netzwerke sind eine grosse Herausforderung für die Entwickler, wenn beides nicht zu einer solchen für die Benutzer werden soll. Im Rahmen eines studentischen Projektes wurde die dritte Dimension für die Darstellung des komplexen sozialen Netzwerkes von Last.fm nutzbar gemacht. Durch die entwickelte Anwendung SoNAR wird das Netzwerk interaktiv und intuitiv sowohl am Desktop, als auch in der Immersion einer dreiseitigen CAVE explorierbar. Unterschiedliche Relationen des sozialen Graphen können parallel exploriert und damit Zusammenhänge zwischen Individuen intuitiv erfahren werden. Eine Suchfunktion erlaubt dabei die fexible Komposition verschiedener Startknoten für die Exploration.},
    author = {Bluhm, Andreas and Eickmeyer, Jens and Feith, Tobias and Mattar, Nikita and Pfeiffer, Thies},
    booktitle = {Virtuelle und Erweiterte Realität: 6. Workshop der GI-Fachgruppe VR/AR},
    keywords = {social networks},
    pages = {269--280},
    publisher = {Shaker Verlag},
    title = {{Exploration von sozialen Netzwerken im 3D Raum am Beispiel von SONAR für Last.fm}},
    url = {https://nbn-resolving.org/urn:nbn:de:0070-pub-18945507, https://pub.uni-bielefeld.de/record/1894550},
    year = {2009},
    }
  • T. Pfeiffer and N. Mattar, “Benefits of locating overt visual attention in space using binocular eye tracking for mixed reality applications,” in Workshop-Proceedings der Tagung Mensch & Computer 2009: Grenzenlos frei!?, 2009.
    [BibTeX] [Abstract] [Download PDF]
    The “Where?” is quite important for Mixed Reality applications: Where is the user looking? Where should augmentations be displayed? The location of the overt visual attention of the user can be used both to disambiguate referent objects and to inform an intelligent view management of the user interface. While the vertical and horizontal orientation of attention is quite commonly used, e.g. derived from the orientation of the head, only knowledge about the distance allows for an intrinsic measurement of the location of the attention. This contribution reviews our latest results on detecting the location of attention in 3D space using binocular eye tracking.
    @inproceedings{1894565,
    abstract = {The "Where?" is quite important for Mixed Reality applications: Where is the user looking at? Where should augmentations be displayed? The location of the overt visual attention of the user can be used both to disambiguate referent objects and to inform an intelligent view management of the user interface. While the vertical and horizontal orientation of attention is quite commonly used, e.g. derived from the orientation of the head, only knowledge about the distance allows for an intrinsic measurement of the location of the attention. This contribution reviews our latest results on detecting the location of attention in 3D space using binocular eye tracking.},
    author = {Pfeiffer, Thies and Mattar, Nikita},
    booktitle = {Workshop-Proceedings der Tagung Mensch & Computer 2009: Grenzenlos frei!?},
    keywords = {Gaze-based Interaction},
    publisher = {Logos Berlin},
    title = {{Benefits of locating overt visual attention in space using binocular eye tracking for mixed reality applications}},
    url = {https://nbn-resolving.org/urn:nbn:de:0070-pub-18945651, https://pub.uni-bielefeld.de/record/1894565},
    year = {2009},
    }
  • T. Pfeiffer and I. Wachsmuth, “Social Presence: The Role of Interpersonal Distances in Affective Computer-Mediated Communication,” in Proceedings of the 11th International Workshop on Presence, 2008, p. 275–279.
    [BibTeX] [Abstract] [Download PDF]
    Emotions and interpersonal distances are identified as key aspects in social interaction. A novel Affective Computer-Mediated Communication (ACMC) framework has been developed, making the interplay of both aspects explicit to facilitate social presence. In this ACMC framework, the displays can be arranged in virtual space manually or automatically. We expect that, according to empirical findings, the social relation as well as momentary affective appraisals will influence this arrangement. The proposed concept extends from desktop devices to fully immersive Virtual Reality interfaces.
    @inproceedings{2426987,
    abstract = {Emotions and interpersonal distances are identified as key aspects in social interaction. A novel Affective Computer-Mediated Communication (ACMC) framework has been developed making the interplay of both aspects explicit to facilitate social presence. In this ACMC framework, the displays can be arranged in virtual space manually or automatically. We expect that, according to empirical findings, the social relation as well as momentarily affective appraisals will influence this arrangement. The proposed concept extends from desktop devices to fully immersive Virtual Reality interfaces.},
    author = {Pfeiffer, Thies and Wachsmuth, Ipke},
    booktitle = {Proceedings of the 11th International Workshop on Presence},
    editor = {Spagnolli, Anna and Gamberini, Luciano},
    isbn = {978-88-6129-287-1},
    keywords = {Mediated Communication},
    pages = {275--279},
    publisher = {CLEUP Cooperativa Libraria Universitaria Padova},
    title = {{Social Presence: The Role of Interpersonal Distances in Affective Computer-Mediated Communication}},
    url = {https://pub.uni-bielefeld.de/record/2426987},
    year = {2008},
    }
  • T. Pfeiffer, M. E. Latoschik, and I. Wachsmuth, “Evaluation of binocular eye trackers and algorithms for 3D gaze interaction in virtual reality environments,” JVRB – Journal of Virtual Reality and Broadcasting, vol. 5, iss. 16, p. 1660, 2008.
    [BibTeX] [Abstract] [Download PDF]
    Tracking user’s visual attention is a fundamental aspect in novel human-computer interaction paradigms found in Virtual Reality. For example, multimodal interfaces or dialogue-based communications with virtual and real agents greatly benefit from the analysis of the user’s visual attention as a vital source for deictic references or turn-taking signals. Current approaches to determine visual attention rely primarily on monocular eye trackers. Hence they are restricted to the interpretation of two-dimensional fixations relative to a defined area of projection. The study presented in this article compares precision, accuracy and application performance of two binocular eye tracking devices. Two algorithms are compared which derive depth information as required for visual attention-based 3D interfaces. This information is further applied to an improved VR selection task in which a binocular eye tracker and an adaptive neural network algorithm is used during the disambiguation of partly occluded objects.
    @article{1998802,
    abstract = {Tracking user's visual attention is a fundamental aspect in novel human-computer interaction paradigms found in Virtual Reality. For example, multimodal interfaces or dialogue-based communications with virtual and real agents greatly benefit from the analysis of the user's visual attention as a vital source for deictic references or turn-taking signals. Current approaches to determine visual attention rely primarily on monocular eye trackers. Hence they are restricted to the interpretation of two-dimensional fixations relative to a defined area of projection. The study presented in this article compares precision, accuracy and application performance of two binocular eye tracking devices. Two algorithms are compared which derive depth information as required for visual attention-based 3D interfaces. This information is further applied to an improved VR selection task in which a binocular eye tracker and an adaptive neural network algorithm is used during the disambiguation of partly occluded objects.},
    author = {Pfeiffer, Thies and Latoschik, Marc Erich and Wachsmuth, Ipke},
    issn = {1860-2037},
    journal = {JVRB - Journal of Virtual Reality and Broadcasting},
    keywords = {human-computer interaction, object selection, virtual reality, gaze-based interaction, eye tracking},
    number = {16},
    pages = {1660},
    publisher = {Univ. of Applied Sciences},
    title = {{Evaluation of binocular eye trackers and algorithms for 3D gaze interaction in virtual reality environments}},
    url = {https://nbn-resolving.org/urn:nbn:de:0009-6-16605, https://pub.uni-bielefeld.de/record/1998802},
    volume = {5},
    year = {2008},
    }
  • M. Meißner, K. Essig, T. Pfeiffer, R. Decker, and H. Ritter, “Eye-tracking decision behaviour in choice-based conjoint analysis.” 2008, p. 97–97.
    [BibTeX] [Download PDF]
    @inproceedings{1635597,
    author = {Meißner, Martin and Essig, Kai and Pfeiffer, Thies and Decker, Reinhold and Ritter, Helge},
    issn = {0301-0066},
    location = {Utrecht, The Netherlands},
    number = {Suppl. 1},
    pages = {97--97},
    publisher = {Pion},
    title = {{Eye-tracking decision behaviour in choice-based conjoint analysis}},
    url = {https://pub.uni-bielefeld.de/record/1635597},
    volume = {37},
    year = {2008},
    }
  • A. Breuing, T. Pfeiffer, and S. Kopp, “Conversational Interface Agents for the Semantic Web – a Case Study,” in Proceedings of the Poster and Demonstration Session at the 7th International Semantic Web Conference (ISWC 2008), 2008.
    [BibTeX] [Abstract] [Download PDF]
    The Semantic Web is about to become a rich source of knowledge whose potential will be squandered if it is not accessible for everyone. Intuitive interfaces like conversational agents are needed to better disseminate this knowledge, either on request or even proactively in a context-aware manner. This paper presents work on extending an existing conversational agent, Max, with abilities to access the Semantic Web via natural language communication.
    @inproceedings{1894449,
    abstract = {The Semantic Web is about to become a rich source of knowledge whose potential will be squandered if it is not accessible for everyone. Intuitive interfaces like conversational agents are needed to better disseminate this knowledge, either on request or even proactively in a context-aware manner. This paper presents work on extending an existing conversational agent, Max, with abilities to access the Semantic Web via natural language communication.},
    author = {Breuing, Alexa and Pfeiffer, Thies and Kopp, Stefan},
    booktitle = {Proceedings of the Poster and Demonstration Session at the 7th International Semantic Web Conference (ISWC 2008)},
    editor = {Bizer, Christian and Joshi, Anupam},
    title = {{Conversational Interface Agents for the Semantic Web - a Case Study}},
    url = {https://pub.uni-bielefeld.de/record/1894449},
    year = {2008},
    }
  • T. Pfeiffer, “Towards Gaze Interaction in Immersive Virtual Reality: Evaluation of a Monocular Eye Tracking Set-Up,” in Virtuelle und Erweiterte Realität – Fünfter Workshop der GI-Fachgruppe VR/AR, 2008, p. 81–92.
    [BibTeX] [Abstract] [Download PDF]
    Of all senses, it is visual perception that is predominantly deluded in Virtual Realities. Yet, the eyes of the observer, despite the fact that they are the fastest perceivable moving body part, have gotten relatively little attention as an interaction modality. A solid integration of gaze, however, provides great opportunities for implicit and explicit human-computer interaction. We present our work on integrating a lightweight head-mounted eye tracking system in a CAVE-like Virtual Reality Set-Up and provide promising data from a user study on the achieved accuracy and latency.
    @inproceedings{1894513,
    abstract = {Of all senses, it is visual perception that is predominantly deluded in Virtual Realities. Yet, the eyes of the observer, despite the fact that they are the fastest perceivable moving body part, have gotten relatively little attention as an interaction modality. A solid integration of gaze, however, provides great opportunities for implicit and explicit human-computer interaction. We present our work on integrating a lightweight head-mounted eye tracking system in a CAVE-like Virtual Reality Set-Up and provide promising data from a user study on the achieved accuracy and latency.},
    author = {Pfeiffer, Thies},
    booktitle = {Virtuelle und Erweiterte Realität - Fünfter Workshop der GI-Fachgruppe VR/AR},
    editor = {Schumann, Marco and Kuhlen, Torsten},
    isbn = {978-3-8322-7572-3},
    keywords = {Gaze-based Interaction},
    pages = {81--92},
    publisher = {Shaker Verlag GmbH},
    title = {{Towards Gaze Interaction in Immersive Virtual Reality: Evaluation of a Monocular Eye Tracking Set-Up}},
    url = {https://nbn-resolving.org/urn:nbn:de:0070-pub-18945131, https://pub.uni-bielefeld.de/record/1894513},
    year = {2008},
    }
  • T. Pfeiffer, M. E. Latoschik, and I. Wachsmuth, “Conversational pointing gestures for virtual reality interaction: Implications from an empirical study,” in Proceedings of the IEEE VR 2008, 2008, p. 281–282. doi:10.1109/vr.2008.4480801
    [BibTeX] [Abstract] [Download PDF]
    Interaction in conversational interfaces strongly relies on the system’s capability to interpret the user’s references to objects via deictic expressions. Deictic gestures, especially pointing gestures, provide a powerful way of referring to objects and places, e.g., when communicating with an Embodied Conversational Agent in a Virtual Reality Environment. We highlight results drawn from a study on pointing and draw conclusions for the implementation of pointing-based conversational interactions in partly immersive Virtual Reality.
    @inproceedings{1998808,
    abstract = {Interaction in conversational interfaces strongly relies on the system's capability to interpret the user's references to objects via deictic expressions. Deictic gestures, especially pointing gestures, provide a powerful way of referring to objects and places, e.g., when communicating with an Embodied Conversational Agent in a Virtual Reality Environment. We highlight results drawn from a study on pointing and draw conclusions for the implementation of pointing-based conversational interactions in partly immersive Virtual Reality.},
    author = {Pfeiffer, Thies and Latoschik, Marc Erich and Wachsmuth, Ipke},
    booktitle = {Proceedings of the IEEE VR 2008},
    editor = {Lin, Ming and Steed, Anthony and Cruz-Neira, Carolina},
    isbn = {978-1-4244-1971-5},
    keywords = {Multimodal Communication},
    pages = {281--282},
    publisher = {IEEE Press},
    title = {{Conversational pointing gestures for virtual reality interaction: Implications from an empirical study}},
    url = {https://pub.uni-bielefeld.de/record/1998808},
    doi = {10.1109/vr.2008.4480801},
    year = {2008},
    }
  • P. Weiß, T. Pfeiffer, G. Schaffranietz, and G. Rickheit, “Coordination in dialog: Alignment of object naming in the Jigsaw Map Game,” in Cognitive Science 2007: Proceedings of the 8th Annual Conference of the Cognitive Science Society of Germany, 2008.
    [BibTeX] [Abstract] [Download PDF]
    People engaged in successful dialog have to share knowledge, e.g., when naming objects, for coordinating their actions. According to Clark (1996), this shared knowledge, the common ground, is explicitly established, particularly by negotiations. Pickering and Garrod (2004) propose with their alignment approach a more automatic and resource-sensitive mechanism based on priming. Within the collaborative research center (CRC) “Alignment in Communication” a series of experimental investigations of natural face-to-face dialogs should bring about vital evidence to arbitrate between the two positions. This series should ideally be based on a common setting. In this article we review experimental settings in this research line and refine a set of requirements. We then present a flexible design called the Jigsaw Map Game and demonstrate its applicability by reporting on a first experiment on object naming.
    @inproceedings{1948783,
    abstract = {People engaged in successful dialog have to share knowledge, e.g., when naming objects, for coordinating their actions. According to Clark (1996), this shared knowledge, the common ground, is explicitly established, particularly by negotiations. Pickering and Garrod (2004) propose with their alignment approach a more automatic and resource-sensitive mechanism based on priming. Within the collaborative research center (CRC) "Alignment in Communication" a series of experimental investigations of natural Face-to-face dialogs should bring about vital evidence to arbitrate between the two positions. This series should ideally be based on a common setting. In this article we review experimental settings in this research line and refine a set of requirements. We then present a flexible design called the Jigsaw Map Game and demonstrate its applicability by reporting on a first experiment on object naming.},
    author = {Weiß, Petra and Pfeiffer, Thies and Schaffranietz, Gesche and Rickheit, Gert},
    booktitle = {Cognitive Science 2007: Proceedings of the 8th Annual Conference of the Cognitive Science Society of Germany},
    editor = {Zimmer, Hubert D. and Frings, C. and Mecklinger, Axel and Opitz, B. and Pospeschill, M. and Wentura, D.},
    keywords = {Multimodal Communication},
    location = {Saarbrücken, Germany},
    title = {{Coordination in dialog: Alignment of object naming in the Jigsaw Map Game}},
    url = {https://nbn-resolving.org/urn:nbn:de:0070-pub-19487832, https://pub.uni-bielefeld.de/record/1948783},
    year = {2008},
    }
  • T. Pfeiffer, M. Donner, M. E. Latoschik, and I. Wachsmuth, “3D fixations in real and virtual scenarios,” Journal of Eye Movement Research, vol. 1, iss. 5 (Special issue: Abstracts of the ECEM 2007), p. 13, 2007. doi:10.16910/jemr.1.5.1
    [BibTeX] [Abstract] [Download PDF]
    Humans perceive and act within a visual world. This has become important even in disciplines which are prima facie not concerned with vision, and eye tracking is now used in a broad range of domains. However, the world we are in is not two-dimensional, as many experiments may convince us to believe. It is convenient to use stimuli in 2D or 2½D and in most cases absolutely appropriate; but it is often technically motivated and not scientifically. To overcome these technical limitations, we contribute results of an evaluation of different approaches to calculate the depth of a fixation based on the divergence of the eyes by testing them on different devices and within real and virtual scenarios.
    @article{1894526,
    abstract = {Humans perceive and act within a visual world. This has become important even in disciplines which are prima facie not concerned with vision and eye tracking is now used in a broad range of domains. However, the world we are in is not two-dimensional, as many experiments may convince us to believe. It is convenient to use stimuli in 2D or 2½D and in most cases absolutely appropriate; but it is often technically motivated and not scientifically. To overcome these technical limitations we contribute results of an evaluation of different approaches to calculate the depth of a fixation based on the divergence of the eyes by testing them on different devices and within real and virtual scenarios.},
    author = {Pfeiffer, Thies and Donner, Matthias and Latoschik, Marc Erich and Wachsmuth, Ipke},
    issn = {1995-8692},
    journal = {Journal of Eye Movement Research},
    keywords = {Gaze-based Interaction},
    number = {5 (Special issue: Abstracts of the ECEM 2007)},
    pages = {13},
    publisher = {Journal of Eye Movement Research},
    title = {{3D fixations in real and virtual scenarios}},
    url = {https://nbn-resolving.org/urn:nbn:de:0070-pub-18945267, https://pub.uni-bielefeld.de/record/1894526},
    doi = {10.16910/jemr.1.5.1},
    volume = {1},
    year = {2007},
    }
  • T. Pfeiffer and M. E. Latoschik, “Interactive Social Displays,” in IPT-EGVE 2007, Virtual Environments 2007, Short Papers and Posters, 2007, p. 41–42.
    [BibTeX] [Abstract] [Download PDF]
    The mediation of social presence is one of the most interesting challenges of modern communication technology. The proposed metaphor of Interactive Social Displays describes new ways of interactions with multi-/crossmodal interfaces prepared for a psychologically augmented communication. A first prototype demonstrates the application of this metaphor in a teleconferencing scenario.
    @inproceedings{2426961,
    abstract = {The mediation of social presence is one of the most interesting challenges of modern communication technology. The proposed metaphor of Interactive Social Displays describes new ways of interactions with multi-/crossmodal interfaces prepared for a psychologically augmented communication. A first prototype demonstrates the application of this metaphor in a teleconferencing scenario.},
    author = {Pfeiffer, Thies and Latoschik, Marc Erich},
    booktitle = {IPT-EGVE 2007, Virtual Environments 2007, Short Papers and Posters},
    editor = {Fröhlich, Bernd and Blach, Roland and Liere van, Robert},
    keywords = {Mediated Communication},
    pages = {41--42},
    publisher = {Eurographics Association},
    title = {{Interactive Social Displays}},
    url = {https://pub.uni-bielefeld.de/record/2426961},
    year = {2007},
    }
  • G. Schaffranietz, P. Weiß, T. Pfeiffer, and G. Rickheit, “Ein Experiment zur Koordination von Objektbezeichnungen im Dialog,” in Kognitionsforschung 2007 – Beiträge zur 8. Jahrestagung der Gesellschaft für Kognitionswissenschaft, 2007, p. 41–42.
    [BibTeX] [Abstract] [Download PDF]
    People engaged in successful dialog have to share knowledge, e.g., when naming objects, for coordinating their actions. According to Clark (1996), this shared knowledge, the common ground, is explicitly established, particularly by negotiations. Pickering and Garrod (2004) propose with their alignment approach a more automatic and resource-sensitive mechanism based on priming. Within the collaborative research center (CRC) “Alignment in Communication” a series of experimental investigations of natural face-to-face dialogs should bring about vital evidence to arbitrate between the two positions. This series should ideally be based on a common setting. In this article we review experimental settings in this research line and refine a set of requirements. We then present a flexible design called the Jigsaw Map Game and demonstrate its applicability by reporting on a first experiment on object naming.
    @inproceedings{2426876,
    abstract = {People engaged in successful dialog have to share knowledge, e.g., when naming objects, for coordinating their actions. According to Clark (1996), this shared knowledge, the common ground, is explicitly established, particularly by negotiations. Pickering and Garrod (2004) propose with their alignment approach a more automatic and resource-sensitive mechanism based on priming. Within the collaborative research center (CRC) “Alignment in Communication” a series of experimental investigations of natural face-to-face dialogs should bring about vital evidence to arbitrate between the two positions. This series should ideally be based on a common setting. In this article we review experimental settings in this research line and refine a set of requirements. We then present a flexible design called the Jigsaw Map Game and demonstrate its applicability by reporting on a first experiment on object naming.},
    author = {Schaffranietz, Gesche and Weiß, Petra and Pfeiffer, Thies and Rickheit, Gert},
    booktitle = {Kognitionsforschung 2007 - Beiträge zur 8. Jahrestagung der Gesellschaft für Kognitionswissenschaft},
    editor = {Frings, Christian and Mecklinger, Axel and Opitz, Betram and Pospeschill, Markus and Wentura, Dirk and Zimmer, Hubert D.},
    keywords = {Multimodal Communication},
    pages = {41--42},
    publisher = {Shaker Verlag},
    title = {{Ein Experiment zur Koordination von Objektbezeichnungen im Dialog}},
    url = {https://pub.uni-bielefeld.de/record/2426876},
    year = {2007},
    }
  • T. Pfeiffer and I. Wachsmuth, “Interpretation von Objektreferenzen in multimodalen Äußerungen,” in Kognitionsforschung 2007 – Beiträge zur 8. Jahrestagung der Gesellschaft für Kognitionswissenschaft, 2007, p. 109–110.
    [BibTeX] [Abstract] [Download PDF]
    In the context of developing a multimodal interface for human-machine communication, this work focuses on the interpretation of references to visible objects. The central questions concern the accuracy of pointing gestures and their interaction with verbal expressions. Methodologically, the work spans an arc from empirical studies via simulation and visualization to modeling and evaluation. In studies on deictic object reference, extensive data on deictic pointing were collected alongside verbal utterances using modern motion capturing technology. These heterogeneous data, consisting of tracking data as well as video and audio recordings, were annotated, integrated and prepared with purpose-built interactive tools based on virtual reality technology. The statistical analysis of the data was subsequently carried out with the free statistics software R. The data-driven modeling forms the basis for the further development of a fuzzy constraint satisfaction approach to the interpretation of object references. A key goal is incremental, real-time processing that allows the approach to be used in direct human-machine interaction. Beyond the immediate research question, the results of the study inform a model for the production of deictic expressions and have direct consequences for relevant theories of deictic reference.
    @inproceedings{2426897,
    abstract = {Im Rahmen der Entwicklung einer multimodalen Schnittstelle für die Mensch-Maschine Kommunikation konzentriert sich diese Arbeit auf die Interpretation von Referenzen auf sichtbare Objekte. Im Vordergrund stehen dabei Fragen zur Genauigkeit von Zeigegesten und deren Interaktion mit sprachlichen Ausdrücken. Die Arbeit spannt dabei methodisch einen Bogen von Empirie über Simulation und Visualisierung zur Modellbildung und Evaluation. In Studien zur deiktischen Objektreferenz wurden neben sprachlichen Äußerungen unter dem Einsatz moderner Motion Capturing Technik umfangreiche Daten zum deiktischen Zeigen erhoben. Diese heterogenen Daten, bestehend aus Tracking Daten, sowie Video und Audio Aufzeichnungen, wurden annotiert und mit eigens entwickelten interaktiven Werkzeugen unter Einsatz von Techniken der Virtuellen Realität integriert und aufbereitet. Die statistische Auswertung der Daten erfolgte im Anschluß mittels der freien Statistik-Software R. Die datengetriebene Modellbildung bildet die Grundlage für die Weiterentwicklung eines unscharfen, fuzzy-basierten, Constraint Satisfaction Ansatzes zur Interpretation von Objektreferenzen. Wesentliches Ziel ist dabei eine inkrementelle, echtzeitfähige Verarbeitung, die den Einsatz in direkter Mensch-Maschine Interaktion erlaubt. Die Ergebnisse der Studie haben über die Fragestellung hinaus Einfluss auf ein Modell zur Produktion von deiktischen Ausdrücken und direkte Konsequenzen für einschlägige Theorien zur deiktischen Referenz.},
    author = {Pfeiffer, Thies and Wachsmuth, Ipke},
    booktitle = {Kognitionsforschung 2007 - Beiträge zur 8. Jahrestagung der Gesellschaft für Kognitionswissenschaft},
    editor = {Frings, Christian and Mecklinger, Axel and Opitz, Bertram and Pospeschill, Markus and Wentura, Dirk and Zimmer, Hubert D.},
    isbn = {978-3-8322-5957-0},
    keywords = {multimodal communication},
    pages = {109--110},
    publisher = {Shaker Verlag},
    title = {{Interpretation von Objektreferenzen in multimodalen Äußerungen}},
    url = {https://pub.uni-bielefeld.de/record/2426897},
    volume = {8},
    year = {2007},
    }
  • T. Pfeiffer and M. E. Latoschik, “Interactive Social Displays,” 2007.
    [BibTeX] [Abstract] [Download PDF]
    The mediation of social presence is one of the most interesting challenges of modern communication technology. The proposed metaphor of Interactive Social Displays describes new ways of interactions with multi-/crossmodal interfaces prepared for a psychologically augmented communication. A first prototype demonstrates the application of this metaphor in a teleconferencing scenario.
    @techreport{2426914,
    abstract = {The mediation of social presence is one of the most interesting challenges of modern communication technology. The proposed metaphor of Interactive Social Displays describes new ways of interactions with multi-/crossmodal interfaces prepared for a psychologically augmented communication. A first prototype demonstrates the application of this metaphor in a teleconferencing scenario.},
    author = {Pfeiffer, Thies and Latoschik, Marc E.},
    keywords = {Mediated Communication},
    title = {{Interactive Social Displays}},
    url = {https://pub.uni-bielefeld.de/record/2426914},
    year = {2007},
    }
  • T. Pfeiffer, M. Donner, M. E. Latoschik, and I. Wachsmuth, “Blickfixationstiefe in stereoskopischen VR-Umgebungen: Eine vergleichende Studie,” in Virtuelle und Erweiterte Realität. 4. Workshop der GI-Fachgruppe VR/AR, 2007, p. 113–124.
    [BibTeX] [Abstract] [Download PDF]
    Capturing the user’s attention is of great interest for human-machine interaction. This holds in particular for applications in virtual reality (VR), not least when virtual agents are used as the user interface. Current approaches to determining visual attention mostly use monocular eye trackers and therefore interpret only two-dimensional, meaning-bearing fixations relative to a projection plane. Typical stereoscopy-based VR applications, however, additionally require the fixation depth to be taken into account in order to make the depth parameter usable for interaction, for example for higher accuracy in object selection (picking). The experiment presented in this contribution shows that even a simple binocular device makes it easier to distinguish between partially occluding objects. Despite this positive result, an unrestricted improvement in selection performance cannot yet be demonstrated. The contribution concludes with a discussion of next steps aimed at further improving the presented technique.
    @inproceedings{1894519,
    abstract = {Für die Mensch-Maschine-Interaktion ist die Erfassung der Aufmerksamkeit des Benutzers von großem Interesse. Für Anwendungen in der Virtuellen Realität (VR) gilt dies insbesondere, nicht zuletzt dann, wenn Virtuelle Agenten als Benutzerschnittstelle eingesetzt werden. Aktuelle Ansätze zur Bestimmung der visuellen Aufmerksamkeit verwenden meist monokulare Eyetracker und interpretieren daher auch nur zweidimensionale bedeutungstragende Blickfixationen relativ zu einer Projektionsebene. Für typische Stereoskopie-basierte VR Anwendungen ist aber eine zusätzliche Berücksichtigung der Fixationstiefe notwendig, um so den Tiefenparameter für die Interaktion nutzbar zu machen, etwa für eine höhere Genauigkeit bei der Objektauswahl (Picking). Das in diesem Beitrag vorgestellte Experiment zeigt, dass bereits mit einem einfacheren binokularen Gerät leichter zwischen sich teilweise verdeckenden Objekten unterschieden werden kann. Trotz des positiven Ergebnisses kann jedoch noch keine uneingeschränkte Verbesserung der Selektionsleistung gezeigt werden. Der Beitrag schließt mit einer Diskussion nächster Schritte mit dem Ziel, die vorgestellte Technik weiter zu verbessern.},
    author = {Pfeiffer, Thies and Donner, Matthias and Latoschik, Marc Erich and Wachsmuth, Ipke},
    booktitle = {Virtuelle und Erweiterte Realität. 4. Workshop der GI-Fachgruppe VR/AR},
    editor = {Latoschik, Marc Erich and Fröhlich, Bernd},
    isbn = {978-3-8322-6367-6},
    keywords = {Mensch-Maschine-Interaktion, Eyetracking, Virtuelle Realität},
    pages = {113--124},
    publisher = {Shaker},
    title = {{Blickfixationstiefe in stereoskopischen VR-Umgebungen: Eine vergleichende Studie}},
    url = {https://nbn-resolving.org/urn:nbn:de:0070-pub-18945192, https://pub.uni-bielefeld.de/record/1894519},
    year = {2007},
    }
  • T. Pfeiffer, A. Kranstedt, and A. Lücking, “Sprach-Gestik Experimente mit IADE, dem Interactive Augmented Data Explorer,” in Dritter Workshop Virtuelle und Erweiterte Realität der GI-Fachgruppe VR/AR, 2006, p. 61–72.
    [BibTeX] [Abstract] [Download PDF]
    Empirical research on situated natural human communication depends on the acquisition and analysis of extensive data. The modalities through which humans can express themselves are very diverse, and the representations in which the collected data can be made available for analysis are correspondingly heterogeneous. For a study of pointing behavior when referring to objects, we developed IADE, a framework for recording, analyzing and simulating speech-gesture data. By employing techniques from interactive VR, IADE allows the synchronized recording of motion, video and audio data and supports an iterative analysis process for the collected data through convenient integrated revisualizations and simulations. IADE thus constitutes a decisive advance for our experimental methodology in linguistics.
    @inproceedings{2426853,
    abstract = {Für die empirische Erforschung situierter natürlicher menschlicher Kommunikation sind wir auf die Akquise und Auswertung umfangreicher Daten angewiesen. Die Modalitäten, über die sich Menschen ausdrücken können, sind sehr unterschiedlich. Entsprechend heterogen sind die Repräsentationen, mit denen die erhobenen Daten für die Auswertung verfügbar gemacht werden können. Für eine Untersuchung des Zeigeverhaltens bei der Referenzierung von Objekten haben wir mit IADE ein Framework für die Aufzeichnung, Analyse und Simulation von Sprach-Gestik Daten entwickelt. Durch den Einsatz von Techniken aus der interaktiven VR erlaubt IADE die synchronisierte Aufnahme von Bewegungs-, Video- und Audiodaten und unterstützt einen iterativen Auswertungsprozess der gewonnenen Daten durch komfortable integrierte Revisualisierungen und Simulationen. Damit stellt IADE einen entscheidenden Fortschritt für unsere linguistische Experimentalmethodik dar.},
    author = {Pfeiffer, Thies and Kranstedt, Alfred and Lücking, Andy},
    booktitle = {Dritter Workshop Virtuelle und Erweiterte Realität der GI-Fachgruppe VR/AR},
    editor = {Müller, Stefan and Zachmann, Gabriel},
    isbn = {3-8322-5474-9},
    keywords = {Multimodal Communication},
    pages = {61--72},
    publisher = {Shaker},
    title = {{Sprach-Gestik Experimente mit IADE, dem Interactive Augmented Data Explorer}},
    url = {https://nbn-resolving.org/urn:nbn:de:0070-pub-24268531, https://pub.uni-bielefeld.de/record/2426853},
    year = {2006},
    }
  • A. Kranstedt, A. Lücking, T. Pfeiffer, H. Rieser, and I. Wachsmuth, “Deictic object reference in task-oriented dialogue,” in Situated Communication, G. Rickheit and I. Wachsmuth, Eds., Mouton de Gruyter, 2006, p. 155–208. doi:10.1515/9783110197747.155
    [BibTeX] [Abstract] [Download PDF]
    This chapter presents an original approach towards a detailed understanding of the usage of pointing gestures accompanying referring expressions. This effort is undertaken in the context of human-machine interaction integrating empirical studies, theory of grammar and logics, and simulation techniques. In particular, we take steps to classify the role of pointing in deictic expressions and to model the focussed area of pointing gestures, the so-called pointing cone. This pointing cone serves as a central concept in a formal account of multi-modal integration at the linguistic speech-gesture interface as well as in a computational model of processing multi-modal deictic expressions.
    @inbook{1894485,
    abstract = {This chapter presents an original approach towards a detailed understanding of the usage of pointing gestures accompanying referring expressions. This effort is undertaken in the context of human-machine interaction integrating empirical studies, theory of grammar and logics, and simulation techniques. In particular, we take steps to classify the role of pointing in deictic expressions and to model the focussed area of pointing gestures, the so-called pointing cone. This pointing cone serves as a central concept in a formal account of multi-modal integration at the linguistic speech-gesture interface as well as in a computational model of processing multi-modal deictic expressions.},
    author = {Kranstedt, Alfred and Lücking, Andy and Pfeiffer, Thies and Rieser, Hannes and Wachsmuth, Ipke},
    booktitle = {Situated Communication},
    editor = {Rickheit, Gert and Wachsmuth, Ipke},
    isbn = {978-3-11-018897-4},
    keywords = {Multimodal Communication},
    pages = {155--208},
    publisher = {Mouton de Gruyter},
    title = {{Deictic object reference in task-oriented dialogue}},
    url = {https://pub.uni-bielefeld.de/record/1894485},
    doi = {10.1515/9783110197747.155},
    year = {2006},
    }
  • P. Weiß, T. Pfeiffer, H. Eikmeyer, and G. Rickheit, “Processing Instructions,” in Situated Communication, G. Rickheit and I. Wachsmuth, Eds., Mouton de Gruyter, 2006, p. 31–76.
    [BibTeX] [Abstract] [Download PDF]
    Instructions play an important role in everyday communication, e.g. in task-oriented dialogs. Based on a (psycho-)linguistic theoretical background, which classifies instructions as requests, we conducted experiments using a cross-modal experimental design in combination with a reaction time paradigm in order to get insights into human instruction processing. We concentrated on the interpretation of basic single sentence instructions. Here, we especially examined the effects of the specificity of verbs, object names, and prepositions in interaction with factors of the visual object context regarding an adequate reference resolution. We were able to show that linguistic semantic and syntactic factors as well as visual context information influence the interpretation of instructions. Especially the context information proves to be very important. Above and beyond the relevance for basic research, these results are also important for the design of human-computer interfaces capable of understanding natural language. Thus, following the experimental-simulative approach, we also pursued the processing of instructions from the perspective of computer science. Here, a natural language processing interface created for a virtual reality environment served as the basis for the simulation of the empirical findings. The comparison of human vs. virtual system performance using a local performance measure for instruction understanding based on fuzzy constraint satisfaction led to further insights concerning the complexity of instruction processing in humans and artificial systems. Using selected examples, we were able to show that the visual context has a comparable influence on the performance of both systems, whereas this approach is limited when it comes to explaining some effects due to variations of the linguistic structure. In order to get deeper insights into the timing and interaction of the sub-processes relevant for instruction understanding and to model these effects in the computer simulation, more specific data on human performance are necessary, e.g. by using eye-tracking techniques. In the long run, such an approach will result in the development of a more natural and cognitively adequate human-computer interface.
    @inbook{1895811,
    abstract = {Instructions play an important role in everyday communication, e.g. in task-oriented dialogs. Based on a (psycho-)linguistic theoretical background, which classifies instructions as requests, we conducted experiments using a cross-modal experimental design in combination with a reaction time paradigm in order to get insights in human instruction processing. We concentrated on the interpretation of basic single sentence instructions. Here, we especially examined the effects of the specificity of verbs, object names, and prepositions in interaction with factors of the visual object context regarding an adequate reference resolution. We were able to show that linguistic semantic and syntactic factors as well as visual context information context influence the interpretation of instructions. Especially the context information proves to be very important. Above and beyond the relevance for basic research, these results are also important for the design of human-computer interfaces capable of understanding natural language. Thus, following the experimental-simulative approach, we also pursued the processing of instructions from the perspective of computer science. Here, a natural language processing interface created for a virtual reality environment served as basis for the simulation of the empirical findings. The comparison of human vs. virtual system performance using a local performance measure for instruction understanding based on fuzzy constraint satisfaction led to further insights concerning the complexity of instruction processing in humans and artificial systems. Using selected examples, we were able to show that the visual context has a comparable influence on the performance of both systems, whereas this approach is limited when it comes to explaining some effects due to variations of the linguistic structure. In order to get deeper insights into the timing and interaction of the sub-processes relevant for instruction understanding and to model these effects in the computer simulation, more specific data on human performance are necessary, e.g. by using eye-tracking techniques. In the long run, such an approach will result in the development of a more natural and cognitively adequate human-computer interface.},
    author = {Weiß, Petra and Pfeiffer, Thies and Eikmeyer, Hans-Jürgen and Rickheit, Gert},
    booktitle = {Situated Communication},
    editor = {Rickheit, Gert and Wachsmuth, Ipke},
    isbn = {978-3-11-018897-4},
    keywords = {Multimodal Communication},
    pages = {31--76},
    publisher = {Mouton de Gruyter},
    title = {{Processing Instructions}},
    url = {https://pub.uni-bielefeld.de/record/1895811},
    year = {2006},
    }
  • A. Kranstedt, A. Lücking, T. Pfeiffer, H. Rieser, and I. Wachsmuth, “Deixis: How to determine demonstrated objects using a pointing cone,” in Gesture in Human-Computer Interaction and Simulation, 2006, p. 300–311. doi:10.1007/11678816_34
    [BibTeX] [Abstract] [Download PDF]
    We present a collaborative approach towards a detailed understanding of the usage of pointing gestures accompanying referring expressions. This effort is undertaken in the context of human-machine interaction integrating empirical studies, theory of grammar and logics, and simulation techniques. In particular, we attempt to measure the precision of the focussed area of a pointing gesture, the so-called pointing cone. The pointing cone serves as a central concept in a formal account of multi-modal integration at the linguistic speech-gesture interface as well as in a computational model of processing multi-modal deictic expressions.
    @inproceedings{1599335,
    abstract = {We present a collaborative approach towards a detailed understanding of the usage of pointing gestures accompanying referring expressions. This effort is undertaken in the context of human-machine interaction integrating empirical studies, theory of grammar and logics, and simulation techniques. In particular, we attempt to measure the precision of the focussed area of a pointing gesture, the so-called pointing cone. The pointing cone serves as a central concept in a formal account of multi-modal integration at the linguistic speech-gesture interface as well as in a computational model of processing multi-modal deictic expressions.},
    author = {Kranstedt, Alfred and Lücking, Andy and Pfeiffer, Thies and Rieser, Hannes and Wachsmuth, Ipke},
    booktitle = {Gesture in Human-Computer Interaction and Simulation},
    editor = {Gibet, Sylvie and Courty, Nicolas and Kamp, Jean-François},
    issn = {0302-9743},
    keywords = {Multimodal Communication},
    pages = {300--311},
    publisher = {Springer},
    title = {{Deixis: How to determine demonstrated objects using a pointing cone}},
    url = {https://pub.uni-bielefeld.de/record/1599335},
    doi = {10.1007/11678816_34},
    year = {2006},
    }
  • H. Flitter, T. Pfeiffer, and G. Rickheit, “Psycholinguistic experiments on spatial relations using stereoscopic presentation,” in Situated Communication, G. Rickheit and I. Wachsmuth, Eds., Mouton de Gruyter, 2006, p. 127–153.
    [BibTeX] [Abstract] [Download PDF]
    This contribution presents investigations of the usage of computer generated 3D stimuli for psycholinguistic experiments. In the first part, we introduce VDesigner. VDesigner is a visual programming environment that operates in two different modes, a design mode to implement the materials and the structure of an experiment, and a runtime mode to actually run the experiment. We have extended VDesigner to support interactive experimentation in 3D. In the second part, we describe a practical application of the programming environment. We have replicated a previous 2½D study of the production of spatial terms in a 3D setting, with the objective of investigating the effect of the presentation modes (2½D vs. 3D) on the choice of the referential system. In each trial, on being presented with a scene, the participants had to verbally specify the position of a target object in relation to a reference object. We recorded the answers of the participants as well as their reaction times. The results suggest that stereoscopic 3D presentations are a promising technology to elicit a more natural behavior of participants in computer-based experiments.
    @inbook{1894456,
    abstract = {This contribution presents investigations of the usage of computer generated 3D stimuli for psycholinguistic experiments. In the first part, we introduce VDesigner. VDesigner is a visual programming environment that operates in two different modes, a design mode to implement the materials and the structure of an experiment, and a runtime mode to actually run the experiment. We have extended VDesigner to support interactive experimentation in 3D. In the second part, we describe a practical application of the programming environment. We have replicated a previous 2½D study of the production of spatial terms in a 3D setting, with the objective of investigating the effect of the presentation modes (2½D vs. 3D) on the choice of the referential system. In each trial, on being presented with a scene, the participants had to verbally specify the position of a target object in relation to a reference object. We recorded the answers of the participants as well as their reaction times. The results suggest that stereoscopic 3D presentations are a promising technology to elicit a more natural behavior of participants in computer-based experiments.},
    author = {Flitter, Helmut and Pfeiffer, Thies and Rickheit, Gert},
    booktitle = {Situated Communication},
    editor = {Rickheit, Gert and Wachsmuth, Ipke},
    isbn = {3-11018-897-X},
    keywords = {Multimodal Communication},
    pages = {127--153},
    publisher = {Mouton de Gruyter},
    title = {{Psycholinguistic experiments on spatial relations using stereoscopic presentation}},
    url = {https://pub.uni-bielefeld.de/record/1894456},
    year = {2006},
    }
  • A. Kranstedt, A. Lücking, T. Pfeiffer, H. Rieser, and M. Staudacher, “Measuring and Reconstructing Pointing in Visual Contexts,” in Proceedings of the brandial 2006 – The 10th Workshop on the Semantics and Pragmatics of Dialogue, 2006, p. 82–89.
    [BibTeX] [Abstract] [Download PDF]
    We describe an experiment to gather original data on geometrical aspects of pointing. In particular, we are focusing upon the concept of the pointing cone, a geometrical model of a pointing’s extension. In our setting we employed methodological and technical procedures of a new type to integrate data from annotations as well as from tracker recordings. We combined exact information on position and orientation with rater’s classifications. Our first results seem to challenge classical linguistic and philosophical theories of demonstration in that they advise to separate pointings from reference.
    @inproceedings{1894467,
    abstract = {We describe an experiment to gather original data on geometrical aspects of pointing. In particular, we are focusing upon the concept of the pointing cone, a geometrical model of a pointing’s extension. In our setting we employed methodological and technical procedures of a new type to integrate data from annotations as well as from tracker recordings. We combined exact information on position and orientation with rater’s classifications. Our first results seem to challenge classical linguistic and philosophical theories of demonstration in that they advise to separate pointings from reference.},
    author = {Kranstedt, Alfred and Lücking, Andy and Pfeiffer, Thies and Rieser, Hannes and Staudacher, Marc},
    booktitle = {Proceedings of the brandial 2006 - The 10th Workshop on the Semantics and Pragmatics of Dialogue},
    editor = {Schlangen, David and Fernández, Raquel},
    isbn = {3-939469-29-7},
    keywords = {Multimodal Communication},
    pages = {82--89},
    publisher = {Universitätsverlag Potsdam},
    title = {{Measuring and Reconstructing Pointing in Visual Contexts}},
    url = {https://pub.uni-bielefeld.de/record/1894467},
    year = {2006},
    }
  • M. Weber, T. Pfeiffer, and B. Jung, “Pr@senZ – P@CE: Mobile Interaction with Virtual Reality,” in MOBILE HCI 05 Proceedings of the 7th International Conference on Human Computer Interaction with Mobile Devices and Services, 2005, p. 351–352.
    [BibTeX] [Abstract] [Download PDF]
    Recently videoconferencing has been extended from human face-to-face communication to human machine interaction with Virtual Environments (VE)[6]. Relying on established videoconferencing (VC) protocol standards this thin client solution does not require specialised 3D soft- or hardware and scales well to multimedia enabled mobile devices. This would bring a whole range of new applications to the mobile platform. To facilitate our research in mobile interaction the Open Source project P@CE has been started to bring a full-featured VC client to the Pocket PC platform.
    @inproceedings{1894818,
    abstract = {Recently videoconferencing has been extended from human face-to-face communication to human machine interaction with Virtual Environments (VE)[6]. Relying on established videoconferencing (VC) protocol standards this thin client solution does not require specialised 3D soft- or hardware and scales well to multimedia enabled mobile devices. This would bring a whole range of new applications to the mobile platform. To facilitate our research in mobile interaction the Open Source project P@CE has been started to bring a full-featured VC client to the Pocket PC platform.},
    author = {Weber, Matthias and Pfeiffer, Thies and Jung, Bernhard},
    booktitle = {MOBILE HCI 05 Proceedings of the 7th International Conference on Human Computer Interaction with Mobile Devices and Services},
    editor = {Tscheligi, Manfred and Bernhaupt, Regina and Mihalic, Kristijan},
    isbn = {1-59593-089-2},
    keywords = {Mediated Communication},
    pages = {351--352},
    publisher = {ACM},
    title = {{Pr@senZ - P@CE: Mobile Interaction with Virtual Reality}},
    url = {https://pub.uni-bielefeld.de/record/1894818},
    year = {2005},
    }
  • T. Pfeiffer, M. Weber, and B. Jung, “Ubiquitous Virtual Reality: Accessing Shared Virtual Environments through Videoconferencing Technology,” in Theory and Practice of Computer Graphics 2005, 2005, p. 209–216.
    [BibTeX] [Abstract] [Download PDF]
    This paper presents an alternative to existing methods for remotely accessing Virtual Reality (VR) systems. Common solutions are based on specialised software and/or hardware capable of rendering 3D content, which not only restricts accessibility to specific platforms but also increases the barrier for non expert users. Our approach addresses new audiences by making existing Virtual Environments (VEs) ubiquitously accessible. Its appeal is that a large variety of clients, like desktop PCs and handhelds, are ready to connect to VEs out of the box. We achieve this by combining established videoconferencing protocol standards with a server based interaction handling. Currently interaction is based on natural speech, typed textual input and visual feedback, but extensions to support natural gestures are possible and planned. This paper presents the conceptual framework enabling videoconferencing with collaborative VEs as well as an example application for a virtual prototyping system.
    @inproceedings{1894812,
    abstract = {This paper presents an alternative to existing methods for remotely accessing Virtual Reality (VR) systems. Common solutions are based on specialised software and/or hardware capable of rendering 3D content, which not only restricts accessibility to specific platforms but also increases the barrier for non expert users. Our approach addresses new audiences by making existing Virtual Environments (VEs) ubiquitously accessible. Its appeal is that a large variety of clients, like desktop PCs and handhelds, are ready to connect to VEs out of the box. We achieve this by combining established videoconferencing protocol standards with a server based interaction handling. Currently interaction is based on natural speech, typed textual input and visual feedback, but extensions to support natural gestures are possible and planned. This paper presents the conceptual framework enabling videoconferencing with collaborative VEs as well as an example application for a virtual prototyping system.},
    author = {Pfeiffer, Thies and Weber, Matthias and Jung, Bernhard},
    booktitle = {Theory and Practice of Computer Graphics 2005},
    editor = {Lever, Louise and McDerby, Marc},
    isbn = {3-905673-56-8},
    keywords = {Mediated Communication},
    pages = {209--216},
    publisher = {Eurographics Association},
    title = {{Ubiquitous Virtual Reality: Accessing Shared Virtual Environments through Videoconferencing Technology}},
    url = {https://pub.uni-bielefeld.de/record/1894812},
    year = {2005},
    }
  • T. Pfeiffer and M. E. Latoschik, “Resolving Object References in Multimodal Dialogues for Immersive Virtual Environments,” in Proceedings of the IEEE Virtual Reality 2004, 2004, p. 35–42.
    [BibTeX] [Abstract] [Download PDF]
    This paper describes the underlying concepts and the technical implementation of a system for resolving multimodal references in Virtual Reality (VR). In this system the temporal and semantic relations intrinsic to referential utterances are expressed as a constraint satisfaction problem, where the propositional value of each referential unit during a multimodal dialogue updates incrementally the active set of constraints. As the system is based on findings of human cognition research it also regards, e.g., constraints implicitly assumed by human communicators. The implementation takes VR related real-time and immersive conditions into account and adapts its architecture to well known scene-graph based design patterns by introducing a so-called reference resolution engine. Regarding the conceptual work as well as regarding the implementation, special care has been taken to allow further refinements and modifications to the underlying resolving processes on a high level basis.
    @inproceedings{1894806,
    abstract = {This paper describes the underlying concepts and the technical implementation of a system for resolving multimodal references in Virtual Reality (VR). In this system the temporal and semantic relations intrinsic to referential utterances are expressed as a constraint satisfaction problem, where the propositional value of each referential unit during a multimodal dialogue updates incrementally the active set of constraints. As the system is based on findings of human cognition research it also regards, e.g., constraints implicitly assumed by human communicators. The implementation takes VR related real-time and immersive conditions into account and adapts its architecture to well known scene-graph based design patterns by introducing a so-called reference resolution engine. Regarding the conceptual work as well as regarding the implementation, special care has been taken to allow further refinements and modifications to the underlying resolving processes on a high level basis.},
    author = {Pfeiffer, Thies and Latoschik, M. E.},
    booktitle = {Proceedings of the IEEE Virtual Reality 2004},
    editor = {Ikei, Yasushi and Göbel, Martin and Chen, Jim},
    isbn = {0-7803-8415-6},
    keywords = {inform::Constraint Satisfaction inform::Multimodality inform::Virtual Reality ling::Natural Language Processing ling::Reference Resolution, Multimodal Communication},
    pages = {35--42},
    title = {{Resolving Object References in Multimodal Dialogues for Immersive Virtual Environments}},
    url = {https://pub.uni-bielefeld.de/record/1894806},
    year = {2004},
    }
  • T. Pfeiffer and I. Voss, “Integrating Knowledge Bases Using UML Metamodels,” 2003.
    [BibTeX] [Abstract] [Download PDF]
    When merging different knowledge bases one has to cope with the problem of classifying and linking concepts as well as the possibly heterogeneous representations the knowledge is expressed in. We are presenting an implementation that follows the Model Driven Architecture (MDA) [Miller and Mukerji, 2003] approach defined by the Object Management Group (OMG). Metamodels defined in the Unified Modeling Language (UML) are used to implement different knowledge representation formalisms. Knowledge is expressed as a Model instantiating the Metamodel. Integrating Metamodels are defined for merging knowledge distributed over different knowledge bases.
    @techreport{2426757,
    abstract = {When merging different knowledge bases one has to cope with the problem of classifying and linking concepts as well as the possibly heterogeneous representations the knowledge is expressed in. We are presenting an implementation that follows the Model Driven Architecture (MDA) [Miller and Mukerji, 2003] approach defined by the Object Management Group (OMG). Metamodels defined in the Unified Modeling Language (UML) are used to implement different knowledge representation formalisms. Knowledge is expressed as a Model instantiating the Metamodel. Integrating Metamodels are defined for merging knowledge distributed over different knowledge bases.},
    author = {Pfeiffer, Thies and Voss, Ian},
    keywords = {Artificial Intelligence},
    title = {{Integrating Knowledge Bases Using UML Metamodels}},
    url = {https://pub.uni-bielefeld.de/record/2426757},
    year = {2003},
    }
  • T. Pfeiffer, “Eine Referenzauflösung für die dynamische Anwendung in Konstruktionssituationen in der Virtuellen Realität,” 2003.
    [BibTeX] [Download PDF]
    @techreport{1894517,
    author = {Pfeiffer, Thies},
    keywords = {inform::Constraint Satisfaction inform::Multimodality inform::Virtual Reality ling::Natural Language Processing, Multimodal Communication},
    publisher = {Faculty of Technology, University of Bielefeld},
    title = {{Eine Referenzauflösung für die dynamische Anwendung in Konstruktionssituationen in der Virtuellen Realität}},
    url = {https://pub.uni-bielefeld.de/record/1894517},
    year = {2003},
    }
  • T. Pfeiffer, I. Voss, and M. E. Latoschik, “Resolution of Multimodal Object References using Conceptual Short Term Memory,” in Proceedings of the EuroCogSci03, 2003, p. 426.
    [BibTeX] [Abstract] [Download PDF]
    This poster presents cognitive-motivated aspects of a technical system for the resolution of references to objects within an assembly-task domain. The research is integrated in the Collaborative Research Center SFB 360 which is concerned with situated artificial communicators. One application scenario consists of a task-oriented discourse between an instructor and a constructor who collaboratively build aggregates from a wooden toy kit (“Baufix”), or from generic CAD parts. In our current setting this scenario is embedded in a virtual reality (VR) installation, where the human user, taking the role of the instructor, guides the artificial constructor (embodied by the ECA Max) through the assembly process by means of multimodal task descriptions (see Figure 1). The system handles instructions like: “Plug the left red screw from above in the middle hole of the wing and turn it this way.” accompanied by coverbal deictic and mimetic gestures (see Latoschik, 2001).
    @inproceedings{1894841,
    abstract = {This poster presents cognitive-motivated aspects of a technical system for the resolution of references to objects within an assembly-task domain. The research is integrated in the Collaborative Research Center SFB 360 which is concerned with situated artificial communicators. One application scenario consists of a task-oriented discourse between an instructor and a constructor who collaboratively build aggregates from a wooden toy kit (“Baufix”), or from generic CAD parts. In our current setting this scenario is embedded in a virtual reality (VR) installation, where the human user, taking the role of the instructor, guides the artificial constructor (embodied by the ECA Max) through the assembly process by means of multimodal task descriptions (see Figure 1). The system handles instructions like: “Plug the left red screw from above in the middle hole of the wing and turn it this way.” accompanied by coverbal deictic and mimetic gestures (see Latoschik, 2001).},
    author = {Pfeiffer, Thies and Voss, Ian and Latoschik, Marc Erich},
    booktitle = {Proceedings of the EuroCogSci03},
    editor = {Schmalhofer, F. and Young, R.},
    keywords = {inform::Multimodality inform::Virtual Reality ling::Reference Resolution ling::Natural Language Processing, Artificial Intelligence, Multimodal Communication},
    pages = {426},
    publisher = {Lawrence Erlbaum Associates Inc},
    title = {{Resolution of Multimodal Object References using Conceptual Short Term Memory}},
    url = {https://pub.uni-bielefeld.de/record/1894841},
    year = {2003},
    }
  • B. Jung, T. Pfeiffer, and J. Zakotnik, “Natural Language Based Virtual Prototyping on the Web,” in Proceedings Structured Design of Virtual Environments and 3D-Components, 2002, p. 101–110.
    [BibTeX] [Abstract] [Download PDF]
    This contribution describes a WWW-based multi-user system for concurrent virtual prototyping. A 3D scene of CAD parts is presented to the users in the web browser. By instructing the system using simple natural language commands, complex aggregates can be assembled from the basic parts. The current state of the assembly is instantly published to all system users who can discuss design choices in a chat area. The implementation builds on an existing system for virtual assembly made available as a web service. The client side components are fully implemented as Java applets and require no plugin for visualization of 3D content. Http tunneled messaging between web clients and server ensures system accessibility from any modern web browser even behind firewalls. The system is first to demonstrate natural language based virtual prototyping on the web.
    @inproceedings{1894462,
    abstract = {This contribution describes a WWW-based multi-user system for concurrent virtual prototyping. A 3D scene of CAD parts is presented to the users in the web browser. By instructing the system using simple natural language commands, complex aggregates can be assembled from the basic parts. The current state of the assembly is instantly published to all system users who can discuss design choices in a chat area. The implementation builds on an existing system for virtual assembly made available as a web service. The client side components are fully implemented as Java applets and require no plugin for visualization of 3D content. Http tunneled messaging between web clients and server ensures system accessibility from any modern web browser even behind firewalls. The system is first to demonstrate natural language based virtual prototyping on the web.},
    author = {Jung, Bernhard and Pfeiffer, Thies and Zakotnik, Jure},
    booktitle = {Proceedings Structured Design of Virtual Environments and 3D-Components},
    editor = {Geiger et al., C.},
    keywords = {inform::Web inform::Internet inform::Collaborative Environments},
    pages = {101--110},
    publisher = {Shaker},
    title = {{Natural Language Based Virtual Prototyping on the Web}},
    url = {https://pub.uni-bielefeld.de/record/1894462},
    year = {2002},
    }
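
Two of the entries above (Kranstedt et al. 2006, "Deixis: How to determine demonstrated objects using a pointing cone" and "Measuring and Reconstructing Pointing in Visual Contexts") model the region singled out by a pointing gesture as a pointing cone. As a purely illustrative aid, the following minimal sketch shows one common way such a cone test can be expressed geometrically; the class and function names, the aperture value, and the clamping details are assumptions for illustration and are not taken from the publications.

    # Illustrative sketch: test whether a target position lies inside a pointing
    # cone defined by an apex (hand/finger position), a pointing direction and a
    # full opening angle. All names and values are hypothetical.
    from dataclasses import dataclass
    import math

    @dataclass
    class Vec3:
        x: float
        y: float
        z: float

        def sub(self, o: "Vec3") -> "Vec3":
            return Vec3(self.x - o.x, self.y - o.y, self.z - o.z)

        def dot(self, o: "Vec3") -> float:
            return self.x * o.x + self.y * o.y + self.z * o.z

        def norm(self) -> float:
            return math.sqrt(self.dot(self))

    def in_pointing_cone(apex: Vec3, direction: Vec3, aperture_deg: float, target: Vec3) -> bool:
        """True if `target` falls within the cone of full opening angle `aperture_deg`."""
        to_target = target.sub(apex)
        if to_target.norm() == 0.0 or direction.norm() == 0.0:
            return False
        cos_angle = to_target.dot(direction) / (to_target.norm() * direction.norm())
        angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
        return angle <= aperture_deg / 2.0

    # Example: a target slightly off the pointing ray, with an assumed aperture of 30 degrees.
    print(in_pointing_cone(Vec3(0, 0, 0), Vec3(0, 0, 1), 30.0, Vec3(0.1, 0.0, 1.0)))  # True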

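Several of the reference-resolution papers above (e.g. Pfeiffer and Latoschik 2004; Weiß et al. 2006) treat the interpretation of an expression such as "the left red screw" as a constraint satisfaction problem over the objects in the scene, with a fuzzy measure of how well each candidate satisfies the constraints. The sketch below illustrates only that general idea: the object attributes, the individual constraint functions, and the use of a minimum as the fuzzy conjunction are simplifying assumptions and do not reproduce the systems described in the papers.

    # Illustrative sketch: pick the best-matching referent for "the left red screw"
    # by scoring each candidate against soft constraints and combining the scores
    # with a minimum (a simple fuzzy conjunction). Attributes and constraint
    # functions are hypothetical.

    def colour_constraint(obj: dict) -> float:
        # 1.0 if the colour matches the utterance, 0.0 otherwise.
        return 1.0 if obj["colour"] == "red" else 0.0

    def type_constraint(obj: dict) -> float:
        # 1.0 if the object type matches the head noun, 0.0 otherwise.
        return 1.0 if obj["type"] == "screw" else 0.0

    def leftness_constraint(obj: dict) -> float:
        # Graded "leftness": map the x coordinate (assumed range -2..2) to [0, 1],
        # with smaller x (further left) yielding a higher score.
        return max(0.0, min(1.0, (2.0 - obj["x"]) / 4.0))

    def resolve_reference(candidates: list[dict]) -> dict:
        """Return the candidate that best satisfies all constraints."""
        constraints = [colour_constraint, type_constraint, leftness_constraint]

        def score(obj: dict) -> float:
            # Fuzzy AND: the overall fit is only as good as the weakest constraint.
            return min(c(obj) for c in constraints)

        return max(candidates, key=score)

    scene = [
        {"id": "screw-1", "type": "screw", "colour": "red", "x": -1.5},
        {"id": "screw-2", "type": "screw", "colour": "red", "x": 1.8},
        {"id": "bolt-1", "type": "bolt", "colour": "red", "x": -2.0},
    ]
    print(resolve_reference(scene)["id"])  # screw-1
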
Academic Career

  1. 2019

    Professor for Human-Computer Interaction

    University of Applied Sciences Emden/Leer
  2. 2013

    Head of the Virtual Reality Laboratory

    Center of Excellence Cognitive Interaction Technology, Central Lab Facilities, Bielefeld
  3. 2010

    Akademischer Rat (Assistant Professor) / Knowledge-Based Systems Group (Artificial Intelligence)

    Bielefeld University, Faculty of Technology
  4. 2010

    Doctor rerum naturalium

    Bielefeld University
  5. 2003-2006

    Researcher in the CRC 360, Situated Artificial Communicators

    Bielefeld University, Faculty of Linguistics
  6. 2003

    Diplom-Informatiker (Diploma in Computer Science)

    Bielefeld University, Faculty of Technology

Awards

  • 2022
    AVRiL 2022 in Gold as overall winner
    VR/AR-Learning – Working Group of the GI Special Interest Groups E-Learning & VR/AR