Publications

2021

  • S. Garcia Fracaro, P. Chan, T. Gallagher, Y. Tehreem, R. Toyoda, K. Bernaerts, J. Glassey, T. Pfeiffer, B. Slof, S. Wachsmuth, and M. Wilk, “Towards design guidelines for virtual reality training for the chemical industry,” Education for Chemical Engineers, 2021. doi:10.1016/j.ece.2021.01.014
    [BibTeX] [Abstract] [Download PDF]

    Operator training in the chemical industry is important because of the potentially hazardous nature of procedures and the way operators’ mistakes can have serious consequences on process operation and safety. Currently, operator training is facing some challenges, such as high costs, safety limitations and time constraints. Also, there have been some indications of a lack of engagement of employees during mandatory training. Immersive technologies can provide solutions to these challenges. Specifically, virtual reality (VR) has the potential to improve the way chemical operators experience training sessions, increasing motivation, virtually exposing operators to unsafe situations, and reducing classroom training time. In this paper, we present research being conducted to develop a virtual reality training solution as part of the EU Horizon 2020 CHARMING Project, a project focusing on the education of current and future chemical industry stakeholders. This paper includes the design principles for a virtual reality training environment including the features that enhance the effectiveness of virtual reality training such as game-based learning elements, learning analytics, and assessment methods. This work can assist those interested in exploring the potential of virtual reality training environments in the chemical industry from a multidisciplinary perspective.

    @article{GARCIAFRACARO2021,
    title = {Towards Design Guidelines for Virtual Reality Training for the Chemical Industry},
    journal = {Education for Chemical Engineers},
    year = {2021},
    issn = {1749-7728},
    doi = {10.1016/j.ece.2021.01.014},
    url = {https://www.sciencedirect.com/science/article/pii/S1749772821000142},
    author = {Sofia {Garcia Fracaro} and Philippe Chan and Timothy Gallagher and Yusra Tehreem and Ryo Toyoda and Kristel Bernaerts and Jarka Glassey and Thies Pfeiffer and Bert Slof and Sven Wachsmuth and Michael Wilk},
    keywords = {Virtual Reality, Chemical industry, Operator training, Learning analytics, Game-based learning, Assessment},
    abstract = {Operator training in the chemical industry is important because of the potentially hazardous nature of procedures and the way operators' mistakes can have serious consequences on process operation and safety. Currently, operator training is facing some challenges, such as high costs, safety limitations and time constraints. Also, there have been some indications of a lack of engagement of employees during mandatory training. Immersive technologies can provide solutions to these challenges. Specifically, virtual reality (VR) has the potential to improve the way chemical operators experience training sessions, increasing motivation, virtually exposing operators to unsafe situations, and reducing classroom training time. In this paper, we present research being conducted to develop a virtual reality training solution as part of the EU Horizon 2020 CHARMING Project, a project focusing on the education of current and future chemical industry stakeholders. This paper includes the design principles for a virtual reality training environment including the features that enhance the effectiveness of virtual reality training such as game-based learning elements, learning analytics, and assessment methods. This work can assist those interested in exploring the potential of virtual reality training environments in the chemical industry from a multidisciplinary perspective.}
    }

2020

  • A. M. Monteiro and T. Pfeiffer, “Virtual reality in second language acquisition research: a case on Amazon Sumerian.” 2020, pp. 125-128. doi:10.33965/icedutech2020_202002R018
    [BibTeX] [Abstract] [Download PDF]

    Virtual reality (VR) has gained increasing academic attention in recent years, and a possible reason for that might be its spread-out applications across different sectors of life. Since the advent of the WebVR 1.0 API (application program interface), released in 2016, it has become easier for developers, without extensive knowledge of programming and modeling of 3D objects, to build and host applications that can be accessed anywhere by a minimum setup of devices. The development of WebVR, now continued as WebXR, is, therefore, especially relevant for research on education and teaching since experiments in VR had required not only expertise in the computer science domain but were also dependent on state-of-the-art hardware, which could have been limiting factors for researchers and teachers. This paper presents the result of a project conducted at CITEC (Cluster of Excellence Cognitive Interaction Technology), Bielefeld University, Germany, which intended to teach English for a specific purpose in a VR environment using Amazon Sumerian, a web-based service. Contributions and limitations of this project are also discussed.

    @inproceedings{monteiro2020virtual,
    author = {Monteiro, Ana Maria and Pfeiffer, Thies},
    year = {2020},
    month = {02},
    pages = {125-128},
    title = {Virtual Reality in Second Language Acquisition Research: A Case on Amazon Sumerian},
    doi = {10.33965/icedutech2020_202002R018},
    url = {http://www.iadisportal.org/digital-library/virtual-reality-in-second-language-acquisition-research-a-case-on-amazon-sumerian},
    keywords = {Virtual Reality, Second Language Acquisition, WebVR},
    abstract = {Virtual reality (VR) has gained increasing academic attention in recent years, and a possible reason for that might be its spread-out applications across different sectors of life. Since the advent of the WebVR 1.0 API (application program interface), released in 2016, it has become easier for developers, without extensive knowledge of programming and modeling of 3D objects, to build and host applications that can be accessed anywhere by a minimum setup of devices. The development of WebVR, now continued as WebXR, is, therefore, especially relevant for research on education and teaching since experiments in VR had required not only expertise in the computer science domain but were also dependent on state-of-the-art hardware, which could have been limiting factors for researchers and teachers. This paper presents the result of a project conducted at CITEC (Cluster of Excellence Cognitive Interaction Technology), Bielefeld University, Germany, which intended to teach English for a specific purpose in a VR environment using Amazon Sumerian, a web-based service. Contributions and limitations of this project are also discussed.}
    }

  • E. Lampen, J. Lehwald, and T. Pfeiffer, “A context-aware assistance framework for implicit interaction with an augmented human,” in Virtual, Augmented and Mixed Reality. Industrial and Everyday Life Applications, Cham, 2020, pp. 91–110.
    [BibTeX] [Abstract]

    The automotive industry is currently facing massive challenges. Shorter product life cycles together with mass customization lead to a high complexity for manual assembly tasks. This induces the need for effective manual assembly assistances which guide the worker faultlessly through different assembly steps while simultaneously decreasing their completion time and cognitive load. While in the literature a simulation-based assistance visualizing an augmented digital human was proposed, it lacks the ability to incorporate knowledge about the context of an assembly scenario through arbitrary sensor data. Within this paper, a general framework for the modular acquisition, interpretation and management of context is presented. Furthermore, a novel context-aware assistance application in augmented reality is introduced which enhances the previously proposed simulation-based assistance method by several context-aware features. Finally, a preliminary study (N = 6) is conducted to give a first insight into the effectiveness of context-awareness for the simulation-based assistance with respect to subjective perception criteria. The results suggest that the user experience is improved by context-awareness in general and the developed context-aware features were overall perceived as useful in terms of error, time and cognitive load reduction as well as motivational increase. However, the developed software architecture offers potential for improvement and future research considering performance parameters is mandatory.

    @inproceedings{10.1007/978-3-030-49698-2_7,
    author="Lampen, Eva and Lehwald, Jannes and Pfeiffer, Thies",
    editor="Chen, Jessie Y. C. and Fragomeni, Gino",
    title="A Context-Aware Assistance Framework for Implicit Interaction with an Augmented Human",
    booktitle="Virtual, Augmented and Mixed Reality. Industrial and Everyday Life Applications",
    year="2020",
    publisher="Springer International Publishing",
    address="Cham",
    pages="91--110",
    abstract="The automotive industry is currently facing massive challenges. Shorter product life cycles together with mass customization lead to a high complexity for manual assembly tasks. This induces the need for effective manual assembly assistances which guide the worker faultlessly through different assembly steps while simultaneously decrease their completion time and cognitive load. While in the literature a simulation-based assistance visualizing an augmented digital human was proposed, it lacks the ability to incorporate knowledge about the context of an assembly scenario through arbitrary sensor data. Within this paper, a general framework for the modular acquisition, interpretation and management of context is presented. Furthermore, a novel context-aware assistance application in augmented reality is introduced which enhances the previously proposed simulation-based assistance method by several context-aware features. Finally, a preliminary study (N = 6) is conducted to give a first insight into the effectiveness of context-awareness for the simulation-based assistance with respect to subjective perception criteria. The results suggest that the user experience is improved by context-awareness in general and the developed context-aware features were overall perceived as useful in terms of error, time and cognitive load reduction as well as motivational increase. However, the developed software architecture offers potential for improvement and future research considering performance parameters is mandatory.",
    isbn="978-3-030-49698-2"
    }
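
    The “modular acquisition, interpretation and management of context” described in the abstract above can be pictured as a small publish/subscribe core. The following is a minimal sketch only; all class, method and key names are illustrative assumptions, not the authors’ implementation:

    from collections import defaultdict
    from typing import Callable

    class ContextManager:
        """Collects context facts and notifies subscribers on change."""
        def __init__(self):
            self._subscribers = defaultdict(list)
            self._facts = {}

        def subscribe(self, key: str, callback: Callable) -> None:
            self._subscribers[key].append(callback)

        def update(self, key: str, value) -> None:
            self._facts[key] = value
            for callback in self._subscribers[key]:
                callback(value)

    # An "interpreter" module turns raw sensor readings into a symbolic fact.
    def interpret_hand_position(raw_xyz, manager: ContextManager) -> None:
        near_shelf = raw_xyz[2] < 0.5  # assumed threshold in metres
        manager.update("worker.reaching", near_shelf)

    manager = ContextManager()
    manager.subscribe("worker.reaching", lambda v: print("reaching:", v))
    interpret_hand_position((0.1, 1.2, 0.3), manager)  # prints: reaching: True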

  • J. Pfeiffer, T. Pfeiffer, M. Meißner, and E. Weiß, “Eye-tracking-based classification of information search behavior using machine learning: evidence from experiments in physical shops and virtual reality shopping environments,” Information Systems Research, 2020. doi:10.1287/isre.2019.0907
    [BibTeX] [Abstract] [Download PDF]

    How can we tailor assistance systems, such as recommender systems or decision support systems, to consumers’ individual shopping motives? How can companies unobtrusively identify shopping motives without explicit user input? We demonstrate that eye movement data allow building reliable prediction models for identifying goal-directed and exploratory shopping motives. Our approach is validated in a real supermarket and in an immersive virtual reality supermarket. Several managerial implications of using gaze-based classification of information search behavior are discussed: First, the advent of virtual shopping environments makes using our approach straightforward as eye movement data are readily available in next-generation virtual reality devices. Virtual environments can be adapted to individual needs once shopping motives are identified and can be used to generate more emotionally engaging customer experiences. Second, identifying exploratory behavior offers opportunities for marketers to adapt marketing communication and interaction processes. Personalizing the shopping experience and profiling customers’ needs based on eye movement data promises to further increase conversion rates and customer satisfaction. Third, eye movement-based recommender systems do not need to interrupt consumers and thus do not take away attention from the purchase process. Finally, our paper outlines the technological basis of our approach and discusses the practical relevance of individual predictors.

    @article{pfeiffer2020eyetracking,
    author = {Pfeiffer, Jella and Pfeiffer, Thies and Meißner, Martin and Weiß, Elisa},
    title = {Eye-Tracking-Based Classification of Information Search Behavior Using Machine Learning: Evidence from Experiments in Physical Shops and Virtual Reality Shopping Environments},
    journal = {Information Systems Research},
    year = {2020},
    doi = {10.1287/isre.2019.0907},
    URL = {https://doi.org/10.1287/isre.2019.0907},
    eprint = {https://doi.org/10.1287/isre.2019.0907},
    abstract = { How can we tailor assistance systems, such as recommender systems or decision support systems, to consumers’ individual shopping motives? How can companies unobtrusively identify shopping motives without explicit user input? We demonstrate that eye movement data allow building reliable prediction models for identifying goal-directed and exploratory shopping motives. Our approach is validated in a real supermarket and in an immersive virtual reality supermarket. Several managerial implications of using gaze-based classification of information search behavior are discussed: First, the advent of virtual shopping environments makes using our approach straightforward as eye movement data are readily available in next-generation virtual reality devices. Virtual environments can be adapted to individual needs once shopping motives are identified and can be used to generate more emotionally engaging customer experiences. Second, identifying exploratory behavior offers opportunities for marketers to adapt marketing communication and interaction processes. Personalizing the shopping experience and profiling customers’ needs based on eye movement data promises to further increase conversion rates and customer satisfaction. Third, eye movement-based recommender systems do not need to interrupt consumers and thus do not take away attention from the purchase process. Finally, our paper outlines the technological basis of our approach and discusses the practical relevance of individual predictors. }
    }
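
    The kind of prediction model the abstract above describes could be prototyped along the following lines. This is a minimal sketch on synthetic data; the gaze features and the random-forest classifier are assumptions for illustration, not the paper’s exact pipeline:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(42)

    # Hypothetical per-trial gaze features: mean fixation duration (ms),
    # fixation count, mean saccade amplitude (deg), refixation share.
    X = rng.normal(size=(200, 4))
    y = rng.integers(0, 2, size=200)  # 0 = exploratory, 1 = goal-directed

    classifier = RandomForestClassifier(n_estimators=100, random_state=0)
    scores = cross_val_score(classifier, X, y, cv=5)
    print(f"mean cross-validated accuracy: {scores.mean():.2f}")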

  • J. Blattgerste and T. Pfeiffer, “Promptly authored augmented reality instructions can be sufficient to enable cognitively impaired workers,” in GI VR / AR Workshop 2020, 2020.
    [BibTeX] [Abstract] [Download PDF]

    The benefits of contextualising information through Augmented Reality (AR) instructions to assist cognitively impaired workers are well known, but most findings are based on AR instructions carefully designed for predefined standard tasks. Previous findings indicate that the modality and quality of provided AR instructions have a significant impact on the provided benefits. The emergence of commercial products providing tools for instructors to promptly author their own AR instructions elicits the question, whether instructions created through those are sufficient to support cognitively impaired workers. This paper explores this question through a qualitative study using an AR authoring tool to create AR instructions for a task that none out of 10 participants was able to complete previously. Using promptly authored instructions, however, most were able to complete the task. Additionally, they reported good usability and gave qualitative feedback indicating they would like to use comparable AR instructions more often.

    @inproceedings{blattgerste2020prompty,
    title={Promptly Authored Augmented Reality Instructions Can Be Sufficient to Enable Cognitively Impaired Workers},
    author={Blattgerste, Jonas and Pfeiffer, Thies},
    booktitle={{GI VR / AR Workshop 2020}},
    year={2020},
    url = {https://mixality.de/wp-content/uploads/2020/07/blattgerste2020prompty.pdf},
    abstract = {The benefits of contextualising information through Augmented Reality (AR) instructions to assist cognitively impaired workers are well known, but most findings are based on AR instructions carefully designed for predefined standard tasks. Previous findings indicate that the modality and quality of provided AR instructions have a significant impact on the provided benefits. The emergence of commercial products providing tools for instructors to promptly author their own AR instructions elicits the question, whether instructions created through those are sufficient to support cognitively impaired workers. This paper explores this question through a qualitative study using an AR authoring tool to create AR instructions for a task that none out of 10 participants was able to complete previously. Using promptly authored instructions, however, most were able to complete the task. Additionally, they reported good usability and gave qualitative feedback indicating they would like to use comparable AR instructions more often.}
    }

  • P. Renner and T. Pfeiffer, “AR-Glasses-Based Attention Guiding for Complex Environments – Requirements, Classification and Evaluation,” in Proceedings of the 13th ACM International Conference on PErvasive Technologies Related to Assistive Environments, New York, NY, USA, 2020. doi:10.1145/3389189.3389198
    [BibTeX] [Abstract] [Download PDF]

    Augmented Reality (AR) based assistance has a huge potential in the context of Industry 4.0: AR links digital information to physical objects and processes in a mobile and, in the case of AR glasses, hands-free way. In most companies, order-picking is still done using paper lists. With the rapid development of AR hardware during the last years, the interest in digitizing picking processes using AR rises. AR-based guiding for picking tasks can reduce the time needed for visual search and reduce errors, such as wrongly picked items or false placements. Choosing the best guiding technique is a non-trivial task: Different environments bring their own inherent constraints and requirements. In the literature, many kinds of guiding techniques were proposed, but the majority of techniques were only compared to non-AR picking assistance. To reveal advantages and disadvantages of AR-based guiding techniques, the contribution of this paper is three-fold: First, an analysis of tasks and environments reveals requirements and constraints on attention guiding techniques which are condensed to a taxonomy of attention guiding techniques. Second, guiding techniques covering a range of approaches from the literature are evaluated in a large-scale picking environment with a focus on task performance and on factors such as the users’ feeling of autonomy and ergonomics. Finally, a 3D path-based guiding technique supporting multiple goals simultaneously in complex environments is proposed.

    @inproceedings{renner2020AR,
    author = {Renner, Patrick and Pfeiffer, Thies},
    title = {{AR-Glasses-Based Attention Guiding for Complex Environments - Requirements, Classification and Evaluation}},
    year = {2020},
    isbn = {9781450377737},
    publisher = {Association for Computing Machinery},
    address = {New York, NY, USA},
    doi = {10.1145/3389189.3389198},
    booktitle = {Proceedings of the 13th ACM International Conference on PErvasive Technologies Related to Assistive Environments},
    articleno = {31},
    numpages = {10},
    location = {Corfu, Greece},
    series = {PETRA ’20},
    url = {https://mixality.de/wp-content/uploads/2020/07/renner2020AR.pdf},
    abstract = {Augmented Reality (AR) based assistance has a huge potential in the context of Industry 4.0: AR links digital information to physical objects and processes in a mobile and, in the case of AR glasses, hands-free way. In most companies, order-picking is still done using paper lists. With the rapid development of AR hardware during the last years, the interest in digitizing picking processes using AR rises. AR-based guiding for picking tasks can reduce the time needed for visual search and reduce errors, such as wrongly picked items or false placements. Choosing the best guiding technique is a non-trivial task: Different environments bring their own inherent constraints and requirements. In the literature, many kinds of guiding techniques were proposed, but the majority of techniques were only compared to non-AR picking assistance. To reveal advantages and disadvantages of AR-based guiding techniques, the contribution of this paper is three-fold: First, an analysis of tasks and environments reveals requirements and constraints on attention guiding techniques which are condensed to a taxonomy of attention guiding techniques. Second, guiding techniques covering a range of approaches from the literature are evaluated in a large-scale picking environment with a focus on task performance and on factors such as the users' feeling of autonomy and ergonomics. Finally, a 3D path-based guiding technique supporting multiple goals simultaneously in complex environments is proposed.}
    }
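
    As a toy illustration of the guiding problem addressed above (not the paper’s implementation): the cue rendered on AR glasses can be derived from the angular offsets between the wearer’s current view direction and the next picking target:

    import numpy as np

    def guiding_offsets(head_pos, head_forward, target_pos):
        """Return (yaw, pitch) in degrees from the view direction to the
        target; positive yaw = turn right, positive pitch = look up.
        Assumes a y-up, z-forward world coordinate system."""
        to_target = np.asarray(target_pos, float) - np.asarray(head_pos, float)
        to_target /= np.linalg.norm(to_target)
        forward = np.asarray(head_forward, float)
        forward /= np.linalg.norm(forward)
        yaw = np.degrees(np.arctan2(to_target[0], to_target[2])
                         - np.arctan2(forward[0], forward[2]))
        pitch = np.degrees(np.arcsin(to_target[1]) - np.arcsin(forward[1]))
        yaw = (yaw + 180.0) % 360.0 - 180.0  # point the short way around
        return yaw, pitch

    # Wearer at (0, 1.7, 0) looking along +z; target shelf at (2, 1.0, 2).
    print(guiding_offsets((0, 1.7, 0), (0, 0, 1), (2, 1.0, 2)))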

  • J. Blattgerste, K. Luksch, C. Lewa, M. Kunzendorf, N. H. Bauer, A. Bernloehr, M. Joswig, T. Schäfer, and T. Pfeiffer, “Project Heb@AR: Exploring handheld augmented reality training to supplement academic midwifery education,” in DELFI 2020 – Die 18. Fachtagung Bildungstechnologien der Gesellschaft für Informatik e.V., Bonn, 2020, pp. 103-108.
    [BibTeX] [Abstract] [Download PDF]

    Augmented Reality (AR) promises great potential for training applications as it allows to provide the trainee with instructions and feedback that is contextualized. In recent years, AR reached a state of technical feasibility that not only allows for larger, long term evaluations, but also for explorations of its application to specific training use cases. In the BMBF funded project Heb@AR, the utilization of handheld AR as a supplementary tool for the practical training in academic midwifery education is explored. Specifically, how and where AR can be used most effectively in this context, how acceptability and accessibility for tutors and trainees can be ensured and how well emergency situations can be simulated using the technology. In this paper an overview of the Heb@AR project is provided, the goals of the project are stated and the project’s research questions are discussed from a technical perspective. Furthermore, insights into the current state and the development process of the first AR training prototype are provided: The preparation of a tocolytic injection.

    @inproceedings{blattgerste2020hebar,
    author = {Blattgerste, Jonas AND Luksch, Kristina AND Lewa, Carmen AND Kunzendorf, Martina AND Bauer, Nicola H. AND Bernloehr, Annette AND Joswig, Matthias AND Schäfer, Thorsten AND Pfeiffer, Thies},
    title = {Project Heb@AR: Exploring handheld Augmented Reality training to supplement academic midwifery education},
    booktitle = {DELFI 2020 – Die 18. Fachtagung Bildungstechnologien der Gesellschaft für Informatik e.V.},
    year = {2020},
    editor = {Zender, Raphael AND Ifenthaler, Dirk AND Leonhardt, Thiemo AND Schumacher, Clara} ,
    pages = { 103-108 },
    publisher = {Gesellschaft für Informatik e.V.},
    address = {Bonn},
    url = {https://dl.gi.de/bitstream/handle/20.500.12116/34147/103%20DELFI2020_paper_79.pdf?sequence=1&isAllowed=y},
    abstract = {Augmented Reality (AR) promises great potential for training applications as it allows to provide the trainee with instructions and feedback that is contextualized. In recent years, AR reached a state of technical feasibility that not only allows for larger, long term evaluations, but also for explorations of its application to specific training use cases. In the BMBF funded project Heb@AR, the utilization of handheld AR as a supplementary tool for the practical training in academic midwifery education is explored. Specifically, how and where AR can be used most effectively in this context, how acceptability and accessibility for tutors and trainees can be ensured and how well emergency situations can be simulated using the technology. In this paper an overview of the Heb@AR project is provided, the goals of the project are stated and the project’s research questions are discussed from a technical perspective. Furthermore, insights into the current state and the development process of the first AR training prototype are provided: The preparation of a tocolytic injection.}
    }

  • C. Hainke and T. Pfeiffer, “Adapting virtual trainings of applied skills to cognitive processes in medical and health care education within the DiViFaG project,” in DELFI 2020 – Die 18. Fachtagung Bildungstechnologien der Gesellschaft für Informatik e.V., Bonn, 2020, pp. 355-356.
    [BibTeX] [Abstract] [Download PDF]

    The use of virtual reality technology in education rises in popularity, especially in professions that include the training of practical skills. By offering the possibility to repeatedly practice and apply skills in controllable environments, VR training can help to improve the education process. The training simulations that are going to be developed within this project will make use of the high controllability by evaluating behavioral data as well as gaze-based data during the training process. This analysis can reveal insights in the user’s mental states and offers the opportunity of autonomous training adaption.

    @inproceedings{hainke2020adapting,
    author = {Hainke, Carolin AND Pfeiffer, Thies},
    title = {Adapting virtual trainings of applied skills to cognitive processes in medical and health care education within the DiViFaG project},
    booktitle = {DELFI 2020 – Die 18. Fachtagung Bildungstechnologien der Gesellschaft für Informatik e.V.},
    year = {2020},
    editor = {Zender, Raphael AND Ifenthaler, Dirk AND Leonhardt, Thiemo AND Schumacher, Clara} ,
    pages = { 355-356 },
    publisher = {Gesellschaft für Informatik e.V.},
    address = {Bonn},
    url = {https://dl.gi.de/bitstream/handle/20.500.12116/34184/355%20DELFI2020_paper_85.pdf?sequence=1&isAllowed=y},
    abstract = {The use of virtual reality technology in education rises in popularity, especially in professions that include the training of practical skills. By offering the possibility to repeatedly practice and apply skills in controllable environments, VR training can help to improve the education process. The training simulations that are going to be developed within this project will make use of the high controllability by evaluating behavioral data as well as gaze-based data during the training process. This analysis can reveal insights in the user’s mental states and offers the opportunity of autonomous training adaption.}
    }

  • L. Meyer and T. Pfeiffer, “Comparing virtual reality and screen-based training simulations in terms of learning and recalling declarative knowledge,” in DELFI 2020 – Die 18. Fachtagung Bildungstechnologien der Gesellschaft für Informatik e.V., Bonn, 2020, pp. 55-66.
    [BibTeX] [Abstract] [Download PDF]

    This paper discusses how much the more realistic user interaction in a life-sized fully immersive VR Training is a benefit for acquiring declarative knowledge compared to the same training via a screen-based first-person application. Two groups performed a nursing training scenario in immersive VR and on a tablet. A third group learned the necessary steps using a classic text-picture-manual (TP group). Afterwards all three groups had to perform a recall test with repeated measurement (one week). The results showed no significant differences between VR training and tablet training. In the first test shortly after completion of the training both training simulation conditions were worse than the TP group. In the long-term test, however, the knowledge loss of the TP group was significantly higher than that of the two simulation groups. Ultimately, VR training in this study design proved to be as efficient as training on a tablet for declarative knowledge acquisition. Nevertheless, it is possible that acquired procedural knowledge distinguishes VR training from the screen-based application.

    @inproceedings{meyer2020comparing,
    author = {Meyer, Leonard AND Pfeiffer, Thies},
    title = {Comparing Virtual Reality and Screen-based Training Simulations in Terms of Learning and Recalling Declarative Knowledge},
    booktitle = {DELFI 2020 – Die 18. Fachtagung Bildungstechnologien der Gesellschaft für Informatik e.V.},
    year = {2020},
    editor = {Zender, Raphael AND Ifenthaler, Dirk AND Leonhardt, Thiemo AND Schumacher, Clara} ,
    pages = { 55-66 },
    publisher = {Gesellschaft für Informatik e.V.},
    address = {Bonn},
    url = {https://dl.gi.de/bitstream/handle/20.500.12116/34204/055%20DELFI2020_paper_92.pdf?sequence=1&isAllowed=y},
    abstract = {This paper discusses how much the more realistic user interaction in a life-sized fully immersive VR Training is a benefit for acquiring declarative knowledge compared to the same training via a screen-based first-person application. Two groups performed a nursing training scenario in immersive VR and on a tablet. A third group learned the necessary steps using a classic text-picture-manual (TP group). Afterwards all three groups had to perform a recall test with repeated measurement (one week). The results showed no significant differences between VR training and tablet training. In the first test shortly after completion of the training both training simulation conditions were worse than the TP group. In the long-term test, however, the knowledge loss of the TP group was significantly higher than that of the two simulation groups. Ultimately, VR training in this study design proved to be as efficient as training on a tablet for declarative knowledge acquisition. Nevertheless, it is possible that acquired procedural knowledge distinguishes VR training from the screen-based application.}
    }

  • Y. Tehreem and T. Pfeiffer, “Immersive virtual reality training for the operation of chemical reactors,” in DELFI 2020 – Die 18. Fachtagung Bildungstechnologien der Gesellschaft für Informatik e.V., Bonn, 2020, pp. 359-360.
    [BibTeX] [Abstract] [Download PDF]

    This paper discusses virtual reality (VR) training for chemical operators on hazardous or costly operations of chemical plants. To this end, a prototypical training scenario is developed which will be deployed to industrial partners and evaluated regarding efficiency and effectiveness. In this paper, the current version of the prototype is presented, that allows life-sized trainings in a virtual simulation of a chemical reactor. Building up on this prototype scenario, means for measuring performance, providing feedback, and guiding users through VR-based trainings are explored and evaluated, targeting at an optimized transfer of knowledge from virtual to real world. This work is embedded in the Marie-Skłodowska-Curie Innovative Training Network CHARMING3, in which 15 PhD candidates from six European countries are cooperating.

    @inproceedings{tehreem2020immersive,
    author = {Tehreem, Yusra AND Pfeiffer, Thies},
    title = {Immersive Virtual Reality Training for the Operation of Chemical Reactors},
    booktitle = {DELFI 2020 – Die 18. Fachtagung Bildungstechnologien der Gesellschaft für Informatik e.V.},
    year = {2020},
    editor = {Zender, Raphael AND Ifenthaler, Dirk AND Leonhardt, Thiemo AND Schumacher, Clara} ,
    pages = { 359-360 },
    publisher = {Gesellschaft für Informatik e.V.},
    address = {Bonn},
    url = {https://dl.gi.de/bitstream/handle/20.500.12116/34186/359%20DELFI2020_paper_81.pdf?sequence=1&isAllowed=y},
    abstract = {This paper discusses virtual reality (VR) training for chemical operators on hazardous or costly operations of chemical plants. To this end, a prototypical training scenario is developed which will be deployed to industrial partners and evaluated regarding efficiency and effectiveness. In this paper, the current version of the prototype is presented, that allows life-sized trainings in a virtual simulation of a chemical reactor. Building up on this prototype scenario, means for measuring performance, providing feedback, and guiding users through VR-based trainings are explored and evaluated, targeting at an optimized transfer of knowledge from virtual to real world. This work is embedded in the Marie-Skłodowska-Curie Innovative Training Network CHARMING3, in which 15 PhD candidates from six European countries are cooperating.}
    }

2019

  • J. Blattgerste, P. Renner, and T. Pfeiffer, “Authorable augmented reality instructions for assistance and training in work environments,” in Proceedings of the 18th International Conference on Mobile and Ubiquitous Multimedia, New York, NY, USA, 2019. doi:10.1145/3365610.3365646
    [BibTeX] [Abstract] [Download PDF]

    Augmented Reality (AR) is a promising technology for assistance and training in work environments, as it can provide instructions and feedback contextualised. Not only, but especially impaired workers can benefit from this technology. While previous work mostly focused on using AR to assist or train specific predefined tasks, “general purpose” AR applications, that can be used to intuitively author new tasks at run-time, are widely missing. The contribution of this work is twofold: First we develop an AR authoring tool on the Microsoft HoloLens in combination with a Smartphone as an additional controller following considerations based on related work, guidelines and focus group interviews. Then, we evaluate the usability of the authoring tool itself and the produced AR instructions on a qualitative level in realistic scenarios and gather feedback. As the results reveal a positive reception, we discuss authorable AR as a viable form of AR assistance or training in work environments.

    @inproceedings{blattgerste2019authorable,
    author = {Blattgerste, Jonas and Renner, Patrick and Pfeiffer, Thies},
    title = {Authorable Augmented Reality Instructions for Assistance and Training in Work Environments},
    year = {2019},
    isbn = {9781450376242},
    publisher = {Association for Computing Machinery},
    address = {New York, NY, USA},
    doi = {10.1145/3365610.3365646},
    booktitle = {Proceedings of the 18th International Conference on Mobile and Ubiquitous Multimedia},
    articleno = {34},
    numpages = {11},
    keywords = {training, cognitive impairments, augmented reality, annotation, mixed reality, authoring, assistance},
    location = {Pisa, Italy},
    series = {MUM ’19},
    url = {https://mixality.de/wp-content/uploads/2020/07/blattgerste2019authorable.pdf},
    abstract = {Augmented Reality (AR) is a promising technology for assistance and training in work environments, as it can provide instructions and feedback contextualised. Not only, but especially impaired workers can benefit from this technology. While previous work mostly focused on using AR to assist or train specific predefined tasks, "general purpose" AR applications, that can be used to intuitively author new tasks at run-time, are widely missing. The contribution of this work is twofold: First we develop an AR authoring tool on the Microsoft HoloLens in combination with a Smartphone as an additional controller following considerations based on related work, guidelines and focus group interviews. Then, we evaluate the usability of the authoring tool itself and the produced AR instructions on a qualitative level in realistic scenarios and gather feedback. As the results reveal a positive reception, we discuss authorable AR as a viable form of AR assistance or training in work environments.}
    }

  • D. Mardanbegi and T. Pfeiffer, “EyeMRTK: A Toolkit for Developing Eye Gaze Interactive Applications in Virtual and Augmented Reality,” in Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications (ETRA ’19), 2019, pp. 76:1–76:5. doi:10.1145/3317956.3318155
    [BibTeX] [Abstract] [Download PDF]

    For head-mounted displays, as used in mixed reality applications, eye gaze seems to be a natural interaction modality. EyeMRTK provides building blocks for eye gaze interaction in virtual and augmented reality. Based on a hardware abstraction layer, it allows interaction researchers and developers to focus on their interaction concepts, while enabling them to evaluate their ideas on all supported systems. In addition to that, the toolkit provides a simulation layer for debugging purposes, which speeds up prototyping during development on the desktop.

    @inproceedings{2937153,
    abstract = {For head-mounted displays, as used in mixed reality applications, eye gaze seems to be a natural interaction modality. EyeMRTK provides building blocks for eye gaze interaction in virtual and augmented reality. Based on a hardware abstraction layer, it allows interaction researchers and developers to focus on their interaction concepts, while enabling them to evaluate their ideas on all supported systems. In addition to that, the toolkit provides a simulation layer for debugging purposes, which speeds up prototyping during development on the desktop.},
    author = {Mardanbegi, Diako and Pfeiffer, Thies},
    booktitle = {Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications (ETRA '19)},
    isbn = {978-1-4503-6709-7},
    keywords = {eye tracking, gaze interaction, unity, virtual reality},
    pages = {76:1--76:5},
    publisher = {ACM},
    title = {{EyeMRTK: A Toolkit for Developing Eye Gaze Interactive Applications in Virtual and Augmented Reality}},
    url = {https://pub.uni-bielefeld.de/record/2937153},
    doi = {10.1145/3317956.3318155},
    year = {2019},
    }
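
    EyeMRTK itself targets Unity; purely to illustrate the hardware-abstraction and simulation-layer ideas named in the abstract, here is a language-agnostic sketch in Python (all names are hypothetical):

    from abc import ABC, abstractmethod

    class GazeSource(ABC):
        """Hardware abstraction: every backend yields a world-space gaze ray."""
        @abstractmethod
        def gaze_ray(self):
            """Return (origin, direction) as 3-tuples in world coordinates."""

    class SimulatedGaze(GazeSource):
        """Simulation layer for desktop debugging: fakes gaze, e.g. from the
        mouse or a scripted path, so no eye tracker is needed."""
        def __init__(self, direction=(0.0, 0.0, 1.0)):
            self.direction = direction

        def gaze_ray(self):
            return (0.0, 0.0, 0.0), self.direction

    def handle_gaze(source: GazeSource) -> None:
        origin, direction = source.gaze_ray()
        print(f"gaze from {origin} toward {direction}")

    handle_gaze(SimulatedGaze())  # interaction code never sees which backend runs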

  • E. Lampen, J. Teuber, F. Gaisbauer, T. Bär, T. Pfeiffer, and S. Wachsmuth, “Combining Simulation and Augmented Reality Methods for Enhanced Worker Assistance in Manual Assembly,” Procedia CIRP, vol. 81, pp. 588–593, 2019. doi:10.1016/j.procir.2019.03.160
    [BibTeX] [Abstract] [Download PDF]

    Due to mass customization, product variety increased steeply in the automotive industry, entailing the increment of workers’ cognitive load during manual assembly tasks. Although worker assistance methods for cognitive automation already exist, they prove insufficient in terms of usability and achieved time saving. Given the rising importance of simulation towards autonomous production planning, a novel approach is proposed using human simulation data in the context of worker assistance methods to alleviate cognitive load during manual assembly tasks. Within this paper, a new concept for augmented reality-based worker assistance is presented. Additionally, a comparative user study (N=24) was conducted with conventional worker assistance methods to evaluate a prototypical implementation of the concept. The results illustrate the potential of the novel approach to reduce cognitive load and to induce performance improvements. The implementation provided stable information presentation during the entire experiment. However, given how recent the approach is, further development and research concerning performance adaptations and investigations of effectiveness are required.

    @article{2937152,
    abstract = {Due to mass customization, product variety increased steeply in the automotive industry, entailing the increment of workers’ cognitive load during manual assembly tasks. Although worker assistance methods for cognitive automation already exist, they prove insufficient in terms of usability and achieved time saving. Given the rising importance of simulation towards autonomous production planning, a novel approach is proposed using human simulation data in the context of worker assistance methods to alleviate cognitive load during manual assembly tasks. Within this paper, a new concept for augmented reality-based worker assistance is presented. Additionally, a comparative user study (N=24) was conducted with conventional worker assistance methods to evaluate a prototypical implementation of the concept. The results illustrate the potential of the novel approach to reduce cognitive load and to induce performance improvements. The implementation provided stable information presentation during the entire experiment. However, given how recent the approach is, further development and research concerning performance adaptations and investigations of effectiveness are required.},
    author = {Lampen, Eva and Teuber, Jonas and Gaisbauer, Felix and Bär, Thomas and Pfeiffer, Thies and Wachsmuth, Sven},
    issn = {2212-8271},
    journal = {Procedia CIRP},
    keywords = {Virtual Reality, Augmented Reality, Manual Assembly},
    pages = {588--593},
    publisher = {Elsevier},
    title = {{Combining Simulation and Augmented Reality Methods for Enhanced Worker Assistance in Manual Assembly}},
    url = {https://pub.uni-bielefeld.de/record/2937152},
    doi = {10.1016/j.procir.2019.03.160},
    volume = {81},
    year = {2019},
    }

  • C. Peukert, J. Pfeiffer, M. Meißner, T. Pfeiffer, and C. Weinhardt, “Shopping in Virtual Reality Stores. The Influence of Immersion on System Adoption,” Journal of Management Information Systems, vol. 36, iss. 3, pp. 1–34, 2019. doi:10.1080/07421222.2019.1628889
    [BibTeX] [Abstract] [Download PDF]

    Companies have the opportunity to better engage potential customers by presenting products to them in a highly immersive virtual reality (VR) shopping environment. However, a minimal amount is known about why and whether customers will adopt such fully immersive shopping environments. We therefore develop and experimentally validate a theoretical model, which explains how immersion affects adoption. The participants experienced the environment by using a head-mounted display (high immersion) or by viewing product models in 3D on a desktop (low immersion). We find that immersion does not affect the users’ intention to reuse the shopping environment, because two paths cancel each other out: Highly immersive shopping environments positively influence a hedonic path through telepresence, but surprisingly, they negatively influence a utilitarian path through product diagnosticity. We can explain this effect via low readability of product information in the VR environment and expect VR’s full potential to develop when the technology is further advanced. Our study contributes to literature on immersive systems and IS adoption research by introducing a research model for the adoption of VR shopping environments. A key practical implication of our study is that system designers need to pay special attention to the current state of technology when designing VR applications.

    @article{2934590,
    abstract = {Companies have the opportunity to better engage potential customers by presenting products to them in a highly immersive virtual reality (VR) shopping environment. However, a minimal amount is known about why and whether customers will adopt such fully immersive shopping environments. We therefore develop and experimentally validate a theoretical model, which explains how immersion affects adoption. The participants experienced the environment by using a head-mounted display (high immersion) or by viewing product models in 3D on a desktop (low immersion). We find that immersion does not affect the users’ intention to reuse the shopping environment, because two paths cancel each other out: Highly immersive shopping environments positively influence a hedonic path through telepresence, but surprisingly, they negatively influence a utilitarian path through product diagnosticity. We can explain this effect via low readability of product information in the VR environment and expect VR’s full potential to develop when the technology is further advanced. Our study contributes to literature on immersive systems and IS adoption research by introducing a research model for the adoption of VR shopping environments. A key practical implication of our study is that system designers need to pay special attention to the current state of technology when designing VR applications.},
    author = {Peukert, Christian and Pfeiffer, Jella and Meißner, Martin and Pfeiffer, Thies and Weinhardt, Christof},
    issn = {0742-1222},
    journal = {Journal of Management Information Systems},
    number = {3},
    pages = {1--34},
    publisher = {Taylor & Francis},
    title = {{Shopping in Virtual Reality Stores. The Influence of Immersion on System Adoption}},
    url = {https://pub.uni-bielefeld.de/record/2934590},
    doi = {10.1080/07421222.2019.1628889},
    volume = {36},
    year = {2019},
    }

  • L. Christoforakos, S. Tretter, S. Diefenbach, S. Bibi, M. Fröhner, K. Kohler, D. Madden, T. Marx, T. Pfeiffer, N. Pfeiffer-Leßmann, and N. Valkanova, “Potential and Challenges of Prototyping in Product Development and Innovation,” i-com, vol. 18, iss. 2, pp. 179–187, 2019. doi:10.1515/icom-2019-0010
    [BibTeX] [Abstract] [Download PDF]

    Prototyping represents an established, essential method of product development and innovation, widely accepted across the industry. Obviously, the use of prototypes, i. e., simple representations of a product in development, in order to explore, communicate and evaluate the product idea, can provide many benefits. From a business perspective, a central advantage lies in cost-efficient testing. Consequently, the idea to “fail early”, and to continuously rethink and optimize design decisions before cost-consuming implementations, lies at the heart of prototyping. Still, taking a closer look at prototyping in practice, many organizations do not live up to this ideal. In fact, there are several typical misunderstandings and unsatisfying outcomes regarding the effective use of prototypes (e. g. Christoforakos & Diefenbach [3]; Diefenbach, Chien, Lenz, & Hassenzahl [4]). For example, although prominent literature repeatedly underlines the importance of the fit between a prototyping method or tool and its underlying research question and purpose (e. g. Schneider [7]), practitioners often seem to lack reflection and structure regarding their choice of prototyping approaches. Instead, the used prototypes often simply rest on organizational routines. As a result, prototypes can fail their purpose and might not contribute to the initial research question or aim of prototyping. Furthermore, the varying interests of different stakeholders within the prototyping process are often not considered with much detail either. According to Blomkvist and Holmlid [1], stakeholders of prototyping can be broadly categorized into colleagues (i. e. team members involved in the process of product development), clients (i. e. clients whom the product is being developed for, or potential new clients to be acquired), and users (i. e. potential users of the final product). Each of these stakeholders pursues different purposes of prototyping due to their distinct responsibilities within the process of product development. Moreover, they can hold different expectations regarding the prototyping process, and thus, have different preferences for certain methods or tools. Yet, the substantial role of stakeholders in the appropriate choice of prototyping approach and methods is often overlooked.

    @article{2936802,
    abstract = {Prototyping represents an established, essential method of product development and innovation, widely accepted across the industry. Obviously, the use of prototypes, i. e., simple representations of a product in development, in order to explore, communicate and evaluate the product idea, can provide many benefits. From a business perspective, a central advantage lies in cost-efficient testing. Consequently, the idea to “fail early”, and to continuously rethink and optimize design decisions before cost-consuming implementations, lies at the heart of prototyping. Still, taking a closer look at prototyping in practice, many organizations do not live up to this ideal. In fact, there are several typical misunderstandings and unsatisfying outcomes regarding the effective use of prototypes (e. g. Christoforakos & Diefenbach [3]; Diefenbach, Chien, Lenz, & Hassenzahl [4]). For example, although prominent literature repeatedly underlines the importance of the fit between a prototyping method or tool and its underlying research question and purpose (e. g. Schneider [7]), practitioners often seem to lack reflection and structure regarding their choice of prototyping approaches. Instead, the used prototypes often simply rest on organizational routines. As a result, prototypes can fail their purpose and might not contribute to the initial research question or aim of prototyping. Furthermore, the varying interests of different stakeholders within the prototyping process are often not considered with much detail either. According to Blomkvist and Holmlid [1], stakeholders of prototyping can be broadly categorized into colleagues (i. e. team members involved in the process of product development), clients (i. e. clients whom the product is being developed for, or potential new clients to be acquired), and users (i. e. potential users of the final product). Each of these stakeholders pursues different purposes of prototyping due to their distinct responsibilities within the process of product development. Moreover, they can hold different expectations regarding the prototyping process, and thus, have different preferences for certain methods or tools. Yet, the substantial role of stakeholders in the appropriate choice of prototyping approach and methods is often overlooked.},
    author = {Christoforakos, Lara and Tretter, Stefan and Diefenbach, Sarah and Bibi, Sven-Anwar and Fröhner, Moritz and Kohler, Kirstin and Madden, Dominick and Marx, Tobias and Pfeiffer, Thies and Pfeiffer-Leßmann, Nadine and Valkanova, Nina},
    issn = {2196-6826},
    journal = {i-com},
    number = {2},
    pages = {179--187},
    publisher = {de Gruyter},
    title = {{Potential and Challenges of Prototyping in Product Development and Innovation}},
    url = {https://pub.uni-bielefeld.de/record/2936802},
    doi = {10.1515/icom-2019-0010},
    volume = {18},
    year = {2019},
    }

  • J. Blattgerste, P. Renner, and T. Pfeiffer, “Augmented Reality Action Assistance and Learning for Cognitively Impaired People. A Systematic Literature Review,” in The 12th PErvasive Technologies Related to Assistive Environments Conference (PETRA ’19), 2019. doi:10.1145/3316782.3316789
    [BibTeX] [Abstract] [Download PDF]

    Augmented reality (AR) is a promising tool for many situations in which assistance is needed, as it allows for instructions and feedback to be contextualized. While research and development in this area have been primarily driven by industry, AR could also have a huge impact on those who need assistance the most: cognitively impaired people of all ages. In recent years some primary research on applying AR for action assistance and learning in the context of this target group has been conducted. However, the research field is sparsely covered and contributions are hard to categorize. An overview of the current state of research is missing. We contribute to filling this gap by providing a systematic literature review covering 52 publications. We describe the often rather technical publications on an abstract level and quantitatively assess their usage purpose, the targeted age group and the type of AR device used. Additionally, we provide insights on the current challenges and chances of AR learning and action assistance for people with cognitive impairments. We discuss trends in the research field, including potential future work for researchers to focus on.

    @inproceedings{2934446,
    abstract = {Augmented reality (AR) is a promising tool for many situations in which assistance is needed, as it allows for instructions and feedback to be contextualized. While research and development in this area have been primarily driven by industry, AR could also have a huge impact on those who need assistance the most: cognitively impaired people of all ages. In recent years some primary research on applying AR for action assistance and learning in the context of this target group has been conducted. However, the research field is sparsely covered and contributions are hard to categorize. An overview of the current state of research is missing. We contribute to filling this gap by providing a systematic literature review covering 52 publications. We describe the often rather technical publications on an abstract level and quantitatively assess their usage purpose, the targeted age group and the type of AR device used. Additionally, we provide insights on the current challenges and chances of AR learning and action assistance for people with cognitive impairments. We discuss trends in the research field, including potential future work for researchers to focus on.},
    author = {Blattgerste, Jonas and Renner, Patrick and Pfeiffer, Thies},
    booktitle = {The 12th PErvasive Technologies Related to Assistive Environments Conference (PETRA ’19)},
    location = {Rhodes, Greece},
    publisher = {ACM},
    title = {{Augmented Reality Action Assistance and Learning for Cognitively Impaired People. A Systematic Literature Review}},
    url = {https://nbn-resolving.org/urn:nbn:de:0070-pub-29344462, https://pub.uni-bielefeld.de/record/2934446},
    doi = {10.1145/3316782.3316789},
    year = {2019},
    }

  • M. Meißner, J. Pfeiffer, T. Pfeiffer, and H. Oppewal, “Combining virtual reality and mobile eye tracking to provide a naturalistic experimental environment for shopper research,” Journal of Business Research, vol. 100, pp. 445–458, 2019. doi:10.1016/j.jbusres.2017.09.028
    [BibTeX] [Abstract] [Download PDF]

    Technological advances in eye tracking methodology have made it possible to unobtrusively measure consumer visual attention during the shopping process. Mobile eye tracking in field settings however has several limitations, including a highly cumbersome data coding process. In addition, field settings allow only limited control of important interfering variables. The present paper argues that virtual reality can provide an alternative setting that combines the benefits of mobile eye tracking with the flexibility and control provided by lab experiments. The paper first reviews key advantages of different eye tracking technologies as available for desktop, natural and virtual environments. It then explains how combining virtual reality settings with eye tracking provides a unique opportunity for shopper research in particular regarding the use of augmented reality to provide shopper assistance.

    @article{2914094,
    abstract = {Technological advances in eye tracking methodology have made it possible to unobtrusively measure consumer visual attention during the shopping process. Mobile eye tracking in field settings however has several limitations, including a highly cumbersome data coding process. In addition, field settings allow only limited control of important interfering variables. The present paper argues that virtual reality can provide an alternative setting that combines the benefits of mobile eye tracking with the flexibility and control provided by lab experiments. The paper first reviews key advantages of different eye tracking technologies as available for desktop, natural and virtual environments. It then explains how combining virtual reality settings with eye tracking provides a unique opportunity for shopper research in particular regarding the use of augmented reality to provide shopper assistance.},
    author = {Meißner, Martin and Pfeiffer, Jella and Pfeiffer, Thies and Oppewal, Harmen},
    issn = {0148-2963},
    journal = {Journal of Business Research},
    keywords = {Eye tracking, Visual attention, Virtual reality, Augmented reality, Assistance system, Shopper behavior, CLF_RESEARCH_HIGHLIGHT},
    pages = {445--458},
    publisher = {Elsevier BV},
    title = {{Combining virtual reality and mobile eye tracking to provide a naturalistic experimental environment for shopper research}},
    url = {https://pub.uni-bielefeld.de/record/2914094},
    doi = {10.1016/j.jbusres.2017.09.028},
    volume = {100},
    year = {2019},
    }
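
    Context for the entry above: the key practical gain of moving eye tracking into VR is that gaze-to-object mapping becomes a computation instead of manual video coding. A minimal Python sketch of that idea follows; the function gaze_hit, the bounding-sphere proxies and the example shelf are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def gaze_hit(origin, direction, objects):
        """Return the id of the nearest object whose bounding sphere the
        gaze ray (origin + t * direction, t >= 0) intersects, else None."""
        direction = direction / np.linalg.norm(direction)
        best_id, best_t = None, np.inf
        for obj_id, (center, radius) in objects.items():
            oc = np.asarray(center) - origin
            t = float(np.dot(oc, direction))    # closest approach along the ray
            if t < 0:
                continue                        # object lies behind the viewer
            d2 = float(np.dot(oc, oc)) - t * t  # squared ray-to-center distance
            if d2 <= radius ** 2 and t < best_t:
                best_id, best_t = obj_id, t
        return best_id

    # Hypothetical shelf: product id -> (center in metres, bounding radius).
    shelf = {"cereal_a": ((0.0, 1.5, 2.0), 0.12),
             "cereal_b": ((0.3, 1.5, 2.0), 0.12)}
    print(gaze_hit(np.zeros(3), np.array([0.0, 0.75, 1.0]), shelf))  # cereal_a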

  • M. Andersen, T. Pfeiffer, S. Müller, and U. Schjoedt, “Agency detection in predictive minds. A virtual reality study,” Religion, brain & behavior, vol. 9, iss. 1, p. 52–64, 2019. doi:10.1080/2153599x.2017.1378709
    [BibTeX] [Abstract] [Download PDF]

    Since its inception, explaining the cognitive foundations governing sensory experiences of supernatural agents has been a central topic in the cognitive science of religion. Following recent developments in perceptual psychology, this preregistered study examines the effects of expectations and sensory reliability on agency detection. Participants were instructed to detect beings in a virtual forest. Results reveal that participants expecting a high probability of encountering an agent in the forest are much more likely to make false detections than participants expecting a low probability of such encounters. Furthermore, low sensory reliability increases the false detection rate compared to high sensory reliability, but this effect is much smaller than the effect of expectations. While previous accounts of agency detection have speculated that false detections of agents may give rise to or strengthen religious beliefs, our results suggest that the reverse direction of causality may also be true. Religious teachings may first produce expectations in believers, which in turn elicit false detections of agents. These experiences may subsequently work to confirm the teachings and narratives upon which the values of a given culture are built.

    @article{2914550,
    abstract = {Since its inception, explaining the cognitive foundations governing sensory experiences of supernatural agents has been a central topic in the cognitive science of religion. Following recent developments in perceptual psychology, this preregistered study examines the effects of expectations and sensory reliability on agency detection. Participants were instructed to detect beings in a virtual forest. Results reveal that participants expecting a high probability of encountering an agent in the forest are much more likely to make false detections than participants expecting a low probability of such encounters. Furthermore, low sensory reliability increases the false detection rate compared to high sensory reliability, but this effect is much smaller than the effect of expectations. While previous accounts of agency detection have speculated that false detections of agents may give rise to or strengthen religious beliefs, our results suggest that the reverse direction of causality may also be true. Religious teachings may first produce expectations in believers, which in turn elicit false detections of agents. These experiences may subsequently work to confirm the teachings and narratives upon which the values of a given culture are built.},
    author = {Andersen, Marc and Pfeiffer, Thies and Müller, Sebastian and Schjoedt, Uffe},
    issn = {2153-5981},
    journal = {Religion, Brain & Behavior},
    keywords = {CLF_RESEARCH_HIGHLIGHT},
    number = {1},
    pages = {52--64},
    publisher = {Routledge},
    title = {{Agency detection in predictive minds. A virtual reality study}},
    url = {https://pub.uni-bielefeld.de/record/2914550},
    doi = {10.1080/2153599x.2017.1378709},
    volume = {9},
    year = {2019},
    }
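
    A toy Bayesian observer makes the predictive-processing logic of this study concrete: when sensory reliability is low, the cue likelihoods are nearly uninformative, so the prior expectation drives detection. The numbers and the model below are purely illustrative and are not the authors' analysis.

    def posterior_agent(prior, p_cue_given_agent, p_cue_given_noise):
        """P(agent | ambiguous cue) for a simple Bayesian observer."""
        num = prior * p_cue_given_agent
        return num / (num + (1 - prior) * p_cue_given_noise)

    # Low sensory reliability = likelihoods close together, so the prior
    # dominates: only the high-expectation observer crosses a 0.5 threshold.
    for prior in (0.7, 0.2):  # high vs low expectation of encountering agents
        print(prior, round(posterior_agent(prior, 0.6, 0.4), 2))
    # -> 0.7 0.78 (reported as a detection)
    # -> 0.2 0.27 (no detection)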

2018

  • P. Renner, F. Lier, F. Friese, T. Pfeiffer, and S. Wachsmuth, “WYSIWICD: What You See is What I Can Do,” in Hri ’18 companion: 2018 acm/ieee international conference on human-robot interaction companion, 2018. doi:10.1145/3173386.3177032
    [BibTeX] [Download PDF]
    @inproceedings{2916801,
    author = {Renner, Patrick and Lier, Florian and Friese, Felix and Pfeiffer, Thies and Wachsmuth, Sven},
    booktitle = {HRI '18 Companion: 2018 ACM/IEEE International Conference on Human-Robot Interaction Companion},
    keywords = {Augmented Reality, Natural Interfaces, Sensor Fusion},
    location = {Chicago},
    publisher = {ACM},
    title = {{WYSIWICD: What You See is What I Can Do}},
    url = {https://nbn-resolving.org/urn:nbn:de:0070-pub-29168017, https://pub.uni-bielefeld.de/record/2916801},
    doi = {10.1145/3173386.3177032},
    year = {2018},
    }

  • N. Mitev, P. Renner, T. Pfeiffer, and M. Staudte, “Towards efficient human–machine collaboration. Effects of gaze-driven feedback and engagement on performance,” Cognitive research: principles and implications, vol. 3, iss. 3, 2018. doi:10.1186/s41235-018-0148-x
    [BibTeX] [Abstract] [Download PDF]

    Referential success is crucial for collaborative task-solving in shared environments. In face-to-face interactions, humans, therefore, exploit speech, gesture, and gaze to identify a specific object. We investigate if and how the gaze behavior of a human interaction partner can be used by a gaze-aware assistance system to improve referential success. Specifically, our system describes objects in the real world to a human listener using on-the-fly speech generation. It continuously interprets listener gaze and implements alternative strategies to react to this implicit feedback. We used this system to investigate an optimal strategy for task performance: providing an unambiguous, longer instruction right from the beginning, or starting with a shorter, yet ambiguous instruction. Further, the system provides gaze-driven feedback, which could be either underspecified (“No, not that one!”) or contrastive (“Further left!”). As expected, our results show that ambiguous instructions followed by underspecified feedback are not beneficial for task performance, whereas contrastive feedback results in faster interactions. Interestingly, this approach even outperforms unambiguous instructions (manipulation between subjects). However, when the system alternates between underspecified and contrastive feedback to initially ambiguous descriptions in an interleaved manner (within subjects), task performance is similar for both approaches. This suggests that listeners engage more intensely with the system when they can expect it to be cooperative. This, rather than the actual informativity of the spoken feedback, may determine the efficiency of information uptake and performance.

    @article{2932893,
    abstract = {Referential success is crucial for collaborative task-solving in shared environments. In face-to-face interactions, humans, therefore, exploit speech, gesture, and gaze to identify a specific object. We investigate if and how the gaze behavior of a human interaction partner can be used by a gaze-aware assistance system to improve referential success. Specifically, our system describes objects in the real world to a human listener using on-the-fly speech generation. It continuously interprets listener gaze and implements alternative strategies to react to this implicit feedback. We used this system to investigate an optimal strategy for task performance: providing an unambiguous, longer instruction right from the beginning, or starting with a shorter, yet ambiguous instruction. Further, the system provides gaze-driven feedback, which could be either underspecified (“No, not that one!”) or contrastive (“Further left!”). As expected, our results show that ambiguous instructions followed by underspecified feedback are not beneficial for task performance, whereas contrastive feedback results in faster interactions. Interestingly, this approach even outperforms unambiguous instructions (manipulation between subjects). However, when the system alternates between underspecified and contrastive feedback to initially ambiguous descriptions in an interleaved manner (within subjects), task performance is similar for both approaches. This suggests that listeners engage more intensely with the system when they can expect it to be cooperative. This, rather than the actual informativity of the spoken feedback, may determine the efficiency of information uptake and performance.},
    author = {Mitev, Nikolina and Renner, Patrick and Pfeiffer, Thies and Staudte, Maria},
    issn = {2365-7464},
    journal = {Cognitive Research: Principles and Implications},
    keywords = {Human–computer interaction, Natural language generation, Listener gaze, Referential success, Multimodal systems},
    number = {3},
    publisher = {Springer Nature},
    title = {{Towards efficient human–machine collaboration. Effects of gaze-driven feedback and engagement on performance}},
    url = {https://pub.uni-bielefeld.de/record/2932893},
    doi = {10.1186/s41235-018-0148-x},
    volume = {3},
    year = {2018},
    }
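
    The two feedback strategies compared above reduce, schematically, to a decision rule over listener gaze: a fixation on a wrong object is either merely rejected or answered with a directional hint towards the target. The sketch below is a hypothetical reconstruction with invented names and a 2D layout; the actual system generates speech on the fly.

    def feedback(target_pos, gazed_pos, on_target, strategy="contrastive"):
        """Spoken feedback for the listener's current fixation."""
        if on_target:
            return "Yes, exactly!"
        if strategy == "underspecified":
            return "No, not that one!"
        # Contrastive: name the dominant direction from the fixated object
        # towards the intended target (a 2D layout is assumed here).
        dx = target_pos[0] - gazed_pos[0]
        dy = target_pos[1] - gazed_pos[1]
        if abs(dx) >= abs(dy):
            return "Further left!" if dx < 0 else "Further right!"
        return "Further up!" if dy > 0 else "Further down!"

    print(feedback((0.2, 0.5), (0.6, 0.5), on_target=False))  # Further left!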

  • F. Summann, T. Pfeiffer, and M. Preis, “Kooperative Entwicklung digitaler Services an Hochschulbibliotheken,” Bibliotheksdienst, vol. 52, iss. 8, p. 595–609, 2018. doi:10.1515/bd-2018-0070
    [BibTeX] [Download PDF]
    @article{2930431,
    author = {Summann, Friedrich and Pfeiffer, Thies and Preis, Matthias},
    issn = {2194-9646},
    journal = {Bibliotheksdienst},
    keywords = {Virtuelle Forschungsumgebung, Virtuelle Realität},
    number = {8},
    pages = {595--609},
    publisher = {De Gruyter},
    title = {{Kooperative Entwicklung digitaler Services an Hochschulbibliotheken}},
    url = {https://pub.uni-bielefeld.de/record/2930431},
    doi = {10.1515/bd-2018-0070},
    volume = {52},
    year = {2018},
    }

  • M. Andersen, K. L. Nielbo, U. Schjoedt, T. Pfeiffer, A. Roepstorff, and J. Sørensen, “Predictive minds in Ouija board sessions,” Phenomenology and the cognitive sciences, vol. 18, iss. 3, p. 577–588, 2018. doi:10.1007/s11097-018-9585-8
    [BibTeX] [Abstract] [Download PDF]

    Ouija board sessions are illustrious examples of how subjective feelings of control – the Sense of Agency (SoA) – can be manipulated in real-life settings. We present findings from a field experiment at a paranormal conference, where Ouija enthusiasts were equipped with eye trackers while using the Ouija board. Our results show that participants have a significantly lower probability of visually predicting letters in a Ouija board session compared to a condition in which they are instructed to deliberately spell out words with the Ouija board planchette. Our results also show that Ouija board believers report lower SoA compared to sceptical participants. These results support previous research which claims that a low sense of agency is caused by a combination of retrospective inference and an inhibition of predictive processes. Our results show that users in Ouija board sessions become increasingly better at predicting letters as responses unfold over time, and that meaningful responses from the Ouija board can only be accounted for when considering interactions that go on at the participant pair level. These results suggest that meaningful responses from the Ouija board may be an emergent property of interacting and predicting minds that increasingly impose structure on initially random events in Ouija sessions.

    @article{2921413,
    abstract = {Ouija board sessions are illustrious examples of how subjective feelings of control - the Sense of Agency (SoA) - can be manipulated in real-life settings. We present findings from a field experiment at a paranormal conference, where Ouija enthusiasts were equipped with eye trackers while using the Ouija board. Our results show that participants have a significantly lower probability of visually predicting letters in a Ouija board session compared to a condition in which they are instructed to deliberately spell out words with the Ouija board planchette. Our results also show that Ouija board believers report lower SoA compared to sceptical participants. These results support previous research which claims that a low sense of agency is caused by a combination of retrospective inference and an inhibition of predictive processes. Our results show that users in Ouija board sessions become increasingly better at predicting letters as responses unfold over time, and that meaningful responses from the Ouija board can only be accounted for when considering interactions that go on at the participant pair level. These results suggest that meaningful responses from the Ouija board may be an emergent property of interacting and predicting minds that increasingly impose structure on initially random events in Ouija sessions.},
    author = {Andersen, Marc and Nielbo, Kristoffer L. and Schjoedt, Uffe and Pfeiffer, Thies and Roepstorff, Andreas and Sørensen, Jesper},
    issn = {1572-8676},
    journal = {Phenomenology and the Cognitive Sciences},
    number = {3},
    pages = {577--588},
    publisher = {Springer Nature},
    title = {{Predictive minds in Ouija board sessions}},
    url = {https://pub.uni-bielefeld.de/record/2921413},
    doi = {10.1007/s11097-018-9585-8},
    volume = {18},
    year = {2018},
    }

  • T. Pfeiffer, C. Hainke, L. Meyer, M. Fruhner, and M. Niebling, “Virtual SkillsLab – Trainingsanwendung zur Infusionsvorbereitung (Wettbewerbssieger),” in Delfi workshops 2018. proceedings der pre-conference-workshops der 16. e-learning fachtagung informatik co-located with 16th e-learning conference of the german computer society (delfi 2018), frankfurt, germany, september 10, 2018, 2018.
    [BibTeX] [Abstract] [Download PDF]

    Nursing education has a great need for practical training. SkillsLabs, physical replicas of real working environments at the teaching institutions, offer the unique opportunity to acquire practical knowledge in close proximity to and in direct involvement with instructors, and to interlock it with the theoretical curriculum. Often, however, the necessary resources (rooms, equipment) are not available to a sufficient extent. Virtual SkillsLabs can partly cover this need and build a bridge between theory and practice. This contribution presents such an implementation with several expansion stages.

    @inproceedings{2932685,
    abstract = {In der Ausbildung in der Pflege gibt es großen Bedarf an praktischem Training. SkillsLabs, physikalische Nachbauten von realen Arbeitsräumen an den Lehrstätten, bieten die einzigartige Möglichkeit, praktisches Wissen in direkter Nähe und in direkter Einbindung mit den Lehrenden zu erarbeiten und mit der theoretischen Ausbildung zu verzahnen. Häufig stehen jedoch die notwendigen Ressourcen (Räume, Arbeitsmittel) nicht in ausreichendem Maße zur Verfügung. Virtuelle SkillsLabs können hier den Bedarf zum Teil abdecken und eine Brücke zwischen Theorie und Praxis bilden. Im Beitrag wird eine solche Umsetzung mit verschiedenen Ausbaustufen vorgestellt.},
    author = {Pfeiffer, Thies and Hainke, Carolin and Meyer, Leonard and Fruhner, Maik and Niebling, Moritz},
    booktitle = {DeLFI Workshops 2018. Proceedings der Pre-Conference-Workshops der 16. E-Learning Fachtagung Informatik co-located with 16th e-Learning Conference of the German Computer Society (DeLFI 2018), Frankfurt, Germany, September 10, 2018},
    editor = {Schiffner, Daniel},
    issn = {1613-0073},
    keywords = {Virtual Skills Lab, Virtuelle Realität},
    title = {{Virtual SkillsLab - Trainingsanwendung zur Infusionsvorbereitung (Wettbewerbssieger)}},
    url = {https://pub.uni-bielefeld.de/record/2932685},
    volume = {2250},
    year = {2018},
    }

  • P. Agethen, V. Subramanian Sekar, F. Gaisbauer, T. Pfeiffer, M. Otto, and E. Rukzio, “Behavior Analysis of Human Locomotion in Real World and Virtual Reality for Manufacturing Industry,” Acm transactions on applied perception (tap), vol. 15, iss. 3, 2018. doi:10.1145/3230648
    [BibTeX] [Abstract] [Download PDF]

    With the rise of immersive visualization techniques, many domains within the manufacturing industry are increasingly validating production processes in virtual reality (VR). The validity of the results gathered in such simulations, however, is widely unknown – in particular with regard to human locomotion behavior. To bridge this gap, this paper presents an experiment analyzing the behavioral disparity between human locomotion being performed without any equipment and in immersive virtual reality while wearing a head-mounted display (HMD). The presented study (n = 30) is split into three sections and covers linear walking, non-linear walking and obstacle avoidance. Special care has been given to design the experiment so that findings are generally valid and can be applied to a wide range of domains beyond the manufacturing industry. The findings provide novel insights into the effect of immersive virtual reality on specific gait parameters. In total, a comprehensive sample of 18.09 km is analyzed. The results reveal that the HMD had a medium effect (up to 13%) on walking velocity, on non-linear walking towards an oriented target and on clearance distance. The overall differences are modeled using multiple regression models, thus allowing general usage within various domains. Summarizing, it can be concluded that VR can be used to analyze and plan human locomotion; however, specific details may have to be adjusted in order to transfer findings to the real world.

    @article{2921256,
    abstract = {With the rise of immersive visualization techniques, many domains within the manufacturing industry are increasingly validating production processes in virtual reality (VR). The validity of the results gathered in such simulations, however, is widely unknown - in particular with regard to human locomotion behavior. To bridge this gap, this paper presents an experiment analyzing the behavioral disparity between human locomotion being performed without any equipment and in immersive virtual reality while wearing a head-mounted display (HMD). The presented study (n = 30) is split into three sections and covers linear walking, non-linear walking and obstacle avoidance. Special care has been given to design the experiment so that findings are generally valid and can be applied to a wide range of domains beyond the manufacturing industry. The findings provide novel insights into the effect of immersive virtual reality on specific gait parameters. In total, a comprehensive sample of 18.09 km is analyzed. The results reveal that the HMD had a medium effect (up to 13%) on walking velocity, on non-linear walking towards an oriented target and on clearance distance. The overall differences are modeled using multiple regression models, thus allowing general usage within various domains. Summarizing, it can be concluded that VR can be used to analyze and plan human locomotion; however, specific details may have to be adjusted in order to transfer findings to the real world.},
    author = {Agethen, Philipp and Subramanian Sekar, Viswa and Gaisbauer, Felix and Pfeiffer, Thies and Otto, Michael and Rukzio, Enrico},
    issn = {1544-3558},
    journal = {ACM Transactions on Applied Perception (TAP)},
    keywords = {CLF_RESEARCH_HIGHLIGHT},
    number = {3},
    publisher = {ACM},
    title = {{Behavior Analysis of Human Locomotion in Real World and Virtual Reality for Manufacturing Industry}},
    url = {https://nbn-resolving.org/urn:nbn:de:0070-pub-29212563, https://pub.uni-bielefeld.de/record/2921256},
    doi = {10.1145/3230648},
    volume = {15},
    year = {2018},
    }
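
    Since the reported disparities are captured as regression models, a correction can in principle be applied as a simple linear mapping from VR-measured to real-world velocities. The sketch below fits such a mapping with ordinary least squares; all velocity values are placeholders, not data from the study.

    import numpy as np

    # Placeholder paired observations in m/s; NOT data from the study.
    v_vr   = np.array([1.02, 1.10, 0.95, 1.21, 1.15])  # walking with HMD
    v_real = np.array([1.15, 1.24, 1.08, 1.33, 1.28])  # walking unequipped

    # Fit v_real ≈ b0 + b1 * v_vr by ordinary least squares.
    A = np.column_stack([np.ones_like(v_vr), v_vr])
    (b0, b1), *_ = np.linalg.lstsq(A, v_real, rcond=None)

    def correct(v_measured_in_vr):
        """Map a VR-measured velocity to a real-world estimate."""
        return b0 + b1 * v_measured_in_vr

    print(correct(1.10))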

  • S. Meyer zu Borgsen, P. Renner, F. Lier, T. Pfeiffer, and S. Wachsmuth, “Improving Human-Robot Handover Research by Mixed Reality Techniques,” in Vam-hri 2018. the inaugural international workshop on virtual, augmented and mixed reality for human-robot interaction. proceedings, 2018. doi:10.4119/unibi/2919957
    [BibTeX] [Download PDF]
    @inproceedings{2919957,
    author = {Meyer zu Borgsen, Sebastian and Renner, Patrick and Lier, Florian and Pfeiffer, Thies and Wachsmuth, Sven},
    booktitle = {VAM-HRI 2018. The Inaugural International Workshop on Virtual, Augmented and Mixed Reality for Human-Robot Interaction. Proceedings},
    location = {Chicago},
    title = {{Improving Human-Robot Handover Research by Mixed Reality Techniques}},
    url = {https://nbn-resolving.org/urn:nbn:de:0070-pub-29199579, https://pub.uni-bielefeld.de/record/2919957},
    doi = {10.4119/unibi/2919957},
    year = {2018},
    }

  • P. Agethen, M. Link, F. Gaisbauer, T. Pfeiffer, and E. Rukzio, “Counterbalancing virtual reality induced temporal disparities of human locomotion for the manufacturing industry,” in Proceedings of the 11th annual international conference on motion, interaction, and games – mig ’18, 2018. doi:10.1145/3274247.3274517
    [BibTeX] [Download PDF]
    @inproceedings{2932220,
    author = {Agethen, Philipp and Link, Max and Gaisbauer, Felix and Pfeiffer, Thies and Rukzio, Enrico},
    booktitle = {Proceedings of the 11th Annual International Conference on Motion, Interaction, and Games - MIG '18},
    isbn = {978-1-4503-6015-9},
    publisher = {ACM Press},
    title = {{Counterbalancing virtual reality induced temporal disparities of human locomotion for the manufacturing industry}},
    url = {https://pub.uni-bielefeld.de/record/2932220},
    doi = {10.1145/3274247.3274517},
    year = {2018},
    }

  • T. Pfeiffer and P. Renner, “Quantifying the interplay of gaze and gesture in deixis using an experimental-simulative approach,” in Eye-tracking in interaction. studies on the role of eye gaze in dialogue, G. Brône and B. Oben, Eds., John benjamins publishing company, 2018, vol. 10, p. 109–138. doi:10.1075/ais.10.06pfe
    [BibTeX] [Download PDF]
    @inbook{2931842,
    author = {Pfeiffer, Thies and Renner, Patrick},
    booktitle = {Eye-tracking in Interaction. Studies on the role of eye gaze in dialogue},
    editor = {Brône, Geert and Oben, Bert},
    isbn = {9789027201522},
    pages = {109--138},
    publisher = {John Benjamins Publishing Company},
    title = {{Quantifying the interplay of gaze and gesture in deixis using an experimental-simulative approach}},
    url = {https://pub.uni-bielefeld.de/record/2931842},
    doi = {10.1075/ais.10.06pfe},
    volume = {10},
    year = {2018},
    }

  • J. Blattgerste, P. Renner, and T. Pfeiffer, “Advantages of Eye-Gaze over Head-Gaze-Based Selection in Virtual and Augmented Reality under Varying Field of Views,” in Cogain ’18. proceedings of the symposium on communication by gaze interaction, 2018. doi:10.1145/3206343.3206349
    [BibTeX] [Abstract] [Download PDF]

    The current best practice for hands-free selection using Virtual and Augmented Reality (VR/AR) head-mounted displays is to use head-gaze for aiming and dwell-time or clicking for triggering the selection. There is an observable trend for new VR and AR devices to come with integrated eye-tracking units to improve rendering, to provide means for attention analysis or for social interactions. Eye-gaze has been successfully used for human-computer interaction in other domains, primarily on desktop computers. In VR/AR systems, aiming via eye-gaze could be significantly faster and less exhausting than via head-gaze. To evaluate benefits of eye-gaze-based interaction methods in VR and AR, we compared aiming via head-gaze and aiming via eye-gaze. We show that eye-gaze outperforms head-gaze in terms of speed, task load, required head movement and user preference. We furthermore show that the advantages of eye-gaze further increase with larger FOV sizes.

    @inproceedings{2919602,
    abstract = {The current best practice for hands-free selection using Virtual and Augmented Reality (VR/AR) head-mounted displays is to use head-gaze for aiming and dwell-time or clicking for triggering the selection. There is an observable trend for new VR and AR devices to come with integrated eye-tracking units to improve rendering, to provide means for attention analysis or for social interactions. Eye-gaze has been successfully used for human-computer interaction in other domains, primarily on desktop computers. In VR/AR systems, aiming via eye-gaze could be significantly faster and less exhausting than via head-gaze. To evaluate benefits of eye-gaze-based interaction methods in VR and AR, we compared aiming via head-gaze and aiming via eye-gaze. We show that eye-gaze outperforms head-gaze in terms of speed, task load, required head movement and user preference. We furthermore show that the advantages of eye-gaze further increase with larger FOV sizes.},
    author = {Blattgerste, Jonas and Renner, Patrick and Pfeiffer, Thies},
    booktitle = {COGAIN '18. Proceedings of the Symposium on Communication by Gaze Interaction},
    isbn = {978-1-4503-5790-6},
    keywords = {Augmented Reality, Virtual Reality, Assistance Systems, Head-Mounted Displays, Eye-Tracking, Field of View, Human Computer Interaction},
    location = {Warsaw, Poland},
    publisher = {ACM},
    title = {{Advantages of Eye-Gaze over Head-Gaze-Based Selection in Virtual and Augmented Reality under Varying Field of Views}},
    url = {https://nbn-resolving.org/urn:nbn:de:0070-pub-29196024, https://pub.uni-bielefeld.de/record/2919602},
    doi = {10.1145/3206343.3206349},
    year = {2018},
    }
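
    The baseline technique described above, dwell-time triggering on the aimed-at target, comes down to a per-frame timer. The following sketch is a generic reconstruction with an assumed 0.8 s threshold; it is not the study's implementation.

    import time

    class DwellSelector:
        """Triggers a selection once gaze stays on one target long enough."""

        def __init__(self, dwell_time=0.8):  # assumed threshold in seconds
            self.dwell_time = dwell_time
            self.current_target = None
            self.gaze_start = None

        def update(self, hit_target):
            """Call once per frame with the object the gaze (or head) ray
            currently hits, or None. Returns a newly selected object or None."""
            now = time.monotonic()
            if hit_target != self.current_target:
                # Gaze moved to a new target (or off-target): restart the timer.
                self.current_target = hit_target
                self.gaze_start = now
                return None
            if hit_target is not None and now - self.gaze_start >= self.dwell_time:
                self.gaze_start = now  # re-arm instead of re-firing every frame
                return hit_target
            return None

    # In a render loop: selected = selector.update(raycast(eye_or_head_ray))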

  • N. Mitev, P. Renner, T. Pfeiffer, and M. Staudte, “Using Listener Gaze to Refer in Installments Benefits Understanding,” in Proceedings of the 40th annual conference of the cognitive science society, 2018.
    [BibTeX] [Download PDF]
    @inproceedings{2930542,
    author = {Mitev, Nikolina and Renner, Patrick and Pfeiffer, Thies and Staudte, Maria},
    booktitle = {Proceedings of the 40th Annual Conference of the Cognitive Science Society},
    location = {Madison, Wisconsin, USA},
    title = {{Using Listener Gaze to Refer in Installments Benefits Understanding}},
    url = {https://pub.uni-bielefeld.de/record/2930542},
    year = {2018},
    }

  • T. Pfeiffer and N. Pfeiffer-Leßmann, “Virtual Prototyping of Mixed Reality Interfaces with Internet of Things (IoT) Connectivity,” I-com, vol. 17, iss. 2, p. 179–186, 2018. doi:10.1515/icom-2018-0025
    [BibTeX] [Abstract] [Download PDF]

    One key aspect of the Internet of Things (IoT) is that human-machine interfaces are disentangled from the physicality of the devices. This provides designers with more freedom, but may also lead to more abstract interfaces, as they lack the natural context created by the presence of the machine. Mixed Reality (MR), on the other hand, is a key technology that enables designers to create user interfaces anywhere, either linked to a physical context (augmented reality, AR) or embedded in a virtual context (virtual reality, VR). Especially today, designing MR interfaces is a challenge, as there is neither a common design language nor a set of standard functionalities or patterns. In addition, neither customers nor future users have substantial experience in using MR interfaces.

    @article{2930374,
    abstract = {One key aspect of the Internet of Things (IoT) is that human-machine interfaces are disentangled from the physicality of the devices. This provides designers with more freedom, but may also lead to more abstract interfaces, as they lack the natural context created by the presence of the machine. Mixed Reality (MR), on the other hand, is a key technology that enables designers to create user interfaces anywhere, either linked to a physical context (augmented reality, AR) or embedded in a virtual context (virtual reality, VR). Especially today, designing MR interfaces is a challenge, as there is neither a common design language nor a set of standard functionalities or patterns. In addition, neither customers nor future users have substantial experience in using MR interfaces.},
    author = {Pfeiffer, Thies and Pfeiffer-Leßmann, Nadine},
    issn = {2196-6826},
    journal = {i-com},
    number = {2},
    pages = {179--186},
    publisher = {Walter de Gruyter GmbH},
    title = {{Virtual Prototyping of Mixed Reality Interfaces with Internet of Things (IoT) Connectivity}},
    url = {https://pub.uni-bielefeld.de/record/2930374},
    doi = {10.1515/icom-2018-0025},
    volume = {17},
    year = {2018},
    }
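
    The decoupling described above can be prototyped with a small event bus: the MR interface publishes abstract UI events, and device adapters forward them to whatever IoT transport is in use. All names below are hypothetical; a production system would typically use a protocol such as MQTT.

    from typing import Callable, Dict, List

    class EventBus:
        """Routes named UI events from the MR prototype to device adapters."""

        def __init__(self):
            self._handlers: Dict[str, List[Callable[[dict], None]]] = {}

        def subscribe(self, event: str, handler: Callable[[dict], None]) -> None:
            self._handlers.setdefault(event, []).append(handler)

        def publish(self, event: str, payload: dict) -> None:
            for handler in self._handlers.get(event, []):
                handler(payload)

    bus = EventBus()
    # A device adapter; a real one might forward the payload to an MQTT topic.
    bus.subscribe("lamp/set", lambda p: print("device received:", p))
    # The virtual switch fires the same event whether it is rendered next to
    # the real lamp (AR) or inside a purely virtual room (VR).
    bus.publish("lamp/set", {"on": True})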