Monday, October 16, 2017

8:30 Registration
9:00 - 9:10 Welcome
9:10 - 10:20 Opening Keynote: Frank Steinicke

Super-Natural Interfaces for the Ultimate Display



Abstract: In his essay “The Ultimate Display” from 1965, Ivan E. Sutherland states that “The ultimate display would [...] be a room within which the computer can control the existence of matter [...]”. This general notion of a computer-mediated or virtual reality, in which synthetic objects or the entire virtual environment become indistinguishable from the real world, dates back to Plato’s “The Allegory of the Cave” and has been reconsidered again and again in science fiction literature as well as the movie industry. For instance, virtual reality is often used to question whether we truly “know” if our perceptions are real or not. Science fiction movies like “The Matrix” or the fictional holodeck from the Star Trek universe are prominent examples of this kind of perceptual ambiguity. Furthermore, in movies like Steven Spielberg’s “Minority Report”, Jon Favreau’s “Iron Man”, or Brad Bird’s “Mission Impossible 4”, actors seamlessly use free-hand gestures in space combined with speech to manipulate 3D holographic projections, while also perceiving haptic feedback when touching the virtual objects.


In my talk I will revisit some of the most visually impressive 3D user interfaces and experiences of such fictional ultimate displays. Of course, we cannot let a computer fully control the existence of matter, but we can fool our senses and give the user the illusion that it can after all. I will show how different ultimate displays can be implemented with current state-of-the-art technology by exploiting perceptually inspired interfaces. However, we will see that the resulting ultimate displays are not so ultimate after all, but pose novel and interesting research challenges and questions.


Frank Steinicke is a professor of Human-Computer Interaction at the Department of Informatics at the University of Hamburg. His research is driven by understanding human perceptual, cognitive and motor abilities and limitations in order to reform interaction as well as experience in computer-mediated realities. Frank Steinicke regularly serves as a panelist and speaker at major events in the area of virtual reality and human-computer interaction, and is on the IPC of various national and international conferences. He serves as program chair for IEEE VR 2017/2018, the most renowned scientific conference in the area of VR/AR. Furthermore, he is a member of the steering committee of the ACM SUI Symposium and the GI SIG VR/AR, and is currently editor of the Spatial Interfaces department of IEEE Computer Graphics & Applications.


10:20 - 10:40 Program Overview and Poster/Demo Fast Forward
10:40 - 11:00 Coffee Break (Poster/Demo Setup)
11:00 - 12:30 Session 1: Immersion & Presence
  • Smooth Immersion: The Benefits of Making the Transition to Virtual Environments a Continuous Process
    Dimitar Valkov, Steffen Flagge

    In this paper we discuss the benefits, the limitations, and different implementation options for smooth immersion into an HMD-based immersive virtual environment (IVE). We evaluated our concept in a preliminary user study, in which we tested users’ awareness, reality judgment and experience in the IVE when using different transition techniques to enter it. Our results show that a smooth transition into the IVE improves the user’s awareness and may increase the perceived interactivity of the system.
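
    As an aside on what a “continuous” transition might look like in code: the minimal Python sketch below (illustrative only, not the authors’ implementation) cross-fades between a real-world passthrough view and the virtual environment using a smoothstep ramp; the function name and the frame-compositing convention in the comment are assumptions.

        def transition_alpha(elapsed, duration):
            """Blend factor for entering the IVE: 0.0 shows only the
            real-world view, 1.0 only the virtual environment.
            Smoothstep easing avoids abrupt starts and stops."""
            t = min(max(elapsed / duration, 0.0), 1.0)
            return t * t * (3.0 - 2.0 * t)  # smoothstep: eases in and out

        # Called every frame: frame = (1 - a) * passthrough + a * virtual_scene
        # e.g. transition_alpha(2.5, 5.0) -> 0.5 at the midpoint of a 5 s fade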


  • RealME: The Influence of Body and Hand Representations on Body Ownership and Presence
    Sungchul Jung, Christian Sandor, Pamela J. Wisniewski, Charles E. Hughes

    The study presented in this paper extends earlier research involving body continuity by investigating whether the presence of real body cues (legs that look and move like one's own) alters one's sense of immersion in a virtual environment. The main hypothesis is that real body cues increase one's sense of body ownership and physical presence, even when those body parts are not essential to the activity on which one is focused. To test this hypothesis, we developed an experiment that uses a virtual human hand and arm that are directly observable but clearly synthetic, and a lower body seen through a virtual mirror, where the legs are sometimes visually accurate and personalized, and other times accurate in movement but not in appearance. The virtual right hand and arm are the focus of our scenario; the lower body, only visible in the mirror, is largely irrelevant to the task and only provides contextual perceptual information. By looking at combinations of arm-hand continuity (2 conditions), freedom or lack of it to move the hand (2 conditions), and realism or lack of it of the virtually reflected lower body (2 conditions), we are able to study the effects of each combination on the perceptions of body ownership and presence, critical features in virtual environments involving a virtual surrogate.


  • ReflectiveSpineVR: An Immersive Spine Surgery Simulation with Interaction History Capabilities
    Ahmed E. Mostafa, Won Hyung A. Ryu, Kazuki Takashima, Sonny Chan, Mario Costa Sousa, Ehud Sharlin

    This paper contributes ReflectiveSpineVR, an immersive spine surgery simulation enriched with interaction history capabilities aimed at supporting effective learning and training. The provided interaction history features are based on a design study we conducted to explore what makes an effective interaction history representation in spatial tasks. Existing surgical simulation systems provide only a crude way of supporting repetitive practice, in which the simulation needs to be restarted every time. Working closely with medical collaborators and following an iterative process, we present our novel approach to providing users with nonlinear interaction history capabilities and supporting repetitive practice, including how such features were realized in our ReflectiveSpineVR prototype. We conclude the paper with the results of a preliminary evaluation of ReflectiveSpineVR, highlighting the positive feedback regarding our history representation approach and the interface's benefits.

12:30 - 13:30 Lunch Break
13:30 - 15:00 Session 2: User Interfaces
  • CloudBits: Supporting Conversations Through Augmented Zero-query Search Visualization
    Florian Müller, Sebastian Günther, Azita Hosseini Nejad, Niloofar Dezfuli, Mohammadreza Khalilbeigi, Max Mühlhäuser

    The retrieval of additional information from public (e.g., map data) or private (e.g., e-mail) information sources using personal smart devices is a common habit in today's co-located conversations. This behavior imposes challenges in two main areas: 1) cognitive focus switching and 2) information sharing. In this paper, we explore a novel approach to conversation support through augmented information bits, allowing users to see and access information right in front of their eyes. To that end, we 1) investigate the requirements for the design of a user interface to support conversations through proactive information retrieval in an exploratory study. Based on the results, we 2) present CloudBits: a set of visualization and interaction techniques to provide mutual awareness and enhance coupling in conversations through augmented zero-query search visualization, along with its prototype implementation. Finally, we 3) report the findings of a qualitative evaluation and conclude with guidelines for the design of user interfaces for conversation support.


  • Visibility Perception and Dynamic Viewsheds for Topographic Maps and Models
    Nico Li, Wesley Willett, Ehud Sharlin, Mario Costa Sousa

    We compare the effectiveness of 2D maps and 3D terrain models for visibility tasks and demonstrate how interactive dynamic viewsheds can improve performance for both types of terrain representations. In general, the two-dimensional nature of classic topographic maps limits their legibility and can make complex yet typical cartographic tasks, like determining the visibility between locations, difficult. Both 3D physical models and interactive techniques like dynamic viewsheds have the potential to improve viewers’ understanding of topography, but their impact has not been deeply explored. We evaluate the effectiveness of 2D maps, 3D models, and interactive viewsheds for both simple and complex visibility tasks. Our results demonstrate the benefits of the dynamic viewshed technique and highlight opportunities for additional tactile interactions with maps and models. Based on these findings we present guidelines for improving the design and usability of future topographic maps and models.
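
    For context, a viewshed is the set of terrain locations visible from an observer point. As a rough illustration of the concept (not the authors' implementation), the minimal Python sketch below computes a viewshed on a gridded elevation model by sampling the line of sight to every cell; the function name, grid representation, and 1.7 m eye height are illustrative assumptions.

        import numpy as np

        def viewshed(height, observer, eye_height=1.7):
            """Boolean visibility mask for a grid elevation model,
            seen from one observer cell (naive line-of-sight sampling)."""
            rows, cols = height.shape
            oy, ox = observer
            eye = height[oy, ox] + eye_height           # observer's eye elevation
            visible = np.zeros((rows, cols), dtype=bool)
            for ty in range(rows):
                for tx in range(cols):
                    steps = max(abs(ty - oy), abs(tx - ox))
                    if steps == 0:
                        visible[ty, tx] = True          # the observer's own cell
                        continue
                    target = height[ty, tx]
                    clear = True
                    for s in range(1, steps):
                        t = s / steps
                        sy = round(oy + t * (ty - oy))  # nearest terrain sample
                        sx = round(ox + t * (tx - ox))
                        los = eye + t * (target - eye)  # sight-line height here
                        if height[sy, sx] > los:        # terrain blocks the view
                            clear = False
                            break
                    visible[ty, tx] = clear
            return visible

    This brute-force version is cubic in the grid dimension; production GIS tools use faster sweep-based viewshed algorithms, but the per-cell visibility test is the same idea.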


  • Using Artificial Landmarks to Improve Revisitation Performance and Spatial Learning in Linear Control Widgets
    Md. Sami Uddin, Carl Gutwin, Alix Goguey

    Linear interface controllers such as sliders and scrollbars are primary tools for navigating through linear content such as videos or text documents. Linear control widgets provide an abstract representation of the entire document in the body of the widget, in that they map each document location to a different position of the slider knob or scroll thumb. In most cases, however, these linear mappings are visually undifferentiated – all locations in the widget look the same – and so it can be difficult to build up spatial knowledge of the document, and difficult to navigate back to locations that the user has already visited. In this paper, we examine a technique that can address this problem: artificial landmarks that are added to a linear control widget in order to improve spatial understanding and revisitation. We carried out a study with two types of content (a video and a PDF document) to test the effects of adding artificial landmarks. We compared standard widgets (with no landmarks) to two augmented designs: one that placed arbitrary abstract icons in the body of the widget, and one that added thumbnails extracted from the document. We found that for both kinds of content, adding artificial landmarks significantly improved revisitation performance and user preference, with the thumbnail landmarks being the fastest and most accurate in both cases. Our study demonstrates that augmenting linear control widgets with artificial landmarks can provide substantial benefits for document navigation.
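
    As a side note on the document-to-widget mapping the abstract relies on, here is a minimal Python sketch (not the authors' code; all names and the thumb-travel convention are assumptions) of how landmark icons could be placed along a scrollbar track using the same linear mapping as the scroll thumb.

        def landmark_track_positions(offsets, doc_length, track_px, thumb_px):
            """Map document offsets (0..doc_length) to pixel centres on a
            scrollbar track; the thumb centre travels from thumb_px/2 to
            track_px - thumb_px/2, and landmarks use the same mapping."""
            travel = track_px - thumb_px
            return [thumb_px / 2 + (off / doc_length) * travel for off in offsets]

        # Three landmarks in a 5000-line document, 600 px track, 40 px thumb:
        # landmark_track_positions([0, 1500, 4999], 5000, 600, 40)
        # -> [20.0, 188.0, 579.888]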

15:00 - 15:30 Poster Session & Coffee Break
15:30 - 17:20 Session 3: Haptics
  • Stylo and Handifact: Modulating Haptic Perception through Visualisations for Posture Training in Augmented Reality
    Nicholas Katzakis, Jonathan Tong, Oscar Javier Ariza Nunez, Lihan Chen, Gudrun Klinker, Brigitte Roeder, Frank Steinicke

    Stylo-Handifact is a novel spatial user interface consisting of a haptic device (i.e., Stylo) attached to the forearm and a visualization of a virtual hand (i.e., Handifact), which in combination provide visuo-haptic feedback for posture training applications. In this paper we evaluate the mutual effects of Handifact and Stylo on visuo-haptic sensations in a psychophysical experiment. The results show that a visual stimulus can modulate the perceived strength of a haptic stimulus by more than 5%. A wrist docking task indicates that Stylo-Handifact results in improved task completion time as compared to a state-of-the-art technique.


  • Evaluating the Effect of Tangible Virtual Reality on Spatial Perspective Taking Ability
    Jack Shen-Kuen Chang, Georgina Yeboah, Alison Doucette, Paul Clifton, Michael Nitsche, Timothy Welsh, Ali Mazalek

    As shown in many large-scale and longitudinal studies, spatial ability is strongly associated with STEM (science, technology, engineering, and mathematics) learning and career success. At the same time, a growing volume of research connects cognitive science theories with tangible/embodied interactions (TEI) and virtual reality (VR) to offer novel means to support spatial cognition. But very few VR-TEI systems are specifically designed to support spatial ability, nor are they evaluated with respect to spatial ability. In this paper, we present the background, approach, and evaluation of TASC (Tangibles for Augmenting Spatial Cognition), a VR-TEI system built to support spatial perspective taking ability. We tested 3 conditions (tangible VR, keyboard/mouse, control; n=46). Analysis of the pre/post-test change in performance on a perspective taking test revealed that only the VR-TEI group showed statistically significant improvements. The results highlight the role of tangible VR design for enhancing spatial cognition.


  • Analysing the Effect of Tangible User Interfaces on Spatial Memory
    Markus Löchtefeld, Frederik Wiehr, Sven Gehring

    Tangible User Interfaces (TUIs) allow for effective and easy interaction with digital information by encapsulating it in a physical form. Especially in combination with interactive surfaces, TUIs have been studied in a variety of forms and application cases. By taking advantage of the human ability to grasp and manipulate physical objects, they ease collaboration and learning. In this paper we study the effects of TUIs on spatial memory. In our study we compare participants’ performance in recalling the positions of buildings that they had previously placed on an interactive tabletop using either a TUI or a touch-based GUI. While 83.3% of the participants reported in their self-assessment that they performed better at recalling the positions when using the GUI, our results show that participants were on average 24.5% more accurate when using the TUI.


  • HaptoBend: Shape-Changing Passive Haptic Feedback in Virtual Reality
    John C. McClelland, Robert J. Teather, Audrey Girouard

    We present HaptoBend, a novel shape-changing input device providing passive haptic feedback (PHF) for a wide spectrum of objects in virtual reality (VR). Past research in VR shows that PHF increases presence and improves user task performance. However, providing PHF for multiple objects usually requires complex, immobile systems or multiple props. HaptoBend addresses this problem by allowing users to bend the device into 2D plane-like shapes and multi-surface 3D shapes. We believe HaptoBend’s physical approximations of virtual objects can provide realistic haptic feedback, drawing on research demonstrating the dominance of human vision over other senses in VR. To test the effectiveness of HaptoBend in matching 2D planar and 3D multi-surface shapes, we conducted an experiment modeled after gesture elicitation studies with 20 participants. High goodness and ease scores show that shape-changing passive haptic devices like HaptoBend are an effective approach to generalized haptics. Further analysis supports the use of physical approximations for realistic haptic feedback.

19:00 - 23:59 Conference Banquet

Jurys Inn

Tuesday, October 17, 2017