Monday, October 16, 2017

8:30 Registration
9:00 - 9:10 Welcome
9:10 - 10:20 Opening Keynote: Frank Steinicke

Super-Natural Interfaces for the Ultimate Display



Abstract: In his essay "The Ultimate Display" from 1965, Ivan E. Sutherland states that "The ultimate display would [...] be a room within which the computer can control the existence of matter [...]". This general notion of a computer-mediated or virtual reality, in which synthetic objects or the entire virtual environment become indistinguishable from the real world, dates back to Plato's "Allegory of the Cave" and has been revisited again and again in science fiction literature as well as the movie industry. For instance, virtual reality is often used to question whether we truly "know" if our perceptions are real or not. Science fiction movies like "The Matrix" and the fictional holodeck from the Star Trek universe are prominent examples of this kind of perceptual ambiguity. Furthermore, in movies like Steven Spielberg's "Minority Report", Jon Favreau's "Iron Man", or Brad Bird's "Mission Impossible 4", actors seamlessly use free-hand gestures in space combined with speech to manipulate 3D holographic projections, while also perceiving haptic feedback when touching the virtual objects.


In my talk I will revisit some of the most visually impressive 3D user interfaces and experiences of such fictional ultimate displays. In fact, we cannot let a computer fully control the existence of matter, but we can fool our senses and give a user the illusion that the computer can after all. I will show how different ultimate displays can be implemented with current state-of-the-art technology by exploiting perceptually inspired interfaces. However, we will see that the resulting ultimate displays are not so ultimate after all, but instead pose interesting new research challenges and questions.


Frank Steinicke is a professor of Human-Computer Interaction in the Department of Informatics at the University of Hamburg. His research is driven by understanding human perceptual, cognitive, and motor abilities and limitations in order to reform interaction and experience in computer-mediated realities. Frank Steinicke regularly serves as a panelist and speaker at major events in the area of virtual reality and human-computer interaction and is on the IPC of various national and international conferences. He serves as program chair for IEEE VR 2017/2018, the most renowned scientific conference in the area of VR/AR. Furthermore, he is a member of the steering committees of the ACM SUI Symposium and the GI SIG VR/AR, and he is currently editor of the IEEE Computer Graphics & Applications department on Spatial Interfaces.


10:20 - 10:40 Program Overview and Poster/Demo Fast Forward
10:40 - 11:00 Coffee Break (Poster/Demo Setup)
11:00 - 12:30 Session 1: Immersion & Presence (chair: Kyle Johnsen)
  • Smooth Immersion: The Benefits of Making the Transition to Virtual Environments a Continuous Process
    Dimitar Valkov, Steffen Flagge

    In this paper we discuss the benefits and limitations of, as well as different implementation options for, smooth immersion into an HMD-based IVE. We evaluated our concept in a preliminary user study, in which we tested users' awareness, reality judgment, and experience in the IVE when using different transition techniques to enter it. Our results show that a smooth transition into the IVE improves the user's awareness and may increase the perceived interactivity of the system.


  • RealME: The Influence of Body and Hand Representations on Body Ownership and Presence
    Sungchul Jung, Christian Sandor, Pamela J. Wisniewski, Charles E. Hughes

    The study presented in this paper extends earlier research involving body continuity by investigating whether the presence of real body cues (legs that look like and move like one's own) alters one's sense of immersion in a virtual environment. The main hypothesis is that real body cues increase one's sense of body ownership and physical presence, even when those body parts are not essential to the activity on which one is focused. To test this hypothesis, we developed an experiment that uses a virtual human hand and arm that are directly observable but clearly synthetic, and a lower body seen through a virtual mirror, where the legs are sometimes visually accurate and personalized, and other times accurate in movement but not in appearance. The virtual right hand and arm are the focus of our scenario; the lower body, only visible in the mirror, is largely irrelevant to the task and only provides contextual perceptual information. By looking at combinations of arm-hand continuity (2 conditions), freedom or lack of it to move the hand (2 conditions), and realism or lack of it of the virtually reflected lower body (2 conditions), we are able to study the effects of each combination on the perceptions of body ownership and presence, critical features in virtual environments involving a virtual surrogate.


  • ReflectiveSpineVR: An Immersive Spine Surgery Simulation with Interaction History Capabilities
    Ahmed E. Mostafa, Won Hyung A. Ryu, Kazuki Takashima, Sonny Chan, Mario Costa Sousa, Ehud Sharlin

    This paper contributes ReflectiveSpineVR, an immersive spine surgery simulation enriched with interaction history capabilities aimed at supporting effective learning and training. The provided interaction history features are based on a design study we conducted to explore what makes an effective interaction history representation in spatial tasks. Existing surgical simulation systems provide only crude support for repetitive practice, requiring the simulation to be restarted every time. Working closely with medical collaborators and following an iterative process, we present our novel approach to providing users with nonlinear interaction history capabilities and supporting repetitive practice, including how these features were realized in our ReflectiveSpineVR prototype. We conclude the paper with the results of a preliminary evaluation of ReflectiveSpineVR, highlighting the positive feedback regarding our history representation approach and the benefits of the interface.

12:30 - 13:30 Lunch Break
13:30 - 15:00 Session 2: User Interfaces (chair: Michele Fiorentino)
  • CloudBits: Supporting Conversations Through Augmented Zero-query Search Visualization
    Florian Müller, Sebastian Günther, Azita Hosseini Nejad, Niloofar Dezfuli, Mohammadreza Khalilbeigi, Max Mühlhäuser

    The retrieval of additional information from public (e.g., map data) or private (e.g., e-mail) information sources using personal smart devices is a common habit in today's co-located conversations. This behavior imposes challenges in two main areas: 1) cognitive focus switching and 2) information sharing. In this paper, we explore a novel approach for conversation support through augmented information bits, allowing users to see and access information right in front of their eyes. To that end, we 1) investigate the requirements for the design of a user interface that supports conversations through proactive information retrieval in an exploratory study. Based on the results, we 2) present CloudBits, a set of visualization and interaction techniques to provide mutual awareness and enhance coupling in conversations through augmented zero-query search visualization, along with its prototype implementation. Finally, we 3) report the findings of a qualitative evaluation and conclude with guidelines for the design of user interfaces for conversation support.


  • Visibility Perception and Dynamic Viewsheds for Topographic Maps and Models
    Nico Li, Wesley Willett, Ehud Sharlin, Mario Costa Sousa

    We compare the effectiveness of 2D maps and 3D terrain models for visibility tasks and demonstrate how interactive dynamic viewsheds can improve performance for both types of terrain representation. In general, the two-dimensional nature of classic topographic maps limits their legibility and can make complex yet typical cartographic tasks, such as determining the visibility between locations, difficult. Both 3D physical models and interactive techniques like dynamic viewsheds have the potential to improve viewers' understanding of topography, but their impact has not been deeply explored. We evaluate the effectiveness of 2D maps, 3D models, and interactive viewsheds for both simple and complex visibility tasks. Our results demonstrate the benefits of the dynamic viewshed technique and highlight opportunities for additional tactile interactions with maps and models. Based on these findings we present guidelines for improving the design and usability of future topographic maps and models.


  • Using Artificial Landmarks to Improve Revisitation Performance and Spatial Learning in Linear Control Widgets
    Md. Sami Uddin, Carl Gutwin, Alix Goguey

    Linear interface controllers such as sliders and scrollbars are primary tools for navigating through linear content such as videos or text documents. Linear control widgets provide an abstract representation of the entire document in the body of the widget, in that they map each document location to a different position of the slider knob or scroll thumb. In most cases, however, these linear mappings are visually undifferentiated – all locations in the widget look the same – and so it can be difficult to build up spatial knowledge of the document, and difficult to navigate back to locations that the user has already visited. In this paper, we examine a technique that can address this problem: artificial landmarks that are added to a linear control widget in order to improve spatial understanding and revisitation. We carried out a study with two types of content (a video, and a PDF document) to test the effects of adding artificial landmarks. We compared standard widgets (with no landmarks) to two augmented designs: one that placed arbitrary abstract icons in the body of the widget, and one that added thumbnails extracted from the document. We found that for both kinds of content, adding artificial landmarks significantly improved revisitation performance and user preference, with the thumbnail landmarks fastest and most accurate in both cases. Our study demonstrates that augmenting linear control widgets with artificial landmarks can provide substantial benefits for document navigation.

15:00 - 15:30 Poster Session & Coffee Break
15:30 - 17:20 Session 3: Haptics (chair: Francisco Ortega)
  • Stylo and Handifact: Modulating Haptic Perception through Visualisations for Posture Training in Augmented Reality
    Nicholas Katzakis, Jonathan Tong, Oscar Javier Ariza Nunez, Lihan Chen, Gudrun Klinker, Brigitte Roeder, Frank Steinicke

    Stylo-Handifact is a novel spatial user interface consisting of a haptic device (i.e., Stylo) attached to the forearm and a visualization of a virtual hand (i.e., Handifact), which in combination provide visuo-haptic feedback for posture training applications. In this paper we evaluate the mutual effects of Handifact and Stylo on visuo-haptic sensations in a psychophysical experiment. The results show that a visual stimulus can modulate the perceived strength of a haptic stimulus by more than 5%. A wrist docking task indicates that Stylo-Handifact results in improved task completion time as compared to a state-of-the-art technique.


  • Evaluating the Effect of Tangible Virtual Reality on Spatial Perspective Taking Ability
    Jack Shen-Kuen Chang, Georgina Yeboah, Alison Doucette, Paul Clifton, Michael Nitsche, Timothy Welsh, Ali Mazalek

    As shown in many large-scale and longitudinal studies, spatial ability is strongly associated with STEM (science, technology, engineering, and mathematics) learning and career success. At the same time, a growing volume of research connects cognitive science theories with tangible/embodied interaction (TEI) and virtual reality (VR) to offer novel means of supporting spatial cognition. However, very few VR-TEI systems are specifically designed to support spatial ability or evaluated with respect to it. In this paper, we present the background, approach, and evaluation of TASC (Tangibles for Augmenting Spatial Cognition), a VR-TEI system built to support spatial perspective taking ability. We tested 3 conditions (tangible VR, keyboard/mouse, control; n=46). Analysis of the pre/post-test change in performance on a perspective taking test revealed that only the VR-TEI group showed statistically significant improvements. The results highlight the role of tangible VR design in enhancing spatial cognition.


  • Analysing the Effect of Tangible User Interfaces on Spatial Memory
    Markus Löchtefeld, Frederik Wiehr, Sven Gehring

    Tangible User Interfaces (TUIs) allow for effective and easy interaction with digital information by encapsulating it in a physical form. Especially in combination with interactive surfaces, TUIs have been studied in a variety of forms and application cases. By taking advantage of the human ability to grasp and manipulate objects, they ease collaboration and learning. In this paper we study the effects of TUIs on spatial memory. In our study we compare participants' performance in recalling the positions of buildings that they had previously placed on an interactive tabletop using either a TUI or a touch-based GUI. While 83.3% of the participants reported in their self-assessment that they performed better at recalling the positions when using the GUI, our results show that participants were on average 24.5% more accurate when using the TUI.


  • HaptoBend: Shape-Changing Passive Haptic Feedback in Virtual Reality
    John C. McClelland, Robert J. Teather, Audrey Girouard

    We present HaptoBend, a novel shape-changing input device providing passive haptic feedback (PHF) for a wide spectrum of objects in virtual reality (VR). Past research in VR shows that PHF increases presence and improves user task performance. However, providing PHF for multiple objects usually requires complex, immobile systems or multiple props. HaptoBend addresses this problem by allowing users to bend the device into 2D plane-like shapes and multi-surface 3D shapes. Drawing on research demonstrating the dominance of human vision over the other senses in VR, we believe HaptoBend's physical approximations of virtual objects can provide realistic haptic feedback. To test the effectiveness of HaptoBend in matching 2D planar and 3D multi-surface shapes, we conducted an experiment modeled after gesture elicitation studies with 20 participants. High goodness and ease scores show that shape-changing passive haptic devices like HaptoBend are an effective approach to generalized haptics. Further analysis supports the use of physical approximations for realistic haptic feedback.

19:00 - 23:59 Conference Banquet (Jurys Inn)


Tuesday, October 17, 2017

8:30 Registration
9:00 - 10:30 Session 4: Gaze (chair: Markus Löchtefeld)
  • The Eyes Don’t Have It: An Empirical Comparison of Head-Based and Eye-Based Selection in Virtual Reality
    YuanYuan Qian, Robert J. Teather

    We present a study comparing selection performance between three eye/head interaction techniques using the recently released FOVE head-mounted display (HMD). The FOVE offers an integrated eye tracker, which we use as an alternative to the potentially fatiguing and uncomfortable head-based selection used with other commercial devices. Our experiment was modelled after the ISO 9241-9 reciprocal selection task, with targets presented at varying depths in a custom virtual environment. We compared eye-based selection and head-based selection (i.e., gaze direction) in isolation, as well as a third condition that used both eye tracking and head tracking at once. Results indicate that eye-only selection offered the worst performance in terms of error rate, selection times, and throughput. Head-only selection offered significantly better performance.


  • Gaze + Pinch Interaction in Virtual Reality
    Ken Pfeuffer, Benedikt Mayer, Diako Mardanbegi, Hans Gellersen

    Virtual reality affords experimentation with human abilities beyond what's possible in the real world, toward novel senses of interaction. In many interactions, the eyes naturally point at objects of interest while the hands skilfully manipulate in 3D space. We explore a particular combination for virtual reality, the Gaze + Pinch interaction technique. It integrates eye gaze to select targets and indirect freehand gestures to manipulate them. This keeps gesture use as intuitive as direct physical manipulation, but the gesture's effect can be applied to any object the user sees, whether located near or far. In this paper, we describe novel interaction concepts and an experimental system prototype that bring together interaction technique variants, menu interfaces, and applications into one unified virtual experience. Early application examples were developed and tested, including 3D manipulation, scene navigation, and image zooming, illustrating a range of advanced interaction capabilities on targets at any distance, without using controllers.


  • EyeSee360: Designing a Visualization Technique for Out-of-view Objects in Head-mounted Augmented Reality
    Uwe Gruenefeld, Dag Ennenga, Abdallah El Ali, Wilko Heuten, Susanne Boll

    Head-mounted displays allow users to augment reality or dive into a virtual one. However, these 3D spaces often come with problems due to objects that may be out of view. Visualizing these out-of-view objects is useful in certain scenarios, such as situation monitoring during ship docking. To address this, we designed a lo-fi prototype of our EyeSee360 system and, based on user feedback, subsequently implemented EyeSee360. We evaluate our technique against well-known 2D off-screen object visualization techniques (Arrow, Halo, Wedge) adapted for head-mounted Augmented Reality, and find that EyeSee360 results in the lowest error for direction estimation of out-of-view objects. Based on our findings, we outline the limitations of our approach and discuss the usefulness of the lo-fi prototyping tool we developed.

10:30 - 11:00 Demo Session & Coffee Break
11:00 - 12:00 Spatial User Interaction Panel
    Florian Daiber (moderator), Robert W. Lindeman, Kyle Johnsen, Sriram Subramanian, Wolfgang Stuerzlinger

    Abstract: In this panel, we will discuss the current state of Spatial User Interfaces (SUI) and the new research challenges that await us. The discussion will start with the topic of field studies and practical applications of SUI technologies in the wild. Most current research focuses on controlled settings; exploring how these technologies can be applied outside the laboratory will therefore be of particular relevance.


    Florian Daiber is a post-doctoral researcher at the Innovative Retail Laboratory (IRL) at the German Research Center for Artificial Intelligence (DFKI) in Saarbrücken, Germany. His main research is in the field of human-computer interaction, 3D user interfaces and ubiquitous sports technologies. Florian is currently involved in projects on affective lighting in retail environments, 3D interaction with stereoscopic displays, mobile augmented reality and wearable sports technologies.


    Kyle Johnsen is an Associate Professor in the School of Electrical and Computer Engineering at the University of Georgia. His highly interdisciplinary applied research is in the design and evaluation of novel human-computer systems that address societal problems, particularly those involving ubiquitous computing, intelligent agents, and virtual reality. Examples include the Marine Debris Tracker App (marinedebris.engr.uga.edu), the Virtual STEM Buddies exhibit at the Children's Museum of Atlanta, and the medical training tool nervesim.com. In 2017, he became the founding director of the Georgia Informatics Institutes for Research and Education, which is leading the advancement of computational tools across disciplines and includes faculty members from over half of the colleges at the University of Georgia.


    Rob Lindeman has been doing research in the field of Virtual Reality since 1993. His work focuses on immersive, multi-sensorial feedback systems for VR, AR, and gaming, as well as natural and non-fatiguing interaction. He is Professor of HCI at the Human Interface Technology Lab New Zealand (HIT Lab NZ) at the University of Canterbury. Prior to that, Rob was in the CS Department at Worcester Polytechnic Institute (WPI) in the USA and director of WPI’s Interactive Media & Game Development programme. Rob holds a BA from Brandeis University (USA), an MS from the University of Southern California (USA), and an ScD from the George Washington University (USA). Rob is a Senior Member of the IEEE and ACM. He is an avid geocacher, mountain biker, skier, and soccer player.


    Sriram Subramanian is a Professor of Informatics at the University of Sussex, where he leads the Interact Lab (www.interact-lab.com). Before joining Sussex, he was a Professor of Human-Computer Interaction at the University of Bristol and, prior to that, a senior scientist at Philips Research Netherlands. He has published over 100 research articles on designing novel user experiences through a combination of physical science, engineering, and creativity. He has received research funding from EPSRC (responsive mode), ERC (Starting Grant and PoC), the EU (FET-Open), and industry. He is also the co-founder of Ultrahaptics (www.ultrahaptics.com), a spin-out company that aims to commercialise mid-air haptics.


    Building on his deep expertise in virtual reality and human-computer interaction, Dr. Stuerzlinger is a leading researcher in spatial and three-dimensional user interfaces. He received his doctorate from the Vienna University of Technology, was a postdoctoral researcher at the University of North Carolina at Chapel Hill, and was a professor at York University in Toronto. Since 2014, he has been a full professor at the School of Interactive Arts + Technology at Simon Fraser University in Vancouver, Canada. His work aims to find innovative solutions for real-world problems. Current research projects include better interaction techniques for spatial applications, new human-in-the-loop systems for big data analysis (visual analytics and immersive analytics), the characterization of the effects of technology limitations on human performance, investigations of human behaviors with occasionally failing technologies, user interfaces for versions, scenarios, and alternatives, and new virtual reality hardware and software.

12:30 - 13:30 Lunch Break
13:30 - 15:00 Session 5: Systems & Applications (chair: Dimitar Valkov)
  • Evaluation of Finger Position Estimation with a Small Ranging Sensor Array
    Yu Ishikawa, Buntarou Shizuki, Junichi Hoshino

    We implemented an input interface named 'Novest' that can estimate the finger position and classify the finger state (out-of-range/hovering/touching) on the back of the hand using a small ranging sensor array attached to the side of a smartwatch; each of the small ranging sensors provides distance data and signal strength data. With a prototype of Novest, we conducted a first evaluation to assess its basic performance. In this paper, we conduct a formal user study with two conditions, sitting and standing, to evaluate the practical performance of Novest as an input interface. The results show that the finger position estimation accuracies in the sitting and standing conditions are 4.2 mm and 5.0 mm, respectively. Additionally, the finger state classification accuracies, with parameters adjusted by grid search, are 97.2% and 97.9% in the sitting and standing conditions, respectively.


  • GestureDrawer: One-Handed Interaction Technique for Spatial User-Defined Imaginary Interfaces
    Teo Babic, Harald Reiterer, Michael Haller

    Existing empty-handed mid-air interaction techniques for system control are typically limited to a confined gesture set or to point-and-select on graphical user interfaces. In this paper, we introduce GestureDrawer, a one-handed interaction technique for a 3D imaginary interface. Our approach allows users to self-define an imaginary interface, acquire visuospatial memory of the position of its controls in empty space, and select or manipulate those controls by moving their hand in all three dimensions. We evaluate our approach with three user studies and demonstrate that users can indeed position imaginary controls in 3D empty space and select them with an accuracy of 93%, without receiving any feedback and without fixed landmarks (e.g., a second hand). Further, we show that imaginary interaction is generally faster than mid-air interaction with graphical user interfaces, and that users can retrieve the position of their imaginary controls even after a proprioception disturbance. We condense our findings into several design recommendations and present automotive applications.


  • TriggerWalking: A Biomechanically-Inspired Locomotion User Interface for Efficient Realistic Virtual Walking
    Bhuvaneswari Sarupuri, Simon Hoermann, Robert Lindeman, Frank Steinicke

    Most current virtual reality (VR) applications use some form of teleportation to cover large distances, or real walking in room-scale setups, for moving through virtual environments. Though real walking is the most natural option for medium distances, it becomes physically demanding and inefficient after prolonged use, while the sudden viewpoint changes experienced with teleportation often lead to disorientation. To close the gap between travel over long and short distances, we introduce TriggerWalking, a biomechanically inspired locomotion user interface for efficient, realistic virtual walking. The idea is to map the human embodied ability of walking to a finger-based locomotion technique. Using the triggers of common VR controllers, the user can generate near-realistic virtual bipedal steps. We analyzed the head oscillations of VR users while they walked with a head-mounted display, and used the data to simulate realistic walking motions with respect to the trigger pulls. We evaluated how the simulation of walking biomechanics affects task performance and spatial cognition. We also compared the usability of TriggerWalking with joystick, teleportation, and walking-in-place techniques. The results show that users can use TriggerWalking efficiently while still benefiting from the inherent advantages of real walking.

15:00 - 15:30 Coffee Break & Poster/Demo Viewing
15:30 - 16:40 Closing Keynote: Alex Schwartz

    Intuitive Spatial Interactions in VR



    Abstract: As a day-one launch title on every 6DOF VR system in 2016 (HTC Vive, Oculus Touch, and PlayStation VR), "Job Simulator" was an early pioneer of hand-based commercial VR design and is seen as providing one of the most intuitive experiences to date. Alex Schwartz, CEOwl of Owlchemy Labs, will dissect the UX paradigms and lessons learned building spatial interactions throughout the development of Owlchemy's titles, in addition to sharing his thoughts on the future of 6DOF hand interactions in VR. Alex will dive into the development process for new interactions at Owlchemy, ways to address incorrect affordances, in-house revelations such as "Tomato Presence", and the story behind a highly accessible menu interface consisting of an edible burrito.


    Alex Schwartz is the CEO (Chief Executive Owl) and Janitor of VR studio Owlchemy Labs, creators of the HTC Vive, Oculus Touch, and PlayStation VR triple-platform launch title "Job Simulator" as well as "Rick and Morty VR". He received a BS in Interactive Media & Game Development from Worcester Polytechnic Institute. As one of the first studios to dive head-first into the VR space, Owlchemy Labs' efforts have resulted in some of the most-played VR content in the world, earning them a Sundance nomination for "Job Simulator" as well as the opportunity to ship inside the box with the HTC Vive. As co-founder of the VR Austin group, Alex plays an active role in the gaming and VR community by speaking around the world, organizing events, and smoking various exotic meats down in Austin, TX.


16:40 - 16:55 Best of SUI Awards
16:55 - 17:00 Farewell
18:00 - 22:00 Joint SUI & ISS Poster & Demo Reception