The Conference on Human Factors in Computing Systems (CHI) is the premier annual conference in the field of Human-Computer Interaction. For researchers in our field, it is the most important venue for publishing and presenting their work.
At CHI 2015 in Seoul, our group was represented with a record number of contributions – even more than last year: Overall, our group contributed nine papers/notes and five work-in-progress papers – which puts us at rank 10 among all contributing institutions worldwide, at rank 4 in Europe, and rank 1 in Germany! We are also very happy to announce that two of our submissions received a "Best Paper Honorable Mention Award" (top 5% of submissions).
Papers and Notes:
Full papers and notes are the most important submission category at CHI. This year, the acceptance rate was 23%. Our group contributed the following papers:
Emanuel von Zezschwitz presented SwiPIN, a novel authentication system that allows input of traditional PINs using simple touch gestures (e.g., up or down) while remaining secure against human observers. SwiPIN performs adequately fast (3.7 s) to serve as an alternative input method for risky situations. Furthermore, SwiPIN is easy to use, significantly more secure against shoulder surfing attacks, and switching between PIN and SwiPIN feels natural. The system was recently featured by Gizmodo.
We performed a systematic evaluation of the shoulder surfing susceptibility of the Android pattern (un)lock. The results of an online study (n = 298) enabled us to quantify the influence of pattern length, line visibility, number of knight moves, number of overlaps and number of intersections on observation resistance. The results show that all parameters have a highly significant influence, with line visibility and pattern length being most important. At CHI, we discussed implications for real-world patterns and presented a linear regression model that can predict the observability of a given pattern.
This paper reports the findings of a study about reasons for (not) using biometric authentication on smartphones. The results indicate that usability is among the main factors in the decision process.
Alina Hang, Alexander De Luca, Michael Richter, Heinrich Hussmann
I Know What You Did Last Week! Do You? Dynamic Security Questions for Fallback Authentication on Smartphones (Honorable Mention)
The paper reports the results of two consecutive user studies on the design of dynamic security questions on smartphones and discusses their usability, security, and privacy implications. The work was recently featured by Gizmodo.
We compared the memorability of physical bar charts to that of digital bar charts by measuring the recall of three types of information immediately after exploration and with a delay of two weeks. The results show that the physical visualizations led to significantly less information decay over this time span.
Authentication methods can be improved by considering implicit, individual behavioural cues. In particular, verifying users based on typing behaviour has been widely studied with physical keyboards. On mobile touchscreens, the same concepts have so far been applied with few adaptations. This paper presents the first reported study on mobile keystroke biometrics that compares touch-specific features across three different hand postures and evaluation schemes.
With recent progress in display technology, visual see-through head-mounted displays are beginning to enter our everyday lives. Especially in cars they may replace head-up displays, as they can, in theory, perfectly imitate them while being more flexible to use. However, prior work has shown that both screen- and vehicle-stabilized content suffer from drawbacks such as occlusion or technological limitations. As a potential alternative, we propose three concept alternatives in which head rotation is used to manipulate the displayed content differently from both of the known stabilization techniques.
The Work-in-Progress track allows authors to present their ongoing work in a more concise format. Our five submissions were accepted:
When watching a movie, the viewer perceives camera motion as an integral movement of a viewport in a scene. Behind the scenes, however, there is a complex and error-prone choreography of multiple people controlling separate motion axes and camera attributes. This strict separation of tasks has mostly historical reasons, which we believe could be overcome with today's technology. We revisit interface design for camera motion starting with ethnographic observations and interviews with nine camera operators. We identified seven influencing factors for camera work and found that automation needs to be combined with human interaction: Operators want to be able to spontaneously take over in unforeseen situations. We characterize a class of user interfaces supporting (semi-)automated camera motion that take both human and machine capabilities into account by offering seamless transitions between automation and control.
In this paper, we present a system that supports groups in using the Disney method, a collaborative creativity technique based on three roles: dreamer, realist and critic. Each group member is provided with a tablet to enter ideas and choose the role in which a contribution is made, represented by different colors. We compared two versions: a baseline without additional support and a version with an additional feedback mechanism providing functional feedback about the distribution of the roles. Our results indicate that functional feedback can help modest group members to engage more in the group process.
The handling of 3D content increasingly permeates amateur activities and occurs spontaneously on public displays. The design of interaction techniques for such scenarios is subject to tensions between established expert user interfaces, 3D touch interaction and the requirements of the usage context. We present a novel concept for 3D touch interaction on a curved display targeted at non-expert and spontaneous interaction scenarios. We further present preliminary results from an experiment, during which we compared our interaction technique with an established one for different 3D interaction tasks. The results indicate that for the chosen tasks both techniques perform equally well and point out room for further improvement.
Video recording is becoming an integral part of our daily activities: Action cams and wearable cameras allow us to capture scenes of our daily life effortlessly. This trend generates vast amounts of video material impossible to review manually. However, these recordings also contain a lot of information potentially interesting to the recording individual and to others. Such videos can provide a meaningful summary of the day, serving as a digital extension to the user's human memory. They might also be interesting to others as tutorials (e.g., how to change a flat tyre). As a first step towards this vision, we present a survey assessing users' views and their video recording behavior.
This paper reports on the use of in-car 3D displays in a real-world driving scenario. Today, stereoscopic displays are becoming ubiquitous in many domains such as mobile phones or TVs. Instead of using 3D for entertainment, we explore the 3D effect as a means to spatially structure user interface (UI) elements. To evaluate the potentials and drawbacks of in-car 3D displays, we mounted an autostereoscopic display as an instrument cluster in a vehicle and conducted a real-world driving study with 15 experts in automotive UI design. The results show that the 3D effect increases the perceived quality of the UI and enhances the presentation of spatial information (e.g., navigation cues) compared to 2D. However, the effect should be used in a well-considered manner to avoid spatial clutter, which can increase the system's complexity.
The following papers were submitted in cooperation with other research groups:
Max Pfeiffer, Tim Duente, Stefan Schneegass, Florian Alt, Michael Rohs
Cruise Control for Pedestrians: Controlling Walking Direction using Electrical Muscle Stimulation (Best Paper Award)
Pedestrian navigation systems require users to perceive, interpret, and react to navigation information. This can tax cognition as navigation information competes with information from the real world. We propose actuated navigation, a new kind of pedestrian navigation in which the user does not need to attend to the navigation task at all. An actuation signal is directly sent to the human motor system to influence walking direction. To achieve this goal we stimulate the sartorius muscle using electrical muscle stimulation. The rotation occurs during the swing phase of the leg and can easily be counteracted. The user therefore stays in control. We discuss the properties of actuated navigation and present a lab study on identifying basic parameters of the technique as well as an outdoor study in a park. The results show that our approach changes a user’s walking direction by about 16°/m on average and that the system can successfully steer users in a park with crowded areas, distractions, obstacles, and uneven ground.
Christian Winkler, Jan Gugenheimer, Alexander De Luca, Gabriel Haas, Philipp Speidel, David Dobbelstein, Enrico Rukzio
Glass Unlock: Enhancing Security of Smartphone Unlocking through Leveraging a Private Near-eye Display
This paper presents Glass Unlock, a novel concept using smart glasses for smartphone unlocking, which is theoretically secure against smudge attacks, shoulder surfing, and camera attacks. By introducing an additional temporary secret, such as a digit layout shown only on the private near-eye display, attackers cannot make sense of the observed input on the almost empty phone screen. We report a user study with three alternative input methods and compare them to current state-of-the-art systems.
We would like to thank all our collaborators who worked very hard to make all this possible!
Finally, here are some impressions from the conference: