Proceedings

The full proceedings of the AutoUI 2020 conference and the adjunct proceedings can be downloaded from the ACM Digital Library.

You can find a table of contents and individual papers for the proceedings of the 12th International Conference on Automotive User Interfaces and Interactive Vehicular Applications with the following links:


TOC - Main Proceedings

Full Paper Session 1: Advanced Interfaces, Automation and User Experiences

Investigating the effect of tactile input and output locations for drivers’ hands on in-car tasks performance

  • Dong-Bach Vo
  • Stephen Brewster

This paper reports a study investigating the effects of tactile input and output from the steering wheel and the centre console on non-driving task performance. While driving, participants were asked to perform list selection tasks using tactile switches and to experience tactile feedback on either the non-dominant, dominant or both hands as they were browsing the list. Our results show that the average duration for selecting an item is 30% shorter when interacting with the steering wheel. They also show a 20% increase in performance when tactile feedback is provided. Our findings reveal that input prevails over output location when designing interaction for drivers. However, tactile feedback on the steering wheel is beneficial when provided at the same location as the input or to both hands. The results will help designers understand the trade-offs of using different interaction locations in the car.

How Long Can a Driver Look? Exploring Time Thresholds to Evaluate Head-up Display Imagery

  • Bethan Hannah Topliss
  • Catherine Harvey
  • Gary Burnett

Currently, the visual demand incurred by vehicle displays is evaluated using time criteria (such as those provided by NHTSA). This 60-participant driving simulator study investigated to what extent glance time criteria apply to Head-up Display (HUD) imagery, considering 48 locations across the windshield (and 3 in-vehicle display positions). Participants were required to make a long controlled continuous glance to a sample of these locations. Consequently, the time at which lateral/longitudinal unsafe driving occurred (e.g. deviating out of lane, unacceptable time to collision) could then be assessed. Using the selected measures, the results suggest that drivers are able to maintain driving performance while engaging with HUD imagery in various locations for longer than the NHTSA guidelines for in-vehicle displays recommend. Importantly, the data from this study provide initial maps for designers highlighting the visual demand implications of HUD imagery across the windshield.

A Wizard of Oz Field Study to Understand Non-Driving-Related Activities, Trust, and Acceptance of Automated Vehicles

  • Henrik Detjen
  • Bastian Pfleging
  • Stefan Schneegass

Understanding user needs and behavior in automated vehicles (AVs) while traveling is essential for future in-vehicle interface and service design. Since AVs are not yet market-ready, current knowledge about AV use and perception is based on observations in other transportation modes, interviews, or surveys about the hypothetical situation. In this paper, we close this gap by presenting real-world insights into the attitude towards highly automated driving and non-driving-related activities (NDRAs). Using a Wizard of Oz AV, we conducted a real-world driving study (N = 12) with six rides per participant over multiple days. We provide insights into the users’ perceptions and behavior. We found that (1) users trust a human driver more than a system, (2) safety is the main acceptance factor, and (3) the most popular NDRAs were being idle and using the smartphone.

Sick of Scents: Investigating Non-invasive Olfactory Motion Sickness Mitigation in Automated Driving

  • Clemens Schartmüller
  • Andreas Riener

While automated vehicles are supposed to become places for purposes beyond transportation, motion sickness is still a largely unsolved issue that may be critical for this transformation. Due to its previously shown positive impact on the gastric and central nervous system, we hypothesize that olfaction (in particular the scents of lavender and ginger) may be able to reduce motion sickness symptoms in a non-invasive manner. We investigate the effects of these scents on the driver-passenger in chauffeured drives in a test track study with a reading-span non-driving related task. We present and discuss evaluations of self-rated motion sickness (Simulator Sickness Questionnaire, UX Curves), physiologically measured motion sickness (Electrogastrography, Electrocardiography), and observations. Results indicate that the issued scents were detrimental to the well-being of participants when comparing post-task (baseline, scented) with pre-test measurements, with symptoms in the lavender-scented group being perceived as slightly less harsh than in the ginger-scented group.

Full Paper Session 2: Vehicle Automation: Trust and Takeovers

Situational Trust Scale for Automated Driving (STS-AD): Development and Initial Validation

  • Brittany E. Holthausen
  • Philipp Wintersberger
  • Bruce N. Walker
  • Andreas Riener

Trust is important in determining how drivers interact with automated vehicles. Overtrust has contributed to fatal accidents, and distrust can hinder successful adoption of this technology. However, existing studies on trust are often hard to compare, given the complexity of the construct and the absence of standardized measures. Further, existing trust scales often do not consider its multi-dimensionality. Another challenge is that driving is strongly context- and situation-dependent. We present the Situational Trust Scale for Automated Driving, a short questionnaire to assess different aspects of situational trust, based on the trust model proposed by Hoff and Bashir. We evaluated the scale using an online study in the US and Germany (N=303), in which participants watched different videos of an automated vehicle. Results confirm the existence of situational factors as components of trust, and support the scale being a valid measure of situational trust in this automated driving context.

Effects of Anger and Display Urgency on Takeover Performance in Semi-automated Vehicles

  • Harsh Sanghavi
  • Yiqi Zhang
  • Myounghoon Jeon

As semi-automated vehicles gain the ability to drive themselves, it is important (1) to explore drivers’ affective states, which may influence takeover performance, and (2) to design optimized control transition displays to warn drivers to take control back from the vehicles. The present study investigated the influence of anger on drivers’ takeover reaction time and quality, with varying urgency of auditory takeover request displays. Using a driving simulator, 36 participants experienced takeover scenarios in a semi-automated vehicle with a secondary task (game). Higher frequency and more repetitions of the auditory displays led to faster takeover reaction times, but there was no difference between angry and neutral drivers. For takeover quality, angry drivers drove faster, took longer to change lanes, and had lower steering wheel angles than neutral drivers, resulting in riskier driving. Results are discussed with respect to the necessity of affect research and display design guidelines in automated vehicles.

Driver-initiated Tesla Autopilot Disengagements in Naturalistic Driving

  • Alberto Morando
  • Pnina Gershon
  • Bruce Mehler
  • Bryan Reimer

We quantify the time-course of glance behavior and steering wheel control level in driver-initiated, non-critical disengagements of Tesla Autopilot (AP) in naturalistic driving. Although widely used, there are limited objective data on the impact of AP on driver behavior. We offer insights from 19 Tesla vehicle owners on driver behavior when using AP and transitioning to manual driving. Glance behavior and steering wheel control level were coded for 298 highway driving disengagements. The average proportion of off-road glances decreased from 36% when AP was engaged to 24% while driving manually after AP disengagement. Most of the off-road glances before the transition were downward and to the center stack (17%). Lastly, in 33% of the events drivers were not holding the steering wheel prior to AP disengagement. The study helps enhance society’s understanding of real-world AP use and provides a reference for it.

Evaluating Effects of Cognitive Load, Takeover Request Lead Time, and Traffic Density on Drivers’ Takeover Performance in Conditionally Automated Driving

  • Na Du
  • Jinyong Kim
  • Feng Zhou
  • Elizabeth Pulver
  • Dawn M. Tilbury
  • Lionel P. Robert
  • Anuj K. Pradhan
  • X. Jessie Yang

In conditionally automated driving, drivers engaged in non-driving related tasks (NDRTs) have difficulty taking over control of the vehicle when requested. This study aimed to examine the relationships between takeover performance and drivers’ cognitive load, takeover request (TOR) lead time, and traffic density. We conducted a driving simulation experiment with 80 participants, where they experienced 8 takeover events. For each takeover event, drivers’ subjective ratings of takeover readiness, objective measures of takeover timing and quality, and NDRT performance were collected. Results showed that drivers had lower takeover readiness and worse performance when they were in high cognitive load, short TOR lead time, and heavy oncoming traffic density conditions. Interestingly, if drivers had low cognitive load, they paid more attention to driving environments and responded more quickly to takeover requests in high oncoming traffic conditions. The results have implications for the design of in-vehicle alert systems to help improve takeover performance.

Full Paper Session 3: External Displays, Ride Sharing and Individual Differences

Designing External Automotive Displays: VR Prototypes and Analysis

  • Ashratuz Zavin Asha
  • Fahim Anzum
  • Patrick Finn
  • Ehud Sharlin
  • Mario Costa Sousa

Our work extends contemporary research into visualizations and related applications for automobiles. Focusing on external car bodies as a design space, we introduce External Automotive Displays (EADs) to provide visualizations that can share context and user-specific information as well as offer opportunities for direct and mediated interaction between users and automobiles. We conducted a design study with interaction designers to explore design opportunities for EADs to provide services to different road users: pedestrians, passengers, and drivers of other vehicles. Based on the design study, we prototyped four EADs in virtual reality (VR) to demonstrate the potential of our approach. This paper contributes our vision for EADs, the design and VR implementation of a few EAD prototypes, a preliminary design critique of the prototypes, and a discussion of the possible impact and future usage of external automotive displays.

Capturing Passenger Experience in a Ride-Sharing Autonomous Vehicle: The Role of Digital Assistants in User Interface Design

  • Benjamin S. Alpers
  • Kali Cornn
  • Lauren E. Feitzinger
  • Usman Khaliq
  • So Yeon Park
  • Bardia Beigi
  • Daniel Joseph Hills-Bunnell
  • Trevor Hyman
  • Kaustubh Deshpande
  • Rieko Yajima
  • Larry Leifer
  • Lauren Aquino Shluzas

Autonomous ride-sharing services have the potential to disrupt future transportation ecosystems. It is critical to understand factors that influence user experience in autonomous vehicles (AVs) to design for widespread adoption. We conducted an on-road driving study in a mock AV to examine how the amount of information provided by an in-vehicle digital assistant, and the manner in which information is delivered, can impact one's overall AV experience. Passengers were divided into two cohorts, based on their assigned in-vehicle digital assistant (Lilly vs. Julie). Through a mixed-methods analysis, the data showed that quantity and quality of information presented via the digital assistant had a significant impact on one's confidence in an AV's driving capability and willingness to ride again. These findings highlight that although the two cohorts were identical with respect to the actual vehicle driven, differences in in-vehicle digital assistant design can alter passengers’ perceptions of their overall AV experience.

Sound Decisions: How Synthetic Motor Sounds Improve Autonomous Vehicle-Pedestrian Interactions

  • Dylan Moore
  • Rebecca Currano
  • David Sirkin

Electric vehicles’ (EVs) nearly silent operation has proved to be dangerous for bicyclists and pedestrians, who often use an internal combustion engine’s sound as one of many signals to locate nearby vehicles and predict their behavior. Inspired by regulations currently being implemented that will require EVs and hybrid vehicles (HVs) to play synthetic sound, we used a Wizard-of-Oz AV setup to explore how adding synthetic engine sound to a hybrid autonomous vehicle (AV) will influence how pedestrians interact with the AV in a naturalistic field study. Pedestrians reported increased interaction quality and clarity of intent of the vehicle to yield compared to a baseline condition without any added sound. These findings suggest that synthetic engine sound will not only be effective at helping pedestrians to hear EVs, but also may help AV developers implicitly signal to pedestrians when the vehicle will yield.

What Driving Says About You: A Small-Sample Exploratory Study Between Personality and Self-Reported Driving Style Among Young Male Drivers

  • Xingwei Wu
  • Yuki Gorospe
  • Teruhisa Misu
  • Y Huynh
  • Nimsi Guerrero

Understanding how personalities relate to driving styles is crucial for improving Advanced Driver Assistance Systems (ADASs) and driver-vehicle interactions. Focusing on the “high-risk” population of young male drivers, the objective of this study is to investigate the association between personality traits and driving styles. An online survey study was conducted among 46 males aged 21-30 to gauge their personality traits, self-reported driving style, and driving history. Hierarchical Clustering was used to identify driving styles and revealed two subgroups of drivers who had either a “risky” or a “compliant” driving style. Compared to the compliant group, the risky cluster sped more frequently, was easily distracted and affected by negative emotion, and often behaved recklessly. The logit model results showed that the risky driving style was associated with lower Agreeableness and Conscientiousness, but higher driving exposure. An interaction effect between age and Extraversion in forming a risky driving style was also detected.

Full Paper Session 4: Communication with Other Road Users

Evaluating Highly Automated Trucks as Signaling Lights

  • Mark Colley
  • Stefanos Can Mytilineos
  • Marcel Walch
  • Jan Gugenheimer
  • Enrico Rukzio

Automated vehicles will change the trucking industry as human drivers become increasingly absent. In crossing scenarios, external communication concepts are already evaluated to resolve potential issues. However, automated delivery poses unique communication problems. One specific situation is the delivery to the curb with the truck remaining partially on the street, blocking sidewalks. Here, pedestrians have to walk past the vehicle with reduced sight, resulting in safety issues. To address this, we conducted a literature survey revealing that the external communication of automated vehicles in situations other than crossings has rarely been addressed. Afterwards, a study in Virtual Reality (N=20) revealed the potential of such communication. While the visualization (e.g., arrows or text) of whether it is safe to walk past the truck only played a minor part, the information of being able to safely walk past was highly appreciated. This shows that external communication concepts carry great potential beyond simple crossing scenarios.

Designing Communication Strategies of Autonomous Vehicles with Pedestrians: An Intercultural Study

  • Mirjam Lanzer
  • Franziska Babel
  • Fei Yan
  • Bihan Zhang
  • Fang You
  • Jianmin Wang
  • Martin Baumann

Autonomous vehicles (AVs) have the opportunity to reduce accident and injury rates in urban areas and improve safety for vulnerable road users (VRUs). To realize these benefits, AVs have to communicate with VRUs like pedestrians. While there are proposed solutions concerning the visualization or modality of external human-machine interfaces, a research gap exists regarding the AVs’ communication strategy when interacting with pedestrians. Our work presents a comparative study of an autonomous delivery vehicle with three communication strategies ranging from polite to dominant in two scenarios, at a crosswalk or on the street. We investigated these strategies in an online-based video study in a German (N = 34) and a Chinese sample (N = 56) regarding compliance, acceptance and trust. We found that a polite strategy led to more compliance in the Chinese but not the German sample. However, the polite strategy positively affected trust and acceptance of the AV in both samples equally.

Impact of Hand Signals on Safety: Two Controlled Studies With Novice E-Scooter Riders

  • Andreas Löcken
  • Pascal Brunner
  • Ronald Kates

The introduction of micro-mobility devices, such as e-scooters, brings new challenges. Nevertheless, these trending devices are spreading rapidly without a comprehensive study of their interactions with other road users. For example, many countries currently require riders of e-scooters to signal turns by hand. In this work, we investigate whether e-scooter riders can do this without losing control and whether they perceive hand signals as safe enough to use in traffic. We conducted two studies with 10 and 24 participants, respectively. Each participant was able to perform hand signals without apparent problems. We also observed a strong training effect regarding the handling of e-scooters. Nevertheless, our results indicate that, outside the laboratory, a considerable number of inexperienced riders will turn without signaling due to uncertainties with the currently prevailing e-scooter designs and regulations.

Full Paper Session 5: Advanced Interfaces: Design and Applications

The Role and Potentials of Field User Interaction Data in the Automotive UX Development Lifecycle: An Industry Perspective

  • Patrick Ebel
  • Florian Brokhausen
  • Andreas Vogelsang

We are interested in the role of field user interaction data in the development of In-Vehicle Information Systems (IVISs), the potential practitioners see in analyzing this data, the concerns they share, and how this compares to companies with digital products. We conducted interviews with 14 UX professionals, 8 from automotive and 6 from digital companies, and analyzed the results by emergent thematic coding. Our key findings indicate that implicit feedback through field user interaction data is currently not evident in the automotive UX development process. Most decisions regarding the design of IVISs are made based on personal preferences and the intuitions of stakeholders. However, the interviewees also indicated that user interaction data has the potential to lower the influence of guesswork and assumptions in the UX design process and can help to make the UX development lifecycle more evidence-based and user-centered.

Gaze-based Interaction with Windshield Displays for Automated Driving: Impact of Dwell Time and Feedback Design on Task Performance and Subjective Workload

  • Andreas Riegler
  • Bilal Aksoy
  • Andreas Riener
  • Clemens Holzmann

With increasing automation, vehicles could soon become mobile work- and living spaces, but traditional user interfaces (UIs) are not designed for this domain. We argue that high levels of productivity and user experience will only be achieved in SAE L3 automated vehicles if UIs are modified for non-driving related tasks. As controls might be far away (up to 2 meters), we suggest using gaze-based interaction with windshield displays. In this work, we investigate the effect of different dwell times and feedback designs (circular and linear progress indicators) on user preference, task performance and error rates. Results from a user study conducted in a virtual reality driving simulator (N = 24) highlight that circular feedback animations around the viewpoint are preferred for gaze input. We conclude this work by pointing out the potential of gaze-based interactions with windshield displays for future SAE L3 vehicles.

Dealing with Input Uncertainty in Automotive Voice Assistants

  • Vanessa Tobisch
  • Markus Funk
  • Adam Emfield

After their success in the smart home, voice assistants are becoming increasingly popular in automotive user interfaces. These voice assistants are traditionally designed to provide a human-like dialog with the user. Thus, when processing voice input, dealing with uncertainty is an especially important factor that needs to be considered when designing system responses. While state-of-the-art voice assistants offer responses based on their certainty of what they understood, these response thresholds are largely under-explored. In this work, we close this gap by providing a user-centered approach to investigate which responses are acceptable for voice input users depending on input certainty. Through findings from semi-structured online interviews with 101 participants, we provide insights about designing voice user interface responses based on system certainty. Our findings reveal a sweet spot for executing a task versus requesting additional user input. Further, we provide data-driven guidelines for different in-car voice assistant behaviors.

“Watch out!”: Prediction-Level Intervention for Automated Driving

  • Chao Wang
  • Matti Krüger
  • Christiane B. Wiebel-Herboth

Autonomous driving systems are increasingly substituting for human responsibilities in the driving task. However, this does not mean that vehicles should no longer interact with their driver, even in the case of full automation. One reason is that the automation is not yet advanced enough to predict other road users’ behavior in complex situations, which can lead to sub-optimal action choices and decreased comfort and user experience. In contrast, a human driver may have a more reliable understanding of other road users’ intentions, which could complement that of the automation. We propose a framework that distinguishes between four levels of interaction with automation. Based on the framework, we introduce a concept which allows drivers to provide prediction-level guidance to an automated driving system through gaze-speech interaction. Results of a pilot user study show that people hold a positive attitude towards prediction-level intervention as well as the gaze-based interaction method.

Full Paper Session 6: Communication With Other Road Users II

Effect of Visualization of Pedestrian Intention Recognition on Trust and Cognitive Load

  • Mark Colley
  • Christian Bräuner
  • Mirjam Lanzer
  • Marcel Walch
  • Martin Baumann
  • Enrico Rukzio

Autonomous vehicles carry the potential to greatly improve mobility and safety in traffic. However, this technology has to be accepted by, and of value to, the intended users. One challenge along this way is the detection and recognition of pedestrians and their intentions. While there are technological solutions to this problem, there seems to be no research on how to make this information transparent to the user in order to calibrate the user’s trust. Our work presents a comparative study of 5 visualization techniques with Augmented Reality or tablet-based visualization technology and two or three information clarity states of pedestrian intention in the context of highly automated driving. We investigated these in a user study in Virtual Reality (N=15). We found that such a visualization was rated reasonable and necessary, and that especially the Augmented Reality-based version with three clarity states was preferred.

Distance-Dependent eHMIs for the Interaction Between Automated Vehicles and Pedestrians

  • Debargha Dey
  • Kai Holländer
  • Melanie Berger
  • Berry Eggen
  • Marieke Martens
  • Bastian Pfleging
  • Jacques Terken

External human-machine interfaces (eHMIs) support automated vehicles (AVs) in interacting with vulnerable road users such as pedestrians. While related work investigated various eHMI concepts, these concepts communicate their message in one go at a single point in time. There are no empirical insights yet into whether distance-dependent multi-step information that provides additional context as the vehicle approaches a pedestrian can increase the user experience. We conducted a video-based study (N = 24) with an eHMI concept that offers pedestrians information about the vehicle’s intent without providing any further context information, and compared it with two novel eHMI concepts that provide additional information when approaching the pedestrian. Results show that additional distance-based information on eHMIs for yielding vehicles enhances pedestrians’ comprehension of the vehicle’s intention and increases their willingness to cross. This insight underscores the importance of distance-dependent information in the development of eHMIs to enhance the usability, acceptance, and safety of AVs.

An Exploration of Prosocial Aspects of Communication Cues between Automated Vehicles and Pedestrians

  • Shadan Sadeghian
  • Marc Hassenzahl
  • Kai Eckoldt

Road traffic is a social situation where participants heavily interact with each other. Consequently, communication plays an important role. Typically, the communication between pedestrians and drivers is nonverbal and consists of a combination of gestures, eye contact, and body movement. However, when vehicles become automated, this will change. Previous work has investigated the design and effectiveness of additional communication cues between pedestrians and automated vehicles. It remains unclear, though, how this impacts the perceptions of the quality of communication and impressions of mindfulness and prosociality. In this paper, we report an online experiment, where we evaluated the perception of communication cues in the form of on-road light projections, across different traffic scenarios and roles. Our results indicate that, while the cues can improve communication, their effect is dependent on traffic scenarios. These results provide preliminary implications for the design of communication cues that consider their prosocial aspects.

A Design Space for External Communication of Autonomous Vehicles

  • Mark Colley
  • Enrico Rukzio

Autonomous vehicles are on the verge of entering the mass market. Communication between these vehicles and vulnerable road users could increase safety and ease their introduction by helping to understand the vehicle’s intention. Numerous communication modalities and messages have been proposed and evaluated. However, these explorations do not account for the factors described in communication theory. Therefore, we propose a two-part design space consisting of a concept part with 3 dimensions and a situation part with 6 dimensions, based on a literature review on communication theory and a focus group with experts (N=4) on communication. We found that most work until now does not address situation-specific aspects of such communication.

Full Paper Session 7: Vehicle Automation: Support Systems and Feedback

The Joy of Collaborating with Highly Automated Vehicles

  • Gesa Wiegand
  • Kai Holländer
  • Katharina Rupp
  • Heinrich Hussmann

Fully autonomous driving leaves drivers with little opportunity to intervene in the driving decision. Giving drivers more control can enhance their driving experience. We develop two collaborative interface concepts to increase the user experience of drivers in autonomous vehicles. Our aim is to increase the joy of driving and to give drivers competence and autonomy even when driving autonomously. In a driving simulator study (N = 24) we investigate how vehicles and drivers can collaborate to decide on driving actions together. We compare autonomous driving (AD), the option to take back driving control (TBC) and two collaborative driving interface concepts by evaluating usability, user experience, workload, psychological needs, performance criteria and interview statements. The collaborative interfaces significantly increase autonomy and competence compared to AD. Joy is highly represented in the qualitative data during TBC and collaboration. Collaboration proves to be good for situations in which quick decisions are called for.

“Left!” – “Right!” – “Follow!”: Verbalization of Action Decisions for Measuring the Cognitive Take-Over Process

  • Lara Scatturin
  • Rainer Erbach
  • Martin Baumann

Influencing factors on the take-over performance during conditionally automated driving are intensively researched these days. Most of the studies focus on visual and motoric reactions. Only limited information is available about what happens on the cognitive level during the transition from automated to manual driving. Thus, the aim of the study is to investigate a measurement method for assessing the cognitive take-over performance. In this method, the cognitive component decision-making is operationalized via concurrent verbalization of action decisions. The results suggest that valid predictions for the time of the decision can be provided. Additionally, it seems that the effects of situational complexity on the driver behavior can be extended to cognitive processes. A temporal classification of the decision-making within the take-over process is derived that can be applied for the development of cognitively plausible assistance systems.

The Effect of Instructions and Context-Related Information about Limitations of Conditionally Automated Vehicles on Situation Awareness

  • Quentin Meteier
  • Marine Capallera
  • Emmanuel de Salis
  • Andreas Sonderegger
  • Leonardo Angelini
  • Stefano Carrino
  • Omar Abou Khaled
  • Elena Mugellini

In conditionally automated driving, drivers do not have to constantly monitor their vehicle, but they must be able to take over control when necessary. In this paper, we assess the impact of instructions about limitations of automation and the presentation of context-related information through a mobile application on the situation awareness and takeover performance of drivers. We conducted an experiment with 80 participants in a fixed-base driving simulator. Participants drove for an hour in conditional automation while performing secondary tasks on a tablet. They also had to react to five different takeover requests. In addition to the assessment of behavioral data (e.g. quality of takeover), participants rated their situation awareness after each takeover situation. Combined, instructions and context-related information about limitations showed encouraging results for raising awareness and improving takeover performance.

Explainable Automation: Personalized and Adaptive UIs to Foster Trust and Understanding of Driving Automation Systems

  • Philipp Wintersberger
  • Hannah Nicklas
  • Thomas Martlbauer
  • Stephan Hammer
  • Andreas Riener

Recent research indicates that transparent information on the behavior of automated vehicles positively affects trust, but how such feedback should be composed and if user trust influences the amount of desired feedback is relatively unexplored. Consequently, we conducted an interview study with (N=56) participants, who were presented different videos of an automated vehicle from the ego-perspective. Subjects rated their trust in the vehicle in these situations and could arbitrarily select objects in the driving environment that should be included in augmented reality feedback systems, so that they are able to trust the vehicle and understand its actions. The results show an inverse correlation between situational trust and participants’ desire for feedback and further reveal reasons why certain objects should be included in feedback systems. The study also highlights the need for more adaptive in-vehicle interfaces for trust calibration and outlines necessary steps for automatically generating feedback in the future.

Full Paper Session 8: Interfaces, Special Populations, and Shared Concepts

Usable and Acceptable Response Delays of Conversational Agents in Automotive User Interfaces

  • Markus Funk
  • Carie Cunningham
  • Duygu Kanver
  • Christopher Saikalis
  • Rohan Pansare

With an increasing ability to answer and fulfill user requests, voice-enabled Conversational Agents (CAs) are becoming more and more powerful. However, as the complexity of the requests increases, the time for the CAs to process and fulfill the tasks can become longer. In other cases, where input prediction is available, some requests can be processed and answered even before the user has finished saying the command. However, the effects of these positive and negative delays in system response time are still under-explored. In this paper, we systematically analyze the effects of different response delays on usability and acceptability, considering three common interaction techniques for voice-enabled CAs. Our results reveal that an unnaturally long positive delay in system response time leads users to assume that an error occurred, while a negative delay is perceived by users as rude. Based on our findings, we present design guidelines for voice-enabled CAs.

Capacity Management in an Automated Shuttle Bus: Findings from a Lab Study

  • Alexander G. Mirnig
  • Vivien Wallner
  • Magdalena Gärtner
  • Alexander Meschtscherjakov
  • Manfred Tscheligi

Driverless shuttles bear different and novel challenges for passengers. One of these relates to capacity management, as such shuttles are often small (usually 6 to 12 seats) with limited capacity to (re-)assign seating, control reservations, or arrange travel for groups that exceed a shuttle’s capacity. In the absence of a bus driver, passengers need to resolve conflicts or uncertainties on their own, unless additional systems provide such support. In this paper, we present the results of a laboratory study in which we investigated passenger needs in relation to booking and reserving spots (seats, standing spots, and stroller spaces) in an automated shuttle. We found that such functionalities have a low-to-medium impact on an overall scale but could constitute exclusion criteria for more vulnerable parts of the population, such as older adults, families with small children, or physically impaired individuals.

Concepts of Connected Vehicle Applications: Interface Lessons Learned from a Rail Crossing Implementation

  • Gregory Baumgardner
  • Liberty Hoekstra-Atwood
  • David Miguel M. Prendez

Emerging connected vehicle (CV) technologies provide timely, advance warnings of roadway hazards to road users who may be incapable of perceiving them otherwise. Between 2012 and 2019, United States government transportation agencies deployed V2X Hub, an open platform supporting a broad set of transportation safety applications. This platform facilitates real-time data sharing between infrastructure, in-vehicle, or mobile devices to communicate hazards using Dedicated Short-Range Communications (DSRC). Armed with expertise gained in developing the technology, along with a renewed focus for the design of the in-vehicle safety system's Human-Machine Interface (HMI), the Rail Crossing Violation Warning (RCVW) research team evaluated practical driver use cases and extended the system capabilities to meet a broader set of safety goals.

Putting Older Adults in the Driver Seat: Using User Enactment to Explore the Design of a Shared Autonomous Vehicle

  • Aaron Gluck
  • Kwajo Boateng
  • Earl W. Huff Jr.
  • Julian Brinkley

Self-driving vehicles have been described as one of the most significant advances in personal mobility of the past century. By minimizing the role of arguably error-prone human drivers, self-driving vehicles are heralded for improving traffic safety. Primarily driven by the technology’s potential impact, there is a rapidly evolving body of literature focused on consumer preferences. Missing, we argue, are studies that explore the needs and design preferences of older adults (60+). This is a significant knowledge gap, given the disproportionate impact that self-driving vehicles may have on personal mobility for older adults who are unable or unwilling to drive. Within this paper, we explore the design and interaction preferences of older adults through a series of enactment-based design sessions. This work contributes insights into the needs of older adults, which may prove critical if equal access to emerging self-driving technologies is to be realized.


TOC - Adjunct Proceedings

Session: Work in Progress

"Give Me the Keys, I’ll Drive!" Results from an Exploratory Interview Study to assess Public’s Desires and Concerns on Automated Valet Parking

  • Martina Schuß
  • Andreas Riener 

Highly and fully automated vehicles are not expected on public roads in the near future, but at lower levels of automation, several applications and business models are being discussed by vehicle manufacturers and fleet operators. Automated valet parking (AVP) is one of them and could be implemented almost immediately. Vehicles with the AVP feature are able to drive independently in a parking garage and find and occupy a free parking space. However, a better understanding of the public's opinion of this service is needed. In this paper, we present the findings from an exploratory interview study on the public's opinion of automated valet parking. Results suggest that the main benefits from the user perspective are clearly practical in nature (time savings, efficient use of parking lots), but are mitigated by emotional concerns (a feeling of uncertainty, loss of control). We therefore conclude that these concerns must be addressed to ultimately ensure automated valet parking's success and benefit to society.

Hit the Brakes! Augmented Reality Head-up Display Impact on Driver Responses to Unexpected Events

  • Missie Smith
  • Lisa Jordan
  • Kiran Bagalkotkar
  • Srikar Sai Manjuluri
  • Rishikesh Nittala
  • Joseph Gabbard 

In this study, we provide a first look at driver responses when using augmented reality (AR) head-up displays (HUDs) during an unexpected and potentially dangerous event. Twenty participants followed a lead car in a driving simulator while completing either no task or distracting secondary tasks on AR HUDs in three different vertical positions or on head-down displays (HDDs). After a series of uneventful drives, the lead car unexpectedly braked while participants completed a distractor task, requiring them to respond quickly to avoid a collision. We qualitatively analyzed participants’ glance behavior, crash avoidance, and self-reported experience. We found that participants using HDDs all frequently glanced back toward the roadway and lead vehicle, while those using AR HUDs were inconsistent. Our results suggest that more research must be done to fully understand AR HUDs’ impact on drivers during surprise events, but that display location may affect behavior.

An Exploration of Users' Thoughts on Rear-Seat Productivity in Virtual Reality 

  • Jingyi Li
  • Ceenu George
  • Andrea Ngao
  • Kai Holländer
  • Stefan Mayer
  • Andreas Butz

With current technology, mobile working has become a real trend. With wireless head-mounted displays, we could soon even be using immersive working environments while commuting. However, it is unclear what such a virtual workplace will look like. In anticipation of autonomous cars, we investigate the use of VR in the rear seat of current cars. Given the limited space, how will interfaces make us productive but also keep us aware of the essentials of our surroundings? In interviews with 11 commuters, participants could generally imagine using VR in cars for working, but were concerned about their physical integrity while in VR. Two types of preferred physical working environments stood out, and three information levels for rear-seat VR productivity emerged from our interviews: productivity, notification, and environment. We believe that the interview results and proposed information levels can inspire the UI structure of future ubiquitous productivity applications.

User Requirements for Remote Teleoperation-based Interfaces

  • Gaetano Graf
  • Heinrich Hussmann

Despite the rapid progress of Autonomous Vehicle (AV) technology, remote human situational assessment continues to be required. However, remote operation introduces several challenges, such as limited perception and difficulty in maintaining Situation Awareness (SA). In this regard, this research provides first-hand SA requirements for remote teleoperation-based interfaces. Complementary to a previous literature review on Human-Machine Interface requirements for unmanned systems, we conducted two user studies (N = 18, N = 10). To ascertain the views of the users, we employed two methodologies, in-depth interviews and traditional statistical analysis, to identify specific preferences. We collected a total of 80 statements that we clustered into 12 categories, presenting a comprehensive overview of SA user requirements. The research is envisioned to be used by others as a tool to help the development of AV teleoperation-based interfaces.

Ultrahapticons: Haptifying Drivers' Mental Models to Transform Automotive Mid-Air Haptic Gesture Infotainment Interfaces

  • Eddie Brown
  • David R. Large
  • Hannah Limerick
  • Gary Burnett

In-vehicle gesture interfaces show potential to reduce visual demand and improve task performance when supported with mid-air, ultrasound-haptic feedback. However, comparative studies have tended to select gestures and haptic sensations based either on experimental convenience or to conform with existing interfaces, and thus may have fallen short on realising their full potential. Aiming to design and validate an exemplar set of ultrasonic, mid-air haptic icons (“ultrahapticons”), a participatory design exercise was conducted, whereby seventeen participants were presented with seven in-vehicle infotainment tasks. Participants were asked to describe their mental models for each, and then sketch these visual, tactual and auditory associations. ‘Haptifiable’ elements were extracted, and these were analysed using semiotics principles, resulting in thirty ultrahapticon concepts. These were subsequently evaluated and further refined in a workshop involving user experience and haptics experts. The final seventeen concepts will be validated in a salience recognition and perspicuity study.

Decoding CNN based Object Classifier Using Visualization

  • Abhishek Mukhopadhyay
  • Imon Mukherjee
  • Pradipta Biswas

This paper investigates how the working of a Convolutional Neural Network (CNN) can be explained through visualization, in the context of the machine perception of autonomous vehicles. We visualize what types of features are extracted in the different convolution layers of a CNN, which helps in understanding how the CNN gradually refines spatial information in every layer and thus concentrates on regions of interest in every transformation. Visualizing activation heat maps helps us understand how the CNN classifies and localizes different objects in an image. This study also helps us reason about the low accuracy of a model, which in turn helps to increase trust in the object detection module.
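The activation heat maps described above can be illustrated with a minimal sketch (this is not the authors' implementation; the shapes and the channel-averaging scheme are illustrative assumptions): a convolutional layer's feature maps are collapsed into a single normalized map that highlights the image regions the network responds to.

```python
import numpy as np

def activation_heatmap(feature_maps: np.ndarray) -> np.ndarray:
    """Collapse a (channels, H, W) activation tensor into a single H x W
    heat map by averaging absolute activations across channels, then
    normalizing to [0, 1] so it can be overlaid on the input image."""
    heat = np.abs(feature_maps).mean(axis=0)
    rng = heat.max() - heat.min()
    return (heat - heat.min()) / rng if rng > 0 else np.zeros_like(heat)

# Toy example: 8 channels of 4x4 activations from a hypothetical conv layer
fmaps = np.random.default_rng(0).normal(size=(8, 4, 4))
hm = activation_heatmap(fmaps)
```

In practice the feature maps would be captured from a trained network (e.g. via framework hooks) and the heat map upsampled to the input resolution before overlay; the normalization step above is what makes maps from different layers visually comparable.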

Requirements of Future Control Centers in Public Transport

  • Carmen Kettwich
  • Annika Dreßler

New mobility concepts in public transport will benefit from automated driving systems. Nevertheless, fully automated vehicles are not expected within the next few years. For this reason, a remote-operation fallback authority might be a promising solution. To cope with highly complex automation tasks, teleoperation with a distinct human-machine interaction concept could be used. This work describes a task analysis conducted to derive requirements for the design of a future control center workplace that deals with the control of driverless shuttles in combination with mobility-on-demand services in public transport. The results will contribute to creating an efficient, valid and capable human-machine interaction concept for vehicle teleoperation.

Towards A Framework of Detecting Mode Confusion in Automated Driving: Examples of Data from Older Drivers

  • Shabnam Haghzare
  • Jennifer Campos
  • Alex Mihailidis

A driver's confusion about the dynamic operating modes of an Automated Vehicle (AV), and thereby about their driving responsibilities, can compromise safety. To be able to detect drivers’ mode confusion in AVs, we expand on a previous theoretical model of mode confusion and operationalize it by first defining the possible operating modes within an AV. Using these AV modes as different classes, we then propose a classification framework that can potentially detect a driver's mode confusion by classifying the driver's perceived AV mode using measures of their gaze behavior. The potential applicability of this novel framework is demonstrated by a classification algorithm that can distinguish between drivers’ gaze behavior measures during the two AV modes of fully-automated and non-automated driving with 93% average accuracy. The dataset was collected from older drivers (65+), who, due to changes in sensory and/or cognitive abilities, can be more susceptible to mode confusion.
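As a toy illustration of the classification idea (the feature values, labels, and nearest-centroid classifier here are hypothetical, not the authors' method or data), gaze-behavior features per time window could be mapped to a perceived AV mode like this:

```python
import numpy as np

# Hypothetical gaze-behavior features per time window:
# [fraction of gaze on road center, mean fixation duration in seconds]
# Labels: 0 = gaze pattern typical of non-automated (manual) driving,
#         1 = gaze pattern typical of fully-automated driving
X_train = np.array([[0.90, 0.30], [0.85, 0.35], [0.30, 0.90], [0.25, 1.00]])
y_train = np.array([0, 0, 1, 1])

def fit_centroids(X, y):
    """Compute one feature centroid per class."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, x):
    """Assign a window to the class with the nearest centroid (Euclidean)."""
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

centroids = fit_centroids(X_train, y_train)
# A window with mostly road-center gaze and short fixations
pred = predict(centroids, np.array([0.88, 0.32]))
```

A mismatch between the predicted mode (from gaze) and the AV's actual mode is what would flag potential mode confusion; the paper's classifier is more sophisticated, but the contrast between gaze-feature classes is the core mechanism.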

Adapting In-Vehicle Voice Output: A User- and Situation-Adaptive Approach

  • Daniela Stier
  • Ulrich Heid
  • Wolfgang Minker

There is a current trend towards natural, adaptive in-vehicle spoken dialogue systems that react flexibly to the individual requirements of the driver or the driving situation. They aim to provide the driver with the most efficient form of interaction and thereby reduce the driver's cognitive load. Studies show that even the syntactic form of system output has an influence on drivers and their driving performance. Against this background, we present our user-centered approach for a user- and situation-adaptive strategy for the syntactic design of voice output. Based on the data collected in two user studies, we combine the two aspects of speech production and perception and compare actual language behaviour with syntactic preferences. The resulting strategy will be evaluated and elaborated further in future user studies.

Toward Minimum Startle After Take-Over Request: A Preliminary Study of Physiological Data

  • Erfan Pakdamanian
  • Nauder Namaky
  • Shili Sheng
  • Inki Kim
  • James Arthur Coan
  • Lu Feng

In this work, we introduce a preliminary analysis of drivers' physiological data after receiving a take-over request (TOR). Studies have shown that physiological measurements of drivers may provide better insights into their cognitive behavior and performance. Our goal is to examine the effect of two common TOR modalities (visual-auditory and generic auditory), within a limited take-over time budget, on the psychophysical states and take-over behavior of drivers. We applied multimodal physiological data streams (eye-tracking, EEG, GSR, and PPG) to gain a comprehensive overview of the driver's workload, stress, and reaction time for each TOR modality. The preliminary results suggest that the visual-auditory modality leads to safer take-over behavior than a generic auditory tone. EEG and heart rate variability results showed significantly greater engagement with the visual-auditory TOR than with the auditory TOR. Results of this study can be used to identify the modality that is safer and induces the fewest startle reactions.

Exploring the Effectiveness of External Human-Machine Interfaces on Pedestrians and Drivers

  • Young Woo Kim
  • Jae Hyun Han
  • Yong Gu Ji
  • Seul Chan Lee

Previous literature has shown that external Human-Machine Interfaces (eHMIs) can be an effective means of interacting with pedestrians. However, it remains an open question whether eHMIs are also effective for other road users. The present study therefore aimed to explore subjective evaluations of eHMIs from two different perspectives: pedestrians and drivers around AVs. Subjective preferences for different types of eHMIs were investigated through an online survey. Nine types of eHMIs were designed based on combinations of display location and sign format. The results showed that people have different attitudes towards eHMIs depending on their perspective. Participants in the driver role evaluated the bottom display location more negatively than participants in the pedestrian role, and participants in the pedestrian role found icons a less suitable option than drivers did. The findings of the present study contribute to the design of eHMIs that consider various road users.

Help, Accident Ahead! Using Mixed Reality Environments in Automated Vehicles to Support Occupants After Passive Accident Experiences

  • Henrik Detjen
  • Stefan Geisler
  • Stefan Schneegass

Currently, car assistant systems mainly try to prevent accidents. Increasing built-in car technology also extends the potential applications in vehicles. Future cars might have virtual windshields that augment the traffic or individual virtual assistants interacting with the user. In this paper, we explore the potential of an assistant system that helps the car’s occupants to calm down and reduce stress when they experience an accident in front of them. We present requirements from a discussion (N = 11) and derive a system design from them. Further, we test the system design in a video-based simulator study (N = 43). Our results indicate that an accident support system increases perceived control and trust and helps to calm down the user.

Designing the Interaction of Highly Automated Vehicles with Cyclists in Urban Longitudinal Traffic. Relevant Use Cases and Methodical Considerations

  • Nicole Fritz
  • Fanny Kobiela
  • Dietrich Manstetten
  • Andreas Korthauer
  • Klaus Bengler

In future urban traffic, highly automated vehicles (HAVs) will have to successfully interact with vulnerable road users, such as pedestrians and cyclists. While the interaction of HAVs with crossing pedestrians is already well studied, HAV interaction concepts for encounters with cyclists are yet to be explored. We present a project that focuses on the user-centered design of HAV driving maneuvers for interactions with cyclists travelling ahead of the vehicle in the same direction in urban longitudinal traffic. This work introduces the use cases and the methodical approach for exploring current cyclist-vehicle interactions in a real-life setting. With this approach, we aim to derive implications for the design of future HAV interaction behavior.

What is it? How to Collect Urgent Utterances using a Gamification Approach

  • Jakob Landesberger
  • Ute Ehrlich
  • Wolfgang Minker

Many modern cars today have voice assistants. The problem is that current in-car speech interfaces are mostly designed for certain commands in a very restricted form. In the future, these interfaces will have to deal with more complex user input, such as several intentions in one utterance or quick, urgent insertions. Especially in rapidly changing situations, like during a highly automated journey, it becomes relevant to detect urgent utterances and react accordingly. Collecting data by reproducing the conditions for such interaction in a real vehicle can be very difficult. Therefore, we propose to abstract the problem and use a gamification approach. We successfully simulated urgent situations and collected spoken utterances with the game “What is it?”.

Addressing Rogue Vehicles by Integrating Computer Vision, Activity Monitoring, and Contextual Information

  • Brook Abegaz
  • Eric Chan-Tin
  • Neil Klingensmith
  • George K. Thiruvathukal

In this paper, we address the detection of rogue autonomous vehicles using an integrated approach involving computer vision, activity monitoring and contextual information. The proposed approach can be used to detect rogue autonomous vehicles using sensors installed on observer vehicles that are used to monitor and identify the behavior of other autonomous vehicles operating on the road. The safe braking distance and the safe following time are computed to identify if an autonomous vehicle is behaving properly. Our preliminary results show that there is a wide variation in both the safe following time and the safe braking distance recorded using three autonomous vehicles in a test-bed. These initial results show significant progress for the future efforts to coordinate the operation of autonomous, semi-autonomous and non-autonomous vehicles.

Assessing the Use of Physiological Signals and Facial Behaviour to Gauge Drivers' Emotions as a UX Metric in Automotive User Studies

  • Christine Spencer
  • Chihiro Suga
  • Ibrahim Alper Koc
  • Alexander Lee
  • Anupama Mahesh Dhareshwar
  • Elin Franzén
  • Maria Iozzo
  • Gawain Morrison
  • Gary McKeown

Studies have shown that drivers’ emotions can be assessed via changes in their physiology and facial behaviour. This study examined this approach as a means of gauging user experience (UX) in an automotive user study. 36 drivers’ responses to typical UX-style questions were compared with computational estimates of their emotional state, based on changes in their cardiac, respiratory, electrodermal and facial signals. The drivers’ arousal and valence levels were monitored in real-time as they drove a 23-mile route around Sunnyvale, CA. These estimates corresponded with two independent observers’ judgments of the drivers’ emotions. The results highlighted a disparity between the self-report and algorithmic scores—the drivers who answered the UX questions more positively experienced higher levels of stress—evidenced by higher arousal and lower valence algorithmic scores. The findings highlight the value of supplementing self-report measures with objective estimates of drivers’ emotions in automotive UX research.

No Need to Slow Down! A Head-up Display Based Warning System for Cyclists for Safe Passage of Parked Vehicles

  • Tamara von Sawitzky
  • Thomas Grauschopf
  • Andreas Riener

The development of driver assistance systems and technologies for vehicle automation is being pushed forward in order to increase road safety. However, there are still numerous situations that cannot (easily) be solved with on-vehicle assistance systems, particularly during the interaction of vehicles with vulnerable road users. Bicyclists, especially in urban areas, travel at relatively high speeds compared to pedestrians. Even on dedicated bicycle lanes, numerous accidents are caused by vehicle doors opening. Our proposed approach aims to warn cyclists of suddenly opening doors of parked vehicles when passing. We plan to conduct a user study in mixed reality, comparing a baseline (no system) to two warning concepts (visual and visual-auditory). We expect that awareness of the possible danger will make cyclists feel safer and help to prevent accidents. We expect the visual-auditory system to lead to better reaction times and lower mental workload than the visual-only system.

Crosswalk Cooperation: A Phone-Integrated Driver-Vehicle Cooperation Approach to Predict the Crossing Intentions of Pedestrians in Automated Driving

  • Marcel Walch
  • Stacey Li
  • Ilan Mandel
  • David Goedicke
  • Natalie Friedman
  • Wendy Ju

While the implementation of automated driving under well-defined circumstances such as highways is possible today, more complex and dynamic environments such as urban areas remain challenging. In particular, the behavior prediction of other road users like pedestrians can be difficult for automated vehicles. We suggest a cooperative approach to overcome this system limitation. More specifically, the system can rely on the users’ interpretation of the situation to clarify ambiguity. This study explores how passengers can be used to disambiguate pedestrian crossing behavior. Because passengers likely use their phones while traveling in an automated vehicle, we displayed a live view of the traffic scenes as well as a cooperation request to ask about the intentions of the pedestrian at the crosswalk. A preliminary evaluation of usability shows that this approach provides promising results. Participants also reported trusting the system and showed willingness to help an automated vehicle.

Haptic Feedback for the Transfer of Control in Autonomous Vehicles

  • Patrizia Di Campli San Vito
  • Edward Brown
  • Stephen Brewster
  • Frank Pollick
  • Simon Thompson
  • Lee Skrypchuk
  • Alexandros Mouzakitis

Vehicles offering autonomous features need effective methods for transferring control from the driver to the vehicle and back. While most research focuses on presenting information the driver might need after retaking control, our study investigates ways to improve the process of transferring control itself. We investigated multimodal feedback with and without haptic and visual components in a simulator study. Results showed that visual and haptic feedback improved driving during the handover. Subjective ratings described multimodal feedback without the visual component as more disruptive than feedback that included it, and showed a preference for including both visual and haptic feedback. These results bring us a step closer to a safe, clear and accepted control transfer process between driver and vehicle.

A Gender Study of Communication Interfaces between an Autonomous Car and a Pedestrian

  • Chia-Ming Chang

Communication between an autonomous car and a pedestrian is an important issue that has been widely discussed in recent years. Many studies and car companies have proposed concepts for communication interfaces on an autonomous car to communicate with pedestrians. However, no detailed study has explored such communication interfaces from a gender perspective. In this study, we explored vehicle-to-pedestrian communication from a gender perspective. We first designed three types of communication interface on an autonomous car: verbal, non-verbal and expressive. We then compared these three types of communication interface in a pedestrian street-crossing situation via a video experiment. The results show significant differences between male and female pedestrians for the expressive interface (smile and non-smile): more female pedestrians than male pedestrians understood the car's intention via the expressive interface, and more female pedestrians than male pedestrians felt the expressive interface was reliable and comfortable.

Foresight Safety: Sharing Drivers' State among Connected Road Users

  • Paolo Pretto
  • Sandra Trösterer
  • Nikolai Ebinger
  • Nino Dum

When drivers approach a potentially critical situation, they tend to glance at the drivers of neighboring vehicles to gather a mutual understanding of their respective states and intentions. Experienced drivers can then make quick decisions and prevent the onset of danger. Yet such safety-effective behavior has no equivalent in current automated driving, although the technologies to build a similar solution are already available. It is therefore important to investigate the effects of sharing drivers’ state among road users, to understand the potential benefit in pre-critical situations. A networked-simulator study was performed involving two drivers in a cut-in maneuver. Results indicate that when a driver is notified that the driver in the adjacent vehicle is distracted, the preferred reaction is to change lane, putting more space between the respective vehicles. Such a preventive action should therefore become the target behavior for automated vehicles capable of a human-like driving style.

Sensor Fusion Based State Estimation for Localization of Autonomous Vehicle

  • Subrahmanya Gunaga
  • Nalini C Iyer
  • Akash Kulkarni

Localization is the estimation of a vehicle's position in a given environment. This work focuses on state estimation for the localization of a vehicle using the Schmidt Kalman filter on fused sensor data. The Kalman filter provides an efficient approach to reducing the errors introduced by the sensors.

Further, computational complexity is reduced through pre-processed initialization in the Schmidt Kalman filter. The two sensors used are GPS (Global Positioning System) and IMU (Inertial Measurement Unit), where the GPS provides position and the IMU provides acceleration and direction. Since GPS data depends on external environmental factors, resulting in discontinuities, it is augmented with similar data until corrected GPS data resumes. The error in the position determined by GPS can be as high as 12 m. This work presents a method for fusing sensor data using the Schmidt Kalman filter in a practical scenario.
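The GPS/IMU fusion idea can be sketched with a standard one-dimensional Kalman filter (this is not the Schmidt variant or the authors' implementation; the noise parameters, including the 12 m GPS standard deviation, are illustrative assumptions): IMU acceleration drives the predict step, while the noisy GPS position drives the update step.

```python
import numpy as np

def kalman_1d(z_gps, accel, dt=1.0, r_gps=12.0**2, q=0.5):
    """Minimal 1-D Kalman filter fusing GPS position and IMU acceleration.
    State is [position, velocity]; GPS observes position only."""
    x = np.array([z_gps[0], 0.0])             # initial state estimate
    P = np.eye(2) * 100.0                     # initial state covariance
    F = np.array([[1.0, dt], [0.0, 1.0]])     # constant-velocity model
    B = np.array([0.5 * dt**2, dt])           # acceleration control input
    H = np.array([[1.0, 0.0]])                # measurement: position only
    Q = np.eye(2) * q                         # process noise covariance
    estimates = []
    for z, a in zip(z_gps, accel):
        x = F @ x + B * a                     # predict with IMU acceleration
        P = F @ P @ F.T + Q
        y = z - H @ x                         # innovation from GPS reading
        S = H @ P @ H.T + r_gps               # innovation covariance
        K = (P @ H.T) / S                     # Kalman gain
        x = x + (K * y).ravel()               # update state
        P = (np.eye(2) - K @ H) @ P           # update covariance
        estimates.append(x[0])
    return np.array(estimates)

# Demo: stationary vehicle, constant GPS reading, zero acceleration
est = kalman_1d(np.full(20, 50.0), np.zeros(20))
```

When a GPS dropout occurs, an implementation along these lines would skip the update step and propagate the state from the IMU alone, which mirrors the augmentation strategy described in the abstract.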

VR-PAVIB: The Virtual Reality Pedestrian-Autonomous Vehicle Interaction Benchmark

  • Ana Dalipi
  • Dongfang Liu
  • Xiaolei Guo
  • Yingjie Victor Chen
  • Christos Moussas

Autonomous vehicles (AVs) are an emerging theme for future transportation. However, research on pedestrian-AV interaction, which promotes pedestrian safety during autonomous driving, is not a well-explored domain. One challenge hindering the development of pedestrian-AV interaction research is that there is no publicly available, standardized benchmark allowing researchers to investigate how different interfaces could help pedestrians communicate with AVs. To resolve this challenge, we introduce the Virtual Reality Pedestrian-Autonomous Vehicle Interaction Benchmark (VR-PAVIB). VR-PAVIB is a standardized platform that can be used to reproduce interaction scenarios and compare results. Our benchmark provides state-of-the-art functionalities that can easily be implemented in any interaction scenario authored by a user. VR-PAVIB can easily be used in a controlled lab space using low-cost virtual reality equipment. We have released our project code and invite the automotive user interface community to extend VR-PAVIB.

Face2Multimodal: In-vehicle Multi-modal Predictors via Facial Expressions

  • Zhentao Huang
  • Rongze Li
  • Wangkai Jin
  • Zilin Song
  • Yu Zhang
  • Xiangjun Peng
  • Xu Sun

Towards intelligent Human-Vehicle Interaction systems and innovative Human-Vehicle Interaction designs, in-vehicle drivers’ physiological data has been explored as an essential data source. However, equipping drivers with multiple biosensors is considered user-unfriendly and impractical while driving. The lack of a proper approach to accessing physiological data has hindered wider practical application of advanced biosignal-driven designs (e.g. monitoring systems). Hence, the demand for a user-friendly approach to measuring drivers’ body statuses has become more intense.

In this Work-In-Progress, we present Face2Multi-modal, an in-vehicle multi-modal data stream predictor driven by facial expressions only. More specifically, we have explored the estimation of drivers’ Heart Rate, Skin Conductance, and Vehicle Speed. We believe Face2Multi-modal provides a user-friendly alternative for acquiring drivers’ physiological status and vehicle status, which could serve as the building block for many current or future personalized Human-Vehicle Interaction designs. More details and updates about the Face2Multi-modal project are online at https://github.com/unnc-ucc/Face2Multimodal/.

Tactical Decisions for Lane Changes or Lane Following? Development of a Study Design for Automated Driving

  • Johannes Ossig
  • Stephanie Cramer

Overtaking slower vehicles on a highway usually involves lane changes. This paper examines a large number of non-automated as well as automated lane changes on the basis of two datasets. The focus is on the relationship between the velocity of the preceding vehicle being overtaken and the target velocity of the vehicle changing lane and overtaking. Based on this, a study design is developed that should enable human-centered investigation of the preferred points in time for automated lane changes. In order to identify further characteristics of an automated journey that can influence preferred lane change behavior, expert interviews were conducted, and their findings are taken into consideration in the study design. According to these, non-driving related tasks play an essential role in the proposed driving study.

Session: Workshops

Workshop on Virtual Reality (VR) in Automated Vehicles: Developing and Evaluating Metrics to Assess VR in the Car

  • Zoe M Becerra
  • Nadia Fereydooni
  • Stephen Brewster
  • Andrew L. Kun
  • Angus McKerral
  • Bruce N. Walker

As automated systems continue to be integrated in everyday vehicles, drivers have an opportunity to engage in non-driving related tasks (NDRTs) while the automation is responsible for the driving task. However, current research on NDRTs is limited and has not explored the use of virtual reality (VR) in an automated vehicle. To understand how this technology may be implemented in this environment, it is critical to investigate related constructs like situation awareness, perceived risk, or presence. However, current measures for these constructs are not suitable for use in VR. This workshop aims to address this gap by bringing together researchers to begin the development of new measures for these constructs. Creating new measures is the first step toward effectively and accurately assessing the use of VR in the automated vehicle context.

What Could Go Wrong? Exploring the Downsides of Autonomous Vehicles

  • Nikolas Martelaro
  • Wendy Ju

While autonomous vehicles have the potential to greatly improve our daily lives, there are also challenges and potential downsides to these systems. In this workshop, we intend to foster discussions about the potential negative aspects of autonomous cars in hopes of surfacing challenges that should be considered during the design process rather than after deployment. We will spur these conversations through a review of participant position statements and through group discussion facilitated by a card game called “What Could Go Wrong?” Our goal is to weigh the autonomous vehicle’s benefits—improving safety, increasing mobility, reducing emissions—against potential drawbacks. By identifying potential harms and downsides, the workshop attendees, and the AutoUI community more broadly, can design well-considered solutions.

The 2nd Workshop on Localization vs. Internationalization: Impact of COVID-19 Pandemic on AutomotiveUI Activities from the View of Diversity and Inclusion

  • Seul Chan Lee
  • Kristina Stojmenova
  • Gowdham Prabhakar
  • Shan Bao
  • Jaka Sodnik
  • Myounghoon Jeon

A worldwide pandemic has brought many challenges in numerous areas of everyone's life. AutomotiveUI 2020 has also been moved to a virtual conference. Although the situation seems to be improving in some parts of the world, the impacts that the pandemic has brought to research and academia may last long after the pandemic is over. In the AutomotiveUI community, there is more than one aspect that should be taken into consideration. Ironically, the situation has brought about both risks and opportunities concerning research methods, collaboration, interaction manners, and diversity and inclusion. With this background, the goal of this workshop is to discuss the impact of the COVID-19 pandemic on the AutomotiveUI community from the perspective of diversity and inclusion, and to discuss the direction of collaborative activities of our community with researchers from various groups. We will organize three virtual workshop sessions accommodating different time zones.

Emotion GaRage Vol. II: A Workshop on Affective In-Vehicle Display Design

  • Chihab Nadri
  • Jingyi Li
  • Esther Bosch
  • Michael Oehl
  • Ignacio Alvarez
  • Michael Braun
  • Myounghoon Jeon

Driver performance and behavior can be partially predicted from one's emotional state. By ascertaining the emotional state of passengers and employing various mitigation strategies, empathic cars show potential for improving user experience and driving performance. Challenges remain in the implementation of such strategies, as individual differences play a large role in mediating the effect of affective intervention. Therefore, we propose a workshop that aims to bring together researchers and practitioners interested in affective interfaces and in-vehicle technologies as a forum for the development of targeted emotion intervention methods. During the workshop, we will focus on a common set of use cases and generate approaches that can suit different user groups. By the end of this short workshop, researchers will determine ideal intervention methods for prospective user groups. This will be achieved through the method of insight combination to generate and discuss ideas.

Saluton! How do you evaluate usability? – Virtual Workshop on Usability Assessments of Automated Driving Systems

  • Deike Albers
  • Niklas Grabbe
  • Dominik Janetzko
  • Klaus Bengler 

The usability of human-machine interfaces (HMIs) for automated driving systems (ADS) gains importance with the imminent introduction of SAE Level 3 automated vehicles [15]. Assuming global proliferation of automated vehicles, a common understanding of usability for ADS HMIs and its application in research and industry is indispensable. With reference to ISO 9241-11 [8], this virtual workshop aims to identify potential differences in the understanding, and the resulting assessment, of usability. The international audience of AutomotiveUI provides an ideal setting for this purpose by bringing together academics and practitioners in the domain of automotive user interfaces. The experimental design for an international usability study serves as an illustrative case example for the discussion. Participants learn about methods, challenges and current research on international evaluations of automotive user interfaces. The workshop's goal is to jointly derive a consensus on the theoretical and practical interpretation of the term usability in the context of HMIs for automated driving.

AutoWork 2020: Second Workshop on the Future of Work and Well-Being in Automated Vehicles

  • Clemens Schartmüller
  • Philipp Wintersberger
  • Andreas Riener
  • Andrew L. Kun
  • Stephen Brewster
  • Orit Shaer

The gradual implementation of automated driving systems opens a wide range of opportunities for researchers and vehicle designers to transform vehicle interiors into a place for productivity and well-being. Former events held by the organizers have identified a research agenda to transform vehicles into a space for office work. This second edition of the workshop builds upon previous findings and focuses on initiating concrete research projects and fruitful cooperation between participants. In a two-session schedule tailored to fit the requirements of an online event, participants will define relevant user stories and elaborate experimental designs with measurable outcomes to contribute to the research roadmap.