TOC - Main Proceedings
Full Paper Session 1: Advanced Interfaces, Automation and User Experiences
Investigating the effect of tactile input and output locations for drivers’ hands on in-car task performance
This paper reports a study investigating the effects of tactile input and output from the steering wheel and the centre console on non-driving task performance. While driving, participants were asked to perform list selection tasks using tactile switches and to experience tactile feedback on either the non-dominant, dominant or both hands as they browsed the list. Our results show that the average duration for selecting an item is 30% shorter when interacting with the steering wheel. They also show a 20% increase in performance when tactile feedback is provided. Our findings reveal that input location prevails over output location when designing interaction for drivers. However, tactile feedback on the steering wheel is beneficial when provided at the same location as the input or to both hands. The results will help designers understand the trade-offs of using different interaction locations in the car.
Currently, the visual demand incurred by vehicle displays is evaluated using time criteria (such as those provided by NHTSA). This 60-participant driving simulator study investigated to what extent glance time criteria apply to Head-up Display (HUD) imagery, considering 48 locations across the windshield (and 3 in-vehicle display positions). Participants were required to make a long, controlled, continuous glance to a sample of these locations. Consequently, the time at which lateral/longitudinal unsafe driving occurred (e.g. deviating out of lane, unacceptable time to collision) could then be assessed. Using the selected measures, the results suggest that drivers are able to maintain driving performance while engaging with HUD imagery in various locations for longer than the NHTSA guidelines recommend for in-vehicle displays. Importantly, the data from this study provide initial maps for designers highlighting the visual demand implications of HUD imagery across the windshield.
A Wizard of Oz Field Study to Understand Non-Driving-Related Activities, Trust, and Acceptance of Automated Vehicles
Understanding user needs and behavior in automated vehicles (AVs) while traveling is essential for future in-vehicle interface and service design. Since AVs are not yet market-ready, current knowledge about AV use and perception is based on observations in other transportation modes, interviews, or surveys about the hypothetical situation. In this paper, we close this gap by presenting real-world insights into attitudes towards highly automated driving and non-driving-related activities (NDRAs). Using a Wizard of Oz AV, we conducted a real-world driving study (N = 12) with six rides per participant over multiple days. We provide insights into the users’ perceptions and behavior. We found that (1) users trust a human driver more than a system, (2) safety is the main acceptance factor, and (3) the most popular NDRAs were being idle and using the smartphone.
Sick of Scents: Investigating Non-invasive Olfactory Motion Sickness Mitigation in Automated Driving
While automated vehicles are supposed to become places for purposes beyond transportation, motion sickness is still a largely unsolved issue that may be critical for this transformation. Due to its previously shown positive impact on the gastric and central nervous system, we hypothesize that olfaction (in particular the scents of lavender and ginger) may be able to reduce motion sickness symptoms in a non-invasive manner. We investigate the effects of these scents on the driver-passenger in chauffeured drives in a test track study with a reading-span non-driving related task. Evaluation of self-rated (Simulator Sickness Questionnaire, UX Curves) and physiologically measured motion sickness (Electrogastrography, Electrocardiography), and observations are presented and discussed. Results indicate that the issued scents were detrimental to the well-being of participants in the comparisons between post-task (baseline, scented) and pre-test measurements, with symptoms in the lavender-scented group being perceived as slightly less harsh than in the ginger-scented group.
Full Paper Session 2: Vehicle Automation: Trust and Takeovers
Trust is important in determining how drivers interact with automated vehicles. Overtrust has contributed to fatal accidents, and distrust can hinder successful adoption of this technology. However, existing studies on trust are often hard to compare, given the complexity of the construct and the absence of standardized measures. Further, existing trust scales often do not consider its multi-dimensionality. Another challenge is that driving is strongly context- and situation-dependent. We present the Situational Trust Scale for Automated Driving, a short questionnaire to assess different aspects of situational trust, based on the trust model proposed by Hoff and Bashir. We evaluated the scale using an online study in the US and Germany (N=303), where participants viewed different videos of an automated vehicle. Results confirm the existence of situational factors as components of trust, and support the scale being a valid measure of situational trust in this automated driving context.
As semi-automated vehicles gain the ability to drive themselves, it is important (1) to explore drivers’ affective states, which may influence takeover performance, and (2) to design optimized control transition displays that warn drivers to take back control from the vehicle. The present study investigated the influence of anger on drivers’ takeover reaction time and quality under auditory takeover request displays of varying urgency. Using a driving simulator, 36 participants experienced takeover scenarios in a semi-automated vehicle while performing a secondary task (a game). Higher frequency and more repetitions of the auditory displays led to faster takeover reaction times, but there was no difference between angry and neutral drivers. Regarding takeover quality, angry drivers drove faster, took longer to change lanes, and used smaller steering wheel angles than neutral drivers, resulting in riskier driving. Results are discussed in terms of the need for affect research and display design guidelines in automated vehicles.
We quantify the time-course of glance behavior and steering wheel control level in driver-initiated, non-critical disengagements of Tesla Autopilot (AP) in naturalistic driving. Although AP is widely used, there are limited objective data on its impact on driver behavior. We offer insights from 19 Tesla vehicle owners on driver behavior when using AP and transitioning to manual driving. Glance behavior and steering wheel control level were coded for 298 highway driving disengagements. The average proportion of off-road glances decreased from 36% when AP was engaged to 24% while driving manually after AP disengagement. Most of the off-road glances before the transition were downward and to the center stack (17%). Lastly, in 33% of the events drivers were not holding the steering wheel prior to AP disengagement. The study helps begin to enhance society’s understanding of, and provides a reference for, real-world AP use.
Evaluating Effects of Cognitive Load, Takeover Request Lead Time, and Traffic Density on Drivers’ Takeover Performance in Conditionally Automated Driving
In conditionally automated driving, drivers engaged in non-driving related tasks (NDRTs) have difficulty taking over control of the vehicle when requested. This study aimed to examine the relationships between takeover performance and drivers’ cognitive load, takeover request (TOR) lead time, and traffic density. We conducted a driving simulation experiment with 80 participants, in which each experienced 8 takeover events. For each takeover event, drivers’ subjective ratings of takeover readiness, objective measures of takeover timing and quality, and NDRT performance were collected. Results showed that drivers had lower takeover readiness and worse performance under high cognitive load, short TOR lead time, and heavy oncoming traffic conditions. Interestingly, when drivers had low cognitive load, they paid more attention to the driving environment and responded more quickly to takeover requests in high oncoming traffic conditions. The results have implications for the design of in-vehicle alert systems to help improve takeover performance.
Full Paper Session 3: External Displays, Ride Sharing and Individual Differences
Our work extends contemporary research into visualizations and related applications for automobiles. Focusing on external car bodies as a design space, we introduce External Automotive Displays (EADs), which provide visualizations that can share context- and user-specific information as well as offer opportunities for direct and mediated interaction between users and automobiles. We conducted a design study with interaction designers to explore design opportunities for EADs to provide services to different road users: pedestrians, passengers, and drivers of other vehicles. Based on the design study, we prototyped four EADs in virtual reality (VR) to demonstrate the potential of our approach. This paper contributes our vision for EADs, the design and VR implementation of several EAD prototypes, a preliminary design critique of the prototypes, and a discussion of the possible impact and future usage of external automotive displays.
Capturing Passenger Experience in a Ride-Sharing Autonomous Vehicle: The Role of Digital Assistants in User Interface Design
Autonomous ride-sharing services have the potential to disrupt future transportation ecosystems. It is critical to understand factors that influence user experience in autonomous vehicles (AVs) to design for widespread adoption. We conducted an on-road driving study in a mock AV to examine how the amount of information provided by an in-vehicle digital assistant, and the manner in which information is delivered, can impact one's overall AV experience. Passengers were divided into two cohorts, based on their assigned in-vehicle digital assistant (Lilly vs. Julie). Through a mixed-methods analysis, the data showed that quantity and quality of information presented via the digital assistant had a significant impact on one's confidence in an AV's driving capability and willingness to ride again. These findings highlight that although the two cohorts were identical with respect to the actual vehicle driven, differences in in-vehicle digital assistant design can alter passengers’ perceptions of their overall AV experience.
Electric vehicles’ (EVs) nearly silent operation has proved to be dangerous for bicyclists and pedestrians, who often use an internal combustion engine’s sound as one of many signals to locate nearby vehicles and predict their behavior. Inspired by regulations currently being implemented that will require EVs and hybrid vehicles (HVs) to play synthetic sound, we used a Wizard-of-Oz AV setup in a naturalistic field study to explore how adding synthetic engine sound to a hybrid autonomous vehicle (AV) influences how pedestrians interact with the AV. Pedestrians reported increased interaction quality and clarity of the vehicle’s intent to yield compared to a baseline condition without any added sound. These findings suggest that synthetic engine sound will not only be effective at helping pedestrians hear EVs, but may also help AV developers implicitly signal to pedestrians when the vehicle will yield.
What Driving Says About You: A Small-Sample Exploratory Study Between Personality and Self-Reported Driving Style Among Young Male Drivers
Understanding how personalities relate to driving styles is crucial for improving Advanced Driver Assistance Systems (ADASs) and driver-vehicle interactions. Focusing on the “high-risk” population of young male drivers, the objective of this study is to investigate the association between personality traits and driving styles. An online survey was conducted among 46 males aged 21-30 to gauge their personality traits, self-reported driving style, and driving history. Hierarchical clustering was used to identify driving styles and revealed two subgroups of drivers with either a “risky” or a “compliant” driving style. Compared to the compliant group, the risky cluster sped more frequently, was easily distracted and affected by negative emotion, and often behaved recklessly. The logit model results showed that the risky driving style was associated with lower Agreeableness and Conscientiousness, but higher driving exposure. An interaction effect between age and Extraversion in forming a risky driving style was also detected.
Full Paper Session 4: Communication with Other Road Users
Automated vehicles will change the trucking industry as human drivers become less present. For crossing scenarios, external communication concepts have already been evaluated to resolve potential issues. However, automated delivery poses unique communication problems. One specific situation is delivery to the curb with the truck remaining partially on the street, blocking the sidewalk. Here, pedestrians have to walk past the vehicle with reduced sight, raising safety issues. To address this, we conducted a literature survey, which revealed that external communication of automated vehicles in situations other than crossings has rarely been addressed. We then conducted a study in Virtual Reality (N=20) that revealed the potential of such communication. While the visualization (e.g., arrows or text) of whether it is safe to walk past the truck played only a minor part, the information that one can safely walk past was highly appreciated. This shows that external communication concepts carry great potential beyond simple crossing scenarios.
Autonomous vehicles (AVs) have the potential to reduce accident and injury rates in urban areas and improve safety for vulnerable road users (VRUs). To realize these benefits, AVs have to communicate with VRUs such as pedestrians. While there are proposed solutions concerning the visualization or modality of external human-machine interfaces, a research gap exists regarding the AVs’ communication strategy when interacting with pedestrians. Our work presents a comparative study of an autonomous delivery vehicle with three communication strategies ranging from polite to dominant in two scenarios, at a crosswalk or on the street. We investigated these strategies in an online video study with a German (N = 34) and a Chinese sample (N = 56) regarding compliance, acceptance and trust. We found that a polite strategy led to more compliance in the Chinese but not the German sample. However, the polite strategy positively affected trust and acceptance of the AV in both samples equally.
The introduction of micro-mobility devices such as e-scooters brings new challenges. Nevertheless, these trendy devices are spreading rapidly without a comprehensive study of their interactions with other road users. For example, many countries currently require e-scooter riders to signal turns by hand. In this work, we investigate whether e-scooter riders can do this without losing control and whether they perceive hand signals as safe enough to use in traffic. We conducted two studies with 10 and 24 participants, respectively. Every participant was able to perform hand signals without apparent problems. We also observed a strong training effect regarding the handling of e-scooters. Nevertheless, our results indicate that, with the currently prevailing e-scooter designs and regulations, a considerable number of inexperienced riders will turn without signaling outside the laboratory due to uncertainty.
Full Paper Session 5: Advanced Interfaces: Design and Applications
The Role and Potentials of Field User Interaction Data in the Automotive UX Development Lifecycle: An Industry Perspective
We are interested in the role of field user interaction data in the development of In-Vehicle Information Systems (IVIS), the potential practitioners see in analyzing this data, the concerns they share, and how this compares to companies with digital products. We conducted interviews with 14 UX professionals, 8 from automotive and 6 from digital companies, and analyzed the results using emergent thematic coding. Our key findings indicate that implicit feedback through field user interaction data is currently not evident in the automotive UX development process. Most decisions regarding the design of IVIS are made based on personal preferences and the intuitions of stakeholders. However, the interviewees also indicated that user interaction data has the potential to lower the influence of guesswork and assumptions in the UX design process and can help make the UX development lifecycle more evidence-based and user-centered.
Gaze-based Interaction with Windshield Displays for Automated Driving: Impact of Dwell Time and Feedback Design on Task Performance and Subjective Workload
With increasing automation, vehicles could soon become mobile workspaces and living spaces, but traditional user interfaces (UIs) are not designed for this domain. We argue that high levels of productivity and user experience will only be achieved in SAE L3 automated vehicles if UIs are adapted for non-driving related tasks. As controls might be far away (up to 2 meters), we suggest using gaze-based interaction with windshield displays. In this work, we investigate the effect of different dwell times and feedback designs (circular and linear progress indicators) on user preference, task performance and error rates. Results from a user study conducted in a virtual reality driving simulator (N = 24) highlight that circular feedback animations around the viewpoint are preferred for gaze input. We conclude this work by pointing out the potential of gaze-based interactions with windshield displays for future SAE L3 vehicles.
After their success in the smart home, voice assistants are becoming increasingly popular in automotive user interfaces. These voice assistants are traditionally designed to provide a human-like dialog with the user. Thus, when processing voice input, dealing with uncertainty is an especially important factor to consider when designing system responses. While state-of-the-art voice assistants offer responses based on their certainty of what they understood, these response thresholds are largely under-explored. In this work, we close this gap by providing a user-centered approach to investigate which responses are acceptable to voice input users depending on input certainty. Through findings from semi-structured online interviews with 101 participants, we provide insights into designing voice user interface responses based on system certainty. Our findings reveal a sweet spot for executing a task versus requesting additional user input. Further, we provide data-driven guidelines for different in-car voice assistant behaviors.
Autonomous driving systems are increasingly taking over human responsibilities in the driving task. However, this does not mean that vehicles should no longer interact with their driver, even in the case of full automation. One reason is that automation is not yet advanced enough to predict other road users’ behavior in complex situations, which can lead to sub-optimal action choices and reduced comfort and user experience. In contrast, a human driver may have a more reliable understanding of other road users’ intentions, which could complement that of the automation. We propose a framework that distinguishes four levels of interaction with automation. Based on this framework, we introduce a concept that allows drivers to provide prediction-level guidance to an automated driving system through gaze-speech interaction. Results of a pilot user study show that people hold a positive attitude towards prediction-level intervention as well as the gaze-based interaction method.
Full Paper Session 6: Communication With Other Road Users II
Autonomous vehicles carry the potential to greatly improve mobility and safety in traffic. However, this technology must be accepted by and valuable to its intended users. One challenge along this path is the detection and recognition of pedestrians and their intentions. While there are technological solutions to this problem, there seems to be no research on how to make this information transparent to the user in order to calibrate the user’s trust. Our work presents a comparative study of 5 visualization techniques using Augmented Reality or tablet-based visualization technology and two or three information clarity states of pedestrian intention in the context of highly automated driving. We investigated these in a user study in Virtual Reality (N=15). We found that such a visualization was rated reasonable and necessary, and that the Augmented Reality-based version with three clarity states was especially preferred.
External human-machine interfaces (eHMIs) support automated vehicles (AVs) in interacting with vulnerable road users such as pedestrians. While related work has investigated various eHMI concepts, these concepts communicate their message in one go at a single point in time. There are no empirical insights yet on whether distance-dependent multi-step information that provides additional context as the vehicle approaches a pedestrian can improve the user experience. We conducted a video-based study (N = 24) with an eHMI concept that offers pedestrians information about the vehicle’s intent without providing any further context information, and compared it with two novel eHMI concepts that provide additional information when approaching the pedestrian. Results show that additional distance-based information on eHMIs for yielding vehicles enhances pedestrians’ comprehension of the vehicle’s intention and increases their willingness to cross. This insight underscores the importance of distance-dependent information in the development of eHMIs to enhance the usability, acceptance, and safety of AVs.
An Exploration of Prosocial Aspects of Communication Cues between Automated Vehicles and Pedestrians
Road traffic is a social situation where participants heavily interact with each other. Consequently, communication plays an important role. Typically, the communication between pedestrians and drivers is nonverbal and consists of a combination of gestures, eye contact, and body movement. However, when vehicles become automated, this will change. Previous work has investigated the design and effectiveness of additional communication cues between pedestrians and automated vehicles. It remains unclear, though, how this impacts the perceptions of the quality of communication and impressions of mindfulness and prosociality. In this paper, we report an online experiment, where we evaluated the perception of communication cues in the form of on-road light projections, across different traffic scenarios and roles. Our results indicate that, while the cues can improve communication, their effect is dependent on traffic scenarios. These results provide preliminary implications for the design of communication cues that consider their prosocial aspects.
Autonomous vehicles are on the verge of entering the mass market. Communication between these vehicles and vulnerable road users could increase safety and ease their introduction by helping these users understand the vehicle’s intention. Numerous communication modalities and messages have been proposed and evaluated. However, these explorations do not account for the factors described in communication theory. Therefore, we propose a two-part design space consisting of a concept part with 3 dimensions and a situation part with 6 dimensions, based on a literature review of communication theory and a focus group with experts on communication (N=4). We found that most work to date does not address situation-specific aspects of such communication.
Full Paper Session 7: Vehicle Automation: Support Systems and Feedback
Fully autonomous driving leaves drivers with little opportunity to intervene in driving decisions. Giving drivers more control can enhance their driving experience. We develop two collaborative interface concepts to increase the user experience of drivers in autonomous vehicles. Our aim is to increase the joy of driving and to give drivers competence and autonomy even when driving autonomously. In a driving simulator study (N = 24) we investigate how vehicles and drivers can collaborate to decide on driving actions together. We compare autonomous driving (AD), the option to take back driving control (TBC) and two collaborative driving interface concepts by evaluating usability, user experience, workload, psychological needs, performance criteria and interview statements. The collaborative interfaces significantly increase autonomy and competence compared to AD. Joy is highly represented in the qualitative data during TBC and collaboration. Collaboration proves particularly valuable in situations that call for quick decisions.
“Left!” – “Right!” – “Follow!”: Verbalization of Action Decisions for Measuring the Cognitive Take-Over Process
Factors influencing take-over performance during conditionally automated driving are currently the subject of intensive research. Most studies focus on visual and motoric reactions. Only limited information is available about what happens at the cognitive level during the transition from automated to manual driving. Thus, the aim of this study is to investigate a measurement method for assessing cognitive take-over performance. In this method, the cognitive component of decision-making is operationalized via concurrent verbalization of action decisions. The results suggest that valid predictions for the time of the decision can be provided. Additionally, it seems that the effects of situational complexity on driver behavior can be extended to cognitive processes. A temporal classification of decision-making within the take-over process is derived that can be applied to the development of cognitively plausible assistance systems.
The Effect of Instructions and Context-Related Information about Limitations of Conditionally Automated Vehicles on Situation Awareness
In conditionally automated driving, drivers do not have to constantly monitor their vehicle, but they must be able to take over control when necessary. In this paper, we assess the impact of instructions about the limitations of automation and the presentation of context-related information through a mobile application on the situation awareness and takeover performance of drivers. We conducted an experiment with 80 participants in a fixed-base driving simulator. Participants drove for an hour in conditional automation while performing secondary tasks on a tablet. They also had to react to five different takeover requests. In addition to the assessment of behavioral data (e.g. quality of takeover), participants rated their situation awareness after each takeover situation. Instructions and context-related information on limitations, combined, showed encouraging results for raising awareness and improving takeover performance.
Explainable Automation: Personalized and Adaptive UIs to Foster Trust and Understanding of Driving Automation Systems
Recent research indicates that transparent information on the behavior of automated vehicles positively affects trust, but how such feedback should be composed and whether user trust influences the amount of desired feedback is relatively unexplored. Consequently, we conducted an interview study with participants (N=56) who were presented different videos of an automated vehicle from the ego-perspective. Subjects rated their trust in the vehicle in these situations and could arbitrarily select objects in the driving environment that should be included in augmented reality feedback systems, so that they would be able to trust the vehicle and understand its actions. The results show an inverse correlation between situational trust and participants’ desire for feedback and further reveal reasons why certain objects should be included in feedback systems. The study also highlights the need for more adaptive in-vehicle interfaces for trust calibration and outlines necessary steps for automatically generating feedback in the future.
Full Paper Session 8: Interfaces, Special Populations, and Shared Concepts
With an increasing ability to answer and fulfill user requests, voice-enabled Conversational Agents (CAs) are becoming more and more powerful. However, as the complexity of the requests increases, the time for the CAs to process and fulfill the tasks can become longer. In other cases, where input prediction is available, some requests can be processed and answered even before the user has finished saying the command. However, the effects of these positive and negative delays in system response time are still under-explored. In this paper, we systematically analyze the effects of different response delays on usability and acceptability, considering three common interaction techniques for voice-enabled CAs. Our results reveal that an unnaturally long positive delay in system response time leads users to assume that an error occurred, while a negative delay is perceived by users as rude. Based on our findings, we present design guidelines for voice-enabled CAs.
Driverless shuttles pose different and novel challenges for passengers. One of these relates to capacity management, as such shuttles are often small (usually 6 to 12 seats) with limited capacities to (re-)assign seating, control reservations, or arrange travel for groups that exceed a shuttle’s capacity. Since there is no bus driver, passengers need to resolve conflicts or uncertainties on their own, unless additional systems provide such support. In this paper, we present the results from a laboratory study in which we investigated passenger needs in relation to booking and reserving spots (seats, standing spots, and stroller spots) in an automated shuttle. We found that such functionalities have a low-to-medium impact on an overall scale but could constitute exclusion criteria for more vulnerable parts of the population, such as older adults, families with small children, or physically impaired individuals.
Concepts of Connected Vehicle Applications: Interface Lessons Learned from a Rail Crossing Implementation
Emerging connected vehicle (CV) technologies provide timely, advance warnings of roadway hazards to road users who may be incapable of perceiving them otherwise. Between 2012 and 2019, United States government transportation agencies deployed V2X Hub, an open platform supporting a broad set of transportation safety applications. This platform facilitates real-time data sharing between infrastructure, in-vehicle, or mobile devices to communicate hazards using Dedicated Short-Range Communications (DSRC). Armed with expertise gained in developing the technology, along with a renewed focus for the design of the in-vehicle safety system's Human-Machine Interface (HMI), the Rail Crossing Violation Warning (RCVW) research team evaluated practical driver use cases and extended the system capabilities to meet a broader set of safety goals.
Putting Older Adults in the Driver Seat: Using User Enactment to Explore the Design of a Shared Autonomous Vehicle
Self-driving vehicles have been described as one of the most significant advances in personal mobility of the past century. By minimizing the role of arguably error-prone human drivers, self-driving vehicles are heralded for improving traffic safety. Primarily driven by the technology’s potential impact, there is a rapidly evolving body of literature focused on consumer preferences. Missing, we argue, are studies that explore the needs and design preferences of older adults (60+). This is a significant knowledge gap, given the disproportionate impact that self-driving vehicles may have on personal mobility for older adults who are unable or unwilling to drive. Within this paper, we explore the design and interaction preferences of older adults through a series of enactment-based design sessions. This work contributes insights into the needs of older adults, which may prove critical if equal access to emerging self-driving technologies is to be realized.