Proceedings

The full proceedings and the adjunct proceedings of the AutoUI 2021 conference can be downloaded from the ACM Digital Library.

You can find a table of contents and individual papers for the proceedings of the 13th International Conference on Automotive User Interfaces and Interactive Vehicular Applications with the following links:


TOC - Main Proceedings

Full Paper Session 1: Measuring and modeling: Getting it right

Geopositioned 3D Areas of Interest for Gaze Analysis

  • Jan Bickerdt
  • Jan Sonnenberg
  • Christian Gollnick
  • Enkelejda Kasneci

To understand drivers’ gaze behavior, gaze is usually matched to surrounding objects or to static areas of interest (AOIs) at fixed positions around the car. Full surround object tracking allows for an understanding of the traffic situation; however, because it requires an extensive sensor set and substantial processing power, it is not yet broadly available in production cars. The use of static AOIs only requires the addition of eye-tracking sensors, but since the AOIs sit at fixed positions around the car and cannot adapt to the environment, their usefulness is limited. We propose geopositioned 3D AOIs, which combine the strengths of both methods: adaptability and a small sensor set. To test the capabilities of 3D AOIs for gaze analysis, a driving simulator study with 74 participants was conducted. We show that 3D AOIs are suitable for driver gaze analysis and a promising tool for driver intention prediction.

Modelling Drivers’ Adaptation to Assistance Systems

  • Jussi P. P. Jokinen
  • Tuomo Kujala

Human factors research and engineering of advanced driving assistance systems (ADAS) must consider how drivers adapt to their presence. The major obstacle at present is a poor understanding of the adaptive processes that human cognition undergoes when faced with such changes. This paper presents a simulation model that predicts how drivers adapt to a steering assistance system. Our approach is based on computational rationality and demonstrates how task-interleaving strategies adapt to the task environment and to the driver’s goals and cognitive limitations. A supervisor controls eye movements between the driving and non-driving tasks, making this choice on the basis of maximising expected joint task utility. The model predicts that with steering assistance, drivers’ in-car glance durations increase. We also show that this adaptation leads to risky driving in cases where the reliability of the system is compromised.

How Will Drivers Take Back Control in Automated Vehicles? A Driving Simulator Test of an Interleaving Framework

  • Divyabharathi Nagaraju
  • Alberta Ansah
  • Nabil Al Nahin Ch
  • Caitlin Mills
  • Christian P. Janssen
  • Orit Shaer
  • Andrew L Kun

We explore the transfer of control from an automated vehicle to the driver. Based on data from N=19 participants in a driving simulator experiment, we find evidence that the transfer of control often does not take place in one step. In other words, when the automated system requests the transfer of control back to the driver, the driver often does not simply stop the non-driving task. Rather, the transfer unfolds as a process of interleaving the non-driving and driving tasks. We also find that this process is moderated by the length of time available for the transfer of control: interleaving is more likely when more time is available. Interface designs for automated vehicles must take these results into account to allow drivers to safely take back control from automation.

 

Full Paper Session 2: Uh oh! Drowsiness and motion sickness

Queasy Rider: How Head Movements Influence Motion Sickness in Passenger Use of Head-Mounted Displays

  • Jingyi Li
  • Agnes Reda
  • Andreas Butz

In autonomous cars, drivers will spend more time on non-driving-related activities. With their hands off the wheel and eyes off the road, drivers, similar to rear-seat passengers today, can use multiple built-in displays for such activities, or even mobile head-mounted displays (HMDs) in virtual reality (VR). A wider motion range is known to increase engagement, but it might also amplify the risk of motion sickness while switching between displays. In a rear-seat VR field study (N=21) on a city highway, we found a head movement range of ±50° at a speed of 1.95 m/s to provide the best trade-off between motion sickness and engagement. Compared to the pitch (Y) axis, movement around the yaw (X) axis induced less discomfort and motion sickness while yielding more engagement. Our work provides a concrete starting point for future research on self-driving carsickness, starting from today’s rear-seat passengers.

A Review of Motion Sickness in Automated Vehicles

  • Abhraneil Dam
  • Myounghoon Jeon

Automated vehicles (AVs) are the next wave of evolution in the transportation industry, but the progress towards higher levels of automation faces several challenges. One major problem that is often overlooked is motion sickness. As more drivers become passengers engaging in ‘passenger tasks’, motion sickness will occur more frequently, preventing AVs from delivering their true benefit to society. To encourage more researchers to address motion sickness in AVs, this study conducted a literature review following the PRISMA framework to identify the latest research trends and methodologies. Based on the findings and limitations of the existing literature, this study suggests a bird’s-eye-view research framework consisting of causation, induction, measurement, and mitigation techniques that researchers and early practitioners can use to conduct research in this field. Furthermore, the paper highlights future research directions in mitigation techniques to combat motion sickness in AVs.

Performance and Acceptance Evaluation of a Driver Drowsiness Detection System based on Smart Wearables

  • Thomas Kundinger
  • Andreas Riener
  • Ramyashree Bhat

Current systems for driver drowsiness detection often use driving-related parameters, but automated driving reduces the availability of these parameters. Techniques based on physiological signals are a promising alternative; however, in a dynamic driving environment, only non-intrusive or minimally intrusive methods are accepted. In this work, a driver drowsiness detection system based on a smart wearable is proposed: a mobile application with an integrated machine learning classifier processes heart rate data from a consumer-grade wearable. A simulator study (N=30) with two age groups (20-25 and 65-70 years) was conducted to evaluate the acceptance and performance of the system. Both age groups showed high acceptance, with older participants reporting more positive attitudes and stronger intentions to use the system than younger participants. An overall detection accuracy of 82.72% was achieved. The proposed system offers new options for in-vehicle human-machine interfaces, especially for driver drowsiness detection at the lower levels of automated driving.

 

Full Paper Session 3: Gestures and actions

Enhancing Interactions for In-Car Voice User Interface with Gestural Input on the Steering Wheel

  • Zhitong Cui
  • Hebo Gong
  • Yanan Wang
  • Chengyi Shen
  • Wenyin Zou
  • Shijian Luo

Voice user interfaces (VUIs) are becoming indispensable in cars, offering drivers the opportunity to make distraction-free inputs and carry out complex tasks. However, the usability and control efficiency of today’s VUIs remain limited by their sequential nature. In this work, we explored gestural input on the steering wheel to improve the interaction efficiency of VUIs. Based on the limitations of VUIs, we designed novel gestural commands on the steering wheel to augment them, and elicited corresponding user-defined gestures by exploring drivers’ touch behavior. We then implemented a prototype attached to the steering wheel for recognizing gestures. Finally, we evaluated our system’s usability regarding driving performance, interaction efficiency, cognitive workload, and user feedback. Results revealed that our system improved the control efficiency of the VUI and reduced workload without significantly increasing driving distraction compared to using the VUI alone.

Comparing an Unbalanced Motor with Mechanical Vibration Transducers for Vibrotactile Forward Collision Warnings in the Steering Wheel

  • Anne Zühlsdorff

A static driving simulator study was conducted to investigate an unbalanced actuator (UAK) and two mechanical vibration transducers (exciters) integrated into a steering wheel as vibrotactile warnings during manual driving. In a repeated-measures design, two vibration signal conditions (UAK vs. exciters) were presented to 32 subjects in two test routes with two forward collision warning scenarios. The effects of the vibration signals on driving behavior, reaction times, workload, acceptance, preference, and vibrotactile feel were examined in order to evaluate signal usability. The exciters led to a lower SD of speed and lower mean steering wheel angle acceleration, and were preferred by 63% of subjects over the UAK. However, subjects showed longer reaction times and shorter times to collision with the exciters. Due to its intense vibration, the UAK is more suitable for acutely dangerous situations requiring quick reactions. For less hazardous situations or incremental warnings, exciters are more suitable to avoid startle effects.

ORIAS: On-The-Fly Object Identification and Action Selection for Highly Automated Vehicles

  • Mark Colley
  • Ali Askari
  • Marcel Walch
  • Marcel Woide
  • Enrico Rukzio

Automated vehicles are about to enter the mass market. However, such systems regularly encounter limitations of varying criticality. Even basic tasks such as Object Identification can be challenging, for example, under bad weather or lighting conditions or for (partially) occluded objects. One common approach is to shift control to manual driving in such circumstances; however, post-automation effects can occur during these control transitions. Therefore, we present ORIAS, a system capable of asking the driver to (1) identify/label unrecognized objects or (2) select an appropriate action to be executed automatically. ORIAS extends the automation’s capabilities, prevents unnecessary takeovers, and thus reduces post-automation effects. This work defines the capabilities and limitations of ORIAS and presents the results of a driving simulator study (N=20). Results indicate high usability and input correctness.

 

Full Paper Session 4: Shall we cooperate?

After You! Design and Evaluation of a Human Machine Interface for Cooperative Truck Overtaking Maneuvers on Freeways

  • Jana Fank
  • Christian Knies
  • Frank Diermeyer

Truck overtaking maneuvers on freeways are inefficient, risky, and create high potential for conflict between road users. Collective perception based on V2X communication allows coordination between all parties to reduce this negative impact and could be deployed sooner than full automation. However, a prerequisite for the success of such a system is a human-machine interface that the driver can easily operate, trust, and accept. In this approach, a user-centered conception and design of a human-machine interface for cooperative truck overtaking maneuvers on freeways is presented. The development process is separated into two steps: after a prototype is built based on a task analysis, it is evaluated and improved iteratively through a heuristic evaluation by experts. The final prototype is then tested in a simulator study with 30 truck drivers. The study provides initial feedback on drivers’ attitudes towards such a system and how it can be further improved.

What Makes a Good Team? - Towards the Assessment of Driver-Vehicle Cooperation

  • Sebastianus Martinus Petermeijer
  • Angelica Tinga
  • Reinier Jansen
  • Antoine de Reus
  • Boris van Waterschoot

With the introduction of driving automation, driving has become a task shared between driver and vehicle. Today, an increasing number of driving tasks can be performed by the automation, and the view of driver and automation acting as collaborative partners is well established. Although this notion has been adopted in the research and design domains, means to assess the quality of driver-automation interaction in a structured way are still lacking. Moreover, design evaluations are usually approached either from a technical stance or from a human factors viewpoint, which does not comply with a generally acknowledged view of a unified driver-vehicle system. The aim of the current study is therefore to investigate the possibility of quantitatively evaluating the quality of driver-vehicle cooperation. Seven dimensions indicative of the quality of cooperation are identified, based on a literature survey and expert input during focus groups. This work can support road authorities, legislators, regulators, and original equipment manufacturers in monitoring, evaluating, and designing driver-vehicle cooperation.

From SAE-Levels to Cooperative Task Distribution: An Efficient and Usable Way to Deal with System Limitations?

  • Jürgen Pichen
  • Tanja Stoll
  • Martin Baumann

Automated driving seems to be a promising approach to increase traffic safety, efficiency, and driver comfort. The defined automation capability levels (SAE) prescribe a distinct takeover of the vehicle’s control by the human driver: if the system reaches a system boundary, control falls back to the human. However, another possibility is the cooperative approach of task distribution: the driver provides the missing information to the automation, which stays activated. In a driving simulator study, we compared a classical and a cooperative approach (N=18). An automated car was driving on a rural road when a slower leading vehicle made it impossible for the automation to overtake. Participants could either initiate the overtake by cooperatively providing the missing information or by fully taking over the vehicle’s control. Results showed that the cooperative approach had higher usage and reduced workload. The suggested cooperative approach therefore seems more promising.

 

Full Paper Session 5: Can you feel it? User experience

Why Drivers Feel the Way they Do: An On-the-Road Study Using Self-Reports and Geo-Tagging

  • Monique Dittrich

In automotive research, the current hot topic of emotion recognition is mainly technology-driven, focusing on the development of sensors and algorithms that ensure recognition accuracy and reliability. Often, a subjective reference, i.e., information about what drivers actually feel, is missing for the interpretation of the data collected. Thus, this paper explores the subjective component of drivers’ emotions, focusing on when, where, and why they occur. In an on-the-road study, 34 drivers tracked their emotions and the triggers of these experiences in-situ. In total, 367 verbal self-reports were captured, providing insights into the spatial-temporal distribution of drivers’ emotions and their determinants. Results show, for example, that intersections are emotional hotspots, and that positive emotions arise especially at the beginning and at the end of the drive. The results can help to understand emotion recognition data and to infer drivers’ emotions from contextual information if no emotion data is available.

Designing for Driver's Emotional Transitions and Rituals

  • Jiayu Wu
  • Katrine Dalum Hesseldahl
  • Sam Johnson
  • Sheila Clark
  • Dan Quinlan
  • Dale Harrow

Emotions are a topic of increasing interest in vehicle design and research, as they have a substantive impact on people’s behaviour, affecting driving performance and being a source of safety issues, particularly on long journeys. However, emotions do not usually occur distinctly and individually; they frequently transition and transform between states. It can be challenging to obtain information about the exact emotions drivers experience, especially when they are subtle. We present design-led research focused on identifying scenarios that contain normally unarticulated emotions and the mental reminders drivers use to make a journey safer, and on developing concepts for in-vehicle interactions that assist with these rituals. As a result of the research, we designed and user-tested in-vehicle interactions for two emotional transition scenarios: pre-journey preparation (‘Ready… Steady… Relax’) and checking the progress of a journey (‘Driving Whisper’).

Perceptions of Trucking Automation: Insights from the r/Truckers Community

  • Lisa Orii
  • Diana Tosca
  • Andrew L Kun
  • Orit Shaer

Recent technological advancements in automation have sparked interest in how automation will affect truck drivers and the trucking industry. However, there is a gap in the literature addressing how truck drivers perceive automation and how they believe it will impact trucking. This study aims to understand truck drivers’ perspectives on automation in the trucking industry. Extending a preliminary study, we conducted a broader analysis of comments discussing automation in the r/Truckers subreddit from February 2017 to March 2021. In general, the community held negative sentiments towards automation in the trucking industry. Participants speculated about when automation would become mainstream in trucking and discussed its feasibility in the context of executing non-driving tasks and having accommodating infrastructure. Our findings indicate that truck drivers seek to participate in conversations about the future and to prepare themselves for when automation is more prominent in the trucking industry.

 

Full Paper Session 6: Automation: Are you ready?

Mode Awareness Interfaces in Automated Vehicles, Robotics, and Aviation: A Literature Review

  • Yasemin Dönmez Özkan
  • Alexander G. Mirnig
  • Alexander Meschtscherjakov
  • Cansu Demir
  • Manfred Tscheligi

With increasing automation capabilities and a push towards full automation in vehicles, mode awareness, i.e., the driver’s awareness of the vehicle’s current automation mode, becomes an important factor. While issues surrounding mode awareness are known, research concerning and design towards mode awareness does not yet appear to be a focal point in the automated driving domain. In this paper, we provide a state-of-the-art review of mode awareness across the related domains of automated driving, aviation, and Human-Robot Interaction. We present a summary of existing mode awareness interface solutions as well as existing techniques and recognized gaps concerning mode awareness. We found that existing interfaces are often simple, sometimes outdated, and difficult to meaningfully expand without overloading the user. We also found predictive approaches to be a promising strategy for lessening the need for mode awareness via separate indicators.

A Novel Technique for Faster Responses to Take Over Requests in an Automated Vehicle

  • Francesco Walker
  • Oliver Morgenstern
  • Javier Martinez Avila
  • Marieke Martens
  • Willem Verwey

In Level 3 automated vehicles, drivers must take back control when prompted by a Take Over Request (TOR). However, there is currently no consensus on the safest way to achieve this. Research has shown that participants interact faster with an avatar when it “glows” in synchrony with the participant’s physiology (heartbeat). We hypothesized that a similar form of synchronization might allow drivers to react faster to a TOR. Using a driving simulator, we studied driver responses to a TOR when permanently visible ambient lighting was synchronized with participants’ breathing. Experimental participants responded to the TOR faster than controls. There were no significant effects on self-reported trust or physiological arousal, and none of the participants reported being aware of the manipulation. These findings suggest that new ways of keeping the driver unconsciously “connected” to the vehicle could facilitate faster, and potentially safer, transfers of control.

Technology readiness affects the acceptance of and intention to use an automated driving system after repeated experience

  • Barbara Metz
  • Johanna Wörle

User acceptance is key to the success of automated driving. A user’s technology readiness is one important factor in their behavioral intention to use automated driving, and user acceptance changes with actual experience of the technology. The effects of users’ technology readiness, automation level, and experience with the technology on the acceptance of automated driving were assessed in a driving simulator study. N=60 drivers tested an L3 or L4 motorway automated driving system during six drives on six different days and evaluated the tested systems on a variety of relevant aspects. The results show an impact of technology readiness on higher-level concepts like usefulness, satisfaction, and behavioural intention, but not on direct evaluation of the functionality or on drivers’ immediate experience of driving with the system. The implications of the results for future research are discussed.

Designing Alert Systems in Takeover Transitions: The Effects of Display Information and Modality

  • Na Du
  • Feng Zhou
  • Dawn Tilbury
  • Lionel Peter Robert
  • X. Jessie Yang

In conditionally automated driving, in-vehicle alert systems can provide drivers with information to assist their takeovers from automated driving. This study investigated how display modality and information type influenced drivers’ acceptance of in-vehicle alert systems under different event criticality situations. We conducted an online video study with a 3 (information type) × 3 (display modality) × 2 (event criticality) mixed design involving 60 participants. The results showed that, considering drivers’ perceived usefulness and ease of use, presenting “why” only information was not sufficient for takeovers compared to “what will” only information and “why + what will” information. Participants reported higher ease of use for the combination of speech and augmented reality compared to the speech-only condition. High event criticality led to lower perceived usefulness and more negative opinions of the displays. The findings have implications for the design of in-vehicle alert systems during takeover transitions.

 

Full Paper Session 7: New methods: What's in your toolbox?

Automatic Generation of Road Trip Summary Video for Reminiscence and Entertainment using Dashcam Video

  • Kana Bito
  • Itiro Siio
  • Yoshio Ishiguro
  • Kazuya Takeda

Vehicle dashboard cameras are becoming an increasingly popular kind of automotive accessory. While it is easy to obtain the high-definition video data recorded by dashcams using Secure Digital memory cards, this data is rarely used except for safety purposes because it takes substantial time and effort to review or edit many hours of such recorded videos. In this paper, we propose a new usage for this data through the automatic video editing system we have developed that can create enjoyable video summaries of road trips utilizing video and other data from the vehicle. We also report the results of comparisons between automatically edited videos created by the proposed system and manually edited videos created by study participants. The prototype developed in this study and the findings from our experiments will contribute to improving the driving experience by providing entertainment for automobile users after road trips, and by memorializing their travels.

Measuring user experience in automated driving: Developing a single-item measure

  • Chantal Himmels
  • Kamil Omozik
  • Oliver Jarosch
  • Axel Buchner

Measuring user experience is highly important for human-centered development and thus for designing automated driving systems. Multi-item measures such as the System Usability Scale (SUS) [7] or the Usability Metric for User Experience (UMUX) [14] are commonly used for collecting user feedback on technical systems or products. The goal of the present study was to investigate the potential of a single-item approach as an economical alternative to multi-item scales for measuring user experience. Therefore, a single-item measure was developed to assess both event-related and cumulative user experience in automated driving. User experience was manipulated in a between-subjects design implemented in a real-world driving task, and feedback was collected using the newly developed Single Item User Experience (SIUX) scale, the UMUX, and the SUS. Results indicate that the SIUX scale is more sensitive than the UMUX to differences in event-related user experience, but not in cumulative user experience. Both the SIUX and the UMUX were more sensitive than the SUS when measuring differences in cumulative user experience. Future studies should investigate the applicability of the SIUX scale to domains other than automated driving and collect more extensive data on the validity and reliability of all three instruments.

Development and Evaluation of a Data Privacy Concept for a Frustration-Aware In-Vehicle System

  • Klas Ihme
  • Stefan Bohmann
  • Martin Schramm
  • Sonja Cornelsen
  • Victor Fäßler
  • Anna-Antonia Pape

To realize frustration-aware in-vehicle systems based on real-time user monitoring, personal data have to be recorded, analyzed, and (potentially) stored, raising data privacy concerns that may reduce user acceptance and hence the spread of such systems. Complementing the development of a frustration-aware system with a voice interface in the project F-RELACS, a data privacy concept was created based on the principles of privacy by design and privacy by default recommended in the European General Data Protection Regulation. Nine criteria were formulated, and 23 concrete measures to satisfy the criteria were derived. The measures were evaluated in an online study with 96 participants aged between 18 and 74 years. On average, the measures were rated as rather sufficient to sufficient. Participants evaluated the use of commercial third-party software for speech processing as most critical. All results are discussed, and proposals to further increase the acceptance of frustration-aware systems are outlined.

 

Full Paper Session 8: Brave new world: Displays and visualisations

Towards future pedestrian-vehicle interactions: Introducing theoretically-supported AR prototypes

  • Wilbert Tabone
  • Yee Mun Lee
  • Natasha Merat
  • Riender Happee
  • Joost de Winter

The future urban environment may consist of mixed traffic in which pedestrians interact with automated vehicles (AVs). However, it is still unclear how AVs should communicate their intentions to pedestrians. Augmented reality (AR) technology could transform the future of interactions between pedestrians and AVs by offering targeted and individualized communication. This paper presents nine prototypes of AR concepts for pedestrian-AV interaction that are implemented and demonstrated in a real crossing environment. Each concept was based on expert perspectives and designed using theoretically-informed brainstorming sessions. Prototypes were implemented in Unity MARS and subsequently tested on an unmarked road using a standalone iPad Pro with LiDAR functionality. Despite the limitations of the technology, this paper offers an indication of how future AR systems may support future pedestrian-AV interactions.

Visualizing Event Sequence Data for User Behavior Evaluation of In-Vehicle Information Systems

  • Patrick Ebel
  • Christoph Lingenfelder
  • Andreas Vogelsang

With modern In-Vehicle Information Systems (IVISs) becoming more capable and complex than ever, their evaluation becomes increasingly difficult. The analysis of large amounts of user behavior data can help to cope with this complexity and can support UX experts in designing IVISs that serve customer needs and are safe to operate while driving. We therefore propose a Multi-level User Behavior Visualization Framework providing effective visualizations of user behavior data collected via telematics from production vehicles. Our approach visualizes user behavior data on three levels: (1) the Task Level View aggregates event sequence data generated through touchscreen interactions to visualize user flows; (2) the Flow Level View allows comparing individual flows based on a chosen metric; (3) the Sequence Level View provides detailed insights into touch interactions, glance behavior, and driving behavior. Our case study shows that UX experts consider our approach a useful addition to their design process.

HMInference: Inferring Multimodal HMI Interactions in Automotive Screens

  • Jannik Wolf
  • Marco Wiedner
  • Mohamed Kari
  • David Bethge

Driving requires high cognitive capabilities, and drivers need to be able to focus on first-level driving tasks. However, each interaction with the user interface (UI) system presents a potential distraction. Designing UIs based on insights from field-collected user interaction logs, as well as real-time estimation of the most probable interaction modality, can contribute to engineering focus-supporting UIs. The question arises, however, of how user interactions can be predicted in in-the-wild driving scenarios. In this paper, we present HMInference, an automotive machine-learning framework that exploits user interaction log data. HMInference analyzes users’ interaction sequences based on UI domains (e.g., navigation, media, settings) and driving context (e.g., vehicle trajectory) to predict different interaction modalities (e.g., touch, speech). In 10-fold cross-validation, HMInference achieves a mean accuracy of 73.2% (SD: 0.02). Our work advances areas where user interaction prediction for in-car scenarios is required, e.g., to enable adaptive system designs.

 

Full Paper Session 9: Watch your language!

How to Design the Perfect Prompt: A Linguistic Approach to Prompt Design in Automotive Voice Assistants – An Exploratory Study

  • Anna-Maria Meck
  • Lisa Precht

In-vehicle voice user interfaces (VUIs) are becoming increasingly popular while needing to handle more and more complex functions. While many guidelines exist for dialog design, a methodical and encompassing approach to prompt design is absent from the scientific landscape. The present work closes this gap by providing such an approach in the form of linguistically centered research. By extracting syntactical, lexical, and grammatical parameters from a contemporary German grammar, we examine how their respective manifestations affect users’ perception of a given system output across different prompt types. Through exploratory studies with a total of 1,206 participants, we provide concrete best practices to optimize and refine the design of VUI prompts. Based on these best practices, three superordinate user needs regarding prompt design can be identified: a) a suitable level of (in)formality, b) a suitable level of complexity/simplicity, and c) a suitable level of (im)mediacy.

In-Vehicle Intelligent Agents in Fully Autonomous Driving: The Effects of Speech Style and Embodiment Together and Separately

  • Manhua Wang
  • Seul Chan Lee
  • Harsh Kamalesh Sanghavi
  • Megan Eskew
  • Bo Zhou
  • Myounghoon Jeon

Speech style and embodiment are two widely researched characteristics of in-vehicle intelligent agents (IVIAs). This study investigated the influence of speech style (informative vs. conversational) and embodiment (voice-only vs. robot), and their interaction effects, on driver-agent interaction. We conducted a driving simulator experiment in which 24 young drivers experienced four fully autonomous driving scenarios, each accompanied by one of four agent types, and completed subjective questionnaires about their perception of the agents. Results showed that both conversational agents and robot agents promoted drivers’ likability and perceived warmth. These two features also demonstrated independent impacts: conversational agents received higher anthropomorphism and animacy scores, while robot agents received higher competence and lower perceived workload scores. Pupillometry indicated that drivers were more engaged when accompanied by conversational agents. Our findings provide insights into applying different features to IVIAs to fulfill various user needs in highly intelligent autonomous vehicles.

Effects of Native and Secondary Language Processing on Emotional Drivers’ Situation Awareness, Driving Performance, and Subjective Perception

  • Sushmethaa Muhundan
  • Myounghoon Jeon

Research shows that emotions have a substantial influence on the cognitive processes of humans and, in the context of driving, can negatively influence driving performance. Drivers’ interaction with in-vehicle agents can improve their emotional state and can lead to increased road safety. Language is another important aspect that influences human behavior and information processing. This study aims to explore the effects of native- and secondary-language processing on emotional drivers’ situation awareness, driving performance, and subjective perception through a within-subject simulation study. Twenty-four young drivers drove three different laps with a native-language-speaking agent, a secondary-language-speaking agent, and no agent. The study results indicate the importance of native-language processing in the context of driving. The native-language agent condition resulted in improved driving performance and heightened situation awareness. The study results and discussions have theoretical and practical design implications and are expected to help foster future work in this domain.

 

Full Paper Session 10: eHMIs: Sharing is caring

Investigating the Effects of Feedback Communication of Autonomous Vehicles

  • Mark Colley
  • Jan Henry Belz
  • Enrico Rukzio

Autonomous vehicles (AVs) are expected to communicate with vulnerable road users as a substitute for, for example, driver-pedestrian communication, leading to increased safety and acceptance. This communication is currently one-directional, i.e., from the AV to the pedestrian. However, today’s communication between drivers and pedestrians in crossing scenarios is bidirectional. Pedestrians gesture “thank you” or wave drivers through in case they do not want to cross. Human drivers often acknowledge this, for example, with a nod. We present an experiment in Virtual Reality (N=20), in which the effect of such acknowledgment by the AV via its external communication is investigated for the two described scenarios and with respect to pedestrian presence. Results show that such feedback is perceived as highly necessary, depends on the scenario, and improves the perceived intelligence of the AV, confirming a Halo effect.

Towards Scalable eHMIs: Designing for AV-VRU Communication Beyond One Pedestrian

  • Debargha Dey
  • Arjen van Vastenhoven
  • Raymond H. Cuijpers
  • Marieke Martens
  • Bastian Pfleging

Current research on external Human-Machine Interfaces (eHMIs) for facilitating interactions between automated vehicles (AVs) and pedestrians has largely focused on one-to-one encounters. For eHMIs to be viable in reality, they need to be scalable, i.e., able to facilitate interaction with more than one pedestrian with clarity and unambiguity. We conducted a virtual-reality-based empirical study to evaluate four eHMI designs with two pedestrians. Results show that even at this minimal criterion of scalability, traditional eHMI designs struggle to communicate effectively whom the AV intends to yield to. Road-projection-based eHMIs show promise in clarifying the specific yielding intention of an AV, although they may still not be an ideal solution. The findings point towards the need to consider the element of scalability early in the design process, and potentially the need to reconsider the current paradigm of eHMI design.

Stop or Go? Let me Know!: A Field Study on Visual External Communication for Automated Shuttles

  • Alexander G. Mirnig
  • Magdalena Gärtner
  • Vivien Wallner
  • Michael Gafert
  • Hanna Braun
  • Peter Fröhlich
  • Stefan Suette
  • Jakub Sypniewski
  • Alexander Meschtscherjakov
  • Manfred Tscheligi

In mixed traffic environments, highly automated vehicles (HAV) can potentially be disruptive and a source of hazards due to their non-human driving behavior and a lack of “traditional” communication means (gestures, eye contact, and similar) to resolve issues or otherwise unclear situations. As a result, additional external human-machine interfaces (eHMI) for automated vehicles that replace the now absent human element in communication have been proposed. In this paper, we present the results from a study in which two versions of a light band eHMI to communicate the driving intent of an automated shuttle were evaluated in a real driving environment. We found that the green-red traffic light metaphor and simple animations could improve interaction success in certain aspects. We also found and discuss that the effect of using vs. not using the visual eHMIs was overall lower than expected and that the shuttle’s position and observable driving behavior seemed to play a larger role than anticipated.

Evaluating the Impact of Decals on Driver Stereotype Perception and Exploration of Personalization of Automated Vehicles via Digital Decals

  • Mark Colley
  • Mirjam Lanzer
  • Jan Henry Belz
  • Marcel Walch
  • Enrico Rukzio

Traffic behavior and its perception are shaped by various factors such as vehicle color or size. Decals are used to express information about the owner’s beliefs or are intended to be funny. In the future, with external displays on (automated) vehicles, individualized customization could be even more pronounced. While some research has looked at the messages these decals convey, it is unclear how these decals influence surrounding drivers’ perception of the vehicle’s operator. We gathered data on decals in 29 cities in 8 countries. A thematic analysis unveiled 17 dominant themes among decals. Subsequently, we investigated the effects of decals of 9 supra-regionally common themes in an online study (N=64), finding that participants attributed different characteristics to the driver of a vehicle with a decal based on the type of decal and the participants’ country of origin. Additionally, a Virtual Reality study (N=16) revealed diverse opinions on future usage of such personalization options.


TOC - Adjunct Proceedings

SESSION: Works in Progress

Measuring Interaction-based Secondary Task Load: A Large-Scale Approach using Real-World Driving Data

  • Patrick Ebel
  • Christoph Lingenfelder
  • Andreas Vogelsang

Center touchscreens are the main Human-Machine Interface (HMI) between the driver and the vehicle. They are becoming larger and increasingly complex, and they replace functions that could previously be controlled using haptic interfaces. To ensure that touchscreen HMIs can be operated safely, they are subject to strict regulations and elaborate test protocols. Those methods and user trials require fully functional prototypes and are expensive and time-consuming. Therefore, it is desirable to estimate the workload of specific interfaces or interaction sequences as early as possible in the development process. To address this problem, we envision a model-based approach that, based on the combination of user interactions and UI elements, can predict the secondary task load of the driver when interacting with the center screen. In this work, we present our current status, preliminary results, and our vision for a model-based system built upon large-scale natural driving data.

Signaling Yielding Intent with eHMIs: the Timing Determines an Efficient Crossing

  • Hatice Şahin
  • Kevin Daudrich
  • Heiko Müller
  • Susanne CJ Boll

While different studies have investigated the different types of signals to communicate that an automated vehicle will or will not yield, the question of “when” signaling intent contributes best to cooperation is still open. We conducted a study investigating the effects of eHMIs signaling yielding intent to pedestrians at different times relative to the vehicles’ deceleration maneuver. We realized this scenario by means of a video game in which pedestrians need to safely cross the street. Our results show that signaling yielding intent with eHMIs before or simultaneously with a deceleration maneuver significantly shortens participants’ street-crossing onsets, while later presentation of yielding intent did not perform better than not presenting it at all. Our study suggests that signaling yielding intent on the eHMI before or during the deceleration of the vehicle supports an efficient crossing by the pedestrian.

InShift: A Shifting Infotainment System to Enhance Co-Driver Experience and Collaboration

  • Melanie Berger
  • Anil Eranil
  • Regina Bernhaupt
  • Bastian Pfleging

Car manufacturers introduced a variety of non-driving-related features to enhance the trip experience for drivers and passengers. However, using an in-vehicle infotainment system (IVIS) can be mentally demanding and cause driver distraction. In addition, the usage of an IVIS is often restricted to front-seat passengers, which limits the possibility of assisting the driver. To address these problems, we demonstrate InShift, an initial concept to foster co-driver participation. It presents a physical interaction concept that allows delegating selected (e.g., highly mentally demanding) IVIS functions towards the front-seat passenger. Based on the functions the driver wants to delegate, the IVIS touchscreen moves towards the co-driver to provide easier access. Results from a pilot study (N = 8) suggest that InShift offers a positive user experience for drivers and co-drivers. The qualitative feedback reveals that InShift supports driver-passenger collaboration and allows the co-driver to be a better assistant.

Virtual Horse: an Anthropomorphic Notification Interface for Traffic Accident Reduction

  • Fengyusheng Wang
  • Chia-Ming Chang
  • Takeo Igarashi

With the development of Vehicle-to-Infrastructure (V2I) and Vehicle-to-Everything (V2X) communication, the difficulty of predicting traffic accidents is decreasing. Vehicles can communicate with each other about their status and receive notifications from traffic infrastructure about the predicted movements of other objects on the road. The efficiency of showing warnings to human drivers will become the bottleneck in reducing traffic accidents. In this study, we propose Virtual Horse: an anthropomorphic notification interface that uses implicit information to indicate potential dangers to the driver. The virtual horse represents the car’s behaviors and indicates potential dangers via body language, like a real horse. We expect that this anthropomorphic interface can reduce drivers’ reaction times in preventing dangers. We developed a prototype and ran a pilot study in a web-based driving simulation. The results showed potential benefits of using the proposed interface as well as useful insights for our future development of a user study.

Conception, Development and First Evaluation of a Context-Adaptive User Interface for Commercial Vehicles

  • Lasse Schölkopf
  • Maria-Magdalena Wolf
  • Veronika Hutmann
  • Frank Diermeyer

Modern vehicles are complex working environments and feature a multitude of functionalities. This applies in particular to commercial vehicles, which are equipped with even more functions than passenger cars. Research has shown the potential of context-adaptive user interfaces for reducing the complexity of human-machine interaction but has focused on passenger cars. Therefore, a context-adaptive touchscreen-based system is conceptualized specifically for commercial vehicles such as trucks, based on existing findings and design guidelines. Acknowledging the importance of gathering early user feedback to allow fast iteration cycles, an interactive prototype was implemented and a modular study setup developed. This combination was tested in an initial user study, which evaluated the usability and user experience of the novel context-adaptive interface.

A VR-Based Simulator Using Motion Feedback of a Real Powered Wheelchair for Evaluation of Autonomous Navigation Systems

  • Hiroshi Yoshitake
  • Kazuto Futawatari
  • Motoki Shino

Autonomous navigation systems for powered wheelchairs are drawing attention as a means to support older adults in moving outdoors. For their development, it is important to evaluate how users behave and feel when they experience autonomous navigation. VR-based simulators are used to evaluate users’ behavior and subjective assessment in various scenarios without putting anybody at risk. However, it is difficult to reproduce realistic motion feedback in simulators using motion platforms. Moreover, this physical feedback influences users’ perception of self-motion, which affects their behavior and feelings. In this paper, we propose a novel simulator that gives real motion feedback while simulating various scenarios by combining a virtual reality headset and a real powered wheelchair. Experimental results showed that subjective assessment of comfort during autonomous navigation differed between the proposed simulator and a simulator without motion feedback, indicating that perception of the wheelchair’s behavior was enhanced by real motion feedback.

Gear Up for Safety: Development and Evaluation of an Assisted Bicycle

  • Philipp Wintersberger
  • David Suppan
  • Andreas Schweidler
  • Florian Michahelles

Although cycling is a promising transport modality for the future, cyclists have not substantially benefited from the safety gains of the last decades. To improve cycling safety and convenience, as well as to extend the user base, we propose developing a self-balancing, assisted, and connected bicycle that transfers several assistance functions from the vehicle to the cycling domain. In this work-in-progress we sketch our vision of this bicycle, discuss some challenges for its development, and present early results. We present a self-balancing function using reinforcement learning and current progress in the development of a virtual reality motion bicycle simulator. If progress in the projects continues as expected, we will report results from studies with human users soon.

An Online Study to Explore Trust in Highly Automated Vehicle in Non-Critical Automated Driving Scenarios

  • Haoyu Dong
  • Marieke Martens
  • Bastian Pfleging

While using highly automated systems, various non-critical automated driving scenarios can be identified in which trust plays a role. In this study, we investigated the change of trust in these scenarios with a digital “Feeling of Trust” indicator, through video-based online experiments simulating automated driving. Initial results show that trust changes even in these scenarios and reveal multiple influential factors. While trust seems to drop consistently in certain cases, we found individual differences in other events. With our experimental setup and findings, we provide a tool to examine trust aspects in an online study. This contributes to the understanding of how to design human-vehicle interactions in highly automated cars with the goal of calibrating trust under ordinary non-critical events.

Should Automated Features Warn, Assist, or Take Control?

  • Myrna S van den Berg
  • Benno Thijs
  • Bastian Pfleging
  • Christian P. Janssen

Driver support and assistance features for vehicles have grown considerably in recent years. Despite a wealth of features, car brands and manufacturers differ in their implementation of these features. What insights can we gain into how drivers choose to (partially) automate their vehicles, given all these options? In this work-in-progress paper, we report the interim results of a survey that investigated this question. The survey contained hierarchical questions asking mainly Dutch respondents for their automated driving feature preferences. Results show that respondents chose a large number of features and were very diverse in their preferences. Nevertheless, there seems to be a preference for control-type features over warning and passive-assist features. Based on these findings, we conclude that our sample supports the continued development of such features.

What does Well-Designed Adaptivity Mean for Drivers? A Research Approach to Develop Recommendations for Adaptive In-Vehicle User Interfaces that are Understandable, Transparent and Controllable

  • Julia Graefe
  • Doreen Engelhardt
  • Klaus Bengler

Applications on our everyday devices like smartphones and computers are becoming increasingly smart, adaptive, and personalized. We assume that users will soon expect the same behavior from their cars. Research shows that well-designed adaptivity has the potential to increase a system’s usability and thereby offer safer driver-vehicle interaction. While car manufacturers are already presenting initial interaction concepts, there still seems to be a research gap regarding the human factors challenges of adaptivity. In this paper, we present some of these challenges and explain their relevance within the automotive context. Additionally, we present our research approach for developing recommendations for the design of adaptive user interfaces for in-vehicle comfort and infotainment features.

The TOR Agent: Optimizing Driver Take-Over with Reinforcement Learning

  • Jakob Kuen
  • Clemens Schartmüller
  • Philipp Wintersberger

Various factors influence drivers’ response to Take-Over Requests in automated driving, and a wide range of designs have been proposed to improve transitions. Still, little research has investigated how systems could deliver Take-Over Requests at the best moment in time. In this paper, we sketch the idea of a reinforcement learning agent that learns to deliver Take-Over Requests at the right time so that drivers’ performance is optimized, which could help to increase driving safety. We implemented such a system in Unity to evaluate this approach using a simple driver model. Our agent receives coordinates of the upcoming road segment and learns to deliver a Take-Over Request at an appropriate moment within a short time window. The reward function is designed to minimize the lateral deviation in the subsequent phase of manual driving. The initial results obtained are promising, and we will evaluate the concept with real human users soon.

Web-based Simulator for Studying Shared Control of Autonomous Platoons

  • Utkarsh Singh
  • Chris S Crawford

Recent interest in semi-autonomous platoons has led to novel Human-Machine Interface research. Additionally, studies seem to feature autonomous platoon simulations more frequently than traditional field tests. Simulation-based approaches often support rapid prototyping and evaluation of various HMI designs. However, existing simulation platforms can be bulky or expensive, or may require specialized hardware, which can present barriers. In this work, we present WebPlatoon, a novel web-based simulation platform. WebPlatoon seeks to support the design and evaluation of shared-control scenarios featuring semi-autonomous platoons. By leveraging web technologies, our platform aims to present initial steps towards portable, cross-platform, and accessible platforms for semi-autonomous platoon research.

Driver Monitoring Systems: Perceived Fairness of Consequences when Distractions are Detected

  • Michael Nees

Driver monitoring may become a standard safety feature to discourage distraction in vehicles with or without automated driving functions. Research to date has focused on technology for identifying driver distraction—little is known about how drivers will respond to monitoring systems. An exploratory online survey assessed the perceived risk and reasonableness associated with driving distractions as well as the perceived fairness of potential consequences when a driver monitoring system detects distractions under either manual driving or Level 2 automated driving. Although more research is needed, results suggested: (1) fairness was associated with perceived risk; (2) alerts generally were viewed as fair; (3) more severe consequences (feature lockouts, insurance reporting, automation lockouts, involuntary takeovers) generally were viewed as less fair; (4) fairness ratings were similar for manual versus Level 2 driving, with some potential exceptions; and (5) perceived risk of distractions was slightly lower with automated driving.

Attitudes Towards Autonomous Public Transportation

  • Henrik Detjen
  • Irawan Nurhas
  • Stefan Geisler

Public transportation will become highly automated in the future, and at some point, human drivers will no longer be necessary. Today, many people are skeptical about such scenarios of autonomous public transport (abbr.: APT). In this paper, we assess users’ subjective priority of different factors that lead to personal acceptance or rejection of APT using an adapted online version of the Q-Methodology with 44 participants. We found four prototypical attitudes to which subgroups of participants relate: 1) technical enthusiasts, 2) social skeptics, 3) service-oriented non-enthusiasts, and 4) technology-oriented non-enthusiasts. We provide an unconventional perspective on APT acceptance that helps practitioners prioritize design requirements and communicate in ways that target users’ specific attitudes.

XR-OOM: Mixing Virtual Driving Simulation with Real Cars and Environments Safely

  • David Goedicke
  • Alexandra W.D. Bremers
  • Hiroshi Yasuda
  • Wendy Ju

High-fidelity driving simulators can act as test beds for designing in-vehicle interfaces or validating the safety of novel driver assistance features. XR-OOM is a mixed-reality driving simulator system that enables us to superimpose virtual objects and events into the view of participants engaging in real-world driving in unmodified vehicles. In this work-in-progress paper, we present system requirements and specify our initial proof-of-concept system design. This work enables us to further develop measures for using such systems in ways that are safe, effective, and lead to accurate, repeatable data collection about behavioral responses in hybrid real-world/simulated driving tasks.

Taking on driving tasks yourself? That was yesterday! How drivers would like to be supported by assistance systems

  • Lisa Marie Zankel
  • Paul Gerber
  • Verena Zimmermann
  • Nina Gerber

Advanced Driver Assistance Systems (ADAS) can be found in almost every vehicle nowadays. They support drivers and usually ensure comfort and road safety. While some ADAS have been on the market for years and have proven themselves, some have only recently become available or are still in development. The present study investigates in which situations and contexts ADAS are considered to be most useful. We explored in which contexts a high degree of automation is desired and which implications this has for the design of ADAS. We found that users prefer a high level of automation for motorway driving and parking, but not for urban and overland driving. Our results suggest that the use of and trust in ADAS is related to their complexity. Based on our results, we derive recommendations for interface designers and car manufacturers.

Evaluation of graphical human-machine interfaces for turning manoeuvres in automated vehicles

  • Ina Krefting
  • Alexander Trende
  • Anirudh Unni
  • Jochem Rieger
  • Andreas Luedtke
  • Martin Fränzle

Autonomous vehicles may become an important part of traffic in the next few decades. One of the key factors that contribute to the acceptance of the technology is that the user can understand and, to a certain degree, predict the behaviour of the automation. One way to communicate the automation’s intention and behaviour is graphical human-machine interfaces (HMIs). An online survey was conducted to evaluate five HMI designs for turning manoeuvres in automated vehicles. During the survey, the participants rated the HMIs based on videos. The overall ratings do not show a clear favourite; however, separating the participants by their trust in automation and technical affinity indicates preferences regarding the level of abstraction. Furthermore, 97% of the participants agreed that HMIs like the ones presented would help them gain or increase their trust in automated vehicles.

Developers’ Information Needs during Test Drives with Automated Vehicles in Real Traffic: A Focus Group Study

  • Franziska Henze
  • Natalie Magdalena Stasinski
  • Dennis Fassbender
  • Christoph Stiller

So far, user interfaces have mainly been developed for customers using highly automated driving functions, but little is known about the requirements for human-machine interfaces for function developers. However, good development interfaces are important for efficient validation during real-traffic test drives. Therefore, we present a focus group study with software developers of such functions to characterize their needs. Firstly, we identify typical situations where the automated vehicle executes an action other than the one expected, and, secondly, we gather developers’ requests for information to increase the traceability of driving decisions in general. We found that developers perceive actions involving lateral motion as less transparent than those involving longitudinal motion and therefore identify a need for more detailed explanations of the former. Additionally, explanations should include more environmental features to help understand the reasons behind an action, and at the same time be tailored to the developers’ information interests.

Origo Steering Wheel: Improving Tactile Feedback for Steering Wheel IVIS Interaction using Embedded Haptic Wave Guides and Constructive Wave Interference

  • Ahmed Farooq
  • Hanna Venesvirta
  • Hasse Sinivaara
  • Mikko Laaksonen
  • Arto Hippula
  • Veikko Surakka
  • Roope Raisamo

The automotive industry is evolving through “Electrification”, “Autonomous Driving Systems”, and “Ride Sharing”, and all three vectors of change are taking place in the same timeframe. One of the key challenges during this transition will be to present critical information, collected through additional onboard systems, to the driver and passengers, enhancing multimodal in-vehicle interaction. In this research, the authors propose creating embedded tactile-feedback zones on the steering wheel itself, which can be used to relay haptic signals to the driver with little to no visual demand. Using “Haptic Mediation” techniques such as 3D-printed Embedded Haptic Waveguides (EHWs) and Constructive Wave Interference (CWI), the authors were able to provide reliable tactile feedback in normal driving environments. Signal analysis shows that EHWs and CWI can reduce haptic signal distortion and attenuation in noisy environments, and during user testing, this technique yielded better driving performance and required lower cognitive load while completing common IVIS tasks.

What if the Automation Fails? – A Classification of Scenarios in Teleoperated Driving

  • Carmen Kettwich
  • Andreas Schrank
  • Hüseyin Avsar
  • Michael Oehl

Teleoperated driving as an enabler has the potential to bridge the gap to fully automated driving (SAE Level 5 [13]) by remotely monitoring and controlling highly automated vehicles (AVs, SAE Level 4) whenever their automation fails to do so. To ensure safe and efficient teleoperation, a user-centered human-machine interface (HMI) considering the use cases, scenarios, and sequences relevant to teleoperated driving needs to be designed. For this purpose, this paper presents, as a foundation, an extensive system for classifying scenarios relevant to remote-controlled AVs from a control center perspective. It is based on four major categories pertaining to the vehicles, the teleoperation workstation, interaction partners, and the environment. The system will serve as a scaffolding to categorize a catalogue of more than 150 scenarios derived from several research projects, and it will be adapted in future research to fit an ever-broader range of scenarios in the teleoperation of AVs.

Exploratory Breaks: A User Interface that Encourages Car Drivers to Take Valuable Breaks

  • Melanie Berger
  • Aditya Dandekar
  • Regina Bernhaupt
  • Bastian Pfleging

Recent in-car infotainment features focus especially on enhancing the experience while moving. However, the experience of a car ride is not limited to the in-car experience. Especially on long-distance journeys, breaks are crucial to fulfill basic human needs and to reduce driver fatigue. However, breaks cannot always be planned in advance, and spontaneous breaks often offer a poor experience, e.g., when it comes to food choices or points of interest. This often results in short breaks and limited recovery from stressful driving. To encourage drivers to take breaks, we present a concept that recommends exploratory stops based on occupants’ interests (e.g., viewpoints, restaurants). Results from a pilot study (N = 10) show that personalized breaks lead towards a positive user experience. The qualitative feedback reveals that such a system encourages drivers to take more breaks because the stop fits their interests, makes them curious about the location, and is perceived as easy to use while driving.

A Method for Location Initialization of Handheld Devices using Autonomous Driving Vehicles for Interactive Systems

  • Yoshio Ishiguro
  • Kazuya Takeda

One basic function performed by autonomous vehicles is estimation of their relative position within the environment, using LiDAR and high-precision 3D point-cloud maps. We propose a method for acquiring the relative position of smartphones by exploiting the localization information of mobility systems and their user interaction functionality. Generally, the technology required for estimating a device’s position involves a trade-off between accuracy and the simplicity of the device. Our proposed method is a real-time, self-positioning approach that uses a vehicle’s own position and mapping information as an initialization reference for estimating a smartphone’s relative position. The system continues to track the smartphone and to estimate its position after the initial linkup with the vehicle. The results of our experimental evaluation confirm that the phone can be accurately tracked and its position corrected, both indoors and outdoors, without any special equipment, and with a sufficiently small processing load for the smartphone.

Evaluation of Driver’s Sense of Control in Lane Change Maneuvers with a Cooperative Steering Control System

  • Kohei Tatsumi
  • Akira Utsumi
  • Tetsushi Ikeda
  • Yumiko O. Kato
  • Isamu Nagasawa
  • Kazuhiko Takahashi

Balancing safety and comfort is an important requirement in highly automated driving. In order to achieve safe and comfortable transportation using automated driving technology, cooperative driving, in which the driver and the system work together on the common driving task, may be more effective than a system that completely replaces the driver. Therefore, in this research, we address a cooperative driving system that provides steering assistance and steer-angle regulation based on force-feedback and steer-by-wire systems. We evaluate the safety and comfort of the system through a simulated driving study under different levels of system intervention in lane change maneuvers. Experimental results show that steer-angle regulation without force feedback gives the highest sense of control (involvement in driving) to drivers, although the driving maneuver is overridden by the system to maintain safety in the current condition.

Towards Emulating Internet-of-Vehicles on a Single Machine

  • Wangkai Jin
  • Xiaoxing Ming
  • Zilin Song
  • Zeyu Xiong
  • Xiangjun Peng

Techniques for Human-Vehicle Interaction can easily be bounded by the limited amount of computational resources within vehicles. Therefore, outsourcing the computations from vehicles to a powerful, centralized server has become the prime method in practice. Such a formalization between edge vehicles and centralized servers is denoted as the Internet-of-Vehicles, which forms multiple vehicles into a distributed system. However, there is no available support for examining the feasibility and suitability of Human-Vehicle Interaction techniques in the context of the Internet-of-Vehicles. Such examinations are essential for newly proposed techniques: they provide experimental characterizations, suggest detailed implementation schemes, and build understanding of their real-world effects under Internet-of-Vehicles settings.

In this work, we report our progress on a general-purpose and portable emulation platform for examining the effects of novel Human-Vehicle Interaction techniques under the Internet-of-Vehicles setting. The key idea of our work is twofold: (1) we provide an automatic extractor that retrieves the key patterns from Human-Vehicle Interaction workloads, so that our platform can accommodate the needs of different scenarios/techniques; and (2) we leverage configurable networking connections to provide abstractions of the interactions between edge vehicles and centralized servers, so that we can enable various types of emulations (e.g., geo-distributed applications). Our current progress is the finalization of the first prototype, which we leverage to characterize the impacts of a Deep-Neural-Network-driven in-vehicle application. Our results reveal the impacts of different implementations under the Internet-of-Vehicles setting; future work will focus on enhancing the characteristics of our prototype and scaling our study to a broader range of applications in Human-Vehicle Interaction.

Context and Culture affect the Psychometrics of Questionnaires evaluating Speech-based Assistants

  • Ina Koniakowsky
  • Alexandra Loew
  • Yannick Forster
  • Frederik Naujoks
  • Andreas Keinath

Intelligent Personal Assistants (IPAs) have grown into technologically mature systems. However, the instruments used for evaluating the usability and user experience of IPAs were developed two decades ago. This poses the risk that research and development apply inadequate measurements to a novel technology. In this study, more recent scales from human-robot interaction were used for evaluating speech-based assistants in vehicles with a Chinese sample. However, it cannot be assumed that adapting a questionnaire from another context and culture leads to objective, reliable, and valid measurements. Therefore, the data were examined regarding internal consistency and factor structure. Cronbach's alpha was considerably high. However, a factor analysis did not support the assumed four-factor structure but rather a two-factor solution. The findings suggest that adapting a questionnaire from a different context and culture can affect its psychometrics. Consequently, the underlying postulated constructs should be treated with caution.

Scaling up Automated Vehicles’ eHMI Communication Designs to Interactions with Multiple Pedestrians – Putting eHMIs to the Test

  • Marc Wilbrink
  • Manja Nuttelmann
  • Michael Oehl

To ensure safety in future mixed traffic, automated vehicles (AVs) will have to interact in an understandable way with other, more vulnerable road users (VRUs). Current research has shown positive effects of external human-machine interfaces (eHMIs) on AV-VRU interaction. This paper presents preliminary results of an online video study focusing on the evaluation of different eHMI designs in situations with multiple pedestrians. Three eHMI communication designs using a 360° LED light band and different combinations of additional pedestrians' positions were analyzed. The main goal of this study was to investigate how the communication designs were interpreted in situations with multiple pedestrians and whether negative effects may arise through their presence. Preliminary results showed that participants crossed the street more often, experienced more certainty in their decision, and felt more addressed when interacting with an AV using an intention-based communication strategy compared to a static eHMI or no eHMI.

Designing Collaboration between Human Beings and Self-driving Heavy Vehicles with Emerging Interaction Technologies

  • Johan Fagerlönn
  • Anna Sirkka
  • Lina Orrell
  • Yanqing Zhang
  • Stefan Larsson
  • Elin Tybring
  • Hanna Rönntoft

The present work-in-progress paper describes the development of novel user interface concepts that allow human operators to collaborate with self-driving heavy vehicles in a mining context. Concept development was performed within a user-centered design process containing three main steps. First, a study was performed to identify interaction points between heavy vehicle drivers and other human operators in mines. Second, potential interaction technologies were investigated. Finally, suggestions for interaction models were designed and implemented in 3D animated movies. The concepts were designed to support human operators performing loading tasks together with self-driving vehicles, and utilize voice interaction and an augmented reality head-up display to facilitate the interaction. In addition to the mining context, similar concepts were developed to support forklift drivers performing loading tasks in logistics centers. In the next step of this project, the suggested interaction models will be evaluated with mine workers and forklift drivers.

FlowMotion: Exploring the Intuitiveness of Fluid Motion Based Communication in eHMI Design for Vehicle-Pedestrian Communication

  • Debargha Dey
  • Brent Temmink
  • Daan Sonnemans
  • Karijn Den Teuling
  • Lotte van Berkel
  • Bastian Pfleging

External Human-Machine Interfaces (eHMIs) are typically proposed to facilitate explicit communication of vehicle intent to pedestrians. However, implicit communication through vehicle kinematics or movement patterns has been shown to be the primary indicator of driving behavior and intention in traffic, for both manually driven and automated vehicles. Unfortunately, subtle changes in kinematics are often hard to perceive, making it difficult to comprehend a vehicle’s consequent intention. We created a novel eHMI concept that uses fluid movements to highlight and exaggerate the movement of the vehicle, and thereby emphasize this movement-based implicit communication. Preliminary results showed that the movement of the fluid called attention to subtle changes in the acceleration and deceleration of the vehicle and effectively indicated the nature of the change in vehicle speed, which in turn alluded to its intentions in traffic. This approach explores design solutions that highlight vehicle kinematics to facilitate communication of vehicle intention in traffic, for both automated vehicles of the future and manually driven vehicles of today.

CommDisk: A Holistic 360° eHMI Concept to Facilitate Scalable, Unambiguous Interactions between Automated Vehicles and Other Road Users

  • Rutger Verstegen
  • Debargha Dey
  • Bastian Pfleging

External Human-Machine Interfaces (eHMIs) are proposed to address the communication gap between automated vehicles (AVs) and other road users. However, existing eHMI concepts are often limited in their ability to communicate in a clear, unambiguous, and scalable way, which hinders their effectiveness. eHMIs typically communicate by showing the vehicle’s driving intention, situational awareness, or its path/trajectory; none of these is independently able to achieve sufficiently effective and unambiguous communication. We created a novel eHMI concept that combines these three communication elements in one unified design that (1) promises a more seamless, two-way interaction between AVs and other road users, and (2) is capable of more scalable, high-resolution communication, i.e., facilitating communication beyond one-on-one interaction while mitigating ambiguity regarding the specific recipient of the communication message. This addresses some of the limitations of existing eHMIs and paves a path towards a novel approach in eHMI design for future AVs.

Shape-Changing Interfaces as eHMIs: Exploring the Design Space of Zoomorphic Communication between Automated Vehicles and Pedestrians

  • Debargha Dey
  • Coen de Zeeuw
  • Miguel Bruns
  • Bastian Pfleging

External human-machine interfaces (eHMIs) are shown to support Automated Vehicles (AVs) in interacting with vulnerable road users such as pedestrians. Typically, eHMI concepts are light- or sound-based designs that communicate the AV’s intention with abstract visualizations, which are unfamiliar, not fully intuitive, and require learning. In the natural world, animals are shown to communicate their intention or disposition with a variety of visible reactions, using posture, gesture, or other means, which have implicit meaning by association with centuries of evolution. We explore the design space of biomimicry-inspired communication between AVs and pedestrians using external Shape-Changing (eSC) interfaces. We created six distinct concepts of eHMIs that employ external shape change and evaluated them in a focus group. Results show that zoomorphic, shape-changing-based eHMI concepts are promising in achieving intuitive communication about an AV’s intention in traffic. This may help in reducing the learning effort associated with abstract eHMIs, and ease the integration of AVs.

An Autonomous Driving System - Dedicated Vehicle for People with ASD and their Caregivers

  • Gandhimathi Padmanaban
  • Nathaniel P Jachim
  • Hala Shandi
  • Lilit Avetisyan
  • Garrett Smith
  • Howraa Hammoud
  • Feng Zhou

Automated driving system-dedicated vehicles (ADS-DVs), specially designed for people with various disabilities, can help improve their mobility. However, research on autonomous vehicles (AVs) for people with cognitive disabilities, especially Autism Spectrum Disorder (ASD), is limited. Thus, in this study, we focused on the challenge we framed as: “How might we design an ADS-DV that benefits people with ASD and their caregivers?”. To address this design challenge, we followed the human-centered design process. First, we conducted user research with caregivers of people with ASD. Second, we identified their user needs, including safety, monitoring and updates, individual preferences, comfort, trust, and reliability. Third, we generated a large number of ideas through brainstorming and affinity diagrams, based on which we proposed an ADS-DV prototype with a mobile application and an interior design. Fourth, we tested both the low-fidelity and high-fidelity prototypes to fix possible issues. Our preliminary results showed that such an ADS-DV could potentially improve the mobility of those with ASD without worries.

The Importance Distribution of Drivers’ Facial Expressions Varies over Time!

  • Jiahao Wang
  • Zeyu Xiong
  • Yicun Duan
  • Junyu Liu
  • Zilin Song
  • Xiangjun Peng

Facial expressions are valuable data sources for advanced Human-Vehicle Interaction designs. However, existing works always consider the whole facial expression as input, which restricts the design space for detailed optimizations. In this work, we make the hypothesis that facial expressions can exhibit significant variations during the driving procedure. Our goal in this work-in-progress is to justify this hypothesis by performing detailed characterizations of drivers’ facial expressions. To this end, we leverage Local Binary Fitting, a novel mechanism for selecting representative feature points from facial images on the fly, for our characterizations. Our characterizations reveal that, among six major components of facial feature points, there are significant variations in correlations with a certain vehicle status (i.e., vehicle speed), in terms of (1) the time spots during the driving procedure and (2) the gender of the drivers. We believe our work can serve as a starting point for incorporating these characteristics into a wide range of adaptive and personalized Human-Vehicle Interaction designs.

Investigating the Interplay between eHMI and dHMI for Automated Buses: How Do Contradictory Signals Influence a Pedestrian's Willingness to Cross?

  • Merle Lau
  • Meike Jipp
  • Michael Oehl

In future urban traffic, communication abilities of automated vehicles (AVs) are needed to enable safe interaction with pedestrians as so-called vulnerable road users. Dynamic human-machine interfaces (dHMIs) and external human-machine interfaces (eHMIs) are designed to enable AVs to communicate implicitly, e.g., via vehicle dynamics, and explicitly, e.g., by light signals. To date, it has not been sufficiently studied how the exact interplay of both communication tools should take place, nor how this should be handled for an automated bus. This study aims to shed light on pedestrians’ interaction with an automated bus by combining vehicle behavior (dHMI) and different eHMIs. The main focus was on the effect of contradictory communication messages on pedestrians’ willingness to cross. Results showed that for non-yielding conditions, a dynamic eHMI that displayed an erroneous yielding intent led to significantly higher willingness to cross among pedestrians compared to a static eHMI or no eHMI.

SESSION: Workshops

To Customize or Not to Customize - Is That the Question?

  • Sidney T Scott-Sharoni
  • Nadia Fereydooni
  • Bruce N. Walker
  • Myounghoon Jeon
  • Andreas Riener
  • Philipp Wintersberger

As automated vehicles become more prevalent, designing interfaces that best fit all users, especially those in minority populations, is a pressing but difficult goal. System-driven adaptation is a commonly used approach, as it is easier and created by experts, but it has innate flaws. Customization, on the other hand, allows users to consciously alter the interface to appear and operate in a manner most suited to their needs and wants. However, various components of the interface have different constraints, capabilities, and requirements regarding the amount of customization that is appropriate. In this workshop, we will dissect an expansive taxonomy for customization and develop a series of levels in order to get the full benefits of customization, which in turn can help engineers and designers create more user-centered systems.

Computational Modeling of Driving Behaviors: Challenges and Approaches

  • Myounghoon Jeon
  • Yiqi Zhang
  • Heejin Jeong
  • Christian P. Janssen
  • Shan Bao

Computational modeling has great advantages in human behavior research, such as abstracting the problem space, simulating the situation by varying critical variables, and predicting future outcomes. Although much research has been conducted on driver behavior modeling, relatively little modeling research has appeared at the Auto-UI conferences; what little there is has focused mostly on qualitative models of manual driving. In this workshop, we will first describe why computational driver behavior modeling is crucial for automotive research and then introduce recent driver modeling research to researchers, practitioners, and students. By identifying research gaps and exploring solutions together, we expect to form the basis of a new modeling special interest group combining the Auto-UI community and the computational modeling community. The workshop will close with suggestions on directions for future transdisciplinary work.

AutoWork 2021: Workshop on the Future of Work and Well-Being with Automated Vehicles

  • Peter Fröhlich
  • Clemens Schartmüller
  • Philipp Wintersberger
  • Andreas Riener
  • Andrew L Kun
  • Stephen Brewster
  • Orit Shaer
  • Matthias Baldauf

The emergence of automated driving systems will influence roles and practices in many parts of work life. Human factors and user interface design will play an important role in shaping the transition towards more productivity and well-being. The AutoWork 2021 workshop builds on its predecessor workshops by refining a research agenda and drafting concrete research studies and projects towards achieving this goal. This year’s version will especially tackle the challenge of designing both for bottom-up “worker-driven” empowerment and engagement in individual, partly automated transport, as well as for supporting affected users and stakeholders in the systemic economically-driven introduction of fully autonomous vehicles in closed intralogistics areas. In a two-session schedule tailored to fit the requirements of an online event, participants will evaluate gathered requirements from previous workshops and projects on these topics and define relevant user stories and elaborate experimental designs with measurable outcomes to contribute to the research roadmap.

Workshop on Prosocial Behavior in Future Mixed Traffic

  • Hatice Sahin
  • Heiko Mueller
  • Shadan Sadeghian
  • Debargha Dey
  • Andreas Löcken
  • Andrii Matviienko
  • Mark Colley
  • Azra Habibovic
  • Philipp Wintersberger

“Prosocial behavior” means cooperating and acting in a way that benefits others. Since more and more diverse road users (such as electric bicycles, scooters, etc.), but also vehicles at different levels of automation, are sharing the safety-critical road environment, acting prosocially will become increasingly important in the future for both human and automated traffic participants. A few papers have already begun to address this issue, but there currently exist no systematic methodological approaches for researching this area. In the proposed workshop, we plan to define more specifically what characterizes prosocial behavior in future traffic scenarios where automated and manual vehicles meet and interact with all kinds of vulnerable road users. We further want to identify important scenarios and discuss potential evaluation methods for researching prosocial behavior. Ultimately, these findings will be integrated into a research agenda actively pursued through cooperation initiated during this event.

Workshop for designing biofeedback of driver’s state and emotion in automated vehicles

  • Marine Capallera
  • Quentin Meteier
  • Kevin Koch
  • Markus Funk
  • Mira El Kamali
  • Karl Daher
  • Omar Abou Khaled
  • Elena Mugellini

Different driver states and emotions can negatively affect driving performance. Recent advances in affective computing now make it possible to measure users’ states or emotions using various sources of data, such as physiological signals or voice samples. Conveying biofeedback in the car could help make roads safer and improve users’ health and mental state during a ride in an autonomous car. This workshop aims at selecting the drivers’ hazardous states and emotions that are most crucial to assess, as well as determining how to convey the appropriate biofeedback to the driver using multimodal interaction in the car.

The 1st Workshop on User Experience in Urban Air Mobility: Design considerations and issues

  • Young Woo Kim
  • Cherin Lim
  • Seul Chan Lee
  • Sol Hee Yoon
  • Yong Gu Ji

Urban Air Mobility (UAM) is beginning to gain attention as an expansion of currently provided transportation. However, the unfamiliarity of air transport can make users reluctant to try these distinctive experiences. Thus, in this workshop, we will present the current state of UAM research trends and encourage participants to discuss in depth and share the User Experience (UX) considerations for designers and developers of the technology. The aim of the workshop is to discuss potential issues and expected challenges for the introduction of UAM into the transportation system. We expect to share our understanding of UAM with experts in automotive fields and establish future research directions.

Workshop on Exploring Interfaces for Enhanced Automation Assistance for Improving Manual Driving Abilities

  • Alexander G. Mirnig
  • Mauricio Marcano
  • Sandra Trösterer
  • Joseba Sarabia
  • Sergio Diaz
  • Yasemin Dönmez Özkan
  • Jakub Sypniewski
  • Ruth Madigan

With continually advancing automation capabilities in vehicles, there is increasing potential for these capabilities to be used not only as stopgaps towards full automation but to enhance humans’ manual driving capabilities during this transition phase and beyond. By employing smart automation assistance (e.g., highlighting of relevant roadside information, maneuver interventions and corrections), it might even be possible to enable automation assisted “manual” driving for those, who might not be able to drive otherwise (older adults, individuals with impairments). In this workshop, we intend to explore this problem together with the participants and identify potentials for automation assistance to enhance manual driving performance, and what in-vehicle interfaces can contribute in this regard.

Passenger's State of Mind: Future Narratives for Semi-Autonomous Cars

  • Güzin Sen
  • Sila Umulu

This online workshop invites participants to elaborate the infotainment systems of semi-autonomous cars with passengers’ priorities in mind and develop quick auto-UI solutions by utilizing “passenger infotainment modes”. These modes represent a front-seat passenger's changing relations with the driver, the surroundings, and the infotainment system. There can be various strategies for an automotive user interface to adapt to these diverse situations where the passenger prioritizes one agent over the other (e.g., co-navigating with the driver, enjoying the scenery without being interrupted by the system notifications, being immersed in a private bubble of entertainment). The workshop provides each team of participants with a future travel scenario and encourages them to enhance its narrative by defining possible activities or tasks delivered through automotive infotainment systems. Finally, these narratives are enriched further by addressing a selection of the passenger infotainment modes with new proposals for the auto-UI content, functionalities, or interactions.

CUI @ Auto-UI: Exploring the Fortunate and Unfortunate Futures of Conversational Automotive User Interfaces

  • Justin Edwards
  • Philipp Wintersberger
  • Leigh Clark
  • Daniel Rough
  • Philip R Doyle
  • Victoria Banks
  • Adam Wyner
  • Christian P. Janssen
  • Benjamin R. Cowan

This work aims to connect the Automotive User Interfaces (Auto-UI) and Conversational User Interfaces (CUI) communities through discussion of their shared view of the future of automotive conversational user interfaces. The workshop aims to encourage creative consideration of optimistic and pessimistic futures, encouraging attendees to explore the opportunities and barriers that lie ahead through a game. Considerations of the future will be mapped out in greater detail through the drafting of research agendas, by which attendees will get to know each other’s expertise and networks of resources. The two-day workshop, consisting of two 90-minute sessions, will facilitate greater communication and collaboration between these communities, connecting researchers to work together to influence the futures they imagine in the workshop.

The 3rd Workshop on Localization vs. Internationalization: Accessibility of Autonomous Vehicles by Different End-Users

  • Kristina Stojmenova
  • Seulchan Lee
  • Jaka Sodnik
  • Miltos Kyriakidis
  • Carolina Diaz Piedra
  • Myounghoon Jeon

The elderly, children, and people with disabilities are among the vulnerable users who can benefit the most from autonomous vehicles (AVs). Yet, most AV concepts discussed in the past decade, including at the AutomotiveUI conferences, seem to focus on the mobility needs of younger and middle-aged drivers, who are the overeducated working population, technological enthusiasts, and above-middle-class users. In that regard, the third workshop on Localization vs. Internationalization aims to explore these disparities, identify the accessibility barriers, and search for research approaches that can increase the accessibility opportunities of vulnerable AV end-users. Built upon the findings from the previous workshops, which focused on diversity, inclusion, and differences among cultures regarding AV-related research approaches, the purpose of the present workshop is to provide in-depth insight into the state of global AV research and identify areas that still need to be explored to increase AV accessibility, locally and internationally.

Workshop on the Design of Inclusive and Accessible Future Mobility

  • Henrik Detjen
  • Stefan Geisler
  • Stefan Schneegass
  • Andrew L Kun
  • Vidya Sundar

Through automation, future mobility has the potential to offer services to a broader range of users than ever before. However, users at the margins are often not represented in the design process of vehicles, and specifically automated vehicles. This can lead to these users being overlooked and ultimately excluded from the use of automated mobility services. Therefore, it is vital to raise awareness of the inclusive design of future mobility services and reflect on how inclusive design can become part of automotive design and research practices. In this AutomotiveUI 2021 workshop, we will explore this topic from different angles through expert talks and reflections, followed by discussions. The expected outcome of the workshop is the development of, and work on, an agenda for accessible and inclusive mobility in the age of automated vehicles.

Genie vs. Jarvis: Characteristics and Design Considerations of In-Vehicle Intelligent Agents

  • Manhua Wang
  • Philipp Hock
  • Seul Chan Lee
  • Martin Baumann
  • Myounghoon Jeon

Intelligent agents (IAs) have been widely used at home and have gradually been introduced into driving contexts. While many studies have researched agent features and their influences on user perception of in-vehicle agents (IVAs), what attributes make IVAs unique and how people perceive them differently from at-home agents remain unclear. Therefore, the proposed workshop aims to open a discussion among researchers and practitioners worldwide to contribute insights to a list of characteristics and design considerations for in-vehicle intelligent agents. Features specific to IVAs will also be discussed in the workshop, along with preferences for agent form. We expect to extract innovative research and design considerations that benefit IVA research and the AutomotiveUI community.

Workshop on Human-Vehicle-Environment Cooperation in Automated driving: The Next Stage of a Classic Topic

  • Chao Wang
  • Marcel Usai
  • Jingyi Li
  • Martin Baumann
  • Frank Flemisch

It appears that autonomous systems are replacing humans in the driving task. However, autonomous driving abilities do not mean that vehicles should no longer interact with their drivers/passengers or their environment. There are still many scenarios in which either the automated system cannot handle the driving very well, or humans want to spontaneously influence the behavior of the system to meet their preferences. Thus, beyond the hype of autonomous driving, a large space opens for human-vehicle cooperation at different levels of automated driving. As this topic draws more attention from both academia and industry, we organize this workshop to identify in depth its potential research opportunities under the latest automated driving technology. In this workshop, participants will discuss the motivations for driver/passenger intervention, generate use cases of cooperative driving, and explore means of cooperation and interaction through which human and vehicle can exchange intent smoothly. It is expected that the workshop will consolidate existing knowledge of human-vehicle-environment cooperation and provide insight for future work.

SESSION: Video Demos

Advancing In-vehicle Gesture Interactions with Adaptive Hand-Recognition and Auditory Displays

  • Moustafa Tabbarah
  • Yusheng Cao
  • Yi Liu
  • Myounghoon Jeon

Competition for visual attention in vehicles has increased with the integration of touch-based interfaces, which has led to an increased crash risk. To mitigate this visual distraction, we designed an in-vehicle gesture-based menu system with different auditory feedback types and hand-recognition systems. We are conducting an experiment using a driving simulator in which the participant performs a secondary task of selecting a menu item. Three auditory feedback types are tested in addition to the baseline condition (no audio): auditory icons, earcons, and spearcons. For each type of auditory display, two hand-recognition systems are tested: fixed and adaptive. We expect to reduce the driver’s secondary-task workload while minimizing off-road glances for safety. Our experiment would contribute to the existing literature in multimodal signal processing, confirming the Multiple Resource Theory. It would also present practical design guidelines for auditory feedback in gesture-based in-vehicle interactions.

Acceptance is in the Eye of the Stakeholder: Gathering the Needs for Automated Road Transport Logistics

  • Jelena Rosic
  • Florian Hammer
  • Michael Gafert
  • Peter Fröhlich

The transport logistics sector is expected to be a promising ground for the roll-out and business integration of automated vehicles. Within this transition towards driverless vehicles, fleet management and control interventions will have to be taken over by logistics personnel. However, so far, the specific needs and expectations of the involved stakeholders towards automated road transport logistics have only been analyzed to a limited degree, and consequently there is as yet no systematic approach to designing corresponding user interfaces. This demo video highlights the requirements-gathering activities within the project AWARD, which investigates and develops all-weather autonomous real logistics operations and demonstrations. The demo video introduces the different perspectives of the involved stakeholder groups, and it illustrates the addressed use cases and operational scenarios. The derived acceptance-factors model and first impressions of preliminary results are provided.

Eye-Gaze Analysis of HUD Interventions for Conditional Automation to Increase Situation Awareness

  • Michael A. Gerber
  • Ronald Schroeter
  • Daniel Johnson
  • Andry Rakotonirainy

Automated driving seems promising for reducing crashes caused by human error. However, in the transition towards automated driving, a human is still required at some automation levels in some circumstances. Specifically, in conditional automation, or SAE Level 3, a human needs to be able to continue the driving task any time the vehicle requests it. This means that throughout L3 automated driving, this “fallback-ready user” needs to remain in a state to continue driving, even when engaged in other tasks, such as watching a movie. We designed three interventions with the aim of increasing fallback-readiness and tested them in a high-fidelity video driving simulation study. In this video, we present and describe the interventions, the study design, and the setup used to test the interventions.

Designing for Prediction-Level Collaboration Between a Human Driver and an Automated Driving System

  • Chao Wang
  • Thomas H. Weisswange
  • Matti Krüger

Although automated driving (AD) systems have progressed fast in recent years, there are still various corner cases that such systems cannot handle well, especially predicting the behavior of surrounding traffic. This may result in discomfort or even dangerous situations. Results from a previous Wizard-of-Oz study suggest that collaboration between human and system at the prediction level can effectively enhance the experience and comfort of automated driving. For an in-depth investigation of the confluence between AD system and driver, a prototype was implemented in a driving simulator driven by a functional AD system that has been partially validated on public roads. Furthermore, we designed and implemented a gaze-button input for intuitive vehicle referencing and a graphical user interface (GUI) for enhancing the explainability of the AD system. Three typical driving scenarios in which an AD system could take advantage of the human driver’s anticipation to drive in a more comfortable and personalized way were created for subsequent evaluation.

Multimodal Trip Support with an Autonomous Vehicle and Autonomous Robots

  • Takashi Matsumoto
  • Kazuyuki Yoneyama
  • Fuku Himuro
  • Naotaka Ikutomi
  • Michihito Shiraishi

In this research, we designed multimodal trip support combining autonomous vehicles and autonomous robots. The video introduces a scenario using a prototype of the system, which enables a user to request a vehicle ride and service robots at the same time. The vehicle takes the user from their location to another building. Robots take the user and their luggage from the vehicle drop-off point to a destination inside the building. The request is made on the user's personal device via a web application. In addition to the device, multimodal user interfaces such as the robots' HMIs (human-machine interfaces) are used for user interaction to provide a seamless trip experience. The background system is connected via a network to the fleet management systems of the vehicles and robots and to building facilities, such as elevators and cameras. By providing facility-vehicle and facility-robot interactions, the system supports smooth automated operation of the vehicle and the robots along the way.

Don't Worry, I'm in Control! Is Users’ Trust in Automated Driving Different When Using a Continuous Ambient Light HMI Compared to an Auditory HMI?

  • Tyron Louw
  • Ruth Madigan
  • Yee Mun Lee
  • Cinzia De Marco
  • Jorge Lorente Mallada
  • Natasha Merat

Ambient LED displays have been used to provide peripheral light-based cues to drivers about a vehicle's current state, as well as requests for the driver's attention or action. However, few studies have investigated the use of an ambient LED display to improve drivers' trust, perceived safety, and reactions during L3 automated driving. Given the ambient nature of an LED lightband display, it can be anticipated to provide reassurance of the automation status while automation is engaged, along with a gentle cue for non-urgent transitions of control. This video submission presents a methodological overview of a driving simulator study designed to evaluate the effectiveness of an ambient peripheral light display (Lightband HMI) in terms of its potential to improve drivers' trust in L3 automation, along with a comparison of the Lightband and an Auditory HMI in terms of their effectiveness in facilitating transitions of control.

“To Go or Not To Go? That is the Question”: When In-Vehicle Agents Argue with Each Other

  • Seul Chan Lee
  • Seona Jeong
  • Manhua Wang
  • Philipp Hock
  • Martin Baumann
  • Myounghoon Jeon

Intelligent agents (IAs) are being widely adopted in our daily lives, and much research on the design of communication with IAs has been conducted. However, almost all of this research focuses on the interaction between a human operator and a single agent. As more and more IAs come into use, it becomes likely that two or more IAs will coexist. In a driving context, what happens if in-vehicle agents (IVAs) and other IAs coexist and offer conflicting suggestions or responses? We are interested in answering the research questions raised by this situation. As a first step, we developed scenarios and produced a video to embody futuristic situations in which a user interacts with multiple IAs. We expect this effort to draw attention to the topic, and the video can be used further to explore the related research questions.

The Driving Experience Lab: Simulating the Automotive Future in a Trailer

  • Clemens Schartmüller
  • Andreas Riener

Driving simulators are typically used to evaluate next-generation automotive user interfaces in user studies, as they offer a replicable driving setting that also allows safety-critical and/or future systems to be studied. However, this AutomotiveUI experience research is often limited to university or company campuses and their students and staff. To address this, we introduced a mobile driving simulator lab housed in a car trailer. We present the features as well as the limitations of this lab, report on experiences from its first days of operation, and discuss further use cases beyond research. During 7 days of user studies with the trailer at a national garden festival, we conducted trials with more than 70 participants from diverse backgrounds. However, executing studies at public events also has its limitations, e.g., on accepted trial duration and the potential for biased responses.

Mixed Reality Environment for Testing Automated Vehicle and Pedestrian Interaction

  • Maikol Funk Drechsler
  • Jakob Benedikt Peintner
  • Georg Seifert
  • Werner Huber
  • Andreas Riener

The testing and development of automated driving systems is usually realized through scenario-based testing or virtual testing environments. These methods apply artificial targets to trigger safety-critical functions under specific predefined scenarios, such as those in the NCAP or IIHS test catalogues. Despite offering good reproducibility, these approaches hardly permit the evaluation of new interaction concepts such as external human-machine interfaces (eHMIs), since the interaction between real users and the vehicle cannot be realistically reproduced without risks to the participants. The novel Mixed Reality Test Environment (MiRE) overcomes this limitation through the integration of Virtual Reality (VR) technologies, Dynamic Vehicle-in-the-Loop (DynViL), and a virtual environment. In MiRE, the movement and positioning of the vehicles and the vulnerable road users (VRUs) are tracked in real time and reproduced in the virtual environment. Synthetic data from the virtual environment is generated to stimulate both the vehicle and the human participant, enabling safe interaction between the two.

An Anthropological Study Designed to Understand the Essence of Intention Sharing Between Drivers and Passengers

  • Mohammad Faramarzian
  • Jorge Pardo
  • Ronald Schroeter
  • Wendy Ju
  • Andry Rakotonirainy
  • Ilan Mandel
  • Xiaomeng Li
  • Sebastien Glaser

This study focuses on human driving behaviour to identify key non-verbal cues which may inform a passenger of the driver’s intentions. An anthropological inquiry, supported by live remote field observations and follow-up interviews, aims to understand a) the nuances and mechanisms, i.e. intention cues, through which human drivers consciously or unconsciously convey their driving intention, b) how passengers recognise and interpret those intention cues, and c) the role that the clarity, ambiguity or absence of these cues may play in passenger comfort or trust. Lastly, this research designs a live remote observation protocol to analyse the exchange of subtle intention cues between driver-passenger pairs during the driving task.