Proceedings

The full proceedings and the adjunct proceedings of the Auto-UI 2019 conference will be distributed at the conference. In addition, both proceedings can be downloaded free of charge from the ACM Digital Library for a period of one month after the start of the conference.

You can find the table of contents and individual papers of the proceedings of the 11th International Conference on Automotive User Interfaces and Interactive Vehicular Applications via the following links:


TOC - Main Proceedings

InCarAR: A Design Space Towards 3D Augmented Reality Applications in Vehicles

  • Gesa Wiegand
  • Christian Mai
  • Kai Holländer
  • Heinrich Hussmann

Advances in vehicle automation and the resulting change of the interior of cars lead to new challenges for user interface concepts. Augmented reality (AR) is a promising solution for the emerging design needs due to its diverse opportunities for user interaction and presenting information. This paper supports the development of novel AR applications. We describe a corresponding use case set consisting of 98 examples from a literature review and two focus groups. Based on these samples we present a design space for in-car AR applications. To demonstrate the benefit thereof, we show a fictional design process including our proposed design space to derive a custom AR system. This work supports designers and engineers by providing a systematic approach for integrating 3D AR interfaces in a vehicle, excluding windshields and windows.

A Video-Based Automated Driving Simulator for Automotive UI Prototyping, UX and Behaviour Research

  • Michael A. Gerber
  • Ronald Schroeter
  • Julia Vehns

The lack of automated cars above SAE level 3 raises challenges for conducting User Experience Design (UXD) and behaviour research for automated driving. User-centred methods are critical to ensuring a human-friendly progress of vehicle automation. This work introduces the Immersive Video-based Automated Driving (IVAD) Simulator. It uses carefully recorded 180/360° videos that are played back in a driving simulator. This provides immersive driving experiences in visually realistic and familiar environments. This paper reports learnings from an iterative development of IVAD, and findings of two user studies: a simulator study (N=15) focused on the immersive experience, and a VR study (N=16) focused on rapid prototyping and the evaluation of Augmented Reality (AR) concepts. Overall, we found the method to be a useful, versatile and low-budget UXD tool with a high level of immersion that is uniquely aided by the familiarity of the environment. IVAD's limitations and future improvements are discussed in relation to research applications within AutoUI.

Exploring the Future Experience of Automated "Valet Parking" - a User Enactment

  • Robin Neuhaus
  • Eva Lenz
  • Shadan Sadeghian Borojeni
  • Marc Hassenzahl

While the general debate about the potential of automated vehicles is pervasive, less is known about how people experience those vehicles in everyday life. To this end, we studied the experiential consequences of a speculative automated "valet parking" service. We designed three variants and confronted participants with them "in the wild." The automation replaced practical worries about how to navigate a parking garage with unease about the safety of the car, which clearly limited potential gains in positive experiences. The unease could be counteracted by providing feedback about the car itself and the process. This created the impression that the car is properly cared for. In addition, while parking in garages in itself was not especially enjoyable, it played an important role in a number of related positive practices (e.g., shopping routines). To ensure acceptance, automated systems need to carefully address the question of how those systems become embedded in everyday life.

The Energy Interface Challenge. Towards Designing Effective Energy Efficiency Interfaces for Electric Vehicles

  • Thomas Franke
  • Daniel Görges
  • Matthias G. Arend

The design of effective energy interfaces for electric vehicles needs an integrated perspective on the technical and psychological factors that together establish real-world vehicle energy efficiency. The objective of the present research was to provide a transdisciplinary synthesis of key factors for the design of energy interfaces for battery electric vehicles (BEVs) that effectively support drivers in their eco-driving efforts. While previous research tends to concentrate on the (visual) representation of common energy efficiency measures, we focus on the design of action-integrated metrics and indicators for vehicle energy efficiency that account for the perceptual capacities and bounded rationality of drivers. Based on this rationale, we propose energy interface examples for the most basic driving maneuvers (acceleration, constant driving, deceleration) and discuss challenges and opportunities of these design solutions.

To Please in a Pod: Employing an Anthropomorphic Agent-Interlocutor to Enhance Trust and User Experience in an Autonomous, Self-Driving Vehicle

  • David R. Large
  • Kyle Harrington
  • Gary Burnett
  • Jacob Luton
  • Peter Thomas
  • Pete Bennett

Recognising that one of the aims of conversation is to build, maintain and strengthen positive relationships with others, the study explores whether passengers in an autonomous vehicle display similar behaviour during transactions with an on-board conversational agent-interface; moreover, whether related attributes (e.g. trust) transcend to the vehicle itself. Employing a counterbalanced, within-subjects design, thirty-four participants were transported in a self-driving pod within an expansive testing arena. Participants undertook three journeys with an anthropomorphic agent-interlocutor (via Wizard-of-Oz), a voice-command interface, or a traditional touch-surface; each delivered equivalent task-related information. Results show that the agent-interlocutor was the most preferred interface, attracting the highest ratings of trust, and significantly enhancing the pleasure and sense of control over the journey experience, despite the inclusion of 'trust challenges' as part of the design. The findings can help support the design and development of in-vehicle agent-based voice interfaces to enhance trust and user experience in autonomous cars.

Exploratory Analysis of the Research Literature on Evaluation of In-Vehicle Systems

  • Lukas Lamm
  • Christian Wolff

An exploratory literature review method was applied to publications from several sources on Human-Computer Interaction (HCI) for In-Vehicle Information Systems (IVIS). The novel approach for bibliographic classification uses a graph database to investigate connections between authors, papers, used methods, and investigated interface types. This allows the application of algorithms to find similarities between different publications and overlaps between different usability evaluation methods. Through community detection algorithms, the publications can be clustered based on similarity relationships. For the proposed approach, several thousand papers were systematically filtered, classified, and stored in a graph database. The survey shows a trend towards usability assessment methods with direct involvement of users, especially the observation of users and performance-related measurements, as well as questionnaires and interviews. However, methods usually applied in early stages of development and based on assessment through models or experts, as well as collaborative and creativity methods, do not seem very popular in automotive HCI research.
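
The clustering idea sketched above can be illustrated with a small, purely hypothetical example: papers stored as graph nodes, connected when they report overlapping evaluation methods, and grouped by a modularity-based community detection algorithm. The toy data, edge weighting, and choice of networkx routine below are assumptions for illustration, not the authors' actual database or pipeline.

    # Minimal sketch: papers as nodes, linked by shared usability methods,
    # then clustered with a modularity-based community detection routine.
    from itertools import combinations

    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    # Hypothetical method annotations for a handful of papers
    papers = {
        "paper_A": {"questionnaire", "interview"},
        "paper_B": {"questionnaire", "eye_tracking"},
        "paper_C": {"expert_review", "observation"},
        "paper_D": {"interview", "observation"},
    }

    G = nx.Graph()
    G.add_nodes_from(papers)

    # Edge weight = number of evaluation methods two papers share
    for (p1, m1), (p2, m2) in combinations(papers.items(), 2):
        shared = len(m1 & m2)
        if shared:
            G.add_edge(p1, p2, weight=shared)

    # Community detection groups papers with similar method profiles
    for i, cluster in enumerate(greedy_modularity_communities(G, weight="weight")):
        print(f"cluster {i}: {sorted(cluster)}")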

From Manual Driving to Automated Driving: A Review of 10 Years of AutoUI

  • Jackie Ayoub
  • Feng Zhou
  • Shan Bao
  • X. Jessie Yang

This paper gives an overview of the ten-year development of the papers presented at the International ACM Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutoUI) from 2009 to 2018. We categorize the topics into two main groups, namely, manual driving-related research and automated driving-related research. Within manual driving, we mainly focus on studies on user interfaces (UIs), driver states, augmented reality and head-up displays, and methodology; within automated driving, we discuss topics such as takeover, acceptance and trust, interacting with road users, UIs, and methodology. We also discuss the main challenges and future directions for AutoUI and offer a roadmap for research in this area.

Effectiveness of Red-Light Running Countermeasures: A Systematic Review

  • Sardar Elias
  • Moojan Ghafurian
  • Siby Samuel

This paper presents a systematic review of the literature on the effectiveness of engineering countermeasures at reducing unintentional Red-Light Running (RLR) violations and improving safety at traffic intersections. 26 relevant studies on countdown timers, pavement markings, signal operations, advance warning systems, and in-vehicle warning systems are discussed and their results are summarized. While all countermeasures demonstrated varying levels of effectiveness, in-vehicle warning systems that provided audio and/or visual feedback to drivers were found to be the most promising in lowering RLR rates, with studies showing RLR reductions of 84.3%, collision rate reductions of 37%, lower RLR probability, and lower risks of crashes. Limitations of each countermeasure are discussed and research shortcomings are indicated. Further areas of potential advancement are highlighted, and refinements of countermeasures are proposed to improve their effectiveness in reducing RLR violations and improving intersection safety.

Towards Opt-Out Permission Policies to Maximize the Use of Automated Driving

  • Philipp Hock
  • Franziska Babel
  • Johannes Kraus
  • Enrico Rukzio
  • Martin Baumann

Automated driving has the potential to reduce road fatalities. However, public opinion about using automated driving can be described as skeptical. To increase the use of automated driving features, we investigate the persuasion principle of opt-out permission policies for enabling the automation, meaning that the automation is enabled automatically unless users veto. In a driving simulator study (n = 19), participants drove on three different tracks (city, highway, rural). Three different interface concepts (opt-out, opt-in, control) were examined regarding their effects on automation use, trust, and acceptance. We found that an opt-out activation policy may increase automation usage for some participants. However, opt-out was perceived as more persuasive and more patronizing than the other conditions. Most importantly, opt-out can lead to mode confusion and therefore to dangerous situations. When such an opt-out policy is used in an automated vehicle, mode confusion must be addressed.

The Insurer's Paradox: About Liability, the Need for Accident Data, and Legal Hurdles for Automated Driving

  • Alexander G. Mirnig
  • Rod McCall
  • Alexander Meschtscherjakov
  • Manfred Tscheligi

In light of recent incidents, it has become increasingly relevant to determine who is responsible in case of accidents involving automated vehicles. In this paper, we investigate the question of liability in automated vehicles of SAE levels 3 and above. We claim that there is a mismatch between current liability practices, where a designated driver is usually held responsible, and future perspectives, where the human increasingly assumes a passive, passenger-like role. Our claims are supported by the results from an interview study with insurance companies from two European countries. We show that insurers lack sufficient data to make informed decisions on how to apportion liability in SAE level 3+ scenarios. We discuss how these considerations have to be reflected in interfaces for the driver in order to make the legal status transparent for the driver.

Advanced Driver Assistance Systems for Aging Drivers: Insights on 65+ Drivers' Acceptance of and Intention to Use ADAS

  • Hanna Braun
  • Magdalena Gärtner
  • Sandra Trösterer
  • Lars E. M. Akkermans
  • Marije Seinen
  • Alexander Meschtscherjakov
  • Manfred Tscheligi

Advanced Driver Assistance Systems (ADAS) aim to increase safety by supporting drivers in the driving task. Older drivers (65+ years) in particular, given the nature of aging, could benefit from these systems. However, little is known about older drivers' acceptance of ADAS in general and how particular acceptance aspects influence their intention to use such systems. To address this research gap, we present results from a large-scale online survey (n=1328) with aging drivers, which was conducted in three European countries in 2019. We identified several demographic and driving-related variables which are significantly related to acceptance. Furthermore, we found that older drivers' intention to use ADAS is most strongly predicted by favorable acceptance aspects (i.e., usefulness, reassurance, and trust), while unfavorable aspects (i.e., annoyance, irritation, and stress) were found to have little to no predictive power. The findings are discussed considering future research directions in this area.

Understanding the Messages Conveyed by Automated Vehicles

  • Yee Mun Lee
  • Ruth Madigan
  • Jorge Garcia
  • Andrew Tomlinson
  • Albert Solernou
  • Richard Romano
  • Gustav Markkula
  • Natasha Merat
  • Jim Uttley

Efficient and safe interactions between automated vehicles and other road users can be supported through external Human-Machine Interfaces (eHMI). The success of these interactions relies on the eHMI signals being adequately understood by other road users. A paired-comparison forced choice task (Task 1), and a 6-point rating task (Task 2) were used to assess the extent to which ten different eHMI signals conveyed three separate messages, 'I am giving way', 'I am in automated mode' and 'I will start moving'. The different eHMI options consisted of variations of a 360° lightband, a single lamp, and an auditory signal. Results demonstrated that the same eHMI format could convey different messages equally well, suggesting a need to be cautious when designing eHMI, to avoid presenting misleading, potentially unsafe, information. Future research should investigate whether the use of an eHMI signal indicating a change in the AV's behaviour is sufficient for conveying intention.

Cooperative Overtaking: Overcoming Automated Vehicles' Obstructed Sensor Range via Driver Help

  • Marcel Walch
  • Marcel Woide
  • Kristin Mühl
  • Martin Baumann
  • Michael Weber

Automated vehicles will eventually operate safely without the need for human supervision and fallback; nevertheless, scenarios will remain that are managed more efficiently by a human driver. A common approach to overcoming such weaknesses is to shift control to the driver. Control transitions are challenging due to human factors issues like post-automation behavior changes. We thus investigated cooperative overtaking, wherein driver and vehicle complement each other: drivers support the vehicle in perceiving the traffic scene and decide when to execute a maneuver, whereas the system steers. We explored two maneuver approval and cancel techniques on touchscreens and show that cooperative overtaking is feasible: both interaction techniques provide good usability and were preferred over manual maneuver execution. However, participants disregarded rear traffic in more complex situations. Consequently, system weaknesses can be overcome with cooperation, but drivers should be assisted by an adaptive system.

Owner Manuals Review and Taxonomy of ADAS Limitations in Partially Automated Vehicles

  • Marine Capallera
  • Quentin Meteier
  • Emmanuel de Salis
  • Leonardo Angelini
  • Stefano Carrino
  • Omar Abou Khaled
  • Elena Mugellini

In the context of highly automated driving, the driver has to be aware of driving risks and to take over control of the car in hazardous situations. The goal of this paper is to categorize and analyze the factors that lead to such critical scenarios. To this purpose, we analyzed the limitations of Advanced Driver-Assistance Systems (ADAS) extracted from the owner manuals of 12 partially automated cars available on the market. A taxonomy with 6 macro-categories and 26 micro-categories is proposed to classify and better understand the limitations of these vehicles. We also investigated whether these limitations are conveyed to the driver through Human-Machine Interaction (HMI) in the car. Some suggestions are made to better communicate these limitations to the driver in order to raise his/her situation awareness.

Voices in Self-Driving Cars Should be Assertive to More Quickly Grab a Distracted Driver's Attention

  • Priscilla N. Y. Wong
  • Duncan P. Brumby
  • Harsha Vardhan Ramesh Babu
  • Kota Kobayashi

Automated driving will mean that people can engage in other activities and an important concern will be how to alert the driver to critical events that require their intervention. This study evaluates how various levels of assertiveness of voice command in a semi-AV and different degrees of immersion of a non-driving task may affect people's attention on the road. In a simulated set-up, 20 participants were required to execute actions on the steering wheel when a voice command was given while playing a mobile game. Regardless of how immersed the driver was in the game, a more assertive voice resulted in faster reaction time to the instructions and was perceived as more urgent than a less assertive voice. Automotive systems should use an assertive voice to effectively grab people's attention. This is effective even when they are engaged in an immersive secondary task.

No Risk No Trust: Investigating Perceived Risk in Highly Automated Driving

  • Mengyao Li
  • Brittany E. Holthausen
  • Rachel E. Stuck
  • Bruce N. Walker

When evaluating drivers' trust in automated systems, perceived risk is an inevitable, yet underestimated component, especially during initial interaction. We designed two experimental studies focusing on how people assess risk in different driving environments and how introductory information about automation reliability influences trust and risk perception. First, we designed nine driving scenarios to determine which factors influence Perceived Situational Risk (PSR) and Perceived Relational Risk (PRR). Results showed that participants identified levels of risk based on traffic type and vehicles' abnormal behaviors. We then evaluated how introductory information and situational risk influence trust and PRR. Results showed that participants reported the highest level of trust, perceived automation reliability, and the lowest level of PRR when presented with information about a highly reliable system, and when driving in a low-risk situation. These results highlight the importance of incorporating perceived risk and introductory information to support the trust calibration in automated vehicles.

Teleoperation: The Holy Grail to Solve Problems of Automated Driving? Sure, but Latency Matters

  • Stefan Neumeier
  • Philipp Wintersberger
  • Anna-Katharina Frison
  • Armin Becher
  • Christian Facchi
  • Andreas Riener

In the domain of automated driving, numerous (technological) problems have been solved in recent years, but many limitations remain that could eventually prevent the deployment of automated driving systems (ADS) beyond SAE level 3. A remote operating fallback authority might be a promising solution. In order for teleoperation to function reliably and universally, it will make use of existing infrastructure, such as cellular networks. Unfortunately, cellular networks might suffer from variable performance. In this work, we investigate the effects of latency on task performance and perceived workload for different driving scenarios. Results from a simulator study (N=28) suggest that latency has a negative influence on driving performance and subjective factors and led to decreased confidence in Teleoperated Driving during the study. A latency of about 300 ms already led to deteriorated driving performance, whereas variable latency did not consistently deteriorate driving performance.

Who Has The Right of Way, Automated Vehicles or Drivers?: Multiple Perspectives in Safety, Negotiation and Trust

  • Priscilla N. Y. Wong

Public opinion suggests that it is still unclear how people will react when automated vehicles (AVs) emerge on the roads. Fatal accidents involving AVs have received wide media attention, possibly disproportionate to their frequency. How does the framing of such stories affect public perceptions of AVs? Few drivers have encountered AVs, but how do they imagine interacting with AVs in the near future? This survey study with 600 UK and Hong Kong drivers addressed these two questions. After reading news 'vignettes' reporting an imagined car crash, respondents presented with subjective information perceived AVs as less safe than those presented with factual information. We draw implications for news media framing effects to counter negative newsflow with factual information. Respondents were also presented with an imagined interaction with human-driven vehicles and AVs and did not differentiate between the two. Results for other variables, e.g., first- and third-person framing and cultural differences, are also reported.

Overtrust in External Cues of Automated Vehicles: An Experimental Investigation

  • Kai Holländer
  • Philipp Wintersberger
  • Andreas Butz

The intentions of an automated vehicle are hard to spot in the absence of eye contact with a driver or other established means of communication. External car displays have been proposed as a solution, but what if they malfunction or display misleading information? How will this influence pedestrians' trust in the vehicle? To investigate these questions, we conducted a between-subjects study in Virtual Reality (N = 18) in which one group was exposed to erroneous displays. Our results show that participants already started with a very high degree of trust. Incorrectly communicated information led to a strong decline in trust and perceived safety, but both recovered very quickly. This was also reflected in participants' road crossing behavior. We found that malfunctions of an external car display motivate users to ignore it and thereby aggravate the effects of overtrust. Therefore, we argue that the design of external communication should avoid misleading information and at the same time prevent the development of overtrust by design.

A Longitudinal Simulator Study to Explore Drivers' Behaviour in Level 3 Automated Vehicles

  • David R. Large
  • Gary Burnett
  • Davide Salanitri
  • Anneka Lawson
  • Elizabeth Box

In a longitudinal study, 49 drivers undertook a commute-style journey, with part of the route supporting level-3 automation, over five consecutive days. Bespoke HMIs were provided to keep drivers in-the-loop during automation, and help them regain situational-awareness (SA) during handovers, in a 2×2 between-subjects design. Drivers demonstrated high levels of trust from the outset, delegating control to the vehicle (when available) and directing attention to their own activities/devices. Ratings of trust and technology acceptance increased during the week -- even following an unexpected, emergency handover on day four -- with the highest ratings recorded on day five. High levels of lateral instability were observed immediately following takeovers, although improvements were noted during the week and following the provision of SA-enhancing hand-over advice. Results demonstrate benefits associated with novel HMI designs to keep drivers in-the-loop and improve takeover performance, as well as the necessity of multiple exposures during the evaluation of future, immersive technologies.

Where Does It Go?: A Study on Visual On-Screen Designs for Exit Management in an Automated Shuttle Bus

  • Alexander G. Mirnig
  • Magdalena Gärtner
  • Vivien Wallner
  • Sandra Trösterer
  • Alexander Meschtscherjakov
  • Manfred Tscheligi

Riding a highly automated bus has the potential to bring about a set of novel challenges for the passenger. As there is no human driver present, there is no one to talk to regarding driving direction, stops, or delays. This lack of a human element is likely to cause a stronger reliance on in-vehicle means of communication, such as displays. In this paper, we present the results from a qualitative study, in which we tested three different on-screen visualizations for passenger information during an automated bus trip. The designs focused primarily on signaling the next stop and the proper time to request the bus to stop in the absence of a human driver. We found that adding geo-spatial details can easily confuse more than help and that the absence of a human driver makes passengers feel more insecure about being able to exit at the right stop. Thus, passengers are less receptive to visual cues signaling upcoming stops and more likely to input stop requests immediately upon leaving the station.

Evaluating Head-Up Displays across Windshield Locations

  • Bethan Hannah Topliss
  • Sanna M. Pampel
  • Gary Burnett
  • Joseph L. Gabbard

Full windshield displays (WSDs) have the potential to present imagery across the windshield. Research on display location has not yet investigated translucent displays at high eccentricities from the driver's forward view. A simulator study (n=26) was conducted aiming to (a) investigate the effects of Head-Up Display (HUD) location across the entire windshield on driving performance, and (b) better understand how the visual demand of complex HUD imagery differs from that of a Head-Down Display (HDD). Lane-keeping was poorer when HUD imagery was furthest from the driver (and for the HDD compared to the HUD). Equally, counts of "unacceptable" driving behaviour were greater for displays furthest from the driver's forward view. Furthermore, drivers preferred HUD imagery that was closer to them. The results indicate that HUD evaluations should account for image location, because of how driver gaze location can impact lateral driving performance.

Fitts Goes Autobahn: Assessing the Visual Demand of Finger-Touch Pointing Tasks in an On-Road Study

  • Sanna M. Pampel
  • Gary Burnett
  • Chrisminder Hare
  • Harpreet Singh
  • Arber Shabani
  • Lee Skrypchuk
  • Alex Mouzakitis

The visual demand of finger-touch based interactions with touch screens has been increasingly modelled using Fitts' Law. With respect to driving, these models facilitate the prediction of mean glance duration and total glance time with an index of difficulty based on target size and location. Strong relationships between measures have been found in the controlled conditions of driving simulators. The present study aimed to validate such models in naturalistic conditions. Nineteen experienced drivers carried out a range of touchscreen button-press tasks in an instrumented car on a UK motorway. In contrast with previous simulator-based work, our on-road data produced much weaker relationships between the index of difficulty and glance times. The model improved by focusing on tasks that required one glance only. Limitations of Fitts' Law in the more complex and dynamic real-world driving environment are discussed, as are the potential drawbacks of driving simulators for conducting visual demand research.
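
For readers less familiar with such models, the glance-time predictions referred to above are typically linear in a Fitts-style index of difficulty. The equations below use the standard Shannon formulation, where D is the distance to the touchscreen target, W its width, and a and b are empirically fitted coefficients; the symbols are shown for orientation only and are not values reported in this paper.

    % Fitts-style glance-time model (standard Shannon formulation;
    % symbols are illustrative, not values from this study)
    \[
      \mathrm{ID} = \log_2\!\left(\frac{D}{W} + 1\right), \qquad
      \overline{T}_{\mathrm{glance}} = a + b \cdot \mathrm{ID}
    \]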

How Should Automated Vehicles Interact with Pedestrians?: A Comparative Analysis of Interaction Concepts in Virtual Reality

  • Andreas Löcken
  • Carmen Golling
  • Andreas Riener

Automated vehicles (AVs) introduce a new challenge to human-computer interaction (HCI): pedestrians are no longer able to communicate with human drivers. Hence, new HCI designs need to fill this gap. This work presents the implementation and comparison of different interaction concepts in virtual reality (VR). They were derived after an analysis of 28 works from research and industry, which were classified into five groups according to their complexity and the type of communication. We implemented one concept per group for a within-subject experiment in VR. For each concept, we varied whether the AV is going to stop and how early it starts to activate its display. We observed effects on safety, trust, and user experience. A good concept displays information on the street, uses unambiguous signals (e.g., green lights), and has high visibility. Additional feedback, such as continuously showing the recognized pedestrian's location, seems to be unnecessary and may cause irritation.

How People Experience Autonomous Intersections: Taking a First-Person Perspective

  • Sven Krome
  • David Goedicke
  • Thomas J. Matarazzo
  • Zimeng Zhu
  • Zhenwei Zhang
  • J. D. Zamfirescu-Pereira
  • Wendy Ju

Top-down simulations of autonomous intersections neglect considerations for the human experience of being in cars driving through these autonomous intersections. To understand the impact that perspective has on perception of autonomous intersections, we conducted a driving simulator experiment and studied the experience in terms of perception, feelings, and pleasure. Based on this data, we discuss experiential factors of autonomous intersections that are perceived as beneficial or detrimental for the future driver. Furthermore, we present what the change of perspective implies for designing intersection models, future in-car interfaces and simulation techniques.

Designing for Projection-based Communication between Autonomous Vehicles and Pedestrians

  • Trung Thanh Nguyen
  • Kai Holländer
  • Marius Hoggenmueller
  • Callum Parker
  • Martin Tomitsch

Recent studies have investigated new approaches for communicating an autonomous vehicle's (AV) intent and awareness to pedestrians. This paper adds to this body of work by presenting the design and evaluation of in-situ projections on the road. Our design combines common traffic light patterns with aesthetic visual elements. We describe the iterative design process and the prototyping methods used in each stage. The final design concept was represented as a virtual reality simulation and evaluated with 18 participants in four different street crossing scenarios, which included three scenarios that simulated various degrees of system errors. We found that different design elements were able to support participants' confidence in their decision even when the AV failed to correctly detect their presence. We also identified elements in our design that needed to be more clearly communicated. Based on these findings, the paper presents a series of design recommendations for projection-based communication between AVs and pedestrians.

The Case for Implicit External Human-Machine Interfaces for Autonomous Vehicles

  • Dylan Moore
  • Rebecca Currano
  • G. Ella Strack
  • David Sirkin

Autonomous vehicles' (AVs) interactions with pedestrians remain an ongoing uncertainty. Several studies have claimed the need for explicit external human-machine interfaces (eHMI) such as lights or displays to replace the lack of eye contact with and explicit gestures from drivers; however, this need is not thoroughly understood. We review literature on explicit and implicit eHMI, and discuss results from a field study with a Wizard-of-Oz driverless vehicle that tested pedestrians' reactions in everyday traffic without explicit eHMI. While some pedestrians were surprised by the vehicle, others did not notice its autonomous nature, and all crossed in front without explicit signaling, suggesting that pedestrians may not need explicit eHMI in routine interactions---the car's implicit eHMI (its motion) may suffice.

Unimodal and Multimodal Signals to Support Control Transitions in Semiautonomous Vehicles

  • Katri Salminen
  • Ahmed Farooq
  • Jussi Rantala
  • Veikko Surakka
  • Roope Raisamo

Semiautonomous driving still requires the driver's control and attention in certain situations. Especially control transitions, i.e. take-over and hand-over situations, are important for safety. Our aim was to study control transitions supported by unimodal (i.e. visual, auditory, or haptic) or multimodal (i.e. visual, auditory and haptic) signals indicating change from manual to autonomous driving and vice versa. The signals were abstract visual blinks, auditory beeps, or haptic vibrations. The task was to take over driving while either looking through the windshield or playing a game. In addition, in half of the control transitions a feedback signal indicated successful control transition. The results showed that a secondary task slowed down the reaction times, but there was a great variation between individuals. In general, the response to auditory signal was slower than to visual, haptic, or multimodal signals. Moreover, users preferred feedback during control transitions but this slowed down the reaction time.

Designing Haptic Effects on an Accelerator Pedal to Support a Positive Eco-Driving Experience

  • Alex de Ruiter
  • Miguel Bruns Alonso

Haptic feedback has frequently been proposed as a means to support eco-driving behaviour. While force and vibrotactile feedback have proven to be effective and safe approaches, no studies were found that assessed the user experience of different feedback designs. We describe the design of six haptic effects which were implemented in a custom-designed accelerator pedal. The user experience of three effects (linear force increase, bump, and pulse) was assessed in a driving simulator and compared to a baseline with no feedback. Results show that the haptic pedal effects were rated positively on attractiveness, dependability, stimulation and novelty. The pulsating effect scored significantly lower on attractiveness and dependability but highest on novelty. Qualitative results suggest that combining a bump and pulse could increase the positive experience of a haptic pedal. Consequently, we argue for more experiential approaches to haptic feedback design in accelerator pedals.

Conveying Uncertainties Using Peripheral Awareness Displays in the Context of Automated Driving

  • Alexander Kunze
  • Stephen J. Summerskill
  • Russell Marshall
  • Ashleigh J. Filtness

As a consequence of insufficient situation awareness and inappropriate trust, operators of highly automated driving systems may be unable to safely perform takeovers following system failures. The communication of system uncertainties has been shown to alleviate these issues by supporting trust calibration. However, the existing approaches rely on information presented in the instrument cluster and therefore require users to regularly shift their attention between road, uncertainty display, and non-driving related tasks. As a result, these displays have the potential to increase workload and the likelihood of missed signals. A driving simulator study was conducted to compare a digital uncertainty display located in the instrument cluster with a peripheral awareness display consisting of a light strip and vibro-tactile seat feedback. The results indicate that the latter display affords users flexibility to direct more attention towards the road prior to critical situations and leads to lower workload scores while improving takeover performance.

Text Comprehension: Heads-Up vs. Auditory Displays: Implications for a Productive Work Environment in SAE Level 3 Automated Vehicles

  • Clemens Schartmüller
  • Klemens Weigl
  • Philipp Wintersberger
  • Andreas Riener
  • Marco Steinhauser

With increasing automation, vehicles could soon become "mobile offices", but traditional user interfaces (UIs) for office work are not optimized for this domain. We hypothesize that productive work will only be feasible in SAE level 3 automated vehicles if UIs are adapted to (A) the operational design domain, and (B) driver-workers' capabilities. Consequently, we studied adapted interfaces for a typical office task (text comprehension) by varying display modality (heads-up reading vs. auditory listening), as well as UI behavior in conjunction with take-over situations (attention-awareness vs. no attention-awareness). Self-ratings, physiological indicators, and objective performance measures in a driving simulator study (N = 32) allowed us to derive implications for a mobile-workspace automated vehicle. Results highlight that heads-up displays promote sequential multi-tasking and thereby reduce workload and improve productivity in comparison to auditory displays, which were nonetheless more attractive to users. Attention-awareness led to reduced stress but delayed driving reactions, consequently requiring further investigation.

AttentivU: Designing EEG and EOG Compatible Glasses for Physiological Sensing and Feedback in the Car

  • Nataliya Kosmyna
  • Caitlin Morris
  • Thanh Nguyen
  • Sebastian Zepf
  • Javier Hernandez
  • Pattie Maes

Several research projects have recently explored the use of physiological sensors such as electroencephalography (EEG) or electrooculography (EOG) to measure the engagement and vigilance of a user in the context of car driving. However, these systems still suffer from limitations such as the absence of a socially acceptable form factor and the use of impractical, gel-based electrodes. We present AttentivU, a device using both EEG and EOG for real-time monitoring of physiological data. The device is designed as a socially acceptable pair of glasses and employs silver electrodes. It also supports real-time delivery of feedback in the form of an auditory signal via a bone conduction speaker embedded in the glasses. A detailed description of the hardware design and proof-of-concept prototype is provided, as well as preliminary data collected from 20 users performing a driving task in a simulator in order to evaluate the signal quality of the physiological data.

Gaze Patterns in Pedestrian Interaction with Vehicles: Towards Effective Design of External Human-Machine Interfaces for Automated Vehicles

  • Debargha Dey
  • Francesco Walker
  • Marieke Martens
  • Jacques Terken

In road-crossing situations involving negotiation with approaching vehicles, pedestrians need to take into account the behavior of the car before making a decision. To investigate the kind of information about the car that pedestrians seek, and where they look for it, we conducted an eye-tracking study with 26 participants and analyzed fixation behavior when interacting with a manually driven vehicle that approached while slowing down and displaying yielding behavior. Results show that a clear pattern of gaze behavior exists for pedestrians looking at a vehicle during road-crossing situations as a function of the vehicle's distance. When the car is far away, pedestrians look at the environment or the road space ahead of the car. As the car approaches, the gaze gradually shifts to its windshield. We conclude by discussing the implications of this insight for the user-centered design of optimal external Human-Machine Interfaces for automated vehicles.

Projection Displays Induce Less Simulator Sickness than Head-Mounted Displays in a Real Vehicle Driving Simulator

  • Tobias M. Benz
  • Bernhard Riedl
  • Lewis L. Chuang

Driving simulators are necessary for evaluating automotive technology for human users. While they can vary in terms of their fidelity, it is essential that users experience minimal simulator sickness and high presence in them. In this paper, we present two experiments that investigate how a virtual driving simulation system could be visually presented within a real vehicle, which moves on a test track but displays a virtual environment. Specifically, we contrasted display presentation of the simulation using either head-mounted displays (HMDs) or fixed displays in the vehicle itself. Overall, we find that fixed displays induced less simulator sickness than HMDs. Neither HMDs nor fixed displays induced a stronger presence in our implementation, even when the field-of-view of the fixed display was extended. We discuss the implications of this, particularly in the context of scenarios that could induce considerable motion sickness, such as testing non-driving activities in automated vehicles.


TOC - Adjunct Proceedings

WORKSHOP SESSION: Workshops

1st workshop on user interfaces for heavy vehicles: let's get to work

  • Markus Wallmyr
  • Taufik Akbar Sitompul
  • Lewis L. Chuang

There are more types of vehicles than the automobile. Many are used for purposes other than transporting passengers or goods. They are often dedicated to enabling the user to perform specific manual tasks in parallel with driving. Such heavy vehicles range from construction vehicles, such as excavators and articulated haulers, to agriculture vehicles, such as tractors and harvesters. They also include speciality vehicles such as lifts and cranes. Recent advances in information technology radically increase their productivity and safety. Moreover, heavy vehicles are increasingly sensor- and software-driven, as well as connected and integrated with information systems. This development creates new interaction challenges and research areas. The aim of this workshop is to gather practitioners, researchers, and professionals who wish to explore the potential opportunities, identify research challenges, and innovate in the domain of heavy vehicles.

Localization vs. internationalization: research and practice on autonomous vehicles across different cultures

  • Seul Chan Lee
  • Kristina Stojmenova
  • Jaka Sodnik
  • Ronald Schroeter
  • JaeKon Shin
  • Myounghoon Jeon

The AutoUI conference is the premier forum for user interface research in the automotive domain, annually bringing together over 200 researchers and practitioners interested in both the technical and the human aspects of in-vehicle user interfaces and applications. However, over 80% of its published papers come from only five countries in western Europe and North America. Considering the importance and valuable impact this conference has had on the research and development of HMI (Human-Machine Interface) and automated systems in recent years, this raises the need for greater diversity and inclusion of researchers and practitioners from other continents. The goal of this workshop is to bring together researchers, practitioners, experts, and students from different research backgrounds, influenced by or influencing the automotive domain, and discuss the cross-cultural differences in driving behaviors and infrastructure, which is an essential prerequisite for future vehicle systems and driving safety.

Third workshop on trust in automation: how does trust influence interaction

  • Brittany E. Holthausen
  • Philipp Wintersberger
  • Zoe Becerra
  • Alexander G. Mirnig
  • Alexander Kunze
  • Bruce N. Walker

Properly calibrated trust in automation is a key issue for a successful implementation of automated vehicle technology. Recent research and investigations of accidents involving automated driving systems have shown that drivers have difficulties adjusting their trust levels appropriately to system performance criteria, which is a key requirement for trust calibration [7]. Whereas previous editions of this workshop have concentrated on suitable definitions, measurements, and factors influencing trust, this year's edition shifts the focus to the question: How does trust interact with and influence other latent constructs, such as risk behavior, situation awareness, or users' willingness to engage in non-driving related tasks? The workshop thereby welcomes both experts and young researchers who already conduct or want to conduct research in this timely area, with the aim of developing concrete research programs and experimental designs that close existing knowledge gaps and allow further progress in the domain of trust calibration.

2nd workshop on user interfaces for public transport vehicles: interacting with automation

  • Peter Fröhlich
  • Matthias Baldauf
  • Alexander G. Mirnig

Automation is increasingly gaining traction not only for individual but also for public transportation, especially in the last-mile sector. With no human driver at the helm, there is a need for adequate interaction replacements for passenger and roadside information, not only while the bus is in transit but also before and during boarding. This workshop is intended to address these needs by exploring this design space in a hands-on setting. The expected outcome of the workshop is a set of interaction scenarios, design concepts and future challenges. These should serve as a basis for ongoing research and development in the field.

Simulator showdown: pitch your virtual ride

  • Sven Krome
  • Eric Deng
  • David Goedicke
  • Wendy Ju
  • Ignacio Alverez
  • Jaka Sodnik
  • Andrew Veit
  • Francesco Grani

With autonomous driving on the horizon, new research challenges have appeared and, subsequently, new methods and research instruments have become necessary. To adapt to these emerging research questions, driving simulators, the cornerstone of automotive human factors research, have been tweaked, modified or developed from scratch. This one-day workshop invites academics and practitioners to report, demo or discuss their solutions for simulating the next wave of automotive interaction research. The goal of this workshop is twofold: (1) we provide a forum for researchers focusing on simulator software and discuss opportunities for a future collaboration platform for sharing and co-developing simulator software; (2) we collect and discuss the needs, expectations and solutions of the automotive UI community to articulate a road map for developing future driving simulator setups.

Workshop on explainable AI in automated driving: a user-centered interaction approach

  • Quentin Meteier
  • Marine Capallera
  • Leonardo Angelini
  • Elena Mugellini
  • Omar Abou Khaled
  • Stefano Carrino
  • Emmanuel De Salis
  • Stéphane Galland
  • Susanne Boll

With the increasing use of automation, users tend to delegate more tasks to the machines. Such complex systems are usually developed with "black box" Artificial Intelligence (AI), which makes these systems difficult to understand for the user. This assumption is particularly true in the field of automated driving since the level of automation is constantly increasing via the use of state-of-the-art AI solutions. We believe it is important to investigate the field of Explainable AI (XAI) in the context of automated driving since interpretability and transparency are key factors for increasing trust and security. In this workshop, we aim at gathering researchers and industry practitioners from different fields to brainstorm about XAI with a special focus on human-vehicle interaction. Questions like "what kind of explanation do we need", "which is the best trade-off between performance and explainability" and "how granular should the explanations be" will be addressed in this workshop.

MRV 2019: 3rd workshop on mixed reality for intelligent vehicles

  • Andreas Riegler
  • Andrew L. Kun
  • Stephen Brewster
  • Andreas Riener
  • Joe Gabbard
  • Carolin Wienrich

With the increasing development of Augmented reality (AR), the number of its purposes and applications in vehicles rises. Augmented reality may help to increase road safety, support more immersive (non-) driving related activities, and finally enhance driving and passenger experience. AR may also be the enabling technology to increase trust and acceptance in automated vehicles and therefore help on the transition towards automated driving. Further, automated driving extends use cases of augmented and other immersive technologies. However, there are still a number of challenges with the use of augmented reality when applied in vehicles, and also several human factors issues need to be solved. Additionally, Virtual reality (VR) has the potential to simulate AR applications for HCI research. In this workshop, we will discuss potentials and constraints as well as impact, role, and adequacy of AR and VR (mixed reality, MR) in driving applications and simulations. The primary goal of this workshop is to define a research agenda for the use of MR in intelligent vehicles within the next 3 to 5 years and beyond.

Wizards of WoZ: using controlled and field studies to evaluate AV-pedestrian interactions

  • Dylan Moore
  • Rebecca Currano
  • David Sirkin
  • Azra Habibovic
  • Victor Malmsten Lundgren
  • Debargha (Dave) Dey
  • Kai Holländer

Interactions between autonomous vehicles (AV) and pedestrians remain an ongoing area of research within the AutoUI community and beyond. Given the challenge of conducting studies to understand and prototype these interactions, we propose a combined full-day workshop and tutorial on how to conduct field experiments and controlled experiments using Wizard-of-Oz (WoZ) protocols. We will discuss strengths and weaknesses of these approaches based on practical experiences and describe challenges we have faced. After diving into the intricacies of different experiment designs, we will encourage participants to engage in hands-on exercises that will explore new ways to answer future research questions.

The embodied vehicle

  • Matti Krüger
  • Bruce N. Walker
  • Lewis Chuang

Driver assistance system development commonly targets a substitution of driver responsibilities. Such a substitutive approach, which ignores the potential of utilizing available human resources, has at least two downsides: 1. The human mind has not evolved to stay idle, such that people tend to engage in secondary tasks and disengage from the driving task when not needed, causing out-of-the-loop effects. 2. People have strengths in domains that have not yet been mastered artificially and which could substantially improve human-machine systems. This workshop will focus on a different approach that emphasizes strengthening the link between vehicle and driver. Guided by the term "the embodied vehicle", we will draft and discuss options for achieving a closer human-vehicle integration and try to identify possible implications of the developed approaches. The workshop aims to connect practitioners, researchers, and professionals who wish to further explore this perspective on technology and interface development.

AutoWork 2019: workshop on the future of work and well-being in automated vehicles

  • Andrew L. Kun
  • Orit Shaer
  • Andreas Riener
  • Stephen Brewster
  • Clemens Schartmüller

Automated vehicles will allow users to engage in non-driving activities related to work and well-being. This workshop will explore a number of questions related to human-computer interaction in vehicles with the ultimate goal of allowing users to be productive in automated vehicles, as well as to engage in activities that successfully increase their well-being. Additionally, the organizers will pilot a novel format for hybrid engagement of participants, which will include online activities before and after the workshop, as well as in-person activities at the workshop.

First workshop on attentive and pervasive UI in automated vehicles

  • Philipp Wintersberger
  • Remo van der Heiden
  • Shadan Sadeghian Borojeni
  • Michael A. Gerber
  • Paul Green

In automated vehicles, drivers will shift their attention between side activities and tasks relevant for vehicle control. As random interruptions lead to performance degradation and are a source of stress/anxiety, the use of pervasive, attentive user interfaces could mitigate such effects while maintaining comfort and safety. However, in order to develop such interfaces, a more holistic approach to design is needed. This workshop brings researchers and practitioners together to develop a structured research agenda to determine how attentive user interfaces can support the safe engagement in side activities in automated vehicles. After an open discussion to identify the main research questions, groups will model/prototype interactions using a configurable driving simulator provided by the organizers. Results will be published on the workshop website and should lead to scientific publications and collaborations.

SESSION: Works in progress

Mining consumer complaints to identify unsuccessful interactions with advanced driver assistance systems

  • Lydia Jin
  • Brian C. Tefft
  • William J. Horrey

Advanced driver assistance systems, which warn drivers of the danger of an impending collision or temporarily control the vehicle's speed and/or direction under limited circumstances to assist the driver, have the potential to prevent large numbers of motor vehicle crashes, injuries, and deaths. Maximizing their potential safety benefits requires drivers to use them appropriately. However, drivers might misuse these systems if they overestimate the systems' capabilities. Drivers might also disuse the systems if the systems fail to meet their expectations, diminishing the systems' benefits. This study seeks to identify unsuccessful driver interactions with advanced driver assistance systems by mining the text of a database of consumer complaints about vehicle safety issues. If successful, future work will attempt to classify complaints reflecting faulty mental models versus possible system errors. This information can be used by industry to improve consumer education and driver-vehicle interface design, and by researchers to guide future research needs.
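
As a rough illustration of what such complaint mining could start from, the sketch below screens free-text complaint narratives for co-occurring ADAS-related and failure-related keywords. The file name, column names, and search terms are hypothetical assumptions for illustration, not the study's actual data source or coding scheme.

    # Hypothetical keyword screen over a CSV export of consumer complaints;
    # file name, columns, and terms are illustrative assumptions.
    import pandas as pd

    complaints = pd.read_csv("complaints.csv")  # assumed columns: "id", "narrative"

    adas_terms = r"lane keep|adaptive cruise|automatic emergency braking|blind spot"
    issue_terms = r"did not|failed|unexpected|sudden|without warning"

    # Flag narratives that mention an ADAS feature together with a failure phrase
    mask = (
        complaints["narrative"].str.contains(adas_terms, case=False, na=False)
        & complaints["narrative"].str.contains(issue_terms, case=False, na=False)
    )
    flagged = complaints.loc[mask]
    print(f"{len(flagged)} complaints flagged for manual review")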

Tangible virtual reality in a multi-user environment

  • Konrad Bielecki
  • Marten Bloch
  • Robin Schmidt
  • Daniel López Hernández
  • Marcel Baltzer
  • Frank Flemisch

Thanks to more and more recent technologies, better simulations can be developed, which create a greater fusion between the real and virtual worlds and thus generate a higher degree of immersion. Higher levels of immersion can provide meaningful and realistic results as well as faster integration and assessment of new concepts (e.g., design concepts). For this, a tangible VR simulator based on an armored vehicle driver's workplace has been developed. With this simulator, the user can interact not only with the VR (virtual reality) world but also with reality. Changes in reality simultaneously mean changes in the VR world. Initial concepts for an expansion to several workstations that can interact with each other are also currently being developed.

Analyzing high decibel honking effect on driving behavior using VR and bio-sensors

  • Mayank Agrawal
  • Kavita Vemuri

Honking in traffic is an auditory warning used to indicate a driver's actions, alert pedestrians, and convey an emergency situation requiring right of way. Though studies have shown the importance of honking in maintaining traffic flow, there have been cases where irrational use leads to an increase in stress/irritation with effects on driver decision making. Most drivers are unaware of the adverse health effects of high decibel honking. In this report, we looked at the effects of honking on driving behavior in a lab-developed VR driving simulator and on anxiety/stress as indicated by changes in skin conductance and pulse rate measured using bio-sensors.

Secondary task and situation awareness, a mobile application for conditionally automated vehicles

  • Marine Capallera
  • Emmanuel de Salis
  • Quentin Meteier
  • Leonardo Angelini
  • Stefano Carrino
  • Omar Abou Khaled
  • Elena Mugellini

Autonomous vehicles are developing rapidly and will lead to a significant change in the driver's role: s/he will have to move from the role of actor to the role of supervisor. Indeed, s/he will soon be able to perform a secondary task but must be able to take over control when a critical situation is not managed by the driving system. New interfaces and interactions within the vehicle therefore play an important role. This article describes the design of an application that provides the driver with information about the environment perceived by the vehicle. This application is displayed in split screen on a tablet on which a secondary task can be performed. The results of an initial experiment showed that the participants correctly identified all the factors limiting the proper functioning of the driving system while performing a secondary task on the tablet.

Physical fights back: introducing a model for bridging analog digital interactions

  • Stefan Heijboer
  • Josef Schumann
  • Erik Tempelman
  • Pim Groen

Current transformational developments in automotive user interface (UI) technology are causing a shift in emphasis from safety and efficiency to emotion and flexibility. The many factors to consider in parallel make this a difficult process, in which technological affordances all too easily push the user to the background. To address this issue, this paper introduces an interaction model linking the different tangible control elements, including smartphone functionality, and shows how non-driving-related activities (e.g. climate control, multimedia access) can be represented physically. Next, a working prototype is presented that supports the design and development of novel tactile UIs. By integrating layers of sensors and actuators, a flexible UI is created that pushes technology to the background, giving proper attention to the user again and enabling effective research on how to make the digital world tangible for users.

Switching between augmented reality and a manual-visual task: a preliminary study

  • Nadia Fereydooni
  • Orit Shaer
  • Andrew L. Kun

In this project, we designed an Augmented Reality working space for drivers in a semi-automated vehicle and evaluated users' visual behavior in this space, specifically when switching from the AR non-driving task to a manual-visual task. The results of this preliminary study show that users do not switch between the two provided tasks immediately after they are asked to. One possible explanation for this gradual transition is the timing of the interruption within the subtask. These results also provide a potential path for how to proceed with the study.

A classification framework based on driver's operations of in-car interaction

  • Hao Tan
  • Yaqi Zhou
  • Ruixiang Shen
  • Xiantao Chen
  • Xuning Wang
  • Moli Zhou
  • Daisong Guan
  • Qin Zhang

Advances in driverless technology have reduced the attention drivers must devote to driving, making them more willing to focus on activities that are less relevant to driving and opening a new field of research and practice for in-car human-computer interaction. However, due to the complexity of the driving scenario and the uncertainty of the driver's activity, it is considerably important to design appropriate ways of interaction that meet drivers' emotional appeals from the perspective of driver activity. This paper first builds a driving scenario model based on user activity theory. Secondly, for the purpose of detailing the interaction process, a framework based on the classification of operations is proposed. Finally, based on previous work, a design evaluation method combining users' emotional appeals with interaction attributes under user activity is proposed to better support the choice of interactive solutions.

A tactile interaction concept for in-car passenger infotainment systems

  • Melanie Berger
  • Regina Bernhaupt
  • Bastian Pfleging

Many modern cars offer in-vehicle infotainment systems to enable information and entertainment features. Often, these systems use touchscreen-based interaction concepts, which can be tedious (holding the arm) and imprecise due to the mobile context. In addition, most systems are driver-targeted and neglect the interaction by passengers. In this paper, we therefore investigate the use of an absolute indirect touch interaction concept with tactile feedback to enable passenger interaction with an infotainment system with the goal to ease screen navigation and improve user experience. Results from an experiment (N=18) reveal that this approach performs well regarding usability and user experience for both entertainment and infotainment functions.

Driving behavior model considering driver's over-trust in driving automation system

  • Hailong Liu
  • Toshihiro Hiraoka

As driving automation systems (DAS) at levels one to three become more and more sophisticated, not only will the driver's driving skills be reduced, but the problem of over-trust will also become serious. To prevent over-trust, this paper discusses the following: 1) the definition of over-trust in the DAS, 2) the occurrence conditions and the mechanism of over-trust in the DAS, and 3) a driving behavior model considering the driver's over-trust in the driving automation system.

Shared control and the democratization of driving in autonomous vehicles

  • Emma M. van Zoelen
  • Laure Peeters
  • Sander J. Bos
  • Feng Ye

This paper presents design concepts for multi-user interaction for influencing driving style in autonomous vehicles. Since there is no real 'driver' in autonomous vehicles, it is relevant to look at the role of all the passengers in the car. By democratizing the decision-making process surrounding the driving style of the car, all passengers have the opportunity to contribute to the travelling experience. The presented concept is an exploration aimed at inspiring designers of human-computer interaction in the car to let go of conventional hierarchies between passengers in a car context, which will be especially relevant in designing autonomous vehicles and car-sharing systems.

Designing HMIs for an active safety system on bicycles

  • David Lindström
  • Victor Malmsten Lundgren
  • Jonas Andersson
  • Thanh Bui
  • Azra Habibovic
  • Henrik Clasen
  • Hugo Drakeskär

Radar sensors have been used for active safety in cars for many years. An ongoing research project explores how radar sensors and technology common in automotive vehicles can be transferred for use on bicycles. Workshops have been used to generate ideas, a bicycle simulator is planned for test and evaluation, and tests on a test track have been used to simulate high-risk scenarios. This paper describes the design process of this project, with a focus on the user interface. High-risk scenarios and requirements are identified, followed by identified design challenges and design activities, including evaluation. Ideas for a dual HMI approach, directed towards the bicyclist and towards surrounding traffic, are presented.

User expectations and implications for designing the user experience of shared vehicles

  • Saif A. Al Khamissi
  • Bastian Pfleging

Shared mobility has been one of the main focuses of mobility companies and start-ups in recent years. Although many cities have some form of shared vehicles, it is still a niche market with the potential to become a sustainable business. In this paper, we investigate the limitations and possibilities of shared vehicles to make them more appealing to users. A combination of a literature survey, an online survey, and interviews with individuals and stakeholders provides directions for future solutions. Our findings reveal that current shared vehicle services are still at an early stage, where stakeholders focus on developing frameworks and regulations to deploy current services. However, the future of shared vehicles lies in the experiences provided in and around the vehicle, which might result in the development of new and more specific vehicles for certain users and contexts. The outcome is an invitation to discover and discuss the experiences of shared vehicles.

Factors influencing older adults' acceptance of fully automated vehicles

  • Shabnam Haghzare
  • Katherine Bak
  • Jennifer Campos
  • Alex Mihailidis

Fully Automated Vehicles (FAVs) have the potential to improve older adults' quality-of-life by enhancing their mobility. Such benefits can only be realized if FAVs are acceptable and thus used by older adults. However, older adults' acceptance of FAVs is reported to be the lowest amongst all Levels of Automation (LoA). The current driving simulation-based study provides preliminary insights into the factors that may be associated with older adults' acceptance of FAVs. Such insights can, in turn, inform user-centered FAV designs that are acceptable for older adults and can thereby enable the benefits of using FAVs by this population. Specific associations that were considered were those between older adults' acceptance of FAVs and internal factors characterizing the individuals, external factors characterizing the driving environment, and FAV features.

Where we come from and where we are going: a review of automated driving studies

  • Yannick Forster
  • Anna-Katharina Frison
  • Philipp Wintersberger
  • Viktoria Geisel
  • Sebastian Hergeth
  • Andreas Riener

During the last decade, a large number of studies have investigated driving automation from the Human-Computer Interaction (HCI) perspective. Due to the many possibilities for study design concerning investigated constructs, data collection, and parameters, the pool of findings is at present heterogeneous and nontransparent. This literature review applies a structured approach to reviewing scientific papers that investigated driving automation and allows a statement on the status quo of existing methodological approaches. Most studies focused on safety, followed by trust and acceptance. Driving/Take-Over Request (TOR) performance also accounts for a significant portion; however, many different parameters are investigated. First results identify gaps to be addressed in future studies and allow researchers to investigate known constructs with established methods.

"Why did this voice agent not understand me?": error recovery strategy for in-vehicle voice user interface

  • Jihyun Kim
  • Meuel Jeong
  • Seul Chan Lee

We investigated the effects of error recovery strategies that could enable drivers to recover from non-understanding errors when interacting with an in-vehicle voice user interface (VUI). An experiment using a driving simulator was conducted with forty-seven participants who performed driving tasks with the VUI. One of three different error recovery strategies (ask repeat, re-prompt, and you can say) was offered to recover from the non-understanding errors. A subjective questionnaire and semi-structured interviews were used to collect the participants' workload, perceived reasons for errors, and preference. Results showed that participants felt 'you can say' was more difficult than the 're-prompt' condition. However, preferences for 'ask repeat' and 'you can say' were significantly higher than for 're-prompt', because the perceived reason for the non-understanding was 'input error' when the system used the 're-prompt' method. These findings provide insights into the design of VUIs in the context of driving.

Effect of human-machine cooperation on driving comfort in highly automated steering maneuvers

  • Hiroaki Kuramochi
  • Akira Utsumi
  • Tetsushi Ikeda
  • Yumiko O. Kato
  • Isamu Nagasawa
  • Kazuhiko Takahashi

Automated driving technology is being developed to reduce driver workload and improve driving safety. However, how connected drivers feel to the control of their vehicles can be a critical factor in driving comfort. In this paper, we discuss a highly automated, human-machine cooperative steering system. Our prototype system regulates the driver's steering maneuvers under supervision by the automation and also applies guidance torque for physical interaction between the driver and the automation. To evaluate driving comfort and driver behaviors during the cooperation, experiments were conducted with a driving simulator. The results from 20 participants show that drivers who assertively participated in cooperative steering tended to feel safer and to experience more pleasure than in fully automated-like steering.

Increasing driver awareness through translucency on windshield displays

  • Emma van Amersfoorth
  • Lotte Roefs
  • Quinta Bonekamp
  • Laurent Schuermans
  • Bastian Pfleging

When driving a car, important objects (e.g., pedestrians) are often hidden by surrounding vehicles, which can delay drivers' reaction times. In this paper, we therefore explore how an augmented-reality-enabled windshield display can improve drivers' capabilities. By overlaying what is behind nearby vehicles onto the windshield, these vehicles can be rendered translucent. In a simulator experiment, we evaluate the influence of three levels of opacity on driver and braking behavior. Results indicate a trend of translucency decreasing the required braking time.

Convey situation awareness in conditionally automated driving with a haptic seat

  • Marine Capallera
  • Peïo Barbé-Labarthe
  • Leonardo Angelini
  • Omar Abou Khaled
  • Elena Mugellini

Conditionally automated driving is rapidly evolving, and one of its major issues is the reduction of the driver's attention to her/his environment. After a brief study of interactions that increase situation awareness, and more specifically haptic interactions, this paper proposes the use of vibrations in the seat. Vibrations, by varying their location, frequency, and amplitude, allow various information to be transmitted to the driver, such as the position of obstacles around her/his vehicle and the state of deterioration of track markings. The results of a first exploratory test are promising regarding the use of haptic interactions and pave the way for future experiments.

Effects on user perception of a 'modified' speed experience through in-vehicle virtual reality

  • Yusuke Sakai
  • Toshimitsu Watanabe
  • Yoshio Ishiguro
  • Takanori Nishino
  • Kazuya Takeda

In order to make the experience of traveling in automated vehicles more enjoyable, Virtual Reality (VR) experiences based on the real-world journey have been proposed. Presenting users with VR content synched to the car's actual movement decreases motion sickness, but it also sharply limits the possible range of VR content. In this paper, we investigate whether the user's subjective perception of speed can be 'modified' by presenting VR content at a different speed than the actual speed of the vehicle, and whether users find this experience strange. Study participants viewed VR content occurring at a faster or slower speed than their actual travel speed in an electric wheelchair. Our results show that the participants were able to accept the modified speed without experiencing a feeling of "strangeness". However, the participants did report higher "strangeness" scores when the speed in the VR content was slower than their actual speed.

Comparing CNNs for non-conventional traffic participants

  • Abhishek Mukhopadhyay
  • Imon Mukherjee
  • Pradipta Biswas

This paper investigates the performance of three state-of-the-art pretrained Convolutional Neural Network (CNN) models in terms of accuracy and latency for on- and off-road obstacle detection in the context of autonomous vehicles on Indian roads. We investigated the performance of Mask R-CNN, RetinaNet, and YOLOv3 on a publicly available Indian road dataset. We evaluated the accuracy and latency of these models on novel classes of objects such as animals, autorickshaws, and caravans. Our results show that the accuracy of Mask R-CNN is significantly higher than that of YOLOv3 and RetinaNet, and that the accuracy of YOLOv3 is in turn significantly higher than that of RetinaNet. We also tested the latency of the CNN models and found that YOLOv3 is significantly faster than the other two models and that RetinaNet is significantly faster than Mask R-CNN. Finally, we propose an expert system that integrates environment parameters inside the car with the outside obstacles detected by YOLOv3 to estimate the cognitive load of co-passengers of an autonomous vehicle.
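
To illustrate the kind of per-frame latency measurement this abstract describes, here is a minimal, purely hypothetical sketch (not the authors' pipeline): it times a torchvision Mask R-CNN on a stand-in frame, and the same loop could be repeated for other detectors to compare latency.

```python
# Purely illustrative sketch (not the authors' pipeline): timing per-frame
# inference of a torchvision Mask R-CNN. Pretrained weights are not needed
# just to measure latency, so the model is built with default (random) weights.
import time

import torch
import torchvision

model = torchvision.models.detection.maskrcnn_resnet50_fpn()
model.eval()

frame = torch.rand(3, 720, 1280)  # stand-in for one road-scene image

with torch.no_grad():
    model([frame])  # warm-up run so one-off setup costs do not skew the timing
    start = time.perf_counter()
    for _ in range(10):
        model([frame])
    latency_ms = (time.perf_counter() - start) / 10 * 1000

print(f"Mean per-frame latency: {latency_ms:.1f} ms")
```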

Improving target selection accuracy for vehicle touch screens

  • Kosuke Ito
  • Kento Ohtani
  • Yoshio Ishiguro
  • Takanori Nishino
  • Kazuya Takeda

When operating a touch screen in a car, the touch point can shift due to vibration, resulting in selection errors. Using larger targets is a possible solution, but this significantly limits the amount of content that can be displayed on the touch screen. Therefore, we propose a method for in-vehicle touch screen target selection that can be used with a variety of sensors to increase selection accuracy. In this method, vibration features are learned by a Variational Autoencoder-based model and used to estimate the touch point distribution. Our experimental results demonstrate that the proposed method allows users to achieve higher target selection accuracy than conventional methods.

Using the wizard of Oz paradigm to prototype automated vehicles: methodological challenges

  • Andrea Isabell Müller
  • Veronika Weinbeer
  • Klaus Bengler

Wizard of Oz (WoOz) vehicles are becoming increasingly popular within the human factors research community to prototype automated vehicles in real traffic conditions. With promising results, researchers have so far used the WoOz approach, among other topics, to assess user interfaces, the effect of non-driving related tasks or the development of drowsiness during automated driving. However, a research gap exists considering standardized methodological guidelines for the implementation of WoOz studies in the context of automated driving. In an attempt to fill this gap, this paper investigates the components and their connections in tests using WoOz vehicles. It also applies the three main test quality criteria, objectivity, reliability and validity to the WoOz method. Based on this analysis, methodological challenges are discussed in detail and necessary research questions are deduced. Both challenges and research questions may serve as a thought-provoking impulse when designing WoOz studies in the future.

ATHENA: supporting UX of conditionally automated driving with natural language reliability displays

  • Anna-Katharina Frison
  • Philipp Wintersberger
  • Amelie Oberhofer
  • Andreas Riener

Research on conditionally automated (SAE L3) vehicles usually addresses safety issues in the context of Take-Over Requests. However, little knowledge is available on how the mere possibility of upcoming control transfers affects drivers' user experience. To learn more about this problem, we conducted a focus group discussion. Results suggest that the psychological needs of security, autonomy, competence, and stimulation are not properly satisfied in SAE L3 driving. To counteract this, we developed a natural language reliability display (called "ATHENA") aimed at satisfying these needs in different driving situations. First results from a driving simulator study (N=18) indicate that ATHENA, although not making drivers feel more autonomous, positively influenced their subjective feeling of safety while reducing negative affect. We conclude this work by pointing out the potential of natural language reliability displays for future SAE L3 vehicles.

Can we predict driver distraction without driver psychophysiological state?: a feasibility study on noninvasive distraction detection in manual driving

  • Emmanuel de Salis
  • Dan Yvan Baumgartner
  • Stefano Carrino

Driver distraction is a major issue in manual driving, causing more than 30,000 fatal crashes on US roadways in 2015 alone [11]. As such, it is widely studied in order to increase driving safety. Many studies show how to detect driver distraction using machine learning algorithms and driver psychophysiological data. In this study, we investigate the trade-off between efficiency and privacy when predicting driver distraction. Specifically, we want to assess the impact on the estimation of the driver's state without access to his/her psychophysiological data. Different machine learning models (Convolutional Neural Networks, k-NN, and Random Forest) are implemented to evaluate the validity of the distraction detection with and without access to psychophysiological data. The results show that a Convolutional Neural Network model is still able to detect driver distraction without access to psychophysiological features, with an f1-score of 97.11%, losing only 1.37% in the process.
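
A minimal sketch of the comparison protocol described above, using synthetic data and hypothetical feature columns rather than the study's dataset or models: the same classifier is trained with and without a "psychophysiological" feature subset, and the resulting f1-scores are compared.

```python
# Illustrative sketch only (synthetic data, hypothetical feature layout):
# train the same classifier with and without psychophysiological features
# and compare f1-scores, mirroring the trade-off described in the abstract.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Columns 0-3: driving/vehicle features; columns 4-5: psychophysiological features.
X = rng.normal(size=(n, 6))
y = (X[:, 0] + 0.5 * X[:, 4] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for label, cols in [("all features", slice(None)),
                    ("without psychophysiology", slice(0, 4))]:
    clf = RandomForestClassifier(random_state=0)
    clf.fit(X_train[:, cols], y_train)
    score = f1_score(y_test, clf.predict(X_test[:, cols]))
    print(f"{label}: f1 = {score:.3f}")
```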

Stuck behind a truck: a cooperative interaction design approach to efficiently cope with the limitations of automated systems

  • Jürgen Pichen
  • Martin Baumann
  • Tanja Stoll

Even though automated driving seems to be a promising approach, an emerging problem is the car's possible inability to fulfill the needed requirements efficiently in specific situations. Recent research addresses driver take-over requests to cope with these limitations. A more efficient way is to support the vehicle cooperatively and address the particular requirements that the car cannot meet sufficiently. Based on the examination of human information processing and behavioral execution processes, a new interface framework, which relies on learned behavior patterns, is presented. A simulator study compared touch interaction with the proposed approach and a gesture control for maneuver initiation without a complete take-over. The new approach seems to improve the assurance behavior and proper situation awareness, whereas usability mistakes were found in the practical implementation of the new interaction concept.

Visual aided speech interface to reduce driver distraction

  • Guiyoung Kim
  • Yong Gu Ji

Speech interfaces increasingly appear in many different forms and are used in varied environments, including the context of driving. A speech interface may benefit driving performance, but it can also be distracting. To overcome the weaknesses of the speech interface, the present study aimed at finding adequate 1) speech output (granularity of speech, clustering method) and 2) speech-aided visualization output (length of speech, voice pairing motion) for reducing cognitive load while driving.

Autonomous driving with an agent: speech style and embodiment

  • Seul Chan Lee
  • Harsh Sanghavi
  • Sangjin Ko
  • Myounghoon Jeon

A driving agent can be an effective interface for interacting with drivers to increase trust towards the autonomous vehicle. While driving research on agents has mostly focused on voice agents, few empirical findings on robot agents have been reported. In the present study, we compared three different agents (informative voice agent, informative robot agent, and conversational robot agent) to investigate their effects on driver perception in Level 5 autonomous driving. A driving simulator experiment with an agent was conducted. Twelve drivers experienced a simulated autonomous drive and responded to the Godspeed questionnaire, the RoSAS questionnaire, and a social presence measure. Drivers rated the conversational robot agent as significantly more competent, warmer, and providing higher social presence than the other two agents. Interestingly, despite this emotional closeness, drivers' attitudes toward the conversational robot agent were contradictory: they mostly chose it as either the best or the worst option. The findings of the present study are meaningful as a first step in exploring the potential of various types of in-vehicle agents in the context of autonomous driving.

Cognitive psychological approach for unraveling the take-over process during automated driving

  • Lara Scatturin
  • Rainer Erbach
  • Martin Baumann

Automated driving opens up many advantages for the driver. Until fully automated driving is reached, though, the driver is still not completely detached from the actual driving task. Situations in which the driver has to intervene and take over control will be inevitable. These take-over situations are currently well researched. However, there is only limited information about what happens on the cognitive level during the transition from automated to manual driving. Thus, the aim is to unravel the cognitive processes during the entire take-over. A cognitive psychological approach is used to propose a conceptual framework that may serve as the basis for a cognitive model. Such a model allows dynamic predictions and paves the way toward an adaptive user-centered HMI for the take-over situation.

Driving simulator studies at home: promises, potholes, and pitfalls

  • Alexander Mirnig
  • Sandra Trösterer
  • Alexander Meschtscherjakov
  • Artur Lupp
  • Manfred Tscheligi
  • Thomas Engel
  • Fintan McGee
  • Roderick McCall
  • Mickaël Stefas
  • Joan Baixauli
  • Francesco Bongiovanni

This paper presents an overview of a driving simulation platform designed to let users participate in a study over a longer period of time and in their own home. A small-scale study of the platform is presented, and an overview of the experiences gained in running such studies is summarised.

Takeover response: differences between US and Slovenia

  • Erika E Miller
  • Steven Hwang
  • Tomaz Cegovnik
  • Linda Ng Boyle
  • Jaka Sodnik

A driving simulator study was conducted in Slovenia and the USA to examine cultural differences in driver response to take-over requests (TOR) in highly automated driving. There was a total of 38 participants (19 in each study location). Drivers engaged in secondary tasks while also monitoring the driving task. A TOR occurred at the end of the drive and participant response time was measured. All participants in Slovenia responded to the TOR while only 74% of the USA participants responded to the TOR. Of the participants that did respond to the event, Slovenia participants responded significantly faster than USA participants (Δ 1388 msec). Participants that were distracted responded significantly slower than those not distracted at the start of the TOR (Δ 751 msec). This work-in-progress seeks to facilitate further comparisons in driver differences across more locations.

Don't you see them?: towards gaze-based interaction adaptation for driver-vehicle cooperation

  • Marcel Walch
  • David Lehr
  • Mark Colley
  • Michael Weber

Highly automated driving evolves steadily and is even gradually entering public roads. Nevertheless, there remain driving-related tasks that can be handled more efficiently by humans. Cooperation with the human user on a higher abstraction level of the dynamic driving task has been suggested to overcome operational boundaries. This cooperation includes, for example, deciding whether pedestrians want to cross the road ahead. We suggest that systems should monitor their users when they have to make such decisions. Moreover, these systems can adapt the interaction to support their users. In particular, they can match gaze direction against objects in their environmental model, such as vulnerable road users, to guide the focus of users towards overlooked objects. We conducted a pilot study to investigate the need for and feasibility of this concept. Our preliminary analysis showed that some participants overlooked pedestrians who intended to cross the road, which could be prevented with such systems.

Applying participatory design to symbols for SAE level 2 automated driving systems

  • Mickaël J. R. Perrier
  • Tyron Louw
  • Rafael C. Gonçalves
  • Oliver Carsten

Automakers take the risk of designing their own symbols for adaptive cruise control (ACC) and lane centring assist (LCA), some of them even using symbols from other driving assistance systems. Doing so exposes drivers to potential confusion and poses a threat to safety. A user-centred approach allowed us to gather information on ways to design intuitive symbols for users of automated vehicles. We invited drivers to a participatory design workshop to ideate and review existing symbols used for ACC and LCA. Here, we report our first step towards the development of recommendations for the design of driver-vehicle interfaces (DVI) of SAE level 2 and 3 systems.

Evaluation of driving behavior on highway entries

  • David Sauer
  • Martin Albert
  • Stephanie Cramer

Merging onto highways is a demanding driving task with an above-average number of accidents. Given the progress in developing automated driving systems, a specific system for this scenario could provide help. Within the scope of a study testing different automated driving behavior variations on the A9 highway, the manually driven highway entries were evaluated. The paper presents the first results derived from the highway entry in Lenting, Germany. The variables describe the participants' driving behavior in terms of cutting corners, velocities, decelerations, and accelerations. The two curves were cut with a maximum distance of 0.58 and 0.72 m from the center line. The velocities before the start of the entry zone were between 40 and 65 km/h, and the mean velocity for the lane change was 76.95 km/h. The maximum lateral acceleration in both curves was around 3 m/s², and the longitudinal acceleration was approximately between zero and 1.3 m/s².

The effect of incentives in driving simulator studies

  • Philipp Hock
  • Enrico Rukzio
  • Martin Baumann

In driving experiments, incentives are used to establish motivation for task performance. Usually, monetary rewards are used to achieve this. We focus on the question of whether rewards other than money (an anonymous donation and social comparison) can achieve task performance equal to that of monetary rewards. In a simulator experiment (n = 20), we could show that participants achieved equal task performance across all conditions (monetary reward, anonymous donation, comparative highscore) except the baseline (a score with no further meaning). This leads to the assumption that, besides monetary rewards, other incentives may lead to the same performance in simulator studies.

The first co-drive experience prototype

  • Laura Boffi
  • Philipp Wintersberger
  • Paola Cesaretti
  • Giuseppe Mincolelli
  • Andreas Riener

In this paper, we report on a first experience prototype for the Co-Drive concept, a new service for traveling and socializing by car between a driver in an automated vehicle and a remote passenger connected via virtual reality from home. Pushing beyond driving and safety functionalities, Co-Drive envisions a new way of sharing the trip, which could be enabled by future technologies, providing a social context for automated cars' actions and for teleoperated driving. We also reflect on the concept of the Driver-Car assemblage as a conjunction of a human and a material object, and we suggest that Co-Drive enables a new kind of Driver-Remote Passenger-Automated Car assemblage.

Towards a frustration-aware assistant for increased in-vehicle UX: F-RELACS

  • Michael Oehl
  • Klas Ihme
  • Uwe Drewitz
  • Anna-Antonia Pape
  • Sonja Cornelsen
  • Martin Schramm

Frustration has negative effects on user experience (UX) both in manual and automated driving. Therefore, it negatively influences driving performance and system acceptance. To address this, the project F-RELACS (Frustration Real-Time Recognition for an Adaptive In-Car System) aims to demonstrate a frustration-aware assistant called MUsE (My User Experience Improvement System) that is capable of estimating the user's momentary level of frustration and offering tailored support to improve UX. However, up to now little is known about interaction strategies for automotive user interfaces that successfully reduce the user's frustration and improve UX in frustrating driving situations. Here we report the initial results from user focus groups (current status: 2 groups with in total 11 participants) that are conducted to identify suitable interaction strategies to develop a frustration-aware assistant for enhanced in-vehicle UX in a user-centered design process.

Spatial visualization of sensor information for automated vehicles

  • Fei Yan
  • Shyukryan Karaosmanoglu
  • Aslihan Demir
  • Martin Baumann

Displaying the sensor limitations of automated vehicles is crucial to traffic safety and trust in automation. However, the current representation of system uncertainty is quite general, with symbols or scales consisting of uncertainty levels, which is problematic in critical situations where drivers need to know the specific problem of the sensors. We propose an interface that visualizes the radar sensor information spatially, taking the surroundings into account, with the aim of providing a better mental representation of the situation and supporting drivers' decisions. It is evaluated against two reference interfaces with either no or a general representation of the sensor information. After seeing the different interfaces in various scenarios of overtaking obstacles, participants selected one of the following options: "stop", "circuit", or "take over the control". The results show that although the interface showing no sensor information has the shortest reaction time, the proposed interface changed drivers' decisions from "circuit" to "take over the control" the most.

HMI-testing for (non-) automated vehicles in urban connected mixed traffic: cooperative lane change

  • Sabine Springer
  • Isabel Neumann
  • Bettina Kämpfe
  • Tina Morgenstern
  • Josef F. Krems
  • Franziska Schmalfuß
  • Johanna Busch
  • Oliver Vogel
  • Alexander Jungmann

Lane changes in urban traffic can be highly risky. Drivers need to choose gaps in the traffic flow adequately and synchronize their driving behavior with directly surrounding vehicles. Especially in urgent cases, such as an approaching emergency vehicle, successful and safe lane changes are of high value. Automating vehicles as well as using innovative communication technologies could reduce this potential hazard, as advanced vehicles will be able to cooperate and negotiate maneuvers efficiently. Making these processes transparent and comprehensible to the driver is essential not only for the acceptance of these innovations but, in consequence, also for establishing safer and more efficient traffic. For this purpose, two different human-machine interface (HMI) concepts for users of highly automated connected vehicles as well as for non-automated connected vehicles have been developed and evaluated with respect to usability, acceptability, and subjective workload.

Inducing erroneous behavior in a driving simulator with gamification

  • Steffen Maurer
  • Ramona Schmid
  • Rainer Erbach
  • Enrico Rukzio

Gamification elements like leaderboards can be used to influence people's motivation and therefore their behavior. This could be used in a driving simulator study to persuade participants to misbehave; such behavior is required to test participants' reactions to intervening assistance systems. In a short preliminary study (n=7), we explored whether a competitive element can introduce time pressure and high involvement in a distraction task in a driving simulator. Besides a short questionnaire, the driving behavior was observed and qualitatively evaluated. While all participants felt motivated by the possibility of competing with their colleagues, the creation of time pressure was not successful. Nevertheless, the effects of the distraction and time pressure on driving behavior were noticeable. Finally, recommendations for the design of specific situations that induce a certain behavior in the simulator are derived from the results.

Online experiments as a supplement of automated driving simulator studies: a methodological insight

  • Philipp Hock
  • Franziska Babel
  • Kristin Muehl
  • Enrico Rukzio
  • Martin Baumann

Online experiments offer several advantages compared to lab studies. We propose that reaction times to takeover requests in automated driving research can be evaluated using online experiments. Furthermore, we assume that online studies can complement simulator studies to some extent. However, online studies have several pitfalls (e.g., an uncontrolled environment), and confounding factors can occur in hardware, software, and human behavior. We provide a detailed methodology and technology section to reduce possible confounding factors and to increase the validity of automated driving online studies. Furthermore, to test the effect of the proposed methodology, we conducted a user study with a takeover scenario as an online (n = 42) and a simulator (n = 17) experiment. First results indicate that reaction times did not differ between the simulator and online experiments. Thus, we conclude that with some methodological refinements and practices, similar results can be achieved in online studies compared to simulator studies.
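
As a rough illustration of the kind of between-group comparison such a mixed online/simulator design involves (synthetic data; not the authors' actual analysis), the sketch below compares reaction times from a hypothetical online group (n = 42) and simulator group (n = 17) with a nonparametric test.

```python
# Illustrative sketch (synthetic data, not the study's analysis): compare
# take-over reaction times between an online and a simulator group.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
rt_online = rng.normal(loc=2.6, scale=0.6, size=42)     # seconds, n = 42
rt_simulator = rng.normal(loc=2.5, scale=0.6, size=17)  # seconds, n = 17

u_stat, p_value = stats.mannwhitneyu(rt_online, rt_simulator,
                                     alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.3f}")
```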

Personalized user profiles for autonomous vehicles

  • Alexander Trende
  • Daniela Gräfing
  • Lars Weber

Adapting the driving style of autonomous vehicles to fit the user's preferences may reduce uncomfortable user experiences. This can support the adoption process of the technology. We present ongoing research about adaptive customizable user profiles for autonomous driving styles. We propose a system that allows users to customize maneuver-based driving styles via a graphical user interface. Additionally, the system can learn user preferences based on user state assessment. We plan to design and evaluate this system with two driving simulator studies.

Effects of gesture-based interfaces on safety in automotive applications

  • Christof van Nimwegen
  • Kwintijn Schuurman

This project investigates gestures instead of a dashboard for secondary tasks in cars. Driving tasks were performed using a simulator to measure differences in performance using gestures as controls while driving. No relation between driving safety and the chosen input modality was found. However, drivers using gestures spent significantly more time looking at the road than those using a dashboard, which might indicate that drivers are more likely to correctly respond to unforeseen circumstances.

Effect of on-road virtual visual references on vehicle control stability of wide/narrow FOV drivers

  • Akira Utsumi
  • Tsukasa Mikuni
  • Isamu Nagasawa

The performance of vehicle control tasks greatly depends on the driver's ability to acquire visual information about the vehicle's location on the road. Advances in sensing and display technologies make it possible to effectively assist the driver in information acquisition by accurately detecting the vehicle's current location and dynamically providing feedback to the driver to enhance driving performance. In this paper, we use simulated driving studies to investigate how to improve a driver's control of the vehicle by presenting virtual visual targets. We also investigate individual differences in performance enhancement based on the drivers' field of view (FOV). Our experimental results suggest that virtual visual targets improve vehicle control and eye-movement behavior. We also present behavioral differences between two subject groups: a narrow-FOV and a wide-FOV group.

Comparing user requirements for automated vehicle interiors in China and Germany

  • Vanessa Sauer
  • Alexander Mertens
  • Stefan Groß
  • Verena Nitsch
  • Jens Heitland

Automated vehicles (AVs) are a global trend, and cultural differences in user preferences are likely; these must be considered to create safe, trustworthy, usable, and well-being-enhancing AVs. To investigate such cultural differences, a cross-cultural qualitative expert survey with a focus on the AV interior was conducted in China and Germany. The study results indicate that similar features of the AV interior are important for fulfilling user needs for subjective well-being, trust, safety, and usability. However, the importance of the features and their exact design differ. In general, the results indicate that Chinese users will prefer an AV interior with a hedonic design, while German users will prefer a pragmatic design.

Including people with impairments from the start: external communication of autonomous vehicles

  • Mark Colley
  • Marcel Walch
  • Jan Gugenheimer
  • Enrico Rukzio

People with impairments are among the most vulnerable road users in traffic, with a significantly higher risk of accidents. Upcoming autonomous vehicles are expected to reduce this risk but demand some form of external communication to be able to signal their intent or other information to pedestrians. Recent research on the design of vehicle-pedestrian communication focuses strongly on concepts for a non-disabled adult population (e.g., visual cues). This work conducted a review of existing concepts for vehicle-pedestrian communication (29 publications, 2014-2019) and evaluated each according to guidelines of universal design, with people with disabilities in mind. Our results uncover shortcomings of the proposed concepts (e.g., over 65% rely solely on visual feedback), and we combine these insights with first impressions of those affected (an interview with two visually impaired users).

Supervising the self-driving car: situation awareness and fatigue during automated driving

  • Angus McKerral
  • Nathan Boyce
  • Kristen Pammer

The capacity for human drivers to resume control from an automated vehicle remains a central focus of human factors research. Physiological measures promise to allow the vehicle system to determine when a driver is in a ready-state for transition of control, particularly for level 3 automation and above. We employ an adapted measure of Situation Awareness (SA) to assess the quality of driver SA following an extended period of simulated level 3 automated driving. It is hypothesised that a within-subjects design will demonstrate increasing passive fatigue to be predictive of reduced SA following a takeover request. Participants were also randomly allocated to one of two separate conditions in which supervising drivers were either permitted to, or prohibited from the use of non-driving related tasks (NDRT) during automated driving, to investigate a potential avenue for targeted SA enhancement through deliberate NDRT engagement. Preliminary results provide tentative support for our hypotheses.

Using gaze-based interactions in automated vehicles for increased road safety

  • Holger Schmidt
  • Gottfried Zimmermann
  • Albrecht Schmidt

The development of self-driving vehicles seems to go well with the growing demand for the daily use of mobile devices. However, autonomous vehicles will still need manual intervention in unforeseen or dangerous situations. Therefore, it is important for drivers to stay aware of the traffic situation around them so that they can quickly take over. We developed a prototype that presents media content on a simulated windshield display and uses gaze tracking as an additional input modality for the driver. Although we intentionally pull the driver's gaze away from the driving situation, this seems to be less of a distraction than using hand-held mobile devices or dash-integrated displays. We hypothesize that the time to regain control with our prototype is shorter compared to traditional media presentation. This work-in-progress paper provides insight into the concept of the prototype; first results will be presented at the conference.

Uncovering perceived identification accuracy of in-vehicle biometric sensing

  • Abdallah El Ali
  • Liam Ashby
  • Andrew M. Webb
  • Robert Zwitser
  • Pablo Cesar

Biometric techniques can help make vehicles safer to drive, authenticate users, and provide personalized in-car experiences. However, it is unclear to what extent users are willing to trade their personal biometric data for such benefits. In this early work, we conducted an open card sorting study (N=11) to better understand how well users perceive their physical, behavioral and physiological features can personally identify them. Findings showed that on average participants clustered features into six groups, and helped us revise ambiguous cards and better understand users' clustering. These findings provide the basis for a follow up online closed card sorting study to more fully understand perceived identification accuracy of (in-vehicle) biometric sensing. By uncovering this at a larger scale, we can then further study the privacy and user experience trade-off in (automated) vehicles.

Lessons from Oz: design guidelines for automotive conversational user interfaces

  • David R. Large
  • Gary Burnett
  • Leigh Clark

This paper draws from literature and our experience of conducting Wizard-of-Oz (WoZ) studies using natural language, conversational user interfaces (CUIs) in the automotive domain. These studies have revealed positive effects of using in-vehicle CUIs on issues such as: cognitive demand/workload, passive task-related fatigue, trust, acceptance and environmental engagement. A nascent set of human-centred design guidelines that have emerged is presented. These are based on the analysis of users' behaviour and the positive benefits observed, and aim to make interactions with an in-vehicle agent interlocutor safe, effective, engaging and enjoyable, while conforming with users' expectations. The guidelines can be used to inform the design of future in-vehicle CUIs or applied experimentally using WoZ methodology, and will be evaluated and refined in ongoing work.

How do humans respond when automated vehicles request an immediate vehicle control take-over?

  • Su Jin Baek
  • Hanna Yun
  • Ji Hyun Yang

In the present study, human behavioral patterns were analyzed using eye-tracking-based perception time, human response time, and the manner of driving-control transition when a vehicle issues a control take-over request (TOR). The time for perceiving the TOR varied between 1.84 s and 19.93 s. In 54% of all TOR events, drivers began manual driving with braking, in 33% with throttle application, and in 13% with steering inputs. The observations of this study are as follows: men initiate throttle operation earlier than women do when given TORs; NDRTs do not affect users' responses; and visual-only warnings are not recommended for TORs.

In-car distractions and automated driving: a preliminary simulator study

  • Sinan E. Arkonac
  • Duncan P. Brumby
  • Tim Smith
  • Harsha Vardhan Ramesh Babu

As vehicles with automated driving features become more common, drivers may become ever more tempted to engage in secondary in-car tasks. We report on the results of a driving simulator study that investigated whether the presence of an in-car video would make drivers more likely to switch on an automated driving system so that they can watch the video. Results show an increase in automated driving mode usage when a video was playing compared to when it was not playing. The presence of this in-car video also made participants slower at reacting to frequent red traffic lights, which the automated driving mode did not detect and were the responsibility of the driver to respond to. These results suggest that in-car distractions are a critical concern for the safe and responsible use of automated driving systems.

Designing emotion-aware in-car interactions for unlike markets

  • Jingyi Li
  • Michael Braun
  • Andreas Butz
  • Florian Alt

Interior cameras and other sensors facilitate affective automotive UIs reacting to the driver's detected emotional state and providing customized support. Previous research mainly presents single prototypes designed in western countries. A comprehensive view of relevant use cases and possible cultural differences between markets, however, is missing. This is particularly important as the significance of emotions is quite different between, e.g., western and eastern cultures. We present use case clusters for affective in-car UIs based on ideation workshops with German and Chinese participants. Our focus lies on the requirements arising from cultural differences, such as more rule-consistent driving in Germany or more important social components in China. Our ideation approach aims to enrich use cases for empathetic vehicles with the user's culture in mind. The use case clusters we present can inspire future concepts for improving user experience through affective interaction and for boosting acceptance by observing cultural peculiarities.

A field study to collect expert knowledge for the development of AR HUD navigation concepts

  • Matthias Schneider
  • Anna Bruder
  • Marc Necker
  • Tim Schluesener
  • Niels Henze
  • Christian Wolff

A promising advancement of conventional head-up displays in vehicles is the implementation of augmented reality. By projecting the content onto the vehicle's windshield, information can be displayed in a contact analogue way in the real world. Two major challenges for concept developers are to reduce masking caused by augmented reality content and to create concepts that are suitable for the limited field of view. To approach these challenges, we designed two contact analogue navigation concepts and evaluated them in a field study with a prototype car that contained a complete AR HUD testing environment. The subjects were experts in interaction design, AR, HUD and sales. First results of the experts' suggestions for improvements are given in this extended abstract.

Talk to me!: exploring stereoscopic 3D anthropomorphic virtual assistants in automated vehicles

  • Kathrin Knutzen
  • Florian Weidner
  • Wolfgang Broll

Trust in self-driving cars is considered essential for ensuring appropriate usage of such vehicles. Representing the automation system as an anthropomorphic virtual assistant that communicates the system's behavior could help to build and maintain trust. To explore such assistants, a prototype of an anthropomorphic virtual assistant was designed and integrated into a conditionally automated vehicle. In a driving simulator experiment with a between-group design (n = 56), participants experienced either an anthropomorphic 2D or a stereoscopic 3D virtual assistant. While driving, distrust towards the automation system was gradually evoked. We analyzed trust and driving behavior. The study's findings indicate that the implementation of stereoscopic 3D impacts overtaking behavior, potentially fosters safety-consciousness, and mitigates inappropriate usage of the system. We further report on the design of the virtual assistant.

An empirical investigation of measures for well-being in highly automated vehicles

  • Vanessa Sauer
  • Alexander Mertens
  • Verena Nitsch
  • Jens Dietmar Reuschel

The advent of automated driving shifts user behavior in vehicles and calls for user-centric design. One potential user-centric variable to consider is passenger well-being. In certain experimental designs and industry applications self-report measures may not be the optimal type of measurement and objective measures for well-being could give more nuanced results. This study investigates the relationship between subjective self-ratings and objective measures for well-being in the context of highly automated vehicles. A static driving simulator was used to create different vehicle interiors. Participants' (n=20) responses using self-reports and objective measures (i.e. heart rate variability, electrodermal activity, facial expression, body motion) were collected. The results showed significant correlations of self-reports with heart rate variability and body motion. Both measures were able to discriminate between different stimuli, suggesting that they may be suitable objective measures to act as proxy and complement subjective measures for well-being in highly automated vehicles.

Introducing automated driving to the generation 50+

  • Jingyi Li
  • Kai Holländer
  • Andreas Butz

Automated driving could promote safety and comfort in everyday traffic by reducing the human factor in vehicle control and traffic flow. Older people especially can benefit from such technology, since age may have an impact on driving ability. However, only a few driving assistance systems focused on the needs of the generation 50+ also consider the opportunities of highly automated vehicles, such as the use of head-mounted displays while driving. In a survey with drivers aged 50+ (N=26), we found that 81% had a basic knowledge of automated vehicles (AVs) but 73% would not buy an AV or are in doubt. In this WIP report, we present our research approach leading to a novel system for automated driving that addresses the needs of the generation 50+. We believe that the proposed system can foster the safety and acceptance of users in this age group and beyond.

Bringing the thrill to automated vehicles: an evaluation of thrill-seeking driving displays

  • Zoe M. Becerra
  • Brittany Holthausen
  • Bruce N. Walker

As vehicles with automated features are becoming more common on roadways, it is important to help drivers appropriately trust and accept those automated systems. Utilizing personalized driving displays may enhance the experience of operating a highly automated vehicle and thereby improve driver acceptance of automation. The current study evaluated a personalized display profile for "thrill-seeking" drivers, to determine if the displays conveyed the intended information and matched drivers' mental model for thrill-seeking driving. The results indicated the displays conveyed the intended meaning to the user, but some displays need further improvement to better match the mental model for thrill-seeking driving.

First attempt to build realistic driving scenes using video-to-video synthesis in OpenDS framework

  • Zili Song
  • Shuolei Wang
  • Weikai Kong
  • Xiangjun Peng
  • Xu Sun

Existing programmable simulators enable researchers to customize different driving scenarios to conduct in-lab automotive driver simulations. However, software-based simulators for cognitive research generate and maintain their scenes with the support of 3D engines, which may affect users' experiences to a certain degree since the scenes are not sufficiently realistic. A critical issue, therefore, is how to make simulated scenes approach real-world ones. In this paper, we introduce a first step towards utilizing video-to-video synthesis, a deep learning approach, within the OpenDS framework, an open-source driving simulator, to present simulated scenes as realistically as possible. Off-line evaluations demonstrated promising results, and our future work will focus on how to merge the two appropriately to build a close-to-reality, real-time driving simulator.

Modeling the effects of auditory display takeover requests on drivers' behavior in autonomous vehicles

  • Sangjin Ko
  • Yiqi Zhang
  • Myounghoon Jeon

In semi-autonomous vehicles (SAE level 3) that require the driver's engagement in critical situations, it is important to secure reliable control transitions. There have been many studies investigating appropriate auditory displays for takeover requests (TORs), but most of them were empirical experiments. In the present study, we established two computational models using the Queuing Network Model Human Processor (QN-MHP) framework to predict a driver's reaction time to auditory displays for TORs. The reaction times for different sound types were modeled based on the results of subjective questionnaires in empirical studies. Separately, the reaction times for various non-speech sounds were modeled using acoustical characteristics of the sounds and previous empirical studies. This is one of the few attempts to model the effects of auditory displays for TORs on reaction time in autonomous driving. The current study will contribute to driving research by allowing us to simulate and predict drivers' behavior.

VRoad: gesture-based interaction between pedestrians and automated vehicles in virtual reality

  • Uwe Gruenefeld
  • Sebastian Weiß
  • Andreas Löcken
  • Isabella Virgilio
  • Andrew L. Kun
  • Susanne Boll

As a third party to both automated and non-automated vehicles, pedestrians are among the most vulnerable participants in traffic. Currently, there is no way for them to communicate their intentions to an automated vehicle (AV). In this work, we explore the interactions between pedestrians and AVs at unmarked crossings. We propose a virtual reality testbed, in which we conducted a pilot study to compare three conditions: crossing a street before a car that (1) does not give information, (2) displays its locomotion, or (3) displays its locomotion and reacts to pedestrians' gestures. Our results show that gestures introduce a new point of failure, which can increase pedestrians' insecurity. However, communicating the vehicle's locomotion supports pedestrians, helping them to make safer decisions.

LeadingDisplay: a versatile, robotic display for infotainment in autonomous vehicles

  • Yoshio Ishiguro
  • Kazuya Takeda

We propose a versatile, robotic display as an interaction device for infotainment in autonomous vehicles, which is also capable of utilizing spatial information obtained by the vehicle's sensors to designate precise locations in the surrounding environment. Our proposed display can be used to indicate where the device is pointing in physical space and presents annotation information to the user on a screen. The display, which can be rotated and panned by the user or the computer, allows input or display at any angle, and can be used in various modes, such as pen or touch input, to display video or gaming content, or to designate targets or locations outside the vehicle. In this paper, we evaluate whether users can grasp a target location in the surrounding environment when the computer changes the orientation of the display, and whether users can input direction information by rotating the display themselves.

Designing a naturalistic in-car tutor system for the initial use of partially automated cars: taking inspiration from driving instructors

  • Anika Boelhouwer
  • Arie Paul van den Beukel
  • Mascha van der Voort
  • Marieke Martens

As commercial cars start to include more automated functions, it becomes difficult for drivers to understand how and when to use them safely. While general HMI recommendations for partially automated cars have been made, it is unclear how drivers should be supported during the initial use period. Recommendations for a tutor system that guides drivers in their initial use of partially automated cars are necessary. To gain inspiration for such a tutor system, we examined the existing communication loop between driving instructors and their students. Driving instructors and their students were video recorded during regular driving lessons. The tutoring patterns found during the initial qualitative analysis (i.e., situation- and student-adaptive feedback, student-adaptive tasks, and body movements for correcting and requesting actions) are discussed. Furthermore, we suggest methods for implementing the tutoring patterns in a tutor system to support drivers in the use of partially automated cars.

Measuring susceptibility to alerts while encountering mental workload

  • Christian P. Janssen
  • Remo M. A. van der Heiden
  • Stella F. Donker
  • J. Leon Kenemans

This work-in-progress reports two studies that test if cognitive load reduces human susceptibility to auditory alerts. Previous studies showed that susceptibility (measured using Event-Related Potentials) is reduced when people perform visual or manual tasks, including in driving settings. We investigate whether a cognitively distracting task, without visual and manual components, also reduces susceptibility. Study one suggests that, outside of a driving context, performance of such a cognitively distracting task reduces susceptibility to auditory alerts compared to baseline without distraction. Study two suggests that susceptibility is also reduced when people perform a cognitively distracting task during automated driving. The results have important implications for semi-automated vehicles. Such vehicles rely on alerts to initiate a take-over of control by the human driver. However, if the human is distracted by another task - be it visual, manual, or cognitive - they might not always detect the alert, as their susceptibility is reduced.

A framework of the non-critical spontaneous intervention in highly automated driving scenarios

  • Chao Wang

One trend in the development of autonomous driving is to take the human completely out of the loop. However, we believe that there are good grounds to keep the human in the loop, at some level of control (in particular tactical control), even in the case of full automation. One reason to do so is that the technology may not be flexible enough to always behave according to human needs and preferences, which may vary across people and situations. There are still many scenarios in which drivers want to spontaneously intervene in the behaviour of the system, e.g. picking up a friend in a crowded area, selecting a parking lot, or turning onto another road to explore and find new places rather than following a big truck. In this paper, we present a framework of human intervention in Non-critical Spontaneous Situations (NCSSs). The framework facilitates a systematic definition of these scenarios, which ensures that all relevant aspects are addressed when eliciting requirements or validating concepts.

Driving-task-related human-machine interaction in automated driving: towards a bigger picture

  • Marcel Walch
  • Mark Colley
  • Michael Weber

The role and respective tasks of human drivers are changing due to the introduction of automation in driving. Full automation, where the driver is only a passenger, is still far off. Consequently, both academia and industry investigate what the interaction between automated vehicles and their drivers could look like and how responsibilities could be allocated. Different approaches have been proposed to deal with the shortcomings of automated vehicles: control shifts (handovers and takeovers), shared control, and cooperation. While there are models and frameworks for the individual areas, a big picture is still missing in the literature. We propose a first overview that aims to relate the three areas based on their particular differences (presence of mode changes, duration of interaction, and level of interaction).

Virtual reality passenger experiences

  • Mark McGill
  • Stephen Brewster

Our research aims to improve passenger journeys across both public and private transport, in cars, buses, planes and trains, by utilizing Mixed Reality head-mounted displays (e.g. visual/auditory augmented and virtual reality). This paper discusses our initial motivations and formative work in this area, both for in-car VR [33, 32] and for in-flight VR [42], and outlines some of the key challenges we anticipate in enabling mixed reality passenger experiences.

For a better (simulated) world: considerations for VR in external communication research

  • Mark Colley
  • Marcel Walch
  • Enrico Rukzio

In the emerging research field of external communication between autonomous vehicles and vulnerable road users (e.g. pedestrians), there is no agreed-upon set of methods to design and evaluate concepts. The approaches range from purely paper-based design studies through Virtual or Augmented Reality simulation to real-world testing of early prototypes. While there are benefits to each of these approaches, the most promising is considered to be the virtual reality (VR) approach, since it allows for a quick, realistic and safe evaluation of new designs and concepts. A literature review of existing concepts for vehicle-pedestrian communication revealed that only 7 publications and preprints between 2014 and 2019 used VR in their research. We evaluated each based on criteria relevant for pedestrian crossing decisions and factors important for conducting experiments. Our results show relevant considerations for implementing a VR simulator for external communication research and for conducting studies in this field.

Exploring the impact of transparency on the interaction with an in-car digital AI assistant

  • Robin Neuhaus
  • Matthias Laschke
  • Dimitra Theofanou-Fülbier
  • Marc Hassenzahl
  • Shadan Sadeghian

Nowadays, intelligent assistants, such as Amazon's Alexa, are widely available. Unsurprisingly, intelligent assistants find their way into cars, in some cases as a major way to interact with the car. We conducted a user enactment exploring the impact of transparency on a possible future user experience with a digital AI assistant in the car. The focus is on whether tasks should be performed in an opaque way, only involving the user when it is necessary, or in a transparent way, always offering the user insights into what is being done and how. We present initial findings indicating a slight preference towards more transparency.

Providing contextual information when encountering traffic interruptions during automated driving: a preliminary study

  • Franck Techer
  • Ebru Dogan
  • Félicie Rampillon
  • Luciano Ojeda
  • Stéphane Feron
  • David Barat
  • Jean-Yves Marteau

The complexity of the urban driving environment may create frequent interruptions in the progression of automated vehicles, which may have adverse effects on the driver's mood and even result in driver-initiated takeovers. A possible way to counteract this is to help drivers reappraise the situation by providing additional information when they encounter interruptions in the vehicle's progression. This study aimed at assessing the effect of an HMI enhanced with contextual information on drivers' mood, attitude and behavior, compared to a basic HMI. Participants drove the experimental scenario in a driving simulator, once with and once without receiving contextual information. Results are discussed, potential biases are identified, and perspectives for improving the proposed method are outlined.

SESSION: Videos

Smart bus stop made with interactive surfaces

  • Takashi Matsumoto
  • Motoaki Yamazaki
  • Kei Furukawa
  • Yuya Igarashi
  • Ryotaro Yoshida
  • Michihito Shiraishi

This is a video prototype of a Smart Bus Stop consisting of interactive surfaces. The authors have envisioned a bus stop that shows necessary information for bus passengers on its wall and floor surfaces. Rapid prototypes were used to visualize the functions of its navigation floor and personal walls. A functional prototype using image recognition and projection was tested in a lab environment to specify technical details. Then a scenario skit was tested in an arrangement with a real bus. The authors assume that this concept will be applied not only to buses but also to many other types of mobility solutions (e.g. ride-hailing app pickup zones, autonomous vehicle pickup zones).

Exploring the concept of the (future) mobile office

  • Christian P. Janssen
  • Andrew L. Kun
  • Stephen Brewster
  • Linda Ng Boyle
  • Duncan P. Brumby
  • Lewis L. Chuang

This video shows a concept of a future mobile office in a semi-automated vehicle that uses augmented reality. People perform non-driving tasks in current, non-automated vehicles even though that is unsafe. Moreover, even for passengers, space is limited, the setting is not social, and motion sickness can occur. In future cars, technology such as augmented reality might alleviate some of these issues. Our concept shows how augmented reality can project a remote conversant onto the dashboard. Thereby, the driver can keep an occasional eye on the road while the automated vehicle drives, and might experience less motion sickness. Potentially, this concept might even be used for group calls or for group activities such as karaoke, thereby creating a social setting. We also demonstrate how integration with an intelligent assistant (through speech and gesture analysis) might save the driver from having to grab a calendar to write things down, again allowing them to focus on the road.

Novel human-machine interfaces for the management of user-vehicle transitions in automated driving

  • Gary Burnett
  • Wendy Ju
  • Sabine Langlois
  • Andreas Riener
  • Steven Shladover

For automated vehicles operating at SAE Level 4 capability, control could feasibly be passed from machine to human and vice versa, regardless of whether a minimal risk condition exists as a fallback solution. We propose two Human-Machine Interfaces (HMIs) to assist in the management of these transitions: 1) a 'Responsibility Panel' providing the necessary feedback for a user to understand who must undertake different driving-related activities (look, brake, throttle, steer) and who might be liable if a fault arises (user or car company); 2) a 'Readiness to Drive' testing HMI that only allows a human to retake control when a certain level of competency is demonstrated. Future work should evaluate the effectiveness of our HMIs.

S.Wing: a shared on-demand transportation network system for children

  • Haena Kim
  • Auður Anna Jónsdóttir
  • Jai Shankar
  • Linda Ng Boyle

This video demonstrates the proposed Super Wing (S.Wing), a self-driving modular transportation system that provides children the opportunity to travel by themselves. The safety of children in vehicles is often threatened by distracted driving behaviors. With the on-demand S.Wing system, a child can travel by themselves based on the mutual needs of the parents and the child. S.Wing includes a supervision service by having a certified attendant with the child on board. This ensures that the child's seat belt is safely fastened and that help can be provided immediately. A modular S.Wing pod is ordered by an adult through a mobile application and offers the adult the opportunity to monitor and interact with their child via live video streaming. The system has the potential to improve children's safety while they are on the road and allow them to engage in social and school-related activities without relying on adults for transportation.

Visualizing implicit eHMI for autonomous vehicles

  • Dylan Moore
  • G. Ella Strack
  • Rebecca Currano
  • David Sirkin

Autonomous vehicles' (AVs) interactions with pedestrians remain an ongoing uncertainty. Studies claim the need for explicit external human-machine interfaces (eHMI), such as lights, to replace the lack of eye contact with and explicit gestures from drivers. To further explore this area, we conducted a naturalistic field study using the Ghostdriver protocol to explore how pedestrians react to a simulated driverless vehicle stopping at a crosswalk in real traffic on real roads. All pedestrians crossed in front of the vehicle with little hesitation, even though we did not signal anything beyond the vehicle's stopping motion. A few were surprised by the vehicle's novelty; however, most paid little attention to its autonomous appearance. The video includes demonstrative examples of the kinds of reactions we observed, which we hope will further a dialogue on the role of eHMI in AV-pedestrian interactions.

The real T(h)OR: evaluation of emergency take-over on a test track

  • Anna-Katharina Frison
  • Philipp Wintersberger
  • Clemens Schartmüller
  • Andreas Riener

Take-Over Requests are one of the most prominent topics in automated driving research and have recently been addressed by numerous publications. However, most results were solely obtained in simulated driving environments. Thus, it is important to investigate whether or not these results can be validated in user studies with real vehicles. We conducted an experiment on a test track, where a conditionally automated vehicle was simulated using a driving robot. Participants engaged in Non-Driving Related Tasks were interrupted by Take-Over Requests and had to avoid hitting a real obstacle. This video gives a first insight into our setting, which allows evaluation of imminent handover situations on a test track.

CORA, a prototype for a cooperative speech-based on-demand intersection assistant

  • Martin Heckmann
  • Dennis Orth
  • Mark Dunn
  • Nico Steinhardt
  • Bram Bolder
  • Dorothea Kolossa

We present the first speech-based advanced driver assistance prototype. It is based on our previously proposed on-demand communication concept for the interaction between the driver and his or her vehicle. Using this concept, drivers can flexibly activate the system via speech whenever they want to receive assistance. Driving simulator studies showed that an instantiation of this concept as an intersection assistant, supporting the driver in turning left, was well received by drivers and preferred over an alternative, vision-based system. In this paper, we present a prototype implementation and give details on how we adapted it to the intricacies of urban traffic as well as to the shortcomings of current sensor technology in establishing an adequate environment perception. The accompanying video gives an impression of the interaction between the driver and the system when cooperatively turning left from a subordinate road into crossing traffic.

SESSION: Interactive demos

Disappearing textile interface with inherent feedforwards

  • Haoyu Dong

Thanks to the development of microcontrollers, sensors and actuators, interactive devices can now easily disappear into a wide range of physical contexts. However, this disappearing interaction scenario may confuse users about where and how to interact. Therefore, a research project was conducted to investigate different inherent feedforwards for this disappearing interaction scenario in textile surfaces. A Tangible User Interface (TUI) for volume adjustment was designed, which provides both visual and shape-changing feedforwards. This interface can be implemented in ubiquitous soft surfaces; in this demo, it is realized as a textile-based Human-Machine Interface (HMI) in the vehicle seat. The textile interface provides a natural and enjoyable HMI concept. This report describes the theoretical background, prototype, user test, demo setup and contribution.

Synopticon: a real-time data fusion platform for behavioral research

  • Michael Hildebrandt
  • Jens-Patrick Langstrand
  • Hoa Thi Nguyen

Synopticon is a collection of tools for managing complex, multi-sensory data streams when conducting behavioral research in simulators or on the road. Synopticon's functionality includes automatic gaze object detection, multi-camera synchronization, camera-sensor synchronization (e.g. physiological sensors), camera-simulator synchronization, and support for computer vision and machine learning.

Concept simulator K3F: a flexible framework for driving simulations

  • Jan Conrad
  • Dieter Wallach
  • Arthur Barz
  • Daniel Kerpen
  • Tobias Puderer
  • Andreas Weisenburg

Developing and evaluating automotive user interfaces and driver assistance systems in real vehicles and in simulators is an effortful and costly process. To reduce the effort in defining and conducting user studies, this demo introduces the concept simulator K3F. K3F is a highly flexible driving simulation environment that allows a quick and easy modification and exchange of its hard- and software components including simulation software, dashboard/infotainment, and peripheral systems. The K3F software supports a convenient setup of user studies and a comprehensive collection of data. It simplifies the creation and modification of virtual environments and the development of real-world street scenarios. The integration of a driver model based on the ACT-R cognitive architecture allows not only the explanation of observed behavior, but also the prediction of human driving patterns.

Virtual reality driving simulator for user studies on automated driving

  • Andreas Riegler
  • Andreas Riener
  • Clemens Holzmann

Nowadays, HCI research on automated driving is commonly carried out using either low-quality setups with 2D monitors or expensive driving simulators with motion platforms. Furthermore, software for user studies on automated driving is often expensive and hard to modify for different scenarios. We aim to fill this gap by proposing a low-cost, high-fidelity immersive prototyping simulator that makes use of virtual reality (VR) technology: AutoWSD, an automated driving simulator for research on windshield displays. We showcase a hybrid software and hardware solution, demonstrate how to design and implement scenarios for user studies, and thereby encourage discussion about potential improvements and extensions for AutoWSD, as well as about trust, acceptance, user experience and simulator sickness in automated driving.


Doctoral Colloquium

Laura Boffi, University of Ferrara, Ferrara, Italy
Supervisor: Giuseppe Mincolelli

Title: "Cars with an Intent"

Abstract: In the near future, autonomous cars will populate the urban environment. While they will in fact be urban-scale robots immersed in a socio-technical context, so far autonomous cars have been looked at almost exclusively from the perspective of safety and functionality, and they have not been designed towards being social “urban beings”. “Cars with an Intent” is a design-research-driven PhD project which envisions cars acting beyond their core objectives of functionality and safety, by embedding intentional behaviours to prompt positive and enriching car-to-human and human-to-human relationships.


Michael Gerber, CARRS-Q, QUT, Brisbane, Australia
Supervisor: Ronald Schroeter

Title: "Attention Management to Improve Fallback-Readiness in Conditional Automated Vehicles"

My research aims to explore automotive UI concepts to support fallback-readiness in conditionally automated vehicles. Such vehicles allow drivers to perform "non-driving related tasks" (NDRTs), as long as they remain ready to safely take over control of the vehicle when needed. However, one of the most challenging hurdles in this control transition is the driver's likely lack of Situation Awareness (SA). Good SA is the basis of a safe transition. This research focuses on improving SA during NDRTs. My approach is to design AR applications that utilise a fallback-driver's attention in two ways: self-initiated "voluntary attention" (VA) during the NDRT, and interruption management to facilitate a task switch. Overall, the goal is better management of the driver's attention towards the driving-related task during automated driving. An iterative, SA-adapted user-interface design method is proposed to develop design considerations, implement proofs of concept, and evaluate the concepts in simulator studies.


Kai Holländer, LMU Munich, Munich, Germany
Supervisor: Andreas Butz

Title: "Enhancing the Interaction Between Automated Vehicles & Vulnerable Road Users”

Researchers from industry and academia are working on increasing the level of automation of everyday vehicles. This endeavour will eventually result in fully automated driving where no human controller is necessary, marking a groundbreaking change in the automotive sector. Vehicle interiors, the way we think about individually owned cars, and infrastructure might transform. As a side effect, the question arises whether new explicit vehicle-to-pedestrian communication signals should be added to foster safety in the interaction between automated vehicles and vulnerable road users. At the moment, we see a lively debate in the automotive research domain regarding this open challenge. I believe that the interaction between these entities is a crucial aspect for the acceptance and success of automated driving. For my PhD thesis I want to contribute design guidelines on how to enhance safety for vulnerable road users when interacting with automated vehicles. Furthermore, I aim to propose novel concepts for pedestrian-vehicle interaction and suggestions on how to evaluate them.


Abhishek Mukhopadhyay, Indian Institute of Science, Bangalore, India
Supervisors: Imon Mukherjee and Pradipta Biswas

Title: “Ride Quality Monitoring of Driver and Co-passengers in Autonomous Vehicle”

In autonomous vehicles, awareness of autopilot capacities and incapacities leads to distinct ‘co-driving’ practices. ‘Co-driving’ systems are designed to keep drivers engaged as well as to interact with autopilots and other vehicles. Automatic detection of the driver's and co-passengers' mental state and cognitive load can be used to take evasive action to prevent accidents. Even though many options, such as facial expressions, acoustic features of the voice, skin responses, and eye gaze movements, have been explored with varying degrees of success, detecting and reducing the cognitive load of the driver and co-passengers remains challenging. This dissertation will contribute to developing an intelligent system that estimates the cognitive load of both driver and co-passengers, taking into account the situation outside and the environment inside the car. Accordingly, it will alert the driving system and enhance the overall safety of the autonomous vehicle and the comfort of drivers and passengers.


Holger Schmidt, Stuttgart Media University, Stuttgart, Germany
Supervisors: Albrecht Schmidt, Gottfried Zimmermann

Title: "Gaze Based Interactions in Future Automotive Applications Vehicles and ‘Readiness’ Test Design"

The ongoing development of self-driving vehicles seems to go well with the growing demand for the use of mobile devices throughout the day. However, for a long time to come, autonomous vehicles will still need manual intervention in unforeseen and potentially dangerous situations. Therefore, it is important for the drivers of autonomous vehicles to stay aware of the traffic situation around them so that they are able to quickly take over control of the vehicle. As a core part of this dissertation, we developed an adaptive prototype that presents media content on a simulated windshield display and uses gaze tracking as an additional form of input for the driver. Although we intentionally pull the driver's gaze away from the driving situation, this seems to be less of a distraction than using hand-held mobile devices or dash-integrated display devices. We hypothesize that the time to regain control with our prototype is shorter than when using a hand-held mobile device or a dash-integrated display device. We plan to evaluate this prototype in our stationary driving simulator.


Emily Shaw, University of Nottingham, Nottingham, United Kingdom
Supervisors: Gary Burnett, David Large

Title: "Ready, Set, Drive? Exploring Driver Skills and Training for Future Automated"

The introduction of vehicles with automation capability to the commercial market has fundamentally changed, and will continue to change, the role of the driver. However, this role, as well as the skills required to effectively and safely carry out the driving task in future automated vehicles, has yet to be mapped out. Many drivers do not have an accurate mental model of what automated systems can do. This drives behavioural adaptations that typically lead to the well-known human performance costs associated with automation. Using a combination of qualitative research methods and simulator studies, this research aims to explore the specific skills required to safely and effectively carry out all elements of the driving task for future vehicles. A key objective of this research is to ensure drivers of these vehicles are ready to take on the redesigned role of the driver and able to resume the driving task during dynamic operations.


Taufik Akbar Sitompul, School of Innovation, Design and Engineering, Mälardalen University, Västerås, Sweden and CrossControl AB, Västerås, Sweden
Supervisor: Rikard Lindell

Title: "Using See-through Visualization in Industrial Vehicles”

This PhD project investigates the use of see-through visualization in three different industrial vehicles: excavators, mobile cranes, and forest harvesters. We hypothesize that presenting critical information near the operator's line of sight can reduce the risk of overlooking information while operating industrial vehicles. The research involves (1) reviewing available see-through displays in order to envision what kinds of visualization could possibly be made, (2) using indirect approaches to gain sufficient knowledge of industrial vehicles' operations, (3) developing mixed reality simulations to present see-through information for industrial vehicles' operations, and (4) evaluating the effectiveness of such visualization by involving test users in controlled environments. The main expected contribution of this PhD project is a set of see-through visualization concepts that could help operators of industrial vehicles perform their work while maintaining awareness of the machine and the surroundings.