As a promising cognitive construct, situation awareness (SA) helps to assess operators' dynamic knowledge of the driving task in different contexts as well as to inspire driving assistance system design. This workshop will discuss SA in an automotive context, emphasizing the increasing challenges that are related to vehicle automation. We will begin with an understanding of SA and SA information needs. Next, we will explore how to apply SA concepts to the design of Advanced Driver Assistance Systems (ADAS) and Automated Driving (AD) user interfaces. Finally, we will discuss several useful SA measures, using exercises to demonstrate the value of some example measures.
This workshop discusses the balance between safety and productivity as automated vehicles turn into 'mobile offices': spaces where non-driving activities are performed during one's daily commute. Technological developments reduce the active role of the human driver, who might nonetheless be required to intervene occasionally. To what extent are drivers allowed to dedicate resources to non-driving, work-related activities? To address this critical question, the workshop brings together a diverse community of researchers and practitioners who are interested in questions such as: what non-driving activities are likely to be performed on one's way to work and back; what is a useful taxonomy of these tasks; how can various tasks be studied in experimental settings; and what are the criteria for assessing human performance in automated vehicles? To foster further dialogue, the outcome of the workshop will be an online blog where attendees can contribute their own thoughts: https://medium.com/the-mobile-office.
Methods and metrics for studying interactions between automated vehicles and other road users in their vicinity, such as pedestrians, cyclists, and non-automated vehicles, are not yet established. This workshop focuses on identifying the strengths and weaknesses of various methodologies that could potentially be used to study such interactions. The objectives are to determine proper experimental designs, the sensitivity of metrics for measuring user behavior, ecological validity, the generalizability of findings, how findings can be translated into actionable requirements, and alternatives for conducting longitudinal field studies. The workshop will be interactive and involve hands-on activities. It will consolidate existing knowledge, identify recurring issues, and explore paths towards resolving them. The outcome will be compiled into a paper to share this valuable knowledge with the broader research community.
Highly automated vehicles require users to take over control when the limits of the automation are reached. These control transitions can lead to hazardous accidents if the human and the automation do not have consistent mental models of each other's abilities, authorities, and responsibilities. In this workshop, we aim to apply existing knowledge to identify issues in control transitions in cooperative human-machine systems and to propose solutions for them. Concrete focus points are controllability, driving mode awareness, interaction design problems such as conveying the state of the driver, the automation, and the vehicle, and evaluation methods.
Augmented reality (AR) has the potential to improve road safety, support more immersive (non-)driving-related activities, and ultimately enhance the driving experience. AR may also be an enabling technology for easing the transition towards automated driving. However, augmented reality still faces a number of technical challenges when applied in vehicles, and several human factors issues remain to be solved. In this workshop, we will discuss the potential and constraints as well as the impact, role, and adequacy of AR in driving applications. The primary goal of this workshop is to define a research agenda for the use of AR in intelligent vehicles within the next 3 to 5 years.
Automated driving systems (ADS), especially at higher levels of automation, seem to be the new focus of innovation for future mobility. Technological advances in automated travel open up new challenges for road traffic. Existing automotive research focuses on problem solving and on observational approaches that include users and their imagination of the future of mobility to analyze acceptance and user experience of "incremental" (stepwise improved) innovations. "Radical" innovations (something new, enabled by technology or a change of meaning), on the other hand, increase product quality in leaps that go beyond incremental improvement. This workshop aims to test current research approaches to automated driving against the charge of "trying to improve sitting in a horse carriage" and to discuss how we can design "radical" innovations for ADS beyond the "horse carriage". Within this interactive workshop, we will use a design thinking approach to refocus on the underlying problems that ADSs originally aim to solve and to generate ideas for radical innovations.
In-car emotion detection and regulation have become an emerging and important branch of research within the automotive domain. Different emotional states can greatly influence human driving performance and user experience in both manual and automated driving. Monitoring and regulating relevant emotional states is therefore important to avoid critical driving scenarios when the human driver is in charge, and to ensure comfort and acceptance in autonomous driving. In this workshop we want to discuss empathic user interface research to address challenges and opportunities and to reveal new research directions for future work. The workshop provides a forum for exchange and discussion on empathic user interfaces, including methods for emotion recognition and regulation, empathic automotive human-machine interaction design, user evaluation and measurement, and the subsequent improvement of the autonomous driving experience.
Mobility is shifting towards flexible sharing, combined transportation modes, increased vehicle automation, and digital customer services. User experience and acceptance are highly important criteria for the success of such novel concepts, and consequently their human interfaces have to be designed with creativity and responsibility. This workshop addresses this need by providing a holistic frame for the ideation and discussion of user interface concepts for public transport vehicles. The expected outcome of the workshop is a set of opportunities, design concepts, and challenges, which could serve as input for a research agenda for the field.
This workshop addresses key trust-related issues in the context of automated driving and aims at establishing a common ground for future research. Building on the outcome of the previous workshop at AutoUI 2017, three main aspects are targeted within interactive sessions: (1) formulation of a comprehensive set of definitions for trust in automated systems; (2) development of interface approaches for mitigating overtrust and undertrust issues; (3) identification of an appropriate timing of trust-related cues. The current research efforts of both workshop organizers and participants serve as starting points for several breakout sessions, each addressing one of the three main workshop goals. The outcome of this workshop will provide a benchmark for future work in the field and is also intended to inspire joint publications among the participants.
The aim of this half-day workshop is to explore the topic of interaction between automated vehicles and vulnerable road users (VRUs), such as pedestrians or cyclists, in an interactive setting. The workshop is hands-on, with no submission of position papers or slots for participant presentations. By having participants go through a brief two-step design and evaluation process, it aims to derive knowledge about communication needs across various traffic scenarios, resulting in metrics and methodologies for evaluating those needs. The workshop results will be collected and preserved on a website hosted by the organisers post-workshop.
By discovering unconscious ritualistic actions in everyday driving, such as preparing for the morning commute, we seek design opportunities to help people achieve critical emotional transitions, such as moving from an anxious state to relief. We have gathered and analysed data from workshops and phone interviews with a variety of vehicle and public transport users to capture these key ritualistic scenarios and map their emotional transitions. Design ideation is used to generate concepts for improving the in-vehicle user experience through redesign of vehicle layout, environment, and analogue and digital interfaces. We report a set of human-centred design approaches that allow us to study the details of actions, objects, people, emotions, and meaning for typical car users; these details are indispensable for designing driving experiences, yet are often overlooked in the car design process.
Advances in sensing technology enable the emotional state of car drivers to be captured and interfaces to be built that respond to these emotions. To evaluate such emotion-aware interfaces, researchers need to evoke certain emotional states within participants. Emotion elicitation in driving studies poses a challenge, as the driving task can interfere with the elicitation task. Induced emotions also lose intensity with time and through secondary tasks. We therefore analyzed different emotion elicitation techniques for their suitability in automotive research and compared the most promising approaches in a user study. We recommend using autobiographical recollection to induce emotions in driving studies, and suggest a way to prolong emotional states with music playback. We discuss experiences from a driving simulator study, including solutions for addressing potential privacy issues.
This paper presents six interface concepts for Autonomous Vehicles to communicate their intention to Vulnerable Road Users. The concepts were designed to be scalable and versatile, and attempt to address some of the limitations of existing concepts towards unambiguous communication. The interfaces currently exist as initial concepts generated from brainstorming sessions and are in the process of being validated through prototype development and controlled studies.
Eye tracking technology is becoming an important component of Advanced Driver Assistance Systems. Unfortunately, eye tracking systems require calibration to correctly associate pupil positions with gaze directions, and periodic recalibration is necessary because accuracy deteriorates over time. This routine reduces the usability and practicality of in-vehicle eye tracking technology. We propose an approach that performs eye tracking calibration automatically in real time. We apply an object detection algorithm to continually detect objects that are likely to attract the driver's attention, such as traffic signs and lights. These objects are, in turn, used as moving stimuli for the gaze accuracy maintenance procedure. The error vectors between recorded fixations and moving targets are calculated immediately, and their weighted average is used to compensate for the offset of fixations in real time. We evaluated our method on both laboratory data and real driving data. The results show that it effectively reduces gaze tracking errors.
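The correction step can be sketched as follows. This is a minimal illustration under assumed 2D screen coordinates; the function names and the weighting scheme are hypothetical, since the abstract does not specify the implementation:

```python
def update_offset(fixations, targets, weights):
    """Weighted average of the error vectors between recorded
    fixations and the known positions of moving targets.
    Higher weights could, e.g., favor more recent samples."""
    errors = [(tx - fx, ty - fy)
              for (fx, fy), (tx, ty) in zip(fixations, targets)]
    total = sum(weights)
    ox = sum(w * ex for w, (ex, _) in zip(weights, errors)) / total
    oy = sum(w * ey for w, (_, ey) in zip(weights, errors)) / total
    return (ox, oy)

def correct(gaze, offset):
    """Shift a raw gaze sample by the estimated offset."""
    return (gaze[0] + offset[0], gaze[1] + offset[1])
```

If the tracker drifts by a constant offset, the weighted mean of the error vectors recovers that offset, and subsequent gaze samples can be corrected on the fly.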
Poor trust calibration in autonomous vehicles often degrades overall system performance in terms of safety or efficiency. Existing studies have primarily examined the importance of system transparency for maintaining proper trust calibration, with little emphasis on how to detect over-trust and under-trust or how to recover from them. To address these research gaps, we first provide a framework for detecting the calibration status on the basis of the user's reliance behavior. We then propose a new concept of cognitive cues, called trust calibration cues (TCCs), which trigger the user to quickly restore appropriate trust calibration. With our framework and TCCs, this study explores a novel method of adaptive trust calibration. We will evaluate our framework and examine the effectiveness of TCCs with a newly developed online drone simulator.
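The idea of inferring calibration status from reliance behavior can be illustrated with a minimal sketch. The function name and the binary notions of reliance and system capability are simplifying assumptions for illustration, not the authors' framework:

```python
def calibration_status(relied, system_reliable):
    """Classify trust calibration for one task episode by comparing
    observed reliance behavior against actual system capability."""
    if relied and not system_reliable:
        return "over-trust"   # relying where the system is not capable
    if not relied and system_reliable:
        return "under-trust"  # ignoring a system that would have succeeded
    return "calibrated"
```

In the framework described above, a detected mismatch would then trigger a TCC prompting the user to re-calibrate their trust.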
While Fully Automated Vehicles (FAVs) have the potential to significantly expand older adults' access to mobility, limited research has focused on older adults' perceptions of such technology. The current driving simulation-based study will investigate factors that may govern older adults' perceptions of FAVs with respect to trust, acceptability, and safety. Participants (65+) will experience scenarios of manual and fully automated driving in a high-fidelity driving simulator. Their perceptions of the FAV will be measured before and after the driving experiences using questionnaires. Physiological and behavioral data will also be collected throughout the driving sessions to investigate whether positive or negative perceptions of technology are associated with behavioral or physiological responses. In addition, driving performance and driving styles of participants will be captured during manual driving to investigate whether an alignment between an individual's driving style and the FAV driving style will lead to a more positive perception towards FAVs.
There is an increasing need for communication between autonomous cars and pedestrians. Some conceptual solutions have been proposed to address this issue, such as equipping a car with various communication modalities (eyes, smile, text, light, and projector) to communicate with pedestrians. However, there has been no detailed study comparing these communication modalities. In this study, we compare five modalities in a pedestrian street-crossing situation via a video experiment. The results show that text is better than the other modalities at expressing the car's intention to pedestrians. In addition, we compare the modalities across different scenarios and environments, as well as pedestrians' perception of the modalities.
Previously, the classical analog speedometer was the prevalent form of speed indication in cars. With the emergence of new, freely programmable instrument clusters, it is now possible to use any form of visualization to display driving speed. In a driving simulator study with n=17 subjects, we examined the impact of diverse speedometer variants on driving performance, gaze duration, and subjective ratings of user experience and workload. Initial results confirm that the variants differ in their effects. The conventional speedometer resulted in the shortest eyes-off-road times, but was rated worst with respect to UX (hedonic quality). The digital speedometer variant achieved polarizing results, while the zoom speedometer performed very well in general. The bracket and linear versions of the speedometer were rated poorly in most of the analyzed criteria compared to the alternatives.
In the sharing economy, users do not necessarily own a car but use commercial services to rent one according to their needs, requirements, and liking. To provide the best driving experience and safety, it is necessary that they understand the car and its functionality. Assuming that future vehicles will rely mostly on screens for information presentation, a possible way to foster this understanding is remote, user-controlled dashboard personalization: users design a layout according to their individual preferences prior to the drive and thus use a familiar-looking dashboard. The main contributions of this work are (1) first design guidelines for mobile applications that allow personalization of dashboards and (2) information on users' views regarding dashboard personalization. Results of a design workshop with usability experts and a follow-up usability study show that user-controlled personalization has potential and that our design guidelines provide a valid foundation for future research.
Automated vehicles could omit traditional steering controls to provide larger spaces for driver-passengers or to prevent unnecessary interventions. However, manual control could still be necessary to provide manual driving fun or to respond to Take-Over Requests (TORs). This paper investigates whether brought-in consumer devices (in this case, a 10.2-inch tablet) can act as an input alternative to classical steering wheels in TOR situations. Results of a driving simulator study (n=14) confirm that responding to take-overs with nomadic devices can reduce response times in imminent transitions during engagement in Non-Driving Related Tasks (NDRTs), as a change of the 'device in hands' is omitted. Furthermore, subjective scales addressing user experience show that the approach is well accepted. We conclude that nomadic device integration is a crucial prerequisite for the success of automated vehicles, but that several pivotal issues around steering input still need to be solved.
This paper presents DriverSense, a novel experimental platform for designing and validating onboard user interfaces for self-driving and remotely controlled vehicles. Most currently existing academic and industrial testbeds and vehicular simulators are designed to reproduce with high fidelity the ergonomic aspects of the driving experience. However, with the increasing deployment of self-driving and remotely controlled vehicles, the digital components of the driving experience are expected to become more and more relevant, because users will be less engaged in actual driving tasks and more involved in oversight activities. In this respect, high visual testbed fidelity becomes an important prerequisite for supporting the design and evaluation of future onboard interfaces. DriverSense, which is based on the hyper-realistic video game GTA V, has been developed to satisfy this need. To showcase its experimental flexibility, a set of selected case studies, including Head-Up Displays (HUDs), Augmented Reality (AR), and directional audio solutions, is presented.
A first step toward stress reduction among vehicle drivers is awareness that stress is present. This project presents a biometric interface for stress detection in drivers, built with open-source sensors and hardware. In two series of experiments, we induced stress in test subjects by making them drive progressively more difficult scenarios in a simulator. Using the C4.5 classification algorithm, we classified the subjects' biometric data to determine whether a subject was stressed or not. In another series of experiments, we tested the efficacy of two driver feedback systems, a haptic one and a visual one. Identifying a stressful situation allows real-time feedback to drivers, so they can become aware of their stressed state, take corrective actions in time, and avoid behavior leading to an accident.
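The core of how C4.5 handles a numeric biometric feature — choosing a split threshold by information gain — can be sketched as follows. This is a simplification for illustration: a full C4.5 tree grows recursively over many features and uses the gain ratio rather than raw gain, and the feature and data here are hypothetical.

```python
import math

def entropy(labels):
    """Shannon entropy of a label sequence (C4.5's impurity measure)."""
    n = len(labels)
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def best_split(values, labels):
    """Threshold on one numeric feature (e.g. heart rate) that
    maximizes information gain, as C4.5 does for numeric attributes.
    Returns (threshold, gain)."""
    base = entropy(labels)
    best_t, best_gain = None, 0.0
    for t in sorted(set(values)):
        left = [y for v, y in zip(values, labels) if v <= t]
        right = [y for v, y in zip(values, labels) if v > t]
        if not left or not right:
            continue  # skip splits that leave one side empty
        gain = base - len(left) / len(labels) * entropy(left) \
                    - len(right) / len(labels) * entropy(right)
        if gain > best_gain:
            best_t, best_gain = t, gain
    return best_t, best_gain
```

On well-separated data (e.g. heart rates of 60–64 bpm labeled "not stressed" vs. 90–100 bpm labeled "stressed"), the chosen threshold cleanly divides the two classes and the gain equals the full label entropy.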
This paper investigates whether an Augmented Reality Head-up Display (AR-HUD) supports usability and reduces visual demand during conditionally automated driving. In a driving simulator study, 24 drivers experienced several driving scenarios while driving with conditional automation. The drivers completed one drive with a fully developed HMI designed for automated driving (AD-HMI) that presented visual information in the cluster display and included auditory and tactile output. In another drive, the same drivers were additionally supported by dynamic and static visual feedback via an AR-HUD concept. The latter was preferred by more than 80% of the sample due to its higher information content and the possibility to keep the eyes on the road. Drivers rated the AR concept as more understandable and more useful. Eye tracking revealed a lower percentage of gazes at the instrument cluster during AR-HUD drives.
Situation awareness in highly automated vehicles can help the driver get back in the loop during a take-over request (TOR). We propose presenting the driver with a detailed digital representation of the situation causing a TOR via a scaled-down digital twin of the highway inside the car. The digital twin virtualizes real-time traffic information and is displayed before the actual TOR. In the car cockpit, an augmented reality headset or a stereoscopic 3D (S3D) interface can realize the augmentation. As today's hardware has technical limitations, we built an HMD-based mock-up. We conducted a user study (N=20) to assess driver behavior during a TOR. We found that workload decreases and steering performance rises significantly with the proposed system. We argue that augmenting the surrounding world inside the car improves performance during a TOR due to better awareness of the upcoming situation.
The communication of system uncertainties may be key for overcoming challenges related to overtrust in automated driving. Existing approaches are limited to conveying uncertainties using visual displays in the instrument cluster. This requires operators to regularly monitor the display in order to perceive changes, which impedes the execution of non-driving related tasks and thereby degrades the user experience. This study evaluates variables for the communication of uncertainties using peripheral awareness displays, considering changes in brightness, hue, position, size, pulse frequency, and movement speed. All variables were assessed in terms of how well participants could distinguish different instances, how logical they were, and how interruptive they were to a secondary task. With the exception of changes in position, all variables were rated highly in terms of logic, while changes in pulse frequency were perceived as most interruptive. The results inform the development of unobtrusive interfaces for uncertainty communication.
Recent findings have indicated that the communication of uncertainties is a promising approach for overcoming human factors challenges associated with overtrust issues. The existing approaches, however, are limited in that they require operators to monitor the instrument cluster to perceive changes. As a consequence, significant changes may be missed and operators are regularly interrupted in the execution of non-driving related tasks even if the system is performing well. To overcome this, unobtrusive interfaces are required that are only interruptive if needed. This paper presents a lab-based study aiming at the preliminary evaluation of haptic variables for communicating automation uncertainties using a haptic vehicle seat. The initial results indicate that particularly increases in amplitude as well as a rhythm consisting of long vibrations separated by short breaks are well suited for communicating the exceedance of specified uncertainty thresholds. The communication of decreases in uncertainty using vibration cannot be recommended.
This work investigated differences between preference and performance in human-computer interaction and their dependency on the respective user skill level. A driving simulator study with N=57 participants was conducted to evaluate a Human-Machine Interface for a Level 3 Automated Driving System. Two experimenters rated interaction performance (e.g., input errors, mode confusions). Additionally, participants reported their preference by means of perceived usability and acceptance. The sample was split into four groups based on the quartiles of performance. Results revealed that the four groups differed significantly in their performance. However, preference ratings did not show this effect. Thus, the present research found evidence that the dissociation of performance and preference depends on participants' skills. Finally, future research directions are outlined.
Participation in road traffic frequently requires fast and accurate understanding of environmental object characteristics. Here we introduce an assistance function and corresponding interface targeted at enhancing a driver's perception and understanding of environment dynamics in order to improve driving safety and performance. The core functionality of this assistance function lies in the tactile communication of spatio-temporal proximity information about one or multiple traffic participants that are on a collision trajectory with the ego-vehicle. We investigate effects of this assistance function on driver perception and performance in a driving simulator study. Preliminary results show that participants were able to intuitively understand and use the assistance function and that its utility seems to increase with task difficulty.
The ability to monitor and detect potentially dangerous behaviour in surrounding traffic is vital for the development of intelligent vehicles. However, data collection for these kinds of scenarios is difficult in real life, so a driving simulator becomes an important substitute. In this paper we present an approach to enhancing driving simulators. We experiment on an open-source development platform, which is used to test real-life use cases within a simulated vehicle environment. We propose replacing pre-programmed traffic dynamics with real driving data recorded from human drivers in the same environment. This enhances the engagement of the host driver in a more realistically simulated traffic scenario. Signal lights and indicator sounds are also integrated to enrich the driver's sensation. Our preliminary quantitative and qualitative evaluation shows that the enhanced traffic simulation improves the driver's perception of the simulator's realism.
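Replaying recorded human driving in place of scripted traffic can be sketched as time-based interpolation over logged positions. The log format (timestamp, x, y) and the function name are assumptions for illustration, not the platform's actual API:

```python
def replay_position(log, t):
    """Position of a replayed vehicle at simulation time t, linearly
    interpolated from a driver's recorded (timestamp, x, y) samples.
    The log must be sorted by timestamp."""
    if t <= log[0][0]:
        return log[0][1:]          # hold first position before log starts
    for (t0, x0, y0), (t1, x1, y1) in zip(log, log[1:]):
        if t0 <= t <= t1:
            a = (t - t0) / (t1 - t0)
            return (x0 + a * (x1 - x0), y0 + a * (y1 - y0))
    return log[-1][1:]             # hold last position after log ends
```

At each simulator tick, the replayed vehicle is placed at the interpolated position, so surrounding traffic follows the timing and trajectories of real human drivers rather than pre-programmed paths.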
The way drivers relate to cars is likely to change with the rise of automated vehicles and new ownership models. However, personal relationships towards products are an important part of buying decisions. Car manufacturers thus need to provide novel bonding experiences for their future customers in order to stay competitive. We introduce a vehicle attachment model based on related work from other domains. In interviews with 16 car owners we verify the approach as promising and derive four attachment types by applying the model: interviewees' personal attachments were grounded in either self-empowering reasons, memories with the car, increased status, or a loving friendship towards their car. We propose how to address the needs of these four attachment types as a first step towards emotionally irreplaceable automated and shared vehicles.
Many automotive user studies allow users to experience and evaluate interactive concepts. They are, however, often limited to small and specific groups of participants, such as students or experts. This can limit the generalizability of results for future users. A possible solution is to let a large group of unbiased users actively experience an interactive prototype and generate new ideas, but there is little experience with the realization and benefits of such an approach. We placed an interactive prototype in a public space and gathered objective and subjective data from 693 participants over the course of three months. We found high variance in data quality and identified the resulting restrictions on suitable research questions. From this we derive concrete requirements for hardware, software, and analytics, e.g. the need for assessing data quality, and give examples of how this approach lets users explore a system and give first-contact feedback, which differs greatly from common in-depth expert analyses.
Advanced vehicle technologies (AVTs) have the potential to modify an older driver's behind-the-wheel performance by compensating for age- and/or health-related changes that can negatively impact their ability to operate a motor vehicle. However, the safety implications of these rapidly evolving technologies are not well understood. A scoping review was conducted to understand the current state of research on AVTs, with a particular focus on subjective outcome measures specific to older drivers. Sixteen articles met the inclusion criteria for this scoping review. The methods used to address subjective outcomes across studies were summarized. Seven main subjective outcomes were identified: trust, functionality, satisfaction, usability, workload, acceptability, and usefulness. The results highlight inconsistencies in how these concepts are defined across the research. Consequently, there is an identified need for a more rigorous classification system and for consistent application and interpretation of subjective measures with regard to AVTs.
When it comes to highly automated driving, several studies indicate that drivers should be "kept in the loop" when driving in automated mode in order to be better prepared when they need to take over. The challenge lies in finding a way to raise drivers' situation awareness without annoying a driver who may be occupied with another task. Ambient light systems using LED visualizations provide a feasible way to draw attention; however, the kind of information that can be communicated is limited. In this paper, we present an exploratory study in which we investigated the semantic quality of different LED patterns (shown on an LED strip) by capturing experience and associated information contents. Our initial findings show that LED visualizations that are experienced quite similarly at first can nonetheless be distinctive with regard to the associated information contents.
Drivers' emotional and physical states have a big impact on their driving performance. New technological sensing methods are currently being investigated and will soon make it possible to detect the driver's state automatically. Yet how to communicate the detected state to the driver is less well understood. In an iterative design process, we developed two concepts to increase the driver's awareness of this issue:
(1) a dashboard which provides a continuous overview of four potentially safety-critical states, namely drowsiness, aggressiveness, high workload, and hypoglycaemia, and
(2) on-time warnings which alert the driver to an immediate safety risk. We then let 70 drivers experience both concepts in a driving simulation and collected their qualitative feedback in post-study interviews. We found that participants preferred to receive only safety-critical notifications of the driver's state but appreciated a progressive status indicator for easier interpretation. Based on our findings, we suggest first recommendations for visualizing driver's states.
One potential contributor to mitigating the CO2 emissions caused by road transport is eco-driving, which encompasses all driver behaviors performed to reduce the vehicle's energy consumption. Drivers' optimal on-road interaction with kinetic energy resources is particularly relevant for eco-driving success. Hence the question: what information do drivers require to interact optimally with kinetic energy resources? We conducted ten interviews with hybrid electric vehicle (HEV) eco-drivers who actively interact with kinetic energy resources on a daily basis. From these interviews, a set of information requirements was derived. Further steps will comprise the development and testing of an interface prototype based on these information requirements.
Voice interaction provides a natural and efficient form of communication with our cars. Current vehicles require the driver to push a button or to utter an artificial keyword before they can use speech input. This limits the potential naturalness and efficiency of voice input. In human communication, we usually use eye contact to express our intention to communicate with others. We conducted a user study with 25 participants that investigated gaze as a means to activate speech input while being occupied with a primary task. Our results indicated a strong dependency on the task. For tasks that refer to information on the screen, gaze activation was superior to push-to-talk and keyword, but it was less valuable if the task had no relation to screen content. We conclude that gaze cannot replace other modes for activation, but it can boost efficiency and user experience for display related tasks.
Automated driving functions are traditionally tested in on-road studies that, however, focus mainly on technological aspects (e.g., sensor accuracy). Field studies addressing users' individual needs and expectations are still rare. As a consequence, it is still unclear whether or not automated driving systems will reach comprehensive market penetration. To address this issue, we set up a user study and compared users' acceptance (utilizing TAM) as passengers (N=12) of a traditional group taxi vs. an automated bus shuttle, both driving in regular traffic. Results show that participants questioned the usefulness of the automated bus shuttle, mainly due to its reduced speed, but, on the other hand, rated their perceived ease of use and their attitude towards using the ADS more positively than expected. Thus, we conclude that with further development of the technology and a user-centered design approach, high user acceptance of ADSs can ultimately be achieved.
Currently, paramedics are provided information from the 911 operator regarding the emergency faced by a patient or victim in medical distress. While many distress scenarios exist, the challenges faced by a victim with a medical problem have to be imagined by the paramedics driving to the emergency scene. Augmenting the ambulance instrument panel with pre-triage scenarios of patients will help prepare paramedics for an improved patient-care protocol on site. Providing paramedics with patient distress conditions in real time, by syncing vital statistics, body positioning, and level of medical distress, will help facilitate their preparation en route.
Natural science research methods are appropriate for the study of existing and emerging phenomena; however, they are insufficient for the study of "wicked problems". We use design science research methodology to answer research questions relevant to human problems via the creation of innovative artifacts (RAUX), thereby contributing new knowledge to the body of scientific evidence. RAUX demonstrates the front-end interaction of a supportive remote UX design system in the automotive domain. It not only mitigates the automotive-domain deficiencies that previous research highlights but also supports remote UX research and design activities. Users can navigate through, and be supported in, various activities including contextualization, communication, and presentation.
Scribble is a haptic Human Machine Interface (HMI) that utilizes a drawing interaction to steer a semi-autonomous vehicle. The interface uses a display and a haptic pointing device that enables the driver to draw a path representing the vehicle's future trajectory. The objective of Scribble is to blur the lines between who is in control and to propose a more 'muddy' form of interaction in which higher-level decision making is performed by the human operator, whilst the machine manages more mundane driving tasks such as lane keeping and speed management. We present a fully interactive desktop-based demo that includes a 3D simulation environment for highway driving. The interface has been tested in an elaborate simulation study in collaboration with the Mercedes-Benz Advanced Digital Design team at Daimler AG, Sindelfingen, Germany. This document describes the Scribble concept, system setup, demo setup, and contribution.
This demo presents DriverSense, a novel experimental platform for designing and validating onboard user interfaces for self-driving and remotely controlled vehicles. Most existing vehicular testbeds and simulators are designed to reproduce with high fidelity the ergonomic aspects of the driving experience. However, with the increasing deployment of self-driving and remotely controlled or monitored vehicles, it is expected that the digital components of the driving experience will become more relevant, because users will be less engaged in the actual driving task and more involved in oversight activities. In this respect, high visual testbed fidelity becomes an important prerequisite for supporting the design and evaluation of future interfaces. DriverSense, which is based on the hyper-realistic video game GTA V, has been developed to satisfy this need. To showcase its experimental flexibility, a set of self-driving interfaces has been implemented, including head-up displays (HUDs), augmented reality (AR), and directional audio.
With advancements in display technology and increased user demands, automotive manufacturers are targeting multiple displays in various sizes and orientations. They are also adding features and interactions that span multiple displays in an attempt to better engage users in the car. Today, developers and designers use 'rapid prototyping' tools like UXPin and Framer to quickly validate and test their ideas, but these tools do not support multi-screen and multi-display prototyping environments. Other prototyping tools like EB Guide and Qt, which do support such environments, require extensive software learning and development time that defeats the purpose of 'rapid prototyping'. Hence, we propose a solution that helps automotive designers and developers quickly prototype and test multimedia solutions for multi-screen vehicle systems using any browser-based UX tool.
There is a clear need to address the impact of driver emotions such as anger and happiness on aggressive or distracted driving behaviors. To tackle this issue, we have developed an affect detection system for identifying a driver's emotional arousal, using the driver's physiological data and the vehicle's kinematic data. Multimodal sensors are wirelessly connected to a smartphone, and all the driver and driving data are displayed in our Android application in real time. With the benefits of this multimodal, portable, non-intrusive, and cost-efficient system, subsequent experiments were designed to test and improve the system. After identifying significant features, various machine learning algorithms will be used to model a driver's emotional states. Our final goal is to develop an optimized classifier of specific emotional states, including arousal and valence. We hope to spark lively discussions on driver emotions at AutoUI and use the feedback to improve our system.
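The abstract above does not specify a classifier, so as an illustrative sketch only: one of the simplest models that could map multimodal features (all feature names and values below are hypothetical, not the authors' data) to an arousal class is a nearest-centroid classifier.

```python
import math

# Hypothetical feature vectors: (heart_rate_bpm, skin_conductance_uS, speed_std_kmh)
# Labels: 0 = low arousal, 1 = high arousal. Data are synthetic, for illustration only.
TRAIN = [
    ((62.0, 2.1, 3.0), 0),
    ((68.0, 2.4, 4.5), 0),
    ((95.0, 6.8, 12.0), 1),
    ((102.0, 7.5, 15.0), 1),
]

def _centroid(rows):
    n = len(rows)
    return tuple(sum(r[i] for r in rows) / n for i in range(len(rows[0])))

def fit_centroids(data):
    """Nearest-centroid model: one mean feature vector per arousal class."""
    by_label = {}
    for features, label in data:
        by_label.setdefault(label, []).append(features)
    return {label: _centroid(rows) for label, rows in by_label.items()}

def predict(model, features):
    """Assign the class whose centroid is closest in Euclidean distance."""
    return min(model, key=lambda lbl: math.dist(model[lbl], features))

model = fit_centroids(TRAIN)
print(predict(model, (98.0, 7.0, 13.0)))  # → 1 (high arousal)
```

In practice one would standardize features and use a stronger model, but the centroid sketch shows the basic feature-to-label pipeline such a system needs.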
Conducting User Experience Design (UXD) research in the context of automated driving is challenging today due to the limited availability of automated cars at SAE Level 3 or above. This Extended Abstract presents a novel methodological tool for rapid prototyping, testing, and evaluation of future automotive interface design ideas. The tool combines Virtual Reality (VR) and real-world driving videos in a flexible Unity3D environment to create a visually realistic and immersive experience. Using portable VR headsets, these exploratory experiences can be safely created anywhere, anytime, with little overhead, as shown through this interactive demo. We provide insights from using this method to explore two concepts that use novel windscreen technologies and AR applications for Level 3 automated cars and above.
Auditory displays have been widely used in vehicle contexts because they do not require drivers' visual attention. Examples include collision warning systems, parking assistance systems, and personal navigation devices, to name a few. To explore the full potential of sonification applications in the automotive context, we have developed an in-vehicle sonification system. Our system aims to integrate driving performance data from the vehicle and driver affective state data from physiological sensors in real time. These data can be mapped to auditory parameters to generate various sonification pieces. This sonification can be used to provide feedback about how drivers are driving, guidance on how they are supposed to drive, or situation awareness about the traffic environment. As an early prototype, the present work demonstrates driving data sonification using the MiniSim simulator. We hope this sonification work can enhance driver experience as well as road safety in both manual and automated vehicles.
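The core of such a system is the mapping from a driving parameter to an auditory parameter. As a minimal sketch (the parameter names, pitch range, and speed ceiling are assumptions, not the authors' design), vehicle speed could be mapped linearly onto a frequency range:

```python
def speed_to_frequency(speed_kmh, f_min=220.0, f_max=880.0, v_max=130.0):
    """Map vehicle speed linearly onto a pitch range (here A3 to A5).

    All parameter values are illustrative assumptions: speeds are clamped
    to [0, v_max] and scaled into [f_min, f_max] hertz.
    """
    v = max(0.0, min(speed_kmh, v_max))
    return f_min + (f_max - f_min) * v / v_max

# Standstill maps to the lowest pitch, top speed to the highest
print(speed_to_frequency(0.0))    # → 220.0
print(speed_to_frequency(130.0))  # → 880.0
```

A real sonification engine would feed such values to a synthesizer in real time and could use perceptual (e.g., logarithmic) rather than linear scaling, but the mapping principle is the same.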
The purpose of this project was to understand the difficulties encountered in the placement of secondary controls on an automotive instrument panel while driving a short-term rental car, such as one provided through a service like Zipcar. The outcome is a set of suggested interaction design guidelines for designing a sharing-friendly car. The results suggest that the most common contemporary control locations and interaction types should be used to improve usability. A summary of the guidelines is provided at the end of the video.
The shift from manual to automated driving has been one of the most intensively studied research topics in the AutomotiveUI community in recent years. User acceptance and experience have been extensively discussed in surveys and simulator studies; naturalistic driving studies, however, are still rare. Now, in 2018, we had the chance to test the first authorized automated vehicle (a shuttle bus service) in Germany driving in regular traffic. Our video illustrates our study setup, in which we compared users' acceptance and experience as passengers in an automated versus a manually driven vehicle. In this video paper, we provide first insights into our results.
Predictive touch is an emerging HMI technology that can significantly improve the usability and performance of in-vehicle displays [1-4]. It relies on predicting, early in the pointing gesture, the interface item the driver or passenger intends to select on the display, thereby simplifying the selection task. The user need not touch the display, as the system can autonomously select the predicted interface component. This video shows a prototype of a predictive touch system operating in real time, in laboratory and vehicle environments. It also depicts the prediction results as calculated by the system whilst pointing in a moving car.
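To give a flavor of the idea (this is not the authors' method, which uses probabilistic intent inference; the geometry and target names below are hypothetical), an early prediction can be sketched as linearly extrapolating a partial fingertip trajectory to the display plane and selecting the nearest on-screen target:

```python
# Hypothetical sketch: fingertip samples are (t, x, z) tuples, where x is the
# lateral position along the display and z is the distance to the display plane.

def predict_target(samples, targets):
    """Extrapolate the last two fingertip samples to z = 0 and return the
    name of the nearest target. targets: {name: x_position_on_display}."""
    (t0, x0, z0), (t1, x1, z1) = samples[-2], samples[-1]
    dz = z1 - z0
    if dz >= 0:
        # Not approaching the display: fall back to current lateral position
        x_hit = x1
    else:
        # Steps (in sample intervals) until the trajectory reaches the display
        steps = z1 / -dz
        x_hit = x1 + (x1 - x0) * steps
    return min(targets, key=lambda name: abs(targets[name] - x_hit))

samples = [(0.0, 1.0, 10.0), (0.1, 2.0, 8.0)]   # moving right, approaching
targets = {"volume": 3.0, "navigation": 6.0, "radio": 9.0}
print(predict_target(samples, targets))  # → navigation
```

Real predictive touch systems filter noisy 3D sensor data and output a probability over targets rather than a single point estimate, but the extrapolate-then-match structure conveys why early selection is possible.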
We foresee conversational driver assistants playing a crucial role in automated driving interactions. In this video we present a study of user interactions with an in-vehicle agent, "Theo", under SAE Level 4 automated driving. We use a remote Wizard-of-Oz setup in which participants, sitting in a driving simulator, experience real-life video footage transmitted from a vehicle in the neighborhood and interact with Theo to instruct the vehicle where to go. We configured Theo to present three levels of conversational ability (terse, verbose, and helpful). We show the results of 9 participants tasked with negotiating destinations and route changes. Voice interaction was reported as the preferred means of communication with Theo, and there was a clear preference for talkative assistants, which were perceived as more responsive and intelligent. We highlight interactions that were challenging for users, such as vehicle maneuvers in parking areas and specifying drop-off points, as well as interesting associations between agent performance and the automated vehicle's abilities.