Future mobility will be highly automated, multimodal, and ubiquitous, and thus has the potential to address a broader range of users. Yet non-average users with special needs are often underrepresented or simply not thought of in the design processes of vehicles and mobility services, leading to their exclusion from standard transportation. Consequently, it is crucial for designers of such vehicles and services to consider the needs of non-average users from the beginning. In this paper, we present a design framework that helps designers take the perspective of non-average users and consider their needs. We present a set of exemplary applications from the literature and interviews and show how they fit into the framework, indicating room for further developments. We further demonstrate how the framework supports designing a mobility service in a fictional design process. Overall, our work contributes to the universal design of future mobility.
Shared automated vehicles (SAVs) are expected to benefit society and the environment as vehicles and rides are shared among passengers. However, this requires acceptance by different types of people. Recent research confirms that women and older people are particularly concerned about this form of mobility for security reasons. These concerns must be considered to ensure the adoption of SAVs by women and senior citizens, too. Accordingly, we conducted a qualitative user study (N=21) using participatory design methods. Our work contributes insights into women’s security needs by taking a holistic view of a ride with an SAV, from booking to arrival, from the perspective of women of different age groups. From our results, we derived general design implications and propose three concrete concepts for high levels of security. Lastly, we present a research agenda for further investigation of security concepts in SAVs.
Personal transport accounts for the largest share of per-person CO2 emissions. An ecological driving style (eco-driving) could drastically reduce its emissions. Current interventions focus mainly on training, whose benefits are mostly short-term, and on individual feedback, which requires commitment through setting (individual) goals. We present the concept of displaying not only the eco-friendly behavior of the driver but also that of peers around them. As perceivable competition has been shown to lead to higher task performance and more eco-friendly behavior, adding a competitive aspect and social enforcement to ecological driving shortcuts the goal-setting. In a virtual reality within-subjects study (N=19), we explored this possibility in manual and automated driving. We found that adding a comparative factor to ecological feedback did not lead to significantly more ecological driving in either manual or automated driving.
Sustainable mobility requires not only eco-friendly transportation systems but also that humans fundamentally change their mobility behaviors. One way to instill change is to design technologies that incorporate sustainability goals and shape use towards these goals. This, however, requires acceptance. This study aims to understand people’s behaviors towards sustainable mobility and the factors influencing mobility choices. We analyzed 1,150 Reddit comments from both climate-related and car-enthusiast subreddits and coded them along the “stages of behavior change” and “theory of planned behavior” components, applying a-priori and emergent coding methods. The results show that behavioral change is not based on an information deficit, but on attitudes and control beliefs towards sustainable mobility. Despite a willingness to change, most people remain in a state where no concrete action is taken. Based on this, we recommend designs that influence people’s attitudes or social behavior at earlier stages of change, and their control beliefs at later stages.
Autonomous vehicles (AVs) could potentially provide independent mobility to people with physical and sensory disabilities. It is critical to understand the needs of people with disabilities and design interfaces specifically for accessible interaction before, during, and at the end of an AV trip. Using a research-through-design approach, we identify needs and explore design concepts for a smartphone application that allows people with disabilities to use AVs independently. Through 20 individual interviews, 12 monthly meetings with a group of people with disabilities and transportation advocates, and design prototyping, we develop design concepts for accessible smartphone control of an AV trip. Our work contributes a set of interaction needs for accessible AV interaction, design concepts centered on confidence and control developed in partnership with people with disabilities, and a discussion of an accessible co-design process that other designers can learn from to create accessible automotive user interfaces.
This paper presents a structured framework of automotive shape-changing interfaces, which can act as a guide for researchers and practitioners in the automotive user interface domain towards designing interactions between vulnerable road users (VRUs) and automated vehicles (AVs). Recent research on external human-machine interfaces (eHMIs) for facilitating AV-VRU interactions has looked into the potential of external shape-change (eSC) as a means of intuitively communicating an AV’s intent, and has called for more structured design explorations. To systematize this unstructured design space, this paper presents an overview of how shape-change is currently implemented and executed in the automotive context, for communication purposes and beyond. The paper has two contributions: (1) an examination of the state-of-the-art of automotive shape-changing interfaces, and (2) a reusable taxonomy which can be used to structure the design space and identify the potential of shape-changing interfaces for more intuitive eHMIs.
In several circumstances, a level-three automated vehicle cannot continue driving in an automated driving mode and requests that a human driver take over. In this study, we conducted a series of experiments to examine how to provide a take-over request (TOR). First, with forty-one participants, a HUD icon, an earcon, seat vibration, and their combinations were compared. The results indicated that the HUD icon-earcon and HUD icon-seat vibration combinations were the most effective. Second, the combinations of A-pillar LED light and cluster icon (visual), earcon and speech message (auditory), and presence/absence of seat vibration (haptic) were compared. Thirty-six volunteers participated in the automated driving system (ADS) failure experiment and forty in the highway exit experiment. In the ADS failure experiment, the combination of A-pillar LED light and seat vibration (AH) reduced the reaction time but can induce stress. For the highway exit, a speech message is recommended due to control stability, and the AH combination is not recommended due to longitudinal instability.
In conditionally automated driving, drivers decoupled from driving while immersed in non-driving-related tasks (NDRTs) could either miss a system-initiated takeover request (TOR) or be startled by a sudden one. To better prepare drivers for a safer takeover in an emergency, we propose novel context-aware advisory warnings (CAWA) for automated driving that gently inform drivers and help them stay vigilant while engaging in NDRTs. The key innovation is that CAWA adapts warning modalities according to the context of NDRTs. We conducted a user study to investigate the effectiveness of CAWA. The results show that CAWA has statistically significant effects on safer takeover behavior, improved driver situational awareness, lower attention demand, and more positive user feedback, compared with uniform speech-based warnings across all NDRTs.
In-vehicle intelligent agents (IVIAs) can provide versatile information on vehicle status and road events and further promote user perceptions such as trust. However, IVIAs need to be constructed carefully to reduce distraction and prevent unintended consequences such as overreliance, especially when driver intervention is still required in conditional automation. To investigate the effects of speech style (informative vs. conversational) and embodiment (voice-only vs. robot) of IVIAs on driver perception and performance in conditionally automated vehicles, we recruited 24 young drivers to experience four driving scenarios in a simulator. Results indicated that although robot agents received higher ratings for system response accuracy and trust, they were not preferred due to the considerable visual distraction they caused. Conversational agents were generally favored and led to better takeover quality in terms of lower speed and a smaller standard deviation of lane position. Our findings provide a valuable perspective on balancing user preference and subsequent user performance when designing IVIAs.
The demand for autonomous vehicles (AVs) has grown rapidly in recent years. As AVs have the potential to free drivers’ cognitive resources from driving for other tasks, reading is one of the common activities users conduct when multitasking during travel. Nevertheless, ways of supporting reading in AVs have been little explored. To fill this gap, we explored the design of an in-vehicle reader on a windshield in AVs along three dimensions: dynamics, position, and text segmentation. We conducted two in-lab within-subjects experiments to examine eight in-car reading modalities, representing the combinations of the three dimensions, in terms of drivers’ reaction time and reading comprehension. Our results show a case where adaptive positioning would be particularly beneficial for supporting reading in AVs. Our general suggestion is to use a static reading zone presented on the sky and segmented into sentences, as it leads to faster reactions and better reading comprehension.
In Level 3 automated vehicles, preparing drivers for take-over requests (TORs) on the head-up display (HUD) requires their repeated attention. Visually salient HUD elements can distract attention from potentially critical parts in a driving scene during a TOR. Further, attention is (a) meanwhile needed for non-driving-related activities and can (b) be over-requested. In this paper, we conduct a driving simulator study (N=12), varying required attention by HUD warning presence (absent vs. constant vs. TOR-only) across gaze-adaptivity (with vs. without) to fit warnings to the situation. We found that (1) drivers value visual support during TORs, (2) gaze-adaptive scene complexity reduction works but creates a benefit-neutralizing distraction for some, and (3) drivers perceive constant HUD warnings as annoying and distracting over time. Our findings highlight the need for (a) HUD adaptation based on user activities and potential TORs and (b) sparse use of warning cues in future HUD designs.
With automated driving, vehicles are no longer just tools but become teammates, which opens up a growing space of new interaction possibilities. As the relationship between drivers and automated vehicles (AVs) changes, conflicts regarding maneuver selection can occur, and such conflicts can lead to safety-critical takeovers by the drivers. Current research mainly focuses on information requirements for takeovers; only a few works have explored the factors necessary for automation engagement. Therefore, a fixed-base driving simulator study with N=28 participants was conducted to investigate how verifiable information influences automation engagement, gaze behavior, trust, conflict, criticality, stress, and interaction perception. The results indicate that if drivers can verify the information given by the system, they perceive less conflict and more trust in the system, leading to a lower rejection frequency of an overtaking maneuver performed by the AV. The results suggest that systems aiming to prevent driver-initiated interventions should provide verifiable information.
Computational models embedded in advanced driver assistance systems (ADAS) require insights into drivers’ perception and understanding of their environment. This is particularly important as vehicles become increasingly automated and the two controllers (driver and vehicle) need to be attentive to each other’s future intentions. This study investigates the impact of environmental factors (road type, lighting) on driver situation awareness (SA) using 75 real-world driving scenes viewed within a driving simulator environment. The Situational Awareness Global Assessment Technique (SAGAT) was adopted to compute SA scores from spatially continuous data. A hurdle model showed that visual complexity, which was not considered in previous SA prediction models, significantly impacted driver SA. The number of objects in the visual scene as well as in the peripheral view was also found to significantly affect driver SA. The findings of this study provide insights into environmental factors that may impact SA predictions.
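A hurdle model, as named above, combines a binary component (does the score clear zero at all?) with a count component for the positive scores. The following is a minimal illustrative sketch of that two-part structure on synthetic data; the predictor names, the synthetic data, and the use of an untruncated Poisson for the count part are simplifying assumptions, not the study's actual model.

```python
# Illustrative two-part hurdle model for zero-inflated SA scores.
# Predictors, data, and the Poisson count part are assumptions for
# demonstration; a zero-truncated count model would be more faithful.
import numpy as np
from sklearn.linear_model import LogisticRegression, PoissonRegressor

rng = np.random.default_rng(0)
n = 500
# Hypothetical predictors, e.g. visual complexity and object count.
X = rng.normal(size=(n, 2))

# Simulate: higher complexity -> more zero scores and lower counts.
p_zero = 1 / (1 + np.exp(-(0.5 + 1.0 * X[:, 0])))
is_zero = rng.random(n) < p_zero
counts = np.maximum(rng.poisson(np.exp(1.0 - 0.5 * X[:, 1])), 1)
y = np.where(is_zero, 0, counts)

# Part 1: logistic "hurdle" -- probability of a non-zero score.
hurdle = LogisticRegression().fit(X, (y > 0).astype(int))
# Part 2: count model fitted on the positive scores only.
pos = y > 0
count_model = PoissonRegressor().fit(X[pos], y[pos])

# Expected score = P(y > 0) * E[y | y > 0]
expected = hurdle.predict_proba(X)[:, 1] * count_model.predict(X)
print(expected[:3])
```

The two parts can carry different predictors, which is what lets a hurdle model separate factors that suppress awareness entirely from factors that merely reduce it.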
Advanced driver assistance systems (ADAS) are designed to improve vehicle safety. However, it is difficult to achieve such benefits without understanding the causes and limitations of current ADAS and their possible solutions. This study (1) investigated the limitations of ADAS and their solutions through a literature review, (2) identified the causes and effects of ADAS issues from consumer complaints using natural language processing models, and (3) compared the major differences between the two. These two lines of research identified similar categories of causes, including human, environmental, and vehicle factors. However, academic research focused more on the human factors of ADAS issues and proposed advanced algorithms to mitigate them, while drivers complained more about the vehicle factors of ADAS failures, which were linked to the most frequently reported consequences. The findings from these two sources complement each other and provide important implications for the future improvement of ADAS.
As drivers’ expectations guide their perception and behavior, violated expectations can lead to mistakes and discomfort. In this work, the role of expectations in drivers’ reactions to automated vehicles (AVs) and manual vehicles (MVs) was investigated. Interviews were conducted to explore expectations toward AVs and MVs. Then, drivers interacted with AVs and MVs in a multi-agent driving simulator. The vehicles yielded or insisted on the right-of-way, indicated by a lateral offset. Self-report data revealed that drivers expected AVs to drive second (yield) and MVs to drive first (insist) in narrow passages. Driving simulator data showed that driving behavior improved, i.e., faster passing time, higher average speed, and higher lateral position, when AVs yielded and matched drivers’ expectations, compared to MVs that behaved the same way. No improvement was found when MVs (vs. AVs) insisted on the right-of-way. Overall, yielding was evaluated as more trustworthy and cooperative than insisting for both vehicle categories.
Users prefer different driving styles (more defensive or aggressive) for their autonomous vehicle (AV). This preference depends on multiple factors, including the user’s trust in the AV and the scenario. Understanding users’ preferred driving style and takeover behavior can help create comfortable driving experiences. In this driving simulator study, participants interacted with L2 driving automation under different driving style adaptations. We analyze the effects of the different AV driving style adaptations on users’ survey responses and propose linear and generalized linear mixed-effects models for predicting users’ preferences and takeover actions. Results suggest that trust plays an important role in determining users’ preferences and takeover actions. The scenario, pressing the brakes, and the AV’s aggressiveness level are also among the main factors correlated with users’ preferences. The results provide a step toward developing human-aware driving automation that can implicitly adapt its driving style to the user’s preference.
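Mixed-effects models of the kind named above add a random effect per participant so that repeated measurements from the same driver are not treated as independent. A minimal sketch on synthetic data, assuming hypothetical predictors (trust, aggressiveness level) and a random intercept per participant; the study's actual variables and model specification may differ:

```python
# Sketch of a linear mixed-effects model predicting driving-style
# preference; all column names and data are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_subj, n_trials = 30, 8
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), n_trials),
    "trust": rng.normal(size=n_subj * n_trials),
    "aggressiveness": rng.integers(0, 3, size=n_subj * n_trials),
})
# Per-subject baseline shift (the random intercept being simulated).
subj_effect = rng.normal(scale=0.5, size=n_subj)[df["subject"]]
df["preference"] = (0.8 * df["trust"] - 0.4 * df["aggressiveness"]
                    + subj_effect + rng.normal(scale=0.3, size=len(df)))

# Random intercept per participant captures between-subject
# differences in baseline preference.
model = smf.mixedlm("preference ~ trust + aggressiveness",
                    df, groups=df["subject"]).fit()
print(model.params["trust"])
```

For binary outcomes such as whether a takeover occurred, the analogous generalized variant replaces the Gaussian response with a logistic link.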
Augmented reality (AR) windshield displays (WSDs) offer promising ways to engage in non-driving tasks in automated vehicles. Previous studies explored different ways WSDs can be used to present driving- and other task-related information and how that can affect driving performance, user experience, and performance in secondary tasks. Our goal for this study was to examine how drivers expect to use gesture and voice commands to interact with a WSD for performing complex, multi-step personal and work-related tasks in an automated vehicle. In this remote, unmoderated online elicitation study, 31 participants proposed 373 gestures and 373 voice commands for performing 24 tasks. We analyzed the elicited interactions, participants’ preferred modality of interaction, and the reasons behind this preference. Lastly, we discuss our results and their implications for designing AR WSDs in automated vehicles.
A well-known cause of driver distraction is the engagement in non-driving-related activities (NDRA) using in-vehicle infotainment systems (IVIS). Prior research shows that collaborative approaches to NDRAs with input from passengers can help to reduce these effects. Nevertheless, there is no systematic work on how to support collaboration in the car. In this paper, we designed and evaluated five concepts that exemplify different collaborative approaches with the goal of studying in-vehicle collaboration. Our results provide insights into how these approaches affect collaboration through the measures of (1) social connectedness (connectedness, affiliation, belongingness, companionship) and (2) team performance (coordination effectiveness, team cohesion). We found that Anarchic and Hierarchical control empower front-seat passengers, reduce power dynamics, and minimize driver distraction (caused by interacting passengers). We discuss the implications of these findings and offer recommendations for designing future IVIS in passenger cars with improved driver-passenger collaboration.
Only a few works so far have addressed functional specificity for trust formation in automated driving. Previous research indicated that drivers could hardly distinguish between sub-systems, while their trust is influenced by other in-vehicle technologies. Thus, we conducted a user study where participants had to supervise a level 2 vehicle while reading and communicating with conversational agents. In two conditions, the vehicle was either represented by a single agent or by two agents that portrayed the driving automation and the infotainment system. We hypothesized that a clear differentiation between sub-systems could allow drivers to better calibrate their trust. However, our results show quite the opposite. Correlation analyses suggest that participants’ functional specificity was high, and they based their situational and general trust ratings mainly on the perception of the driving system. Also, dispositional trust did not influence trust formation, but many participants still failed to monitor the system appropriately.
Trust, acceptance, and attitudes towards automated driving are often investigated in simulator experiments. Behavioral validity is therefore a crucial aspect of automated driving studies. However, static simulators have reduced behavioral validity because of their inherently safe environment. We propose VAMPIRE (VR automated movement platform for immersive realistic experiences), a movement platform based on an automated wheelchair and designed to increase the sensation of realism in automated driving simulator studies. In this work, we provide a detailed description of how to build the prototype (including software components and assembly instructions), a proposal for safety precautions, an analysis of possible movement patterns for overtaking scenarios, and practical implications for designers and practitioners. We provide all project-related files as auxiliary materials.
Several researchers have focused on studying driver cognitive behavior and mental load for in-vehicle interaction while driving. Adaptive interfaces that vary with mental and perceptual load levels could help reduce accidents and enhance the driver experience. In this paper, we analyze the effects of mental workload and perceptual load on psychophysiological dimensions and provide a machine learning-based framework for mental and perceptual load estimation in a dual-task scenario for in-vehicle interaction (https://github.com/amrgomaaelhady/MWL-PL-estimator). We use off-the-shelf non-intrusive sensors that can be easily integrated into the vehicle’s system. Our statistical analysis shows that while mental workload influences some psychophysiological dimensions, perceptual load shows little effect. Furthermore, we classify the mental and perceptual load levels through the fusion of these measurements, moving towards a real-time adaptive in-vehicle interface that is personalized to user behavior and driving conditions. We report up to 89% mental workload classification accuracy and provide a real-time, minimally intrusive solution.
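The fusion-then-classify idea described above can be sketched as concatenating per-window features from several sensors into one vector and training a single classifier on workload labels. The sensor modalities, feature counts, synthetic data, and classifier choice below are illustrative assumptions, not the authors' actual pipeline (their repository linked above holds the real implementation):

```python
# Minimal feature-level fusion sketch for workload classification.
# Modalities, features, and data are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n = 300
# Hypothetical per-window features from three non-intrusive sensors.
gaze = rng.normal(size=(n, 4))
heart = rng.normal(size=(n, 2))
eda = rng.normal(size=(n, 2))
# Simulated binary workload label, driven mainly by gaze features.
y = gaze[:, 0] + 0.5 * heart[:, 0] + rng.normal(scale=0.5, size=n) > 0

# Feature-level fusion: stack all modalities into one vector per window.
X = np.hstack([gaze, heart, eda])
scores = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5)
print(scores.mean())
```

Fusing at the feature level lets the classifier weight modalities jointly, which matters here because the paper finds that mental and perceptual load express themselves unevenly across psychophysiological dimensions.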
Automatically inferring drivers’ emotions during driver-pedestrian interactions to improve road safety remains a challenge for designing in-vehicle empathic interfaces. To that end, we carried out a lab-based study using a combination of camera and physiological sensors. We collected participants’ (N=21) real-time affective responses (emotion self-reports, heart rate, pupil diameter, skin conductance, and facial temperatures) towards non-verbal pedestrian crossing videos from the Joint Attention for Autonomous Driving (JAAD) dataset. Our findings reveal that positive, non-verbal pedestrian crossing actions in the videos elicit higher valence ratings from participants, while non-positive actions elicit higher arousal. Different pedestrian crossing actions in the videos also have a significant influence on participants’ physiological signals (heart rate, pupil diameter, skin conductance) and facial temperatures. Our findings provide a first step toward enabling in-car empathic interfaces that draw on behavioural and physiological sensing to infer driver emotions in situ during non-verbal pedestrian interactions.
This study extends efforts to understand the interplay of contextual factors in the personalization of in-vehicle voice interfaces. In particular, an online study found that neither the aggressiveness nor the gender of voice assistants (VAs) influenced users’ attitudes towards in-vehicle VAs, such as trust, perceived usefulness, and overall positive emotions. Our results contradict the similarity-attraction effect, as the VAs’ perceived aggressiveness and drivers’ preferences for aggressive driving styles did not correlate. In addition, the results showed that prior experience might affect users’ reliance on and trust in VAs. This study also discovered relationships between users’ attitudes to in-vehicle VAs and their age and driving experience.
Automated vehicles should improve both traffic safety and user experience. While novel behavior patterns such as platooning become feasible to reduce fuel usage, such time- and fuel-reducing behavior at intersections can be perceived as unsafe and possibly disconcert users. Therefore, we designed and implemented nine feedback strategies for a simulated intersection and compared these in an online video-based between-subjects study (N=226). We found that visual feedback strategies limiting the view on the actual scene by providing calming views (a Landscape or the scene with hidden vehicles) were rated significantly higher in terms of perceived safety and trust. We discuss implications regarding future traffic and whether automated vehicles will necessitate altering reality for the user.
Although driving automation systems have made significant progress in recent years, human involvement is still vital, especially for Level 2 (L2) automated vehicles. This study aimed to design and evaluate an in-vehicle interface for vehicles with L2 features to help drivers understand critical situations that require intervention. The study was conducted in two phases. First, new dashboard prototypes were developed through four design iterations. Second, a between-subjects experimental study was conducted to test the dashboard designs’ efficiency. Forty-two participants were assigned to three dashboard groups (Advanced, Basic, Original) and drove through seven scenarios. Results showed that participants took back control significantly earlier (i.e., ∼2 seconds earlier) in the Advanced and Basic dashboard groups. The results indicate that providing take-back-control feedback helps drivers be more aware while driving vehicles with L2 features and that additional feedback regarding road geometry can improve drivers’ take-back-control performance.
With ever-improving driver assistance systems and large touchscreens becoming the main in-vehicle interface, drivers are more tempted than ever to engage in distracting non-driving-related tasks. However, little research exists on how driving automation affects drivers’ self-regulation when interacting with center stack touchscreens. To investigate this, we employ multilevel models on a real-world driving dataset consisting of 10,139 sequences. Our results show significant differences in drivers’ interaction and glance behavior in response to varying levels of driving automation, vehicle speed, and road curvature. During partially automated driving, drivers are not only more likely to engage in secondary touchscreen tasks, but their mean glance duration toward the touchscreen also increases by 12 % (Level 1) and 20 % (Level 2) compared to manual driving. We further show that the effect of driving automation on drivers’ self-regulation is larger than that of vehicle speed and road curvature. The derived knowledge can facilitate the safety evaluation of infotainment systems and the development of context-aware driver monitoring systems.
While automated driving systems (ADS) have progressed fast in recent years, there are still various situations in which an ADS cannot perform as well as a human driver. Being able to anticipate situations, particularly when it comes to predicting the behaviour of surrounding traffic, is one of the key elements for ensuring safety and comfort. As humans still surpass state-of-the-art ADS in this task, this has led to the development of a new concept, called prediction-level cooperation, in which the human can help the ADS better anticipate the behaviour of other road users. Following this concept, we implemented an interactive prototype, called the Prediction-level Cooperative Automated Driving system (PreCoAD), which allows human drivers to intervene, via gaze-based input and visual output, in an existing ADS that has been validated on the public road. In a driving simulator study, 15 participants drove different highway scenarios with plain automation and with automation using the PreCoAD system. The results show that the PreCoAD concept can enhance automated driving performance and provide a positive user experience. Follow-up interviews with participants also revealed the importance of making the system’s reasoning process more transparent.
Touchless gestures are gaining popularity for interacting with the increasing number of infotainment systems in today’s vehicles. However, the proposed benefits will only come into effect if the interaction is adapted to human needs. A three-way (2 x 2 x 2) mixed design was adopted to examine basic psychological needs and their association with motivation, UX, and the acceptance of gestures. The influence of freedom in gesture execution, visual cues, and motivation framing was investigated. In this study, 27 participants experienced gesture interaction with infotainment content in an experiment with a realistic car mock-up. Results suggest that participants perceived higher autonomy when interacting with free gestures and higher competence with the supportive visualization using visual cues. Autonomy, competence, and system relatedness affected motivation, UX, and acceptance. The present study provides novel insights into the acceptance of in-vehicle gesture interaction and implications for the future design of automotive user interfaces.
Advances in artificial intelligence (AI) are leading to an increased use of algorithm-generated user-adaptivity in everyday products. Explainable AI aims to make algorithmic decision-making more transparent to humans. As future vehicles become more intelligent and user-adaptive, explainability will play an important role in ensuring that drivers understand the AI system's functionalities and outputs. However, when integrating explainability into in-vehicle features, there is a lack of knowledge about user needs and requirements and how to address them. We conducted a study with 59 participants focusing on how end-users evaluate explainability in the context of user-adaptive comfort and infotainment features. Results show that explanations foster perceived understandability and transparency of the system, but that the need for explanation may vary between features. Additionally, we found that insufficiently designed explanations can decrease acceptance of the system. Our findings underline the requirement for a user-centered approach in explainable AI and indicate approaches for future research.
External human-machine interfaces (eHMIs) support automated vehicles (AVs) in interacting with vulnerable road users such as pedestrians. eHMI research has mostly dealt with communicating an AV’s yielding intent, but there is little insight into how (or whether) an eHMI should communicate an AV’s non-yielding intent. We conducted a video-based study (N = 25) with two eHMI concepts that offer pedestrians information about the vehicle’s non-yielding intent either explicitly or implicitly, and compared them with a baseline AV without an eHMI. Results show that while both kinds of eHMIs are effective and perform better than the baseline, there is no evidence of a significant difference in road-crossing decision performance between explicit and implicit eHMIs in ambiguous situations. However, subjective feedback shows a trend of preference for eHMIs that communicate an AV’s intent explicitly at all times, along with a need for a clear distinction between the yielding and non-yielding messages.
Modern cars express three moving directions (left, right, straight) using turn signals (i.e., blinkers), which is insufficient when multiple paths lead toward the same side. Therefore, drivers give additional hints (e.g., gestures, eye contact) in conventional car-to-pedestrian interaction. As more self-driving cars without drivers join public roads, we need additional communication channels. In this work, we address the problem of self-driving cars expressing their fine-grained moving direction to pedestrians beyond what blinkers convey. We built anthropomorphic robotic eyes and mounted them on a real car, applying the gaze technique based on the common intuition that one gazes in the direction one is heading. In a formal VR-based user study, we found that the eyes can convey fine-grained directions: participants could distinguish five directions with a lower error rate and in less time compared to conventional turn signals.
Shared space reduces segregation between vehicles and pedestrians and encourages them to share roads without imposed traffic rules. The behaviour of road users (RUs) is then controlled by social norms, and interactions are more versatile than on traditional roads. Autonomous vehicles (AVs) will need to adapt to these norms to become socially acceptable RUs in shared spaces. However, to date, there is not much research into pedestrian-vehicle interaction in shared-space environments, and prior efforts have predominantly focused on traditional roads and crossing scenarios. We present a video observation investigating pedestrian reactions to a small, automation-capable vehicle driven manually in shared spaces based on a long-term naturalistic driving dataset. We report various pedestrian reactions (from movement adjustment to prosocial behaviour) and situations pertinent to shared spaces at this early stage. Insights drawn can serve as a foundation to support future AVs navigating shared spaces, especially those with a high pedestrian focus.
First concepts for cooperative driving already illustrate the potential of cooperation between human drivers and automated vehicles. However, the main influencing factors that determine an efficient and effective cooperation still need to be investigated. We therefore examined the effects of displaying the vehicle’s intended decision in a critical situation in combination with a confidence value. In a driving simulator study (N=49), the automated vehicle communicated uncertainty in predicting the behavior of a pedestrian, and the participants could support the automation by entering their own decision to stop or drive through. The results show that the time until participants entered their own decision (system override) was longer when a pending system decision was indicated together with a confidence value in the HMI. A low confidence value resulted in the longest interaction times. In addition, trust in automation and usability were lower compared to the baseline cooperative HMI without a confidence value.
Various car manufacturers and researchers have explored the idea of adding eyes to a car as an additional communication modality. A previous study demonstrated that autonomous vehicles’ (AVs) eyes help pedestrians make faster street-crossing decisions. In this study, we examine a more critical question: “Can eyes reduce traffic accidents?” To answer this question, we consider a critical street-crossing situation in which a pedestrian is in a hurry to cross the street. If the car is not looking at the pedestrian, this implies that the car has not recognized the pedestrian; pedestrians can thus judge that they should not cross the street, thereby avoiding potential traffic accidents. We conducted an empirical study using 360-degree video recordings of an actual car with robotic eyes. The results showed that the eyes can reduce potential traffic accidents and that gaze direction can increase pedestrians’ subjective feelings of safety and danger. In addition, the results showed gender differences in critical and noncritical scenarios in AV-to-pedestrian interaction.
Past research suggests that displays on the exterior of the car, known as eHMIs, can be effective in helping pedestrians make safe crossing decisions. This study examines a new application of eHMIs, namely the provision of directional information in scenarios where the pedestrian is almost hit by a car. In an experiment using a head-mounted display and a motion suit, participants had to cross the road while a car driven by another participant approached them. The results showed that the directional eHMI caused pedestrians to step back compared to no eHMI. The eHMI increased pedestrians’ self-reported understanding of the car's intention, although some pedestrians did not notice it. In conclusion, there may be potential for supporting pedestrians in the situations where they need support most, namely critical encounters. Future research may consider coupling a directional eHMI to autonomous emergency steering.