TOC - Main Proceedings
Full Paper Session 1: Measuring and modeling: Getting it right
To understand drivers’ gaze behavior, gaze is usually matched either to surrounding objects or to static areas of interest (AOIs) at fixed positions around the car. Full surround object tracking allows for an understanding of the traffic situation; however, because it requires an extensive sensor set and substantial processing power, it is not yet broadly available in production cars. Static AOIs require only the addition of eye tracking sensors, but because they cannot adapt to the environment, their usefulness is limited. We propose geopositioned 3D AOIs, which combine the strengths of both methods: adaptability and a small sensor set. To test the capabilities of 3D AOIs for gaze analysis, a driving simulator study with 74 participants was conducted. We show that 3D AOIs are suitable for driver gaze analysis and a promising tool for driver intention prediction.
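The matching step behind such geopositioned 3D AOIs can be illustrated with a small sketch. The code below is a hypothetical toy example, not the paper’s implementation: the AOI names, box coordinates, and the slab-method intersection are all our assumptions. It casts a gaze ray against AOIs modeled as axis-aligned boxes in world coordinates and returns the nearest one hit.

```python
# Toy sketch of gaze-to-3D-AOI matching (assumed approach, not the
# authors' implementation): AOIs are axis-aligned boxes in world
# coordinates; the gaze is a ray from the driver's eye position.

def ray_hits_box(origin, direction, box_min, box_max):
    """Slab-method ray/AABB intersection; returns entry distance or None."""
    t_near, t_far = float("-inf"), float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-9:
            if o < lo or o > hi:  # ray parallel to this slab and outside it
                return None
        else:
            t1, t2 = (lo - o) / d, (hi - o) / d
            t_near = max(t_near, min(t1, t2))
            t_far = min(t_far, max(t1, t2))
    return t_near if t_near <= t_far and t_far >= 0 else None

def match_gaze(origin, direction, aois):
    """Return the name of the nearest AOI hit by the gaze ray, or None."""
    hits = []
    for name, box_min, box_max in aois:
        t = ray_hits_box(origin, direction, box_min, box_max)
        if t is not None:
            hits.append((t, name))
    return min(hits)[1] if hits else None

# Hypothetical geopositioned AOIs: (name, box_min, box_max) in meters.
aois = [
    ("lead_vehicle", (9.0, -1.0, 0.0), (11.0, 1.0, 2.0)),
    ("traffic_light", (19.0, 4.0, 4.0), (21.0, 6.0, 7.0)),
]
print(match_gaze((0.0, 0.0, 1.2), (1.0, 0.0, 0.0), aois))  # gaze straight ahead
```

Because the AOIs carry world coordinates rather than fixed screen positions, the same matching routine keeps working as the vehicle moves through the environment.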
Human factors research and engineering of advanced driving assistance systems (ADAS) must consider how drivers adapt to their presence. The major obstruction at the moment is a poor understanding of the adaptive processes that human cognition undergoes when faced with such changes. This paper presents a simulation model that predicts how drivers adapt to a steering assistance system. Our approach is based on computational rationality and demonstrates how task interleaving strategies adapt to the task environment and to the driver’s goals and cognitive limitations. A supervisor controls eye movements between the driving and non-driving tasks, making this choice on the basis of maximising expected joint task utility. The model predicts that with steering assistance, drivers’ in-car glance durations increase. We also show that this adaptation leads to risky driving in cases where the reliability of the system is compromised.
How Will Drivers Take Back Control in Automated Vehicles? A Driving Simulator Test of an Interleaving Framework
We explore the transfer of control from an automated vehicle to the driver. Based on data from N=19 participants in a driving simulator experiment, we find evidence that the transfer of control often does not take place in one step. In other words, when the automated system requests the transfer of control back to the driver, the driver often does not simply stop the non-driving task. Rather, the transfer unfolds as a process of interleaving the non-driving and driving tasks. We also find that this process is moderated by the length of time available for the transfer of control: interleaving is more likely when more time is available. Interface designs for automated vehicles must take these results into account to allow drivers to safely take back control from automation.
Full Paper Session 2: Uh oh! Drowsiness and motion sickness
Queasy Rider: How Head Movements Influence Motion Sickness in Passenger Use of Head-Mounted Displays
In autonomous cars, drivers will spend more time on non-driving-related activities. With their hands off the wheel and eyes off the road, the driver, similar to a rear-seat passenger today, can use multiple built-in displays for such activities, or even mobile head-mounted displays (HMDs) in virtual reality (VR). A wider motion range is known to increase engagement but might also amplify the risk of motion sickness while switching between displays. In a rear-seat VR field study (N=21) on a city highway, we found that a head movement range of ±50° at a speed of 1.95 m/s provides the best trade-off between motion sickness and engagement. Compared to movement around the pitch (Y) axis, movement around the yaw (X) axis induced less discomfort and motion sickness and more engagement. Our work provides a concrete starting point for future research on carsickness in self-driving vehicles, starting from today’s rear-seat passengers.
Automated vehicles (AVs) are the next wave of evolution in the transportation industry, but the progress towards increased levels of automation faces several challenges. One major problem that is often overlooked is motion sickness. As more drivers become passengers engaging in ‘passenger tasks’, motion sickness will occur more frequently, preventing AVs from providing their true benefit to society. To encourage more researchers to address motion sickness in AVs, this study conducted a literature review following the PRISMA framework to identify the latest research trends and methodologies. Based on the findings and limitations in the existing literature, this study suggests a bird’s-eye-view research framework consisting of causation, induction, measurement, and mitigation techniques that researchers and early practitioners can utilize to conduct research in this field. Furthermore, the paper highlights future research directions in mitigation techniques to combat motion sickness in AVs.
Performance and Acceptance Evaluation of a Driver Drowsiness Detection System based on Smart Wearables
Current systems for driver drowsiness detection often use driving-related parameters. Automated driving reduces the availability of these parameters. Techniques based on physiological signals seem to be a promising alternative. However, in a dynamic driving environment, only non-intrusive or minimally intrusive methods are accepted. In this work, a driver drowsiness detection system based on a smart wearable is proposed. A mobile application with an integrated machine learning classifier processes heart rate from a consumer-grade wearable. A simulator study (N=30) with two age groups (20-25 and 65-70 years) was conducted to evaluate the acceptance and performance of the system. The acceptance evaluation showed high acceptance in both age groups, with older participants reporting more positive attitudes towards, and stronger intentions to use, the system than younger participants. An overall detection accuracy of 82.72% was achieved. The proposed system offers new options for in-vehicle human-machine interfaces, especially for driver drowsiness detection in the lower levels of automated driving.
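The pipeline such a system uses can be sketched in a few lines. The example below is a toy illustration under assumed thresholds, not the paper’s trained classifier: it derives standard heart-rate-variability features from RR intervals and applies a simple rule in place of the learned model.

```python
import statistics

def hrv_features(rr_ms):
    """Basic HRV features from a window of RR intervals (ms)."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return {
        "mean_rr": statistics.mean(rr_ms),                      # mean RR interval
        "sdnn": statistics.pstdev(rr_ms),                       # overall variability
        "rmssd": (sum(d * d for d in diffs) / len(diffs)) ** 0.5,  # beat-to-beat variability
    }

def is_drowsy(rr_ms, mean_rr_thresh=900.0, rmssd_thresh=40.0):
    """Toy rule (stand-in for a trained classifier): long RR intervals
    (low heart rate) combined with high beat-to-beat variability."""
    f = hrv_features(rr_ms)
    return f["mean_rr"] > mean_rr_thresh and f["rmssd"] > rmssd_thresh

alert = [700, 710, 705, 715, 708, 712]        # hypothetical alert-driver window
drowsy = [950, 1010, 960, 1030, 955, 1040]    # hypothetical drowsy-driver window
print(is_drowsy(alert), is_drowsy(drowsy))
```

In a real system, the hand-tuned rule would be replaced by a classifier trained on labeled windows, and the thresholds here are purely illustrative.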
Full Paper Session 3: Gestures and actions
Voice user interfaces (VUIs) are becoming indispensable in cars, offering drivers the opportunity to make distraction-free inputs and conduct complex tasks. However, the usability and control efficiency of today’s VUIs remain limited by their sequential nature. In this work, we explored gestural input on the steering wheel to improve the interaction efficiency of VUIs. Based on the limitations of VUIs, we designed novel gestural commands on the steering wheel to augment them and elicited corresponding user-defined gestures by exploring drivers’ touch behavior. We then implemented a prototype attached to the steering wheel for recognizing gestures. Finally, we evaluated our system’s usability regarding driving performance, interaction efficiency, cognitive workload, and user feedback. Results revealed that our system improved the control efficiency of the VUI and reduced workload without significantly increasing driver distraction compared to using the VUI alone.
Comparing an Unbalanced Motor with Mechanical Vibration Transducers for Vibrotactile Forward Collision Warnings in the Steering Wheel
A static driving simulator study was conducted to investigate an unbalanced actuator (UAK) and two mechanical vibration transducers (exciters) integrated in a steering wheel as vibrotactile warnings during manual driving. In a repeated-measures design, two vibration signal conditions (UAK vs. exciters) were presented to 32 subjects on two test routes with two forward collision warning scenarios. The effects of the vibration signals on driving behavior, reaction times, workload, acceptance, preference, and vibrotactile feel were examined in order to evaluate signal usability. The exciters led to a lower SD of speed and lower mean steering wheel angle acceleration and were preferred by 63% of subjects over the UAK. However, subjects showed longer reaction times and shorter times to collision with the exciters. Due to its intense vibration, the UAK is more suitable for acutely dangerous situations requiring quick reactions. For less hazardous situations or incremental warnings, exciters are more suitable to avoid startle effects.
Automated vehicles are about to enter the mass market. However, such systems regularly meet limitations of varying criticality. Even basic tasks such as object identification can be challenging, for example, under bad weather or lighting conditions or for (partially) occluded objects. One common approach is to shift to manual driving in such circumstances; however, post-automation effects can occur during these control transitions. Therefore, we present ORIAS, a system capable of asking the driver to (1) identify/label unrecognized objects or to (2) select an appropriate action to be executed automatically. ORIAS extends the automation’s capabilities, prevents unnecessary takeovers, and thus reduces post-automation effects. This work defines the capabilities and limitations of ORIAS and presents the results of a driving simulator study (N=20). Results indicate high usability and input correctness.
Full Paper Session 4: Shall we cooperate?
After You! Design and Evaluation of a Human Machine Interface for Cooperative Truck Overtaking Maneuvers on Freeways
Truck overtaking maneuvers on freeways are inefficient and risky and create a high potential for conflict between road users. Collective perception based on V2X communication allows coordination among all parties to reduce these negative impacts and could be deployed quickly compared to automation. However, a prerequisite for the success of such a system is a human-machine interface that drivers can easily operate, trust, and accept. In this paper, a user-centered conception and design of a human-machine interface for cooperative truck overtaking maneuvers on freeways is presented. The development process is separated into two steps: a prototype is first built based on a task analysis and then iteratively improved through heuristic evaluation by experts. The final prototype is tested in a simulator study with 30 truck drivers. The study provides initial feedback on drivers’ attitudes towards such a system and how it can be further improved.
With the introduction of driving automation, the driving task has become a task shared between driver and vehicle. Today, an increasing number of driving tasks can be performed by the automation, and the view of driver and automation acting as collaborative partners is well established. Although this notion has been adopted in research and design, means to assess the quality of driver-automation interaction in a structured way are still lacking. Moreover, most design evaluations are addressed either from a technical stance or from a human factors viewpoint, which does not comply with the generally acknowledged view of a unified driver-vehicle system. The aim of the current study is therefore to investigate the possibility of quantitatively evaluating the quality of driver-vehicle cooperation. Seven dimensions indicative of the quality of cooperation are identified, based on a literature survey and expert input during focus groups. This work can support road authorities, legislators, regulators, and original equipment manufacturers in monitoring, evaluating, and designing driver-vehicle cooperation.
From SAE-Levels to Cooperative Task Distribution: An Efficient and Usable Way to Deal with System Limitations?
Automated driving seems to be a promising approach to increase traffic safety, efficiency, and driver comfort. The defined automation capability levels (SAE) recommend a distinct takeover of the vehicle’s control by the human driver: if the system reaches a system boundary, control falls back to the human. However, another possibility is the cooperative approach of task distribution: the driver provides the missing information to the automation, which stays activated. In a driving simulator study, we compared a classical and a cooperative approach (N=18). An automated car was driving on a rural road when a slower leading vehicle made it impossible for the automation to overtake. The participants could initiate the overtake either by providing the missing information cooperatively or by fully taking over the vehicle’s control. Results showed that the cooperative approach was used more often and reduced workload. The suggested cooperative approach therefore seems to be more promising.
Full Paper Session 5: Can you feel it? User experience
In automotive research, the current hot topic of emotion recognition is mainly technology-driven, focusing on the development of sensors and algorithms that ensure recognition accuracy and reliability. Often, a subjective reference, i.e., information about what drivers actually feel, is missing for the interpretation of the data collected. Thus, this paper explores the subjective component of drivers’ emotions, focusing on when, where, and why they occur. In an on-the-road study, 34 drivers tracked their emotions and the triggers of these experiences in-situ. In total, 367 verbal self-reports were captured, providing insights into the spatial-temporal distribution of drivers’ emotions and their determinants. Results show, for example, that intersections are emotional hotspots, and that positive emotions arise especially at the beginning and at the end of the drive. The results can help to understand emotion recognition data and to infer drivers’ emotions from contextual information if no emotion data is available.
Emotions are a topic of increasing interest in vehicle design and research, as they have a substantive impact on people’s behaviour, affecting driving performance and being a source of safety issues, particularly on long journeys. However, emotions do not usually occur distinctly and individually; they frequently transition and transform between states. It can be challenging to obtain information about the exact emotions drivers experience, especially when they are subtle. We present design-led research focused on identifying scenarios that contain normally unarticulated emotions and the mental reminders drivers use to make a journey safer, and on developing concepts for in-vehicle interactions that assist with these rituals. As a result, we designed and user-tested in-vehicle interactions for two emotional transition scenarios: pre-journey preparation (‘Ready... Steady... Relax’) and checking the progress of a journey (‘Driving Whisper’).
Recent technological advancements in automation have sparked interest in how automation will affect truck drivers and the trucking industry. However, there is a gap in the literature addressing how truck drivers perceive automation and how they believe it will impact trucking. This study aims to understand truck drivers’ perspectives on automation in the trucking industry. Extending a preliminary study, we conducted a broader analysis of comments discussing automation in the r/Truckers subreddit from February 2017 to March 2021. In general, the community had negative sentiments towards automation in the trucking industry. Participants speculated about when automation would become mainstream in trucking and discussed the feasibility of automation in the context of executing non-driving tasks and having accommodating infrastructure. Our findings indicate that truck drivers seek to participate in conversations about the future and to prepare themselves for when automation is more prominent in the trucking industry.
Full Paper Session 6: Automation: Are you ready?
With increasing automation capabilities and a push towards full automation in vehicles, mode awareness, i.e., the driver’s awareness of the vehicle’s current automation mode, becomes an important factor. While issues surrounding mode awareness are known, research and design concerning mode awareness do not yet appear to be a focal point in the automated driving domain. In this paper, we provide a state of the art on mode awareness from the related domains of automated driving, aviation, and Human-Robot Interaction. We present a summary of existing mode awareness interface solutions as well as existing techniques and recognized gaps concerning mode awareness. We found that existing interfaces are often simple, sometimes outdated, and difficult to meaningfully expand without overloading the user. We also found predictive approaches to be a promising strategy for lessening the need for mode awareness via separate indicators.
In Level 3 automated vehicles, drivers must take back control when prompted by a Take Over Request (TOR). However, there is currently no consensus on the safest way to achieve this. Research has shown that participants interact faster with an avatar when this “glows” in synchrony with participant physiology (heartbeat). We hypothesized that a similar form of synchronization might allow drivers to react faster to a TOR. Using a driving simulator, we studied driver responses to a TOR when permanently visible ambient lighting was synchronized with participants’ breathing. Experimental participants responded to the TOR faster than controls. There were no significant effects on self-reported trust or physiological arousal, and none of the participants reported that they were aware of the manipulation. These findings suggest that new ways of keeping the driver unconsciously “connected” to the vehicle could facilitate faster, and potentially safer, transfers of control.
Technology readiness affects the acceptance of and intention to use an automated driving system after repeated experience
User acceptance is key to the success of automated driving, and the user’s technology readiness is one important factor in their behavioural intention to use it. User acceptance also changes with actual experience of the technology. The effects of the user’s technology readiness, automation level, and experience with the technology on the acceptance of automated driving were assessed in a driving simulator study. N=60 drivers tested an L3 or L4 motorway automated driving system during six drives on six different days and evaluated the tested systems on a variety of relevant aspects. The results show an impact of technology readiness on higher-level concepts such as usefulness, satisfaction, and behavioural intention, but not on the direct evaluation of the functionality or on drivers’ immediate experience of driving with the system. The implications of the results for future research are discussed.
In conditionally automated driving, in-vehicle alert systems can provide drivers with information to assist their takeovers from automated driving. This study investigated how display modality and information type influenced drivers’ acceptance of in-vehicle alert systems under different levels of event criticality. We conducted an online video study with a 3 (information type) × 3 (display modality) × 2 (event criticality) mixed design involving 60 participants. The results showed that, in terms of drivers’ perceived usefulness and ease of use, presenting “why” information alone was not sufficient for takeovers compared to “what will” information alone and combined “why + what will” information. Participants reported higher ease of use for the combination of speech and augmented reality than for speech alone. High event criticality led to lower perceived usefulness and more negative opinions of the displays. The findings have implications for the design of in-vehicle alert systems during takeover transitions.
Full Paper Session 7: New methods: What's in your toolbox?
Automatic Generation of Road Trip Summary Video for Reminiscence and Entertainment using Dashcam Video
Vehicle dashboard cameras are becoming increasingly popular automotive accessories. While it is easy to obtain the high-definition video data recorded by dashcams from Secure Digital memory cards, this data is rarely used except for safety purposes because reviewing or editing many hours of recorded video takes substantial time and effort. In this paper, we propose a new use for this data through an automatic video editing system we have developed that creates enjoyable video summaries of road trips using video and other data from the vehicle. We also report the results of comparisons between videos edited automatically by the proposed system and videos edited manually by study participants. The prototype developed in this study and the findings from our experiments will contribute to improving the driving experience by providing entertainment for automobile users after road trips and by memorializing their travels.
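The selection step of such a summarizer can be illustrated compactly. The sketch below is a hypothetical toy example, not the authors’ system: the scoring weights, window fields (`scenery`, `speed_var`, `highway`), and the greedy budget rule are all our assumptions. It scores fixed-length windows of vehicle data and greedily keeps the top-scoring ones until a summary duration budget is filled.

```python
# Hypothetical sketch (not the authors' system): pick "interesting"
# dashcam segments for a road trip summary by scoring per-window
# vehicle data and filling a fixed summary duration budget.

def score(window):
    """Toy interest score: favor scenic, lively segments over highway cruising."""
    return window["scenery"] * 2.0 + window["speed_var"] - window["highway"] * 0.5

def summarize(windows, budget_s=30, clip_s=10):
    """Greedily pick top-scoring windows until the budget is filled;
    return their start times in chronological order."""
    ranked = sorted(windows, key=score, reverse=True)
    picked, used = [], 0
    for w in ranked:
        if used + clip_s <= budget_s:
            picked.append(w["t"])
            used += clip_s
    return sorted(picked)

# Hypothetical 10 s windows with normalized features.
windows = [
    {"t": 0,  "scenery": 0.1, "speed_var": 0.2, "highway": 1},
    {"t": 10, "scenery": 0.9, "speed_var": 0.1, "highway": 0},
    {"t": 20, "scenery": 0.3, "speed_var": 0.8, "highway": 0},
    {"t": 30, "scenery": 0.0, "speed_var": 0.1, "highway": 1},
    {"t": 40, "scenery": 0.7, "speed_var": 0.5, "highway": 0},
]
print(summarize(windows))
```

The selected start times would then index into the dashcam footage to cut the actual summary clips.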
Measuring user experience is highly important for human-centered development and thus for designing automated driving systems. Multi-item measures such as the System Usability Scale (SUS) or the Usability Metric for User Experience (UMUX) are commonly used for collecting user feedback on technical systems or products. The goal of the present study was to investigate the potential of a single-item approach as an economical alternative to multi-item scales for measuring user experience. A single-item measure was therefore developed to assess both event-related and cumulative user experience in automated driving. User experience was manipulated in a between-subjects design implemented in a real-world driving task, and feedback was collected using the newly developed Single Item User Experience (SIUX) scale, the UMUX, and the SUS. Results indicate that the SIUX scale is more sensitive than the UMUX to differences in event-related user experience, but not in cumulative user experience. Both the SIUX and the UMUX were more sensitive than the SUS when measuring differences in cumulative user experience. Future studies should investigate the applicability of the SIUX scale to domains other than automated driving and collect more extensive data on the validity and reliability of all three instruments.
Development and Evaluation of a Data Privacy Concept for a Frustration-Aware In-Vehicle System
To realize frustration-aware in-vehicle systems based on real-time user monitoring, personal data have to be recorded, analyzed, and (potentially) stored, raising data privacy concerns that may reduce user acceptance and hence the spread of such systems. Complementing the development of a frustration-aware system with a voice interface in the project F-RELACS, a data privacy concept was created based on the principles of privacy by design and privacy by default recommended in the European General Data Protection Regulation. Nine criteria were formulated, and 23 concrete measures to satisfy them were derived. The measures were evaluated in an online study with 96 participants aged 18 to 74 years. On average, the measures were rated as rather sufficient to sufficient. Participants evaluated the use of commercial third-party software for speech processing as most critical. All results are discussed, and proposals to further increase the acceptance of frustration-aware systems are outlined.
Full Paper Session 8: Brave new world: Displays and visualisations
The future urban environment may consist of mixed traffic in which pedestrians interact with automated vehicles (AVs). However, it is still unclear how AVs should communicate their intentions to pedestrians. Augmented reality (AR) technology could transform the future of interactions between pedestrians and AVs by offering targeted and individualized communication. This paper presents nine prototypes of AR concepts for pedestrian-AV interaction that are implemented and demonstrated in a real crossing environment. Each concept was based on expert perspectives and designed using theoretically informed brainstorming sessions. Prototypes were implemented in Unity MARS and subsequently tested on an unmarked road using a standalone iPad Pro with LiDAR functionality. Despite the limitations of the technology, this paper offers an indication of how future AR systems may support pedestrian-AV interactions.
With modern In-Vehicle Information Systems (IVISs) becoming more capable and complex than ever, their evaluation becomes increasingly difficult. The analysis of large amounts of user behavior data can help to cope with this complexity and can support UX experts in designing IVISs that serve customer needs and are safe to operate while driving. We therefore propose a Multi-level User Behavior Visualization Framework providing effective visualizations of user behavior data collected via telematics from production vehicles. Our approach visualizes user behavior data on three levels: (1) The Task Level View aggregates event sequence data generated through touchscreen interactions to visualize user flows. (2) The Flow Level View allows comparing individual flows based on a chosen metric. (3) The Sequence Level View provides detailed insights into touch interactions, glance, and driving behavior. Our case study shows that UX experts consider our approach a useful addition to their design process.
Driving requires high cognitive capabilities, as drivers need to be able to focus on first-level driving tasks. However, each interaction with the User Interface (UI) system presents a potential distraction. Designing UIs based on insights from field-collected user interaction logs, as well as real-time estimation of the most probable interaction modality, can contribute to engineering focus-supporting UIs. The question arises, however, of how user interactions can be predicted in in-the-wild driving scenarios. In this paper, we present HMInference, an automotive machine-learning framework that exploits user interaction log data. HMInference analyzes users’ interaction sequences based on UI domains (e.g., navigation, media, settings) and driving context (e.g., vehicle trajectory) to predict different interaction modalities (e.g., touch, speech). In 10-fold cross-validation, HMInference achieves a mean accuracy of 73.2% (SD: 0.02). Our work advances areas where user interaction prediction for in-car scenarios is required, e.g., to enable adaptive system designs.
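The evaluation protocol reported here, 10-fold cross-validation with a mean accuracy and SD, can be sketched with the standard library alone. The classifier below is a stand-in assumption (a majority-class baseline on hypothetical modality labels), not HMInference itself; only the fold-splitting and accuracy aggregation reflect the stated protocol.

```python
import statistics

def k_fold_indices(n, k=10):
    """Yield (train, test) index lists for k-fold cross-validation."""
    folds = [list(range(i, n, k)) for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test

def majority_baseline_cv(labels, k=10):
    """Mean accuracy and SD of a majority-class baseline under k-fold CV."""
    accs = []
    for train, test in k_fold_indices(len(labels), k):
        # Fit: pick the most frequent class in the training folds.
        majority = max(set(labels[i] for i in train),
                       key=lambda c: sum(labels[i] == c for i in train))
        # Score: fraction of test items matching the majority class.
        accs.append(sum(labels[i] == majority for i in test) / len(test))
    return statistics.mean(accs), statistics.pstdev(accs)

# Hypothetical modality labels (70% touch, 30% speech).
labels = ["touch"] * 70 + ["speech"] * 30
mean_acc, sd = majority_baseline_cv(labels, k=10)
print(round(mean_acc, 3), round(sd, 3))
```

Comparing a model’s cross-validated mean accuracy against such a majority baseline shows how much the model learns beyond class frequencies.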
Full Paper Session 9: Watch your language!
How to Design the Perfect Prompt: A Linguistic Approach to Prompt Design in Automotive Voice Assistants – An Exploratory Study
In-vehicle voice user interfaces (VUIs) are becoming increasingly popular while needing to handle more and more complex functions. While many guidelines exist for dialog design, a methodical and encompassing approach to prompt design is absent from the scientific landscape. The present work closes this gap by providing such an approach in the form of linguistics-centered research. By extracting syntactical, lexical, and grammatical parameters from a contemporary German grammar, we examine how their respective manifestations affect users’ perception of a given system output across different prompt types. Through exploratory studies with a total of 1,206 participants, we provide concrete best practices to optimize and refine the design of VUI prompts. Based on these best practices, three superordinate user needs regarding prompt design can be identified: a) a suitable level of (in)formality, b) a suitable level of complexity/simplicity, and c) a suitable level of (im)mediacy.
In-Vehicle Intelligent Agents in Fully Autonomous Driving: The Effects of Speech Style and Embodiment Together and Separately
Speech style and embodiment are two widely researched characteristics of in-vehicle intelligent agents (IVIAs). This study investigated the influence of speech style (informative vs. conversational) and embodiment (voice-only vs. robot), and their interaction effects, on driver-agent interaction. We conducted a driving simulator experiment in which 24 young drivers experienced four fully autonomous driving scenarios, each accompanied by one of four types of agents, and completed subjective questionnaires about their perceptions of the agents. Results showed that both conversational agents and robot agents promoted drivers’ likability and perceived warmth. The two features also demonstrated independent impacts: conversational agents received higher anthropomorphism and animacy scores, while robot agents received higher competence and lower perceived workload scores. Pupillometry indicated that drivers were more engaged when accompanied by conversational agents. Our findings provide insights into applying different features to IVIAs to fulfill various user needs in highly intelligent autonomous vehicles.
Effects of Native and Secondary Language Processing on Emotional Drivers’ Situation Awareness, Driving Performance, and Subjective Perception
Research shows that emotions have a substantial influence on human cognitive processes and, in the context of driving, can negatively influence driving performance. Drivers’ interaction with in-vehicle agents can improve their emotional state and lead to increased road safety. Language is another important aspect that influences human behavior and information processing. This study explores the effects of native- and secondary-language processing on emotional drivers’ situation awareness, driving performance, and subjective perception in a within-subject simulation study. Twenty-four young drivers drove three different laps: with a native-language-speaking agent, with a secondary-language-speaking agent, and with no agent. The results indicate the importance of native-language processing in the context of driving: the native-language agent condition resulted in improved driving performance and heightened situation awareness. The results and discussion have theoretical and practical design implications and are expected to help foster future work in this domain.
Full Paper Session 10: eHMIs: Sharing is caring
Autonomous vehicles (AVs) are expected to communicate with vulnerable road users as a substitute for, for example, driver-pedestrian communication, leading to increased safety and acceptance. This communication is currently one-directional, i.e., from the AV to the pedestrian. However, today’s communication between drivers and pedestrians in crossing scenarios is bidirectional: pedestrians gesture “thank you” or wave drivers through when they do not want to cross, and human drivers often acknowledge this, for example, with a nod. We present a Virtual Reality experiment (N=20) investigating the effect of such acknowledgment by the AV via its external communication for the two described scenarios and with respect to pedestrian presence. Results show that such feedback is perceived as highly necessary, depends on the scenario, and improves the perceived intelligence of the AV, confirming a halo effect.
Current research on external Human-Machine Interfaces (eHMIs) for facilitating interactions between automated vehicles (AVs) and pedestrians has largely focused on one-to-one encounters. For eHMIs to be viable in reality, they need to be scalable, i.e., to facilitate interaction with more than one pedestrian with clarity and unambiguity. We conducted a virtual-reality-based empirical study to evaluate four eHMI designs with two pedestrians. Results show that even under this minimal criterion of scalability, traditional eHMI designs struggle to communicate effectively whom the AV intends to yield to. Road-projection-based eHMIs show promise in clarifying the specific yielding intention of an AV, although they may still not be an ideal solution. The findings point towards the need to consider scalability early in the design process, and potentially to reconsider the current paradigm of eHMI design.
In mixed traffic environments, highly automated vehicles (HAVs) can potentially be disruptive and a source of hazards due to their non-human driving behavior and their lack of “traditional” communication means (gestures, eye contact, and similar) to resolve unclear situations. As a result, additional external human-machine interfaces (eHMIs) that replace the now-absent human element in communication have been proposed for automated vehicles. In this paper, we present the results of a study in which two versions of a light-band eHMI communicating the driving intent of an automated shuttle were evaluated in a real driving environment. We found that the green-red traffic light metaphor and simple animations could improve interaction success in certain aspects. We also found, and discuss, that the overall effect of using vs. not using the visual eHMIs was lower than expected, and that the shuttle’s position and observable driving behavior seemed to play a larger role than anticipated.
Evaluating the Impact of Decals on Driver Stereotype Perception and Exploration of Personalization of Automated Vehicles via Digital Decals
Traffic behavior and its perception are shaped by various factors such as vehicle color or size. Decals are used to express information about the owner’s beliefs or are intended to be funny. In the future, with external displays on (automated) vehicles, individualized customization could become even more pronounced. While some research has looked at the messages these decals convey, it is unclear how decals influence surrounding drivers’ perception of the vehicle’s operator. We gathered data on decals in 29 cities in 8 countries. A thematic analysis unveiled 17 dominant themes among the decals. Subsequently, we investigated the effects of decals from 9 common supra-regional themes in an online study (N=64), finding that participants attributed different characteristics to the driver of a vehicle with a decal depending on the type of decal and the participants’ country of origin. Additionally, a Virtual Reality study (N=16) revealed diverse opinions on future usage of such personalization options.