Adjunct Proceedings

AutomotiveUI Adjunct ’25: Adjunct Proceedings of the 17th International Conference on Automotive User Interfaces and Interactive Vehicular Applications

SESSION: Work in Progress

Exploring a Novel Ball‐Shaped Steering Mechanism System Architecture and Preliminary User Feedback

  • Jiaqi Fu
  • Jeremy R Cooperstock

Heavy-duty vehicle operators often suffer from elevated muscle strain when making low-speed or tight turns due to high steering torque demands. This study presents a pilot evaluation of a novel ball-shaped steering mechanism designed to improve usability and reduce muscle stress compared to a conventional steering wheel (CSW). Our prototype maps ball rotations (pitch, roll, yaw) into throttle, braking, and steering commands, simulating a potential drive-by-wire setup. We ran a small pilot comparing two different mappings (roll-based vs. yaw-based) against a conventional steering wheel. Early observations suggest that both ball-based mappings reduce arm acceleration (as a rough proxy for muscle stress) by approximately 60–70%. However, survey data indicates that participants still preferred the conventional wheel for precise control. We discuss the mechanical design, preliminary pilot data, and planned improvements, such as haptic return springs and more robust EMG measurements, to guide future heavy-vehicle applications.
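
As an illustration of the mapping concept only (not the authors' implementation), the sketch below converts ball rotations into drive-by-wire commands; the axis conventions, gain, and dead-zone value are assumptions.

```python
# Illustrative sketch: mapping ball rotations (pitch, roll, yaw) to
# drive-by-wire commands. Gains, dead zone, and axis conventions are
# assumptions, not the authors' actual parameters.

def ball_to_commands(pitch, roll, yaw, mapping="roll", dead_zone=2.0, gain=0.05):
    """Return (throttle, brake, steering) from ball angles in degrees."""
    def scaled(angle_deg):
        if abs(angle_deg) < dead_zone:          # ignore small unintended rotations
            return 0.0
        return max(-1.0, min(1.0, angle_deg * gain))

    throttle = max(0.0, scaled(pitch))          # pitch forward -> accelerate
    brake = max(0.0, -scaled(pitch))            # pitch backward -> brake
    steer_axis = roll if mapping == "roll" else yaw
    steering = scaled(steer_axis)               # roll-based or yaw-based steering
    return throttle, brake, steering

print(ball_to_commands(pitch=10.0, roll=-15.0, yaw=0.0, mapping="roll"))
```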

Exploring Deictic Interface Referencing Outside-Vehicle Landmarks: Study of In-cabin Multimodal Interaction

  • Yueteng Yu
  • Yan Zhang
  • Wafa Johal
  • Gary Burnett

Multimodal interfaces have the potential to enhance user experience by aligning with natural human communication. While previous In-Vehicle Infotainment (IVI) systems have explored voice and gesture for internal controls (e.g., using voice and gesture to operate car side windows), little research has examined how drivers might reference objects outside the vehicle. This study investigates the user experience of in-cabin deictic interfaces that enable multimodal referencing of external landmarks. Through a participatory design workshop and a user study, we explored how drivers use combined speech, gestures and visual cues to reference external landmarks during driving. Our preliminary findings provide early empirical insights into external-referencing HMI design, user behaviour and technology acceptance, informing design implications and challenges for future automotive systems that support outside-referencing interaction.

From Brake Lights to Beyond – Assessing Rear eHMI Designs through a Video-Based Survey

  • Feiqi Gu
  • Zhixiong Wang
  • Zhenyu Wang
  • Hongling Sheng
  • Dengbo He

Rear-end collisions account for a large portion of road crashes and are closely related to drivers’ car-following (CF) behaviors. Thus, providing additional information, especially beyond-visual-range information, to support CF behaviors may reduce the rear-end collision risk. As a preliminary step, novel external human-machine interfaces (eHMIs) have been proposed to provide ego drivers with information about the indirect leading vehicle (ILV) ahead of the direct leading vehicle (DLV). We evaluated these eHMIs in a video-based survey study. The results from 165 valid responses showed that drivers with different characteristics had different preferences for the eHMIs with different contents (ILV speed, distance between ILV and DLV, braking of the ILV, collision risk between ILV and DLV, and real-life video information) and communication methods (sign, text, animation, and real-life video). The findings indicate the potential of eHMIs in supporting CF behaviors and highlight the importance of considering user heterogeneity when designing eHMIs.

Cross or Nah? LLMs Get in the Mindset of a Pedestrian in front of Automated Car with an eHMI

  • Md Shadab Alam
  • Pavlo Bazilinskyy

This study evaluates the effectiveness of large language model-based personas for assessing external Human-Machine Interfaces (eHMIs) in automated vehicles. Thirteen models, namely BakLLaVA, ChatGPT-4o, DeepSeek-VL2-Tiny, Gemma3:12B, Gemma3:27B, Granite Vision 3.2, LLaMA 3.2 Vision, LLaVA-13B, LLaVA-34B, LLaVA-LLaMA-3, LLaVA-Phi3, MiniCPM-V, and Moondream, were tasked with simulating pedestrian decision making for 227 vehicle images equipped with an eHMI. Confidence scores (0-100) were collected under two conditions: no memory (images independently assessed) and memory-enabled (conversation history preserved), each in 15 independent trials. The model outputs were compared with the ratings of 1,438 human participants. Gemma3:27B achieved the highest correlation with humans without memory (r = 0.85), while ChatGPT-4o performed best with memory (r = 0.81). DeepSeek-VL2-Tiny and BakLLaVA showed little sensitivity to context, and LLaVA-LLaMA-3, LLaVA-Phi3, LLaVA-13B and Moondream consistently produced limited-range output.
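
To make the comparison metric concrete, the following minimal sketch (not the authors' code) computes the Pearson correlation between per-image LLM confidence scores and mean human ratings; the arrays are hypothetical placeholders.

```python
# Illustrative sketch (not the study's code): correlating LLM confidence
# scores with mean human ratings across eHMI images. Data are placeholders.
import numpy as np
from scipy.stats import pearsonr

model_confidence = np.array([72, 40, 88, 55, 63], dtype=float)  # one score per image
human_rating = np.array([68, 35, 90, 50, 70], dtype=float)      # mean participant rating

r, p = pearsonr(model_confidence, human_rating)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
```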

Taming XR Zombies: Proxemics-Aware Kinesthetic Feedback Interfaces for Safe Social Harmony in Autonomous Mobility Environments

  • Yongjun Kim
  • Jungho Kwon
  • Sojin Han
  • Kyung Yun Choi

As extended reality (XR) technologies approach mainstream adoption in autonomous vehicles (AVs), passengers will engage in sustained immersive experiences throughout their journeys. However, when multiple passengers use XR headsets in confined vehicle spaces, this creates new challenges in physical safety and social dynamics due to compromised spatial awareness and movement interference. We present a proximity-aware kinesthetic constraint and adaptive XR interaction system that addresses both physical coordination and interaction design challenges. Grounded in proxemic theory, the system operates through progressive Open/Medium/Dense modes: dynamically constraining arm movement via elbow-to-waist and wrist-to-shoulder restrictions while simultaneously adapting XR interactions from grab to ray-casting to micro-gesture (thumb-based). The system includes an emergency shutdown for sudden vehicle motion changes, immediately releasing all physical constraints and pausing XR content. Our approach demonstrates a novel solution for maintaining both usable XR experiences and respectful personal space in shared autonomous mobility environments.

Designing Multi-Modal Communication for Merge Negotiation with Automated Vehicles: Insights from a Design Exploration with Prototypes

  • Ruolin Gao
  • Haoyu Liu
  • Pavlo Bazilinskyy
  • Marieke Martens

Deciding whether to allow an automated vehicle (AV) to merge in front can present a complex negotiation for human drivers. To address this, we explored the human-machine interface (HMI) design for merge negotiation between a manually driven vehicle and an AV from the human driver’s perspective. We developed five HMI designs, each integrating different combinations of visual cues, haptic alerts, and explicit approve and reject controls. They were evaluated together with two baseline conditions in a video-based driving simulator. The results of Likert-scale ratings indicated that HMI designs with explicit accept and reject controls received higher mean ratings in communication clarity, perceived adequacy, safety perception, trust in AV behaviour, and decision-making efficiency than those without such controls. Open-ended feedback further suggested that these HMIs may foster a stronger sense of control and reduce perceptions of aggressiveness. Based on the findings, we outlined three preliminary design considerations for HMIs that support merge negotiation between AVs and human drivers. This work offers early guidance and sets the stage for future research on integrating diverse modalities and driver inputs (e.g., explicit approve and reject controls) into HMI design for merge negotiation.

DEV: A Driver-Environment-Vehicle Closed-Loop Framework for Risk-Aware Adaptive Automation of Driving

  • Anaïs Halin
  • Christel Devue
  • Marc Van Droogenbroeck

The increasing integration of automation in vehicles aims to enhance both safety and comfort, but it also introduces new risks, including driver disengagement, reduced situation awareness, and mode confusion. In this work, we propose the DEV framework, a closed-loop framework for risk-aware adaptive driving automation that captures the dynamic interplay between the driver, the environment, and the vehicle. The framework promotes continuously adjusting the operational level of automation based on a risk management strategy. This real-time risk assessment supports smoother transitions and effective cooperation between the driver and the automation system. Furthermore, we introduce a nomenclature of indexes corresponding to each core component, namely driver involvement, environment complexity, and vehicle engagement, and discuss how their interaction influences driving risk. The DEV framework offers a comprehensive perspective to align multidisciplinary research efforts and guide the development of dynamic, risk-aware driving automation systems.

Towards an intelligent risk perception assistant: a neuroscience approach

  • Andry Rakotonirainy
  • Mohammed Mamdouh Zakaria Elhenawy
  • Lucas Henneçon

The road safety community aims to halve traffic fatalities by 2030, yet inadequate risk perception among drivers remains a critical contributor to crashes. Risk perception is among the brain’s most important high-level cognitive functions, influencing our decision-making at every turn. Dysfunctional risk perception has serious crash consequences. There is an urgent need for a foundational computational model that captures the underlying neural mechanisms and the complex interplay of exogenous and endogenous factors—environmental, emotional, physiological, and psychosocial—that shape risk perception. This paper presents our progress towards the design and implementation of a risk perception assistant (RPA) system that assesses risk perception and thereby improves driver safety. Building on recent findings on brain imagery and AI, we propose a methodology to model and build a neuroadaptive in-vehicle user interface that mimics cognitive risk perception. This paper presents our methodology and preliminary results.

Vibe Coding in Practice: Building a Driving Simulator Without Expert Programming Skills

  • Margarida Fortes-Ferreira
  • Md Shadab Alam
  • Pavlo Bazilinskyy

The emergence of large language models has introduced new opportunities in software development, particularly through a revolutionary paradigm known as vibe coding or “coding by vibes”, in which developers express their software ideas in natural language and the LLM generates the code. This paper investigates the potential of vibe coding to support novice programmers. The first author, without coding experience, attempted to create a 3D driving simulator using the Cursor platform and Three.js. The iterative prompting process improved the simulation’s functionality and visual quality. The results indicated that LLMs can reduce barriers to creative development and expand access to computational tools. However, challenges remain: prompts often required refinement, output code can be logically flawed, and debugging demanded a foundational understanding of programming concepts. These findings highlight that while vibe coding increases accessibility, it does not completely eliminate the need for technical reasoning and an understanding of prompt engineering.

Evaluating User Interfaces in Electric Vehicle Interiors: Development of a Refined Coding Scheme

  • Rafael Gomez
  • Thomas J Long
  • Levi Swann
  • Peter Florentzos
  • Alexandra E Singleton

As electric vehicles (EVs) reshape the automotive landscape, their unique interior design potential necessitates a re-evaluation of traditional user interface (UI) design. The study presented here is part of a larger project investigating EV adoption based on qualitative observation of participants interacting with EVs. This work-in-progress presents a refined coding scheme that captures a novel taxonomy of EV interior elements, organised into overarching themes (information systems, operations, access/visibility, and sections/features) and sub-themes (aesthetics, usability, and material/build quality). These findings reveal novel insights into factors influencing user evaluations of EV interiors, while supporting and expanding previous findings by the research team outlining preferences for screen size, button layout, tactile feedback, and simple spatial configurations. This work contributes to automotive UI design by offering a refined coding scheme for future inquiry into automotive UX, serving as a design-relevant resource for industry practitioners and researchers seeking to craft engaging, user-centered EV interiors.

Virtual Worlds for Real Agents: Validating Virtual Pedestrian Behavior for Vehicle Assistance Systems

  • Grace A Douglas
  • Linda Ng Boyle

Advanced Driver Assistance Systems (ADAS) require accurate pedestrian behavioral models for safety-critical applications, yet current development paradigms rely heavily on unvalidated virtual reality (VR) environments. This study presents a comprehensive validation framework comparing VR pedestrian behaviors to naturalistic traces at a common Seattle, Washington location. Through analysis of 8 participants across controlled scenarios, we demonstrate that VR environments preserve key behavioral relationships while requiring calibration for real-world deployment. Participants exhibited constrained exploration patterns despite unlimited virtual space, maintaining expected speed differentials between task types (20% reduction for goal-directed navigation). These findings suggest VR can capture decision-making processes useful for contextualizing vehicle assistance systems’ pedestrian detection and trajectory forecasting algorithms, provided validation protocols account for environment-specific behavioral calibration factors.

Are Electrodermal Activity-Based Indicators of Driver Cognitive Distraction Robust to Varying Traffic Conditions and Adaptive Cruise Control Use?

  • Anaïs Halin
  • Marc Van Droogenbroeck
  • Christel Devue

In this simulator study, we investigate whether and how electrodermal activity (EDA) reflects driver cognitive distraction under varying traffic conditions and adaptive cruise control (ACC) use. Participants drove in six scenarios, combining two levels of cognitive distraction (presence/absence of a mental calculation task) and three levels of driving environment complexity (different traffic conditions). Throughout the experiment, they were free to activate or deactivate ACC (ACC use, two levels). We analyzed three EDA-based indicators of cognitive distraction: SCL (mean skin conductance level), SCR amplitude (mean amplitude of skin conductance responses), and SCR rate (rate of skin conductance responses). Results indicate that all three indicators were significantly influenced by cognitive distraction and ACC use, while environment complexity influenced SCL and SCR amplitude, but not SCR rate. These findings suggest that EDA-based indicators reflect variations in drivers’ mental workload due not only to cognitive distraction, but also to driving environment and automation use.
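
For readers unfamiliar with these indicators, the sketch below shows how SCL, SCR amplitude, and SCR rate could be derived from a raw skin conductance trace; the sampling rate, peak threshold, and use of peak prominence as an amplitude proxy are assumptions, not the study's processing pipeline.

```python
# Illustrative sketch (assumed signal layout, not the study's pipeline):
# computing SCL, SCR amplitude, and SCR rate from a skin conductance trace.
import numpy as np
from scipy.signal import find_peaks

fs = 4.0                                   # sampling rate in Hz (assumption)
t = np.arange(0, 300, 1 / fs)              # 5-minute segment
eda = 5 + 0.2 * np.random.randn(t.size)    # placeholder conductance (microsiemens)

scl = eda.mean()                                      # mean skin conductance level
peaks, props = find_peaks(eda, prominence=0.05)       # candidate SCRs (threshold assumed)
scr_amplitude = props["prominences"].mean() if peaks.size else 0.0  # prominence as proxy
scr_rate = peaks.size / (t[-1] / 60.0)                # responses per minute

print(f"SCL={scl:.2f} uS, SCR amplitude={scr_amplitude:.3f} uS, SCR rate={scr_rate:.1f}/min")
```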

Post-Crash UX: An Empirical Study on Multi-modal Interfaces for Immediate Driver Response

  • Yeonju Cho
  • Hwiyeon Kim
  • Minseo Ku
  • Suyeon Yu
  • Jieun Lee

Despite advances in vehicle crash prevention technologies, systems supporting immediate driver response post-accident remain underexplored. This study empirically analyzes the effectiveness of multi-modal interfaces in guiding driver behavior immediately after a crash. Twenty-five participants with varying driving and accident experience were tested using Wizard-style and Chat-style interfaces against a baseline. The results revealed significant differences in device preference and interface acceptance based on driver proficiency and accident experience. Novice drivers preferred the in-vehicle display and step-by-step Wizard UI, while experienced drivers preferred a mobile device and autonomous Chat UI. In particular, all participants with crash experience favored mobile devices, highlighting the influence of crash experience on device preference and technology acceptance.

Reality Check: Real-World Observations of State-of-the-Art Driver Assistance Systems

  • Alice Rollwagen
  • Ignacio Alvarez
  • Andreas Riener

Research into automated vehicles and Advanced Driver Assistance Systems (ADAS) is developing rapidly. On-road evaluations of state-of-the-art systems are still rare, as novel concepts are mainly investigated in simulations or as theoretical scenarios. Therefore, this study conducted empirical field tests with two representative models (a BMW i5 and a Mercedes-Benz EQS) on a standardized test track, driven by 10 HCI and automotive experts under real-life conditions. The tests were followed by an expert workshop that combined AI-supported semantic analysis with human-led thematic analysis to summarize strengths, limitations, and recommendations for action. Our findings expose two key misalignments: one between actual automation level and user-perceived support, and another between user expectations and real-world system behavior. Our insights aim to support the AutoUI community in shaping future ADAS.

Preliminary Study on Implicit Driving Behavior Modulation via Engine Sound Modification

  • Sota Inoue
  • Akira Utsumi
  • Hirotake Yamazoe

This paper proposes a low-intrusion method for supporting safe driving by subtly altering engine sound feedback to influence drivers’ perception of speed. Unlike conventional ADAS, our approach avoids explicit alerts and instead aims to induce unconscious behavioral changes. A driving simulator experiment showed that engine sound modulation—particularly to higher RPM—led to lower speeds, indicating that auditory feedback can serve as implicit driver behavior guidance. Questionnaire results revealed that the modified sounds often caused discomfort but were sometimes mistaken for actual speed changes. These findings highlight the potential of auditory feedback as a subtle and non-intrusive means of influencing driving behavior.

Invisible Barriers: Understanding and Supporting the Mobility Needs and Challenges of Individuals with Mental Health Conditions

  • Carina Manger
  • Anna Preiwisch
  • Andreas Riener

Mental health conditions are highly prevalent but remain underrepresented in inclusive mobility research, which has largely focused on physical and sensory impairments. This study addresses this gap through qualitative expert interviews with N = 20 mental health professionals from the fields of psychology, medicine, and social work. Participants described how conditions such as anxiety, depression, PTSD, and bipolar disorder can significantly impair everyday mobility, triggering fears, avoidance behaviors, and reliance on others. Key barriers include social anxiety, overstimulation, and a lack of supportive infrastructure. Experts emphasized the importance of therapeutic interventions, social support, and user-centered technologies. Recommendations included digital companions, simplified transit apps, and context-sensitive automation. This research contributes to a broader understanding of mental health and mobility for psychologically inclusive transportation design.

Can an AI Voice Assistant reduce Stress for Drivers confronted with warning Signals?

  • Vinzenz Baptist Huber
  • Finn Luca Ertl
  • Anja Schlaak
  • Hannah Simson
  • Ignacio Alvarez

As voice assistants become more prevalent in modern vehicles, their potential to support driver safety and reduce stress during critical situations is growing increasingly relevant. This study explored if a voice assistant is capable of reducing stress caused by vehicle warning signals and increasing trust, perceived control as well as confidence. In a wizard-of-oz, within-subject study with 27 participants, drivers faced six warning signal scenarios in different severities (light, medium, severe) with and without voice assistant support. Conducted in a near-car simulator, the study measured heart rate and collected data through several questionnaires. Results showed that the voice assistant significantly reduced stress and improved drivers’ perceived confidence, control, and trust in the car’s technology. These findings highlight the potential of voice assistants as valuable support tools for future in-car safety systems.

Exploring Human Abuse of Automated Vehicles: A Review Framed by Robot Abuse Research

  • Anna Preiwisch
  • Andreas Riener

Abusive behavior in Human-Robot Interaction (HRI) presents a growing concern for safety, trust, and technology adoption. Although automated vehicles (AVs) are not traditionally considered social robots, they increasingly operate in interactive spaces as part of the HRI landscape. Despite the increasing attention to robot abuse, insights into the mistreatment of AVs remain limited. To address this gap, this study builds on a prior systematic literature review according to PRISMA guidelines, conducted by the authors, which covered N = 35 publications from 2020 to early 2025 and mapped factors related to robot abuse. From this dataset, a subset of N = 11 studies was reanalyzed to examine the findings directly related to abusive behaviors toward AVs. The analysis revealed that abuse directed at AVs differs from that directed at other robot types, highlighting the need for context-sensitive understanding and AV-specific design strategies to support future research on automated mobility systems.

Effect of Visual Feedback Compensation using Peripheral LED illuminations on Steering Maneuvers in Distracted Driving

  • Akira Utsumi
  • Hirotake Yamazoe
  • Tetsushi Ikeda
  • Yumiko O. Kato
  • Isamu Nagasawa

Artificial compensation of visual feedback can improve a driver’s steering maneuvers in degraded visual conditions. This paper examines whether this benefit takes effect in distracted or drowsy driving. We also investigate whether the continuous presentation of artificial feedback over a long period causes a “learning effect,” in which the effect of such artificial feedback increases or decreases, or a “dependency,” in which driving behavior deteriorates after the artificial feedback is removed. In addition, we examine the effect of the strength of artificial feedback using two different feedback luminance levels. Simulated driving experiments, consisting of over one hour of total driving, show the persistence of the artificial feedback effect but do not support the emergence of any “learning effect” or “dependency.” We also confirm that this effectiveness differs depending on feedback strength, suggesting the importance of proper control of feedback luminance.

Leveraging LLM-based Conversational Agents to Combat Passive Fatigue in Conditional Automated Driving Contexts

  • Yueteng Yu
  • Lewis Cockram
  • Jorge Pardo
  • Xiaomeng Li
  • Andry Rakotonirainy
  • Jonny Kuo
  • Sebastien Demmel
  • Mike Lenné
  • Ronald Schroeter

Passive fatigue during Level 3 automated driving can compromise driver readiness and safety. This paper presents preliminary findings from a field study exploring a conversational agent powered by a large language model (LLM) to mitigate passive fatigue in a real-world automated rural driving scenario. Based on researcher observations and post-drive interviews, early impressions suggest that engaging, context-aware dialogue may help sustain driver alertness, support a balance between relaxation and readiness, and be perceived as natural and beneficial. These findings point to the potential of conversational agents as proactive HMI interventions and highlight the need to further develop systems that adapt to both driver state and driving context.

Towards Adaptive External Communication in Autonomous Vehicles: A Conceptual Design Framework

  • Tram Thi Minh Tran
  • Judy Kay
  • Stewart Worrall
  • Marius Hoggenmueller
  • Callum Parker
  • Xinyan Yu
  • Julie Stephany Berrío Perez
  • Mao Shan
  • Martin Tomitsch

External Human–Machine Interfaces (eHMIs) are key to facilitating interaction between autonomous vehicles and external road actors, yet most remain reactive and do not account for scalability and inclusivity. This paper introduces a conceptual design framework for adaptive eHMIs—interfaces that dynamically adjust communication as road actors vary and context shifts. Using the cyber-physical system as a structuring lens, the framework comprises three layers: Input (what the system detects), Processing (how the system decides), and Output (how the system communicates). Developed through theory-led abstraction and expert discussion, the framework helps researchers and designers think systematically about adaptive eHMIs and provides a structured tool to design, analyse, and assess adaptive communication strategies. We show how such systems may resolve longstanding limitations in eHMI research while raising new ethical and technical considerations.

Understanding Pedestrian Gesture Misrecognition: Insights from Vision-Language Model Reasoning

  • Tram Thi Minh Tran
  • Xinyan Yu
  • Callum Parker
  • Julie Stephany Berrío Perez
  • Stewart Worrall
  • Martin Tomitsch

Pedestrian gestures play an important role in traffic communication, particularly in interactions with autonomous vehicles (AVs), yet their subtle, ambiguous, and context-dependent nature poses persistent challenges for machine interpretation. This study investigates these challenges by using GPT-4V, a vision–language model, not as a performance benchmark but as a diagnostic tool to reveal patterns and causes of gesture misrecognition. We analysed a public dataset of pedestrian–vehicle interactions, combining manual video review with thematic analysis of the model’s qualitative reasoning. This dual approach surfaced recurring factors influencing misrecognition, including gesture visibility, pedestrian behaviour, interaction context, and environmental conditions. The findings suggest practical considerations for gesture design, including the value of salience and contextual redundancy, and highlight opportunities to improve AV recognition systems through richer context modelling and uncertainty-aware interpretations. While centred on AV–pedestrian interaction, the method and insights are applicable to other domains where machines interpret human gestures, such as wearable AR and assistive technologies.

Self-supervised Learning for Detecting Local Contextual Anomalous Gaze Patterns of Individual Drivers in Level 3 Automated Driving

  • Huy Thanh Phan
  • Yueteng Yu
  • Rafael Cirino Gonçalves
  • Jonny Kuo
  • Sebastien Glaser
  • Mohammed Mamdouh Zakaria Elhenawy
  • Ronald Schroeter
  • Ashish Bhaskar

Anomaly detection can enhance autonomous driving safety through identifying rare events and potentially dangerous deviations from expected gaze distributions. We propose a statistical anomaly detection model that processes temporal gaze dynamics and creates linear inference to contextual features. The model is trained using a reconstruction technique to familiarize it with normal data, enabling it to detect anomalous gaze patterns. The reconstruction-based approach allows the model to learn what constitutes typical gaze behaviour in a given context. Our research on both Level 3 (L3) modes demonstrates that drivers usually change their gaze direction roughly after a couple of seconds, presumably as part of their regular safety checks of the surrounding environment, or because their attention was drawn by noticeable objects. The findings support the proposition that the driver’s gaze shifts toward prioritizing central-road information over peripheral cues and tends to suppress physical distractions by maintaining a more fixed gaze and focusing straight ahead.
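
The reconstruction-error idea can be illustrated with a minimal sketch that substitutes PCA for the authors' model; the feature windows, training data, and 99th-percentile threshold are assumptions, not the study's setup.

```python
# Illustrative sketch of reconstruction-based anomaly detection for gaze
# feature windows, using PCA as a stand-in for the authors' model.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
normal = rng.normal(size=(500, 6))          # placeholder "normal" gaze feature windows
test = rng.normal(size=(50, 6))
test[:5] += 4.0                             # inject a few anomalous windows

pca = PCA(n_components=3).fit(normal)       # learn what "typical" windows look like

def reconstruction_error(x):
    return np.linalg.norm(x - pca.inverse_transform(pca.transform(x)), axis=1)

threshold = np.percentile(reconstruction_error(normal), 99)   # threshold is an assumption
anomalous = reconstruction_error(test) > threshold
print(f"{anomalous.sum()} of {len(test)} windows flagged as anomalous")
```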

Gaze-Based Indicators of Driver Cognitive Distraction: Effects of Different Traffic Conditions and Adaptive Cruise Control Use

  • Anaïs Halin
  • Adrien Deliège
  • Christel Devue
  • Marc Van Droogenbroeck

In this simulator study, we investigate how gaze parameters reflect driver cognitive distraction under varying traffic conditions and adaptive cruise control (ACC) use. Participants completed six driving scenarios that combined two levels of cognitive distraction (with/without mental calculations) and three levels of driving environment complexity. Throughout the experiment, participants were free to activate or deactivate an ACC. We analyzed two gaze-based indicators of driver cognitive distraction: the percent road center, and the gaze dispersions (horizontal and vertical). Our results show that vertical gaze dispersion increases with traffic complexity, while ACC use leads to gaze concentration toward the road center. Cognitive distraction reduces road center gaze and increases vertical dispersion. Complementary analyses revealed that these observations actually arise mainly between mental calculations, while periods of mental calculations are characterized by a temporary increase in gaze concentration.
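
As an illustration only (not the study's code), the sketch below computes percent road center and horizontal/vertical gaze dispersion from gaze angle samples; the 8-degree road-center radius and the synthetic data are assumptions.

```python
# Illustrative sketch (not the study's code): percent road center and gaze
# dispersion from gaze angle samples; the road-center radius is assumed.
import numpy as np

rng = np.random.default_rng(1)
gaze_h = rng.normal(0, 6, 1000)     # horizontal gaze angle (deg), placeholder data
gaze_v = rng.normal(0, 4, 1000)     # vertical gaze angle (deg)

on_center = np.hypot(gaze_h, gaze_v) < 8.0   # within 8 deg of the road center
percent_road_center = 100 * on_center.mean()
horizontal_dispersion = gaze_h.std()
vertical_dispersion = gaze_v.std()

print(f"PRC={percent_road_center:.1f}%, "
      f"dispersion H={horizontal_dispersion:.1f} deg, V={vertical_dispersion:.1f} deg")
```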

Mixed Methods Scenario Development for Human-Vehicle Interaction Research: A Case Study on Winter Driving

  • Elliot Weiss
  • Srijan Srivatsa
  • Hiroshi Yasuda
  • Tiffany L. Chen

Scenarios provide a fundamental link between driving simulators and real-world conditions, shaping the extent to which the findings of a user study can be applied to public roads. However, compared to other aspects of study design, scenario development in human–vehicle interaction research tends to receive less deliberate attention. To encourage more methodical scenario generation, this work introduces a mixed methods approach for extracting representative scenarios from an integration of three real-world data sources: aggregated crash statistics, interviews with experienced drivers, and naturalistic driving data. Through a case study on winter driving, we outline the derivation of a nighttime, two-lane road scenario from these data sources and conduct an initial driving simulator pilot study to assess its realism. We hope that this demonstration of scenario generation from quantitative and qualitative data inspires researchers to consider more rigorous methods for scenario design in future work.

Real-World Validation of Dynamic Text-based eHMIs for Pedestrian Interaction with Autonomous Shuttles

  • Hyunmin Kang
  • Hyochang Kim
  • Hyungchai Park
  • Hoseok Jung
  • Joonwoo Son
  • Myoungouk Park

External human-machine interfaces (eHMIs) are designed to explicitly communicate autonomous vehicles’ (AVs) intentions, thereby enhancing safety in complex traffic interactions. This study evaluated the effectiveness of a dynamic text-based eHMI on an autonomous shuttle operating in a naturalistic setting at unsignalized crosswalks in South Korea. Through field observations, we identified scenarios in which traditional yielding or stopping messages were insufficient, especially under conditions of continuous pedestrian flow causing vehicle delays. Based on this scenario, a dynamic message explicitly communicating the vehicle’s imminent departure was tested against a static control condition. Post-interaction surveys of 60 pedestrians revealed that the dynamic eHMI significantly improved message visibility, comprehension, and perceived system support. Additionally, pedestrians exposed to the dynamic eHMI prioritized explicit textual cues over implicit vehicle cues when deciding to cross, leading to increased trust in AV technology. These results highlight the practical value of context-sensitive, explicit eHMIs for enhancing real-world AV-pedestrian interactions.

Quo-Vadis Multi-Agent Automotive Research? Insights from a Participatory Workshop and Questionnaire

  • Pavlo Bazilinskyy
  • Francesco Walker
  • Debargha Dey
  • Tram Thi Minh Tran
  • Hyungchai Park
  • Hyochang Kim
  • Hyunmin Kang
  • Patrick Ebel

The transition to mixed-traffic environments that involve automated vehicles, manually operated vehicles, and vulnerable road users presents new challenges for human-centered automotive research. Despite this, most studies in the domain focus on single-agent interactions. This paper reports on a participatory workshop (N = 15) and a questionnaire (N = 19) conducted during the AutomotiveUI ’24 conference to explore the state of multi-agent automotive research. The participants discussed methodological challenges and opportunities in real-world settings, simulations, and computational modeling. Key findings reveal that while the value of multi-agent approaches is widely recognized, practical and technical barriers hinder their implementation. The study highlights the need for interdisciplinary methods, better tools, and simulation environments that support scalable, realistic, and ethically informed multi-agent research.

Older passengers’ expectations about highly automated driving: Implications for inclusive designs

  • Chen Peng
  • İbrahim Öztürk
  • Ruth Madigan
  • Sina Nordhoff
  • Sascha Hoogendoorn-Lanser
  • Marjan Hagenzieker
  • Natasha Merat

Understanding older adults’ overall expectations about automated vehicles (AVs) is crucial for inclusive designs. The work-in-progress presents an exploratory study based on semi-structured interviews with 27 older adults in the Netherlands. A thematic analysis revealed an open-minded attitude towards AVs, optimism for improved safety, and pragmatic concerns about reliability. Participants expected AVs to be “well-behaved”, delivering safe, predictable, and socially considerate driving styles. Participants also showed a desire for AVs to be communicative, providing feedback to reduce uncertainties. The findings provide implications for inclusive AV designs.

Leveraging Visual Language Models for Detecting Distracted Driving

  • Long Wang
  • Andry Rakotonirainy
  • Mohammed Elhenawy

Distracted driving is a leading cause of road crashes, yet traditional analysis often relies on labour-intensive manual video annotation. This study investigates the use of the Vision-Language Model (VLM) PaliGemma 2 to automatically detect driver distraction. Using video data from the Australian Naturalistic Driving Study (ANDS), which captures the driver’s steering wheel, we sampled frames and processed them through PaliGemma 2 to generate textual descriptions of behaviour. The model was fine-tuned by updating only its attention layers, improving recognition of distractions like phone use and adjusting controls while retaining general knowledge. Outputs were standardised to enable systematic analysis. Preliminary results show that VLM-based detection accurately identifies key distraction behaviours, greatly reducing the need for manual labelling. These findings support the use of VLMs in road safety research, driver monitoring, and AI-driven interventions, and mark one of the first applications of large-scale multimodal models to naturalistic driving data.
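
The attention-only fine-tuning strategy can be sketched generically in PyTorch; the toy model and the "self_attn" name filter below are placeholders rather than PaliGemma 2 or the authors' actual training setup.

```python
# Illustrative sketch: freezing everything except attention-layer parameters
# before fine-tuning. The tiny model and the name filter are placeholders,
# not PaliGemma 2 or the authors' setup.
import torch
import torch.nn as nn

model = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True), num_layers=2
)

for name, param in model.named_parameters():
    param.requires_grad = "self_attn" in name     # train only attention weights

trainable = [n for n, p in model.named_parameters() if p.requires_grad]
print(f"{len(trainable)} attention parameter tensors remain trainable")

optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-5
)
```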

Designing Multimodal In-Car Conversational Agents for Parents: A Simulator Study

  • Esther Carolina Kaehne
  • Amelie Patzer
  • Julia Lachmann
  • Ignacio Alvarez

This paper investigates how in-car conversational agents can support parents driving with young children—a group prone to distraction and cognitive overload. In a high-stress driving simulator study, 22 participants experienced a navigation assistant using four modalities: visual, verbal, non-verbal auditory, and haptic. Feedback was collected using a modified NASA TLX (DALI) and interviews. Verbal navigation instructions were rated most effective, reducing stress and distraction, while haptic feedback was seen as a useful secondary channel for urgent cues. Non-verbal auditory signals were largely ineffective in noisy environments. Participants favored multimodal combinations that balanced clarity with low cognitive demand. Based on these findings, we offer design recommendations for adaptive, family-aware in-car systems, emphasizing verbal communication, context sensitivity, and user customization.

Flow or Distraction? Exploring Multimodal Patterns of Driver States in Automated Driving

  • Pei-Ru Chen
  • I-Jui Lee
  • Wei-Chun Tai
  • Jo-Yu Kuo

Understanding driver states is critical for adaptive systems, yet research has focused on fatigue and distraction, overlooking positive engagement (e.g., flow state) during Non-Driving-Related Tasks. This study explores whether drivers under various automation levels remain in flow or become distracted, and how physiological signals vary across Society of Automotive Engineers Levels 0, 3, and 5. Participants completed all levels in a simulator while listening to a conversational podcast. Eye movements, heart-rate variability (HRV), vehicle control, and subjective measures from the NASA-TLX and the Flow Short Scale were recorded. The statistical results revealed reduced visual attention at Level 5 but stable flow and HRV. The unsupervised learning analysis revealed similar patterns. These findings suggest a latent state in which drivers exhibit flow while visually distracted, which conventional monitoring may miss. The findings enhance our understanding of driver experience and inspire flow-aware automotive interface design.

Trust Through Customization: A Benefit for Conservative Automated Vehicles

  • Lucas E. Provine
  • Sidney Scott-Sharoni
  • Bruce N. Walker
  • Sarah J Larkin

Affording users the opportunity to customize automated vehicle (AV) behavior may meaningfully improve trust and acceptance. We examined this in a driving simulator study with 49 students and 18 community members (N = 67), randomly assigned to customization or non-customization conditions. Customization participants selected from a range of behavior options and experienced one of three driving styles. Results showed that the effect of customization on trust depended on the driving style. Specifically, those who customized and received the conservative style reported significantly higher trust than those who passively received it. Exploratory analyses using the Unified Theory of Acceptance and Use of Technology (UTAUT) revealed that AV interest explained 8% additional variance in behavioral intentions beyond UTAUT attitude variables. These findings support prior work emphasizing trust in AV adoption and extend it by positioning trust as a potential distal predictor of behavioral intentions, acting through performance and effort expectancy.

Exploring User Needs in Fully Driverless Robotaxis: A Think-Aloud Study of First-Time On-Road Rides

  • Zhenyu Wang
  • Haolong Hu
  • Weiyin Xie
  • Xiang Chang
  • Peixuan Xiong
  • Dengbo He

As fully driverless robotaxi services emerge, understanding user needs under real-world conditions is critical. This study employed the think-aloud method to capture real-time cognitive and emotional responses during users’ first ride in a fully driverless robotaxi. Analysis of 30 participants’ verbal reports revealed three key user needs: perceived safety, efficiency, and comfort. We found that users’ trust can be enhanced by conservative driving behaviors and transparent human–machine interface (HMI) design. Conversely, inconsistencies between user expectations and driving behaviors, potentially stemming from technical limitations and individual differences, can undermine trust. Further, while conservative driving enhanced perceived safety, it can also reduce efficiency, especially in time-sensitive scenarios. Finally, comfort can be shaped by both driving behaviors and HMI interactivity. These findings highlight the importance of user-adaptive interfaces and context-aware driving strategies to balance perceived safety, efficiency, and comfort, thereby supporting the acceptance and deployment of driverless mobility services.

Enhancing Pedestrian Realism in Adverse-Weather Driving Simulations Using Motion Capture Data

  • Jakob Peintner
  • Carina Manger
  • Ignacio Alvarez
  • Andreas Riener

This work-in-progress aims to support more realistic and nuanced representations of pedestrian behavior in automated driving research. We present a motion capture dataset comprising N = 11 participants, who were recorded crossing a street under varying weather conditions while carrying different objects. The dataset includes 220 motion sequences with detailed gait data, analyzed with a focus on walking speed, cadence, and step length. Results indicate that pedestrian gait is significantly affected when using a smartphone and when exposed to rain without an umbrella. Future work will expand the scenario diversity and develop a toolchain based on the Open Simulation Interface (OSI) to integrate that data into simulation environments, and provide an open-source dataset, enabling more ecologically valid studies of human-vehicle interaction.

Time to Focus on the Road: Adaptive Monitoring Requests Design for Attentional Shift in Conditional Automated Driving

  • Yeonsu Lim
  • Sangyeon Kim
  • Sangwon Lee

In conditional automated driving, drivers may perform non-driving related tasks (NDRTs) but remain responsible for supervising the driving environment. A monitoring request (MR) redirects attention from NDRTs to the road before possible take-over situations. This study aims to design an adaptive MR interface that optimizes interruption, reaction, and comprehension (IRC) based on urgency and criticality in traffic contexts. A driving simulator study assessed participants’ perceived needs for interruption, reaction, and comprehension while performing dual tasks involving both NDRTs and monitoring tasks. Results showed that drivers’ interruption and reaction needs increased in high-urgency situations, while they preferred lower levels of both in low-urgency contexts. Comprehension remained consistently high across all scenarios, reflecting their strong desire for a broad understanding of driving risks. Based on these findings and insights from the interview, we finally propose an adaptive MR interface to enhance human–automated vehicle collaboration.

Understanding Driver Expectations and Preferences Around Data Transparency in Vehicular Digital Twins

  • Cansu Demir
  • Nicola Leschke
  • Glenda Hannibal
  • Mahdi Akil
  • Alexander Meschtscherjakov

While vehicles become more data-intensive, hardly any studies have systematically examined how drivers of such vehicles perceive the various privacy aspects related to the potential data usage. To address this gap, we present preliminary findings from an online survey of 30 drivers that investigates their expectations and preferences around data transparency in the context of vehicular digital twins (VDTs). We found that privacy aspects related to data access and control are important for potential VDT users, although they expressed uncertainty about their understanding of how the data is collected and used. Nevertheless, participants also showed some tolerance towards data collection and usage if it would potentially have a direct personal benefit. To inform future design of user-centered and privacy-aware VDT interfaces, we conclude that it is relevant to include and further discuss how the expectation of personal benefits might alter or even compromise privacy concerns.

Designing for Mutual Awareness: Early Prototyping of a Shared-Alert System for Preventing Vehicle-Cyclist Collisions

  • Wesly S. Menard
  • Tavienne O Millner
  • César Brasileiro de Alencar
  • Kristy Elizabeth Boyer

Collisions between cyclists and motor vehicles often result in significant emotional distress for the motorist and injuries or fatalities for cyclists, who rely mainly on passive equipment for protection. While radar-based sensing has long supported driver assistance, cyclist safety technologies have only recently begun to incorporate comparable capabilities for environmental awareness. However, most current solutions provide asymmetric safety, placing the responsibility for awareness on only one party. In this article, we present the preliminary design, implementation, and testing of a GPS-enabled, radio-based, audio-visual alert system that supports mutual awareness and reduces the likelihood of accidents by alerting both parties to each other’s presence and distance. We detail our design process, report on early prototyping, and reflect on our challenges. We hope this novel approach to mutual awareness in traffic safety will inspire new research into cooperative road user technologies and help shift the safety paradigm from individual to shared responsibility.

Towards Context-Aware Usability Assessments in Vehicles: Developing the CAUS Scale for Multimodal Interfaces

  • Akshay Madhav Deshmukh
  • Bastian Pfleging

Modern vehicle infotainment systems are increasingly multimodal, enabling drivers to interact through voice, touch, and gesture. While these input methods enhance usability, most existing tools overlook the dynamic driving context (such as traffic density, road complexity, and cognitive load), leading to incomplete evaluations. This work-in-progress introduces the Context-Aware Usability Scale (CAUS), a psychometric instrument for evaluating multimodal in-vehicle interfaces with sensitivity to environmental factors. CAUS is developed through a three-phase process: (1) item generation from a systematic literature review using the PRISMA framework, (2) expert validation via Content Validity Index (CVI) methods, and (3) planned empirical testing in a high-fidelity driving simulator.

The 20-item scale spans five core usability dimensions – efficiency, effectiveness, cognitive load, distraction, and satisfaction – and five context-sensitive items. Expert evaluations confirmed strong content validity (S-CVI > 0.90).

The final phase will assess reliability and factor structure. CAUS offers an ecologically valid, scalable tool for evaluating automotive HCI, with applications in smart mobility and mobile interface design.
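
For reference, the S-CVI/Ave figure follows the standard content-validity computation: the proportion of experts rating each item as relevant is averaged across items. The sketch below uses a hypothetical expert-rating matrix, not the CAUS data.

```python
# Illustrative sketch of the content validity computation (S-CVI/Ave);
# the ratings matrix is hypothetical, not the CAUS expert data.
import numpy as np

# rows = items, columns = experts, ratings on a 4-point relevance scale
ratings = np.array([
    [4, 3, 4, 4, 3],
    [3, 4, 4, 3, 4],
    [4, 4, 3, 4, 4],
])

relevant = ratings >= 3                     # expert judged the item relevant
i_cvi = relevant.mean(axis=1)               # item-level content validity index
s_cvi_ave = i_cvi.mean()                    # scale-level CVI (averaging approach)
print(f"I-CVI per item: {np.round(i_cvi, 2)}, S-CVI/Ave = {s_cvi_ave:.2f}")
```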

It Matters Who Is Behind The Wheel: Driver Monitoring Feature Analysis Using Explainable AI

  • Rafael Cirino Gonçalves
  • Jorge Pardo
  • Mohammed Mamdouh Zakaria Elhenawy
  • Jonny Kuo
  • Mohsen Azarmi
  • Mahdi Rezaei
  • Michael G. Lenné
  • Ronald Schroeter
  • Natasha Merat

This work-in-progress examines how gaze-based features and individual driver characteristics influence takeover performance prediction in partially automated vehicles. We present preliminary findings from a driving simulator study (N=33) that used a decision-tree (XGBoost) machine learning model and explainable AI techniques (permutation feature importance and SHAP analysis). Results show that driver profile features—particularly professional training, experience, and age—emerged as highly predictive of takeover readiness alongside traditional gaze metrics like fatigue indicators. While current Driver Monitoring Systems (DMS) approaches and regulatory recommendations focus on universal gaze thresholds, our preliminary analysis reveals that individual driver characteristics may be more important for predicting takeover performance. These findings suggest potential for developing adaptive automotive interfaces that adjust based on driver profiles rather than one-size-fits-all approaches. The preliminary results highlight the need for careful consideration when designing driver monitoring systems and automotive interfaces for partially automated vehicles.
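
A minimal sketch of this analysis pipeline, on synthetic data rather than the study's features, combines an XGBoost classifier with permutation importance and SHAP values; the feature names are hypothetical.

```python
# Illustrative sketch (synthetic data, not the study's dataset or features):
# an XGBoost takeover-readiness classifier inspected with permutation
# importance and SHAP values.
import numpy as np
import shap
import xgboost as xgb
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=300, n_features=6, random_state=0)
feature_names = ["gaze_on_road", "blink_rate", "age",
                 "experience_years", "professional_training", "fatigue_index"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = xgb.XGBClassifier(n_estimators=100, eval_metric="logloss").fit(X_train, y_train)

perm = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
shap_values = shap.TreeExplainer(model).shap_values(X_test)

for name, imp in sorted(zip(feature_names, perm.importances_mean), key=lambda x: -x[1]):
    print(f"{name}: permutation importance {imp:.3f}")
print("Mean |SHAP| per feature:", np.abs(shap_values).mean(axis=0).round(3))
```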

Designing a Gamified Driving Mode with Music and Mapping to Assist Drivers with ADHD on Monotonous Road Segments

  • Kaiyuan Tang
  • Kerui Chen
  • Qiuyu Lin
  • Yiqi Li
  • Zhou Jiang

This paper explores the design of the automotive user interface to assist drivers with Attention Deficit Hyperactivity Disorder (ADHD) who face unique difficulties in maintaining focus during low-demand driving conditions, such as long highway stretches and slow-moving traffic. We proposed an innovative solution by integrating a music game-based incentive system into the vehicle’s navigation interface. By synchronising driving periods with the duration of individual songs, the system segments extended low-demand driving into manageable tasks, providing short-term goals and real-time feedback on driving performance. Through gamification, our goal is to naturally stimulate the driver’s attention and enhance their driving behaviour. Preliminary tests indicate that this approach has the potential to reduce instances of attention shifts and improve the overall driving experience. The design can complement traditional treatments and training methods for drivers with ADHD.

V2G Validation: A Concept Validation Study of UI Features to Promote Vehicle-to-Grid Adoption

  • Monica P. Van
  • Clement Wong
  • David A. Shamma
  • Candice L. Hogan
  • Matthew L. Lee
  • Alexandre L. S. Filipowicz

Vehicle-to-grid (V2G) technology allows electric vehicle owners to store energy from the grid and return it when needed, offering potential economic and environmental benefits. However, user adoption may be limited by concerns about V2G’s impact on people’s daily routines and their vehicles. We conducted a mixed-methods “speed-dating” concept validation study—including storyboards, think-aloud protocols, and surveys—to examine how user interface (UI) features alleviate these concerns. Our findings show that users most value UI elements highlighting V2G’s financial benefits, followed by those tracking battery health and participation monitoring. Features emphasizing social comparisons and environmental or grid-level benefits were less appealing. Based on these insights, we propose design strategies to enhance perceived personal value, user trust, and engagement with informational V2G interfaces.

Analysis of Driver and Pedestrian Gesture Use in the Boston Area. Automated Vehicles May Need More Than Kinematics in Ambiguous Situations

  • Hatice Şahin İppoliti
  • Alexander Weibert
  • Dietrich Manstetten
  • Bryan Reimer
  • Pnina Gershon
  • Bruce L Mehler
  • Larbi Abdenebaoui

Roadways, despite their formal regulations, are dynamic spaces where humans interact beyond formal rules to resolve conflicts. In ambiguous situations, the right of way is often unclear. Self-driving vehicles in urban traffic introduce challenges to their coexistence with humans, indicating a need for greater social awareness in these vehicles. To investigate social interactions among roadway users, we analyzed a naturalistic driving dataset focusing on instances where drivers yielded to pedestrians, by noting gestures. Video analysis showed that gestures were more common in ambiguous situations than in regulated scenarios. Drivers used gestures to navigate the right of way efficiently, while pedestrians used them to express gratitude. These findings highlight the importance of understanding social expressions in designing socially aware self-driving vehicles.

An Initial Systematic Review of eHMIs for underrepresented VRUs: Elderly, Children & People with Disabilities

  • Benjamin Wei-Jie Kwok
  • Thomas Alexander Goodge
  • Ryan Yan-Hern Sim
  • Kan Chen
  • Jeannie Su-Ann Lee

As autonomous vehicles (AVs) become increasingly integrated into traffic environments, external human–machine interfaces (eHMIs) have emerged as a critical topic in facilitating safe interactions with vulnerable road users (VRUs), specifically children, the elderly, and persons with disabilities (PwDs). While recent studies have explored eHMI concepts designed to address the needs of these demographics, much of the existing research has centered on interface solutions, with less emphasis placed on the underlying methodologies used to evaluate them. As a result, these methodological approaches remain varied and underexamined. Rather than focusing on interface design outcomes, this paper presents a systematic review of 19 studies that shifts attention to study methodologies, evaluation tools, and participant engagement practices. The findings revealed notable inconsistencies in evaluation methods, limited adoption of inclusive design practices, and a lack of standardized protocols. By highlighting these gaps, this work provides actionable suggestions to guide future research and promote more standardized, inclusive evaluation approaches for future eHMI research.

Situation Awareness and Fatigue Detection for Semi-Automated Train Driving

  • Samuel Zhi-Hao Wong
  • Shi-Ru Chew
  • Eddie Yiu-Sun Ma
  • Benjamin Wei-Jie Kwok
  • Jeannie Su-Ann Lee
  • Kan Chen

Ensuring rail safety is a critical concern where train drivers must maintain high vigilance to prevent incidents such as collisions, derailments, and Signal Passed at Danger (SPAD) events. A SPAD event occurs when a train passes a stop signal without authorization. This can happen due to various factors, including fatigue, misjudgments, and driver errors. The current anti-SPAD system of the rail network operator employs video analytics to detect such risks; however, the size and cost of the device limit accessibility, particularly during emergencies. The proposed mobile application enhances manual train operations by enabling real-time monitoring of trackside objects through a smartphone mounted on the driving console. It utilizes real-time object detection (YOLOv11) and facial analysis optimized for mobile devices, thus reducing deployment time and improving situational awareness and safety during critical operations. It could also form the basis for modeling fatigue in partially automated driving for train operations and autonomous vehicles.
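
As a rough illustration of the detection component (not the application's actual pipeline), a lightweight YOLO11 model can be run on video frames as sketched below; the checkpoint name, video source, and confidence threshold are assumptions.

```python
# Illustrative sketch (not the application's pipeline): running a lightweight
# YOLO11 model on video frames. Weights name, source, and threshold are assumed.
import cv2
from ultralytics import YOLO

model = YOLO("yolo11n.pt")                 # nano variant, suited to mobile-class hardware
cap = cv2.VideoCapture("trackside.mp4")    # placeholder video source

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame, conf=0.4, verbose=False)   # detect trackside objects
    for box in results[0].boxes:
        cls_name = model.names[int(box.cls)]
        print(cls_name, float(box.conf))
cap.release()
```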

Multi-agent Based Content Recommendation for In-vehicle Infotainment System

  • Hiromu Ogawa
  • Yuta Hagio
  • Takashi Shoji
  • Shinjiro Urata
  • Shinji Hoshino
  • Yuji Ikeda
  • Takahiro Fukushima
  • Hisayuki Ohmata

Advances in autonomous driving technology are expected to increase in-vehicle free time, raising demand for content recommendation services tailored to user and driving contexts (e.g., conversation atmosphere, destination). However, many existing systems rely on predefined context-sharing schemes between platform providers and service providers, which cause cross-platform incompatibility and limit context expressiveness. This paper proposes a multi-agent architecture in which platform agents infer rich contextual information from sensor data using a large language model (LLM) and communicate it via natural language to service agents. We implemented a simulator and demonstrated the system at two exhibitions, followed by user evaluations. The results indicate that the proposed architecture enables flexible context sharing, effective content recommendation, and the generation of explanations for recommendation rationales without requiring predefined context definitions. These findings contribute to the design of adaptive, platform-agnostic in-vehicle infotainment services for future autonomous vehicles.
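
The context hand-off can be illustrated with a toy sketch in which a platform agent turns sensor data into a natural-language description that a service agent consumes; both functions below are hypothetical stand-ins for LLM-backed agents, not the authors' implementation.

```python
# Toy sketch of the natural-language context hand-off between a platform
# agent and a service agent; both functions are hypothetical stand-ins.
def infer_context(sensor_data: dict) -> str:
    """Platform agent: would call an LLM to summarize raw signals as free text."""
    return (f"Two passengers chatting casually, destination {sensor_data['destination']}, "
            f"about {sensor_data['eta_min']} minutes of free time remaining.")

def recommend(context_description: str) -> str:
    """Service agent: picks content from the free-text context (toy rule here)."""
    return "short podcast episode" if "minutes of free time" in context_description else "music"

context = infer_context({"destination": "Kyoto", "eta_min": 25})
print(context)
print("Recommendation:", recommend(context))
```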

CARSI 3.0: A Context-Driven Intelligent User Interface

  • Marco Wiedner
  • Adrian Fatol
  • Andri Furrer
  • Leon Eisemann
  • Emilio Frazzoli

Modern automotive infotainment systems offer a complex and wide array of controls and features through various interaction methods. However, such complexity can distract the driver from the primary task of driving, increasing response time and posing safety risks to both car occupants and other road users. Additionally, an overwhelming user interface (UI) can significantly diminish usability and the overall user experience. A simplified UI enhances user experience, reduces driver distraction, and improves road safety. Adaptive UIs that recommend preferred infotainment items to the user represent an intelligent UI, potentially enhancing both user experience and traffic safety. Hence, this paper presents a deep learning foundation model to develop a context-aware recommender system for infotainment systems (CARSI). It can be adopted universally across different user interfaces and car brands, providing a versatile solution for modern infotainment systems.

SESSION: Workshops

1st Workshop on Exploring the Potential of XAI and HMI to Alleviate Ethical, Legal, and Social Conflicts in Automated Vehicles

  • Krishna Sahithi Karur
  • Andreas Riener
  • Ignacio Alvarez
  • Philipp Wintersberger
  • Jeongeun Park
  • Seul Chan Lee

As high-level automated vehicles (AVs) become more present on our roads, resolving ethical, legal, and social implication (ELSI) conflicts in decision-making is a complex challenge. To explore possible solutions to such challenges, this workshop examines how Explainable AI (XAI) and Human-Machine Interfaces (HMIs) can improve transparency and increase trust, particularly in ambiguous situations. We propose a scenario-based workshop that invites participants to reflect on decision-making, expectations for explanations, and possible communication through HMI. Outcomes of this workshop will be a first step toward meaningfully adding XAI to ensure human-centered decisions in AVs.

Measuring Passengers’ Comfort and Perceived Safety in Automated Driving: Good Practices, Challenges, and Opportunities

  • Chen Peng
  • Pavlo Bazilinskyy
  • Yueteng Yu
  • Natasha Merat

Passenger comfort and perceived safety, as two psychological states, are crucial for user acceptance of automated driving. The accurate measurement of these passenger states contributes to human-centred designs for automated vehicles and developing predictive models for providing personalised settings. However, practical challenges and best practices are rarely discussed in the literature. This workshop aims to address this gap by creating a forum to synthesise current practices and explore novel, effective measurement approaches. The session includes expert talks on subjective, objective, and model-based measurement methodologies, and interactive breakout sessions. Participants will critically evaluate existing methodologies and design future multi-modal strategies, including incorporating artificial intelligence (AI). The workshop will produce numerous outcomes, including the collaborative development of a research outlook and a methodological paper for knowledge sharing.

Shaping In-Vehicle Behaviours through Activity-Centered Design

  • Ankit R. Patel
  • Pnina Gershon
  • Azra Habibovic
  • Fjollë Novakazi
  • Sakura Akahoshi
  • Areen Alsaid
  • Kyungjoo Cha

In today’s fast-paced society, most individuals commute by personal vehicle or public transportation, and design plays a significant role in meeting their preferences and requirements. Design should be inclusive and assimilative; its purpose is to propel innovation and progress while improving users’ quality of life. For this reason, vehicle development, and cabin (cockpit) design in particular, has generally followed a user-centered design approach. When user activities are prioritized instead, it becomes interesting to explore how users’ experience and behavior vary across different design approaches. Nevertheless, the existing literature has largely overlooked the impact of design approaches on “human activity”. The main objective of this workshop is therefore to examine the relationships between activity-centered design and user behavior.

Sustainable by Design: A Workshop on Life-Cycle-Aware Future Mobility

  • Melanie Berger
  • Patrick Ebel
  • Andreas Riener
  • Ignacio Alvarez
  • Philipp Wintersberger
  • Shadan Sadeghian

In this workshop, we invite researchers, designers, and practitioners to explore together how life-cycle thinking can contribute to the design of intelligent, sustainable mobility solutions. While current research primarily focuses on making the usage period of such solutions more sustainable, we aim to take a broader perspective by integrating sustainability from the earliest design stages through to the end of the vehicle’s life. Therefore, we will use speculative and critical design thinking to inspire and explore challenges as well as opportunities concerning sustainability at every stage of the life-cycle, from Design and Production to Usage, and End-Of-Life. Next, we will lead an ideation and prototyping session, followed by an interdisciplinary discussion reflecting on how intelligent technology can promote sustainable mobility. The outcomes will include potential design ideas and future research directions for incorporating life-cycle considerations into future mobility solutions.

The Future of In-Car Applications: How can Data & AI personalize the User Experience?

  • Marco Wiedner
  • Dominik Kratky
  • Euiyoung Kim
  • Emilio Frazzoli

As in-car applications evolve, the potential to personalize the user experience through data and AI is becoming a key focus in automotive research. This workshop will explore how real-time data from sensors, user preferences, and behavioral insights can be leveraged to create individual in-car experiences. Current trends in AI-driven personalization, including voice assistants, adaptive interfaces, and predictive algorithms, will be discussed. Participants will dive into challenges such as privacy, data security, and user acceptance, while also exploring new possibilities for enhancing the in-car experience. Through interactive discussions and hands-on case studies, this workshop aims to uncover innovative ways to use data and AI to enrich automotive user interfaces.

The UnScripted Trip: Fostering Policy Discussion on Future Human–Vehicle Collaboration in Autonomous Driving Through Design-Oriented Methods

  • Xinyan Yu
  • Julie Stephany Berrio Perez
  • Marius Hoggenmüller
  • Martin Tomitsch
  • Tram Thi Minh Tran
  • Stewart Worrall
  • Wendy Ju

The rapid advancement of autonomous vehicle (AV) technologies is fundamentally reshaping paradigms of human–vehicle collaboration, raising not only an urgent need for innovative design solutions but also for policies that address corresponding broader tensions in society. To bridge the gap between HCI research and policy making, this workshop will bring together researchers and practitioners in the automotive community to explore AV policy directions through collaborative speculation on the future of AVs. We designed The UnScripted Trip, a card game rooted in fictional narratives of autonomous mobility, to surface tensions around human–vehicle collaboration in future AV scenarios and to provoke critical reflections on design solutions and policy directions. Our goal is to provide an engaging, participatory space and method for automotive researchers, designers, and industry practitioners to collectively explore and shape the future of human–vehicle collaboration and its policy implications.

What makes a ready driver? A deep dive into the measurement and validation of readiness estimation for driver monitoring systems of partially automated (SAE level 2) vehicles

  • Rafael Cirino Gonçalves
  • Courtney Michael Goodridge
  • Jorge Pardo
  • Amélie Reher
  • Jonny Kuo
  • Audrey Bruneau
  • Natasha Merat

Recent regulations and Euro NCAP requirements enforcing the inclusion of driver monitoring systems (DMS) in vehicle fleets have substantially increased the demand for a better understanding of how driver state changes during automated driving, and how this can be accurately measured. The term “readiness” is commonly associated with the likelihood of a successful recovery of manual control from vehicle automation and used as a proxy for determining driver state during SAE Level 2 automation. However, the implementation of a unified holistic metric to predict drivers’ state has faced several challenges. For example, there is a lack of consensus on how readiness is defined and measured, as well as the absence of systematic validation protocols. To address this gap, this workshop will facilitate a discussion about the challenges and potential solutions regarding driver readiness measurement and validation, discussing the practicalities for implementation in DMS products.

SESSION: Interactive Demos

Interactive Visualization of Real-World Automated Driving Data using AWSIM and VARJO-XR4

  • Leon Sebastian Fuessner
  • Togtokhtur Batbold
  • Ronald Schroeter
  • Sebastien Glaser

We present a high-fidelity interactive visualization of real-world automated driving data, developed using AWSIM and Autoware. The system replays sensor data (LiDAR, camera, GPS) captured during a drive through the Mount Cotton closed circuit near Brisbane, Queensland, Australia. These data are synchronized with a Unity-based simulation of the same environment, enabling detailed visualization and analysis of edge cases in a safe and repeatable virtual setting. This demo integrates virtual reality (VR) to immerse users in the simulation using a VARJO-XR4 headset, providing intuitive access to recorded data streams, vehicle telemetry, and ROS2 communication channels. Our aim is to showcase how immersive, data-driven digital twins can enhance interaction design, debugging, and human-in-the-loop evaluation in autonomous driving. The platform offers practical value for researchers and practitioners in simulation-based testing, teleoperation, and human-machine interface (HMI) development.

SimV2G: Personalized Interface to Simulate and Promote Vehicle-to-Grid Participation

  • Clement Wong
  • Jonathan Q. Li
  • Amalie Trewartha
  • Steven B. Torrisi
  • Alexandre L. S. Filipowicz

We demonstrate a web app user interface that simulates the benefits and trade-offs of Vehicle-to-Grid (V2G) given an individual’s electric vehicle driving habits. Users input their own driving and charging data (or explore example usage patterns), specify their battery type, and indicate their availability for V2G. The interface then generates a personalized V2G schedule and presents both the potential financial benefits (e.g., earnings from supplying energy to the grid) and impact on battery health. This interactive and personalized interface aims to mitigate known barriers to V2G participation and increase consumer interest in the technology.
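
To make the scheduling idea concrete, here is a minimal, purely illustrative sketch of how availability windows and hourly prices might be turned into a discharge schedule with a simple earnings and battery-wear summary; the greedy strategy, price data, and degradation proxy are assumptions, not the app's actual model.

```python
# Purely illustrative sketch of turning availability windows and hourly prices
# into a discharge schedule with an earnings and battery-wear summary. The
# greedy strategy, price data, and degradation proxy are assumptions, not the
# app's actual model.
from typing import Iterable

def v2g_schedule(available_hours: Iterable[int],
                 price_per_kwh: dict,
                 max_discharge_kwh_per_hour: float = 5.0,
                 daily_energy_budget_kwh: float = 15.0):
    """Return (hour, kWh) pairs, preferring the highest-priced available hours."""
    schedule, remaining = [], daily_energy_budget_kwh
    for hour in sorted(available_hours, key=lambda h: price_per_kwh[h], reverse=True):
        if remaining <= 0:
            break
        energy = min(max_discharge_kwh_per_hour, remaining)
        schedule.append((hour, energy))
        remaining -= energy
    earnings = sum(e * price_per_kwh[h] for h, e in schedule)
    cycle_equivalent = sum(e for _, e in schedule) / 60.0  # crude wear proxy
    return schedule, earnings, cycle_equivalent

# Example: available 17:00-21:00 with evening peak prices.
prices = {17: 0.30, 18: 0.45, 19: 0.50, 20: 0.35}
print(v2g_schedule(prices.keys(), prices))
```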

SESSION: Videos

Reconfigurable Roadways: Envisioning the Future of V2X-Driven Autonomous Urban Infrastructure through an Interactive Tabletop Simulator

  • Jiyeon Lee
  • Jiwon Kang
  • Gyeonghun Min
  • Kyung Yun Choi

Autonomous driving and cooperative platooning are expected to transform urban mobility, significantly reducing traffic density in populated cities. As a result, large portions of road infrastructure could become underutilized, presenting valuable opportunities for spatial reallocation. These spaces could host Tactical Urbanism strategies supporting socially adaptive, community-centered urban transformation. We introduce Reconfigurable Roadways (RR)—an adaptive system that reallocates road space by expanding pedestrian zones and dynamically modulating vehicle flows in response to real-time demand. Using V2X-enabled traffic coordination, RR offers a scalable framework for highly flexible, multi-use streets. To illustrate RR and clarify current technology and diverse future possibilities, we present an interactive tabletop simulator of an urban mobility scene. The tangible simulator lets users rearrange modular blocks—each representing a roadway role—to explore near-future scenarios and observe simulated changes in traffic behavior and urban form. The platform fosters intuitive understanding of RR principles and engages users in shaping future urban mobility systems.

Driving without a Steering Wheel: Designing a Multilevel Feedback Interface for Passenger Anxiety in the Transition to Level 4 Autonomous Mobility

  • Juang Lee
  • Yeonwoo Kang
  • Seongchan Nam
  • Soonbum Kwon
  • Kyung Yun Choi

As autonomous mobility approaches Level 4, we are now in a transitional phase shaped by fundamental changes in how passengers perceive safety, trust, and control. This study proposes a concept of multilevel feedback interface to alleviate passenger anxiety toward autonomous systems during this critical shift. The interface adopts a hybrid multilevel feedback approach, consisting of four distinct modes–Standby, Infotainment, Indicating, and Control–that respond to varying levels and sources of passenger anxiety through tactile and graphical cues. We present the concept, design process, and implementation of this interface, which aims to envision future autonomous mobility systems that are emotionally responsive, trust-enhancing, and passenger-centered.

Turning Perceptions into Design Recommendations: Exploring the Implications of Multisensory Stimuli on Alertness and Mood in Manual and Partially Automated Vehicles

  • Jing Zang
  • Bowen Zheng
  • Mansoor Nasir
  • Ksenia Kozak
  • Denny Yu
  • Brandon J. Pitts

Driver alertness and mood are critical for driving safety and experience. Despite interests in multisensory interventions to elevate alertness and mood, few studies have combined multisensory modalities and evaluated their joint impact in real-world driving. This study introduces and evaluates a novel in-vehicle multisensory experience that synchronizes sound, vibration, light, scent, and airflow to enhance driver engagement. Using a 2×2 within-subject design, we conducted a naturalistic study with 45 participants to examine the effect of driving mode (manual vs. assisted-driving) and experience duration (one song vs. two songs) on the driver’s perceptions and responses. Self-reported alertness and mood were collected alongside qualitative user feedback. Overall, alertness and arousal were significantly increased following the onset of the experience. Arousal remained elevated throughout the experience, while alertness diminished at the end. The positive effect was more pronounced during manual driving. These findings offer insights for future development of adaptive, user-centered in-vehicle systems.

Context-Aware Take-Over Requests for Promoting Emergency Corridor Formation in Level 3 Automated Vehicles

  • Tuğcan Önbaş
  • Michael A. Gerber
  • Andreas Riener

In some countries, emergency corridor formation is a legally mandated maneuver to enable emergency vehicle access during congestion. In countries such as Turkey, limited public awareness complicates compliance. As vehicles progress toward Level 3 automation, Take-Over Requests (TORs) prompt drivers to resume manual control, but most designs lack cultural or contextual relevance. This study examines whether context-aware, multimodal TORs improve compliance with emergency corridor formation during take-over situations. Using video-based simulations with five licensed drivers experienced in Turkish traffic, participants responded to three TOR conditions varying in modality and contextual specificity. Measures included reaction time, compliance accuracy, and subjective ratings, complemented by semi-structured interviews. Results indicate that context-aware multimodal TORs led to higher compliance and were perceived as clearer and more trustworthy than generic prompts. Findings highlight the importance of adaptive TOR designs and inform future research on culturally sensitive automated vehicle interfaces.

AwareDoor: Enhancing Vehicle-Exit Safety via Multimodal Risk Communication

  • Hang Yu
  • Krishnakant Shedge
  • Nourhan Mohamed
  • Nada Samak
  • Waleed Binsad
  • Simantini Bhosale
  • Eunji Kim
  • Shreya Kondvilkar
  • Andreas Riener

The video shows AwareDoor, an intelligent exit assistance system for automated vehicles. It combines predictive intent detection and hazard assessment to deliver clear multimodal feedback that guides passengers when exiting. The system detects approaching dangers and user intentions, providing timely visual and auditory cues to prevent accidents. A mixed-method evaluation showed that clear, proactive feedback increased trust and supported safer decisions, while distractions and false alerts reduced confidence. The findings highlight the value of integrating predictive sensing and intuitive communication to improve safety and trust in automated mobility. AwareDoor illustrates how intelligent in-vehicle systems enhance situational awareness and empower users to exit confidently.

COCOON : Emotional Survival in the Age of Automated Care

  • Hyeonjoon So
  • Eugene Han
  • Haon Jo
  • Kyung Yun Choi

Cocoon is a speculative autonomous mobility system designed to support children in a future where traditional caregiving structures have collapsed. As birth rates decline and parental presence diminishes due to socioeconomic pressures, children are increasingly left to navigate daily life alone. Cocoon addresses this by offering a responsive, emotionally intelligent space that interacts through voice and gesture recognition. It features a transformable seating system, an omnidirectional treadmill, and a character-based interface that adapts to a child’s physical and emotional state. By incorporating affective computing and embodied interaction, Cocoon fosters a sense of companionship and autonomy during transit. Rather than replacing human care, the project critically explores what it means for technology to assume caregiving roles. Through scenario-based user flows and immersive storytelling, Cocoon presents a provocative design fiction that challenges conventional mobility paradigms and raises ethical questions about the automation of empathy and care in children’s everyday lives.

Ride Recall: An Aftermarket Item-Reminder System for Shared & Rented Cars

  • Hannah Maria Müller
  • Natalie Sachin Valsangkar
  • Warda Rashid
  • Smit Bhanderi
  • Lukas Peter Berghegger
  • Ritvik Rajiv Kolhe
  • Maham Malik
  • Meetkumar Sukani
  • Berde Volkan
  • Andreas Riener

We present Ride Recall, a retrofit system designed to help users recover forgotten items in shared or rented vehicles. The accompanying video demonstrates the system in a Wizard-of-Oz setup, combining in-cabin cameras with pressure-sensitive mats. When an unattended item is detected after the user exits, the system sends a mobile alert showing the item and its location. Users can then block the vehicle, return to retrieve the item, or dismiss the alert. This enables timely recovery before the vehicle is reused, addressing a common challenge in shared mobility contexts.

When and How to Explain: Designing Human-Machine Interfaces for Pilotless Urban Air Mobility

  • Jongwoo Park
  • Siyoung Kim
  • Young Woo Kim

Urban Air Mobility (UAM) is expected to operate without onboard pilots, raising concerns about passenger trust in automation. Building on findings from autonomous vehicle (AV) research, this study investigates whether explanation-based human-machine interfaces (HMIs) can enhance trust in UAM. We developed a virtual simulation with five conditions, varying the type (how vs. how+why) and timing (pre-event vs. post-event) of system-generated explanations. A baseline condition with no explanation was also included. Participants will experience multiple UAM issue events (e.g., fog, gusts), during which trust, transparency, and competence are measured. We anticipate that both how and why explanations will increase trust, with why and pre-event explanations expected to have stronger effects. Our findings will inform the design of explainable UAM systems that foster psychological comfort and acceptance.

Inclusive Vehicle Dashboard Design: Supporting Neurodiverse ADHD Drivers Through Visual Simplicity

  • Vidhi Raghvani
  • Michael A. Gerber
  • Andreas Riener

As vehicle instrument clusters become increasingly digital and complex, concerns are growing about their impact on neurodiverse drivers. This study examines how two dashboard designs – a minimalist and a high-information-density cluster – affect the situational awareness of adults with ADHD in SAE Level 0-2 vehicles. In an online session, six study participants were presented with mid-fidelity dashboard prototypes, during which their situational understanding (assessed via the SART) and verbal feedback were recorded. Results indicate that the low-density dashboard significantly improved SART scores and lowered self-reported cognitive load compared to the high-density version. Participants perceived the simplified interface as easier to process and less distracting. These results suggest that reducing visual complexity in vehicle dashboards can enhance safety and comfort for neurodiverse users. The study underscores the need for adaptive, cognitively accessible interfaces in automotive design and points to the value of inclusive design options in future research.

Dynamic Head-Up Display Design: Cognitive Load as a Parametric Driver

  • Laetitia Pina-Lydia Solombrino
  • Michael A. Gerber
  • Andreas Riener

Head-up displays (HUDs) are increasingly used in vehicles to present essential information within the driver’s field of view, aiming to reduce distraction and cognitive load. However, concerns remain that adaptive, dynamic HUDs may unintentionally increase cognitive workload if changes are unpredictable or visually intrusive. This exploratory study investigates how subtle, context-aware HUD adaptations affect perceived workload during simulated driving. Six licensed drivers navigated urban, rural, and highway environments using one of three HUD types (static, mildly adaptive, or highly adaptive) in a Wizard-of-Oz setup. Subjective workload was assessed using weighted NASA-TLX scores and semi-structured interviews. While statistical analyses showed no significant differences across conditions, descriptive results suggested that predictable, minimal adaptations may reduce perceived effort and frustration. Participants emphasized the importance of clarity, consistency, and user control. Despite limitations in sample size and HUD placement, findings offer preliminary support for restrained, context-sensitive HUD design and point to the need for further user-centered research.

Investigation of Habituation Effects of Visual Variations of Cues to the Fallback-Driver for Automated Level 3 Vehicles

  • Nourhan Mohamed
  • Michael A. Gerber
  • Andreas Riener

In L3 conditional automated driving, the fallback driver may disengage from the driving task but must remain ready to resume control when requested. A major challenge is maintaining this readiness without compromising user experience. Most systems lack ongoing proactive cues to sustain engagement, and existing cues are often so monotonous that drivers quickly habituate to them. This video paper examines whether varying the visual design of periodic engagement cues can reduce habituation and sustain driver readiness. In a within-subjects experiment (N=5), participants experienced a Wizard-of-Oz video driving simulation with head-up display messages presented every three minutes, using either (a) a fixed or (b) a variable alert design. Results from a 3-point Likert scale and post-test semi-structured interviews suggest that variation reduces perceived repetition, supports engagement, and is not perceived as annoying. Our findings indicate that subtle, varied visual cues may be a promising approach to maintain fallback readiness.

SESSION: Student Research Track

Student Research Track: Driving Context Framework for Context-Based In-Vehicle Applications

  • Andri Furrer
  • Marco Wiedner

Understanding driving context is essential for developing automotive UI applications. Existing models like VSS and VSSo offer partial coverage but lack support for dynamic, out-of-vehicle context. This paper presents a novel, extensible framework for categorizing in-vehicle and out-of-vehicle, static and dynamic context variables. Grounded in the concepts of observability and dynamism, the framework guides the selection of relevant context parameters for specific use cases. A p-value-based method is proposed to support data-driven variable selection. The lightweight structure ensures modularity and transferability, providing a practical foundation for designing context-aware applications and contributing a standardized approach to context modeling in the automotive domain.
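
The p-value-based selection step might look roughly like the sketch below, which assumes numeric context variables and a numeric target and uses a Pearson correlation test with alpha = 0.05 as illustrative choices; the framework's actual statistical test and threshold are not specified here.

```python
# Sketch of p-value-based context-variable selection, assuming numeric context
# variables and a numeric target. Pearson correlation and alpha = 0.05 are
# illustrative choices; the framework's actual test and threshold may differ.
import pandas as pd
from scipy.stats import pearsonr

def select_context_variables(df: pd.DataFrame, target: str, alpha: float = 0.05):
    """Return (variable, p-value) pairs significantly associated with `target`."""
    selected = []
    for col in df.columns:
        if col == target:
            continue
        _, p_value = pearsonr(df[col], df[target])
        if p_value < alpha:
            selected.append((col, p_value))
    return sorted(selected, key=lambda item: item[1])

# Hypothetical usage for a media-volume use case:
# relevant = select_context_variables(drive_log, target="preferred_volume")
```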

Student Research Track: Twin the Drive: An HGBR-based Machine Learning Model as a Vehicular Digital Twin for Risk Detection

  • Cansu Demir
  • Narges Mehran
  • Alexander Meschtscherjakov

Vehicular Digital Twins (VDTs) can enhance safety in automated driving by modeling and analyzing driving behavior in real time. However, most current approaches rely on complex machine learning (ML) models that are difficult to interpret and deploy in human-centered interfaces. In this paper, we present a proof-of-concept VDT that uses a simple, interpretable ML model from the category of boosting to detect three key risk indicators: unsafe time headway, harsh braking, and time-to-collision (TTC) for both consecutive vehicles. Our approach is motivated by the need for real-time risk detection in cooperative in-vehicle intelligent agents (IVIAs). We evaluate the model using the NGSIM dataset and show that it closely approximates unsafe time headway and braking events, with a conservative prediction bias that may be beneficial in risky contexts. TTC risks are under-predicted, reflecting limitations in modeling complex interactions with simple features. This research demonstrates the feasibility of interpretable VDTs for near real-time behavioral risk detection, laying the groundwork for future integration with IVIAs.
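
Assuming "HGBR" denotes a histogram-based gradient boosting regressor (for example, scikit-learn's HistGradientBoostingRegressor), a minimal sketch of the risk-prediction setup could look as follows; the feature names, the 1.5 s headway threshold, and the train/test split are illustrative rather than the paper's exact pipeline.

```python
# Sketch assuming "HGBR" denotes a histogram-based gradient boosting regressor
# (scikit-learn's HistGradientBoostingRegressor). Feature names, the 1.5 s
# headway threshold, and the split are illustrative, not the paper's pipeline.
import pandas as pd
from sklearn.ensemble import HistGradientBoostingRegressor
from sklearn.model_selection import train_test_split

FEATURES = ["speed", "leader_speed", "gap_distance", "acceleration"]  # hypothetical
TARGET = "time_headway"  # could likewise be TTC or braking intensity

def train_risk_model(df: pd.DataFrame) -> HistGradientBoostingRegressor:
    X_train, X_test, y_train, y_test = train_test_split(
        df[FEATURES], df[TARGET], test_size=0.2, random_state=0)
    model = HistGradientBoostingRegressor(max_iter=200)
    model.fit(X_train, y_train)
    print("R^2 on held-out data:", model.score(X_test, y_test))
    return model

def flag_unsafe_headway(model, snapshot: pd.DataFrame, threshold_s: float = 1.5):
    """Flag predicted headways below a (hypothetical) 1.5 s safety margin."""
    return model.predict(snapshot[FEATURES]) < threshold_s
```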

Student Research Track: Ambient Lighting Based, Low Cost Blind Spot Monitoring System

  • Dinesh Bhathad

Recent developments in passenger vehicles have seen wide integration of Advanced Driver Assistance Systems (ADAS). However, many drivers, particularly elderly drivers, learners, and economically constrained users, still operate vehicles lacking driver assistance features such as blind spot monitoring. This project presents a retrofittable, human-centered ambient visual interface that provides lane-specific blind spot feedback using a low-cost light bar system and computer vision-based proximity detection. Our virtual prototype divides the rear view into left and right zones and communicates the proximity of vehicles in blind spots through subtle, pillar-to-pillar light gradients. This design supports glanceable, non-intrusive feedback to reduce the cognitive and visual demands associated with frequent mirror checks. Grounded in research on robotic human-machine interfaces (RHMI), ambient displays, and situational awareness, the system promotes co-perception over automation, aligning with findings on trust, usability, and inclusive safety design. This approach offers an inclusive, scalable retrofit solution for enhancing driver safety in under-supported vehicles and contributes to the broader HCI discourse on retrofit interaction design and cognitive ergonomics in driving.
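
As a rough sketch of the lane-specific ambient feedback loop (not the project's code), the snippet below bins detections from the rear view into left/right zones and converts proximity into light-bar intensity; the zone split, distance scaling, and set_leds() hardware hook are hypothetical.

```python
# Hypothetical sketch of the lane-specific ambient feedback loop: detections
# from the rear view are binned into left/right zones and proximity is mapped
# to light-bar intensity. Zone split, distance scaling, and set_leds() are
# placeholders; the paper's prototype is virtual.
def zone_of(x_center: float, frame_width: int) -> str:
    """Assign a detection to the left or right blind-spot zone."""
    return "left" if x_center < frame_width / 2 else "right"

def intensity_from_distance(distance_m: float, max_range_m: float = 15.0) -> float:
    """Closer vehicles yield brighter light; clamp to [0, 1]."""
    return max(0.0, min(1.0, 1.0 - distance_m / max_range_m))

def update_light_bars(detections, frame_width, set_leds) -> None:
    levels = {"left": 0.0, "right": 0.0}
    for x_center, distance_m in detections:        # from the CV proximity stage
        zone = zone_of(x_center, frame_width)
        levels[zone] = max(levels[zone], intensity_from_distance(distance_m))
    for zone, level in levels.items():
        set_leds(zone, level)                       # e.g., drive an A-pillar LED strip

# Example: two vehicles detected, one close on the right.
update_light_bars([(200, 12.0), (1100, 4.0)], 1280, lambda z, l: print(z, round(l, 2)))
```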