====== Language of Driving Concepts ======

The Language of Driving (LoD) describes the implicit and explicit signals that allow autonomous vehicles and humans to understand each other in mixed traffic [1–3]. Traditionally, human-to-human communication in traffic has relied on cues such as eye contact, gestures, micro-accelerations, and auditory signals. As driverless vehicles enter mixed traffic, this established framework becomes insufficient, raising new challenges in ensuring mutual understanding and trust between humans and machines.

===== Semantics and Pragmatics of Driving =====

Driving behavior can be analyzed as a layered communication system:
  * **Phonetics:** visible cues such as lights or motion rhythm.
  * **Semantics:** the meaning of those cues (e.g., yield or proceed).
  * **Pragmatics:** how the meaning of a cue changes with context and environment.
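The three layers above can be illustrated as a small interpreter that maps an observed cue to a contextualized meaning. This is only a conceptual sketch: the cue names, meanings, and the pragmatic rule are hypothetical examples, not part of any standard or of the cited papers.

```python
# Illustrative sketch of the phonetics/semantics/pragmatics layering.
# All cue names and rules are hypothetical examples, not a standard.

# Phonetic layer: the raw observable cues a pedestrian can perceive.
CUES = {"green_arrows", "blinking_red_cross", "vertical_bars"}

# Semantic layer: the context-free meaning of each cue.
SEMANTICS = {
    "green_arrows": "invitation_to_cross",
    "blinking_red_cross": "do_not_cross",
    "vertical_bars": "pedestrian_zone_awareness",
}

def interpret(cue: str, context: dict) -> str:
    """Pragmatic layer: refine the semantic meaning using context."""
    meaning = SEMANTICS.get(cue, "unknown")
    # Example pragmatic rule: an invitation to cross is only trustworthy
    # once the vehicle has actually come to a stop.
    if meaning == "invitation_to_cross" and not context.get("vehicle_stopped", False):
        return "wait_until_vehicle_stops"
    return meaning

print(interpret("green_arrows", {"vehicle_stopped": True}))   # invitation_to_cross
print(interpret("green_arrows", {"vehicle_stopped": False}))  # wait_until_vehicle_stops
```

The same phonetic cue thus yields different pragmatic readings depending on the situation, which is exactly why context-free signal design alone is not enough.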
  
An autonomous vehicle must infer human intent and simultaneously display legible intent of its own [2]. Driving is a complex, interactive behavior shaped by both social conventions and environmental factors: each agent in traffic, human or autonomous, must interpret the intentions of others and act predictably. In human-driven traffic, intentions are often conveyed through subtle actions such as changes in speed, head movements, or hand gestures.

===== Cultural Adaptation and Universality =====

Driving "languages" vary globally; interfaces must therefore maintain universal meaning while allowing local adaptation [1]. Behavior should be recognizable but not anthropomorphic, preserving clarity across cultures [3].
  
===== LoD Implementation Examples =====

Because AVs lack these human cues, the LoD must evolve into a structured communication system combining //external Human–Machine Interfaces (eHMI)// and //internal HMIs (iHMI)//. These systems should express intent (e.g., yielding, accelerating, or stopping) in intuitive, culturally neutral ways that do not rely on language or prior training; for instance, pedestrians must understand without ambiguity whether an AV intends to stop or proceed.

Field experiments using light-based cues have shown that simple color and motion patterns effectively communicate awareness and yielding. Participants reported improved understanding when signals were consistent and redundant across modalities [2].

{{:en:safeav:hmc:iseauto_crossing_scenario.jpg?600|Typical pedestrian crossing scenario with the ISEAUTO AV shuttle using visual LoD cues (source: Kalda et al., 2022)}}
  
===== Defining LoD Through Experiments =====

The **Tallinn University of Technology (TalTech)** pilot project, conducted with the **ISEAUTO** autonomous shuttle, provided real-world insight into LoD interactions. The pilot involved 539 passengers and 176 pedestrians, combining surveys with on-site observations and expert focus groups drawn from national transport authorities.
  
A key component of the experiment was the development of LED-based signaling patterns to communicate the shuttle's intent to pedestrians. The shuttle used three distinct visual symbols, displayed on front panels, to indicate its awareness and behavior (see Table 1).

^ Trigger ^ Situation ^ Visualization / Meaning ^
| Vehicle approaching a pedestrian crossing (defined by map or V2I) | Shuttle nearing the crosswalk | **Vertical bars**: awareness of the pedestrian zone |
| Objects detected near the crossing | Shuttle slowing and preparing to yield | **Green arrows**: invitation to cross |
| Object detected on the path or potential conflict | Shuttle stopped, possible danger | **Blinking red cross**: do not cross |

Pedestrian interviews revealed that while most participants understood the general meaning of the light signals, some lacked confidence in their interpretation, indicating the need for clearer, standardized visual cues.
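The trigger-to-symbol mapping in Table 1 behaves like a small priority-ordered state machine. A minimal sketch of that logic follows; the `Perception` flags and function names are hypothetical stand-ins for what would really come from the map/V2I layer and the object-detection stack, not the ISEAUTO implementation.

```python
from dataclasses import dataclass
from enum import Enum

class Display(Enum):
    """Front-panel symbols from Table 1."""
    VERTICAL_BARS = "awareness of pedestrian zone"
    GREEN_ARROWS = "invitation to cross"
    BLINKING_RED_CROSS = "do not cross"
    NONE = "no signal"

@dataclass
class Perception:
    # Hypothetical perception flags; real inputs would come from
    # the map/V2I layer and the object-detection stack.
    near_crossing: bool = False
    objects_near_crossing: bool = False
    conflict_on_path: bool = False

def select_display(p: Perception) -> Display:
    """Pick the panel symbol, checking the most safety-critical trigger first."""
    if p.conflict_on_path:
        return Display.BLINKING_RED_CROSS   # stopped, possible danger
    if p.objects_near_crossing:
        return Display.GREEN_ARROWS         # slowing, preparing to yield
    if p.near_crossing:
        return Display.VERTICAL_BARS        # approaching the crosswalk
    return Display.NONE

print(select_display(Perception(near_crossing=True)).name)  # VERTICAL_BARS
```

Ordering the checks from most to least safety-critical ensures that a detected conflict always overrides an invitation to cross, mirroring the "redundant and consistent" principle reported in the field experiments.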
- +
===== Public Perception and Trust =====

Survey results showed that **75% of passengers felt very safe** aboard the autonomous bus, even without a safety operator, while **60% indicated they would use a fully autonomous service** if it were proven safe. Among pedestrians, **68% reported feeling comfortable** sharing the road with an AV, although nearly one third expressed hesitation due to uncertainty about the vehicle's intentions.

These findings underline a crucial aspect of LoD: **trust emerges from clarity**. Both visual and auditory signals must be immediately understandable to all demographics, including children, elderly people, and those unfamiliar with the technology. Inclusivity therefore becomes a design imperative.

===== Towards a Common Language =====

Defining the Language of Driving is an ongoing interdisciplinary task that combines behavioral studies, human–machine communication research, and simulation-based validation. Mixed-reality (MR) tools have proven valuable for rapidly prototyping LoD interfaces and testing user reactions safely: they allow repeatable, diverse, and inclusive testing scenarios, offering a scalable pathway toward standardization.

Ultimately, a **universally recognized LoD**, supported by intuitive HMIs and adaptive communication cues and validated through real-world and XR experiments, will be a key enabler of public acceptance and of the safe integration of AVs into everyday traffic.

===== Future Development =====

Formalizing LoD as a measurable framework is essential for the verification, standardization, and interoperability of automated driving behavior [3].
  
----

**References:**

[1] Razdan, R., et al. (2020). //Unsettled Topics Concerning Human and Autonomous Vehicle Interaction.// SAE EDGE Research Report EPR2020025.
[2] Kalda, K., Sell, R., Soe, R.-M. (2021). //Use Case of Autonomous Vehicle Shuttle and Passenger Acceptance.// Proceedings of the Estonian Academy of Sciences, 70(4).
[3] Kalda, K., Pizzagalli, S.-L., Soe, R.-M., Sell, R., Bellone, M. (2022). //Language of Driving for Autonomous Vehicles.// Applied Sciences, 12(11), 5406. https://doi.org/10.3390/app12115406
  
en/safeav/hmc/lang.1760984865.txt.gz · Last modified: 2025/10/20 18:27 by raivo.sell
CC Attribution-Share Alike 4.0 International