====== Language of Driving Concepts ======
The Language of Driving (LoD) describes the implicit and explicit cues through which road users communicate intent in traffic, and which an autonomous vehicle must both read and produce.

===== Semantics and Pragmatics =====

Driving behavior can be analyzed as a layered communication system:

  * **Semantics:** the meaning of an individual signal, such as a turn indicator, brake lights, or a yielding gesture.
  * **Pragmatics:** how context shapes that meaning; the same signal can be read differently at a crosswalk, a roundabout, or a highway merge.

An autonomous vehicle must infer human intent and simultaneously display legible intent of its own.
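The two layers above can be sketched in code: a lookup table gives each signal its context-free meaning (semantics), and a small interpreter refines that meaning by situation (pragmatics). The signal and context names below are illustrative, not taken from any deployed system.

```python
# Minimal sketch of the semantics/pragmatics split (illustrative names).

# Semantics: context-free meaning of each eHMI signal.
SEMANTICS = {
    "vertical_bars": "aware of pedestrian zone",
    "green_arrows": "yielding",
    "red_cross": "do not cross",
}

def interpret(signal: str, context: str) -> str:
    """Pragmatics: refine the base meaning using the traffic situation."""
    meaning = SEMANTICS[signal]
    if signal == "green_arrows" and context == "crosswalk":
        return meaning + ": invitation to cross"
    if signal == "green_arrows" and context == "mid-block":
        # Outside a marked crossing, the same signal is only an intent cue.
        return meaning + ": slowing, no crossing invitation"
    return meaning

print(interpret("green_arrows", "crosswalk"))
# -> yielding: invitation to cross
```

The point of the split is that the lookup table alone is not enough: a pedestrian (or a vehicle) must combine the signal with the situation to act safely.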
===== Cultural Adaptation and Universality =====

Driving conventions vary between regions, so LoD cues must adapt to local expectations while remaining universally readable.
Behavior should be recognizable but not anthropomorphic, preserving clarity across cultures [3].
===== LoD Implementation Examples =====

The absence of human cues such as eye contact and gestures in AVs requires a new communication framework to ensure mutual understanding in traffic. The LoD, therefore, must evolve into a structured communication system combining *external Human–Machine Interfaces (eHMI)* and *internal HMIs (iHMI)*. These systems should express intent (e.g., yielding, accelerating, or stopping) in a form that all road users can interpret at a glance.

The **Tallinn University of Technology (TalTech)** pilot project, conducted with the **ISEAUTO autonomous shuttle**, provided real-world insights into LoD interactions. The pilot involved 539 passengers and 176 pedestrians. A key component of the experiment was the development of LED-based signaling patterns to communicate the shuttle's intent to pedestrians. Participants reported improved understanding when signals were consistent and redundant across modalities [2].

The ISEAUTO shuttle used three distinct visual symbols displayed on front panels to indicate its awareness and behavior (see Table 1).
^ Trigger ^ Situation ^ Visualization / Meaning ^
| Vehicle approaching a pedestrian crossing (defined by map or V2I) | Shuttle nearing crosswalk | **Vertical bars** — awareness of pedestrian zone |
| Objects detected near crossing | Shuttle slowing and preparing to yield | **Green arrows** — invitation to cross |
| Object detected on path or potential conflict | Shuttle stopped, possible danger | **Blinking red cross** — do not cross |
| - | {{: | + | |
| - | + | ||
Pedestrian interviews revealed that while most understood the general meaning of light signals, some participants lacked confidence about their interpretation, indicating the need for clearer, standardized visual cues.
===== Public Perception and Trust =====

Survey results showed that **75% of passengers felt very safe** aboard the autonomous bus, even without a safety operator, while **60% indicated they would use a fully autonomous service** if proven safe.

Among pedestrians, **68% reported feeling comfortable** sharing the road with an AV, although nearly one-third expressed hesitation due to uncertainty about its intentions.

These findings underline a crucial aspect of LoD: **trust emerges from clarity**. Both visual and auditory signals must be immediately understandable to all demographics—children, elderly people, and those unfamiliar with technology. Inclusivity, therefore, becomes a design imperative.
===== Towards a Common Language =====

Defining the Language of Driving is an ongoing interdisciplinary task that combines behavioral studies, human–machine communication research, and simulation-based validation.

Mixed-reality and extended-reality (XR) environments allow candidate LoD cues to be tested safely with human participants before real-world deployment.

Ultimately, a **universally recognized LoD** — supported by intuitive HMIs, adaptive communication cues, and validated through real-world and XR experiments — will be a key enabler of public acceptance and safe integration of AVs into everyday traffic.

===== Future Development =====

Formalizing LoD as a measurable framework is essential for verification, validation, and eventual standardization of AV communication behavior.
| - | ---- | + | |
| - | **Reference: | ||
| - | Kalda, K.; Pizzagalli, S.-L.; Soe, R.-M.; Sell, R.; Bellone, M. (2022). *Language of Driving for Autonomous Vehicles.* Applied Sciences, 12(11), 5406. [https:// | ||