
Language of Driving Concepts

[raivo.sell]

The concept of the Language of Driving (LoD) refers to the implicit and explicit communication occurring among all traffic participants—drivers, pedestrians, cyclists, and increasingly, autonomous vehicles (AVs). Traditionally, human-to-human communication in traffic has relied on cues such as eye contact, gestures, micro-accelerations, and auditory signals. As driverless vehicles emerge in mixed traffic, this established communication framework becomes insufficient, raising new challenges in ensuring mutual understanding and trust between humans and machines.

Understanding the Language of Driving

Driving is a complex, interactive behavior shaped by both social conventions and environmental factors. Each agent in traffic—human or autonomous—must interpret others’ intentions and act predictably. In human-driven traffic, intentions are often conveyed through subtle actions: changes in speed, head movements, or hand gestures.

The absence of these human cues in AVs requires a new communication framework to ensure that other road users can safely interpret an AV’s behavior and decisions.

The LoD, therefore, must evolve into a structured communication system combining *external Human–Machine Interfaces (eHMI)* and *internal HMIs (iHMI)*. These systems should express intent (e.g., yielding, accelerating, stopping) in intuitive, culturally neutral ways that do not rely on language or prior training. For instance, pedestrians must clearly understand when an AV intends to stop or proceed without ambiguity.

Defining LoD Through Experiments

The Tallinn University of Technology (TalTech) pilot project, conducted with the ISEAUTO autonomous shuttle, provided real-world insights into LoD interactions. The pilot involved 539 passengers and 176 pedestrians, combining surveys with on-site observations and expert focus groups from national transport authorities.

A key component of the experiment was the development of LED-based signaling patterns to communicate the shuttle’s intent to pedestrians.

The ISEAUTO shuttle used three distinct visual symbols displayed on front panels to indicate its awareness and behavior (see Table 1).

Table 1. ISEAUTO LED signaling patterns

Trigger | Situation | Visualization / Meaning
Vehicle approaching a pedestrian crossing (defined by map or V2I) | Shuttle nearing crosswalk | Vertical bars: awareness of pedestrian zone
Objects detected near crossing | Shuttle slowing and preparing to yield | Green arrows: invitation to cross
Object detected on path or potential conflict | Shuttle stopped, possible danger | Blinking red cross: do not cross
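The decision logic behind Table 1 can be sketched as a simple priority mapping: a potential conflict overrides the yield invitation, which in turn overrides plain crossing awareness. The function and flag names below are illustrative assumptions for this sketch, not part of the actual ISEAUTO software:

```python
from enum import Enum
from typing import Optional

class LedPattern(Enum):
    """Hypothetical encodings of the three ISEAUTO front-panel symbols."""
    VERTICAL_BARS = "vertical bars: aware of pedestrian zone"
    GREEN_ARROWS = "green arrows: invitation to cross"
    BLINKING_RED_CROSS = "blinking red cross: do not cross"

def select_pattern(near_crossing: bool,
                   objects_near_crossing: bool,
                   object_on_path: bool) -> Optional[LedPattern]:
    """Map sensed triggers to the signal shown to pedestrians.

    Priority follows Table 1: conflict > yield invitation > awareness.
    """
    if object_on_path:
        return LedPattern.BLINKING_RED_CROSS
    if objects_near_crossing:
        return LedPattern.GREEN_ARROWS
    if near_crossing:
        return LedPattern.VERTICAL_BARS
    return None  # no pedestrian-relevant situation: panel stays idle
```

A fixed priority order like this keeps the signal unambiguous even when several triggers fire at once, which is exactly the clarity requirement the pedestrian interviews point to.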

Figure: Typical pedestrian crossing scenario with the ISEAUTO AV shuttle (source: Kalda et al., 2022)

Pedestrian interviews revealed that while most understood the general meaning of light signals, some participants lacked confidence about their interpretation, indicating the need for clearer, standardized visual cues.

Public Perception and Trust

Survey results showed that 75% of passengers felt very safe aboard the autonomous bus, even without a safety operator, while 60% indicated they would use a fully autonomous service if proven safe.

Among pedestrians, 68% reported feeling comfortable sharing the road with an AV, although nearly one-third expressed hesitation due to uncertainty about its intentions.

These findings underline a crucial aspect of LoD: trust emerges from clarity. Both visual and auditory signals must be immediately understandable to all demographics—children, elderly people, and those unfamiliar with technology. Inclusivity, therefore, becomes a design imperative.

Towards a Common Language

Defining the Language of Driving is an ongoing interdisciplinary task that combines behavioral studies, human–machine communication research, and simulation-based validation.

Mixed-reality (MR) tools have proven valuable for rapidly prototyping LoD interfaces and testing user reactions safely. They allow for repeatable, diverse, and inclusive testing scenarios, offering a scalable pathway toward standardization.

Ultimately, a universally recognized LoD — supported by intuitive HMIs, adaptive communication cues, and validated through real-world and XR experiments — will be a key enabler of public acceptance and safe integration of AVs into everyday traffic.


Reference: Kalda, K.; Pizzagalli, S.-L.; Soe, R.-M.; Sell, R.; Bellone, M. (2022). *Language of Driving for Autonomous Vehicles.* Applied Sciences, 12(11), 5406. [https://doi.org/10.3390/app12115406](https://doi.org/10.3390/app12115406)

Last modified: 2025/10/20 18:28 by raivo.sell
CC Attribution-Share Alike 4.0 International