Module: Human–Machine Communication (Part 2)

Study level Master
ECTS credits 1
Study forms Hybrid or fully online
Module aims The aim of the module is to introduce safety, validation and societal aspects of human–machine interaction in autonomous systems. The course develops students’ ability to design and evaluate human-centred, explainable and standards-compliant HMI solutions that support usability, trust and safety.
Prerequisites Basic knowledge of human factors or HMI design principles and an interest in system safety. Familiarity with user interface development, AI concepts, ergonomics, or safety-related standards is recommended but not mandatory.
Learning outcomes Knowledge
• Explain safety and reliability concerns in HMI design for autonomous and semi-autonomous systems.
• Describe standards and frameworks for HMI validation.
• Understand social, ethical, and psychological dimensions influencing public trust in AI-driven systems.
• Identify factors affecting cross-cultural and demographic acceptance of automation.
Skills
• Design validation procedures for HMI systems using both experimental and simulation-based testing.
• Evaluate user behavior, workload, and situational awareness using quantitative and qualitative methods (a minimal scoring sketch follows these learning outcomes).
• Apply AI tools to simulate user interaction, predict response variability, and analyze safety-related feedback.
• Conduct usability assessments and generate compliance reports aligned with HMI safety standards.
Understanding
• Appreciate the ethical importance of transparency, inclusivity, and user autonomy in interface design.
• Recognize human limitations and adapt systems to support shared control and human oversight.
• Develop awareness of how public communication, risk perception, and media framing shape the acceptance of autonomous systems.
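To make the quantitative-evaluation skill above concrete, the following minimal Python sketch scores one participant's workload with the raw (unweighted) NASA-TLX and computes a System Usability Scale (SUS) score. The choice of instruments, the example ratings, and the function names are illustrative assumptions, not prescribed by the module.

```python
"""Minimal sketch: quantitative workload and usability scoring.

Assumptions (not prescribed by the module): raw NASA-TLX (mean of six
0-100 subscales) and the System Usability Scale (ten 1-5 items).
"""

def raw_nasa_tlx(subscales: dict[str, float]) -> float:
    """Raw (unweighted) NASA-TLX: mean of the six 0-100 subscale ratings."""
    expected = {"mental", "physical", "temporal", "performance", "effort", "frustration"}
    if set(subscales) != expected:
        raise ValueError(f"Expected subscales {expected}")
    return sum(subscales.values()) / len(subscales)

def sus_score(item_ratings: list[int]) -> float:
    """System Usability Scale: ten 1-5 items; odd items contribute (r - 1),
    even items contribute (5 - r); the sum is scaled by 2.5 to 0-100."""
    if len(item_ratings) != 10:
        raise ValueError("SUS requires exactly ten item ratings")
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # items 1, 3, 5, ... sit at even indices
        for i, r in enumerate(item_ratings)
    ]
    return sum(contributions) * 2.5

if __name__ == "__main__":
    tlx = raw_nasa_tlx({"mental": 70, "physical": 20, "temporal": 55,
                        "performance": 30, "effort": 60, "frustration": 40})
    sus = sus_score([4, 2, 4, 1, 5, 2, 4, 2, 4, 3])
    print(f"Raw NASA-TLX workload: {tlx:.1f} / 100")
    print(f"SUS usability score:   {sus:.1f} / 100")
```

In coursework, scores like these would typically be aggregated across participants and conditions before being interpreted alongside qualitative observations.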
Topics 1. Human–Machine Interaction Safety:
– Human error taxonomy and resilience engineering.
– Shared control and human oversight in automated systems.
2. Verification and Validation of HMI:
– Testing frameworks, simulation methods, and standards (a minimal validation-check sketch follows this topic list).
– Usability metrics: workload, trust, explainability, and accessibility.
3. Public Acceptance and Risk Perception:
– Cultural and social factors influencing acceptance of automation.
– Role of transparency, explainability, and user trust.
4. AI-Assisted Interaction Evaluation:
– Emotion and intent recognition, human-in-the-loop testing.
– Adaptive HMIs and predictive user modeling.
5. Standards and Case Studies:
– AVSC (Automated Vehicle Safety Consortium) Best Practices, ISO/SAE frameworks, and real-world HMI validation studies.
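As referenced in the verification and validation topic above, the sketch below shows one possible shape of a simulation-based validation check with a small compliance summary. The takeover-time threshold, pass criterion, and trial data are hypothetical illustrations, not values taken from any standard.

```python
"""Minimal sketch of a simulation-based HMI validation check.

Assumptions: the 2.5 s takeover-time threshold, the 95 % pass criterion,
and the trial data are illustrative, not values from ISO/SAE standards.
"""
from dataclasses import dataclass
from statistics import mean

TAKEOVER_THRESHOLD_S = 2.5   # hypothetical acceptance threshold (seconds)
REQUIRED_PASS_RATE = 0.95    # hypothetical pass criterion

@dataclass
class Trial:
    participant: str
    takeover_time_s: float   # time from takeover request to operator response

def validate(trials: list[Trial]) -> dict:
    """Check each trial against the threshold and summarise compliance."""
    passed = [t for t in trials if t.takeover_time_s <= TAKEOVER_THRESHOLD_S]
    pass_rate = len(passed) / len(trials)
    return {
        "trials": len(trials),
        "mean_takeover_s": round(mean(t.takeover_time_s for t in trials), 2),
        "pass_rate": round(pass_rate, 2),
        "compliant": pass_rate >= REQUIRED_PASS_RATE,
    }

if __name__ == "__main__":
    # Illustrative data as it might come from a driving-simulator study.
    trials = [Trial("P01", 1.8), Trial("P02", 2.1), Trial("P03", 2.9),
              Trial("P04", 1.6), Trial("P05", 2.4)]
    print(validate(trials))
```

A real validation plan would document the scenario, the metric definitions, and the acceptance criteria before any trials are run, so that the summary can be traced back to the requirements it verifies.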
Type of assessment A positive grade requires a positive evaluation of the module topics and the presentation of practical work results with the required documentation.
Learning methods Lectures — Cover the theoretical foundations of safety, public trust, and verification and validation (V&V) frameworks in HMI.
Lab work — Implement HMI prototypes and perform usability and safety validation in simulation environments.
Individual assignments — Evaluate and document HMI validation plans for different user scenarios and safety levels.
Self-learning — Review literature on human factors, public acceptance, and ethical design in automation.
AI involvement AI tools may assist in user behavior prediction, emotion recognition analysis, and usability simulation. Students must transparently disclose AI usage, validate data integrity, and comply with academic and ethical standards.
Recommended tools and environments Unity, MATLAB, ROS2
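For orientation only, below is a minimal ROS2 (rclpy) sketch of the kind of lab-work component students might build with the recommended tools: a node that publishes simulated operator takeover-latency samples for later analysis. The topic name, message type, and Gaussian latency model are assumptions made for the example, not part of the module specification.

```python
"""Minimal rclpy sketch: publish simulated operator takeover latencies.

Assumptions: the topic name 'hmi/takeover_latency', std_msgs/Float32, and
the Gaussian latency model are illustrative choices, not module requirements.
"""
import random

import rclpy
from rclpy.node import Node
from std_msgs.msg import Float32


class TakeoverLatencyPublisher(Node):
    def __init__(self) -> None:
        super().__init__("takeover_latency_publisher")
        self.pub = self.create_publisher(Float32, "hmi/takeover_latency", 10)
        # Publish one simulated sample per second.
        self.create_timer(1.0, self.publish_sample)

    def publish_sample(self) -> None:
        msg = Float32()
        # Crude stand-in for operator response variability (seconds).
        msg.data = max(0.5, random.gauss(mu=1.8, sigma=0.4))
        self.pub.publish(msg)
        self.get_logger().info(f"takeover latency: {msg.data:.2f} s")


def main() -> None:
    rclpy.init()
    node = TakeoverLatencyPublisher()
    try:
        rclpy.spin(node)
    finally:
        node.destroy_node()
        rclpy.shutdown()


if __name__ == "__main__":
    main()
```

In a lab setting, such a node would typically be fed by a driving simulator or user-input device rather than a random model, with the published samples logged for the validation analysis described above.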
Verification and Validation focus
Relevant standards and regulatory frameworks ISO 26262 (functional safety), ISO 21448 (safety of the intended functionality, SOTIF), SAE J3016 (levels of driving automation)