====== Module: Autonomous Vehicles ======
^ **Study level** | Bachelor |
^ **ECTS credits** | 1 ECTS |
^ **Study forms** | Hybrid or fully online |
^ **Module aims** | The aim of the module is to introduce the fundamental concepts, architectures and application domains of autonomous vehicles across ground, aerial and marine systems. The course develops students’ system-level understanding of the autonomy stack, from perception and localisation to planning and control, highlighting the role of AI, safety and basic verification considerations in real-world deployment. |
^ **Pre-requirements** | Interest in autonomous systems and basic knowledge of programming, signals and control, and electronics or mechatronics. Prior exposure to robotics concepts and Linux/ROS environments, as well as familiarity with linear algebra and probability, is recommended. |
^ **Learning outcomes** | **Knowledge**\\ • Explain the Sense–Plan–Act paradigm and the layered autonomy stack.\\ • Describe and contrast reference architectures and middleware (ROS/ROS 2, AUTOSAR Adaptive, JAUS, MOOS-IvP).\\ • Summarize AI/ML roles in perception and decision-making, plus their limits and safety implications.\\ • Identify V&V concepts and domain-specific safety standards.\\ **Skills**\\ • Build a minimal autonomy pipeline in simulation and tune it for a given ODD.\\ • Integrate modules via publish/subscribe interfaces and evaluate latency, determinism, and fault-tolerance trade-offs.\\ • Design basic experiments to validate algorithms and interpret results.\\ **Understanding**\\ • Reason about distributed vs. centralized architectures and their impact on scalability and reliability.\\ • Appraise governance, legal/ethical constraints, and cybersecurity risks for AV deployment. |
^ **Topics** | 1. Introduction to autonomous systems and autonomy definitions\\ 2. Sense–Plan–Act and data flow in autonomous vehicles; centralized vs. distributed designs; safety & redundancy\\ 3. Reference architectures and middleware: ROS/ROS 2 (DDS), AUTOSAR Adaptive, JAUS, MOOS-IvP\\ 4. Application domains: ground, aerial, and marine; domain challenges\\ 5. AI/ML for perception and decision-making; hybrid model-based and learning-based stacks\\ 6. Introduction to validation and verification (ODD, coverage, field response); simulation, SIL/HIL; safety standards\\ 7. Governance, legal and ethical frameworks for autonomy\\ 8. Cybersecurity for autonomous systems: electronics/firmware, communication, control, operations |
^ **Type of assessment** | A positive grade requires a positive evaluation of the module topics and the presentation of practical work results with the required documentation. |
^ **Learning methods** | **Lecture** — Conceptual foundations (architectures, middleware, SPA, safety/V&V, governance) with case studies from ground, aerial, and marine domains.\\ **Lab works** — Hands-on exercises in simulation (ROS 2/Autoware/PX4 or MOOS-IvP) to assemble perception-planning-control pipelines and evaluate behavior (a minimal publish/subscribe sketch follows the table).\\ **Individual assignments** — Focused mini-projects (e.g., perception module, path planner, DDS QoS study) with short reports on design and results.\\ **Self-learning** — Guided readings and video demos on standards and frameworks; independent experimentation to deepen understanding of chosen topics. |
^ **AI involvement** | Deep learning for perception (object detection, semantic segmentation, tracking); learning-based prediction; SLAM and sensor fusion with ML components; reinforcement/behavior-tree hybrids for decision-making; data-centric evaluation in simulation. |
^ **Recommended tools and environments** | ROS/ROS 2, MOOS-IvP, Autoware, PX4/ArduPilot |
^ **Verification and Validation focus** | Introductory treatment of ODD definition, scenario coverage and field response; simulation-based testing and SIL/HIL (see Topic 6). |
^ **Relevant standards and regulatory frameworks** | ISO 26262, DO-178C, AUTOSAR, JAUS |
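
As a companion to the lab works, the sketch below condenses the Sense–Plan–Act loop into a single ROS 2 node using the ''rclpy'' client library. It is a minimal illustration only: the ''scan'' and ''cmd_vel'' topic names and the stop-or-go policy are assumptions for the example, not interfaces prescribed by the module.

<code python>
# Minimal Sense–Plan–Act sketch as one ROS 2 (rclpy) node.
# Assumptions: a LaserScan arrives on 'scan' and a velocity consumer listens
# on 'cmd_vel'; the stop-or-go policy is deliberately trivial.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import LaserScan
from geometry_msgs.msg import Twist


class SensePlanAct(Node):
    def __init__(self):
        super().__init__('sense_plan_act')
        # Sense: subscribe to range data.
        self.scan_sub = self.create_subscription(LaserScan, 'scan', self.on_scan, 10)
        # Act: publish velocity commands.
        self.cmd_pub = self.create_publisher(Twist, 'cmd_vel', 10)

    def on_scan(self, scan: LaserScan):
        # Plan: stop if any return is closer than 1 m, otherwise cruise at 0.3 m/s.
        nearest = min((r for r in scan.ranges if r > 0.0), default=float('inf'))
        cmd = Twist()
        cmd.linear.x = 0.0 if nearest < 1.0 else 0.3
        self.cmd_pub.publish(cmd)


def main():
    rclpy.init()
    node = SensePlanAct()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()
</code>

In the lab exercises the same publish/subscribe pattern is split into separate perception, planning and control nodes that exchange messages over DDS topics.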
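
For the DDS QoS study listed under individual assignments, one possible starting point is to make the QoS policies explicit and compare their effect on latency and delivery guarantees. The profile below is a hypothetical example built with ''rclpy.qos''; the specific policy choices are assumptions for illustration, not prescribed settings.

<code python>
# Illustrative sensor-data QoS profile for a ROS 2 / DDS QoS comparison.
# BEST_EFFORT with a short KEEP_LAST history minimises latency but may drop
# messages; switching to RELIABLE trades latency for delivery guarantees.
from rclpy.qos import (DurabilityPolicy, HistoryPolicy,
                       QoSProfile, ReliabilityPolicy)

sensor_qos = QoSProfile(
    history=HistoryPolicy.KEEP_LAST,            # keep only the newest samples
    depth=5,                                    # queue depth of five messages
    reliability=ReliabilityPolicy.BEST_EFFORT,  # tolerate drops for lower latency
    durability=DurabilityPolicy.VOLATILE,       # no replay for late-joining nodes
)

# Hypothetical usage inside a node:
#   self.create_subscription(LaserScan, 'scan', self.on_scan, sensor_qos)
</code>

Repeating the same experiment with RELIABLE and TRANSIENT_LOCAL settings gives concrete material for the latency, determinism and fault-tolerance discussion in the learning outcomes.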