| Study level | Master |
|---|---|
| ECTS credits | 1 ECTS |
| Study forms | Hybrid or fully online |
| Module aims | To provide students with advanced theoretical and practical knowledge in the validation and verification of control, planning, and decision-making systems used in autonomous platforms. The module explores simulation-based testing, formal verification, and model-checking techniques that ensure the safety and stability of AI-enhanced controllers. Students will learn to integrate hybrid simulation frameworks, formal reasoning tools, and optimization algorithms to assess real-world readiness of control architectures. Emphasis is placed on quantifiable safety guarantees, robustness against uncertainty, and compliance with international standards (ISO 26262, ISO 21448, and IEEE 2846). |
| Pre-requirements | Strong background in control theory, optimization, and planning algorithms. Familiarity with programming (Python, C++, MATLAB), model-based design tools (Simulink, ROS 2), and AI decision-making frameworks. Basic understanding of formal methods, hybrid systems, and system modeling. Prior exposure to simulation environments or real-time control applications is recommended. |
| Learning outcomes | Knowledge: • Explain simulation-based and formal validation approaches for control and planning systems. • Describe the use of model-checking, reachability analysis, and verification frameworks in autonomous systems. • Understand ISO 26262, ISO 21448 (SOTIF), and IEEE 2846 standards relevant to control and decision-making validation. • Discuss trade-offs between simulation fidelity, computational efficiency, and real-time constraints. Skills: • Develop and validate control and planning algorithms in simulation environments (MATLAB/Simulink, ROS 2, CARLA). • Apply formal verification tools (UPPAAL, SPIN, or CBMC) to analyze safety and correctness properties. • Design hybrid validation workflows combining Monte Carlo simulation and symbolic reasoning. • Evaluate algorithm robustness and decision safety under stochastic and adversarial conditions. Understanding/Attitudes: • Appreciate the role of rigorous validation in certifying autonomous behaviors and AI-based decision-making. • Recognize limitations of current simulation and formal verification tools in high-dimensional, data-driven systems. • Adopt ethical, transparent, and standards-compliant practices in the assurance of autonomy. |
| Topics | 1. Validation of Control and Planning Systems: – System-level validation frameworks and verification-driven design. – Simulation fidelity, corner-case testing, and scenario coverage. 2. Simulation Environments and Tools: – SIL/HIL setups, Monte Carlo analysis, and statistical validation. – Multi-domain co-simulation for cyber-physical systems. 3. Formal Verification and Model Checking: – Safety property specification and temporal logic (LTL, CTL). – Reachability analysis, invariant verification, and constraint solving. 4. Hybrid and Nonlinear Systems: – Modeling hybrid automata and nonlinear control loops. – Formal abstraction and conservative over-approximation techniques. 5. Standards and Safety Frameworks: – ISO 26262, ISO 21448, IEEE 2846, and ASAM OpenSCENARIO for validation. 6. Case Studies: – Autonomous driving, UAV flight control, and robotic path planning validation. Minimal code sketches illustrating Monte Carlo validation and interval reachability are given after the table. |
| Type of assessment | A positive grade requires a positive evaluation of the module topics and a presentation of the practical work results with the required documentation. |
| Learning methods | Lectures: Cover theory and methodologies for simulation-based and formal validation of control and planning systems. Lab works: Implement and test controllers in virtual and hybrid environments (ROS 2, MATLAB, CARLA, Scenic, CommonRoad, UPPAAL). Individual assignments: Develop validation pipelines, perform reachability analysis, and document results. Self-learning: Study research papers and international standards on autonomy verification and formal safety assurance. |
| AI involvement | AI tools may be used to automate scenario generation, identify unsafe trajectories, and optimize validation coverage. Students must validate AI-assisted outcomes, ensure reproducibility, and cite AI involvement transparently in deliverables. |
| Recommended tools and environments | MATLAB/Simulink, ROS 2, CARLA, Scenic, CommonRoad, UPPAAL, SPIN, and CBMC; Python, C++, and MATLAB for implementation. |
| Verification and Validation focus | Simulation-based testing (SIL/HIL, Monte Carlo analysis, scenario coverage), formal verification and model checking (temporal logic, reachability analysis), and hybrid validation workflows for control, planning, and decision-making systems. |
| Relevant standards and regulatory frameworks | ISO 26262, ISO 21448 (SOTIF), IEEE 2846, and ASAM OpenSCENARIO |
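
As a concrete preview of the Monte Carlo analysis and statistical validation listed under Topics, the following Python sketch estimates how often a simple closed-loop system keeps a safety bound under randomized initial conditions and actuation noise. It is a minimal sketch only: the double-integrator plant, the PD gains, and names such as `simulate_episode` and `POS_LIMIT` are assumptions made for this illustration and are not part of the module materials.

```python
"""Minimal Monte Carlo validation sketch (illustrative, not module material).

Assumptions (hypothetical): a discrete-time double-integrator plant with a
PD controller, Gaussian actuation noise, and the safety property
G(|position| <= POS_LIMIT). All names and numbers are invented for the sketch.
"""
import random

POS_LIMIT = 5.0      # safety bound on position
DT = 0.05            # integration step [s]
STEPS = 400          # simulated horizon per episode
KP, KD = 2.0, 1.2    # hypothetical PD gains
NOISE_STD = 0.3      # std. dev. of actuation disturbance

def simulate_episode(rng: random.Random) -> bool:
    """Run one randomized episode; return True if the safety bound held throughout."""
    pos = rng.uniform(-2.0, 2.0)   # randomized initial condition
    vel = rng.uniform(-1.0, 1.0)
    for _ in range(STEPS):
        u = -KP * pos - KD * vel + rng.gauss(0.0, NOISE_STD)  # noisy PD control
        vel += u * DT
        pos += vel * DT
        if abs(pos) > POS_LIMIT:   # safety property violated
            return False
    return True

def estimate_safety_rate(n_episodes: int = 10_000, seed: int = 0) -> float:
    """Monte Carlo estimate of the probability that the safety property holds."""
    rng = random.Random(seed)
    safe = sum(simulate_episode(rng) for _ in range(n_episodes))
    return safe / n_episodes

if __name__ == "__main__":
    print(f"Estimated safety rate: {estimate_safety_rate():.4f}")
```

In the lab works, the same episode-level pass/fail structure would typically wrap a ROS 2, CARLA, or Simulink scenario rather than a toy plant, and the estimated safety rate would be reported together with a confidence interval.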
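The reachability analysis and conservative over-approximation topics can be previewed in the same spirit: the sketch below encloses the reachable states of a hypothetical two-state linear system with bounded disturbance in axis-aligned boxes and checks a safety invariant over a finite horizon. The dynamics matrix, the bounds, and the function names are assumptions chosen for readability; dedicated verification tools use far tighter set representations (e.g. zonotopes or polytopes).

```python
"""Minimal interval reachability sketch (illustrative, not module material).

Assumptions (hypothetical): a stable 2-state discrete-time linear system
x[k+1] = A x[k] + w[k] with a bounded disturbance |w_i| <= W. The reachable
set is over-approximated by axis-aligned boxes, so the check is conservative.
"""
A = [[0.9, 0.1],
     [-0.1, 0.8]]                  # hypothetical closed-loop dynamics
W = 0.05                           # disturbance bound per coordinate
X0 = [(-1.0, 1.0), (-1.0, 1.0)]    # initial box: (lower, upper) per state
SAFE = 3.0                         # safe set: |x_i| <= SAFE
HORIZON = 50

def step_box(box):
    """Propagate an axis-aligned box through x' = A x + w using interval arithmetic."""
    new_box = []
    for row in A:
        lo = hi = 0.0
        for a, (l, u) in zip(row, box):
            lo += min(a * l, a * u)
            hi += max(a * l, a * u)
        new_box.append((lo - W, hi + W))   # inflate by the disturbance interval
    return new_box

def verify_invariant():
    """Return (True, horizon) if every box stays inside the safe set."""
    box = X0
    for k in range(HORIZON):
        box = step_box(box)
        if any(l < -SAFE or u > SAFE for l, u in box):
            return False, k
    return True, HORIZON

if __name__ == "__main__":
    ok, k = verify_invariant()
    print("invariant holds over horizon" if ok else f"possible violation at step {k}")
```

Because the boxes can only over-approximate the true reachable set, a reported "possible violation" may be a false alarm, while "invariant holds" is a sound guarantee over the checked horizon.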