

Module: Control, Planning, and Decision-Making (Part 2)

Study level Master
ECTS credits 1 ECTS
Study forms Hybrid or fully online
Module aims To provide students with advanced theoretical and practical knowledge in the validation and verification of control, planning, and decision-making systems used in autonomous platforms. The module explores simulation-based testing, formal verification, and model-checking techniques that ensure the safety and stability of AI-enhanced controllers. Students will learn to integrate hybrid simulation frameworks, formal reasoning tools, and optimization algorithms to assess real-world readiness of control architectures. Emphasis is placed on quantifiable safety guarantees, robustness against uncertainty, and compliance with international standards (ISO 26262, ISO 21448, and IEEE 2846).
Prerequisites Strong background in control theory, optimization, and planning algorithms. Familiarity with programming (Python, C++, MATLAB), model-based design tools (Simulink, ROS 2), and AI decision-making frameworks. Basic understanding of formal methods, hybrid systems, and system modeling. Prior exposure to simulation environments or real-time control applications is recommended.
Learning outcomes Knowledge:
• Explain simulation-based and formal validation approaches for control and planning systems.
• Describe the use of model-checking, reachability analysis, and verification frameworks in autonomous systems.
• Understand ISO 26262, ISO 21448 (SOTIF), and IEEE 2846 standards relevant to control and decision-making validation.
• Discuss trade-offs between simulation fidelity, computational efficiency, and real-time constraints.
Skills:
• Develop and validate control and planning algorithms in simulation environments (MATLAB/Simulink, ROS 2, CARLA).
• Apply formal verification tools (UPPAAL, SPIN, or CBMC) to analyze safety and correctness properties (a simplified trace-checking sketch follows this outcomes list).
• Design hybrid validation workflows combining Monte Carlo simulation and symbolic reasoning.
• Evaluate algorithm robustness and decision safety under stochastic and adversarial conditions.
Understanding/Attitudes:
• Appreciate the role of rigorous validation in certifying autonomous behaviors and AI-based decision-making.
• Recognize limitations of current simulation and formal verification tools in high-dimensional, data-driven systems.
• Adopt ethical, transparent, and standards-compliant practices in the assurance of autonomy.
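Illustrative example (a minimal sketch, not a prescribed solution): the Python fragment below checks a bounded invariant of the form "always |cross-track error| <= bound" against a recorded simulation trace. Formal tools such as UPPAAL, SPIN, or CBMC prove such properties on models rather than on single traces; the bound and trace values here are illustrative assumptions.

from typing import Iterable

BOUND = 0.5  # assumed safety bound on cross-track error [m]

def violates_invariant(trace: Iterable[float], bound: float = BOUND) -> bool:
    """Return True if any sample in the recorded trace exceeds the bound."""
    return any(abs(e) > bound for e in trace)

example_trace = [0.10, 0.22, 0.35, 0.48, 0.30]  # hypothetical logged errors
print("invariant violated:", violates_invariant(example_trace))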
Topics 1. Validation of Control and Planning Systems:
– System-level validation frameworks and verification-driven design.
– Simulation fidelity, corner-case testing, and scenario coverage.
2. Simulation Environments and Tools:
– SIL/HIL setups, Monte Carlo analysis, and statistical validation (see the sketch after the topic list).
– Multi-domain co-simulation for cyber-physical systems.
3. Formal Verification and Model Checking:
– Safety property specification and temporal logic (LTL, CTL).
– Reachability analysis, invariant verification, and constraint solving.
4. Hybrid and Nonlinear Systems:
– Modeling hybrid automata and nonlinear control loops.
– Formal abstraction and conservative over-approximation techniques.
5. Standards and Safety Frameworks:
– ISO 26262, ISO 21448, IEEE 2846, and ASAM OpenSCENARIO for validation.
6. Case Studies:
– Autonomous driving, UAV flight control, and robotic path planning validation.
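Illustrative example (a minimal sketch under assumed toy dynamics, noise levels, and thresholds, not a validated vehicle model): the fragment below estimates by Monte Carlo simulation the probability that a simple proportional controller leaves a safe corridor under Gaussian disturbances, with a rough 95% confidence interval, mirroring the statistical validation idea in topic 2.

import math
import random

def run_episode(kp: float = 0.8, steps: int = 200, noise_std: float = 0.05,
                corridor: float = 1.0) -> bool:
    """Simulate one episode; return True if the state ever leaves the corridor."""
    x = random.uniform(-0.5, 0.5)           # random initial condition
    for _ in range(steps):
        u = -kp * x                          # proportional control toward 0
        x = x + u + random.gauss(0.0, noise_std)
        if abs(x) > corridor:
            return True
    return False

def estimate_failure_rate(n_runs: int = 5000) -> tuple[float, float]:
    """Estimate failure probability and its 95% normal-approximation half-width."""
    failures = sum(run_episode() for _ in range(n_runs))
    p = failures / n_runs
    half_width = 1.96 * math.sqrt(max(p * (1 - p), 1e-12) / n_runs)
    return p, half_width

p, hw = estimate_failure_rate()
print(f"estimated failure probability: {p:.4f} +/- {hw:.4f} (95% CI)")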
Type of assessment A passing grade requires a positive evaluation of the module topics and a presentation of practical work results with the required documentation.
Learning methods Lectures: Cover theory and methodologies for simulation-based and formal validation of control and planning systems.
Lab works: Implement and test controllers in virtual and hybrid environments (ROS 2, MATLAB, CARLA, Scenic, CommonRoad, UPPAAL).
Individual assignments: Develop validation pipelines, perform reachability analysis, and document results (see the reachability sketch below).
Self-learning: Study research papers and international standards on autonomy verification and formal safety assurance.
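Illustrative example (a minimal sketch; the dynamics and disturbance bounds are assumptions, and dedicated tools such as CORA or Flow* would be used in practice): the fragment below propagates an axis-aligned interval over-approximation of the reachable set of a discrete-time linear system with bounded disturbance, the basic idea behind the reachability-analysis assignment.

import numpy as np

A = np.array([[1.0, 0.1],
              [0.0, 0.9]])      # assumed discrete-time dynamics x[k+1] = A x[k] + w[k]
W = np.array([0.01, 0.02])      # assumed per-axis disturbance bound |w_i| <= W_i

def step_box(lo: np.ndarray, hi: np.ndarray):
    """One-step box propagation of A x + w using interval arithmetic."""
    center = (lo + hi) / 2.0
    radius = (hi - lo) / 2.0
    new_center = A @ center
    new_radius = np.abs(A) @ radius + W   # conservative radius growth
    return new_center - new_radius, new_center + new_radius

lo, hi = np.array([-0.1, -0.1]), np.array([0.1, 0.1])  # initial set as a box
for _ in range(20):
    lo, hi = step_box(lo, hi)
print("over-approximated reachable box after 20 steps:", lo, hi)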
AI involvement Yes — AI tools may be used to automate scenario generation, identify unsafe trajectories, and optimize validation coverage. Students must validate AI-assisted outcomes, ensure reproducibility, and cite AI involvement transparently in deliverables.
References to literature
1. Rajamani, R. (2012). Vehicle Dynamics and Control (2nd ed.). Springer.
2. Baier, C., & Katoen, J.-P. (2018). Principles of Model Checking. MIT Press.
3. Koopman, P., & Widen, J. (2023). The AI Driver: Defining Human-Equivalent Safety for Automated Vehicles. IEEE Transactions on Intelligent Vehicles.
4. ISO 21448 (2022). Road Vehicles – Safety of the Intended Functionality (SOTIF).
5. ISO 26262 (2018). Road Vehicles – Functional Safety.
6. IEEE 2846 (2022). IEEE Standard for Assumptions in Safety-Related Models for Automated Driving Systems.
7. Althoff, M., et al. (2021). Formal Verification of Autonomous Systems: State of the Art and Future Directions. IEEE Access.
8. Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep Learning. MIT Press.
Lab equipment Yes
Virtual lab Yes
MOOC course Suggested MOOC: 'Formal Methods for Autonomous Systems' (edX, University of York) or 'Verification and Validation of AI Systems' (Coursera, Stanford University).