| Study level | Bachelor | |
| ECTS credits | 1 ECTS | |
| Study forms | Hybrid or fully online | |
| Module aims | Provide a comprehensive understanding of control and planning strategies for autonomous systems, emphasizing both classical and AI-based paradigms. Students will explore how control algorithms translate high-level planning decisions into safe and precise vehicle motion under real-world uncertainties. The module highlights the integration of feedback control, optimization, and learning-based techniques to ensure stability, robustness, and adaptability in dynamic environments. Practical focus is given to hybrid control architectures, motion planning, and behavioral decision-making for safe autonomy. | |
| Pre-requirements | Solid foundation in linear algebra, differential equations, and basic control theory (PID, feedback concepts). Programming proficiency in Python or C++, familiarity with numerical computation tools (e.g., MATLAB, ROS, or Simulink), and understanding of system dynamics and kinematics. Prior exposure to robotics, machine learning, or embedded systems is beneficial but not required. | |
| Learning outcomes | Knowledge: • Explain classical control principles (PID, LQR, MPC) and their application to vehicle dynamics. • Describe AI-based control methods, including reinforcement learning and neural network controllers. • Understand motion planning and behavioral algorithms (FSM, Behavior Trees, A*, RRT, MPC). • Discuss safety verification, validation, and certification issues for autonomous control systems. Skills: • Design, simulate, and tune classical controllers for trajectory tracking and stabilization. • Implement basic reinforcement learning or hybrid control strategies in simulation environments. • Develop motion planning pipelines integrating perception, planning, and control layers. Understanding/Attitudes: • Recognize trade-offs between transparency, performance, and adaptability in control architectures. • Evaluate robustness, explainability, and ethical implications in AI-driven control. • Appreciate interdisciplinary approaches to achieving safe and reliable autonomous operation. | |
| Topics | 1. Classical Control Strategies: – Feedback control fundamentals, PID design and tuning, LQR, Sliding Mode Control. – Model Predictive Control (MPC) and real-time optimization. 2. AI-Based Control Strategies: – Reinforcement learning for control, supervised imitation learning. – Neural network controllers and hybrid architectures. 3. Integration and Safety: – Verification, validation, and certification of control systems. – Robustness, interpretability, and failure handling. 4. Motion Planning and Behavioral Algorithms: – FSMs, Behavior Trees, and rule-based systems. – Planning methods: A*, D*, RRT, RRT*, and MPC-based trajectory generation. – Predictive and optimization-based planning for dynamic environments. 5. Future Trends: – Explainable AI control, safe RL, and human-like behavioral models. | |
| Type of assessment | A positive grade requires a positive evaluation of the module topics and a presentation of practical work results with the required documentation. | |
| Learning methods | Lectures: Introduce the theoretical and mathematical foundations of classical and AI-based control strategies. Lab works: Implement and compare controllers (PID, LQR, RL) and motion planners (A*, RRT) using simulation tools such as low-fidelity planning simulators or MATLAB/Simulink. Individual assignments: Design a control or planning pipeline and evaluate safety/performance trade-offs. Self-learning: Independent exploration of open-source control frameworks and reading of selected research literature. | |
| AI involvement | Yes — students may use AI tools to generate code templates, optimize control parameters, or analyze planning performance. All AI-assisted work must be reviewed, validated, and cited properly in accordance with academic integrity standards. | |
| References to literature | 1. Thrun, S. (2010). Toward robotic cars. Communications of the ACM, 53(4). 2. Broy, M., et al. (2021). Modeling Automotive Software and Hardware Architectures with AUTOSAR. Springer. 3. LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553). 4. Raj, A., & Saxena, P. (2022). Software architectures for autonomous vehicle development. IEEE Access, 10. 5. Paden, B., et al. (2016). A survey of motion planning and control techniques for self-driving urban vehicles. IEEE Transactions on Intelligent Vehicles, 1(1). 6. Sutton, R. S., & Barto, A. G. (2018). Reinforcement Learning: An Introduction (2nd ed.). MIT Press. 7. Lee, E. A., & Seshia, S. A. (2017). Introduction to Embedded Systems: A Cyber-Physical Systems Approach (2nd ed.). MIT Press. | |
| Lab equipment | Yes | |
| Virtual lab | Yes | |
| MOOC course | Suggested MOOC: 'Modern Robotics: Control of Mobile Robots' (Coursera, University of Pennsylvania) or 'AI for Autonomous Systems' (edX, University of Toronto). | |
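To give a flavor of the lab work on classical control (Topic 1), the sketch below closes a discrete PID loop around a toy first-order plant. The gains, plant model, and time step are illustrative assumptions for demonstration only, not values prescribed by the module.

```python
# Minimal discrete PID sketch: drives a first-order plant toward a setpoint.
# Gains (kp, ki, kd), the plant model y' = -y + u, and dt are illustrative
# assumptions, not taken from the module description.

def simulate_pid(setpoint=1.0, kp=2.0, ki=1.0, kd=0.1, dt=0.01, steps=2000):
    """Return the plant output after `steps` of closed-loop simulation."""
    y = 0.0                      # plant state (e.g., vehicle speed)
    integral = 0.0               # accumulated error for the I term
    prev_error = setpoint - y    # avoids a derivative kick on the first step
    for _ in range(steps):
        error = setpoint - y
        integral += error * dt
        derivative = (error - prev_error) / dt
        u = kp * error + ki * integral + kd * derivative  # control input
        prev_error = error
        # first-order plant y' = -y + u, integrated with the Euler method
        y += (-y + u) * dt
    return y

if __name__ == "__main__":
    print(round(simulate_pid(), 3))  # settles near the setpoint 1.0
```

In a lab session, the same loop structure is what students would tune: raising `kp` speeds up the response, `ki` removes steady-state error, and `kd` damps overshoot.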
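For the planning side (Topic 4), a minimal A* search on a 4-connected occupancy grid can serve as a starting point before moving to D*, RRT, or MPC-based trajectory generation. The grid, start, and goal below are illustrative; real vehicle planners work in continuous configuration space with kinodynamic constraints.

```python
# Minimal A* sketch on a 4-connected grid with unit move costs.
# The Manhattan-distance heuristic is admissible for this move set.
import heapq

def astar(grid, start, goal):
    """Return the length of the shortest path in moves, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])

    def h(p):  # admissible Manhattan heuristic
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_heap = [(h(start), 0, start)]   # entries: (f = g + h, g, cell)
    best_g = {start: 0}
    while open_heap:
        f, g, cell = heapq.heappop(open_heap)
        if cell == goal:
            return g
        if g > best_g.get(cell, float("inf")):
            continue  # stale heap entry superseded by a cheaper path
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc)))
    return None  # goal unreachable

if __name__ == "__main__":
    grid = [[0, 0, 0],
            [1, 1, 0],   # 1 = obstacle
            [0, 0, 0]]
    print(astar(grid, (0, 0), (2, 0)))  # → 6 moves around the wall
```

The same pipeline shape (graph search over a discretized world, guided by an admissible heuristic) carries over to D* and to grid-based trajectory roughing before a controller such as MPC smooths the result.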