====== Module: Perception, Mapping, and Localization (Part 1) ======
^ **Study level** | Bachelor |
^ **ECTS credits** | 1 ECTS |
^ **Study forms** | Hybrid or fully online |
^ **Module aims** | The aim of the module is to introduce perception, mapping, and localization methods for autonomous systems. The course develops students’ ability to combine data from multiple sensors to detect and interpret the environment, build maps, estimate vehicle pose in real time, and handle uncertainty using modern AI-based perception and sensor fusion techniques. |
^ **Pre-requirements** | Basic knowledge of linear algebra, probability, and signal processing, as well as programming skills. Familiarity with control systems, kinematics, Linux/ROS environments, or computer vision libraries is recommended but not mandatory. |
^ **Learning outcomes** | **Knowledge**\\ • Describe perception, mapping, and localization processes in autonomous systems.\\ • Explain the principles of sensor fusion and simultaneous localization and mapping.\\ • Understand AI-based perception, including object detection, classification, and scene understanding.\\ **Skills**\\ • Implement basic perception and mapping algorithms using data from multiple sensors.\\ • Apply AI models to detect and classify environmental objects.\\ • Evaluate uncertainty and performance in localization and mapping using simulation tools.\\ **Understanding**\\ • Appreciate the challenges of perception under varying environmental conditions.\\ • Recognize the role of data quality, calibration, and synchronization in sensor fusion.\\ • Adopt responsible practices when designing AI-driven perception modules for safety-critical applications. |
| | **Type of assessment** | The prerequisite of a positive grade is a positive evaluation of module topics and presentation of practical work results with required documentation || | ^ **Topics** | 1. Cameras, LiDARs, radars, and IMUs in perception and mapping.\\ 2. Sensor calibration, synchronization, and uncertainty modeling.\\ 3. Principles of multi-sensor fusion (Kalman/Particle filters, deep fusion networks).\\ 4. Object recognition and classification under variable conditions.\\ 5. SLAM, Visual Odometry, and GNSS.\\ 6. Map representation and maintenance for autonomous navigation.\\ 7. CNNs, semantic segmentation, and predictive modeling of dynamic environments.\\ 8. Perception under poor visibility, occlusions, and sensor noise.\\ 9. Integration of perception and localization pipelines in ROS2. | |
^ **Type of assessment** | A positive grade requires a positive evaluation of the module topics and a presentation of the practical work results with the required documentation. |
^ **Learning methods** | **Lectures** — Theoretical background on perception, mapping, and AI-based scene understanding.\\ **Lab works** — Implementation of sensor fusion and mapping algorithms using ROS 2, Python, and simulated data.\\ **Individual assignments** — Analysis of perception pipeline performance and report preparation.\\ **Self-learning** — Study of academic papers, datasets, and open-source AI perception frameworks. |
^ **AI involvement** | AI tools can assist in code debugging, model training, and visualization of perception results. Students must cite AI-generated assistance transparently and verify the correctness of outcomes. |
^ **Recommended tools and environments** | SLAM, CNN, OpenCV, PyTorch, TensorFlow, KITTI, NuScenes |
^ **Verification and Validation focus** | |
^ **Relevant standards and regulatory frameworks** | ISO 26262, ISO 21448 (SOTIF) |
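
The sketch below is a minimal, purely illustrative example of the kind of multi-sensor fusion listed under Topic 3: a one-dimensional Kalman-style measurement update that fuses a simulated LiDAR and radar range reading of the same obstacle. It is not part of the assessed material, and all values, noise levels, and variable names are hypothetical.

<code python>
# Illustrative sketch only: 1-D Kalman measurement update fusing two noisy
# range readings of the same obstacle. All numbers are made up for the example.
import numpy as np

def kalman_update(x, P, z, R):
    """Fuse one new measurement z (with noise variance R) into the
    current estimate x (with variance P); return the updated pair."""
    K = P / (P + R)          # Kalman gain: how much to trust the new measurement
    x_new = x + K * (z - x)  # pull the estimate toward the measurement
    P_new = (1 - K) * P      # fused variance is smaller than either input alone
    return x_new, P_new

rng = np.random.default_rng(0)
true_distance = 12.0                            # metres, simulated ground truth
lidar_z = true_distance + rng.normal(0, 0.05)   # LiDAR: low measurement noise
radar_z = true_distance + rng.normal(0, 0.50)   # radar: higher measurement noise

# Start from the radar reading, then fuse in the more precise LiDAR reading.
x, P = radar_z, 0.50 ** 2
x, P = kalman_update(x, P, lidar_z, 0.05 ** 2)
print(f"fused distance: {x:.3f} m, variance: {P:.5f}")
</code>

In the lab works, the same update step generalizes to multi-dimensional state vectors (pose and velocity) and is typically embedded in a ROS 2 node that subscribes to the individual sensor topics.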