^ **Study forms** | Hybrid or fully online |
^ **Module aims** | The aim of the module is to introduce perception, mapping and localization methods for autonomous systems. The course develops students’ ability to combine data from multiple sensors to detect and interpret the environment, build maps, estimate vehicle pose in real time and handle uncertainty using modern AI-based perception and sensor fusion techniques. |
^ **Pre-requirements** | Basic knowledge of linear algebra, probability and signal processing, as well as programming skills. Familiarity with control systems, kinematics, Linux/ROS environments or computer vision libraries is recommended but not mandatory. |
^ **Learning outcomes** | **Knowledge**\\ • Describe perception, mapping, and localization processes in autonomous systems.\\ • Explain principles of sensor fusion, simultaneous localization and mapping.\\ • Understand AI-based perception, including object detection, classification, and scene understanding.\\ **Skills**\\ • Implement basic perception and mapping algorithms using data from multiple sensors.\\ • Apply AI models to detect and classify environmental objects.\\ • Evaluate uncertainty and performance in localization and mapping using simulation tools.\\ **Understanding**\\ • Appreciate challenges of perception under varying environmental conditions.\\ • Recognize the role of data quality, calibration, and synchronization in sensor fusion.\\ • Adopt responsible practices when designing AI-driven perception modules for safety-critical applications. |
^ **Topics** | 1. Cameras, LiDARs, radars, and IMUs in perception and mapping.\\ 2. Sensor calibration, synchronization, and uncertainty modeling.\\ 3. Principles of multi-sensor fusion (Kalman/Particle filters, deep fusion networks); see the fusion sketch below the table.\\ 4. Object recognition and classification under variable conditions.\\ 5. SLAM, Visual Odometry, and GNSS.\\ 6. Map representation and maintenance for autonomous navigation.\\ 7. CNNs, semantic segmentation, and predictive modeling of dynamic environments.\\ 8. Perception under poor visibility, occlusions, and sensor noise.\\ 9. Integration of perception and localization pipelines in ROS2; a minimal node skeleton follows below. |
^ **Type of assessment** | A positive grade requires a positive evaluation of the module topics and the presentation of practical work results with the required documentation. |
^ **Recommended tools and environments** | SLAM, CNN, OpenCV, PyTorch, TensorFlow, KITTI, NuScenes |
^ **Verification and Validation focus** |  |
^ **Relevant standards and regulatory frameworks** | ISO 26262, ISO 21448 (SOTIF) |
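
To illustrate topic 3, the sketch below fuses two noisy range readings into one position estimate with a scalar Kalman-filter measurement update. It is a minimal sketch only: the sensor names, noise levels and prior are assumptions made for the example, not values from the course material.

<code python>
# Minimal 1D Kalman measurement update, fusing two range sensors.
# All numbers (prior, noise levels, true position) are illustrative.
import numpy as np

def kalman_update(x, P, z, R):
    """Fuse measurement z (variance R) into estimate x (variance P)."""
    K = P / (P + R)        # gain: how much to trust the new measurement
    x = x + K * (z - x)    # corrected estimate
    P = (1.0 - K) * P      # uncertainty shrinks after each fusion step
    return x, P

# Prior belief about the vehicle's position (metres).
x, P = 0.0, 1.0

# Simulated readings: a low-noise LiDAR and a noisier radar.
rng = np.random.default_rng(0)
true_pos = 2.5
lidar_z = true_pos + rng.normal(0.0, 0.05)
radar_z = true_pos + rng.normal(0.0, 0.30)

# Fuse both sensors sequentially; each update weights the measurement
# by its noise, so the LiDAR dominates the final estimate.
x, P = kalman_update(x, P, lidar_z, R=0.05 ** 2)
x, P = kalman_update(x, P, radar_z, R=0.30 ** 2)
print(f"fused estimate: {x:.3f} m, variance: {P:.4f}")
</code>

The same gain-weighted update generalises to the vector/matrix form used in practical localization stacks, where the Kalman gain trades off prior uncertainty against measurement noise.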
  
  
  
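Topic 9 concerns wiring such estimators into a running system. The skeleton below is a minimal sketch of a ROS2 (rclpy) node that subscribes to camera and LiDAR streams and publishes a pose estimate; the topic names and placeholder callbacks are assumptions for illustration, not a reference implementation.

<code python>
# Minimal ROS2 (rclpy) node skeleton for a perception/localization pipeline.
# Topic names and the placeholder callbacks are assumptions for illustration.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image, PointCloud2
from geometry_msgs.msg import PoseStamped

class PerceptionPipeline(Node):
    def __init__(self):
        super().__init__('perception_pipeline')
        # Raw sensor streams in; fused pose estimate out.
        self.create_subscription(Image, '/camera/image_raw', self.on_image, 10)
        self.create_subscription(PointCloud2, '/lidar/points', self.on_cloud, 10)
        self.pose_pub = self.create_publisher(PoseStamped, '/pose', 10)
        self.last_image = None

    def on_image(self, msg):
        # Cache the latest frame; a real pipeline would run detection here.
        self.last_image = msg

    def on_cloud(self, msg):
        # Placeholder localization step: a real node would register the cloud
        # against a map (SLAM) and fuse the result with camera detections.
        pose = PoseStamped()
        pose.header = msg.header   # reuse the cloud's timestamp and frame
        self.pose_pub.publish(pose)

def main():
    rclpy.init()
    rclpy.spin(PerceptionPipeline())

if __name__ == '__main__':
    main()
</code>

In a full pipeline, time synchronization of the two streams and consistent coordinate frames address the calibration and synchronization issues listed under topic 2.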