Alternatively, point-based networks like ''PointNet'' and ''PointNet++'' operate directly on raw point sets without voxelization, preserving fine geometric detail.
These models are critical for estimating the shape and distance of objects in 3D space, especially under challenging lighting or weather conditions.

{{ :en:safeav:maps:cnn.webp?400 |}}
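To make this concrete, the core PointNet idea (a shared per-point MLP followed by an order-invariant max-pool) can be sketched in a few lines of PyTorch; the layer widths below are illustrative, and the original input/feature transform networks are omitted:

<code python>
import torch
import torch.nn as nn

class PointNetEncoder(nn.Module):
    """Minimal PointNet-style encoder: shared per-point MLP + symmetric max-pool.

    Max-pooling over points makes the global feature invariant to point
    order, which is why the raw cloud needs no voxelization.
    """
    def __init__(self, in_dim: int = 3, feat_dim: int = 1024):
        super().__init__()
        # 1x1 convolutions act as an MLP applied independently to each point.
        self.mlp = nn.Sequential(
            nn.Conv1d(in_dim, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, feat_dim, 1),
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (batch, num_points, 3) raw xyz coordinates
        x = self.mlp(points.transpose(1, 2))  # (batch, feat_dim, num_points)
        return x.max(dim=2).values            # (batch, feat_dim) global feature

# Usage: encode 2048 raw LiDAR points into one global shape descriptor.
cloud = torch.randn(1, 2048, 3)
feature = PointNetEncoder()(cloud)  # shape: (1, 1024)
</code>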
  
=== Transformer Architectures ===
Notable examples include ''DETR'' (Detection Transformer), ''BEVFormer'', and ''TransFusion'', which unify information from cameras and LiDARs into a consistent spatial representation.
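As an illustration, a pretrained DETR model can be loaded through the Hugging Face ''transformers'' library. This is a minimal sketch, assuming that library and PyTorch are installed, with ''frame.jpg'' standing in for a camera frame:

<code python>
import torch
from PIL import Image
from transformers import DetrImageProcessor, DetrForObjectDetection

# Pretrained DETR with a ResNet-50 backbone.
processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50")
model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50")

image = Image.open("frame.jpg")  # placeholder camera frame
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Convert raw set predictions into scored, image-space bounding boxes.
target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
detections = processor.post_process_object_detection(
    outputs, target_sizes=target_sizes, threshold=0.9
)[0]
for score, label, box in zip(
    detections["scores"], detections["labels"], detections["boxes"]
):
    print(model.config.id2label[label.item()], round(score.item(), 3), box.tolist())
</code>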
  
  
=== Recurrent and Temporal Models ===
==== Data Requirements ====

Robust perception requires exposure to the full range of operating conditions that a vehicle may encounter.
Datasets must include variations in:

  * **Sensor modalities** – data from cameras, LiDAR, radar, GNSS, and IMU, reflecting the multimodal nature of perception.
  * **Environmental conditions** – daytime and nighttime scenes, different seasons, weather effects such as rain, fog, or snow.
  * **Geographical and cultural contexts** – urban, suburban, and rural areas; diverse traffic rules and road signage conventions.
  * **Behavioral diversity** – normal driving, aggressive maneuvers, and rare events such as jaywalking or emergency stops.
  * **Edge cases** – rare but safety-critical situations, including near-collisions or sensor occlusions.

A balanced dataset should capture both common and unusual situations to ensure that perception models generalize safely beyond the training distribution.
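One simple way to act on such coverage requirements is inverse-frequency resampling, so that rare conditions are seen more often during training. The sketch below assumes hypothetical clip records tagged with condition metadata; the ''balance_weights'' helper is illustrative only:

<code python>
import random
from collections import Counter

# Hypothetical training clips, each tagged with condition metadata.
clips = [
    {"id": 0, "weather": "clear", "time": "day"},
    {"id": 1, "weather": "clear", "time": "day"},
    {"id": 2, "weather": "rain",  "time": "night"},
    # ... in practice, many thousands of clips
]

def balance_weights(records, key):
    """Inverse-frequency weights: clips from rare conditions get sampled more."""
    counts = Counter(r[key] for r in records)
    return [1.0 / counts[r[key]] for r in records]

weights = balance_weights(clips, key="weather")
batch = random.choices(clips, weights=weights, k=2)  # oversamples rainy clips
</code>

The same weighting can be computed per attribute (weather, time of day, region) or over joint combinations to counter under-represented conditions before training.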