The objective is to quantify detection performance—and its safety impact—across the ODD. In end-to-end, high-fidelity (HF) simulation, we log both simulator ground truth and the stack's detections, then compute per-class statistics as a function of distance and occlusion. Near-field errors are emphasized because they dominate braking and collision risk. Scenario sets should include partial occlusions, sudden obstacle appearances, vulnerable road users, and adverse weather/illumination, all realized over the site map so that failures can be replayed and compared.
  
  
<figure Detection Validation>
{{ :en:safeav:maps:perception_val.png?400 | Detection Validation}}
<caption>Detection validation example. The ground truth of the detectable vehicles is indicated using green boxes, while the detections are marked using red boxes.</caption>
</figure>
  
  * **KPIs**: precision/recall per class and distance bin; time-to-detect and time-to-react deltas; TTC availability and whether perceived obstacles trigger sufficient braking distance.
  * **Search strategy**: use low-fidelity (LF) sweeps for breadth (planner-in-the-loop, simplified sensors) and confirm top-risk cases in HF with full sensor simulation before any track trials.
  
Figure 1 illustrates the object comparison: green boxes mark objects present in the simulator ground truth, while red boxes mark objects detected by the AV stack. Threshold-based rules compare the two sets of objects and yield per-range indicators of which vehicles are detectable, separately for the safety and danger areas.
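
As an illustration, this comparison can be prototyped directly on the logged frames. The following is a minimal sketch, assuming hypothetical log records with ''cls'', ''range'' (ego distance), and ''xy'' fields; the bin edges and match radius are placeholder values, not the stack's actual thresholds.

<code python>
# Minimal sketch of threshold-based detection matching with per-class,
# per-distance-bin precision/recall. Record fields and thresholds are assumed.
from collections import defaultdict

BIN_EDGES = [0, 10, 20, 40, 80]   # distance bins in metres (illustrative)
MATCH_RADIUS = 2.0                # centre-distance match threshold in metres

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def distance_bin(r):
    for lo, hi in zip(BIN_EDGES, BIN_EDGES[1:]):
        if lo <= r < hi:
            return f"{lo}-{hi} m"
    return f">{BIN_EDGES[-1]} m"

def evaluate_frame(gt_objects, detections, stats):
    """Greedy one-to-one matching of detections to ground-truth objects."""
    unmatched = list(detections)
    for gt in gt_objects:  # each object: {'cls': ..., 'range': ..., 'xy': (x, y)}
        key = (gt["cls"], distance_bin(gt["range"]))
        best = min(unmatched, key=lambda d: dist(d["xy"], gt["xy"]), default=None)
        if best is not None and dist(best["xy"], gt["xy"]) <= MATCH_RADIUS:
            stats[key]["tp"] += 1      # green box has a matching red box
            unmatched.remove(best)     # a detection can match at most one object
        else:
            stats[key]["fn"] += 1      # green box with no red box: missed object
    for d in unmatched:
        stats[(d["cls"], distance_bin(d["range"]))]["fp"] += 1  # spurious red box

stats = defaultdict(lambda: {"tp": 0, "fp": 0, "fn": 0})
# for frame in simulation_log: evaluate_frame(frame.gt, frame.detections, stats)
for (cls, dbin), s in sorted(stats.items()):
    precision = s["tp"] / max(s["tp"] + s["fp"], 1)
    recall = s["tp"] / max(s["tp"] + s["fn"], 1)
    print(f"{cls:12s} {dbin:>8s}  precision={precision:.2f}  recall={recall:.2f}")
</code>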
====== Mapping / Digital-Twin Validation ======
  
  
  
Here, the focus is on the robustness of ego-pose to sensor noise, outages, and map inconsistencies. In simulation, you inject GNSS multipath, IMU bias, packet dropouts, or short GNSS blackouts and watch how quickly the estimator diverges and re-converges. Similar tests perturb the map (e.g., small lane-mark misalignments) to examine estimator sensitivity to mapping error.
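
A minimal sketch of such fault injection is shown below, assuming a hypothetical per-sample hook between the simulator and the estimator; the measurement format and the fault parameters are illustrative, not the simulator's actual interface.

<code python>
# Illustrative fault-injection hook for simulated sensor streams.
import random

def inject_faults(gnss_fix, imu_sample, t,
                  blackout=(40.0, 42.5),   # assumed GNSS outage window, seconds
                  multipath_sigma=1.5,     # extra GNSS position noise, metres
                  gyro_bias=0.002):        # constant gyro bias, rad/s
    """Return a perturbed (gnss_fix, imu_sample) pair for simulation time t."""
    if blackout[0] <= t < blackout[1]:
        gnss_fix = None                    # short GNSS blackout: no fix at all
    elif gnss_fix is not None:
        gnss_fix = (gnss_fix[0] + random.gauss(0.0, multipath_sigma),
                    gnss_fix[1] + random.gauss(0.0, multipath_sigma))
    imu_sample = {**imu_sample, "gyro_z": imu_sample["gyro_z"] + gyro_bias}
    return gnss_fix, imu_sample
</code>

Running the estimator on the perturbed stream and comparing its output against the unperturbed ground-truth trajectory exposes the divergence and re-convergence behaviour described above.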
  
The following is a short KPI list (a computation sketch follows the list):
  
  * **Pose error & drift**: per-frame position/orientation error, drift rate during GNSS loss.
  * **Continuity**: lane-level continuity at junctions and during sharp maneuvers.
  * **Recovery**: re-convergence time and heading stability after outages.
  * **Safety propagation**: impact on distance-to-collision (DTC), braking sufficiency, and rule-checking (e.g., lane keeping within margins).
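
A minimal sketch for the first and third KPIs, assuming per-frame tuples of time, estimated position, ground-truth position, and a GNSS-availability flag (a single outage window is assumed for the drift estimate):

<code python>
# Sketch of the pose-error, drift, and recovery KPIs. `frames` is an assumed
# list of (t, est_xy, gt_xy, gnss_ok) tuples with one GNSS outage window.
import math

def pose_kpis(frames, converged_thresh=0.3):
    errors = [math.dist(est, gt) for _, est, gt, _ in frames]
    # Drift rate: error growth per second while GNSS is unavailable.
    loss = [(t, e) for (t, _, _, ok), e in zip(frames, errors) if not ok]
    drift_rate = ((loss[-1][1] - loss[0][1]) / (loss[-1][0] - loss[0][0])
                  if len(loss) > 1 else 0.0)
    # Recovery: time from the end of the outage until the error falls back
    # below the convergence threshold.
    recovery = None
    if loss:
        t_return = loss[-1][0]
        for (t, _, _, ok), e in zip(frames, errors):
            if t > t_return and ok and e < converged_thresh:
                recovery = t - t_return
                break
    return {"mean_err_m": sum(errors) / len(errors),
            "max_err_m": max(errors),
            "drift_m_per_s": drift_rate,
            "reconverge_s": recovery}
</code>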
  
<figure Localization Validation>
{{ :en:safeav:maps:localization_val.png?400 | localization validation}}
<caption>Localization validation. In some cases, the difference between the expected location and the actual location may lead to accidents.</caption>
</figure>

The current validation methods perform a one-to-one mapping between the expected and actual locations. As shown in Fig. 2, the vehicle's position deviation is computed for each frame and written to the validation report; summary parameters, such as the minimum, maximum, and mean deviation, are then derived from that same report. The simulator can also be modified to embed a mechanism that adds noise to the localization process, so that robustness can be checked as part of the same validation procedure.
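
The two-step report workflow can be sketched as follows; the file name and column layout are assumptions, not a prescribed format.

<code python>
# Minimal sketch of the report round-trip: write one deviation row per frame,
# then derive the summary statistics from the same file.
import csv, math, statistics

def write_report(frames, path="localization_report.csv"):
    """frames: assumed iterable of (expected_xy, actual_xy) pairs."""
    with open(path, "w", newline="") as f:
        w = csv.writer(f)
        w.writerow(["frame", "deviation_m"])
        for i, (expected, actual) in enumerate(frames):
            w.writerow([i, math.dist(expected, actual)])

def summarize(path="localization_report.csv"):
    with open(path) as f:
        devs = [float(row["deviation_m"]) for row in csv.DictReader(f)]
    return {"min": min(devs), "max": max(devs), "mean": statistics.mean(devs)}
</code>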
====== Multi-Fidelity Workflow and Scenario-to-Track Bridge ======
  
  
A two-stage workflow balances coverage and realism. First, use LF tools (e.g., planner-in-the-loop with simplified sensors and traffic) to sweep large grids of logical scenarios and identify risky regions in parameter space (relative speed, initial gap, occlusion level). Then, promote the most informative concrete scenarios to HF simulation with photorealistic sensors for end-to-end validation of perception and localization interactions. Where appropriate, a small, curated set of scenarios is carried to closed-track trials. Success criteria are consistent across all stages, and post-run analyses attribute failures to perception, localization, prediction, or planning so fixes are targeted rather than generic.
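
As a sketch of this promotion logic, the LF sweep can be organized as a plain grid search whose top-scoring cases are carried into HF; the parameter grid and the ''run_lf''/''run_hf'' hooks below are placeholders standing in for real scenario tooling.

<code python>
# Illustrative two-stage sweep: score every concrete scenario in LF, then
# promote the riskiest cases to HF simulation.
import itertools

GRID = {
    "relative_speed_mps": [2, 5, 8, 11],
    "initial_gap_m":      [5, 10, 20, 40],
    "occlusion_level":    [0.0, 0.3, 0.6],
}

def sweep(run_lf, run_hf, promote_k=10):
    names = list(GRID)
    scenarios = [dict(zip(names, values))
                 for values in itertools.product(*GRID.values())]
    # Stage 1: cheap LF run per scenario; run_lf returns a risk score
    # (higher = riskier, e.g. inverted minimum TTC observed).
    scored = sorted(scenarios, key=run_lf, reverse=True)
    # Stage 2: confirm only the most informative scenarios end-to-end in HF.
    return [run_hf(s) for s in scored[:promote_k]]
</code>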