en:safeav:maps:validation, revision 2025/10/23 20:46 (current), by momala

====== Perception Validation ======
The objective is to quantify detection performance and its safety impact across the ODD. In end-to-end, high-fidelity (HF) simulation, we log both the simulator's ground truth and the stack's detections, then compute per-class statistics as a function of distance and occlusion. Near-field errors are emphasized because they dominate braking and collision risk. Scenario sets should include partial occlusions, sudden obstacle appearances, and similar hard cases.
| + | |||
| + | <figure Detection Validation> | ||
| + | {{ : | ||
| + | < | ||
| + | </ | ||
  * **KPIs**: precision/recall per class, reported as a function of distance and occlusion.
  * **Search strategy**: use low-fidelity (LF) sweeps for breadth (planner-in-the-loop, simplified sensors and traffic), then promote the riskiest scenarios to HF simulation for depth.
Figure 1 illustrates the object comparison: green boxes mark objects captured in the ground truth, while red boxes mark objects detected by the AV stack. Threshold-based rules compare the two sets of objects, yielding range-specific indicators of which vehicles are detectable in the safety and danger zones.
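As a concrete illustration of the threshold-based comparison, the sketch below greedily matches ground-truth and detected object centers by distance and derives precision/recall from the result. The function names, the 2 m threshold, and the use of center distance (rather than, say, box overlap) are assumptions for illustration, not the page's actual tooling.

```python
import math

def match_detections(gt, det, max_dist=2.0):
    """Greedily match ground-truth and detected object centers.

    gt, det: lists of (x, y) centers in the ego frame (metres).
    Returns (matches, misses, false_alarms) as index lists/pairs.
    """
    unmatched_det = set(range(len(det)))
    matches, misses = [], []
    for gi, (gx, gy) in enumerate(gt):
        best, best_d = None, max_dist
        for di in unmatched_det:
            d = math.hypot(det[di][0] - gx, det[di][1] - gy)
            if d <= best_d:
                best, best_d = di, d
        if best is None:
            misses.append(gi)          # ground-truth object not detected
        else:
            matches.append((gi, best)) # one-to-one association
            unmatched_det.remove(best)
    return matches, misses, sorted(unmatched_det)

def precision_recall(gt, det, max_dist=2.0):
    matches, misses, false_alarms = match_detections(gt, det, max_dist)
    tp, fn, fp = len(matches), len(misses), len(false_alarms)
    precision = tp / (tp + fp) if tp + fp else 1.0
    recall = tp / (tp + fn) if tp + fn else 1.0
    return precision, recall

gt = [(5.0, 0.0), (20.0, 1.0), (40.0, -2.0)]   # simulator ground truth
det = [(5.3, 0.1), (21.0, 1.2), (80.0, 0.0)]   # stack detections (one spurious)
p, r = precision_recall(gt, det)
```

Binning the same statistics by ground-truth range then gives the per-distance indicators described above.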
====== Mapping / Digital-Twin Validation ======
====== Localization Validation ======
Here, the focus is on the robustness of the ego-pose estimate to sensor noise, outages, and map inconsistencies. In simulation, you inject GNSS multipath, IMU bias, packet dropouts, or short GNSS blackouts and measure how quickly the estimator diverges and re-converges. Similar tests perturb the map (e.g., small lane-mark misalignments) to examine estimator sensitivity to mapping error.
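A minimal sketch of this fault-injection pattern, assuming GNSS fixes arrive as a per-frame list and pose error is already logged per frame (the function names and the 0.5 m tolerance are illustrative, not the page's actual harness):

```python
def inject_blackout(gnss_fixes, t_start, t_end):
    """Simulate a GNSS outage: drop every fix in frames [t_start, t_end)."""
    return [None if t_start <= t < t_end else fix
            for t, fix in enumerate(gnss_fixes)]

def reconvergence_frames(pose_errors, t_end, tol=0.5):
    """Frames needed after the outage ends for pose error to re-enter tol.

    Returns None if the estimator never re-converges within the log.
    """
    for t in range(t_end, len(pose_errors)):
        if pose_errors[t] <= tol:
            return t - t_end
    return None

# Toy error trace (metres): nominal tracking, divergence during a blackout
# over frames 3-5, then gradual re-convergence once GNSS returns at frame 6.
errors = [0.1, 0.1, 0.1, 1.2, 2.0, 1.5, 0.9, 0.4, 0.2]
frames = reconvergence_frames(errors, t_end=6)
```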
| + | |||
| + | The following is a short KPI list: | ||
| + | |||
| + | * **Pose error & drift**: per-frame position/ | ||
| - | Keep a short KPI list: | + | * **Continuity**: lane-level continuity at junctions and during sharp maneuvers. |
| - | * Pose error & drift: per-frame position/ | + | * **Recovery**: re-convergence time and heading stability after outages. |
| - | * Continuity: lane-level continuity at junctions | + | * **Safety propagation**: impact on distance-to-collision (DTC), braking sufficiency, |
| - | * Recovery: re-convergence time and heading stability after outages. | ||
| - | * Safety propagation: impact on distance-to-collision (DTC), braking sufficiency, and rule-checking (e.g., lane keeping within margins). | + | <figure Localization Validation> |
| + | {{ :en: | ||
| + | < | ||
| + | </ | ||
The current validation method performs a one-to-one mapping between expected and actual locations. As shown in Fig. 2, the vehicle's position deviation is computed for each frame and reported in the validation report. Summary parameters, such as the min/max/mean deviation, are then derived from the per-frame results.
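The per-frame one-to-one mapping and the derived summary parameters can be sketched as follows; the report keys and the Euclidean deviation metric are assumptions for illustration:

```python
import math

def deviation_report(expected, actual):
    """Per-frame deviation between expected and actual (x, y) positions,
    plus the summary statistics carried into the validation report."""
    devs = [math.hypot(ax - ex, ay - ey)
            for (ex, ey), (ax, ay) in zip(expected, actual)]
    return {
        "per_frame": devs,
        "min": min(devs),
        "max": max(devs),
        "mean": sum(devs) / len(devs),
    }

# One expected/actual pair per frame (metres, map frame).
expected = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
actual   = [(0.0, 0.1), (1.0, 0.3), (2.0, 0.2)]
report = deviation_report(expected, actual)
```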
====== Multi-Fidelity Workflow and Scenario-to-Track Bridge ======
A two-stage workflow balances coverage and realism. First, use LF tools (e.g., planner-in-the-loop with simplified sensors and traffic) to sweep large grids of logical scenarios and identify risky regions in parameter space (relative speed, initial gap, occlusion level). Then, promote the most informative concrete scenarios to HF simulation with photorealistic sensors for end-to-end validation of perception and localization interactions. Where appropriate, bridge the most critical scenarios onward to closed-track testing.
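The two-stage promotion logic can be sketched as below; the parameter names and the toy risk proxy are assumptions standing in for a real LF simulation run:

```python
def lf_sweep(param_grid, lf_metric):
    """Run a cheap low-fidelity risk metric over a grid of logical scenarios."""
    return [(params, lf_metric(params)) for params in param_grid]

def promote(results, risk_threshold):
    """Select concrete scenarios whose LF risk score warrants HF simulation."""
    return [params for params, risk in results if risk >= risk_threshold]

# Toy risk proxy (assumption): risk grows with closing speed and occlusion,
# and shrinks with the initial gap.
def toy_risk(p):
    return p["rel_speed"] * p["occlusion"] / p["gap"]

# Logical-scenario grid over the parameters named above.
grid = [{"rel_speed": v, "gap": g, "occlusion": o}
        for v in (5.0, 15.0) for g in (10.0, 40.0) for o in (0.2, 0.8)]
risky = promote(lf_sweep(grid, toy_risk), risk_threshold=0.5)
```

Only the scenarios surviving `promote` are re-run in HF simulation, which keeps the expensive photorealistic runs focused on the risky region of parameter space.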