en:safeav:maps:validation: revised 2025/07/02 13:02 (pczekalski), current 2025/10/23 20:46 (momala)
<todo @bertlluk>

This section presents a practical, simulation-driven approach to validating the perception, mapping (HD maps/digital twins), and localization subsystems of an automated driving stack.

====== Scope, ODD, and Assurance Frame ======

We decompose the stack into Perception (object detection/tracking), Mapping (HD map and digital-twin generation), and Localization (ego-pose estimation). Each subsystem is validated against KPIs defined over the Operational Design Domain (ODD), so that the evidence collected in simulation feeds a coherent assurance case.

====== Perception Validation ======

The objective is to quantify detection performance, and its safety impact, across the ODD. In end-to-end, high-fidelity (HF) simulation, we log both simulator ground truth and the stack's detections, then compute per-class statistics as a function of distance and occlusion. Near-field errors are emphasized because they dominate braking and collision risk. Scenario sets should include partial occlusions, sudden obstacle appearances, and other near-field hazards.

<figure Detection Validation>
{{ :
<caption></caption>
</figure>

  * **KPIs**: precision/recall per class, reported as a function of range and occlusion level, with near-field misses weighted most heavily.
  * **Search strategy**: use low-fidelity (LF) sweeps for breadth (planner-in-the-loop, simplified sensors and traffic), then promote the riskiest concrete scenarios to HF simulation for depth.

Figure 1 illustrates the object comparison: green boxes mark objects captured in the ground truth, while red boxes mark objects detected by the AV stack. Threshold-based rules match the two sets, which yields range-specific indicators of which vehicles are reliably detectable in the safety and danger areas.

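As a hedged illustration of this threshold-based comparison, the sketch below greedily matches ground-truth boxes to detections by center distance and reports precision/recall per range bin. The ''Box'' type, the class names, the 2 m matching threshold, and the bin edges are all illustrative assumptions, not part of any specific stack.

```python
# Illustrative sketch: match ground-truth objects to detections and
# compute precision/recall per range bin. All names and thresholds
# here are assumptions for demonstration, not a real AV-stack API.
from dataclasses import dataclass

@dataclass
class Box:
    x: float      # longitudinal distance from ego [m]
    y: float      # lateral offset [m]
    cls: str      # object class, e.g. "vehicle"

def match(gt, det, max_dist=2.0):
    """Greedy one-to-one matching by center distance (a simple IoU proxy)."""
    unmatched_det = list(det)
    tp, fn = 0, 0
    for g in gt:
        best, best_d = None, max_dist
        for d in unmatched_det:
            dist = ((g.x - d.x) ** 2 + (g.y - d.y) ** 2) ** 0.5
            if d.cls == g.cls and dist <= best_d:
                best, best_d = d, dist
        if best is not None:
            unmatched_det.remove(best)  # each detection is used at most once
            tp += 1
        else:
            fn += 1                     # ground-truth object missed
    fp = len(unmatched_det)             # detections with no ground-truth partner
    return tp, fp, fn

def range_binned_stats(gt, det, bins=((0, 30), (30, 60))):
    """Precision/recall per range bin; near-field bins dominate braking risk."""
    stats = {}
    for lo, hi in bins:
        g = [b for b in gt if lo <= b.x < hi]
        d = [b for b in det if lo <= b.x < hi]
        tp, fp, fn = match(g, d)
        prec = tp / (tp + fp) if tp + fp else 1.0
        rec = tp / (tp + fn) if tp + fn else 1.0
        stats[(lo, hi)] = {"precision": prec, "recall": rec}
    return stats
```

In practice the same structure applies with proper 3D IoU matching and finer bins; the point is that near-field and far-field performance are reported separately rather than as one aggregate score.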
====== Mapping / Digital-Twin Validation ======

Validation begins with how the map and digital twin are produced. Aerial imagery or LiDAR is collected with RTK geo-tagging and surveyed control points, then processed into dense point clouds and classified to separate roads, buildings, and vegetation. From there, you export OpenDRIVE (for lanes, traffic rules, and topology) and a 3D environment for HF simulation. The twin should be accurate enough that perception models do not overfit artifacts and localization algorithms can achieve lane-level continuity.

Key checks include lane topology fidelity versus survey, geo-consistency at the centimeter level, correct surface classification (road, building, vegetation), and visual realism sufficient for perception results to transfer from the twin to the real scene.

====== Localization Validation ======

Here, the focus is on the robustness of ego-pose to sensor noise, outages, and map inconsistencies. In simulation, you inject GNSS multipath, IMU bias, packet dropouts, or short GNSS blackouts and watch how quickly the estimator diverges and re-converges. Similar tests perturb the map (e.g., small lane-mark misalignments) to examine estimator sensitivity to mapping error.

The following is a short KPI list:

  * **Pose error & drift**: per-frame position/heading error and its accumulation over time (e.g., mean, max, and end-of-run drift).
  * **Continuity**: absence of sudden jumps in the estimated trajectory between consecutive frames.
  * **Recovery**: time for the estimator to re-converge after an injected outage such as a GNSS blackout.
  * **Safety propagation**: the effect of pose error on downstream behavior, e.g., lane-keeping margin or stopping accuracy.

<figure Localization Validation>
{{ :
<caption></caption>
</figure>

The current validation methods perform a one-to-one mapping between the expected and actual vehicle locations. As shown in Fig. 2, the position deviation is computed for each frame and reported in the validation report. Aggregate parameters, such as min/max/mean deviation, are then derived from the per-frame values.

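The per-frame comparison and its aggregates can be sketched as follows; the function name, the jump tolerance used for the continuity check, and the input layout are illustrative assumptions.

```python
# Illustrative sketch: one-to-one comparison of expected vs. actual
# per-frame positions, producing the aggregate figures for the report.
# Names and the jump tolerance are assumptions for demonstration.
import math

def deviation_report(expected, actual, jump_tol=0.5):
    """expected/actual: equal-length lists of (x, y) positions per frame."""
    dev = [math.hypot(ex - ax, ey - ay)
           for (ex, ey), (ax, ay) in zip(expected, actual)]
    # Continuity KPI: count sudden changes in deviation between frames.
    jumps = sum(1 for a, b in zip(dev, dev[1:]) if abs(b - a) > jump_tol)
    return {
        "min": min(dev),
        "max": max(dev),
        "mean": sum(dev) / len(dev),
        "discontinuities": jumps,
    }
```

The same report structure extends naturally to heading error and to per-segment breakdowns (e.g., straight vs. curved road sections).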
====== Multi-Fidelity Workflow and Scenario-to-Track Bridge ======

A two-stage workflow balances coverage and realism. First, use LF tools (e.g., planner-in-the-loop with simplified sensors and traffic) to sweep large grids of logical scenarios and identify risky regions in parameter space (relative speed, initial gap, occlusion level). Then, promote the most informative concrete scenarios to HF simulation with photorealistic sensors for end-to-end validation of perception and localization interactions. Where appropriate, carry the same concrete scenarios over to closed-track testing, so that simulated and physical trials share a common scenario definition.
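The LF-to-HF promotion step can be sketched as a grid sweep scored by a cheap risk surrogate. Here the surrogate is the constant deceleration needed to close out a gap at a given relative speed; the surrogate choice, the parameter grids, and ''top_k'' are all illustrative assumptions.

```python
# Illustrative sketch: low-fidelity sweep over a logical-scenario grid,
# promoting the riskiest concrete scenarios to high-fidelity simulation.
# The risk surrogate and parameter grids are assumptions for demonstration.
from itertools import product

def required_decel(rel_speed, gap):
    """Cheap LF surrogate: constant deceleration [m/s^2] needed to
    shed rel_speed [m/s] within gap [m]."""
    return rel_speed ** 2 / (2.0 * gap)

def promote(rel_speeds, gaps, top_k=3):
    """Score every (relative speed, initial gap) combination and return
    the top_k riskiest parameter pairs for HF simulation."""
    scored = [((v, g), required_decel(v, g))
              for v, g in product(rel_speeds, gaps)]
    scored.sort(key=lambda item: item[1], reverse=True)
    return [params for params, _ in scored[:top_k]]
```

In a real pipeline the surrogate would come from the planner-in-the-loop run (e.g., minimum time-to-collision or hardest braking observed), but the shape of the workflow, sweep broadly, rank, promote a shortlist, stays the same.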