| - | {{ : | + | <figure Localization Validation> |
| + | {{ : | ||
| + | < | ||
| + | </ | ||
| + | |||
The current validation methods perform a one-to-one mapping between the expected and actual locations. As shown in Fig. 2, for each frame, the vehicle position deviation is computed and reported in the validation report. Later parameters, like min/
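The per-frame deviation check described above can be sketched as follows. This is a minimal illustration, not the tool's actual implementation: the function name `localization_deviation_report`, the `(x, y)` position format, and the min/max/mean summary fields are all assumptions for the example.

```python
import math

def localization_deviation_report(expected, actual):
    """Hypothetical one-to-one localization check.

    expected, actual: equal-length lists of (x, y) vehicle positions,
    one pair per frame. Returns per-frame deviations plus summary
    statistics of the kind a validation report might include.
    """
    if len(expected) != len(actual):
        raise ValueError("one-to-one mapping requires equal frame counts")
    # Euclidean distance between expected and actual position, per frame
    deviations = [
        math.hypot(ax - ex, ay - ey)
        for (ex, ey), (ax, ay) in zip(expected, actual)
    ]
    return {
        "per_frame": deviations,
        "min": min(deviations),
        "max": max(deviations),
        "mean": sum(deviations) / len(deviations),
    }

# Example: three frames with small lateral offsets
report = localization_deviation_report(
    expected=[(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)],
    actual=[(0.0, 0.1), (1.0, 0.0), (2.0, -0.2)],
)
```

A report of this shape makes it straightforward to flag frames whose deviation exceeds a configured threshold.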
====== Multi-Fidelity Workflow and Scenario-to-Track Bridge ======
A two-stage workflow balances coverage and realism. First, use low-fidelity (LF) tools (e.g., planner-in-the-loop with simplified sensors and traffic) to sweep large grids of logical scenarios and identify risky regions in parameter space (relative speed, initial gap, occlusion level). Then, promote the most informative concrete scenarios to high-fidelity (HF) simulation with photorealistic sensors for end-to-end validation of perception and localization interactions. Where appropriate,
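The sweep-then-promote step can be sketched as below. Everything here is illustrative: the risk heuristic in `lf_risk_score` is a made-up surrogate (real LF runs would execute the planner), and the grid values are placeholders; only the overall pattern — enumerate the logical-scenario grid, score cheaply, promote the riskiest concrete scenarios to HF — comes from the text.

```python
import itertools

def lf_risk_score(rel_speed, init_gap, occlusion):
    # Hypothetical LF surrogate: higher closing speed, shorter initial
    # gap, and heavier occlusion all raise the risk score.
    return (rel_speed / max(init_gap, 1e-6)) * (1.0 + occlusion)

def sweep_and_promote(param_grid, top_k=3):
    """Sweep a logical-scenario parameter grid with a cheap LF metric
    and promote the top_k riskiest concrete scenarios to HF simulation."""
    # Expand the logical scenario into concrete parameter combinations
    concrete = [
        dict(zip(param_grid, values))
        for values in itertools.product(*param_grid.values())
    ]
    ranked = sorted(
        concrete,
        key=lambda s: lf_risk_score(s["rel_speed"], s["init_gap"], s["occlusion"]),
        reverse=True,
    )
    return ranked[:top_k]

# Placeholder grid over the parameters named in the text
grid = {
    "rel_speed": [5.0, 10.0, 15.0],  # m/s
    "init_gap": [10.0, 30.0],        # m
    "occlusion": [0.0, 0.5],         # fraction of view occluded
}
hf_candidates = sweep_and_promote(grid)
```

In practice the LF score would come from simulation metrics (e.g., minimum time-to-collision over the run) rather than a closed-form heuristic, but the promotion logic stays the same.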