The V&V workflow begins with a formal scenario description:

To maintain broad coverage without sacrificing realism, validations can be done using a two-layer approach. A low-fidelity (LF) layer (e.g., SUMO) sweeps wide parameter grids quickly to reveal where planning/control margins come under pressure, and a high-fidelity (HF) layer (e.g., AWSIM) replays the flagged cases with realistic sensing and vehicle dynamics (see the sketch and the figure below).

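As an illustration of the LF-to-HF hand-off, the sketch below promotes any grid point whose minimum Time-to-Collision falls under a margin to a high-fidelity replay queue. The function names, parameters, and the toy TTC model are assumptions for this example, not the project's actual interface.

<code python>
# Minimal sketch of the two-layer sweep (hypothetical names, not the project's API).
# A coarse LF grid is swept first; cases whose minimum TTC falls below a margin are
# promoted to the HF queue for replay in a high-fidelity simulator.
from itertools import product

def sweep_lf(run_lf_case, speeds_mps, gaps_m, ttc_margin_s=2.0):
    """run_lf_case(speed, gap) -> minimum TTC [s] observed in the low-fidelity run."""
    hf_queue = []
    for speed, gap in product(speeds_mps, gaps_m):
        min_ttc = run_lf_case(speed, gap)
        if min_ttc < ttc_margin_s:  # planning/control margin shrinks here
            hf_queue.append({"speed_mps": speed, "gap_m": gap, "lf_min_ttc_s": min_ttc})
    return hf_queue

# Stand-in LF model for illustration only: TTC ~ gap / closing speed.
demo = sweep_lf(lambda v, g: g / max(v, 0.1),
                speeds_mps=[5, 10, 15, 20],
                gaps_m=[10, 20, 40])
print(f"{len(demo)} case(s) promoted to high-fidelity replay")
</code>
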
<figure Low and High Fidelity>
{{ : ... }}
<caption> a) Low-Fidelity SUMO simulator ((Pablo Alvarez Lopez, Michael Behrisch, Laura Bieker-Walz, Jakob Erdmann, Yun-Pang Flötteröd, Robert Hilbrich, Leonhard Lücken, Johannes Rummel, Peter Wagner, and Evamarie Wießner. Microscopic Traffic Simulation using SUMO. In The 21st IEEE International Conference on Intelligent Transportation Systems. IEEE, 2018.)) b) High-Fidelity AWSIM simulator ((Autoware Foundation. TIER IV AWSIM. https://..., 2022.)) </caption>
</figure>

Formal methods strengthen this flow. In the simulation-to-track pipeline, scenarios and safety properties are specified formally (e.g., via Scenic and Metric Temporal Logic), falsification synthesizes challenging test cases, and a mapping executes those cases on a closed track((Fremont, ...)). In published evidence, a majority of unsafe simulated cases reproduced as unsafe on track, and safe cases mostly remained safe, while time-series comparisons (e.g., DTW, Skorokhod metrics) quantified the sim-to-real differences relevant to planning and control. This is exactly the kind of transferability and measurement discipline a planning/control validation campaign needs.

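To make the time-series comparison concrete, the sketch below computes a plain dynamic-programming DTW distance between a simulated and a track-recorded speed trace. The trace values and names are illustrative assumptions; production tooling would typically rely on a dedicated DTW or Skorokhod-metric library.

<code python>
# Hedged sketch: dynamic-programming DTW distance between two 1-D traces
# (e.g., simulated vs. track-recorded speed profiles).
def dtw_distance(sim_trace, track_trace):
    n, m = len(sim_trace), len(track_trace)
    INF = float("inf")
    # cost[i][j] = best alignment cost of sim_trace[:i] vs track_trace[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(sim_trace[i - 1] - track_trace[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # skip a track sample
                                 cost[i][j - 1],      # skip a sim sample
                                 cost[i - 1][j - 1])  # align the two samples
    return cost[n][m]

# Example: small sim-to-real gap between two braking profiles [m/s].
sim   = [10.0, 9.0, 7.5, 5.0, 2.0, 0.0]
track = [10.0, 9.2, 7.0, 4.5, 1.5, 0.0]
print(f"DTW distance: {dtw_distance(sim, track):.2f}")
</code>
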
| + | |||
Finally, environment twins are built from aerial photogrammetry and point-cloud processing (with RTK-supported georeferencing).

====== Methods and Metrics for Planning & Control ======

Mission-level planning validation starts from a start–goal pair and asks whether the vehicle reaches the destination via a safe, policy-compliant trajectory. Your platform publishes three families of evidence: (i) trajectory-following error relative to the global path; (ii) safety outcomes such as collisions or violations of separation; and (iii) mission success (goal reached without violations). This couples path selection quality to execution fidelity.

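A minimal sketch of how the three evidence families could be derived from a logged run is shown below, assuming hypothetical names for the pose list, global path, and thresholds; it is not the platform's actual reporting code.

<code python>
# Hedged sketch: cross-track error against a piecewise-linear global path and a
# simple mission verdict. Thresholds and field names are illustrative placeholders.
import math

def cross_track_error(px, py, path):
    """Distance [m] from point (px, py) to the closest segment of a polyline path."""
    best = float("inf")
    for (x1, y1), (x2, y2) in zip(path, path[1:]):
        dx, dy = x2 - x1, y2 - y1
        seg_len2 = dx * dx + dy * dy
        t = 0.0 if seg_len2 == 0 else max(0.0, min(1.0, ((px - x1) * dx + (py - y1) * dy) / seg_len2))
        best = min(best, math.hypot(px - (x1 + t * dx), py - (y1 + t * dy)))
    return best

def mission_result(poses, path, goal, collisions, goal_tol_m=2.0, xte_limit_m=1.0):
    """Aggregate the three evidence families: tracking error, safety outcome, mission success."""
    xte = [cross_track_error(x, y, path) for x, y in poses]
    reached = math.hypot(poses[-1][0] - goal[0], poses[-1][1] - goal[1]) <= goal_tol_m
    return {"max_xte_m": max(xte),
            "safety_ok": collisions == 0 and max(xte) <= xte_limit_m,
            "mission_success": reached and collisions == 0}

print(mission_result(poses=[(0, 0.2), (5, 0.4), (10, 0.1)],
                     path=[(0, 0), (10, 0)], goal=(10, 0), collisions=0))
</code>
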
At the local planning level, your case study focuses on the planner (see the figure below).

<figure Trajectory Validation>
{{ : ... }}
<caption> Trajectory validation </caption>
</figure>

Control validation links perception-induced delays to braking and steering outcomes. Your framework computes Time-to-Collision (TTC, the remaining gap to a detected obstacle divided by the closing speed) along with the simulator and AV-stack response times to detected obstacles. Sufficient response time allows a safe return to nominal headway; excessive delay predicts collision, sharp braking, or planner oscillations. By logging ground truth, perception outputs, CAN bus commands, and the resulting dynamics, the analysis separates sensing delays from controller latency, revealing where mitigation belongs (planner margins vs. control gains).

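The sketch below shows one way to compute TTC and to split the total reaction time into sensing delay and control latency from logged timestamps; the field names and example values are assumptions for illustration, not the framework's schema.

<code python>
# Hedged sketch: TTC and response latency from logged timestamps.
def time_to_collision(gap_m, ego_speed_mps, obstacle_speed_mps):
    """TTC = remaining gap / closing speed; infinite if the gap is not closing."""
    closing = ego_speed_mps - obstacle_speed_mps
    return float("inf") if closing <= 0 else gap_m / closing

def response_latencies(t_ground_truth, t_perception, t_cmd):
    """Split total reaction time into sensing delay and planning/control latency."""
    return {"sensing_delay_s": t_perception - t_ground_truth,
            "control_latency_s": t_cmd - t_perception,
            "total_response_s": t_cmd - t_ground_truth}

print(round(time_to_collision(gap_m=30.0, ego_speed_mps=15.0, obstacle_speed_mps=5.0), 2))  # 3.0 s
print(response_latencies(t_ground_truth=10.00, t_perception=10.12, t_cmd=10.35))
</code>
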
A necessary dependency is localization health. Your tests inject controlled GPS/IMU degradations and dropouts through simulator APIs, then compare expected vs. actual pose per frame to quantify drift. Because planning and control are sensitive to absolute and relative pose, this produces actionable thresholds for safe operation (e.g., maximum tolerated RMS deviation before reducing speed or restricting maneuvers).

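As a sketch of the per-frame comparison, the snippet below computes pose drift and an RMS-based pass/fail against an assumed tolerance; the 0.30 m limit is a placeholder for illustration, not a calibrated threshold.

<code python>
# Hedged sketch: per-frame pose drift and an RMS-based localization health check.
import math

def pose_drift(expected_poses, actual_poses):
    """Per-frame Euclidean deviation [m] between expected and actual (x, y) poses."""
    return [math.hypot(ax - ex, ay - ey)
            for (ex, ey), (ax, ay) in zip(expected_poses, actual_poses)]

def localization_healthy(expected_poses, actual_poses, rms_limit_m=0.30):
    """Flag whether RMS deviation stays within the tolerated limit for safe operation."""
    d = pose_drift(expected_poses, actual_poses)
    rms = math.sqrt(sum(e * e for e in d) / len(d))
    return rms <= rms_limit_m, rms

ok, rms = localization_healthy(expected_poses=[(0, 0), (1, 0), (2, 0)],
                               actual_poses=[(0.0, 0.1), (1.0, 0.2), (2.1, 0.2)])
print(f"healthy={ok}, rms={rms:.3f} m")  # e.g., restrict maneuvers when unhealthy
</code>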