Mission-level planning validation starts from a start–goal pair and asks whether the vehicle reaches the destination via a safe, policy-compliant trajectory. Your platform publishes three families of evidence: (i) trajectory-following error relative to the global path; (ii) safety outcomes such as collisions or violations of separation; and (iii) mission success (goal reached without violations). This couples path selection quality to execution fidelity.
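As a minimal sketch of how the three evidence families might be reduced to a single per-mission verdict (the record fields, thresholds, and function names here are illustrative assumptions, not part of your platform):

```python
from dataclasses import dataclass

@dataclass
class MissionLog:
    """Hypothetical per-mission evidence record; field names are illustrative."""
    max_path_error_m: float      # worst trajectory-following error vs. the global path
    collisions: int              # number of collision events observed
    separation_violations: int   # frames below the minimum-separation threshold
    goal_reached: bool           # destination reached before timeout

def mission_verdict(log: MissionLog, max_error_m: float = 0.5) -> str:
    """Combine the evidence families into one verdict; safety outcomes dominate."""
    if log.collisions > 0:
        return "FAIL: collision"
    if log.separation_violations > 0:
        return "FAIL: separation violated"
    if not log.goal_reached:
        return "FAIL: goal not reached"
    if log.max_path_error_m > max_error_m:
        return "FAIL: excessive tracking error"
    return "PASS"
```

Ordering the checks so that collisions and separation violations are reported before tracking error keeps the verdict aligned with the safety-first framing above.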
  
At the local planning level, your case study focuses on the planner inside the autonomous software. The planner synthesizes a global and a local path, then evaluates them based on predictions from surrounding actors to select a safe local trajectory for maneuvers such as passing and lane changes. By parameterizing scenarios with variables such as the initial separation to the lead vehicle and the lead vehicle’s speed, you create a grid of concrete cases that stress the evaluator’s thresholds. The outcomes are categorized by meaningful labels—Success, Collision, Distance-to-Collision (DTC) violation, excessive deceleration, long pass without return, and timeout—so that planner tuning correlates directly with safety and comfort.
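The scenario grid can be sketched as a Cartesian product over the two parameters named above; the concrete parameter values here are assumptions for illustration, not the values used in the study:

```python
from itertools import product

# Illustrative sweep values (assumed, not from the case study):
# initial separation to the lead vehicle (m) and lead-vehicle speed (m/s).
SEPARATIONS_M = [10.0, 20.0, 30.0, 40.0]
LEAD_SPEEDS_MPS = [2.0, 4.0, 6.0, 8.0]

# The outcome labels from the text; every simulated run gets exactly one.
OUTCOME_LABELS = {
    "success", "collision", "dtc_violation",
    "excessive_deceleration", "long_pass_no_return", "timeout",
}

def scenario_grid():
    """Cartesian product of the scenario parameters -> concrete test cases."""
    for sep, speed in product(SEPARATIONS_M, LEAD_SPEEDS_MPS):
        yield {"initial_separation_m": sep, "lead_speed_mps": speed}

cases = list(scenario_grid())
```

Aggregating the per-run labels over each grid cell then shows directly which separation/speed combinations push the evaluator past its thresholds.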
  
<figure Trajectory Validation>
{{ :en:safeav:ctrl:trajectory_validation.png?300 |Trajectory Validation}}
<caption>Trajectory validation example</caption>
</figure>

Control validation links perception-induced delays to braking and steering outcomes. Your framework computes Time-to-Collision (TTC) along with the simulator and AV-stack response times to detected obstacles. Sufficient response time allows a safe return to nominal headway; excessive delay predicts collision, sharp braking, or planner oscillations. By logging ground truth, perception outputs, CAN bus commands, and the resulting dynamics, the analysis separates sensing delays from controller latency, revealing where mitigation belongs (planner margins vs. control gains).
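The TTC and response-time bookkeeping can be sketched as follows, assuming the standard constant-velocity TTC form (gap divided by closing speed); the function names and the margin decomposition are illustrative, not your framework's API:

```python
def time_to_collision(gap_m: float, ego_speed_mps: float, lead_speed_mps: float) -> float:
    """Constant-velocity TTC: remaining gap divided by the closing speed.
    Returns +inf when the ego vehicle is not closing on the lead vehicle."""
    closing_mps = ego_speed_mps - lead_speed_mps
    return gap_m / closing_mps if closing_mps > 0 else float("inf")

def response_margin(ttc_s: float, sensing_delay_s: float, control_delay_s: float) -> float:
    """Time left to act once sensing and controller latency are spent.
    A negative margin predicts collision or emergency-level deceleration."""
    return ttc_s - sensing_delay_s - control_delay_s
```

Splitting the delay budget into sensing and control terms mirrors the analysis above: it shows whether mitigation belongs in planner margins (sensing-dominated) or control gains (latency-dominated).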
  
A necessary dependency is localization health. Your tests inject controlled GPS/IMU degradations and dropouts through simulator APIs, then compare expected vs. actual pose per frame to quantify drift. Because planning and control are sensitive to absolute and relative pose, this produces actionable thresholds for safe operation (e.g., maximum tolerated RMS deviation before reducing speed or restricting maneuvers).
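The per-frame comparison and the threshold logic might look like the following sketch; the (x, y)-only deviation and the specific threshold values are assumptions for illustration:

```python
import math

def pose_rms_deviation(expected, actual):
    """Per-frame Euclidean deviation between expected and actual (x, y) poses,
    reduced to one RMS figure for the whole run."""
    sq = [(ex - ax) ** 2 + (ey - ay) ** 2
          for (ex, ey), (ax, ay) in zip(expected, actual)]
    return math.sqrt(sum(sq) / len(sq))

def localization_action(rms_m: float, slow_at_m: float = 0.3, stop_at_m: float = 1.0) -> str:
    """Map drift to an operational restriction; thresholds are illustrative."""
    if rms_m >= stop_at_m:
        return "restrict maneuvers"
    if rms_m >= slow_at_m:
        return "reduce speed"
    return "nominal"
```

In practice the thresholds would be calibrated from the injected-degradation runs themselves, i.e., set at the drift level where planning or control performance first degrades.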
en/safeav/ctrl/vctrl.1761298579.txt.gz · Last modified: 2025/10/24 09:36 by momala
CC Attribution-Share Alike 4.0 International