====== Hands-on guide - Use Cases ======

===== Use-case requirements =====

This chapter defines the minimum requirements for the use cases that will be developed, validated, and comparatively assessed within the SafeAV framework (AV shuttle, F1TENTH, mobile robot, UAV). The goal is to align learning outcomes with technical, safety, and regulatory constraints so that all four platforms can be assessed on a common basis.

===== Use Case #1 AV Shuttle =====

The **TalTech iseAuto AV shuttle** is Estonia’s first self-driving vehicle developed as an academic–industry collaboration led by Tallinn University of Technology. TalTech iseAuto operates as a fully electric vehicle with a top speed of approximately 25 km/h and a capacity of up to eight passengers. It can run for around eight hours on a single charge, making it well-suited for short urban routes and campus loops. The shuttle is equipped with a comprehensive perception system that includes three LiDAR sensors and five cameras, providing 360-degree environmental awareness. Navigation is based on pre-mapped routes, while a remote control room enables teleoperation and system monitoring when necessary. Within TalTech, iseAuto serves as a research and educational platform that bridges theoretical learning and real-world experimentation in autonomous driving. The shuttle integrates with the Autoware open-source software stack for perception, planning, and control, and it supports a digital twin simulation environment that allows testing of algorithms in virtual conditions before deploying them on the physical vehicle. This approach has made iseAuto an essential testbed for validating autonomous vehicle safety, sensor fusion, and human–machine interaction.

The focus is on developing a simulation-based use case that supports education, prototyping, and pre-deployment validation. The simulation environment must satisfy the following requirements:

- **Multi-level simulation**: the environment should support both lightweight, low-fidelity simulation for rapid iteration and high-fidelity, sensor-realistic simulation for perception testing.
- **Scenario testing**: the environment should support OpenSCENARIO-based scenario definitions to allow testing of complex multi-agent interactions.
- **Autoware compatibility**: simulators must interface with the Autoware open-source stack used on the physical shuttle.
- **Evaluation metrics**: tools must enable automated analysis of safety metrics such as collisions, mission completion, traffic rule violations, and behavioral KPIs, suitable for both assessment and comparison (see the sketch after this list).
- **Containerized deployment**: frameworks should be distributed as Docker images for classroom and remote use.
- **Educational accessibility**: the setup must be usable by students without specialized hardware or prior simulator experience.
- **Iterative development**: the workflow should allow algorithms to be tested repeatedly in simulation before deployment on the vehicle.
- **Sustainability**: the toolchain should rely on actively maintained open-source components.

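To make the evaluation-metrics requirement concrete, the sketch below shows an automated post-run safety check. It is a minimal example, assuming a hypothetical CSV event log (columns ''time'', ''event'', ''value'') exported from a scenario run; the file name, column names, and event labels are illustrative and not part of any framework listed here.

<code python>
import csv
from collections import Counter

def evaluate_run(log_path: str) -> dict:
    """Aggregate safety KPIs from a simulation event log (hypothetical CSV format)."""
    events = Counter()
    mission_completed = False
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            events[row["event"]] += 1
            if row["event"] == "goal_reached":
                mission_completed = True
    return {
        "collisions": events["collision"],
        "traffic_rule_violations": events["rule_violation"],
        "mission_completed": mission_completed,
        # A run passes only if it completes the mission without any collision.
        "passed": mission_completed and events["collision"] == 0,
    }

if __name__ == "__main__":
    print(evaluate_run("scenario_run.csv"))  # illustrative log file name
</code>

The same pass/fail logic could be attached to Scenario Simulator v2 or CARLA runs so that student submissions are graded automatically.
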
The AV Shuttle use case requires a flexible and scalable V&V setup that supports both low- and high-fidelity simulations, so that perception, planning, and control functions can be validated in virtual conditions before deployment on the physical vehicle.

===== Use Case #2 F1TENTH =====

The F1TENTH platform is an open-source, 1/10-scale autonomous vehicle platform widely used for education and research in autonomous driving.

The platform’s sensor outputs include LiDAR scans, RGB or RGB-D camera feeds, and IMU data, which can be noisy and unreliable due to the small size of the vehicle. Additional outputs, such as pose and velocity estimates, may be derived using external localization methods. Control inputs consist of speed commands, either normalized or expressed in meters per second, and steering angle values, typically represented as PWM signals that can be translated into degrees. These interfaces ensure smooth integration between the physical and simulated components, supporting real-time testing, algorithm verification, and validation of autonomous driving functions.

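The command interface described above can be exercised with a small ROS2 node. The sketch below converts an illustrative steering PWM value into a steering angle in radians and publishes speed commands in meters per second. The ''/drive'' topic, the use of ''ackermann_msgs'', and the PWM calibration constants are assumptions typical of F1TENTH setups, not values taken from this document.

<code python>
import math
import rclpy
from rclpy.node import Node
from ackermann_msgs.msg import AckermannDriveStamped  # common on F1TENTH; assumed here

# Illustrative calibration: 1500 us is straight, +/-500 us spans +/-30 degrees.
PWM_CENTER, PWM_RANGE, STEER_MAX_DEG = 1500, 500, 30.0

def pwm_to_steering_rad(pwm: int) -> float:
    """Translate a steering PWM value into a steering angle in radians."""
    deg = (pwm - PWM_CENTER) / PWM_RANGE * STEER_MAX_DEG
    return math.radians(max(-STEER_MAX_DEG, min(STEER_MAX_DEG, deg)))

class DrivePublisher(Node):
    def __init__(self):
        super().__init__("drive_publisher")
        self.pub = self.create_publisher(AckermannDriveStamped, "/drive", 10)
        self.timer = self.create_timer(0.05, self.tick)  # 20 Hz command rate

    def tick(self):
        msg = AckermannDriveStamped()
        msg.drive.speed = 1.5                            # speed in m/s
        msg.drive.steering_angle = pwm_to_steering_rad(1600)
        self.pub.publish(msg)

def main():
    rclpy.init()
    rclpy.spin(DrivePublisher())

if __name__ == "__main__":
    main()
</code>
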
The F1TENTH use case defines simulation requirements that ensure accessible and reproducible environments for validating autonomous driving algorithms within academic and research settings.

- The simulation environment must support lightweight, quick-to-set-up configurations suitable for classroom use.
- It should utilize kinematic single-track or Ackermann steering models to balance computational efficiency with sufficient accuracy for low-speed vehicles (a minimal model sketch follows this list).
- The framework must enable reproducible validation of perception, planning, and control algorithms in both 2D and 3D environments.
- ROS2 compatibility is required, with optional integration of simplified Autoware vehicle models for advanced testing.
- Simulations should run smoothly on standard laptops without the need for high-end GPUs, ensuring broad accessibility for students.
- The overall design must support iterative experimentation, allowing algorithms to be refined over repeated simulation runs.

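As referenced in the list above, the kinematic single-track (bicycle) model is simple enough to implement directly. The sketch below advances the model by one time step; the 0.33 m wheelbase is an illustrative value for a 1/10-scale car, not a specification from this document.

<code python>
import math
from dataclasses import dataclass

@dataclass
class State:
    x: float = 0.0    # position (m)
    y: float = 0.0
    yaw: float = 0.0  # heading (rad)

WHEELBASE = 0.33  # m; illustrative value for a 1/10-scale car

def step(s: State, speed: float, steer: float, dt: float) -> State:
    """Advance the kinematic single-track (bicycle) model by one time step.

    speed: longitudinal velocity (m/s); steer: steering angle (rad).
    """
    return State(
        x=s.x + speed * math.cos(s.yaw) * dt,
        y=s.y + speed * math.sin(s.yaw) * dt,
        yaw=s.yaw + speed / WHEELBASE * math.tan(steer) * dt,
    )

# Example: drive a constant left turn for one second at 50 Hz.
s = State()
for _ in range(50):
    s = step(s, speed=2.0, steer=0.2, dt=0.02)
print(f"x={s.x:.2f} m, y={s.y:.2f} m, yaw={s.yaw:.2f} rad")
</code>

Because the model is purely kinematic, it runs at high rates on a standard laptop, which matches the accessibility requirement above.
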
The F1TENTH use case focuses on providing an open, reproducible, and lightweight V&V environment that makes autonomous driving experimentation accessible in education and research.

===== Use Case #3 Mobile Robot =====

The Mobile Robot (RTU) use case focuses on cooperative indoor logistics systems designed to demonstrate autonomous navigation, coordination, and communication between robots in realistic indoor environments.



The V&V requirements for the Mobile Robot use case focus on ensuring safe, reliable, and verifiable operation of cooperative indoor logistics robots within both simulated and physical environments. The framework is designed to validate system behavior across all levels, from communication and control to navigation and planning.

- The verification and validation framework must support systematic testing across design, simulation, and real-world operation, ensuring traceability from requirements to test results.
- It should enable component-level and interface testing of ROS2 nodes, MQTT communication, and sensor data streams.
- The setup must allow hardware-in-the-loop (HIL) and simulation-based validation, confirming that control, navigation, and planning functions behave safely under realistic indoor conditions.
- Runtime monitoring and safety shields should be implemented to detect failures, prevent unsafe actions, and support continuous verification in CI/CD workflows (a monitoring sketch follows this list).
- The framework must assess system performance under communication losses or operator intervention, verifying that the robots degrade gracefully and recover safely.
- All tools and procedures should remain open-source and reproducible so that the setup can be studied and extended in an educational context.

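To illustrate the runtime-monitoring requirement, the sketch below implements a minimal external safety monitor with ''paho-mqtt'' against an Eclipse Mosquitto broker. The ''robot/velocity'' topic, the JSON payload layout, and the 1.0 m/s speed limit are assumptions for illustration only.

<code python>
import json
import paho.mqtt.client as mqtt

BROKER, TOPIC, SPEED_LIMIT = "localhost", "robot/velocity", 1.0  # assumed setup

def on_connect(client, userdata, flags, rc):
    # Subscribe once the connection to the broker is established.
    client.subscribe(TOPIC)

def on_message(client, userdata, msg):
    data = json.loads(msg.payload)  # expects e.g. {"robot": "r1", "speed": 0.4}
    if data["speed"] > SPEED_LIMIT:
        # A real safety shield would publish a stop command here.
        print(f"VIOLATION: {data['robot']} at {data['speed']:.2f} m/s")

# paho-mqtt 1.x style; 2.x additionally requires a CallbackAPIVersion argument.
client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 1883)
client.loop_forever()
</code>
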
The defined V&V requirements establish a comprehensive validation chain that connects design, simulation, and real-world testing. They emphasize fault tolerance, runtime monitoring, and reproducibility using open-source ROS2 and MQTT-based architectures. This ensures that students can study and experiment with advanced verification techniques while developing safe and resilient autonomous robotic systems.

===== Use Case #4 Drone =====

The Drone (SUT, PRO) use case focuses on unmanned aerial vehicle (UAV) systems that bridge aviation safety principles with autonomous mobility education. Developed through Prodron’s extensive experience in UAV training and system design, this use case explores real-world challenges such as emergency response, navigation under uncertainty, and safe mission execution.



The use case combines commercially supported UAV systems for rapid onboarding with open-source ecosystems for advanced, research-driven experimentation. This structure enables both immediate applicability in training contexts and long-term scalability for integration into academic courses, laboratories, and research projects.

===== Conclusions and Decisions =====

In conclusion, the consortium has evaluated the available verification and validation frameworks based on current research, technical feasibility, and educational suitability. The decisions for each use case are summarized in the table below.

^ Use case No ^ Name ^ Description ^ Selected Frameworks and Software Set ^ Responsible partner ^
| #1 | AV shuttle | Simulation-based V&V use case for an autonomous shuttle used in education, prototyping, and pre-deployment validation. | Autoware.Universe (software under test), \\ AWSIM (digital twin, sensor simulation, Autoware integration), \\ CARLA (high-fidelity perception & planning validation, synthetic data), \\ Rosbag Replay (real-world perception regression), \\ Scenario Simulator v2 (scenario-based planning validation), \\ CommonRoad (planning benchmarking), \\ SUMO (traffic co-simulation), \\ Scenic (scenario generator for CARLA) | TalTech |
| #2 | F1Tenth | The F1TENTH use case demonstrates the application of open-source V&V frameworks for validating autonomous driving functions on a 1/10-scale research platform. Verification focuses on trajectory tracking, ensuring that the vehicle follows safe and stable paths without collisions or oscillations when encountering static or dynamic obstacles. In addition, the setup supports perception-level validation, including object and traffic sign detection and the persistence of observations across consecutive frames. | Rosbag Replay (real-world perception), \\ F1TENTH Gym (simple 2D simulation), \\ ROSMonitoring (runtime verification), \\ RoboFuzz (testing with noisy inputs) | CTU |
| #3 | Mobile robot | The V&V framework for the cooperative indoor logistics robot system ensures that V&V are systematically applied across design, simulation, and real-world operation. Verification focuses on requirement traceability, component and interface testing, and runtime monitoring. | Nav2 and MoveIt 2 (navigation and motion planning), \\ Gazebo (simulation), \\ Eclipse Mosquitto MQTT broker (message exchange), \\ OpenCV and PCL (vision and LiDAR processing), \\ ROS2 testing tools | RTU |
| #4 | Drone | Simulation-based verification and validation (V&V) for unmanned aerial vehicles (UAVs) is essential before executing real-world missions to ensure software correctness, mission safety, and reliable system behavior. | AirSim (photorealistic simulation), \\ Gazebo (open-source simulator), \\ ArduPilot, QGroundControl (primary framework), \\ Matlab/Simulink (UAV modeling and simulation), \\ CARLA (Unreal Engine based simulator) | SUT, PRO |

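Because the drone use case builds on ArduPilot, pre-mission checks can first be run against a software-in-the-loop (SITL) instance. The sketch below is a minimal arming smoke test using ''pymavlink''; the UDP connection string assumes a default local SITL setup, and the test itself is an illustrative example rather than a prescribed procedure.

<code python>
from pymavlink import mavutil

# Connect to an ArduPilot SITL instance (default local UDP output; assumed setup).
master = mavutil.mavlink_connection("udp:127.0.0.1:14550")
master.wait_heartbeat()
print(f"Heartbeat from system {master.target_system}")

# Request arming via the standard MAVLink command.
master.mav.command_long_send(
    master.target_system, master.target_component,
    mavutil.mavlink.MAV_CMD_COMPONENT_ARM_DISARM,
    0,                    # confirmation
    1, 0, 0, 0, 0, 0, 0,  # param1 = 1 -> arm
)
ack = master.recv_match(type="COMMAND_ACK", blocking=True, timeout=5)
print("Arming accepted" if ack and ack.result == 0 else f"Arming rejected: {ack}")
</code>
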
===== Key Findings and Recommendations =====

ROS-based frameworks thus form a critical part of the **SafeAV educational toolchain**, providing a common interface layer across all four use cases.

The selected frameworks fulfill higher-education requirements:

- **Accessibility:** open-source tools without proprietary licenses or specialized hardware requirements.
- **Ease of setup:** Docker-based deployment for classroom or remote use.
- **Pedagogical link:** supports blended learning (MOOCs + hands-on labs).
- **Interdisciplinary use:** applicable in robotics, AI, safety engineering, and systems validation courses.

These outcomes directly contribute to WP3 educational digital content and WP2 modular curriculum design.

1. **Autoware.Universe** should be the baseline platform for SafeAV framework adaptation.
2. **AWSIM/CARLA** provide the digital twin and high-fidelity simulation environments.
3. **ROS-based verification tools** enable node-level and formal validation, aligning with ISO 26262 / SOTIF principles (a node-level test sketch follows this list).
4. Partner use cases ensure coverage of ground, aerial, and hybrid autonomous systems for educational demonstration.

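As referenced in recommendation 3, node-level validation can be scripted with standard Python test tooling. The sketch below uses ''pytest'' and ''rclpy'' to assert that a system under test publishes on a topic; the ''/heartbeat'' topic and ''std_msgs/String'' message type are illustrative assumptions.

<code python>
import pytest
import rclpy
from std_msgs.msg import String

@pytest.fixture
def node():
    rclpy.init()
    n = rclpy.create_node("test_listener")
    yield n
    n.destroy_node()
    rclpy.shutdown()

def test_heartbeat_is_published(node):
    """Node-level check: the system under test must publish on /heartbeat."""
    received = []
    node.create_subscription(String, "/heartbeat", lambda m: received.append(m), 10)
    # Spin for up to two seconds while waiting for a message.
    deadline = node.get_clock().now().nanoseconds + 2_000_000_000
    while not received and node.get_clock().now().nanoseconds < deadline:
        rclpy.spin_once(node, timeout_sec=0.1)
    assert received, "no heartbeat received within 2 s"
</code>
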
===== Next Steps → T4.2 Adaptation =====

- Containerize selected frameworks for student deployment.
- Develop a hands-on educational guide linking WP3 digital content to WP4 V&V examples.
- Integrate simulation exercises into the SafeAV MOOC platform.
- Define data interfaces (ROS bag, OpenSCENARIO) for cross-use of materials (a bag-reading sketch follows this list).
- Establish KPIs for student learning outcomes (practical validation success, reproducibility, and engagement).
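
As referenced in the list above, a shared ROS bag interface lets recorded data be reused across use cases. The sketch below iterates over a recorded bag with ''rosbag2_py'' and counts messages per topic; the bag name and storage settings are illustrative assumptions.

<code python>
from collections import Counter
import rosbag2_py

def count_messages(bag_path: str) -> Counter:
    """Count messages per topic in a ROS 2 bag (sqlite3 storage, CDR serialization)."""
    reader = rosbag2_py.SequentialReader()
    reader.open(
        rosbag2_py.StorageOptions(uri=bag_path, storage_id="sqlite3"),
        rosbag2_py.ConverterOptions(
            input_serialization_format="cdr",
            output_serialization_format="cdr",
        ),
    )
    counts = Counter()
    while reader.has_next():
        topic, _data, _timestamp = reader.read_next()
        counts[topic] += 1
    return counts

if __name__ == "__main__":
    for topic, n in count_messages("shuttle_run").items():  # illustrative bag name
        print(f"{topic}: {n}")
</code>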