====== SafeAV - Hands-on guide ======

===== Use-case requirements =====

This chapter defines the minimum requirements for the use cases that will be developed, validated, and comparatively assessed within the SafeAV framework (AV shuttle, F1TENTH, mobile robot, UAV). The goal is to align learning outcomes with technical, safety, and regulatory constraints, and to ensure smooth integration with the selected toolchain (e.g., Autoware/ROS2, simulators, and V&V tools). Requirements cover system boundaries and assumptions, environment and scenario descriptions, data flows, performance and safety targets, acceptance criteria, and end-to-end traceability to course outcomes and WP-level KPIs. We explicitly emphasize compliance with relevant standards and regulations (e.g., UNECE and, where applicable, EASA), educational reusability (SITL/HITL), and reproducibility: each use case must ship with standardized scenarios, test scripts, and evaluation report templates. The use cases are selected to cover a wide range of AVs in both the ground and air domains.

===== Use Case #1 AV Shuttle =====

The **TalTech iseAuto AV shuttle** is Estonia’s first self-driving vehicle, developed as an academic–industry collaboration led by Tallinn University of Technology. TalTech iseAuto is a fully electric vehicle with a top speed of approximately 25 km/h and a capacity of up to eight passengers. It can run for around eight hours on a single charge, making it well-suited for short urban routes and campus loops. The shuttle is equipped with a comprehensive perception system that includes three LiDAR sensors and five cameras, providing 360-degree environmental awareness. Navigation is based on pre-mapped routes, while a remote control room enables teleoperation and system monitoring when necessary. Within TalTech, iseAuto serves as a research and educational platform that bridges theoretical learning and real-world experimentation in autonomous driving. The shuttle integrates with the Autoware open-source software stack for perception, planning, and control, and it supports a digital-twin simulation environment that allows algorithms to be tested in virtual conditions before deployment on the physical vehicle. This approach has made iseAuto an essential testbed for validating autonomous vehicle safety, sensor fusion, and human–machine interaction.

The focus is on developing a simulation-based use case that supports education, prototyping, and pre-deployment testing. The requirements reflect both academic and technical needs, guiding the selection of suitable V&V tools and simulation environments.

- **Multi-level simulation**: The V&V setup must support both low-fidelity and high-fidelity 3D simulation tools, enabling validation across different abstraction levels.
- **Scenario testing**: The environment should support OpenSCENARIO-based scenario definitions to allow testing of complex multi-agent interactions, traffic behaviors, and reproducible validation sequences.
- **Autoware compatibility**: All V&V tools must integrate smoothly with Autoware.Universe, allowing validation of core modules such as localization, planning, and control within the same architecture.
- **Evaluation metrics**: Tools must enable automated analysis of safety metrics such as collisions, mission completion, traffic rule violations, and behavioral KPIs, suitable for both assessment and comparison.
- **Containerized deployment**: All software should support Docker-based environments for consistent, platform-independent deployment across student and research systems.
- **Educational accessibility**: Tools must be open-source, Linux-compatible, and well-documented, with a gentle learning curve suitable for university-level instruction in robotics and autonomy.
- **Iterative development**: The system should allow quick modification and testing of nodes or logic, supporting hands-on experimentation and frequent updates in a classroom setting.
- **Sustainability**: Tools and content must be maintainable post-project, avoiding dependencies on commercial services or non-portable infrastructure to ensure long-term use.
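
The scenario-testing requirement above can be made concrete with a trimmed OpenSCENARIO skeleton. This is a hand-written sketch rather than a schema-complete file: the map file name, catalog names, and entity names are placeholders, and a real scenario would also populate the Init actions, a Story describing the maneuver, and a StopTrigger.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<OpenSCENARIO>
  <FileHeader revMajor="1" revMinor="2" date="2025-01-01T00:00:00"
              description="Shuttle cut-in scenario (placeholder)" author="SafeAV"/>
  <RoadNetwork>
    <!-- OpenDRIVE map of the test route (placeholder file name) -->
    <LogicFile filepath="campus_loop.xodr"/>
  </RoadNetwork>
  <Entities>
    <ScenarioObject name="ego">
      <CatalogReference catalogName="VehicleCatalog" entryName="shuttle"/>
    </ScenarioObject>
    <ScenarioObject name="cut_in_vehicle">
      <CatalogReference catalogName="VehicleCatalog" entryName="car"/>
    </ScenarioObject>
  </Entities>
  <Storyboard>
    <Init>
      <!-- initial positions and speeds of ego and cut_in_vehicle go here -->
    </Init>
    <!-- Story with the cut-in maneuver and a StopTrigger go here -->
  </Storyboard>
</OpenSCENARIO>
```

Because the format is tool-neutral, the same scenario file can in principle be replayed in Scenario Simulator v2 or CARLA-based pipelines, which is what makes the validation sequences reproducible.
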

The AV Shuttle use case requires a flexible and scalable V&V setup that supports both low- and high-fidelity simulations, OpenSCENARIO-based scenario testing, and full integration with the Autoware.Universe stack. The environment must enable automated safety and performance evaluation, containerized deployment, and open-source accessibility suitable for higher education. It emphasizes iterative, hands-on experimentation and long-term sustainability without reliance on commercial tools.

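The automated safety and performance evaluation described above can be sketched as a small post-processing step over per-run event logs. The log format here (a list of records with a `type` field) is a hypothetical structure chosen for illustration; a real pipeline would parse simulator or rosbag output instead.

```python
# Minimal sketch of automated safety-metric aggregation for simulation runs.
# The event-log structure (a list of dicts with a "type" field) is an
# illustrative assumption, not a format defined by Autoware or the simulators.

def evaluate_run(events, goal_reached):
    """Aggregate one run's event log into comparable safety KPIs."""
    collisions = sum(1 for e in events if e["type"] == "collision")
    violations = sum(1 for e in events if e["type"] == "traffic_rule_violation")
    return {
        "collisions": collisions,
        "traffic_rule_violations": violations,
        "mission_completed": goal_reached and collisions == 0,
    }

def compare_runs(runs):
    """Order runs for comparison: completed missions first, then fewest events."""
    return sorted(
        runs,
        key=lambda r: (not r["mission_completed"],
                       r["collisions"],
                       r["traffic_rule_violations"]),
    )
```

Keeping the metrics as plain dictionaries makes it easy to dump them into the standardized evaluation report templates required for each use case.
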
===== Use Case #2 F1TENTH =====

The F1TENTH platform is an open-source, small-scale autonomous racing car designed for research and education in autonomous systems. Built on a 1/10-scale RC chassis, it integrates sensors such as LiDAR, camera, and IMU, all running on a ROS2-based control stack. Its modular architecture allows experiments in perception, planning, and control, while remaining low-cost and portable for classroom and laboratory use. The platform is supported by an active international community, offering simulation environments, datasets, and open course materials well suited to hands-on learning and benchmarking in robotics and self-driving research. In academia, it serves as a standardized benchmark for teaching autonomous driving algorithms, allowing students to bridge theory and practice through competitions, lab assignments, and project-based learning. Universities use F1TENTH to demonstrate safety validation, sensor fusion, and real-time decision-making concepts within a manageable and reproducible framework, making it an ideal entry point for higher education in robotics and autonomous vehicle research.

The interface requirements for the F1TENTH use case define how the simulation and control systems communicate within the ROS2 environment (ROS2 Humble, with possible but nontrivial porting to Jazzy). The platform’s sensor outputs include LiDAR scans, RGB or RGB-D camera feeds, and IMU data, which can be noisy and unreliable due to the small size of the vehicle. Additional outputs, such as pose and velocity estimates, may be derived using external localization methods. Control inputs consist of speed commands (either normalized or expressed in meters per second) and steering angle values, typically represented as PWM signals that can be translated into degrees. These interfaces ensure smooth integration between the physical and simulated components, supporting real-time testing, algorithm verification, and reproducibility in educational and research contexts.

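The command conversions described above can be sketched in a few lines. The PWM endpoints and the maximum steering and speed values below are hypothetical calibration constants chosen for illustration, not official F1TENTH parameters; a real vehicle would load them from its calibration file.

```python
# Sketch of the F1TENTH command-interface conversions. The PWM endpoints and
# the maximum steering/speed values are hypothetical calibration constants,
# not official F1TENTH parameters.

PWM_MIN, PWM_CENTER, PWM_MAX = 1000, 1500, 2000  # pulse width in microseconds (assumed)
MAX_STEER_DEG = 24.0                             # steering limit in degrees (assumed)
MAX_SPEED_MPS = 5.0                              # speed cap for lab use (assumed)

def pwm_to_steering_deg(pwm):
    """Map a servo PWM pulse width to a steering angle in degrees."""
    pwm = min(max(pwm, PWM_MIN), PWM_MAX)  # clamp to the valid pulse range
    return (pwm - PWM_CENTER) / (PWM_MAX - PWM_CENTER) * MAX_STEER_DEG

def normalized_to_mps(cmd):
    """Map a normalized speed command in [-1, 1] to meters per second."""
    return min(max(cmd, -1.0), 1.0) * MAX_SPEED_MPS
```
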
The F1TENTH use case defines simulation requirements that ensure accessible and reproducible environments for validating autonomous driving algorithms within academic and research settings.

- The simulation environment must support lightweight, scalable, and educationally accessible setups suitable for university use.
- It should use kinematic single-track or Ackermann steering models to balance computational efficiency with sufficient accuracy for low-speed vehicles.
- The framework must enable reproducible validation of perception, planning, and control algorithms in both 2D and 3D environments.
- ROS2 compatibility is required, with optional integration with simplified Autoware vehicle models for advanced testing.
- Simulations should run smoothly on standard laptops without the need for high-end GPUs, ensuring broad accessibility for students.
- The overall design must support iterative experimentation, open-source deployment, and hands-on learning in line with SafeAV’s educational objectives.

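The kinematic single-track ("bicycle") model named in the requirements above is compact enough to sketch directly; this is why it runs comfortably on a standard laptop. The wheelbase value below is an assumed figure for a 1/10-scale car, not a measured F1TENTH parameter.

```python
import math

# Minimal kinematic single-track (bicycle) model with explicit-Euler
# integration. WHEELBASE is an assumed value for a 1/10-scale car.

WHEELBASE = 0.33  # meters (assumed)

def step(state, speed, steer_rad, dt):
    """Advance the pose (x, y, heading) by one Euler step."""
    x, y, theta = state
    x += speed * math.cos(theta) * dt
    y += speed * math.sin(theta) * dt
    theta += (speed / WHEELBASE) * math.tan(steer_rad) * dt
    return (x, y, theta)
```

Driving straight at 1 m/s for one simulated second advances the pose by roughly one meter along x, which gives students an easy sanity check before layering planners on top.
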
The F1TENTH use case focuses on providing an open, reproducible, and educational platform for validating perception, planning, and control algorithms in small-scale autonomous vehicles. Its simulation and interface requirements emphasize lightweight, ROS2-compatible environments that run efficiently on standard hardware, enabling hands-on learning, iterative testing, and scalable experimentation in autonomous systems education.

===== Use Case #3 Mobile Robot =====

The Mobile Robot (RTU) use case focuses on cooperative indoor logistics systems designed to demonstrate autonomous navigation, coordination, and task management in controlled environments. The setup consists of two mobile robot platforms, a central server for planning and task distribution, and MQTT-based communication for asynchronous message exchange. Each robot operates under a ROS2-based control architecture integrating LiDAR, camera, and deep learning–based segmentation for enhanced mapping and path planning. Within the academic context, this use case provides a practical environment for teaching multi-robot coordination, communication reliability, and safety validation, bridging theoretical concepts in robotics and AI with real-world industrial applications.

(Figure: RTU mobile robot platform)

The V&V requirements for the Mobile Robot (RTU) use case focus on ensuring safe, reliable, and verifiable operation of cooperative indoor logistics robots within both simulated and physical environments. The framework is designed to validate system behavior across all levels—communication, perception, navigation, and task management—while supporting educational objectives through accessible, open-source tools. These requirements align with SafeAV’s broader goal of integrating real-world industrial practices into higher-education robotics training.

- The verification and validation framework must support systematic testing across design, simulation, and real-world operation, ensuring traceability from requirements to test results.
- It should enable component-level and interface testing of ROS2 nodes, MQTT communication, sensor data pipelines, and deep learning–based segmentation modules.
- The setup must allow hardware-in-the-loop (HIL) and simulation-based validation, confirming that control, navigation, and planning functions behave safely under realistic indoor conditions.
- Runtime monitoring and safety shields should be implemented to detect failures, prevent unsafe actions, and support continuous verification in CI/CD workflows.
- The framework must assess system performance under communication losses or operator intervention, ensuring resilience and fault tolerance in distributed multi-robot environments.
- All tools and procedures should remain open-source, modular, and reproducible, allowing students to perform iterative experiments, analyze safety properties, and understand industrial V&V practices within an educational context.

The defined V&V requirements establish a comprehensive validation chain that connects design, simulation, and real-world testing. They emphasize fault tolerance, runtime monitoring, and reproducibility using open-source ROS2 and MQTT-based architectures. This ensures that students can study and experiment with advanced verification techniques while developing safe and resilient autonomous robotic systems.

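The safety-shield and communication-loss requirements above can be sketched as a small command filter: velocity commands are clamped to a safe limit and forced to zero when heartbeats stop arriving. The limit and timeout values are illustrative assumptions, not parameters of the RTU setup; in practice the filter would sit between the planner and the motor driver node.

```python
import time

# Sketch of a runtime safety shield: clamps velocity commands and stops the
# robot on communication loss. The speed limit and heartbeat timeout are
# illustrative assumptions, not values from the RTU system.

MAX_LINEAR_MPS = 0.5       # indoor speed limit (assumed)
HEARTBEAT_TIMEOUT_S = 0.5  # silence tolerated before stopping (assumed)

class SafetyShield:
    def __init__(self, now=time.monotonic):
        self._now = now                 # injectable clock, eases unit testing
        self._last_heartbeat = now()

    def heartbeat(self):
        """Record that a valid ROS2/MQTT heartbeat arrived."""
        self._last_heartbeat = self._now()

    def filter(self, linear_cmd):
        """Return a safe velocity: clamped, or zero on communication loss."""
        if self._now() - self._last_heartbeat > HEARTBEAT_TIMEOUT_S:
            return 0.0  # link considered lost: command a stop
        return max(-MAX_LINEAR_MPS, min(MAX_LINEAR_MPS, linear_cmd))
```

The injectable clock is a deliberate design choice: it lets the same shield be exercised deterministically in CI/CD tests, matching the continuous-verification requirement.
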
===== Use Case #4 Drone =====

The Drone (SUT, PRO) use case focuses on unmanned aerial vehicle (UAV) systems that bridge aviation safety principles with autonomous mobility education. Developed through Prodron’s extensive experience in UAV training and system design, this use case explores real-world challenges such as emergency response, navigation under uncertainty, sensor fusion, and communication reliability. The drones operate using open-source frameworks like ArduPilot and QGroundControl, supporting both software-in-the-loop (SIL) and hardware-in-the-loop (HIL) validation. In the academic context, this use case provides a versatile platform for teaching autonomous flight control, safety verification, and resilience testing, allowing students to apply V&V methodologies from aerial robotics to broader autonomous vehicle domains.

The UAV use-case requirements focus on defining the essential verification and validation conditions for safe and reliable autonomous flight operations. They emphasize emergency response handling, navigation under uncertainty, sensor integration, and communication robustness, including lessons from MAVLink telemetry that can directly apply to V2X communication challenges in automotive systems. These requirements ensure realistic, reproducible, and educationally relevant validation of UAV systems.

- The framework must enable **emergency response validation**, including fault injection, communication loss recovery, and system redundancy testing.
- It should support **navigation under uncertainty**, allowing for dynamic obstacle avoidance, no-fly zone compliance, and route optimization under changing conditions.
- The simulation environment must facilitate **sensor integration testing**, including GNSS, IMU, and vision-based systems, with modeling of environmental factors such as wind or weather effects.
- **Communication reliability and latency simulation** should be included to evaluate telemetry, interference resilience, and cybersecurity aspects of UAV operations.
- The setup must support **SIL and HIL configurations**, enabling both pure software and mixed real-hardware validation of control and perception modules.
- Tools and models should support, where reasonable, a **dual-tier approach**, combining **COTS solutions** for rapid onboarding with **open-source frameworks** for advanced experimentation.

The UAV V&V requirements ensure comprehensive testing of autonomous flight systems under realistic and variable conditions, supporting both educational and research-oriented objectives. By integrating open-source simulation, communication reliability testing, and hardware-in-the-loop validation, they provide a robust foundation for safety assurance and hands-on learning. The SafeAV study recommends a dual-tier approach for UAV simulation and validation—combining commercial off-the-shelf (COTS) systems for rapid onboarding with open-source ecosystems for advanced, research-driven experimentation. This structure enables both immediate applicability in training contexts and long-term scalability for integration into academic courses, laboratories, and testbeds.

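The communication-loss recovery behavior targeted by the emergency-response requirement can be sketched as a tiny failsafe state machine: a tolerated grace period, then an automatic return-to-launch. The state names and the 3-second grace period are illustrative assumptions, not ArduPilot failsafe settings; in SIL testing this logic would be driven by injected link-loss faults.

```python
# Sketch of link-loss failsafe logic for SIL fault-injection tests.
# State names and the grace period are illustrative assumptions, not
# ArduPilot parameters.

LINK_LOSS_GRACE_S = 3.0  # tolerated link outage before return-to-launch (assumed)

class FailsafeMonitor:
    def __init__(self):
        self.mode = "MISSION"
        self._link_lost_since = None

    def update(self, t, link_ok):
        """Advance the failsafe logic; t is mission time in seconds."""
        if link_ok:
            self._link_lost_since = None
            if self.mode != "RTL":       # once in RTL, stay there for safety
                self.mode = "MISSION"
        elif self._link_lost_since is None:
            self._link_lost_since = t    # outage begins: start grace period
        elif t - self._link_lost_since >= LINK_LOSS_GRACE_S:
            self.mode = "RTL"            # outage exceeded grace period
        return self.mode
```

Latching into RTL even after the link recovers mirrors the conservative behavior that emergency-response validation is meant to confirm: a validated system should not bounce back into mission mode on a flaky link.
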
===== Conclusions and Decisions =====

In conclusion, the consortium has evaluated the available verification and validation frameworks based on current research, technical feasibility, and educational applicability. The resulting decisions reflect a balanced consideration of open-source maturity, interoperability, and relevance to the specific SafeAV use cases. The selected frameworks demonstrate strong community support, active ongoing development, and proven suitability for academic integration. Their open-source nature ensures transparency, adaptability, and long-term sustainability, while their functionality aligns closely with the technical and pedagogical goals defined for each use case.

^ Use case No ^ Name ^ Description ^ Selected Frameworks and Software Set ^ Responsible partner ^
| #1 | AV shuttle | Simulation-based V&V use case for an autonomous shuttle used in education, prototyping, and pre-deployment testing. Requirements include multi-level simulation, scenario testing with OpenSCENARIO, Autoware.Universe integration, automated safety metrics, containerized deployment, and educational accessibility. Both planning and perception modules are targeted for validation. | Autoware.Universe (software under test), AWSIM (digital twin, sensor simulation, Autoware integration), CARLA (high-fidelity perception and planning validation, synthetic data), Rosbag Replay (real-world perception regression), Scenario Simulator v2 (scenario-based planning validation), CommonRoad (planning benchmarking), SUMO (traffic co-simulation), Scenic (scenario generator for CARLA) | TalTech |
| #2 | F1TENTH | The F1TENTH use case demonstrates the application of open-source V&V frameworks for validating autonomous driving functions on a 1/10-scale research platform. Verification focuses on trajectory tracking, ensuring that the vehicle follows safe and stable paths without collisions or oscillations when encountering static or dynamic obstacles. In addition, the setup supports perception-level validation, including object and traffic sign detection, persistence of observations, and accuracy of estimated object velocities. Sensor data are also subject to pre-processing validation, guaranteeing their reliability for mapping, localization, and higher-level decision-making. This use case highlights how compact, reproducible, ROS2-based V&V environments can give students and researchers an accessible platform for hands-on testing of autonomous vehicle algorithms and safety assurance methods. | Rosbag Replay (real-world perception), F1TENTH Gym (simple 2D simulation), ROSMonitoring (runtime verification), RoboFuzz (testing with noisy inputs) | CTU |
| #3 | Mobile robot | The V&V framework for the cooperative indoor logistics robot system ensures that V&V is systematically applied across design, simulation, and real-world operation. Verification focuses on requirement traceability, component and interface testing (ROS2 nodes, MQTT messaging, sensors, DL segmentation, and planners), and formal checks for safety and deadlock freedom. Validation uses simulation, hardware-in-the-loop, and on-site trials to confirm that the system performs safely and efficiently under realistic conditions, including communication losses and operator overrides. Runtime monitoring, safety shields, and continuous testing in the CI/CD pipeline maintain ongoing assurance, ensuring reliable autonomous operation and operator control in dynamic indoor logistics environments. | Nav2 and MoveIt 2 (navigation and motion planning), Gazebo/Ignition (simulation and testing), Eclipse Mosquitto (MQTT broker for message exchange), OpenCV and PCL (vision and LiDAR processing), ROS2 testing tools | RTU |
| #4 | Drone | Simulation-based verification and validation (V&V) for unmanned aerial vehicles (UAVs) is essential before executing real-world missions to ensure software correctness, safety, and performance. The framework must simulate the drone’s plant model, environmental perception, and control system with sufficient fidelity while maintaining real-time execution capability. Both software-in-the-loop (SITL) and hardware-in-the-loop (HIL) configurations are required to validate autonomous behaviors such as navigation in complex environments, object detection, and precise landing. Modern 3D simulation engines, such as the Unreal Engine used in CARLA, now enable highly realistic testing conditions (some even experimentally extended to aerial vehicles), providing a robust basis for safe, repeatable, and educational UAV validation. | AirSim (real-world perception), Gazebo (open-source simulator), ArduPilot and QGroundControl (primary framework), MATLAB/Simulink UAV Toolbox (real-time simulation and deployment for UAV), CARLA (Unreal Engine–based simulator) | SUT, PRO |

===== Key Findings and Recommendations =====

ROS-based frameworks form a critical part of the **SafeAV educational toolchain**, ensuring scalability from lightweight student projects to advanced V&V experiments in research and industrial contexts.

The selected frameworks fulfill higher-education requirements:

- **Accessibility:** open-source and licence-free use.
- **Ease of setup:** Docker-based deployment for classroom or remote use.
- **Pedagogical link:** supports blended learning (MOOCs + hands-on labs).
- **Interdisciplinary use:** applicable in robotics, AI, safety engineering, and mechatronics courses.

These outcomes directly contribute to WP3 educational digital content and WP2 modular curriculum design.

1. **Autoware.Universe** should be the baseline platform for SafeAV framework adaptation.
2. **AWSIM/CARLA + SUMO/Scenic** provide complementary environments for digital twin, perception, and traffic-level validation.
3. **ROS-based verification tools** enable node-level and formal validation, aligning with ISO 26262 / SOTIF principles.
4. Partner use cases ensure coverage of ground, aerial, and hybrid autonomous systems for educational demonstration.

===== Next Steps → T4.2 Adaptation =====

- Containerize selected frameworks for student deployment.
- Develop a hands-on educational guide linking WP3 digital content to WP4 V&V examples.
- Integrate simulation exercises into the SafeAV MOOC platform.
- Define data interfaces (ROS bag, OpenSCENARIO) for cross-use of materials.
- Establish KPIs for student learning outcomes (practical validation success, reproducibility, safety comprehension).
  
en:safeav:handson · Last modified: 2025/10/30 07:46 by raivo.sell
CC Attribution-Share Alike 4.0 International