
Self-driving vehicle simulation

Autonomous vehicle (AV) development is one of the top trends in the automotive industry, and the technology behind AVs keeps evolving to make them safer. Engineers face new challenges, especially when moving toward Society of Automotive Engineers (SAE) levels 4 and 5. To put self-driving vehicles on the road and evaluate the reliability of their technology, they would have to be driven billions of miles, which would take far too long to achieve without the help of simulation. Furthermore, given past real-world AV crashes, a high-fidelity simulator has become an efficient alternative that provides diverse testing scenarios for controlling these vehicles and enables safety validation before real road driving. High-resolution virtual environments can be built from real-world data captured with cameras or lidars, so that the simulated scenarios match reality as closely as possible. Virtual environment development also lets us customize and create varied urban backgrounds for testing the vehicle. Creating a virtual copy of an existing intelligent system is a common approach nowadays, known as a digital twin.

Simulator

Simulation has been widely used in vehicle manufacturing, particularly for mechanical behavior and dynamics analysis. AVs, however, need more than that due to their nature: simulating complex environments and scenarios that include other road users, with different sensor combinations and configurations, lets us verify their decision-making algorithms. One of the most popular robotics simulation platforms is Gazebo. It integrates with ROS and offers physics engines and various sensor modules suitable for autonomous systems. Nevertheless, Gazebo lacks the features of modern game engines such as Unreal and Unity, which make it possible to create complex virtual environments with realistic rendering. CARLA and LGSVL, on the other hand, are modern open-source simulators built on the Unreal and Unity game engines, respectively.
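Whatever the simulator, a test run starts from a description of the vehicle's sensor suite. The sketch below illustrates this idea with a minimal, simulator-agnostic configuration; all class names, fields, and values are hypothetical and not tied to the actual APIs of Gazebo, CARLA, or LGSVL.

```python
from dataclasses import dataclass, field

@dataclass
class Sensor:
    kind: str        # e.g. "camera", "lidar", "imu" (illustrative kinds)
    offset_m: tuple  # mounting offset (x, y, z) on the vehicle, in metres
    rate_hz: float   # publishing frequency

@dataclass
class SensorSuite:
    sensors: list = field(default_factory=list)

    def add(self, sensor: Sensor) -> None:
        self.sensors.append(sensor)

    def kinds(self) -> set:
        # The set of sensing modalities this configuration covers.
        return {s.kind for s in self.sensors}

# A hypothetical combination: one roof lidar and one forward camera.
suite = SensorSuite()
suite.add(Sensor("camera", (1.5, 0.0, 1.4), 30.0))
suite.add(Sensor("lidar", (0.0, 0.0, 1.9), 10.0))
```

Varying such a description (adding sensors, moving mounts, changing rates) is how different sensor combinations and configurations are swept when verifying decision-making algorithms.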

Virtual Environment

Fierce competition in the gaming industry has brought many features to game engines. These engines can simulate physics and can therefore be exploited as simulators beyond game development. LGSVL and others have already taken advantage of these engines, building frameworks for testing autonomous vehicles inside these physics simulators. However, although the simulators provide some basic tools and assets to get started, that alone is not enough: to make the simulation realistic, real-life terrain must be reproduced.

Data Collection and Processing

Aerial imagery is captured with a drone flying over the area to be mapped. The images are taken along a grid-based flight path, which ensures that they capture different sides of each subject. To maximize coverage, the flight path is flown three times at different camera angles but at a constant altitude. Taking the aerial photos is one of the most important steps in the mapping process, as it strongly affects both the quality of the outcome and the amount of work needed to process the images. External factors can also degrade the quality of the pictures: weather conditions and scene lighting may create artifacts that disturb the photogrammetric process.

The images are georeferenced by the drone, and, if necessary, a stationary RTK device can be used to mitigate errors and drift in the positioning data stamped on the pictures. The onboard IMU records each picture's orientation so that the images can later be stitched together and used for photogrammetric processing. Third-party software aligns the captured pictures and builds a dense point cloud from them. Once the dense point cloud is created, the points must be segmented and classified in order to separate unwanted objects and vegetation from the point-cloud data. The removal itself, however, is not performed in the point cloud: the positional information these points carry for their respective objects later helps terrain generation spawn details. The figure below shows the three main steps to generate the Unity terrain from geospatial data.
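The grid-based flight path flown three times at a constant altitude can be sketched as a simple lawn-mower waypoint generator. This is a minimal illustration, assuming a rectangular survey area; the dimensions, line spacing, and camera pitch angles below are illustrative values, not the ones used in the actual survey.

```python
def grid_waypoints(width_m, height_m, spacing_m, altitude_m):
    """Generate a lawn-mower (grid) flight path over a rectangular area.

    Returns (x, y, z) waypoints: parallel lines separated by spacing_m
    (chosen from the desired image overlap), flown at a fixed altitude.
    """
    waypoints = []
    y = 0.0
    row = 0
    while y <= height_m:
        # Alternate flight direction on each line to avoid dead transits.
        xs = [0.0, width_m] if row % 2 == 0 else [width_m, 0.0]
        for x in xs:
            waypoints.append((x, y, altitude_m))
        y += spacing_m
        row += 1
    return waypoints

# Three passes at different camera (gimbal) pitch angles, same altitude,
# mirroring the procedure described above; angles are illustrative.
passes = {angle: grid_waypoints(100.0, 60.0, 20.0, altitude_m=50.0)
          for angle in (-90, -70, -45)}
```

Keeping the altitude constant across passes means only the camera angle changes between them, so each subject is seen from several sides at a consistent ground sampling distance.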

Terrain generation

A digitized real-life environment can be used to simulate AVs in countless different scenarios without ever taking the vehicle onto the road. Terrain generation from the point cloud is done directly in Unity: an in-house plugin reads a pre-classified point-cloud file and, based on the chosen parameters, creates a normal map, a heightmap, and a color map, which are used with Unity's terrain engine to create realistic environments.
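The heightmap step of such a pipeline can be sketched as rasterising the classified points onto a grid. This is a minimal illustration of the technique only, not the in-house plugin's actual code; the function name, grid parameters, and the use of LAS class 2 for ground are assumptions for the example.

```python
def heightmap_from_points(points, cell_size, grid_w, grid_h):
    """Rasterise classified (x, y, z, class_id) points into a heightmap.

    Only ground points contribute elevation; each cell keeps the highest
    ground return that falls into it. Non-ground points (vegetation,
    objects) are skipped here but their positions can still be used
    later to spawn details on the generated terrain.
    """
    GROUND = 2  # ASPRS LAS classification code for ground
    heightmap = [[0.0] * grid_w for _ in range(grid_h)]
    for x, y, z, cls in points:
        if cls != GROUND:
            continue
        col = int(x // cell_size)
        row = int(y // cell_size)
        if 0 <= col < grid_w and 0 <= row < grid_h:
            heightmap[row][col] = max(heightmap[row][col], z)
    return heightmap
```

In a real pipeline the grid values would then be normalised to the 0..1 range expected by a terrain engine's heightmap input, with color and normal maps produced analogously from the point attributes.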

en/ros/title.1611231541.txt.gz · Last modified: 2021/01/21 10:00 (external edit)
CC Attribution-Share Alike 4.0 International