The pipeline operates continuously in real time (typically 10–30 Hz) with deterministic latency to meet safety and control requirements.

===== Sensor Fusion =====

No single sensor technology can capture all aspects of a complex driving scene under all circumstances of weather, lighting, and traffic. Therefore, data from multiple sensors is fused (combined) to obtain a more complete, accurate, and reliable understanding of the environment than any single sensor could provide alone.

Each sensor modality has distinct advantages and weaknesses:
  * **Cameras** provide high-resolution color and texture information, essential for recognizing traffic lights, signs, and object appearance, but are sensitive to lighting and weather.
  * **LiDAR** delivers precise 3D geometry and range data, allowing accurate distance estimation and shape reconstruction, yet is affected by rain, fog, and reflective surfaces.
  * **Radar** measures object velocity and distance robustly, even in poor visibility, but has coarse angular resolution and may struggle with small or static objects.
  * **GNSS** provides global position but suffers from signal blockage and multipath reflections in urban canyons, tunnels, and under tree canopies.
  * **IMU** provides motion estimation but is prone to drift and accumulated error.

By fusing these complementary data sources, the perception system can achieve redundancy, increased accuracy, and fault tolerance — key factors for functional safety (ISO 26262).

Sensor fusion pursues two complementary goals: **complementarity**, where different sensors contribute unique, non-overlapping information, and **redundancy**, where overlapping sensors confirm each other's measurements and improve reliability. When multiple sensor modalities are used, both goals can be achieved simultaneously.

Accurate fusion depends critically on spatial and temporal alignment among sensors.

  * **Extrinsic calibration** determines the rigid-body transformations between sensors (translation and rotation). It is typically estimated through target-based calibration (e.g., checkerboard or reflective spheres) or self-calibration using environmental features.
  * **Intrinsic calibration** corrects sensor-specific distortions, such as lens aberration or LiDAR beam misalignment.
  * **Temporal synchronization** ensures that all sensor measurements correspond to the same physical moment, using hardware triggers, shared clocks, or interpolation.
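
As a minimal illustration of how these calibration results are applied, the sketch below projects LiDAR points into a camera image using an extrinsic rigid-body transform and a pinhole intrinsic matrix. The rotation, translation, and intrinsic values are placeholder assumptions; in practice they come from the target-based or self-calibration procedures described above.

<code python>
import numpy as np

# Assumed extrinsics (LiDAR -> camera): rotation aligning LiDAR axes
# (x forward, y left, z up) with camera axes (x right, y down, z forward),
# plus a small translation. Real values come from calibration.
R = np.array([[0.0, -1.0,  0.0],
              [0.0,  0.0, -1.0],
              [1.0,  0.0,  0.0]])
t = np.array([0.0, -0.3, 0.1])          # assumed lever arm in metres

# Assumed pinhole intrinsics for a 1280x720 camera (focal lengths, principal point).
K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])

def lidar_to_image(points_lidar):
    """Project Nx3 LiDAR points into pixel coordinates of the camera."""
    pts_cam = points_lidar @ R.T + t     # extrinsic: rigid-body transform
    in_front = pts_cam[:, 2] > 0.0       # keep points in front of the camera
    pts_cam = pts_cam[in_front]
    pix = pts_cam @ K.T                  # intrinsic: pinhole projection
    pix = pix[:, :2] / pix[:, 2:3]       # normalise by depth
    return pix, pts_cam[:, 2]            # pixel coordinates and depths

points = np.array([[10.0, 0.5, -0.2], [5.0, -1.0, 0.3]])   # example LiDAR points
pixels, depths = lidar_to_image(points)
print(pixels, depths)
</code>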

Calibration errors lead to spatial inconsistencies that can degrade detection accuracy or cause false positives. Therefore, calibration is treated as part of the functional safety chain and is regularly verified in maintenance and validation routines.

Fusion can occur at different stages in the perception pipeline, commonly divided into three levels:

  * **Data-level** fusion combines raw signals from sensors before any interpretation, providing the richest input but also the heaviest computational load.
  * **Feature-level** fusion merges processed outputs such as detected edges, motion vectors, or depth maps, balancing detail with efficiency.
  * **Decision-level** fusion integrates conclusions drawn independently by different sensors, producing a final decision that benefits from multiple perspectives.
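
As a toy sketch of decision-level fusion, the snippet below combines independent per-sensor detection confidences for the same candidate object in logit space under a naive independence assumption; the sensor confidences, the prior, and the fusion rule itself are illustrative choices, not a prescribed algorithm.

<code python>
import math

def fuse_confidences(confidences, prior=0.5):
    """Decision-level fusion of independent per-sensor confidences in logit space."""
    logit = lambda p: math.log(p / (1.0 - p))
    # Naive independence assumption: each sensor's evidence shifts the prior on its own.
    fused = logit(prior) + sum(logit(c) - logit(prior) for c in confidences)
    return 1.0 / (1.0 + math.exp(-fused))

# Camera, LiDAR, and radar each report a confidence for the same candidate object.
per_sensor = {"camera": 0.7, "lidar": 0.9, "radar": 0.6}
print(round(fuse_confidences(per_sensor.values()), 3))   # fused confidence ~ 0.97
</code>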

The mathematical basis of sensor fusion lies in probabilistic state estimation and Bayesian inference.
Typical formulations represent the system state as a probability distribution updated by sensor measurements.
Common techniques include:
  * **Kalman Filter (KF)** and its nonlinear extensions, the **Extended Kalman Filter (EKF)** and **Unscented Kalman Filter (UKF)**, which maintain a Gaussian estimate of state uncertainty and iteratively update it as new sensor data arrive.
  * **Particle Filter (PF)**, which uses a set of weighted samples to approximate arbitrary non-Gaussian distributions.
  * **Bayesian Networks** and **Factor Graphs**, which represent dependencies between sensors and system variables as nodes and edges, enabling large-scale optimization.
  * **Deep Learning–based Fusion**, where neural networks implicitly learn statistical relationships between sensor modalities through backpropagation rather than explicit probabilistic modeling.
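
For intuition, the following is a minimal one-dimensional Kalman filter sketch with a constant-velocity motion model: it maintains a Gaussian estimate of position and velocity and updates it with noisy position measurements (e.g., from a LiDAR-based detector). The noise covariances and measurement values are arbitrary placeholders.

<code python>
import numpy as np

dt = 0.1                                   # time step [s]
F = np.array([[1.0, dt], [0.0, 1.0]])      # constant-velocity motion model
H = np.array([[1.0, 0.0]])                 # we measure position only
Q = np.diag([0.01, 0.1])                   # assumed process noise
R = np.array([[0.25]])                     # assumed measurement noise variance

x = np.array([[0.0], [0.0]])               # initial state: position, velocity
P = np.eye(2)                              # initial state covariance

def kf_step(x, P, z):
    """One predict/update cycle of a linear Kalman filter."""
    # Predict: propagate the Gaussian estimate through the motion model.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: correct the prediction with the new measurement z.
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

for z in [1.05, 2.1, 2.95, 4.02]:          # noisy position measurements (made up)
    x, P = kf_step(x, P, np.array([[z]]))
print(x.ravel())                           # fused estimate of position and velocity
</code>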

==== Learning-Based Fusion Approaches ====
Deep learning has significantly advanced sensor fusion.
Neural architectures learn optimal fusion weights and correlations automatically, often outperforming hand-designed algorithms.
For example:
  * **BEVFusion** fuses LiDAR and camera features into a top-down BEV representation for 3D detection.
  * **TransFusion** uses transformer-based attention to align modalities dynamically.
  * **DeepFusion** and **PointPainting** project LiDAR points into the image plane, enriching them with semantic color features.

End-to-end fusion networks can jointly optimize detection, segmentation, and motion estimation tasks, enhancing both accuracy and robustness.
However, deep fusion models require large multimodal datasets for training and careful validation to ensure generalization and interpretability.
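
The sketch below shows the general idea of feature-level learning-based fusion: camera and LiDAR feature maps in a shared BEV grid are concatenated and passed through a small convolutional head that learns the fusion weights. It is a toy model in the spirit of the methods above, not the published BEVFusion architecture; all shapes and channel counts are assumptions.

<code python>
import torch
import torch.nn as nn

class TinyBEVFusion(nn.Module):
    """Toy feature-level fusion: concatenate camera and LiDAR BEV feature maps
    and let a small convolutional head learn the fusion weights.
    (Illustrative only; not the published BEVFusion architecture.)"""

    def __init__(self, cam_channels=64, lidar_channels=64, fused_channels=128):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(cam_channels + lidar_channels, fused_channels,
                      kernel_size=3, padding=1),
            nn.BatchNorm2d(fused_channels),
            nn.ReLU(inplace=True),
        )
        self.head = nn.Conv2d(fused_channels, 1, kernel_size=1)  # e.g. objectness per BEV cell

    def forward(self, cam_bev, lidar_bev):
        fused = self.fuse(torch.cat([cam_bev, lidar_bev], dim=1))
        return self.head(fused)

model = TinyBEVFusion()
cam_bev = torch.randn(1, 64, 128, 128)     # camera features projected to BEV (assumed shape)
lidar_bev = torch.randn(1, 64, 128, 128)   # voxelised LiDAR features in BEV (assumed shape)
print(model(cam_bev, lidar_bev).shape)     # -> torch.Size([1, 1, 128, 128])
</code>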


===== Mapping =====

Some form of world model is a fundamental prerequisite for autonomous navigation, since the tasks a vehicle executes are typically defined, directly or indirectly, in terms of the spatial configuration of the environment and of intermediate goal positions.
The world model thus represents a reference frame relative to which goals are defined.

When discussing an environmental model, it is also necessary to consider its suitable representation in relation to the specific problem being solved.
The problem of representing and constructing an environmental model from imprecise sensory data must be viewed from several perspectives:

- **Representation.**
  The perceived environment must be stored in a suitable data structure corresponding to the complexity of the environment (e.g., empty halls, furnished rooms). When working with raw, unprocessed sensory data, a compact data representation is required.

- **Uncertainty in data.**
  For reliable use of the constructed model, it is desirable to maintain a description of uncertainty in the data, which arises from the processing of noisy sensory information. Methods for data fusion can be advantageously used here to reduce overall uncertainty. Data can be merged from different sources or from different time intervals during task execution.

- **Computational and data efficiency.**
  The data structures used must be efficiently updatable, typically in real time, even in relatively large environments and at various levels of resolution. It is almost always necessary to find an appropriate compromise between method efficiency (model quality) and computational and data demands.

- **Adaptability.**
  The representation of the environmental model should be as application-independent as possible, which is, however, a very demanding task. In real situations, the environmental representation is often tailored directly to the task for which the model is used, so that the existence of the map enables or significantly increases the efficiency of solving the selected problem.

The ways of representing environment maps can be divided according to their level of abstraction into the following types:

- **Sensor-based** maps work directly with raw sensory data or with only relatively simple processing of it. Raw sensor data are used at the level of reactive feedback in subsystems for collision avoidance and resolution, where fast responses are required (David, 1996); thus, they are mainly applied at the motion control level.

A typical example of a sensor-based map is the **occupancy grid** -- a two-dimensional array of cells, where each cell represents a square area of the real world and determines the probability that an obstacle exists in that region.
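
A minimal log-odds occupancy grid might look like the following sketch; the grid size, cell resolution, and update increment are arbitrary assumptions.

<code python>
import numpy as np

class OccupancyGrid:
    """Minimal log-odds occupancy grid: each cell stores the log-odds that it is occupied."""

    def __init__(self, size=100, resolution=0.1):
        self.resolution = resolution               # cell edge length in metres (assumed)
        self.log_odds = np.zeros((size, size))     # 0.0 corresponds to P(occupied) = 0.5

    def update(self, x, y, occupied, step=0.4):
        """Fold one measurement about world point (x, y) into the grid."""
        i, j = int(x / self.resolution), int(y / self.resolution)
        self.log_odds[i, j] += step if occupied else -step

    def probability(self, x, y):
        i, j = int(x / self.resolution), int(y / self.resolution)
        return 1.0 / (1.0 + np.exp(-self.log_odds[i, j]))

grid = OccupancyGrid()
for _ in range(3):                                 # three consistent "hit" measurements
    grid.update(2.0, 3.5, occupied=True)
print(grid.probability(2.0, 3.5))                  # probability rises above 0.5
</code>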

- **Geometric** maps describe objects in the environment using geometric primitives in a Cartesian coordinate system.
Geometric primitives include segments (lines), polygons, arc sections, splines, or other curves.
Additional structures such as visibility graphs or rectangular decompositions are often built on top of this description to support robot path planning.
Geometric maps can also be used for robot localization or exported into CAD tools, where further corrections create a complete CAD model of the environment for non-robotic applications.

- **Topological** maps achieve a high level of abstraction.
They do not store information about the absolute coordinates of objects but rather record the **topological relationships** between objects in the environment.
This representation focuses on connectivity and adjacency between places rather than precise geometric positions.
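
Such a map can be as simple as an adjacency structure over named places, searched with graph algorithms; the place names below are invented for illustration.

<code python>
from collections import deque

# Topological map: only connectivity between places, no coordinates.
topo_map = {
    "parking":  ["entrance"],
    "entrance": ["parking", "corridor"],
    "corridor": ["entrance", "lab", "office"],
    "lab":      ["corridor"],
    "office":   ["corridor"],
}

def shortest_route(start, goal):
    """Breadth-first search over the place graph."""
    queue, visited = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in topo_map[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])

print(shortest_route("parking", "office"))   # ['parking', 'entrance', 'corridor', 'office']
</code>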

- **Symbolic** maps contain information that the robot typically cannot directly obtain through its sensors — usually abstract data that allow a certain degree of **natural-language communication** with the robot.
These maps are commonly implemented in a **declarative language** such as *Prolog* or *LISP*.

The environmental model can be created in many ways, from manual data collection to fully automatic mapping procedures.
However, the difference in time efficiency is significant.

During the construction of an environmental model — mapping — several basic tasks must be solved.
One of them is the choice of a suitable movement trajectory that ensures the vehicle's sensor system acquires a sufficient amount of data from the entire mapped space.
This issue is addressed by planning algorithms for exploration and space coverage.
The choice of a suitable trajectory can also be replaced by remote teleoperation, where the operator decides which parts of the mapped area the robot should visit.
Another necessary condition for consistent mapping is the ability to localize.
By its nature, the mapping and global localization problem can be described as a “chicken and egg” problem.

The intertwined nature of localization and mapping arises from the fact that inaccuracies in the vehicle's motion degrade the accuracy of the sensory measurements used as input to mapping algorithms.
As soon as the robot moves, the estimate of its current position becomes affected by noise caused by motion inaccuracy.
The perceived absolute position of objects inserted into the map is then influenced both by measurement noise and by the error in the estimated current position of the robot.
The mapping error caused by inaccurate localization can further reduce the accuracy of localization methods that depend on the map.
In this way, the two tasks influence each other.

In general, the error of the estimated robot trajectory is strongly correlated with errors in the constructed map — that is, an accurate map cannot be built without sufficiently precise, reliable, and robust localization, and vice versa.


===== Positioning =====

An essential component of spatial orientation and environmental understanding for an autonomous vehicle is **localization** within that environment.
**Localization** is defined as the process of estimating the vehicle's current coordinates (position) based on information from available sensors.
Suitable sensors for localization are those that provide either information about the vehicle's relative position with respect to the environment or information about its own motion within the environment.

To obtain information about ego motion, **inertial** and **odometric sensors** are used. These provide an approximate initial estimate of the position.
Inertial sensors infer position from acceleration and rotation, while odometric sensors directly measure the length of the trajectory traversed by the vehicle.
However, both measurement principles suffer from significant measurement errors.
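
The sketch below illustrates why such dead reckoning drifts: integrating odometry increments that carry a small systematic scale error and random noise (both values assumed here) produces a position error that grows with the distance traveled and can only be bounded by absolute corrections from other sensors.

<code python>
import random

random.seed(0)
estimated = actual = 0.0
for _ in range(1000):
    increment = 0.05                                        # the vehicle really moves 5 cm per step
    measured = increment * 1.01 + random.gauss(0.0, 0.002)  # assumed 1 % scale error plus noise
    estimated += measured                                   # dead reckoning: integrate odometry
    actual += increment
print(f"true: {actual:.2f} m  estimated: {estimated:.2f} m  "
      f"drift: {abs(estimated - actual):.2f} m")
</code>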

As the primary sensors for accurate position determination, **laser range finders (scanners)** are commonly used.
These measure the direct distance to obstacles relative to which the robot orients itself.
In recent years, **camera-based systems** have also begun to emerge as viable alternatives or complements to laser range sensors.


==== Reference Frames for Localization ====

The vehicle's coordinates in the working environment can be referenced either to an existing **environment model (map)** or to a chosen **reference frame**.
The reference is often, for example, the vehicle's initial position.

If the vehicle's initial position is known, or if only **relative coordinates** with respect to the starting point are important, we speak of the **position tracking problem** (also called **continuous localization**).
These localization tasks correct odometric errors accumulated during movement, thus refining the vehicle's current position estimate.

In the opposite case — when the initial position is unknown and we need to determine the vehicle's **absolute coordinates** within the world model — we speak of the **global localization problem**.

One of the fundamental differences between **global localization** and **position tracking** lies in the requirement for a **predefined world model**.
While global localization requires such a model, position tracking does not necessarily depend on it — it can, for example, utilize a model being gradually constructed during movement.


==== The “Kidnapped Robot Problem” ====

The most general form of the localization task is the so-called **“kidnapped robot problem”**, which generalizes both position tracking and global localization.
In this task, the vehicle must not only continuously track its position but also detect situations in which it has been **suddenly displaced** to an unknown location — without the event being directly observable by its sensors.

The method must therefore be capable of detecting, during continuous localization, that the vehicle is located in a completely different position than expected, and at that moment switch to a **different localization strategy**.
This new strategy typically employs **global localization algorithms**, which, after some time, re-establish the vehicle's true position in the environment.
Once the correct position has been determined, localization can continue using a less computationally demanding **position-tracking strategy**.
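
One common way to handle this, sketched below for a one-dimensional particle filter with made-up thresholds, is to monitor the average measurement likelihood of the particles and, when it collapses, to reinject a fraction of particles spread over the whole map so that global localization can recover the true position.

<code python>
import random

def localization_step(particles, measurement_likelihood, map_bounds=(0.0, 100.0),
                      kidnap_threshold=0.05, reinject_fraction=0.2):
    """Toy kidnapped-robot handling for a 1-D particle filter.

    If the average likelihood of the current particles drops below a threshold,
    a fraction of them is replaced by uniformly drawn particles so that global
    localization can re-establish the true position.
    """
    weights = [measurement_likelihood(p) for p in particles]
    avg = sum(weights) / len(weights)
    if avg < kidnap_threshold:                      # particles no longer explain the data
        n_new = int(reinject_fraction * len(particles))
        for i in random.sample(range(len(particles)), n_new):
            particles[i] = random.uniform(*map_bounds)
    return particles

# Hypothetical likelihood: the vehicle is actually near x = 70 after being "kidnapped",
# while the filter still believes it is near x = 20.
likelihood = lambda p: 1.0 if abs(p - 70.0) < 2.0 else 0.01
particles = [random.gauss(20.0, 1.0) for _ in range(500)]
particles = localization_step(particles, likelihood)
</code>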
  