{{:en:iot-open:czapka_m.png?50| Masters (2nd level) classification icon }}
  
<todo @raivo.sell #raivo.sell:2025-09-18></todo>

Autonomy is part of the next big megatrend in electronics, one that is likely to change society. As a new technology, it presents a large number of open research problems. These problems can be classified into four broad categories: autonomy hardware, autonomy software, autonomy ecosystem, and autonomy business models. In terms of hardware, autonomy consists of a mobility component (increasingly electric), sensors, and computation.
Research in sensors for autonomy is rapidly evolving, with a strong focus on "sensor fusion, robustness, and intelligent perception". One exciting area is "multi-modal sensor fusion", where data from LiDAR, radar, cameras, and inertial sensors are combined using AI to improve perception in complex or degraded environments. Researchers are developing **uncertainty-aware fusion models** that not only integrate data but also quantify confidence levels, which is essential for safety-critical systems. There is also growing interest in "event-based cameras" and "adaptive LiDAR", which offer low-latency or selective scanning capabilities for dynamic scenes, while **self-supervised learning** enables autonomous systems to extract semantic understanding from raw, unlabeled sensor data. Another critical thrust is the **development of resilient and context-aware sensors**. This includes sensors that function in all-weather conditions, such as "FMCW radar" and "polarization-based vision", and systems that can detect and correct for sensor faults or spoofing in real time. Researchers are also exploring "terrain-aware sensing", "semantic mapping", and "infrastructure-to-vehicle (I2V)" sensor networks to extend situational awareness beyond line-of-sight. Finally, sensor co-design, where hardware, placement, and algorithms are optimized together, is gaining traction, especially in "edge computing architectures" where real-time processing and low power are crucial. These advances support autonomy not just in cars, but also in drones, underwater vehicles, and robotic systems operating in unstructured or GPS-denied environments.
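As a minimal sketch of the uncertainty-aware fusion idea above, the fragment below fuses a radar and a LiDAR range estimate by inverse-variance weighting, a basic building block behind many probabilistic fusion schemes. The sensor values, variances, and function names are illustrative assumptions, not part of any particular AV stack.

<code python>
import numpy as np

# Hypothetical 1-D range estimates (metres) and variances from two sensors.
# Real systems fuse full state vectors (e.g. Kalman or factor-graph methods);
# this sketch only shows the inverse-variance weighting principle.
radar_range, radar_var = 42.7, 4.0   # radar: noisier but weather-robust
lidar_range, lidar_var = 41.9, 0.25  # LiDAR: precise in clear conditions

def fuse(z1, var1, z2, var2):
    """Fuse two independent estimates by inverse-variance weighting."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    fused = (w1 * z1 + w2 * z2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)          # fused uncertainty is always smaller
    return fused, fused_var

estimate, variance = fuse(radar_range, radar_var, lidar_range, lidar_var)
print(f"fused range = {estimate:.2f} m, std dev = {np.sqrt(variance):.2f} m")
</code>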
-Verification and Validation Test Apparatuses + 
-1. Virtual:  As methodologies develop for V&Vvirtual testing tools will be required to support themThis means that the work done in standards organizations such as ASAM  must continue as a key enabling feature to develop these methodologies. +In terms of computationexciting research focuses on enabling real-time decision-making in environments where cloud connectivity is limited, latency is critical, and power is constrainedOne prominent area is the "co-design of perception and control algorithms with edge hardware," such as integrating neural network compression, quantization, and pruning techniques to run advanced AI models on embedded systems (e.g., NVIDIA Jetson, Qualcomm RB5, or custom ASICs). Research also targets "dynamic workload scheduling," where sensor processing, localization, and planning are intelligently distributed across CPUs, GPUs, and dedicated accelerators based on latency and energy constraints. Another major focus is on "adaptive, context-aware computing," where the system dynamically changes its computational load or sensing fidelity based on situational awareness—for instance, increasing compute resources during complex maneuvers or reducing them during idle cruising. Related to this is "event-driven computing" and "neuromorphic architectures" that mimic biological efficiency to reduce energy use in perception tasksResearchers are also exploring "secure edge execution," such as trusted computing environments and runtime monitoring to ensure deterministic behavior under adversarial conditionsFinally"collaborative edge networks," where multiple autonomous agents (vehicles, drones, or infrastructure nodes) share compute and data at the edge in real time, open new frontiers in swarm autonomy and decentralized intelligence
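To make the compression/quantization theme concrete, here is a minimal sketch of symmetric int8 post-training quantization of a weight tensor. The layer size and scaling scheme are illustrative assumptions; real edge toolchains use more sophisticated variants (per-channel scales, quantization-aware training).

<code python>
import numpy as np

# Hypothetical weight matrix of a small network layer (illustrative only).
weights = np.random.randn(256, 256).astype(np.float32)

def quantize_int8(w):
    """Symmetric post-training quantization of a tensor to int8."""
    scale = np.abs(w).max() / 127.0          # map the largest magnitude to 127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

q_weights, scale = quantize_int8(weights)
error = np.abs(weights - dequantize(q_weights, scale)).mean()
print(f"int8 storage: {q_weights.nbytes / weights.nbytes:.0%} of fp32, "
      f"mean abs error {error:.4f}")
</code>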
Finally, as there is a shift towards "software-defined vehicles", there is an increasing need to develop computing hardware architectures bottom-up with the critical properties of software reuse and underlying hardware innovation. This process mimics computer architectures in information technology, but it does not exist in the world of autonomy today.
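One ingredient of such architectures is a stable hardware abstraction layer, so that application software can be reused as the underlying hardware changes. The sketch below illustrates the idea with a hypothetical RangeSensor interface; the class and vendor names and the placeholder readings are invented for illustration, not drawn from any real platform.

<code python>
from abc import ABC, abstractmethod

# Hypothetical hardware-abstraction interface: application code is written
# against RangeSensor, so the same logic is reused when the vendor or
# sensing modality underneath changes.
class RangeSensor(ABC):
    @abstractmethod
    def read_ranges(self) -> list[float]:
        """Return a list of range measurements in metres."""

class VendorALidar(RangeSensor):
    def read_ranges(self) -> list[float]:
        return [12.1, 12.3, 40.0]      # placeholder values

class VendorBRadar(RangeSensor):
    def read_ranges(self) -> list[float]:
        return [12.5, 39.5]            # placeholder values

def nearest_obstacle(sensor: RangeSensor) -> float:
    # Application logic depends only on the abstract interface.
    return min(sensor.read_ranges())

for dev in (VendorALidar(), VendorBRadar()):
    print(type(dev).__name__, nearest_obstacle(dev))
</code>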
In terms of software, important system functions such as perception, path planning, and location services sit in the software/AI layer. While somewhat effective, AV stacks are still considerably less effective than a human, who can navigate the world while spending only about 100 watts of power. There are a number of places where human and machine autonomy differ. These include:
  - Focus: Humans have the idea of focus and peripheral vision, whereas AVs monitor all directions all the time. This has implications for power, data, and computation (see the sketch after this list).
  - Movement-based perception: Humans use movement as a key signature for identification. In contrast, current perception engines effectively try to work on static photos.
  - Perception-based recognition: Humans use an expectation of the future movement of objects to limit computation. This technique has advantages in computation, but it is not currently used in AVs.
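As a rough illustration of the focus/peripheral-vision point, the sketch below compares the pixel budget of processing a full camera frame against a foveated scheme that keeps only a region of interest at full resolution and subsamples the periphery. The frame size, region of interest, and stride are arbitrary assumptions used only to show the scale of the saving.

<code python>
import numpy as np

# Hypothetical camera frame (height x width); values stand in for pixels.
frame = np.zeros((1080, 1920), dtype=np.uint8)

def foveated_pixel_budget(frame, roi=(400, 800, 400, 800), periphery_stride=8):
    """Count pixels processed when only a region of interest (ROI) is kept at
    full resolution and the periphery is subsampled, versus full-frame processing."""
    top, bottom, left, right = roi
    roi_pixels = (bottom - top) * (right - left)
    periphery_pixels = (frame.size - roi_pixels) // (periphery_stride ** 2)
    return roi_pixels + periphery_pixels, frame.size

focused, full = foveated_pixel_budget(frame)
print(f"foveated: {focused} px vs full frame: {full} px "
      f"({focused / full:.1%} of the data)")
</code>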
Thus, in addition to traditional machine learning techniques, newer AI architectures with properties of robustness, power/compute efficiency, and effectiveness are open research problems.

In terms of the ecosystem, key open research problems exist in areas such as safety validation, V2X communication, and ecosystem partners.
Verification and validation (V&V) for autonomous systems is evolving rapidly, with key research focused on making AI-driven behavior both "provably safe and explainable". One major direction involves "bounding AI behavior" using formal methods and developing "explainable AI" (XAI) that supports safety arguments regulators and engineers can trust. Researchers are also focused on "rare and edge-case scenario generation" through adversarial learning, simulation, and digital twins, aiming to create test cases that challenge the limits of perception and planning systems. Defining new "coverage metrics", such as semantic or risk-based coverage, has become crucial, as traditional code coverage does not capture the complexity of non-deterministic AI components. Another active area is "scalable system-level V&V", where component-level validation must support higher-level safety guarantees. This includes "compositional reasoning", contract-based design, and model-based safety case automation. The integration of **digital twins** for closed-loop simulation and real-time monitoring is enabling continuous validation even post-deployment. In parallel, "cybersecurity-aware V&V" is emerging, focusing on spoofing resilience and securing the validation pipeline itself. Finally, standardization of simulation formats (e.g., OpenSCENARIO, ASAM) and the rise of "test infrastructure-as-code" are laying the groundwork for scalable, certifiable autonomy, especially under evolving regulatory frameworks such as UL 4600 and ISO 21448.
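As a toy example of a scenario-based coverage metric, the sketch below enumerates a small discretized scenario parameter space and reports how much of it a set of executed simulations covers. The parameters and the "executed" scenarios are invented placeholders; real risk-based coverage metrics weight cells by hazard and operational design domain.

<code python>
from itertools import product

# Hypothetical discretized scenario parameters for a coverage metric.
weather    = ["clear", "rain", "fog"]
lighting   = ["day", "dusk", "night"]
pedestrian = ["none", "crossing", "jaywalking"]

all_cells = set(product(weather, lighting, pedestrian))

# Scenarios actually executed in simulation (placeholder test log).
executed = {
    ("clear", "day", "none"),
    ("clear", "day", "crossing"),
    ("rain", "night", "jaywalking"),
}

coverage = len(executed & all_cells) / len(all_cells)
missing = sorted(all_cells - executed)
print(f"scenario coverage: {coverage:.1%}; e.g. untested: {missing[0]}")
</code>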
One of the ecosystem aids to autonomy may be connection to the infrastructure, and of course, in mixed human/machine environments there is the natural Human Machine Interface (HMI). Key research in V2X (Vehicle-to-Everything) for autonomy centers on enabling cooperative behavior and enhanced situational awareness through low-latency, secure communication. A major area of focus is "reliable, high-speed communication" via technologies like "C-V2X and 5G/6G", which are critical for supporting time-sensitive autonomous functions such as coordinated lane changes, intersection management, and emergency response. Closely linked is the development of "edge computing architectures", where V2X messages are processed locally to reduce latency and support real-time decision-making. Research is active in "cooperative perception", where vehicles and infrastructure share sensor data to extend the field of view beyond occlusions, enabling safer navigation in complex urban environments. Another core research direction is the integration of "smart infrastructure and digital twins", where roadside sensors provide real-time updates to HD maps and augment vehicle perception. This is essential for detecting dynamic road conditions, construction zones, and temporary signage. In parallel, ensuring "security and privacy in V2X communication" is a growing concern. Work is underway on encrypted, authenticated protocols and on methods to detect and respond to malicious actors or faulty data. Standardization and interoperability are also vital for large-scale deployment; efforts are focused on harmonizing communication protocols across vendors and regions and on developing robust, scenario-based testing frameworks that incorporate both simulation and physical validation. Finally, an open research issue is the tradeoff between individual autonomy and dependence on an infrastructure. Associated with infrastructure dependence are open issues of legal liability, business models, and cost.
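The cooperative-perception idea can be sketched very simply: detections shared over V2X by another agent (here a roadside unit) are merged into the ego vehicle's world model, with nearby duplicates suppressed. The coordinates, the 1 m association radius, and the naive nearest-distance merge are illustrative assumptions; production systems use probabilistic data association and track fusion.

<code python>
import math

# Hypothetical detections shared over V2X, already transformed into a common
# map frame: (x, y) positions in metres reported by different agents.
ego_detections = [(10.0, 2.0), (25.5, -1.0)]
rsu_detections = [(25.8, -1.2), (40.0, 5.0)]   # roadside unit sees past an occlusion

def merge(detections_a, detections_b, radius=1.0):
    """Naive cooperative-perception merge: keep every detection from A and add
    detections from B that are farther than `radius` from anything already kept."""
    merged = list(detections_a)
    for (bx, by) in detections_b:
        if all(math.hypot(bx - mx, by - my) > radius for (mx, my) in merged):
            merged.append((bx, by))
    return merged

world = merge(ego_detections, rsu_detections)
print(world)   # the occluded object at (40.0, 5.0) becomes visible to the ego vehicle
</code>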
Human-Machine Interface (HMI) for autonomy remains an area with several open research and design challenges, particularly around trust, control, and situational awareness. One major issue is how to build "appropriate trust and transparency" between users and autonomous systems. Current interfaces often fail to clearly convey the vehicle's capabilities, limitations, or decision-making rationale, which can lead to overreliance or confusion. There is a delicate balance between providing sufficient information to promote understanding and avoiding cognitive overload. Additionally, ensuring "safe and intuitive transitions of control", especially in Level 3 and Level 4 autonomy, remains a critical concern. Drivers may take several seconds to re-engage during a takeover request, and the timing, modality, and clarity of such prompts are not yet standardized or optimized across systems. Another set of challenges lies in maintaining "situational awareness" and designing "adaptive, accessible interfaces". Passive users in autonomous systems tend to disengage, losing track of the environment, which can be dangerous during unexpected events. Effective HMI must offer context-sensitive feedback using visual, auditory, or haptic cues while adapting to the user's state, experience level, and accessibility needs. Moreover, autonomous vehicles currently lack effective ways to interact with external actors, such as pedestrians or other drivers, replacing human cues like eye contact or gestures. Developing standardized, interpretable external HMIs, a "language of driving", remains an active area of research. Finally, a lack of unified metrics and regulatory standards for evaluating HMI effectiveness further complicates design validation, making it difficult to compare systems or ensure safety across manufacturers.
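To show the kind of design space a takeover request (TOR) policy occupies, the sketch below encodes an escalating alert schedule as a function of elapsed time. The thresholds and modalities are purely illustrative assumptions; deriving and standardizing such values is exactly the open human-factors problem described above.

<code python>
# Hypothetical escalation policy for a takeover request (TOR): thresholds and
# modalities are illustrative only; real systems derive them from human-factors
# studies and regulatory guidance.
ESCALATION = [
    (0.0, "visual icon on cluster display"),
    (2.0, "auditory chime + message"),
    (4.0, "haptic seat/steering-wheel pulse"),
    (8.0, "begin minimal-risk maneuver (e.g. controlled stop)"),
]

def tor_action(seconds_since_request: float) -> str:
    """Return the most severe escalation stage reached at a given elapsed time."""
    action = ESCALATION[0][1]
    for threshold, stage in ESCALATION:
        if seconds_since_request >= threshold:
            action = stage
    return action

for t in (0.5, 3.0, 9.0):
    print(f"t = {t:>4}s -> {tor_action(t)}")
</code>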
Finally, autonomy will have implications for topics such as civil infrastructure guidance, field maintenance, interaction with emergency services, interaction with disabled and young riders, insurance markets, and, most importantly, the legal profession. There are many research issues underlying all of these topics.
In terms of business models, use models and their implications for the supply chain are open research problems. For the supply chain, the critical technology is semiconductors, which is highly sensitive to very high volume. For example, the largest market in mobility, the auto industry, is approximately 10% of semiconductor volume, and the other forms (airborne, marine, space) are orders of magnitude lower. From a supply chain perspective, a small number of SKUs which service a large market are ideal. The research problem is: what should be the nature of these very scalable components? In terms of end markets, autonomy in traditional transportation is likely to lead to a reduction in unit volume. Why? With autonomy, one can get much higher utilization (versus the < 5% in today's automobiles). However, it is also likely that autonomy unleashes a broad class of solutions in markets such as agriculture, warehouses, distribution, delivery, and more. Micromobility applications in particular offer some interesting options for very high volumes. The exact nature of these applications is an open research problem.
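The utilization argument can be made concrete with some back-of-the-envelope arithmetic. The fleet size and the 50% autonomous utilization figure below are illustrative assumptions; only the ~5% utilization of today's privately owned cars comes from the text above.

<code python>
# Back-of-the-envelope fleet-size estimate under higher utilization.
# All numbers except the ~5% figure are illustrative assumptions.
vehicles_today       = 1_000_000   # hypothetical privately owned fleet
utilization_today    = 0.05        # ~5% of the day in use (as cited above)
utilization_autonomy = 0.50        # assumed achievable with shared autonomous fleets

vehicle_hours_needed = vehicles_today * utilization_today        # demand stays fixed
fleet_with_autonomy  = vehicle_hours_needed / utilization_autonomy

print(f"equivalent autonomous fleet: {fleet_with_autonomy:,.0f} vehicles "
      f"({fleet_with_autonomy / vehicles_today:.0%} of today's fleet)")
</code>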
  