
Safety

The Safety of Autonomous Systems Working Group (SASWG) has identified a set of the most significant safety challenges by considering the following aspects [1]:

  • Domain-specific expert opinions on high-level issues;
  • A set of modelled example systems and their possible use cases;
  • Analysis of control software in autonomous systems;
  • Analysis of a small but representative set of accidents that have actually occurred.

Based on those considerations, the SASWG has defined domain-specific safety topics to be discussed, considered, or addressed by national and international regulations [2].

Air
  • Existing regulations are well established and have been applied for decades. However, this experience is not directly based on autonomous control software, which makes it challenging to ensure software robustness.
  • The interface with Air Traffic Control currently relies on verbal communication with Air Traffic Control operators. Autonomous systems will most likely require dedicated digital communication channels and protocols; these novel solutions bring their own safety challenges.
  • Third-party risks usually stem from the limited possibility of isolating third-party systems, which creates risks related to interactions, software updates, protocol updates, etc. As a consequence, regulations might become far too detailed, making them hard to implement and easy to violate.
  • Reliance on external systems is current practice. Today, when a navigation system such as GNSS malfunctions, a pilot/operator takes over control and navigates using visual information. In autonomous systems this fallback may not be available, which creates safety risks (see the sketch after this list).
  • Removal of human senses as health monitors might be a source of additional safety risks, since pilots become closely acquainted with the system they operate. Removing the pilot from the loop creates the risk of running into situations that automated software does not properly recognize.
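
A minimal sketch of the GNSS-fallback point above (all names and thresholds are illustrative assumptions, not from the source): instead of a pilot taking over visually, an autonomous controller might rank its remaining navigation sources and fall back to a pre-defined contingency when none is usable.

from dataclasses import dataclass

@dataclass
class NavSource:
    name: str
    healthy: bool      # self-reported health flag
    accuracy_m: float  # estimated position error in metres

def select_nav_source(sources, max_error_m=50.0):
    """Pick the most accurate healthy source; return None to trigger a contingency.

    Illustrative only: certified systems would fuse sources and apply
    integrity monitoring rather than a simple ranking."""
    usable = [s for s in sources if s.healthy and s.accuracy_m <= max_error_m]
    if not usable:
        return None  # e.g. loiter, divert, or land according to a pre-approved plan
    return min(usable, key=lambda s: s.accuracy_m)

# Example: GNSS degraded, inertial navigation still within bounds.
sources = [
    NavSource("GNSS", healthy=False, accuracy_m=5.0),
    NavSource("INS", healthy=True, accuracy_m=30.0),
    NavSource("VisualOdometry", healthy=True, accuracy_m=80.0),
]
chosen = select_nav_source(sources)
print(chosen.name if chosen else "No usable source: execute contingency plan")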
Automotive
  • Assuring driver readiness relates to the autonomy levels (see the chapters on autonomy levels) at which the human driver must be ready to take over control. The main risk is whether the driver is actually ready for immediate action.
  • Connectivity with other vehicles and the environment might be required at different levels: individually with the environment, with other cars for platooning, and with general traffic control systems. The communication mechanisms should switch seamlessly between these modes; this complexity brings additional robustness risks.
  • Through-life behaviour monitoring might become a requirement due to autonomous operation. However, data collection, storage, and processing on third-party cloud systems bring risks related to proper data handling.
  • Behaviour updates will most probably be part of operating autonomous systems. Such updates bring several challenges (a minimal sketch follows this list):
    • Balance between recent experience and long-term experience, so that important behaviours are not lost;
    • Balance between the system's own experience and experience acquired from the cloud;
    • Software version inconsistency.
  • The value of simulation might be overestimated when it replaces real-world testing, resulting in software that is over-optimized for the simulation rather than for real-world operating scenarios.
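
A minimal sketch of the "balance between recent and long-term experience" challenge (all names and thresholds are illustrative assumptions, not from the source): a behaviour update could blend newly acquired parameters, whether learned locally or downloaded from the cloud, into the long-term baseline while rejecting changes that would overwrite established behaviour.

def blend_update(long_term, recent, alpha=0.2, max_drift=0.5):
    """Blend recent experience into long-term behaviour parameters.

    alpha limits how quickly recent experience dominates; max_drift
    rejects updates that would shift an established value too far."""
    blended = {}
    for key, old in long_term.items():
        new = recent.get(key, old)
        candidate = (1 - alpha) * old + alpha * new
        if abs(candidate - old) > max_drift * abs(old):
            candidate = old  # keep the long-term value and flag it for review
        blended[key] = candidate
    return blended

baseline = {"braking_distance_margin": 1.5, "lane_change_gap_s": 3.0}
cloud_update = {"braking_distance_margin": 1.2, "lane_change_gap_s": 0.5}
print(blend_update(baseline, cloud_update))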
Defence
  • The mission and its completion or non-completion conditions might conflict with safety requirements, compromising both during decision making.
  • Test, Evaluation, Verification and Validation (TEVV) are the key elements of designing highly assured systems. However, trust also depends on the acceptance of the methods used to formally verify performance and safety.
Maritime
  • Long communication paths make communication with operators or coastal control systems difficult, which shapes the overall operational risks.
  • Monitoring infrastructure is limited: due to the specifics of maritime operations it may be unavailable over long distances, which requires autonomous systems to be resilient enough to operate in a self-governing mode for the required period of time.
  • Weather is one of the significant challenges in maritime operations, since stormy regions cannot always be avoided by rerouting around them.
  • Hostile adversaries have been a recurring feature of maritime operations, so ensuring proper behaviour of autonomous systems under hostile actions creates its own challenges.

Besides the regular safety issues related to electro-mechanical and control software safety and reliability, autonomous systems bring a new variable into the total safety equation: artificial intelligence, usually in the form of machine learning algorithms. Unfortunately, the classical safety processes that rely on risk quantification (Quantitative Risk Analysis, QRA) often have significant limitations in the context of autonomous systems because of these AI/ML applications [3]. The main drawback of the classical approach is the assumption that the potential risks (tackled by safety procedures and processes) can be assessed prior to the actual operation. Still, the central element of risk assessment is the case that challenges the safety of the system or of other involved objects, whether other systems or people. Since an autonomous system to a large extent relies on constant evolution through heavy use of machine learning, the safety procedures have to be revised accordingly, i.e. continuously. According to [4], safety cases remain the central element and have to be constantly updated with respect to the modelled world state and the sensed state. The general framework for safety assurance thereby encompasses the following main steps:

Figure 1: A general model for Safety assurance
  • The “real world” is composed of the autonomous system and its environment, including infrastructure and people;
  • The world model is the simulated world together with the safety analysis results obtained within the simulation;
  • The world data is composed of sensed data and the results of analysing those data with ML algorithms;
  • The safety case, in general, reflects the world-model cases and is updated and tailored to the actual observations, thereby reducing the gap between model and reality (a minimal sketch of this update loop follows).
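
A minimal sketch of such an update loop (all class and attribute names are illustrative assumptions, not taken from [4]): simulation-derived world-model results seed the safety case, and sensed world data continuously refine it.

class SafetyCase:
    """Illustrative container mapping hazards to current risk estimates."""
    def __init__(self, model_hazards):
        # Seed the case from the world model (simulation-derived hazards).
        self.hazards = dict(model_hazards)  # hazard name -> estimated risk

    def update_from_world_data(self, sensed_risks, weight=0.3):
        """Tailor model-based estimates to actual observations."""
        for hazard, observed in sensed_risks.items():
            modelled = self.hazards.get(hazard, observed)
            # Move the estimate towards what is actually sensed,
            # reducing the gap between model and reality.
            self.hazards[hazard] = (1 - weight) * modelled + weight * observed

# World-model (simulation) output and one cycle of sensed world data.
case = SafetyCase({"pedestrian_near_miss": 0.02, "sensor_dropout": 0.01})
case.update_from_world_data({"pedestrian_near_miss": 0.05})
print(case.hazards)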