The Safety of Autonomous Systems Working Group (SASWG) has identified a set of the most significant safety challenges, considering the following aspects [1]:
Based on those considerations, the SASWG has defined a domain-specific safety topic to be discussed, considered, or addressed by national/international regulations [2].
Besides the regular safety issues related to the safety and reliability of electro-mechanical systems and control software, autonomous systems introduce a new variable into the overall safety equation: artificial intelligence, usually in the form of machine learning algorithms. Unfortunately, the classical safety processes that rely on risk quantification (Quantitative Risk Analysis, QRA) often face significant limitations in the context of autonomous systems precisely because of these AI/ML applications [3]. The main drawback of the classical approach is the assumption that potential risks (tackled by safety procedures/processes) can be assessed prior to the actual action. Still, the central element of risk assessment is the case that challenges the safety of the system or of other involved objects, be they other systems or people. Since an autonomous system relies to a large extent on constant evolution through heavy use of machine learning, the safety procedures have to be revised accordingly, i.e., continuously. According to [4], safety cases remain the central elements and have to be updated constantly with respect to the modelled state of the world and the sensed state. Thus, the general framework for safety assurance encompasses the following main steps:
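To make the idea of continuously updated safety cases concrete, the following minimal Python sketch illustrates one possible realisation of such an update loop: whenever the sensed state of the world diverges from the state the safety case was assessed against, the risk assessment is re-run and the case is rebuilt. All names, the divergence measure, and the reassessment stub are illustrative assumptions, not part of the framework in [4].

```python
from __future__ import annotations

from dataclasses import dataclass, field


@dataclass
class WorldState:
    """Simplified snapshot of the world; a real system carries rich sensor data."""
    features: dict[str, float] = field(default_factory=dict)


def divergence(modelled: WorldState, sensed: WorldState) -> float:
    """Crude distance between the modelled and the sensed state (an assumption;
    a deployed system would use a measure tailored to its sensed quantities)."""
    keys = set(modelled.features) | set(sensed.features)
    return sum(
        abs(modelled.features.get(k, 0.0) - sensed.features.get(k, 0.0))
        for k in keys
    )


@dataclass
class SafetyCase:
    """The current safety argument and the world state it was assessed against."""
    assessed_state: WorldState
    claims: list[str]

    def still_valid(self, sensed: WorldState, tolerance: float) -> bool:
        # The case only remains valid while reality stays close to the
        # state it was originally argued for.
        return divergence(self.assessed_state, sensed) <= tolerance


def reassess_claims(state: WorldState) -> list[str]:
    # Stub: a real system would re-run hazard analysis / QRA here.
    return [f"claims re-derived for state with {len(state.features)} observed feature(s)"]


def update_safety_case(case: SafetyCase, sensed: WorldState,
                       tolerance: float = 0.5) -> SafetyCase:
    """One iteration of the continuous assurance loop: keep the case while the
    modelled and sensed states agree, rebuild it when they diverge."""
    if case.still_valid(sensed, tolerance):
        return case
    return SafetyCase(assessed_state=sensed, claims=reassess_claims(sensed))


# Example: a sensed obstacle density far above the modelled one invalidates
# the current case and triggers a re-assessment.
case = SafetyCase(WorldState({"obstacle_density": 0.1}), ["initial QRA claims"])
case = update_safety_case(case, WorldState({"obstacle_density": 0.9}))
print(case.claims)
```

The sketch only shows the shape of the monitor-and-update loop; in practice, the reassessment step would invoke the actual risk-assessment process rather than a stub.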