===== Autonomy levels =====

Why should one care about a particular autonomy level scale? There are several good reasons:

  * Depending on the autonomy level, the system owner might expect particular performance and functionality, as with technology readiness levels (TRLs) or other classification scales;
  * Different regulations might apply to systems of different autonomy levels;
  * Sometimes it is necessary to forecast the potential performance of an autonomous system for mission planning or design purposes.

Besides plain autonomy level definitions, several models have been proposed for assessing the level of autonomy and autonomous performance of UMS (Unmanned Systems); these models are briefly discussed in this section. Among the earliest attempts to quantify autonomy is the work on the ALFUS autonomy model ((http://www.academia.edu/download/41420818/A_framework_for_autonomy_levels_for_unma20160122-29920-mjbf2h.pdf)). ALFUS is not a specific test or metric, but rather a model of how several different test metrics could be combined to generate an autonomy level. As depicted below, the ALFUS model uses three dimensions – environmental complexity, mission complexity, and human independence – to assess the autonomy of a given UMS ((http://www.academia.edu/download/41420818/A_framework_for_autonomy_levels_for_unma20160122-29920-mjbf2h.pdf)). The ALFUS framework provides the capability of estimating the level of autonomy of a single robot or a team of robots. However, the methodology still has some drawbacks that prevent its direct implementation.
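The idea of combining the three ALFUS dimensions into a single score can be sketched as follows. This is a minimal illustration only: the 0–10 scales, the field names, and the equal weighting are assumptions made for the example, not part of the ALFUS specification.

```python
from dataclasses import dataclass

@dataclass
class AlfusAssessment:
    """Hypothetical container for the three ALFUS axes (scales assumed 0-10)."""
    mission_complexity: float        # 0 (simple task) .. 10 (highly complex)
    environmental_complexity: float  # 0 (structured)  .. 10 (unstructured)
    human_independence: float        # 0 (teleoperated) .. 10 (fully autonomous)

    def autonomy_score(self, weights=(1/3, 1/3, 1/3)) -> float:
        """Combine the three axes into one score with illustrative weights."""
        axes = (self.mission_complexity,
                self.environmental_complexity,
                self.human_independence)
        return sum(w * a for w, a in zip(weights, axes))

# Example: a ground vehicle rated on all three axes.
ugv = AlfusAssessment(mission_complexity=6.0,
                      environmental_complexity=4.0,
                      human_independence=8.0)
print(round(ugv.autonomy_score(), 2))  # 6.0
```

Note that this simple weighted average is exactly where the drawbacks listed below bite: the weights and scoring scales are subjective choices, which is one reason ALFUS resists direct standardized implementation.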
The ALFUS methodology does not provide the tools to ((https://ieeexplore.ieee.org/abstract/document/6942823/?casa_token=AOUV31jSjoMAAAAA:qxSTlUXcPOOb71b6gurGnJJBTUA4iC3ICA9OXSKcFXe-Oj0ti5uD9-d0QIMwE3aFG8rsEB0iC9livQ)):

  * decompose the tasks in a commonly agreed-upon, standard way;
  * test all possible missions, tasks, and sub-tasks;
  * assess the interdependency between the metrics, as some of the sub-tasks can apply to more than one metric;
  * standardize the metrics into common scoring scales; without this, subjective evaluation and criteria influence the results across different robots, users, or competing companies;
  * integrate the metrics into the final autonomy level.

Some of the ALFUS drawbacks are tackled by another, non-contextual assessment formally called the Non-Contextual Autonomy Potential (NCAP) ((http://gvsets.ndia-mich.org/documents/RS/2011/A%20Non-Contextual%20Model%20for%20Evaluating%20the%20Autonomy%20Level%20of%20Intelligent%20Unmanned%20Ground%20Vehicles.pdf)). The NCAP provides a predictive measure of a UMS's ability to perform autonomously rather than a retrospective assessment of its autonomous performance, relying on tests performed before the actual application of the system being assessed. The NCAP treats autonomy level and autonomous performance separately: a UMS that fails completely at its mission but does so autonomously still operates at the same autonomy level as another UMS that succeeds at the same mission. The model visualization is provided below. As stated in ((https://ieeexplore.ieee.org/abstract/document/6942823/?casa_token=AOUV31jSjoMAAAAA:qxSTlUXcPOOb71b6gurGnJJBTUA4iC3ICA9OXSKcFXe-Oj0ti5uD9-d0QIMwE3aFG8rsEB0iC9livQ)), the major drawback of these models is that they do not specifically assess the mission-specific fitness of a UMS.
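The NCAP's key property – that autonomy level and mission outcome are assessed independently – can be illustrated with a small sketch. The field names and scales here are illustrative assumptions, not taken from the NCAP paper.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class NcapAssessment:
    """Hypothetical NCAP-style record: level and outcome are separate fields."""
    autonomy_level: int     # predictive level, from pre-deployment testing
    mission_success: float  # 0.0 (complete failure) .. 1.0 (full success)

# Two vehicles operating at the same autonomy level: one fails its
# mission entirely, the other succeeds, yet their autonomy level is
# identical -- failure does not lower the assessed level.
failed = NcapAssessment(autonomy_level=4, mission_success=0.0)
succeeded = NcapAssessment(autonomy_level=4, mission_success=1.0)
assert failed.autonomy_level == succeeded.autonomy_level
```

This separation is what makes NCAP predictive rather than retrospective, and it is also why NCAP alone cannot answer which of several assets is best suited to a specific mission.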
A user may have several UMS assets available for a given mission or task, and the current models do not provide a simple answer as to which asset is "best". Furthermore, none of the current models quantitatively addresses the impact on mission-specific performance of changing a given UMS's level of autonomy. With this need in mind, a metric for measuring autonomous performance was designed to predict the maximum possible mission performance of a UMS for a given mission and autonomy level; it is named the Mission Performance Potential (MPP). The major differences of the MPP model in comparison to the models mentioned above are defined by the following assumptions:

  * Performance does not necessarily increase gradually as the autonomy level increases; in some particular tasks the performance can actually drop;
  * The performance of the same UMS can vary from mission to mission, meaning the context of the system's operation cannot be ignored during the assessment.

The Society of Automotive Engineers (SAE, https://www.sae.org/) has defined and explained the autonomy levels of autonomous cars. The SAE level definitions are focused on product features, providing both a better understanding of the actual functionality of the automotive product and a foundation for legal regulations at each autonomy level. In the context of Unmanned Aerial Vehicles, the autonomy levels are addressed by a slightly different classification while having the same number of autonomy levels. According to Drone Industry Insights (2019, https://dronelife.com/2019/03/11/droneii-tech-talk-unraveling-5-levels-of-drone-autonomy/), there are 6 levels of drone operation autonomy: