As has been emphasized earlier, autonomous system developers are under high market pressure due to the high potential benefits within their particular application areas. Artificial intelligence (AI) and its advancements are one of the main driving forces behind the rising ethical challenges. Most of the concerns have arisen because of the latest advances in autonomous cars, drones, social robotics, and other technologies that have made some bold demonstrations and have started to enter consumer markets. The IEEE Global Initiative on the Ethics of Autonomous Systems, the United Nations, the International Committee of the Red Cross, the White House, and the Future of Life Institute are among the many responsible organizations now considering the real-world consequences of machine autonomy as we continue to stumble about trying to find a way forward. As has been emphasized, we develop technology faster than we:
understand the implications of mass adoption of the technology;
interpret those implications within current social and moral frameworks;
develop and implement legislation and policies, both global and national.
These concerns are especially pressing in light of the hyper-fast development of AI technologies and algorithms that are already deployed, often without their users being aware of it.
The main questions are the following:
In the context of autonomous cars: who lives and who dies? This is the most illustrative and probably the most discussed case: in the event of an unavoidable accident, which decision by the control system of the autonomous car is the right one? Should the driver be exposed to maximum risk in order to reduce the risk to pedestrians and other traffic participants, or should the car protect the driver no matter what? A related discussion concerns legal aspects: who is responsible for making that decision, and to what extent – the driver (car owner), the engineers, or somebody else? As a consequence, another question arises: would it be right to ignore or to obey certain traffic rules in order to save lives or to reduce the potential risks of an accident? According to the MIT Technology Review article "Should a self-driving car kill the baby or the grandma? Depends on where you're from", researchers at MIT studied the question in more detail through the "Moral Machine" experiment, which posed such situations to real people to answer a set of questions: should an autonomous vehicle prioritize people over pets, pedestrians over passengers, more lives over fewer, women over men, young over old, fit over sickly, higher social status over lower, law-abiders over law-benders? It turned out that the answers depend on various factors and differ strongly between countries. This raises yet another question: should the behavior be tailored to a particular market (a toy sketch of such market-dependent weighting follows this list of contexts)? Unfortunately, these questions are still waiting for their answers.
In the military context: is saving the lives of civilians a moral imperative? The underlying question, whether a machine should be granted the right to use lethal force against humans, has been explored by science fiction authors for decades. From one point of view, such systems are already on the battlefield in the form of smart weapons and self-guided missiles. From another point of view, do those systems really decide to use lethal force, or are the decisions still made by humans, i.e., soldiers? Unfortunately, the lives of non-combatants (people not directly participating in a military conflict) are currently not part of the decision-making equation, at least in weapon systems: the primary task is to hit the target rather than to save lives.
In the context of human intimacy: how close is too close? Here, intimacy refers to people becoming emotionally attached to a device, i.e., a robot; the AIBO robot (https://us.aibo.com/) is one example of this kind. The trend is rather clear: the more advanced the technology, the stronger the emotional attachment it will cause. So what could the consequences be, and what happens to human-human relationships in the broader view? Moreover, since most of these systems rely on some form of cloud data storage, a simple but important question is which methods of processing that data are allowed.
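To make the idea of market-tailored behavior concrete, the following is a minimal, purely hypothetical sketch, not taken from any real vehicle system or from the Moral Machine itself. It assumes a toy utility model in which invented region-specific weights determine which maneuver a control system would pick in an unavoidable-accident scenario; all names, categories, and weight values are illustrative assumptions.

```python
# Hypothetical illustration only: a toy "ethical policy" showing how
# region-specific priority weights (in the spirit of the Moral Machine
# survey results) could change an avoidance decision. All weights,
# regions, and maneuvers below are invented for this sketch.

from dataclasses import dataclass

@dataclass
class Outcome:
    """One possible maneuver and who it puts at risk."""
    label: str
    passengers_at_risk: int
    pedestrians_at_risk: int

# Invented weights: how strongly a hypothetical market prioritizes
# pedestrians relative to passengers (values are illustrative only).
REGION_WEIGHTS = {
    "region_A": {"pedestrian": 1.5, "passenger": 1.0},
    "region_B": {"pedestrian": 1.0, "passenger": 1.3},
}

def expected_harm(outcome: Outcome, weights: dict) -> float:
    """Weighted count of people put at risk by a maneuver."""
    return (outcome.pedestrians_at_risk * weights["pedestrian"]
            + outcome.passengers_at_risk * weights["passenger"])

def choose_maneuver(outcomes: list[Outcome], region: str) -> Outcome:
    """Pick the maneuver with the lowest weighted expected harm."""
    weights = REGION_WEIGHTS[region]
    return min(outcomes, key=lambda o: expected_harm(o, weights))

if __name__ == "__main__":
    options = [
        Outcome("swerve", passengers_at_risk=1, pedestrians_at_risk=0),
        Outcome("brake_straight", passengers_at_risk=0, pedestrians_at_risk=1),
    ]
    # The same dilemma resolves differently depending on the market's weights:
    # region_A picks "swerve", region_B picks "brake_straight".
    for region in REGION_WEIGHTS:
        print(region, "->", choose_maneuver(options, region).label)
```

The point of the sketch is precisely the ethical problem discussed above: once such weights exist, someone has to choose them, and the Moral Machine results suggest different societies would choose them differently.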
Beyond the questions defined above, further concerns are raised by the uncontrolled development of AI:
Will AI in general compete with humans, thus compromising overall social structures and behavioral frameworks?
As a consequence, will AI undermine societal stability? The main challenges here relate to technology-led inequality, as well as broader shifts in the global economy due to digitalization.
Will AI, through its superior performance in data acquisition and processing, harm privacy, personal liberty, and autonomy?
To address these challenges, organizations such as IEEE have started discussions and put considerable effort into defining standards for "ethical" AI solutions, which will likely change the overall landscape of autonomous technologies.