
An ethical discussion of artificial intelligence

From a very young age, we humans are taught to evaluate the possible consequences of our decisions; acting on that knowledge shapes our ethical sense. The development of artificial intelligence (AI) algorithms has enabled considerable progress in different areas of science in a relatively short time. This rapid adoption generates both enthusiasm for the achievements obtained and concern about possible malicious uses or poorly designed products that affect people’s lives. In this article we present a reflection on, and analysis of, the challenges these algorithms face, from the standpoint of ethical thinking.

Today, artificial intelligence has gained significant ground in technological and scientific development. The ability of these systems to perform, with a high degree of confidence, tasks that typically require human intelligence has led to advances in several branches of knowledge. These systems are used, for example, in the interpretation of medical images, where the early detection of cancer stands out; in driving autonomous cars; in interpreting language to the point of holding coherent conversations; and in computer security systems, among others. Given this scenario, it is natural that concerns arise, both well-founded and unfounded: What is the scope of this technology? How can it affect us, positively or negatively? What are its implications, and what ethical considerations are involved?

In this article we recognize that ethics has different definitions and comprises an entire branch of philosophy, so we adopt the one that, in our opinion, best fits the case study presented here. Understanding ethics, following the philosopher John Dewey (Stanford Encyclopedia of Philosophy, 2018), as the use of reflective intelligence to revise our judgments in light of the consequences of acting on them, we can reflect on the repercussions that the use of artificial intelligence brings with it: its limitations, its scope, and the challenges it poses, both for the technology itself and for the people who operate and develop it.

Can a machine ‘think’ ethically? 

Alan Turing characterized artificial intelligence as the ability of a machine to imitate the behavior of a person. Thus, the famous ‘Turing test’ evaluates a machine by whether it can fool a human being into believing that it, too, is a human being. As mentioned above, artificial intelligence algorithms can now perform many tasks with performance comparable to that of an expert in the field, without specific programming for each application, which has streamlined some processes and enabled significant progress in different branches of science. Despite this progress, according to Bostrom and Yudkowsky (2011), these algorithms raise concerns that should guide research on this type of technology: allowing transparent inspection of the process they carry out, being predictable, being robust against possible external manipulation, and making it possible to determine responsibility for an event. In this context, questions arise: Can a machine imitate the ethical behavior of a person? In a conflict involving a machine, who is responsible: the machine, the engineers who designed it, or the person who uses it? Is it possible to predict the decisions the machine will make, or to audit the process it performs?

To give an example, in 2016 Microsoft created an account on the social platform Twitter controlled by a chatbot called Tay. The idea was to conduct an experiment in conversational understanding, in which the bot would learn through the fun and friendly conversations it generated. The slogan for its launch was “the more you talk to Tay, the smarter it will get” (The Guardian, 2016). Through these dialogues, users managed to get the system to promote racist and anti-Semitic ideas and to hold incoherent conversations that both supported and attacked ideas such as feminism. The system was shut down after 16 hours of use; Microsoft apologized for the comments and deleted the tweets. In a similar case, in 2018 Facebook faced difficulties with its search suggestion system, which suggested inappropriate content when people typed in phrases like “video of …”. In this case the company stopped the system and blamed the platform’s users for the behavior, arguing that the system merely reflects what users normally search for or what becomes trending, thereby relieving itself of responsibility. Finally, it is worth noting that Microsoft relaunched its chatbot under the name Zo, which operated from December 2016 until April 2019; and although it was still possible to make the system say offensive things, it did not compare to its predecessor (Angulo, 2018).

In the above example, it should be noted that in order for the machine to respond and hold a fairly coherent conversation, it is not possible to program it in detail, because the number of possibilities is extremely large. By giving the machine the ability to learn from input data, without specific programming, the engineers who developed Tay had to compromise the predictability of the system. That is, they could not predict the results obtained, which produced effects completely contrary to what they originally intended. Nor was the system robust in the face of possible attacks, which opened the door to the aggressions described above. Additionally, the work performed by the machine has a social dimension, and the requirements that apply to humans in this type of task must also be taken into account for the machine. Finally, with further engineering effort it was possible to launch Zo, an improved version that probably required specific programming, human supervision, and filtering to prevent it from repeating the mistakes of its predecessor; in the words of Microsoft itself, “Zo provides a unique point of view, without leaving behind its manners and emotional expressions” (Microsoft Latinx Team, 2018).
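To see why this is so, the following minimal sketch (in Python, and purely illustrative: it is not Tay’s actual architecture) shows the failure mode of a system that learns directly from user input without filtering or supervision. Whatever users feed it becomes part of its future behavior, so its responses cannot be predicted at design time.

import random

class NaiveChatbot:
    """Hypothetical toy bot: it learns by storing every user message."""

    def __init__(self):
        self.learned_phrases = ["Hello! Nice to meet you."]

    def respond(self, user_message: str) -> str:
        # Learn from every input, with no filtering or human supervision.
        self.learned_phrases.append(user_message)
        # Reply with something previously "learned" -- unpredictable by design.
        return random.choice(self.learned_phrases)

bot = NaiveChatbot()
bot.respond("Repeat this offensive slogan.")  # the attack is now in its repertoire
print(bot.respond("What do you think?"))      # may echo the attacker's words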

The example is relatively harmless, in the sense that no one was seriously hurt. However, it is clear that in other situations the use of artificial intelligence systems can affect people’s lives in various ways. The development of this type of technology is on the rise; engineers, scientists, and everyone who works in this environment must pay attention to the direct and indirect consequences of the developments they propose.

Facing ethical dilemmas in artificial intelligence systems 

One might then think it necessary for the machine itself to make ethical decisions. In some cases, these can be guided by the legislative system and collectively established regulations: for example, not exceeding the permitted speed limit, not discriminating on the basis of sex, race, or nationality, or maintaining the principle of distinction in an armed conflict (clearly distinguishing between combatants and civilians), among others. The problem becomes more complex in situations where such legislation is not clear, and an individual decision or an analysis of the particular case is required.

In this context there are some theories that address this issue. Amitai and Oren Etzioni (Etzioni & Etzioni, 2017) analyze several possibilities. One of them consists of programming an ethical guidance system around some previously established precept: the three laws of robotics proposed by Asimov, Kantian theory, utilitarian theory, a religious belief, and so on; the artificial intelligence system would then act in accordance with the established precept. The problem with this approach is clear: who determines which precept should form such guidance, when any choice will inevitably produce decisions that many would find unethical? Another approach is the observation of human behavior in real situations; here it is important to note that in moments of stress human beings act driven more by reflex than by ethical reflection. Observation would therefore teach the machine not ethical behavior but common behavior. In the case of driving, we have established rules and traffic signs that must be obeyed; if a machine were to learn the ‘optimal’ way of driving by observing human behavior, it would probably violate many of them, and even if it were capable of avoiding accidents, we would agree that its way of driving was not appropriate.
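To make the first approach concrete, here is a minimal sketch assuming a hypothetical driving agent (the names learned_policy and SPEED_LIMIT are illustrative, not a real system): a hand-written precept vetoes whatever the learned behavior proposes, so the rule holds even when imitation of human drivers would break it.

SPEED_LIMIT = 50.0  # km/h, the precept fixed in advance by the designers

def learned_policy(observation: dict) -> float:
    # Stand-in for a model trained by observing human drivers;
    # it may propose speeds that break the rules.
    return observation["traffic_flow_speed"]

def ethically_guarded_action(observation: dict) -> float:
    proposed = learned_policy(observation)
    # The precept overrides the learned behavior, whatever it suggests.
    return min(proposed, SPEED_LIMIT)

print(ethically_guarded_action({"traffic_flow_speed": 72.0}))  # -> 50.0

The weakness discussed above is visible even here: someone had to decide that 50.0 was the right precept, and the rule applies blindly to every situation.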

As we have mentioned in previous articles, deep learning is the tool that has allowed systems to learn directly from data. Thanks to this advance, programmers do not have to design for and predict each of the countless possible situations a task involves; instead, the algorithms work out the problem by themselves. In this way performance is gained, but knowledge of the inner workings is lost. This property is what earned them the name of autonomous systems. Amitai and Oren Etzioni (Etzioni & Etzioni, 2017) reflect on this point and conclude that these machines cannot be completely autonomous, since the goal for which they are built, regardless of their inner workings, is dictated by whoever programs them. In that sense, they remain a tool of the human beings who design, manufacture, and use them. Finally, in their article they propose that AI systems can be of two kinds, which they call minds and partners. A partner is designed as a support and suggestion system; the final decisions are made by the people who use it. Minds, on the other hand, are programmed to make all the decisions themselves; in this case it is necessary to develop supervision systems that model the ethical behavior of individuals, and this remains an open topic of discussion that must be considered by the developers of the technology, legislators, and the end users of the system.
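The difference between the two kinds of systems can be summarized in a minimal sketch (hypothetical: the function names are our own illustrative assumptions, not code from Etzioni & Etzioni). A partner only produces a suggestion, and nothing happens until a person accepts it.

def partner_suggest(case: str) -> str:
    # Stand-in for any AI model that produces a recommendation.
    return f"Recommended action for '{case}': flag for priority review"

def decide(case: str, human_approves) -> str:
    suggestion = partner_suggest(case)
    # The final decision always rests with the human operator.
    return suggestion if human_approves(suggestion) else "escalated to human review"

# Example: the operator reviews the suggestion and rejects it.
print(decide("loan application", human_approves=lambda s: False))

A mind, by contrast, would execute its decision directly, which is precisely why it demands the external supervision systems mentioned above.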

Conclusions 

With advances in technology and the high degree of confidence with which results are predicted, people tend to delegate decision making to machines and, instead of seeing them as a support system for the subject-matter expert, use them with blind trust, demanding that they be perfect. Ethical behavior, not only of the machine but of any scientific and technological development, is undoubtedly a discussion of great importance. In the case of artificial intelligence, given the autonomy attributed to it, it becomes especially relevant to raise these discussions and to propose solutions that regulate its use through human supervision. It should be noted that researchers in the field are making enormous efforts to create systems that are interpretable, auditable, and robust in the face of possible attacks.

Author: Maria Ximena Bastidas Rodríguez. 

Translated by: Anasol Monguí

Bibliography 

Angulo, I. (2018). Facebook and YouTube should have learned from Microsoft’s racist chatbot. CNBC.  

Bostrom, N., & Yudkowsky, E. (2011). The ethics of artificial intelligence. In K. Frankish & W. M. Ramsey (Eds.), The Cambridge Handbook of Artificial Intelligence. Cambridge University Press.

Microsoft Latinx Team. (2018, February 15). Zo, un chatbot como ningún otro [Zo, a chatbot like no other]. Microsoft. https://blogs.microsoft.com/latinx/2018/02/15/zo-un-chatbot-como-ningun-otro/

Etzioni, A., & Etzioni, O. (2017). Incorporating Ethics into Artificial Intelligence. The Journal of Ethics.  

Stanford Encyclopedia of Philosophy. (2018). Dewey’s Moral Philosophy.

The Guardian. (2016). Tay, Microsoft’s AI chatbot, gets a crash course in racism from Twitter.