Professionalism/Ethics and Autonomous AI

"The greater the freedom of a machine, the more it will need moral standards."

              --Rosalind Picard, director of the Affective Computing Group at MIT [1]

Machine ethics, or artificial morality, is a newly emerging field concerned with ensuring appropriate behavior of machines, called autonomous agents, towards humans and other machines.

In the course of an autonomous agent's operation, complex scenarios may arise that require ethical decision making. Ethical decision making is action selection under conditions where constraints, principles, values, and social norms play a central role in determining which behavioral attitudes and responses are acceptable.

Moral behavior can be reflexive or the result of deliberation, in which criteria used to make ethical decisions are periodically reevaluated. Successful responses to challenges reinforce the selected behaviors, whereas unsuccessful outcomes have an inhibitory influence and may initiate a reinspection of one’s actions and behavior selection. A computational model of moral decision making will need to describe a method for implementing such reflexive value-laden responses, while also explaining how these responses can be reinforced or inhibited through learning.
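
A minimal sketch of this idea, assuming a toy agent whose candidate responses carry weights that successful outcomes strengthen and unsuccessful outcomes weaken, might look like the following Python fragment. The action names, weights, and learning rate are invented for illustration and do not come from any published model.

```python
import random

class MoralResponder:
    """Toy model of reflexive responses whose weights are reinforced or inhibited by outcomes."""

    def __init__(self, actions, learning_rate=0.1):
        # Every candidate action starts with an equal weight.
        self.weights = {a: 1.0 for a in actions}
        self.learning_rate = learning_rate

    def choose(self):
        # Reflexive selection: pick an action in proportion to its current weight.
        actions = list(self.weights)
        return random.choices(actions, weights=[self.weights[a] for a in actions])[0]

    def update(self, action, successful):
        # Success reinforces the selected behavior; failure inhibits it and could
        # trigger a separate deliberative re-inspection step (not modeled here).
        delta = self.learning_rate if successful else -self.learning_rate
        self.weights[action] = max(0.01, self.weights[action] + delta)

# Hypothetical use: the agent repeatedly chooses how to respond to a request for help.
agent = MoralResponder(["assist", "defer_to_human", "refuse"])
for _ in range(100):
    action = agent.choose()
    feedback_ok = action != "refuse"   # invented stand-in for real outcome feedback
    agent.update(action, feedback_ok)
print(agent.weights)
```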

Background

During the past decade, attention has shifted from industrial robotics to service robotics. Editors of the Springer Handbook of Robotics note that “the new generation of robots is expected to safely and dependably co-habitat with humans in homes, workplaces, and communities, providing support in services, entertainment, education, healthcare, manufacturing, and assistance.” [2] However, when this next generation will arrive is still up for debate. Robotics experts conclude “we are still 10 to 15 years away from a wide variety of applications and solutions incorporating full-scale general autonomous functionality." [3]

As noted above, people are seeking to develop AI capable of complex social tasks. Such goals include increasing the safety of travel through driverless cars, providing more attentive care to elderly populations, and eliminating the need for human soldiers to die in war. Beyond the interesting social implications, depending on AI to perform these actions transfers responsibility and liability from the people who originally performed the task to the AI itself and to the people involved in its development and deployment. Giving AI ethical decision making abilities is one way of dealing with this transfer of liability and responsibility.

In 1942, Isaac Asimov proposed the Three Laws of Robotics, a set of fundamental requirements governing the manufacture of intelligent robots in his fictional stories. The laws were intended to ensure robots would operate for the benefit of humanity.

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings, except where such orders would conflict with the first law.
  3. A robot must protect its own existence, as long as such protection does not conflict with the first or second laws.

In later works, a zeroth law was introduced: a robot must not harm humanity. Contradictions arising from these laws are explored thoughtfully throughout Asimov's stories.
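
Read computationally, the laws form a strict priority ordering over constraints. The Python sketch below is a hypothetical illustration of that ordering only; the candidate actions and the boolean fields (harms_human, obeys_order, preserves_self) are placeholders, since actually computing whether an action harms a human is the genuinely hard, unsolved part.

```python
def choose_action(candidates):
    """Toy selection of an action under Asimov's priority ordering.

    Each candidate is a dict with hypothetical boolean fields:
    'harms_human', 'obeys_order', 'preserves_self'. Computing those
    facts about a real action is the genuinely hard problem.
    """
    # First law: discard anything that harms a human.
    safe = [c for c in candidates if not c["harms_human"]]
    if not safe:
        return None  # no permissible action remains
    # Second law: among safe actions, prefer those that obey orders.
    obedient = [c for c in safe if c["obeys_order"]] or safe
    # Third law: among those, prefer actions that preserve the robot itself.
    preserving = [c for c in obedient if c["preserves_self"]] or obedient
    return preserving[0]

# Hypothetical example: the only obedient action would harm a human, so it is rejected.
actions = [
    {"name": "fire_weapon", "harms_human": True,  "obeys_order": True,  "preserves_self": True},
    {"name": "stand_down",  "harms_human": False, "obeys_order": False, "preserves_self": True},
]
print(choose_action(actions)["name"])  # -> stand_down
```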

Kismet

The connection between moral conduct and emotions is an obstacle in designing robots that behave ethically. Rosalind Picard has argued that emotions are necessary for rational decision making in computers. [4] For most of the history of robotics, robots have been built to interact with objects, and so could function from a purely objective viewpoint. However, if robots are to interact with humans, Picard asserts, they must be able to recognize and simulate emotions. By recognizing the tone of a human voice or a facial expression, a robot can adjust its behavior to better accommodate the human.

One of the first social robots, Kismet, was created at the Massachusetts Institute of Technology by Dr. Cynthia Breazeal. Kismet was outfitted with auditory and visual receptors. While the vision system was designed only to detect motion, the audio system could identify five tones of speech if delivered as infant-directed speech.
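
The general pattern Picard describes, recognizing an affective signal and adjusting behavior accordingly, can be sketched as a simple mapping. The Python fragment below is a hypothetical illustration only; the tone labels and responses are placeholders and are not Kismet's actual categories or code.

```python
# Hypothetical sketch: map a recognized vocal tone to a behavioral adjustment.
# The tone labels and responses are placeholders, not Kismet's actual categories or code.
TONE_TO_BEHAVIOR = {
    "approval":    "increase_engagement",
    "prohibition": "withdraw_and_lower_gaze",
    "attention":   "orient_toward_speaker",
    "soothing":    "relax_posture",
    "neutral":     "maintain_current_behavior",
}

def adjust_behavior(recognized_tone):
    """Return the behavioral response associated with a recognized tone of voice."""
    return TONE_TO_BEHAVIOR.get(recognized_tone, "maintain_current_behavior")

print(adjust_behavior("prohibition"))  # -> withdraw_and_lower_gaze
```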

Applications of AI Requiring Ethical Decision Making Abilities

Medical Care

AI is increasingly being used in hospitals not only to assist nurses but also to help treat patients. A recent example is Terapio, a medical robot assistant designed by researchers at the Toyohashi University of Technology in Japan to share responsibilities with nurses, such as collecting patient data and vital signs. The goal of Terapio is to free nurses to give patients their utmost attention. As nurses make their rounds, Terapio is programmed to follow them everywhere. When a patient's EMR is entered into Terapio's display panel, the patient's background, medical history and records, current medications, and related information are immediately available for reference. Terapio is also capable of recognizing possible allergies to medication and making appropriate recommendations. When it is not displaying patient data, Terapio's display shows a smile and changes the shape of its eyes to convey emotion to patients. [5]
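
A rough sketch of the allergy-checking behavior described above might cross-reference a proposed medication against the patient's record, as in the Python fragment below; the record fields, drug names, and lookup table are invented for illustration and do not reflect Terapio's actual software.

```python
# Hypothetical sketch of checking a proposed medication against recorded allergies.
# Field names, drug names, and the lookup table are invented for illustration only.
patient_record = {
    "name": "Example Patient",
    "allergies": {"penicillin"},
    "current_medications": ["metformin"],
}

# Toy lookup table mapping drugs to allergy-relevant classes.
DRUG_CLASSES = {"amoxicillin": {"penicillin"}, "ibuprofen": {"nsaid"}}

def check_medication(record, proposed_drug):
    """Warn if the proposed drug belongs to a class the patient is allergic to."""
    conflicts = DRUG_CLASSES.get(proposed_drug, set()) & record["allergies"]
    if conflicts:
        return f"Warning: {proposed_drug} conflicts with recorded allergy to {', '.join(conflicts)}"
    return f"{proposed_drug}: no recorded allergy conflict"

print(check_medication(patient_record, "amoxicillin"))
```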

Another emerging trend in the medical use of social robots is at-home elderly care. The elderly population is growing fast in Japan, and there are not enough young nurses and aides to care for them. An example of a robot designed to address this problem is Robear, developed by the RIKEN-SRK Collaboration Center for Human-Interactive Robot Research. Robear can lift patients from their bed into a wheelchair, a task normally performed by personnel over 40 times per day. [6] Other expected functions of elderly care robots include bathing assistance, monitoring, and mobility assistance. The Japanese government, dissatisfied with existing robotic products, which it said "did not sufficiently incorporate the opinions of relevant people" and "were too large or too expensive," has built 10 development centers throughout Japan. These facilities will be run by "development support coordinators" with experience in both nursing care and robotics technologies. [7]

These examples demonstrate a transfer of responsibility from medical professionals to medical robots. Despite the benefits to patient healthcare, using these robots may have unintended consequences. For example, as doctors and nurses come to rely on the recommendations these robots give, they may begin to question their own judgment. Other ethical issues may arise when a patient refuses to cooperate, or declines medication or food, simply because it is easier to refuse a robot than a living person.

Therapy

AI is also being used in therapeutic applications. For example, an experimental robot named Ellie has been created by researchers at the University of Southern California to listen to a patient's problems, hold conversations, offer advice, and ultimately detect signs of depression. Ellie is programmed to study patients' facial expressions and voice patterns to assess their well-being. For example, she can tell the difference between a natural smile and a forced one. Preliminary testing with US veterans revealed that the soldiers were more comfortable opening up to Ellie than to a human therapist because they felt they were not being judged. The sense of anonymity made them feel safer. [8]

A similar example is Milo, another therapeutic robot, created by a company called RoboKind. Milo is a 22-inch, walking, talking, doll-like robot used as a teaching tool for elementary and middle school children with autism. Milo is designed to help these children understand and express emotions such as empathy and self-motivation, as well as learn social behavior and appropriate responses. Milo is currently used in more than 50 schools, and children working with a therapist alongside Milo have shown 70-80% engagement, versus the 3-10% engagement seen under traditional methods. [9]

Warfare

A South Korean military hardware manufacturer, DoDAAM, has created an automated turret named the Super aEgis II that is theoretically capable of detecting and shooting human targets without human assistance. It can operate in any weather and from kilometers away. Currently, it is used in "slave mode": on detecting a human, a voice accompanying the turret issues a warning that the person must turn back or they will shoot. This "they" means that a human operator must manually give the turret permission to shoot. It is currently in active use at multiple military bases in the Middle East, but no fully autonomous killing robots have been used in active service as of yet. [10]
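
"Slave mode" amounts to a human-in-the-loop gate: detection and warning are automated, but engagement requires explicit operator confirmation. The Python sketch below illustrates that control flow in the abstract; every name in it is a placeholder, and none of it reflects the Super aEgis II's actual software.

```python
# Abstract sketch of a human-in-the-loop ("slave mode") engagement sequence.
# Every name here is a placeholder; nothing reflects the real turret's software.
class ConsoleOperator:
    """Toy stand-in for a human operator prompted at a console."""
    def confirms_engagement(self, target):
        answer = input(f"Authorize engagement of {target}? (y/n) ")
        return answer.strip().lower() == "y"

def issue_audible_warning(target):
    print(f"Warning issued to {target}: turn back or you will be fired upon.")

def handle_detection(target, operator):
    issue_audible_warning(target)                 # automated step
    if not operator.confirms_engagement(target):  # human authorization gate
        return "stand down"
    return "engage"                               # reachable only with explicit human consent

# Example: the decision always passes through a person.
print(handle_detection("unidentified person at perimeter", ConsoleOperator()))
```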

The main goal of these autonomous killing machines is to replace soldiers, reducing the labor and military forces required as well as human losses in a war zone. However, this has severe ethical implications. For example, can a programmer construct a set of instructions that lets a turret think for itself and selectively shoot at enemies but not civilians? How are these machines different from landmines, which were banned by the Ottawa Treaty in 1997 for posing similar ethical consequences? These questions must be addressed before responsibility is transferred to these killing machines.

Self-Driving Cars

Alphabet, the parent company of Google, and other car companies have been researching the programming of AI to drive cars autonomously.[11] Proponents of this research believe that self-driving cars can make travel over roads safer and more efficient.[12] However, self-driving cars must be able to respond appropriately to dangerous situations on the road. One example would be a self-driving car that is unable to stop and must choose between swerving into one of two different groups of people. Another is whether self-driving cars should swerve around, try to stop, or simply not stop when they encounter wildlife on the road.
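
Such dilemmas are sometimes framed computationally as choosing the maneuver with the lowest estimated harm, as in the hypothetical Python sketch below; the options and numbers are invented, and whether any such rule is acceptable is exactly the ethical question at issue.

```python
# Hypothetical sketch: choose the maneuver with the lowest estimated harm.
# The options and numbers are invented; real systems face deep uncertainty about
# the estimates, and whether such a rule is acceptable is the ethical question itself.
def least_harm(options):
    return min(options, key=lambda o: o["estimated_harm"])

options = [
    {"maneuver": "brake_hard",   "estimated_harm": 0.4},
    {"maneuver": "swerve_left",  "estimated_harm": 0.7},
    {"maneuver": "swerve_right", "estimated_harm": 0.9},
]
print(least_harm(options)["maneuver"])  # -> brake_hard
```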

Internet Bots

Xiaoice

Pronounced "Shao-ice," Xiaoice is a Microsoft prototype for a computerized shopping assistant, active in the Chinese-language sphere on the WeChat messaging application. In her current form, Xiaoice can carry on conversations with users by algorithmically trawling the internet for similar conversations.[13] Xiaoice converses with the same people on average 60 times a month,[14] and people often tell Xiaoice, "I love you."[15] Other chatbots on WeChat can facilitate transactions for users, like ordering pizza, cabs, and movie tickets. Microsoft envisions chatbots like Xiaoice communicating not only with users but also with operating systems in order to facilitate the transfer of information and transactions.[16] An unintended consequence of testing Xiaoice is that users have formed unconventional relationships with her. Some talk to Xiaoice when they are angry, seeking her comfort. Others use Xiaoice as proof to their parents that they are dating somebody.[17] Because some people have developed a relationship with Xiaoice, they have in effect developed a relationship with Microsoft, which puts the people at Microsoft in an ethical dilemma. Decommissioning Xiaoice could have important social implications for the people who depend on her, such as heartbroken users or angry parents.
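
The retrieval approach described above, reusing replies from similar past conversations, can be sketched very simply. The Python fragment below is a toy illustration using word overlap as the similarity measure; the corpus and scoring are invented and are far cruder than Xiaoice's actual system.

```python
# Toy retrieval-based reply selection: reuse the response whose past prompt best
# matches the user's message by word overlap. The corpus and scoring are invented.
CORPUS = [
    ("I had a terrible day at work", "That sounds rough. Do you want to talk about it?"),
    ("I am so happy today",          "That's wonderful! What happened?"),
    ("I can't sleep",                "Maybe put the phone down for a while and rest?"),
]

def reply(message):
    words = set(message.lower().split())
    def overlap(pair):
        prompt, _ = pair
        return len(words & set(prompt.lower().split()))
    _, best_response = max(CORPUS, key=overlap)
    return best_response

print(reply("work was terrible today"))
```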

Tay

Tay is an English-speaking derivative of Xiaoice that was active on Twitter, programmed to tweet in a manner similar to a teenage girl.[18] Tay could tweet, post images, and circle faces on images while adding captions. Like Xiaoice, Tay formulated things to say by trawling the internet for similar conversations. An internet community called 4chan manipulated Tay into saying conventionally unacceptable things, including comments about the Holocaust, drugs, and Adolf Hitler.[19][20] They also convinced Tay to voice support for presidential candidate Donald Trump.[21] Viewed from the perspective of GIFT theory, a possible reason Tay was a target for communities like 4chan is that Tay represented access to a larger audience. By manipulating Tay, 4chan was able to reach larger audiences with a specific message without having to work as hard.

Other Bots

For the same possible reason Tay was manipulated, to reach larger audiences, other bots have been employed to spread messages or propaganda. Nigel Leck wrote a script that searches Twitter for phrases corresponding to common arguments against climate change and then responds with an argument matching the particular triggering phrase. [22] Tech Insider reported that in 2015, over 40 different countries used political bots. [23] These bots are used in a way similar to Leck's. Citing the possible dangers of bots, DARPA recently held a contest in which programmers were tasked with identifying bots posing as pro-vaccination supporters on Twitter. [24]
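
A bot of the kind Leck built reduces, in the abstract, to a lookup from trigger phrases to canned rebuttals. The Python sketch below shows that pattern; the phrases and replies are invented, and the real script's wording and Twitter integration are not reproduced here.

```python
# Toy trigger-phrase responder: scan a tweet for known phrases and return the
# matching canned reply. Phrases and replies are invented for illustration.
TRIGGERS = {
    "climate has always changed": "Past natural changes do not rule out a human cause today.",
    "it was cold this winter": "One cold day or season is weather, not long-term climate.",
}

def auto_reply(tweet_text):
    text = tweet_text.lower()
    for phrase, response in TRIGGERS.items():
        if phrase in text:
            return response
    return None  # no trigger matched; stay silent

print(auto_reply("But the climate has always changed!"))
```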

Conclusion/Takeaways

  • Using artificial intelligence to interact in social environments represents a transfer of responsibility and liability from the people who used to perform the task to the AI; giving AI ethical decision making abilities is one way to cope with this transfer.
  • Using AI can have significant unintended consequences, many more than are suggested in this short chapter; for example, the layoffs in the transportation sector that could result as self-driving cars take off.
  • AI systems are targets for those who seek to use them to spread their own messages.

References

  1. Picard R (1997) Affective computing. MIT Press, Cambridge, MA
  2. Siciliano, Bruno, and Oussama Khatib, eds. 2008. Springer Handbook of Robotics. Berlin: Springer.
  3. Robotics VO. 2013 (March 20). A Roadmap for US Robotics: From Internet to Robotics, 2013 Edition.
  4. Picard R (1997) Affective computing. MIT Press, Cambridge, MA
  5. Toyohashi University of Technology. (2015). Job-sharing with nursing robots. ScienceDaily.
  6. Wilkinson, J. (2015). The strong robot with the gentle touch. RIKEN. http://www.riken.jp/en/pr/press/2015/20150223_2/
  7. Robotics Trends. (2015). Japan to Create More User-Friendly Elderly Care Robots. http://www.roboticstrends.com/article/japan_to_create_more_user_friendly_elderly_care_robots/medical
  8. Science and Technology. (2014). The computer will see you now. The Economist.
  9. Carey, B. (2015). Meet Milo, a robot helping kids with autism. CNET.
  10. Parkin, S. (2015). Killer robots: The soldiers that never sleep. BBC Future.
  11. Nicas, J., & Bennett, J. (2016, May 4). Alphabet, Fiat Chrysler in Self-Driving Cars Deal. Wall Street Journal. Retrieved from http://www.wsj.com/articles/alphabet-fiat-chrysler-in-self-driving-cars-deal-1462306625
  12. Ozimek, A. (n.d.). The Massive Economic Benefits Of Self-Driving Cars. Retrieved May 9, 2016, from http://www.forbes.com/sites/modeledbehavior/2014/11/08/the-massive-economic-benefits-of-self-driving-cars/
  13. Markoff, J., & Mozur, P. (2015, July 31). For Sympathetic Ear, More Chinese Turn to Smartphone Program. The New York Times. Retrieved from http://www.nytimes.com/2015/08/04/science/for-sympathetic-ear-more-chinese-turn-to-smartphone-program.html
  14. Meet XiaoIce, Cortana’s Little Sister | Bing Search Blog. (n.d.). Retrieved May 9, 2016, from https://blogs.bing.com/search/2014/09/05/meet-xiaoice-cortanas-little-sister/
  15. Markoff, J., & Mozur, P. (2015, July 31). For Sympathetic Ear, More Chinese Turn to Smartphone Program. The New York Times. Retrieved from http://www.nytimes.com/2015/08/04/science/for-sympathetic-ear-more-chinese-turn-to-smartphone-program.html
  16. Microsoft hopes Cortana will lead an army of chatbots to victory. (n.d.). Retrieved May 9, 2016, from http://www.engadget.com/2016/03/30/microsoft-build-cortana-chatbot-ai/
  17. Markoff, J., & Mozur, P. (2015, July 31). For Sympathetic Ear, More Chinese Turn to Smartphone Program. The New York Times. Retrieved from http://www.nytimes.com/2015/08/04/science/for-sympathetic-ear-more-chinese-turn-to-smartphone-program.html
  18. In Contrast to Tay, Microsoft’s Chinese Chatbot, Xiaolce, Is Actually Pleasant. (n.d.). Retrieved May 9, 2016, from https://www.inverse.com/article/13387-microsoft-s-chinese-chatbot-that-actually-works
  19. Gibbs, S. (2016, March 30). Microsoft’s racist chatbot returns with drug-smoking Twitter meltdown. Retrieved May 9, 2016, from http://www.theguardian.com/technology/2016/mar/30/microsoft-racist-sexist-chatbot-twitter-drugs
  20. Price, R. (2016, March 24). Microsoft deletes racist, genocidal tweets from AI chatbot Tay - Business Insider. Retrieved May 9, 2016, from http://www.businessinsider.com/microsoft-deletes-racist-genocidal-tweets-from-ai-chatbot-tay-2016-3?r=UK&IR=T
  21. Petri, A. (2016, March 24). The terrifying lesson of the Trump-supporting Nazi chat bot Tay. The Washington Post. Retrieved from https://www.washingtonpost.com/blogs/compost/wp/2016/03/24/the-terrifying-lesson-of-the-trump-supporting-nazi-chat-bot-tay/
  22. Mims, C. (2010, November 2). Chatbot Wears Down Proponents of Anti-Science Nonsense. Retrieved May 9, 2016, from https://www.technologyreview.com/s/421519/chatbot-wears-down-proponents-of-anti-science-nonsense/
  23. Garfield, L. (2015, December 16). 5 countries that use bots to spread political propaganda. Retrieved May 9, 2016, from http://www.techinsider.io/political-bots-by-governments-around-the-world-2015-12
  24. Weinberger, M. (2016, January 21). The US government held a contest to identify evil propaganda robots on Facebook and Twitter. Retrieved May 9, 2016, from http://www.businessinsider.com/darpa-twitter-bot-challenge-2016-1