Information Technology and Ethics/Robotic Ethics
Introduction

Automation and robotic technology are becoming more mainstream every day. As the integration of these cyber-based technologies continues to evolve, current ethical practices are divided into three application-specific groups, each with its own unique set of challenges. As further integration takes place, ethical risks will continually need to be reassessed to stay current with the behaviors for which engineers are ultimately responsible.

Founding Ethics: The Three Laws of Robotics

In 1942, the science fiction author Isaac Asimov wrote a short story called "Runaround" in which he describes the Three Laws of Robotics. They are as follows:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Although they are the work of science fiction, these rules could not be more relevant to thinking about the ethics of robotics as the field rapidly evolves. In short, the Three Laws state that in no way, shape, or form may a robot harm a human being. As robotics evolves, it is important to look back at why robots were built in the first place: to make human life easier. Through this endeavor over the last century, humankind has been working toward creating artificial life on par with the complexity of a human being. The remaining portions of this article cover some milestones from the past century in that effort.

Demonstration of Intelligence: Deep Blue Supercomputer

In 1996, IBM revealed its supercomputer Deep Blue, which was used to challenge the reigning world chess champion.
The first set of matches, held in 1996, ended with the world champion winning the set; Deep Blue won one game and drew two. After modifications to Deep Blue, a rematch was held in 1997 in which Deep Blue came out the victor, winning two games and drawing three. Chess has been regarded as a deeply intellectual game throughout its history. That people could build a machine capable of taking on the world champion chess player and winning is astonishing, and it highlights two important points: 1) humans can program a machine to match human logical and technical intelligence while removing sources of human error such as emotion, and 2) humans can create devices that make decisions based on what is presented to them. In the case of Deep Blue, each time the human player made a move, Deep Blue would analyze the current board layout, evaluate the available moves, and calculate the odds that a given move would lead to victory. That is a very logical approach for a person to take when trying to win a chess game, and Deep Blue was able to mimic the same mental process.
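The evaluate-and-choose process described above can be sketched as a minimax game-tree search with alpha-beta pruning. This is a minimal illustrative example over a toy game tree, not Deep Blue's actual implementation, which ran chess-specific evaluation functions on custom hardware:

```python
# Illustrative minimax search with alpha-beta pruning -- a toy sketch of the
# kind of game-tree evaluation Deep Blue performed (its real engine used
# specialized chess hardware and hand-tuned heuristics).

def minimax(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    """Return the best achievable score for a game-tree node.

    Leaves are numeric position scores; internal nodes are lists of children.
    """
    if isinstance(node, (int, float)):   # leaf: static evaluation of a position
        return node
    if maximizing:
        best = float("-inf")
        for child in node:
            best = max(best, minimax(child, False, alpha, beta))
            alpha = max(alpha, best)
            if beta <= alpha:            # opponent would never allow this branch
                break
        return best
    else:
        best = float("inf")
        for child in node:
            best = min(best, minimax(child, True, alpha, beta))
            beta = min(beta, best)
            if beta <= alpha:
                break
        return best

# A tiny two-ply game tree: the maximizing player picks a move,
# then the minimizing opponent replies.
tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(tree, True))  # -> 3
```

The pruning step is what made searching millions of chess positions per second feasible: whole branches are skipped once it is clear the opponent would never let play reach them.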
Demonstration of Emotion: Kismet

In 1998, MIT developed a robot named Kismet. Kismet was created to study how learning occurs through vision, hearing, and speech. After observing certain interactions, Kismet could respond in kind: if a researcher smiled while making a certain noise, Kismet could reflect that emotion back through voice and facial expressions. As stated before, robots have demonstrated the capability to challenge human beings in logical intelligence. Developing emotional intelligence in a robot as well would bring us closer to the kind of robot Isaac Asimov warned us about. Looking back at the development of robotics, we can see that for the most part the research was done to further our understanding of humans and to see how far artificial intelligence could be pushed. At no point has robotics been primarily focused on bringing harm to humans; its main goal has always been to benefit them.
An ethical issue raised by this history is: if we have the technology to give robots human-level logical intelligence, do we also want them to mimic human emotions? Is it wrong to give them that sort of intelligence while keeping the Three Laws of Robotics in mind, that is, holding that humans will always have priority and superiority over their robot counterparts?

Conclusion

Theoretically, if humans were able to fully replicate all three of these capabilities in one robot, then we would have successfully created artificial life. At that point, is it still artificial? Should we still regard robots as tools made by humans? These are ethical questions we should keep in mind, looking back at what has already been accomplished as we look toward the future of robotics.
Current Robotic Ethics

Safety

The most important aspect of safety is the protocol for stopping the robot. "Robots can do unpredictable things; the bigger/heavier the robot the more space you should allow it when operating. Always verify that the robot is powered off before interacting with it. Never stick your fingers into wheels, tracks, manipulator pinch points, etc. while the robot is powered on. Remotely teleoperated robots may be the most dangerous because the remote operator may not know you decided to perform on-the-spot maintenance! Always familiarize yourself with the EMERGENCY STOP procedures first -- and last -- before interacting with or operating robots. Some implementations are more predictable than others" (NIST Robot guide). Personal protective equipment must also be worn when working with robots: a helmet, ear and eye protection, long pants, a long-sleeved shirt, and boots.
Testing and Implementation

As with any cyber technology, robotic engineering must pass through a rigorous process of safety and quality control, much like automobiles. These standards include testing the mobility, communications, manipulation, and human-system interaction mechanisms to ensure they are safe and responsive. Procedures must be clearly outlined for testing, with strict disclosure standards for data sets provided to licensing and governing bodies. Transparency is the key.

Ground Systems

Ground-system-specific ethical concerns currently include the use of robotic droids to deliver and detonate explosives on human targets, as seen in the downtown Dallas shootout on July 7th, 2016. Other issues include the introduction of artificial intelligence into robotics; for instance, whether an emotional bond with a robot is desirable, particularly when the robot is designed to interact with children or the elderly. Managing artificial intelligence within a robotic frame is currently the most important issue facing both robotics and artificial intelligence and will remain so moving forward. Everything, from the coding of AI behavior to the safety parameters for shutting down a robot equipped with AI, deserves intense scrutiny under the provision that robots do not harm humans and obey orders.
a) Self-Driving Vehicles

Recently the city of Pittsburgh's relationship with Uber has come under scrutiny over the business practices of Uber's self-driving car development division. "One of the company's most vocal critics, Democratic Mayor Bill Peduto, says he originally envisioned Uber's much-lauded Advanced Technologies Center as a partnership that would bolster the city's high-tech evolution. Instead, he's grown frustrated as the company declined to help Pittsburgh obtain a $50 million federal "Smart Cities" grant, rebuffed his suggestions for providing senior citizens with free rides to doctors appointments, and lobbied state lawmakers to alter his vision for how self-driving vehicles should be rolled out to the public" (Gold 2017, p. 1). These broken promises, along with the death of a Tesla owner in Florida while his vehicle was driving itself, have some beginning to question the deployment of robots into everyday life, as well as the role and responsibility of the manufacturers of these automated systems and vehicles. The programming logic behind these vehicles also remains unclear, particularly how they make life-and-death decisions, such as when a pedestrian steps into a crosswalk: should the vehicle swerve, risking contact with another vehicle, or proceed forward?

Aerial Systems

Issues specific to aerial systems include surveillance and their application in the taking of human life. Five hundred twenty-six drone strikes were ordered under the Obama administration, killing up to 117 civilians worldwide. Surveillance-specific issues include illegal audio and video recording of private citizens.

b) Drones

Drone sales have risen steadily over the last several years. Drone sales are expected to grow from 2.5 million this year to 7 million in 2020, according to a report released this week by the Federal Aviation Administration.
Hobbyist sales will more than double from 1.9 million drones in 2016 to 4.3 million in 2020, the agency said. Meanwhile, business sales will triple over the period, from 600,000 to 2.7 million (Vanian 2016, p. 1). It is already common practice to restrict drone flight near airfields, stadiums, and other public events. Drones already ship with applications that allow them to follow a designated user, whether that user is snowboarding, golfing, or hiking through the woods. The ethical implications of this capability still include weaponization in addition to surveillance. The FAA believes that 2017 will be the big turning point in drone adoption by businesses, which use them for everything from scanning power lines to inspecting rooftops for insurance companies. Commercial sales are expected to reach 2.5 million, after which sales will increase only slightly for the next few years. Currently, companies must obtain FAA certification to fly drones for business purposes. Some businesses and drone lobbying groups have grumbled that this regulation is partly to blame for preventing the drone industry from taking off in the United States. As of March 2016, the FAA had granted over 3,000 business-class drone licenses (Vanian 2016, p. 1).
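The growth rates implied by these projections can be made concrete with a standard compound annual growth rate (CAGR) calculation. The figures below come from the cited Vanian/FAA numbers; the CAGR arithmetic itself is an illustration added here, not part of the FAA report:

```python
# Implied compound annual growth rates behind the FAA projections quoted
# above: hobbyist drones 1.9M -> 4.3M and business drones 0.6M -> 2.7M,
# both over the four years from 2016 to 2020.

def cagr(start, end, years):
    """Compound annual growth rate: the constant yearly rate that
    turns `start` into `end` over `years` years."""
    return (end / start) ** (1 / years) - 1

hobbyist = cagr(1.9, 4.3, 2020 - 2016)   # "more than double" over four years
business = cagr(0.6, 2.7, 2020 - 2016)   # 0.6M -> 2.7M over four years

print(f"Hobbyist drone sales: {hobbyist:.1%} per year")  # ~22.7% per year
print(f"Business drone sales: {business:.1%} per year")  # ~45.6% per year
```

Note that the projected business-sector growth (0.6 million to 2.7 million) is actually a 4.5-fold increase, somewhat more than the "triple" the article's summary suggests.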
Aquatic Systems

Aquatic robotic ethical concerns relate to surveillance and warfare. A current example is China's seizure of an American underwater drone in December of 2016. The drone was eventually returned, but similar incursions are likely to recur. It is also possible to weaponize an aquatic drone, like its aerial counterpart, and deliver lethal strikes.
Future Robotic Ethics

Introduction

Because robotics is part of the larger, broader field of technology, it makes sense that its ethical dimension grows, expands, and advances simultaneously. In addition to Asimov's founding laws of robotics, new laws and ideals are being proposed for future systems, such as the three principles of combat robots:

1. A combat robot may not harm friendly forces, though it may engage enemies.
2. A combat robot must follow the commands of friendly forces, except where such orders are improper.
3. A combat robot must protect its own existence as long as doing so does not violate the first and second principles.

Future Development in Robotics

Future robotics development is concentrated in three categories: androids, cyborgs, and humanoids. An android is an artificial human made to resemble a person, not only in appearance but also in action and intelligence, and it is covered with artificial skin. A cyborg is a creation in which machinery is incorporated into an organism, whether a human being or an animal. A humanoid is a robot shaped like a human body, with a head, a trunk, arms, and legs; it is called a humanoid robot because it can best imitate human behavior. ASIMO, developed by Honda in Japan, and HUBO, developed by Korea's KAIST, are typical humanoid robots, though their exteriors are harder than an android's artificial skin.

Ethics Dilemmas in Future Robotics

The most important concern is safety. Robots were once developed and used only by industrial experts and the military, but now they are used by ordinary people. Robot vacuum cleaners and lawn mowers are already widely used at home, and robot toys are popular with children. As these robots become more intelligent, it becomes unclear who is at fault when one attacks or harms someone. Should designers be held accountable? Should the user be responsible?
Should the robot itself bear responsibility? Robots include physical robots that can be touched and digital robots (software agents) that cannot. Digital robots can be as complex as any computer program. Consider, for example, financial decisions that involve the use of digital robots: if such an intelligent expert software robot makes a decision that results in a huge loss, who is responsible for it?
a) A Robot's Rights?
The second serious problem concerns Asimov's second law. A robot must obey human orders unconditionally, but the ambiguities of human language make it difficult for a robot to distinguish who has issued a command and what it means. Moreover, although Asimov's three principles emphasize only the safety of humans, the problem becomes more serious if a robot has a sense of perception. If a robot feels pain, does that grant it special rights? If robots possess emotions, should they be given the right to marry humans? Should a robot be granted intellectual property or ownership rights? These questions may still seem far off, but they are not so distant from existing debates over the prevention of animal rights abuses; indeed, robots designed for intimacy with humans have already emerged and are now a major issue in society.

Automation
In addition to the major concern for the overall safety of human beings, the other most criticized aspect of robots is their capacity for automation. Automation is the technique of making an apparatus, a process, or a system operate automatically. People fear this change because automation could erode job security; one forecast predicts that by 2021, robots will have eliminated 6% of all jobs in the US. On a positive note, the incorporation of robots will also create jobs, as there will be a need to design, manufacture, program, manage, and maintain these robots and systems.
Another major benefit is that automation eliminates tedious, mundane, repetitive, and potentially dangerous work. This allows people to focus on more important tasks rather than being held back by time-consuming work.
User Talk Page

This Wikibook was a collaborative effort of students from the Illinois Institute of Technology Spring 2017 Legal and Ethical Issues in Information Technology 485/585 course. The goal is to create and maintain an ongoing Wikibook that addresses the ethical issues within robots and robotics. The authors are a mix of undergraduate and graduate students pursuing information technology degrees with specializations in cyber forensics and security and in information technology management. This page will be continuously updated as new issues and concerns emerge. Proposals are welcome, as we hope to make this as relevant and accurate a document as possible. Team members included Stephen Grenzia, Sangmin Park, and Daniel Kolodziej, as well as graduate student Joshua Kazanova.
References:

Standard Test Methods for Response Robots. (2016, November 8). US Department of Commerce, NIST. Retrieved April 15, 2017, from https://www.nist.gov/el/intelligent-systems-division-73500/response-robots

Buckley, C. (2016, December 20). Chinese Navy Returns Seized Underwater Drone to U.S. Retrieved April 15, 2017, from http://www.nytimes.com/2016/12/20/world/asia/china-returns-us-drone.html

Devlin, H. (2016, September 18). Do no harm, don't discriminate: official guidance issued on robot ethics. Retrieved April 20, 2017, from http://www.theguardian.com/technology/2016/sep/18/official-guidance-robot-ethics-british-standards-institute

Gold, A., Voss, S., & Magazine, P. (2017, May 1). How Uber lost its way in the Steel City. Retrieved May 2, 2017, from http://www.politico.com/story/2017/05/01/uber-pittsburgh-city-mayors-237772

Vanian, J. (2016, March 25). Federal Government Believes Drone Sales Will Soar By 2020. Retrieved May 2, 2017, from http://fortune.com/2016/03/25/federal-governmen-drone-sales-soar/

Plastic Pals. (2011, September 7). Kismet (MIT A.I. Lab) [Video file]. Retrieved from https://www.youtube.com/watch?v=8KRZX5KL4fA

Skitterbot. (2009, February 2). Self-replicating blocks from Cornell University [Video file]. Retrieved from https://www.youtube.com/watch?v=gZwTcLeelAY

History.com Staff. Deep Blue Beats Kasparov. Retrieved from http://www.history.com/this-day-in-history/deep-blue-beats-kasparov

Auburn.edu. Isaac Asimov's "Three Laws of Robotics". Retrieved from