Information Technology and Ethics/Current Robotic Ethics

Introduction

Automation and robotic technology are becoming more mainstream every day. As the integration of these cyber-based technologies continues to evolve, current ethical practice divides into three application-based groups, each with its own unique set of challenges. As integration deepens, ethical risk assessments will need to be revisited continually to stay current with the behaviors for which engineers are ultimately responsible.

History of Robotics

Founding Ethics: 3 Laws of Robotics

In 1942, the science fiction author Isaac Asimov wrote a short story called “Runaround,” in which he introduced the three laws of robotics. They are as follows:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Although they are the work of science fiction, these rules could not be more relevant to thinking about the ethics of robotics as the field rapidly evolves. In short, the three laws of robotics state that in no way, shape, or form may a robot harm a human being. As robotics evolves, it is important to look back at why robots were built in the first place: to make human life easier. Through this endeavor over the last century, humankind has been working toward creating artificial life on par with the complexity of a human being. The remaining portions of this article cover milestones from the past century that illustrate humanity's progress toward replicating human life artificially.
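
Read as a decision procedure, the three laws form a strict priority ordering: each law constrains the ones below it. The sketch below illustrates that ordering as a lexicographic filter over candidate actions; the `Action` fields and the selection function are hypothetical constructs for illustration only, not part of any real robot control system.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Action:
    """A candidate action annotated with predicted consequences.
    All fields are hypothetical labels used only for this sketch."""
    name: str
    harms_human: bool        # would injure a human, or allow harm by inaction
    obeys_order: bool        # follows a human's order
    preserves_self: bool     # protects the robot's own existence

def choose_action(candidates: List[Action]) -> Optional[Action]:
    """Select an action by applying the three laws in priority order."""
    # First Law: discard anything that harms (or fails to protect) a human.
    safe = [a for a in candidates if not a.harms_human]
    # Second Law: among safe actions, prefer those that obey orders.
    obedient = [a for a in safe if a.obeys_order] or safe
    # Third Law: among those, prefer actions that preserve the robot.
    preserving = [a for a in obedient if a.preserves_self] or obedient
    return preserving[0] if preserving else None

choice = choose_action([
    Action("push bystander", harms_human=True, obeys_order=True, preserves_self=True),
    Action("shut down", harms_human=False, obeys_order=True, preserves_self=False),
])
print(choice.name)   # -> "shut down": obedience never outranks human safety
```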

Demonstration of Intelligence: Deep Blue Supercomputer

In 1996, IBM revealed its supercomputer Deep Blue, built to challenge the reigning world chess champion, Garry Kasparov. The first match, held in 1996, ended with the world champion winning the set; Deep Blue won one game and drew two. After modifications to Deep Blue, a 1997 rematch was held in which Deep Blue emerged the victor, winning two games and drawing three. Chess has been regarded as a deeply intellectual game throughout its history. That people could give a machine the capability to take on the world champion chess player and win is astonishing, and it highlights two important points:

  1. Humans can program a machine to match a person's logical and technical intelligence while removing sources of human error, such as emotion.
  2. Humans can create devices that make decisions based on the information presented to them.

In the case of Deep Blue, each time the human player made a move, Deep Blue had to analyze the current board position, cross-reference all available moves, and calculate the odds that a given continuation would lead to victory. That is the same logical approach a person takes when trying to win a chess game, and Deep Blue was able to demonstrate a comparable capacity.
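
Deep Blue's actual search and evaluation hardware were far more elaborate, but the core idea of scoring positions by looking ahead through a game tree can be sketched with a minimal minimax routine. The toy tree below stands in for real chess positions; the numbers are arbitrary leaf evaluations, not anything from IBM's system.

```python
def minimax(node, maximizing=True):
    """Score a game-tree node. Leaves are numeric evaluations;
    internal nodes are lists of child subtrees."""
    if isinstance(node, (int, float)):      # leaf: static evaluation
        return node
    scores = [minimax(child, not maximizing) for child in node]
    # The mover picks the best score; the opponent picks the worst.
    return max(scores) if maximizing else min(scores)

# A toy two-ply tree: we choose a branch, then the opponent replies.
tree = [[3, 12], [2, 4], [14, 1]]
best = max(range(len(tree)), key=lambda i: minimax(tree[i], maximizing=False))
print(best, minimax(tree))   # -> 0 3: branch 0 guarantees the best worst case
```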

Demonstration of Emotion: Kismet

In 1998, researchers at MIT developed a robot named Kismet. Kismet was created to study how learning occurs through vision, hearing, and speech. After observing certain interactions, Kismet could give a response back to researchers: if a researcher smiled while making a certain noise, Kismet could mirror that emotion back through voice and facial expressions. As noted above, robots have already been able to challenge human beings in logical intelligence. Developing emotional intelligence in such a robot as well would bring us closer to the type of robot Isaac Asimov warned us about. Looking back at the development of robotics, the research was, for the most part, done to further the study of humans and to see how far artificial intelligence could be pushed. Robotics has never been focused mainly on bringing pain to humans; benefiting humans has always been its main goal. An ethical issue raised by this history is whether, given the technology to grant robots human-level logical intelligence, we also want them to mimic human emotions. Is it wrong to give them that sort of intelligence while keeping the three laws of robotics in mind, that is, insisting that humans always retain priority and superiority over their created robot counterparts?
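
Kismet's real architecture combined perception, attention, and motivation systems; the sketch below reduces the idea to its simplest possible form, a learned mapping from observed stimuli to expressive responses. The stimulus labels and expressions are invented for illustration and are not Kismet's actual perceptual categories.

```python
from collections import Counter, defaultdict

# Invented researcher interactions: (observed stimulus, shown emotion).
observed_pairings = [
    ("smile+soft_tone", "happy"),
    ("frown+harsh_tone", "sad"),
    ("smile+soft_tone", "happy"),
]

# "Learning": count which expression accompanied each stimulus most often.
model = defaultdict(Counter)
for stimulus, expression in observed_pairings:
    model[stimulus][expression] += 1

def respond(stimulus: str) -> str:
    """Mirror the expression most often paired with this stimulus."""
    if stimulus not in model:
        return "neutral"
    return model[stimulus].most_common(1)[0][0]

print(respond("smile+soft_tone"))   # -> "happy"
```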

Demonstration of Self-Replication: Cornell University Block Bots

In 2005, researchers at Cornell University developed cube-like robots that were able to artificially reproduce. The concept behind this artificial reproduction is that, given enough of the correct material, the robot can make an exact replica of itself. In the case of Cornell's robots, the material was a set of specially designed blocks. These blocks may be just a stepping stone to something much greater: self-replication would be one of the final steps toward creating artificial life that simulates a human being. Bringing this capability together with the other demonstrations is what to consider when thinking about ethics in robotics, since combining them could, in principle, create an artificial human. So, is it still ethical to govern robots with the laws given to us by Asimov? Is creating life that complex artificially any different than creating life naturally? These are questions to weigh when looking forward in robotics while taking into consideration where we have already been.
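
The Cornell robots replicated by stacking specially designed modules; the toy simulation below keeps only the abstract idea that a replicator consumes raw material to produce copies of itself, and that growth stops when the material runs out. The block counts are arbitrary and do not reflect the actual Cornell design.

```python
# Toy model of block-based self-replication. Each robot needs a fixed
# number of blocks to assemble one copy of itself; the numbers are
# arbitrary and illustrative, not Cornell's actual mechanism.
BLOCKS_PER_COPY = 4

def replicate(robots: int, blocks: int, steps: int) -> int:
    """Each step, every robot builds one copy if enough blocks remain."""
    for _ in range(steps):
        copies = min(robots, blocks // BLOCKS_PER_COPY)
        robots += copies
        blocks -= copies * BLOCKS_PER_COPY
    return robots

# One robot and a pile of 40 blocks: the population grows until the
# material is exhausted, then holds steady.
print(replicate(robots=1, blocks=40, steps=5))   # -> 11
```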

Food for Thought

Theoretically, if humans were able to fully combine all three of these capabilities in one robot, we would have successfully created artificial life. At that point, is it still artificial? Should we still regard robots as tools made by humans? These are ethical questions to keep in mind as we look back at what has already been accomplished and look toward the future of robotics.

Current Robotic Ethics

Safety

The most important aspect of safety is the protocol for stopping the robot. “Robots can do unpredictable things; the bigger/heavier the robot the more space you should allow it when operating. Always verify that the robot is powered off before interacting with it. Never stick your fingers into wheels, tracks, manipulator pinch points, etc. while the robot is powered on. Remotely teleoperated robots may be the most dangerous because the remote operator may not know you decided to perform on-the-spot maintenance! Always familiarize yourself with the EMERGENCY STOP procedures first -- and last -- before interacting with or operating robots. Some implementations are more predictable than others” (NIST Robot guide). Personal protective equipment must also be worn when working with robots: a helmet, ear and eye protection, long pants, a long-sleeved shirt, and boots.

Testing and Implementation

As with any cyber technology, robotic engineering must pass through a strenuous process of safety and quality control, much as automobiles do. These standards include testing mobility, communications, manipulation, and human-system interaction mechanisms to ensure they are safe and responsive. Testing procedures must be clearly outlined, with strict disclosure standards for data sets submitted to licensing and governing bodies. Transparency is key.
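
One way to make such standards concrete is to encode them as automated tests that must pass before deployment. The sketch below checks a single safety property, that an emergency stop overrides all motion commands; `SimpleController` is a hypothetical stand-in, not the API of any real robot or of the NIST test methods.

```python
class SimpleController:
    """A hypothetical motor controller used only for this sketch."""

    def __init__(self):
        self.estop = False
        self.speed = 0.0

    def command(self, speed: float) -> float:
        """Apply a speed command; the emergency stop overrides everything."""
        self.speed = 0.0 if self.estop else speed
        return self.speed

def test_estop_overrides_commands():
    ctrl = SimpleController()
    assert ctrl.command(1.0) == 1.0   # normal operation passes commands through
    ctrl.estop = True
    assert ctrl.command(1.0) == 0.0   # commands are ignored while stopped

test_estop_overrides_commands()
print("e-stop safety test passed")
```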

Ground Systems

Ground-system-specific ethical concerns currently include the use of robots to deliver and detonate explosives on human targets, as seen in the downtown Dallas shootout on July 7, 2016. Other issues include the introduction of artificial intelligence into robotics, for instance, whether an emotional bond with a robot is desirable, particularly when the robot is designed to interact with children or the elderly. Managing artificial intelligence within a robotic frame is currently the most important issue facing both robotics and artificial intelligence, and it will remain so moving forward. Everything from the coding of AI behavior to the safety parameters for shutting down a robot equipped with AI deserves intense scrutiny under the provision that such systems do not harm humans and obey orders.

Self-Driving Vehicles

Recently the city of Pittsburgh and its relationship with Uber have come under scrutiny over the business practices of Uber's self-driving car development division. “One of the company's most vocal critics, Democratic Mayor Bill Peduto, says he originally envisioned Uber’s much-lauded Advanced Technologies Center as a partnership that would bolster the city’s high-tech evolution. Instead, he’s grown frustrated as the company declined to help Pittsburgh obtain a $50 million federal “Smart Cities” grant, rebuffed his suggestions for providing senior citizens with free rides to doctors appointments, and lobbied state lawmakers to alter his vision for how self-driving vehicles should be rolled out to the public” (Gold 2017 Pg. 1). In the wake of these broken promises, and of the death of a Tesla owner in Florida while his car drove itself, some have begun to question the deployment of robots into everyday life, as well as the role and responsibility of the manufacturers of these automated systems and vehicles. The programming logic behind these robotic vehicles also remains unclear, particularly how they make life-and-death decisions, such as when a pedestrian steps into a crosswalk: should the car swerve and risk contact with another vehicle, or proceed forward?
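
One common way to frame such trade-offs is as a cost minimization over predicted outcomes. The sketch below is a deliberately simplified illustration of that framing; the maneuvers, harm probabilities, and weights are all invented, and nothing here reflects how any manufacturer's software actually decides.

```python
# The crosswalk dilemma framed as cost minimization. All maneuvers,
# outcome probabilities, and weights are invented for illustration.
maneuvers = {
    "brake_straight": {"p_pedestrian_harm": 0.30, "p_occupant_harm": 0.01},
    "swerve_left":    {"p_pedestrian_harm": 0.02, "p_occupant_harm": 0.25},
}

def expected_cost(outcome, pedestrian_weight=1.0, occupant_weight=1.0):
    """Weighted sum of predicted harms; choosing the weights is itself
    the unresolved ethical question."""
    return (pedestrian_weight * outcome["p_pedestrian_harm"]
            + occupant_weight * outcome["p_occupant_harm"])

choice = min(maneuvers, key=lambda m: expected_cost(maneuvers[m]))
print(choice)   # -> "swerve_left" under equal weights; change the
                #    weights and the "right" answer changes with them
```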

Aerial Systems

Issues specific to aerial systems include surveillance and the use of these systems to take human life. The Obama administration ordered 526 drone strikes, which killed up to 117 civilians worldwide. Surveillance-specific issues include the illegal audio and video recording of private citizens.

Drones

The sales of drones have risen steadily over the last couple of years. Drone sales are expected to grow from 2.5 million in 2016 to 7 million in 2020, according to a report released by the Federal Aviation Administration. Hobbyist sales will more than double from 1.9 million drones in 2016 to 4.3 million in 2020, the agency said, while business sales will triple over the period from 600,000 to 2.7 million (Vanian 2016 Pg. 1).

It is already common practice to restrict the flight of drones near airfields, stadiums, and various other public events. Drones are also equipped with applications that allow them to follow a designated user, whether that user is snowboarding, golfing, or hiking through the woods. The ethical implications that arise from these applications still include weaponization in addition to surveillance.

The FAA believes that 2017 will be the big turning point in drone adoption by businesses, which use them for everything from scanning power lines to inspecting rooftops for insurance companies. Commercial sales are expected to reach 2.5 million, after which sales will increase only slightly for the next few years. Currently, companies must obtain FAA certification to fly drones for business purposes, and some businesses and drone lobbying groups have grumbled that this regulation is partly to blame for preventing the drone industry from taking off in the United States. As of March 2016, the FAA had granted over 3,000 business-class drone licenses (Vanian 2016 Pg. 1).
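
Flight restrictions of the kind described above are typically enforced in firmware through geofencing: before takeoff (and during flight), the drone checks its position against a database of restricted zones. The sketch below shows the basic distance test; the coordinates and radii are invented placeholders, not a real no-fly database.

```python
import math

# Hypothetical no-fly zones: (latitude, longitude, radius in km).
# Coordinates and radii are invented for illustration.
NO_FLY_ZONES = [
    (41.978, -87.904, 8.0),   # a placeholder "airport"
    (41.948, -87.655, 1.5),   # a placeholder "stadium"
]

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance via the haversine formula."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def takeoff_allowed(lat, lon):
    """Refuse takeoff inside any restricted zone."""
    return all(distance_km(lat, lon, zlat, zlon) > radius
               for zlat, zlon, radius in NO_FLY_ZONES)

print(takeoff_allowed(41.90, -87.70))   # -> True: outside both zones
```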

Aquatic Systems

Aquatic robotic ethical concerns likewise relate to surveillance and warfare. A current issue is China's seizure of an American underwater drone in December 2016. The drone was eventually returned, but future incursions are likely. It is also possible to weaponize an aquatic drone, like its aerial counterpart, to deliver lethal strikes.

Ethics, Views, and Impacts of AI and Robotics

Ethics in Artificial Intelligence

Ethics are present in every part of human life, and that includes robotics and artificial intelligence. With almost every innovation, a new ethical dilemma arises; it could affect an individual's work status, income, or even behavior. For example, a new technology that automates a certain process might cost a person their job, but at the same time it might create new job opportunities and make people's lives easier.[1] This raises the question of whether such a change should be seen as a positive or a negative. Hypothetically, suppose a university developed an artificial intelligence that sorts and processes scholarship applications, choosing the best recipient based on its algorithm. The integrity of the algorithm's decision-making could be called into question, since decisions made by an AI could potentially be unfair.[2] These examples are only a glimpse of the ethical issues that can result from technological advancement.

AI Bias

Artificial intelligence (AI) has been used widely for automated decision-making in many areas. AI has been adopted as an enterprise strategy to aid decision-making processes such as hiring: AI systems are used to filter job applicants in and out[3] and to select best-fit candidates. AI algorithms and systems are formed from people's experiences, social values, and biases. The pipeline of how AI is built runs: human bias -> data -> algorithm -> AI system -> decision-making.[3] It is questionable whether AI is ethically acceptable for decision-making when it can be influenced by human biases in this way. Gender bias appears in AI through naming, gender-based social ordering, gender descriptions, and presence in text, so training data can absorb society's preconceptions about gender.[4] “One of the most widely used facial-recognition data training sets was estimated to be more than 75% male and more than 80% white”.[5] AI algorithms can also be affected by the diversity of the development team, through the team members' backgrounds and experiences.
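
That pipeline can be made concrete with a toy experiment: train a decision rule on records that encode past human bias, and the skew carries straight through to new decisions. All records below are fabricated solely to demonstrate the mechanism.

```python
# Toy demonstration of the pipeline: human bias -> data -> algorithm
# -> AI system -> decision-making. All records are fabricated.
from collections import Counter, defaultdict

# Historical hiring records skewed by past human bias: group "a" is
# well represented and mostly hired; group "b" is scarce and rarely hired.
training_data = ([("a", "hire")] * 80 + [("a", "reject")] * 20
                 + [("b", "hire")] * 2 + [("b", "reject")] * 8)

# A naive "algorithm": predict whatever outcome was most common for
# the applicant's group in the training data.
outcomes = defaultdict(Counter)
for group, decision in training_data:
    outcomes[group][decision] += 1

def predict(group):
    return outcomes[group].most_common(1)[0][0]

print(predict("a"), predict("b"))   # -> hire reject
# Two equally qualified applicants receive different decisions purely
# because the data encoded earlier human bias.
```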

Views and Responses to Automation

The argument over implementing robotics and AI (automation) in the future of work is characterized by a false dichotomy, an either/or fallacy, in both the press and academic circles. On one side is the argument that the introduction of AI and robotics will end the need for human work; on the other, the counter-argument that new technologies will always create opportunities and demand for human labor. It is a false dichotomy because these are the two arguments most often encountered when looking into automation and the job market, leaving no space for more logically valid intermediate positions.[6]

In one survey, the two views that emerge are that AI and robotics will have a positive or neutral impact on jobs, and that AI and robotics will displace more jobs than they create. Arguments heard for the first view include: technology has always created more jobs and industries than it displaces; some jobs are reserved for human work; regulation will stop the loss of jobs; and technology is not advancing fast enough to greatly impact the job market. Arguments for the second view point out that displacement is already happening, from robocalls to lights-out manufacturing, and that job displacement due to automation will increase income inequality, with profound consequences: further hollowing out of the middle class, an “underclass” of people who are unemployable, and perhaps even social unrest and riots. Both groups agree, however, that there is a huge gap in educating workers and the next generation for the coming, and even current, technological shift.[7]

Automation has also been misused in ways that have generated significant negative press for the technology. Software used during the hiring process has produced discriminatory outcomes, not because the software malfunctioned, but because its data did not cover all races. The Automation Era is a new global climate in business operations, a natural successor to the Information Age, in which everyday users expect automation to be an integral part of their daily services.[8]

Job Creation

It is no secret that robotics has changed the workforce, and the use of robotics will only grow over time. Robotics has helped create jobs and has unfortunately also contributed to job disappearance. It is important to remember that the jobs created by technological advancement require a different skill set from those that disappear. Edward Tenner notes that “Computers tend to replace one category of worker with another…You are not really replacing people with machines; you are replacing one kind of person-plus-machine with another kind of machine-plus-person”.[9] For example, the use of robots in the workforce has led to an increased need for designers, operators, and technicians who can repair the robots. In addition, Smith and Anderson’s survey results find that “half of the experts who responded (52%) expect that technology will not displace more jobs than it creates by 2025.”[7] It is important to remain hopeful about the positive outcomes of robotics; with time, humans will adjust and invent new jobs that work with (rather than against) robotics.[10]

Policy & Procedures in AI

The value of utilizing AI comes from its ability to improve human lives, while policies and procedures are set to guide the development and deployment of AI so as to address major concerns and lower risks. AI needs to be designed to be understandable and trustworthy, and policies need to be in place to implement AI safely, as the debate around AI deployment involves how privacy is protected and whether AI bias can lead to harmful acts.[11] For example, with advances in driverless-car technology, governments have begun to develop regulations to guide or restrict the testing and use of self-driving vehicles. “The National Highway Traffic Safety Administration (NHTSA) has issued a preliminary statement of policy which advises states against authorizing members of the public to use self-driving vehicle technology at this time”.[12] When it comes to AI such as autonomous vehicles, the United States has been active in producing policies and regulations: twenty-nine states have enacted legislation related to autonomous vehicles, and eleven governors have issued executive orders related to them.[13] While the benefits of AI are significant, it is important to take a calculated approach to AI through policies, procedures, and regulations.

New Education Structure

The structure of education has changed drastically over the years, and as AI changes our society, the education system is likewise shifting toward machine learning and machine teaching. Machine-learning systems replicate human thought and patterns, thanks in part to students who supply the data used to build a “human brain”-style processor. The role of AI in education is not about promoting one specific learning style, but rather about creating an environment where students who struggle in a specific subject can be helped, while students with “natural” ability are able to progress.[14] In this way, AI helps match students with the learning styles that are right for them.

This does not mean, however, that AI grades papers in an ethically fair way. In one AI-powered EdTech system, the scoring algorithm was shown to analyze not the quality of the writing but the use of sophisticated vocabulary, so an essay that is factually wrong yet full of strong vocabulary can be rated as a great essay. AI-powered systems are trained on data annotated by humans; human biases shape that data, the algorithm absorbs them, and the result is biased outcomes.[15] This is the problem of ethical bias in AI that our society faces, and it is why human interaction, combined with ethical AI training, is still needed. Education should be AI-assisted, not solely AI-led, because society still depends on human interaction.
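
The vocabulary-proxy failure described above is easy to reproduce: a scorer that rewards “sophisticated” words will rate an incoherent string of them above a clear, correct sentence. The word list and essays below are invented to demonstrate the flaw and are not drawn from any real grading product.

```python
# A deliberately flawed essay scorer that rewards "sophisticated"
# vocabulary instead of writing quality, reproducing the failure
# mode described above. The word list and essays are invented.
SOPHISTICATED = {"ubiquitous", "paradigm", "juxtaposition", "ephemeral"}

def naive_score(essay: str) -> int:
    """Count 'impressive' words; says nothing about coherence or truth."""
    return sum(1 for word in essay.lower().split() if word in SOPHISTICATED)

clear_essay = "robots can help people learn because they adapt to each student"
word_salad = "ubiquitous paradigm juxtaposition ephemeral paradigm ubiquitous"

print(naive_score(clear_essay))   # -> 0
print(naive_score(word_salad))    # -> 6
# The incoherent essay wins: the metric measures vocabulary, not quality.
```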

References

1. Plastic Pals. (2011, September 7). Kismet (MIT A.I. Lab) [Video file]. Retrieved from https://www.youtube.com/watch?v=8KRZX5KL4fA

2. Skitterbot. (2009, February 2). Self-replicating blocks from Cornell University [Video file]. Retrieved from https://www.youtube.com/watch?v=gZwTcLeelAY

3. History.com Staff. Deep Blue beats Kasparov. Retrieved from http://www.history.com/this-day-in-history/deep-blue-beats-kasparov

4. Auburn.edu. Isaac Asimov's "Three Laws of Robotics". Retrieved from http://www.auburn.edu/~vestmon/robotics.html

5. ISA. What is automation? Retrieved from https://www.isa.org/about-isa/what-is-automation/

6. US Department of Commerce. (2016, November 8). Standard test methods for response robots. Retrieved April 15, 2017, from https://www.nist.gov/el/intelligent-systems-division-73500/response-robots

7. Buckley, C. (2016, December 20). Chinese Navy returns seized underwater drone to U.S. Retrieved April 15, 2017, from http://www.nytimes.com/2016/12/20/world/asia/china-returns-us-drone.html

8. Devlin, H. (2016, September 18). Do no harm, don't discriminate: official guidance issued on robot ethics. Retrieved April 20, 2017, from http://www.theguardian.com/technology/2016/sep/18/official-guidance-robot-ethics-british-standards-institute

9. Gold, A., & Voss, S. (2017, May 1). How Uber lost its way in the Steel City. POLITICO Magazine. Retrieved May 2, 2017, from http://www.politico.com/story/2017/05/01/uber-pittsburgh-city-mayors-237772

10. Vanian, J. (2016, March 25). Federal government believes drone sales will soar by 2020. Retrieved May 2, 2017, from http://fortune.com/2016/03/25/federal-governmen-drone-sales-soar/

  1. Walz, A., & Firth-Butterfield, K. (2019). "Implementing Ethics into Artificial Intelligence: A Contribution, from a Legal Perspective, to the Development of an AI Governance Regime". Duke Law & Technology Review. 18 (1): 176-231.
  2. Bostrom, N., & Yudkowsky, E. (June 2014). "The Ethics of Artificial Intelligence". The Cambridge Handbook of Artificial Intelligence: 316-334. doi:10.1017/cbo9781139046855.020.
  3. Vasconcelos, M., Cardonha, C., & Gonçalves, B. (2018). "Modeling Epistemological Principles for Bias Mitigation in AI Systems". Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society. doi:10.1145/3278721.3278751.
  4. Leavy, S. (2018). "Gender Bias in Artificial Intelligence". Proceedings of the 1st International Workshop on Gender Equality in Software Engineering (GE '18). doi:10.1145/3195570.3195580.
  5. Taulli, T. (August 2019). "How Bias Distorts AI (Artificial Intelligence)". Forbes. https://www.forbes.com/sites/tomtaulli/2019/08/04/bias-the-silent-killer-of-ai-artificial-intelligence/#57f6a6387d87.
  6. Acemoglu, D., & Restrepo, P. (January 2018). "Artificial Intelligence, Automation and Work". National Bureau of Economic Research. https://www.nber.org/papers/w24196.
  7. Smith, A., & Anderson, J. (August 2014). "AI, Robotics, and the Future of Jobs". Pew Research Center. http://www.fusbp.com/wp-content/uploads/2010/07/AI-and-Robotics-Impact-on-Future-Pew-Survey.pdf.
  8. Alice, J., & Figuerola, P. (January 2019). "Finding Success in the Automation Era". The SilverLogic. https://blog.tsl.io/finding-success-in-the-automation-era.
  9. Tenner, E. (1996). "The Computerized Office: Productivity Puzzles". In Controlling Technology: Contemporary Issues, 2nd edn. Prometheus Books, Amherst, NY: 467-485.
  10. Borenstein, J. (2011). "Robots and the Changing Workforce". AI & Society 26 (1): 87-93. doi:10.1007/s00146-009-0227-0.
  11. Stone, P., et al. (2016). "Artificial Intelligence and Life in 2030". One Hundred Year Study on Artificial Intelligence: Report of the 2015-2016 Study Panel, Stanford University, Stanford, CA: 10. http://ai100.stanford.edu/2016-report.
  12. Department for Transport (2015). "The Pathway to Driverless Cars: Summary Report and Action Plan": 1-40.
  13. Shinkle, D., & Dubois, G. "Autonomous Vehicles: Self-Driving Vehicles Enacted Legislation". National Conference of State Legislatures. https://www.ncsl.org/research/transportation/autonomous-vehicles-self-driving-vehicles-enacted-legislation.aspx.
  14. DefinedCrowd (December 2019). "Back to School: How AI Is Transforming the Way We Learn". https://www.definedcrowd.com/back-to-school-how-ai-is-transforming-the-way-we-learn.
  15. Lexalytics (April 2020). "AI in Education: Where Is It Now and What Is the Future?". https://www.lexalytics.com/lexablog/ai-in-education-present-future-ethics.