Information Technology and Ethics/History of Ethics in Robotics
Looking toward the future of robotics, it is worth looking back at where the field has been and where it is heading. The aspect of robot ethics this chapter focuses on, in learning from the past, is the set of issues that pertain to making humanoid robots. Each section below describes an important aspect of being human and relates it to a milestone event in the history of robotics.
Founding ethics: 3 Laws of Robotics
In 1942, the science fiction author Isaac Asimov wrote a short story called "Runaround" in which he introduced the Three Laws of Robotics. They are as follows:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
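The key structural feature of the Three Laws is their strict precedence: each law yields to the ones before it. As a toy illustration only (every class, field, and function name below is invented, and real machine ethics is of course far harder than checking boolean flags), that precedence can be sketched as an ordered list of veto rules:

```python
from dataclasses import dataclass

# Hypothetical description of a candidate action; all fields are invented
# for illustration and default to "no conflict".
@dataclass
class Action:
    harms_human: bool = False                # First Law: direct injury
    allows_human_harm: bool = False          # First Law: harm through inaction
    disobeys_human_order: bool = False       # Second Law
    needlessly_endangers_self: bool = False  # Third Law

# The laws in priority order; each predicate returns True if the action
# complies with that law.
LAWS = [
    ("First Law",  lambda a: not (a.harms_human or a.allows_human_harm)),
    ("Second Law", lambda a: not a.disobeys_human_order),
    ("Third Law",  lambda a: not a.needlessly_endangers_self),
]

def evaluate(action: Action) -> str:
    """Return 'permitted', or name the highest-priority law the action violates."""
    for name, complies in LAWS:
        if not complies(action):
            return f"forbidden by the {name}"
    return "permitted"
```

Because the checks run in order, an action that both harms a human and disobeys an order is reported as forbidden by the First Law, mirroring the "except where such orders would conflict" clauses in Asimov's wording.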
Although they originate in science fiction, these rules remain highly relevant to thinking about the ethics of robotics as the field rapidly evolves. In short, the Three Laws state that under no circumstances may a robot harm a human being. As robotics evolves, it is worth remembering why robots were built in the first place: to make human life easier. Over the last century, that effort has gradually moved toward creating artificial beings approaching the complexity of a human. The remaining sections of this article cover milestones from the past century in humanity's attempt to replicate human life artificially.
Demonstration of Intelligence: Deep Blue
In 1996, IBM revealed its chess-playing supercomputer Deep Blue, built to challenge the reigning world chess champion, Garry Kasparov. Kasparov won the first match in 1996, with Deep Blue winning one game and drawing two. After modifications to Deep Blue, a rematch was held in 1997 in which Deep Blue emerged the victor, winning two games, drawing three, and losing one. Chess has long been regarded as one of the most intellectually demanding games in human history. That people could build a machine capable of taking on, and beating, the world chess champion is astonishing, and it highlights two important points:
- Humans can program a machine to match human logical intelligence while eliminating errors caused by, for example, emotion.
- Humans can create devices that make decisions based on the information presented to them.
In the case of Deep Blue, each time the human player made a move, Deep Blue analyzed the current board position, enumerated the available moves, and calculated which lines of play gave it the best odds of victory. That is exactly the logical approach a strong human player takes when trying to win a chess game, and Deep Blue was able to mimic that reasoning.
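The look-ahead procedure described above is, at its core, the minimax idea used by game-playing programs. The sketch below is a heavily simplified illustration of that idea only; Deep Blue's actual search used alpha-beta pruning, handcrafted evaluation, and custom chess hardware, and the `moves`/`apply_move`/`score` interfaces here are invented for the example:

```python
def minimax(state, depth, maximizing, moves, apply_move, score):
    """Return the best score reachable from `state`, looking `depth` plies ahead.

    The caller supplies the game rules (hypothetical interfaces, not Deep
    Blue's actual ones): `moves(state)` lists the legal moves,
    `apply_move(state, move)` returns the resulting state, and `score(state)`
    is a heuristic evaluation of a position.
    """
    legal = moves(state)
    if depth == 0 or not legal:
        return score(state)  # search horizon or terminal position
    results = (
        minimax(apply_move(state, m), depth - 1, not maximizing,
                moves, apply_move, score)
        for m in legal
    )
    # The maximizing player picks the best outcome; the opponent is assumed
    # to reply with the move that is worst for us.
    return max(results) if maximizing else min(results)
```

Even this toy version captures the key point from the text: the machine does not "understand" chess, it systematically explores the consequences of every legal move and chooses the line with the best evaluated odds.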
Demonstration of Emotional Intelligence: Kismet
In 1998, researchers at MIT developed a robot named Kismet. Kismet was created to study how learning occurs through vision, hearing, and speech. After observing certain interactions, Kismet could give a response back to the researchers: if a researcher smiled while making a certain noise, Kismet could mirror that emotion through its voice and facial expressions. As noted above, robots have already been able to challenge human beings in logical intelligence; developing emotional intelligence in a robot as well would bring us closer to the kind of machine Isaac Asimov warned us about. Looking back at the development of robotics, most of this research was done to better understand humans and to see how far artificial intelligence could be pushed. At no point has robotics been primarily aimed at bringing pain to humans; benefit has always been the main goal. The ethical issue this history raises is: if we have the technology to give robots human-level logical intelligence, do we also want them to mimic human emotions? And is it wrong to grant them that kind of intelligence while still holding to the Three Laws of Robotics, which assume that humans always have priority and superiority over the robots they create?
Demonstration of Self-Replication: Cornell University Robots
In 2005, researchers at Cornell University developed cube-like modular robots that were able to artificially reproduce: given enough of the correct material, a robot could assemble an exact replica of itself. In the case of Cornell's robots, that material was a supply of specially designed blocks. These blocks could be the stepping stone to something much greater; self-replication would be one of the final steps toward creating an artificial life form simulating a human being. Considering this demonstration together with the others is what matters when thinking about ethics in robotics: ideally, combining these capabilities could produce an artificial human. So, would it still be ethical to govern such robots by the laws Asimov gave us? Is creating life this complex artificially any different from creating life naturally? These are questions to weigh when looking forward in robotics while taking into consideration where we have already been.
Bringing it All Together
Theoretically, if humans were able to fully combine all three of these capabilities in one robot, we would have successfully created artificial life. At that point, is it still artificial? Should we still regard robots as mere tools made by humans? These are ethical questions to keep in mind as we look back at what has already been accomplished and forward at the future of robotics.