
Roboethics



Presentation Transcript


  1. Roboethics Can robots ever know the difference between “right and wrong”?

  2. BBC Video • Video

  3. We’re not anywhere close yet… • But you can be sure, scientists are going to keep trying. • As robots become increasingly intelligent and lifelike, it’s not hard to imagine a future where they’re completely autonomous. • Once robots can do what they please, humans will have to figure out how to keep them from lying, cheating, stealing, and doing all the other nasty things that we carbon-based creatures do on a daily basis. • Enter roboethics, a field of robotics research that aims to ensure robots adhere to certain moral standards.

  4. AJung Moon (@RoboEthics): Anything & everything Roboethics. • Researcher in Human-Robot Interaction. Blogger on a mission. Mechatronics Engineer with a Philosophy background. • Vancouver · roboethicsdb.com

  5. In a recent paper, researchers at the Georgia Institute of Technology discuss how humans can make sure that robots don’t get out of line. • HAVE ETHICAL GOVERNORS • The killing robots used by the military all have some sort of human component: lethal force won’t be applied until a person makes the final decision. • But that could soon change, and when it does, these robots need to know how to act “humanely”. • What that means in the context of war is debatable, but some sort of ethical boundaries need to be set.

  6. An ethical governor--a piece of the robot’s architecture that decides whether a lethal response is warranted based on preset ethical boundaries--may be the answer. • A military robot with an ethical governor might only attack if a victim is in a designated kill zone or near a medical facility, for example. • It could use a "collateral damage estimator" to make sure it only takes out the target and not all the other people nearby.
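The gating logic described above can be sketched in a few lines. This is a minimal, hypothetical illustration of the idea of an ethical governor; the class, field names, and thresholds are assumptions made for this example, not the Georgia Tech researchers' actual architecture.

```python
# Hypothetical sketch of an "ethical governor": a gate that vetoes a lethal
# response unless every preset ethical constraint is satisfied. All names
# and thresholds are illustrative, not the researchers' actual design.
from dataclasses import dataclass

@dataclass
class Target:
    in_kill_zone: bool           # target is inside a designated engagement area
    near_medical_facility: bool  # target is close to a protected site
    estimated_collateral: int    # collateral damage estimator output (bystanders at risk)

class EthicalGovernor:
    def __init__(self, max_collateral: int = 0):
        self.max_collateral = max_collateral

    def permits_lethal_force(self, target: Target) -> bool:
        """Return True only if all preset ethical boundaries hold."""
        if not target.in_kill_zone:
            return False  # outside the designated kill zone: never engage
        if target.near_medical_facility:
            return False  # protected site nearby: never engage
        # collateral damage estimator: only the target, nobody else
        return target.estimated_collateral <= self.max_collateral

governor = EthicalGovernor()
print(governor.permits_lethal_force(Target(True, False, 0)))  # True
print(governor.permits_lethal_force(Target(True, True, 0)))   # False
print(governor.permits_lethal_force(Target(True, False, 3)))  # False
```

Note that the governor is a veto layer: it never initiates an action, it only refuses one, which is why it can sit between the planner and the actuators without changing the rest of the robot's architecture.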

  7. ESTABLISH EMOTIONS • Emotions can help ensure that robots don’t do anything inappropriate--in a military context and elsewhere. • A military robot could be made to feel an increasing amount of "guilt" if repeatedly chastised by its superiors. • Pile on enough guilt, and the robot might forbid itself from completing any more lethal actions. • Does this work in humans??!!
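The guilt mechanism on this slide amounts to a simple accumulator with a cutoff. The sketch below is an assumed, illustrative model (class name, increment, and threshold are all invented for this example), not how any fielded system implements it.

```python
# Hypothetical sketch of the "guilt" mechanism: each reprimand raises an
# internal guilt level, and once it reaches a threshold the robot refuses
# all further lethal actions. Values here are illustrative assumptions.
class GuiltModel:
    def __init__(self, threshold: float = 1.0, increment: float = 0.25):
        self.guilt = 0.0
        self.threshold = threshold
        self.increment = increment

    def chastise(self) -> None:
        """A superior reprimands the robot; guilt accumulates."""
        self.guilt = min(self.guilt + self.increment, self.threshold)

    def may_act_lethally(self) -> bool:
        """Lethal actions are forbidden once guilt reaches the threshold."""
        return self.guilt < self.threshold

robot = GuiltModel()
print(robot.may_act_lethally())  # True: no guilt yet
for _ in range(4):               # four reprimands reach the threshold
    robot.chastise()
print(robot.may_act_lethally())  # False: too much accumulated guilt
```

One design choice worth noting: guilt here only ever increases, so the prohibition is permanent; a decaying guilt term would instead model a robot that can be "rehabilitated" over time.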

  8. Emotions can also be useful in non-military human-robot interactions. • Deception could help a robot in a search-and-rescue operation by allowing it to calmly tell panicked victims that they will be fine, while a confused patient with Alzheimer’s might need to be deceived by a nursing robot. • But future programmers need to remember: It’s a slippery slope from benign deception to having autonomous robots that compulsively lie to get what they want.

  9. RESPECT HUMANS • If robots don’t respect humans, we’re in trouble. • That’s why the researchers stress that autonomous robots will need to respect basic human rights, including privacy, identity, and autonomy. • If we can’t ensure that intelligent robots will do these things, we should refrain from unleashing them en masse.

  10. Of course, humans don’t always act ethically. • Perhaps we could use an ethical governor as well for those times when our brains lead us astray. • The researchers explain: • "We anticipate as an outcome of these earlier research thrusts, the ability to generate an ethical advisor suitable for enhancing human performance, where instead of guiding an autonomous robot’s ethical behavior, it instead will be able to provide a second opinion for human users operating in ethically challenging areas, such as handling physically and mentally challenged populations.”

  11. Source: • Moral Decision Making in Autonomous Systems • Ronald C. Arkin & Alan R. Wagner, Georgia Institute of Technology.

  12. Roboethics • Roboethics is the ethics inspiring the design, development and employment of Intelligent Machines. • Roboethics shares many 'sensitive areas' with Computer Ethics, Information Ethics and Bioethics. • It investigates the social and ethical problems arising from the effects of the Second and Third Industrial Revolutions in the domain of human/machine interaction.

  13. Urged by the responsibilities involved in their professions, an increasing number of roboticists from all over the world have begun cross-cultural collaborations with scholars in the humanities to thoroughly develop Roboethics, the applied ethics that should inspire the design, manufacturing and use of robots. • The result is the Roboethics Roadmap.

  14. Robotics is rapidly becoming one of the leading fields of science and technology. Figures released in the IFR/UNECE Report 2004 show double-digit growth in many subsectors of robotics, making it one of the fastest-developing technological fields. • We can forecast that in the 21st century humanity will coexist with the first alien intelligence we have ever come into contact with: robots.

  15. Some common questions: • How far can we go in embodying ethics in a robot? • What kind of “ethics” should a robot’s ethics be? • How contradictory are, on the one hand, the need to implement an ethics in robots and, on the other, the development of robots’ autonomy? • Although far-sighted and forewarning, could Asimov’s Three Laws really become the Ethics of Robots? • Is it right to talk about the “consciousness”, “emotions” and “personality” of robots?

  16. Main positions on Roboethics • According to the anthropologist Daniela Cerqui, three main ethical positions emerged from the robotics community:

  17. Not interested in ethics. • This is the attitude of those who consider that their actions are strictly technical, and do not think they have a social or a moral responsibility in their work.

  18. Interested in short-term ethical questions. • This is the attitude of those who express their ethical concern in terms of “good” or “bad,” and who refer to certain cultural values and social conventions. This attitude includes respecting and helping humans in diverse areas, such as implementing laws or helping elderly people.

  19. Interested in long-term ethical concerns. • This is the attitude of those who express their ethical concern in terms of global, long-term questions: for instance, the “Digital divide” between South and North; or young and elderly. • They are aware of the gap between industrialized and poor countries, and wonder whether the former should not change their way of developing robotics in order to be more useful to the latter.

  20. The Roboethics Roadmap • The Roboethics Roadmap outlines the multiple pathways for research and exploration in the field and indicates how they might be developed. • The roadmap embodies the contributions of many scientists and technologists in several fields of investigation, from the sciences and the humanities. • The study aims to be a useful tool in view of cultural, religious and ethical differences. • Source: International Review of Information Ethics, http://www.i-r-i-e.net/inhalt/006/006_Veruggio_Operto.pdf

  21. Group Assignment • In the groups you will be building & programming in… • Consider: What is the future for autonomous robots, “roboethics”, and “roborights”? • Predict 3 scenarios, and back up your prediction with evidence. • Scenario #1: Probable Future • Scenario #2: Possible Future • Scenario #3: Preferable Future
