
Lecture 9: AI, magic and deception



Presentation Transcript


  1. Lecture 9: AI, magic and deception • Adaptive Robotics 2008

  2. Assignment feedback • Read the question! • Try and answer the question – be explicit about the link between what you cover and how it relates to the question • Structure your argument – e.g. with introduction and conclusion • Start by saying what you are going to say • Finish by reflecting back on what you have said.

  3. Research – Wikipedia versus published sources • Include material from the course – show that you know and have understood it.

  4. Mark range 45-78% • Abstract – summarises whole essay, or journal paper.

  5. Abstract e.g. “There have been many different approaches to robotics, two of which include the more recent behaviour-based robotics and good old fashioned AI. The characteristics of each approach vary and they both have advantages and disadvantages depending on the overall purpose of the robot. These characteristics of these two approaches will be explored and contrasted”

  6. “Robots in the news” • Robot play at Osaka University • “Hataraku Watashi” (I, worker) • Robots speak lines, and share stage with humans • About a housekeeping robot that loses its motivation to work. • Wakamaru robot (Mitsubishi)

  7. Lecture 9: AI, magic and deception • Human-Robot interaction • Attempts to make humanoid robots, or convincing robot pets • “Android science” (Karl MacDorman) • Robots creating the illusion of life and animacy • Factors to exploit: • Interest in technology • Human tendency to anthropomorphism • Human tendency to zoomorphism • “Darwinian buttons” • See early examples … up to recent examples • See some HRI experiments on what affects our interactions • Should we do this? Class discussion …

  8. Deception and AI • ELIZA – creating the illusion of understanding • Automata – creating the illusion of life • “Android Science” – creating robots with human appearance.

  9. Vaucanson’s duck • Created 1739 by Jacques Vaucanson • Appeared to eat kernels of grain, digest and defecate • (but pellets inserted into duck’s anus)

  10. Chess-playing automaton: The Turk • Constructed in 1769 by Baron von Kempelen for the Habsburg Empress Maria Theresa • Played a strong game of chess against many human opponents over an 80-year period – including Benjamin Franklin and Napoleon • (in fact a hoax – a hidden human chess master operated the machine)

  11. Gakutensoku (learning from the laws of nature) • Built in 1929 by Makoto Nishimura for celebrations of the ascension of Emperor Hirohito to the throne • Could smile, move its eyes, cheeks and chest, and move a pen • Worked by forcing compressed air through hidden rubber tubes • Seated behind a desk • People would remove their hats and pray to it.

  12. Westinghouse robots • 1927 Roy James Wensley and Televox • New mechanism for controlling electrical substations • Previously – controller would phone worker in substation and tell them which switch to open. Worker would open switch and report back. • New idea – replacing worker with bank of relay switches that could be operated by calling them on the phone • 3 tones from tuning forks directed to phone • At receiving end, tones amplified to operate relay.
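The substation-control scheme above amounts to a small lookup from received tone to relay action. A minimal sketch follows; the historical system used three tuning-fork tones, but the frequencies and action names here are invented for illustration:

```python
# Hypothetical sketch of Televox-style tone signalling: each of three
# tuning-fork tones, sent down the phone line, is mapped to a relay
# action at the receiving end. Frequencies and action names are
# illustrative assumptions, not historical values.

TONE_ACTIONS = {
    600: "select_switch",   # first tone: pick which relay to address
    900: "open_switch",     # second tone: open the selected relay
    1200: "report_status",  # third tone: send back a confirmation tone
}

def decode_tones(tones):
    """Translate a sequence of received tone frequencies into actions."""
    actions = []
    for freq in tones:
        action = TONE_ACTIONS.get(freq)
        if action is None:
            continue  # ignore noise / unrecognised frequencies
        actions.append(action)
    return actions

# A caller plays: select a switch, open it, ask for confirmation.
print(decode_tones([600, 900, 1200]))
# ['select_switch', 'open_switch', 'report_status']
```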

  13. “Televox” • Wensley’s machine consisted of 2 boxes of electronics • Westinghouse publicity team – branded it a “mechanical man” • Wensley added head, body and limbs made from prop board • Story spread rapidly • “The club woman with Televox in her home may call up at five o’clock, inquire of Televox what the temperature in the living room is, have Televox turn up the furnace, light the oven in which she has left the roast, light the lamp in the living room, and do whatever else she may wish. Televox comes near to being a scientist’s realization of a dramatist’s fantasy.” (1928)

  14. “American engineer H. J. Wensley of Westinghouse laboratories just created a robot which he named “Televox” because it follows directions remotely from voice commands or sounds of a musical instrument. The vibrations trigger an electric motor in the robot which makes it act according to the commands received.… This bewildering being is the most striking design of our mechanical time, whose creations, having neither sense nor brain, achieve a perfection that truly appears to approach the supernatural.” • Vu magazine, 1928

  15. Other Westinghouse robots • Katrina Televox • Willie Vocalite – smoked cigarettes • Controlled by instructions spoken into a telephone – different responses triggered by the number of syllables • Elektro – a 7 ft walking robot, remote controlled by voice commands • Sparko – a robot dog companion for Elektro
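Willie Vocalite's control scheme – reacting to the number of syllables rather than to the words themselves – can be sketched as pulse counting over a loudness envelope. The command table and threshold below are illustrative assumptions:

```python
# Hedged sketch of syllable-count control: the robot counts bursts of
# loudness ("syllables") in the incoming sound and looks the count up
# in a command table. The table and threshold are invented.

COMMANDS = {
    1: "stand_up",
    2: "sit_down",
    3: "raise_arm",
}

def count_pulses(signal, threshold=0.5):
    """Count bursts of loudness in a sampled amplitude envelope."""
    pulses, in_pulse = 0, False
    for level in signal:
        if level >= threshold and not in_pulse:
            pulses += 1       # rising edge: a new burst begins
            in_pulse = True
        elif level < threshold:
            in_pulse = False  # burst ended; wait for the next one
    return pulses

def react(signal):
    return COMMANDS.get(count_pulses(signal), "ignore")

# Two loud bursts separated by silence -> two "syllables".
envelope = [0.9, 0.8, 0.1, 0.0, 0.7, 0.9, 0.1]
print(react(envelope))  # sit_down
```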

  16. Willie Vocalite

  17. Elektro

  18. In these examples, no attempt to model humans or animals • No attempt to make the mechanisms underlying their behaviour the same as those of humans or animals • Aim instead is to create an illusion • Of life • Of understanding • Can also serve to advertise a company • E.g. Westinghouse robots • E.g. Asimo and Honda

  19. Factors making AI magic and deception easier • Humans have a natural tendency to anthropomorphise machines • E.g. talking to your car, or your computer as if it could understand you, and as if it could choose to behave well or not • E.g. seeing faces in inanimate objects

  20. Anthropomorphism – attributing human characteristics to non-human creatures • Zoomorphism – attributing animal characteristics to non-animals

  21. Factors making AI magic and deception easier • “willing suspension of disbelief” • Exploited by puppeteers • (see Heart robot) • See children with their favourite teddy bear

  22. Heart Robot • Developed at the University of the West of England • Designed to encourage emotional responses • Part robot, part puppet – operated by an expert puppeteer • Robot appears to respond emotionally to human encounters • When hugged and treated gently, its limbs become limp, its eyelids lower, its breathing relaxes and its heartbeat slows down • If shaken or shouted at, it flinches, clenches its hands, its breathing and heart rate speed up and its eyes widen in dismay.

  23. Factors making AI magic and deception easier Sherry Turkle (2006) talks of how robots, or toys, that seem to need nurturing and care “push our Darwinian buttons”

  24. Paro and My Real Baby • A therapeutic seal and an interactive doll • Turkle et al. (2006) studied elderly care-home residents’ interactions • Method: observation and conversations with technology users

  25. Turkle (1995) notes a tendency among both children and adults to treat computer artifacts that are minimally responsive as more intelligent than they really are: “Very small amounts of interactivity cause us to project our own complexity onto the undeserving object” – e.g. the Tamagotchi phenomenon.

  26. Kismet • Cynthia Breazeal, MIT • Sherry Turkle (2006) looked at children interacting with Kismet and Cog • Found they preferred to see Kismet as something with which they could have a relationship • They would develop elaborate explanations for Kismet’s failures to understand, or to respond appropriately • E.g. “Kismet is too shy”, “Kismet is not feeling well”

  27. Human-Robot interaction • What creates an illusion of intelligence? • What kind of robot do people prefer to interact with? • Humanoid? Furry? Friendly? • Eye contact, turn taking

  28. Uncanny valley

  29. Uncanny valley • Japanese roboticist Masahiro Mori wrote about the uncanny valley in 1970. • Mori's hypothesis states that as a robot is made more humanlike in its appearance and motion, the emotional response from a human being to the robot will become increasingly positive and empathic, until a point is reached beyond which the response quickly becomes one of strong revulsion – recovering only as the likeness approaches that of a real human.
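Mori's hypothesized curve can be caricatured as a piecewise function of human-likeness. The numbers below are a made-up sketch of the curve's shape (rise, valley, recovery), not Mori's data:

```python
# Toy affinity curve for the uncanny-valley hypothesis: affinity rises
# with human-likeness, plunges near (but not at) full human-likeness,
# then recovers. The breakpoints and slopes are invented to illustrate
# the shape only.

def affinity(h):
    """h: human-likeness in [0, 1]; returns a notional affinity score."""
    if h < 0.7:
        return h                      # steadily more likeable
    if h < 0.9:
        return 0.7 - 5 * (h - 0.7)    # the valley: sharp drop
    return -0.3 + 13 * (h - 0.9)      # recovery towards a real human

for h in [0.0, 0.5, 0.8, 0.85, 1.0]:
    print(h, round(affinity(h), 2))
```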

  30. Total Turing Test? • Ishiguro, 2006 • Factors affecting our acceptance of robots as social partners • Android in a booth, viewed for 1 or 2 seconds • Conditions: static android; moving android (micro movements); real human • Task – check the colour of a cloth • 80% realised it was an android in the static condition • 76.9% did not realise in the moving condition

  31. Factors encouraging human-robot interaction • Appearance • Movement • Emotional expression

  32. Factors encouraging human-robot interaction • Conversation – e.g. turn taking, nodding encouragingly • Eye contact • Contingency – responding quickly enough • Sometimes Wizard of Oz approach used • Recognizing and responding to your emotion • Face? Voice? Body language?

  33. Rubi/Qrio project • UCSD (University of California, San Diego) • Studied interactions between children and QRIO robot in day care centre over 5 months

  34. Measured the interactions between toddlers and robots – they interacted with QRIO more than with a toy robot or a teddy bear • The robot responded contingently – e.g. giggling when patted on the head • Partly Wizard of Oz (hidden operator) • Their interactions decreased in a middle period when the robot performed a preset, but elaborate, dance.
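The contingent-response loop described above, with a Wizard-of-Oz override, can be sketched roughly as follows; the event and reaction names are invented for illustration:

```python
# Rough sketch of contingent responding as in the QRIO study: the robot
# reacts promptly to a sensed event, and a hidden operator (Wizard of
# Oz) can override its choice. Event and reaction names are invented.

REACTIONS = {
    "pat_on_head": "giggle",
    "hand_offered": "reach_out",
}

def respond(event, operator_override=None):
    """Pick a reaction to a sensed event; the hidden operator wins."""
    if operator_override is not None:
        return operator_override          # Wizard-of-Oz control
    return REACTIONS.get(event, "idle")   # autonomous, contingent reply

print(respond("pat_on_head"))                            # giggle
print(respond("pat_on_head", operator_override="wave"))  # wave
```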

  35. “Results indicate that current robot technology is surprisingly close to achieving autonomous bonding and socialization with human toddlers for sustained periods of time.” (Tanaka et al., 2007) • But the toddlers’ interactions were supervised and guided by other adults • Interaction times were also limited (1-hour sessions) • And some remote control was used

  36. Social implications? • Making robots appear intelligent • Making them seem to care – “I love you” • Using them as companions

  37. Hello Kitty robot Website claims: "This is a perfect robot for whoever does not have a lot time to stay with their child. Hello Kitty Robot can help you to stay with your child to keep them from being lonely."

  38. PaPeRo robot

  39. Robot companions and carers for the elderly, or the very young – are they a good thing? • Discussion in pairs, then fours, then report back advantages and disadvantages.

  40. “The March of the Robot Dogs” – Sparrow (2002) • Robot “pets” suggested as companions for the elderly • There are some demonstrable benefits • But “For an individual to benefit significantly from ownership of a robot pet they must systematically delude themselves regarding the real nature of their relation with the animal.” (Sparrow, 2002)

  41. Living animal pets can share experiences with us. • Sparrow argues that it’s right to value our relationships with them • But a robot is not something we can have a relationship with. • To think otherwise is to be deluded. • Morally wrong to delude old people into thinking they can have a relationship with a robot pet • Also old people need human contact – the more robots are used in their care, the less human contact they will receive.
