
Beating Common Sense into Interactive Applications


Presentation Transcript


  1. Beating Common Sense into Interactive Applications Henry Lieberman, Hugo Liu, Push Singh, Barbara Barry AI Magazine, Winter 2004 As (mis-)interpreted by Peter Clark for Boeing KR Group

  2. Introduction • Claim: Commonsense applications are closer than you think • Problems with CommonSense (CS) applications: • Even large KBs have sparse coverage • Inference is unreliable

  3. Their Common Sense KB: Open Mind Common Sense (OMCS) • 750k NL assertions from 15k contributors • ConceptNet: A semantic net built from these • 20 link types
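
The slide describes ConceptNet as a semantic net of typed links mined from natural-language assertions. A minimal sketch of that structure, with invented relation names and toy assertions (not actual OMCS data):

```python
# Minimal ConceptNet-style semantic net: nodes are natural-language
# concepts, edges carry one of a fixed set of relation types.
# Relations and assertions here are illustrative only.
from collections import defaultdict

class SemanticNet:
    def __init__(self):
        # (source concept, relation) -> set of target concepts
        self.edges = defaultdict(set)

    def add(self, source, relation, target):
        self.edges[(source, relation)].add(target)

    def query(self, source, relation):
        return sorted(self.edges[(source, relation)])

net = SemanticNet()
net.add("oven", "LocationOf", "kitchen")
net.add("oven", "UsedFor", "baking")
net.add("baby", "DesireOf", "milk")

print(net.query("oven", "UsedFor"))  # ['baking']
```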

  4. Against Question-Answering… • Question answering is a bad CS domain: • User expects a direct answer to all his/her questions • System has to be right (almost) all of the time • Has to be fast (a few seconds) • Alternative: intelligent interfaces • Assists user when it can • “fail soft” - user can ignore it if he/she wants • But: is it yet another paperclip?

  5. 1. ARIA: Annotation and Retrieval Integration Agent • Helps annotate photos, and find photos • Similar to Thesaurus search • Photos are annotated with keywords • a. People, places and events are recognized in text • b. Use the semantic net to find “close” photos to text • Text also adds to the net (system learns) • “My sister’s name is Mary” → “Joe –sister→Mary”
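
One way to read "find 'close' photos" is hop distance in the semantic net: score each annotated photo by how near its keywords sit to concepts in the user's text. A hypothetical sketch with an invented toy net (ARIA's actual scoring is not specified on the slide):

```python
# Score annotated photos by shortest hop distance between text concepts
# and photo keywords in a toy semantic net. All data is invented.
from collections import deque

NET = {  # undirected toy associations
    "wedding": {"bride", "ring", "cake"},
    "bride": {"wedding", "dress"},
    "ring": {"wedding"},
    "cake": {"wedding", "party"},
    "party": {"cake"},
}

def hops(a, b, max_hops=3):
    """Shortest hop count from concept a to b, or None if out of range."""
    frontier, seen = deque([(a, 0)]), {a}
    while frontier:
        node, d = frontier.popleft()
        if node == b:
            return d
        if d < max_hops:
            for nbr in NET.get(node, ()):
                if nbr not in seen:
                    seen.add(nbr)
                    frontier.append((nbr, d + 1))
    return None

def score(text_concepts, photo_keywords):
    """Lower is closer; unconnected photos score infinity."""
    ds = [hops(t, k) for t in text_concepts for k in photo_keywords]
    ds = [d for d in ds if d is not None]
    return min(ds) if ds else float("inf")

photos = {"photo1.jpg": {"bride"}, "photo2.jpg": {"party"}}
best = min(photos, key=lambda p: score({"wedding"}, photos[p]))
print(best)  # photo1.jpg ("bride" is 1 hop from "wedding")
```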

  6. 2. Detecting Moods (“affect”) in Text   “My wife left me; she took the kids and the dog” • Approach: • Mood keyword (e.g., “sad”) → mine a “small society of linguistic models of affect” from the KB (=?) • Applications: • Empathy Buddy: (purpose=?) • Summarizing a collection of reviews about a topic
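
A guessed sketch of keyword-based affect sensing as described above: seed a few concepts with moods, then score a sentence by a vote over the moods of the concepts it mentions. The lexicon is invented for illustration; the paper's "society of models" is richer than this.

```python
# Toy affect classifier: majority vote over a seed mood lexicon.
# The AFFECT lexicon below is invented, not from OMCS.
from collections import Counter

AFFECT = {"left": "sad", "took": "sad", "dog": "happy",
          "wedding": "happy", "prison": "fearful"}

def mood_of(sentence):
    votes = Counter(AFFECT[w] for w in sentence.lower().split()
                    if w in AFFECT)
    return votes.most_common(1)[0][0] if votes else "neutral"

print(mood_of("My wife left me she took the kids and the dog"))  # sad
```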

  7. 3. Cinematic Commonsense: Video Capture and Editing • Videographer shoots, adds NL annotations • E.g., “a street artist is painting a painting” • Send annotations to KB for elaboration • “after painting, you clean the brushes” • “during painting, you might get paint on your hands” • Elaborations • suggest new shots for the videographer • are also stored for improved retrieval • Can help order shots into a temporal/causal sequence • (isn’t temporal ordering already done?) • But: need more complex story understanding to create effective suggestions for the filmmaker.

  8. 4. Common Sense Storytelling: StoryIllustrator • Continuosly retreive photos relevant to user’s typing • Use Yahoo image search, not annotations, for Web images • CSK for query expansion, E.g., “baby” ↔ “milk”

  9. 5. Common Sense Storytelling: OMAdventure • Generates dungeons-and-dragons game on the fly • E.g., in kitchen → what do you find in kitchen? → Other associated locations? • E.g., oven → what can you do with an oven? • Hence oven, cooking are “moves” for user. Associated locations are “exits”. • User can add objects (e.g., “blender”) → extend KB (“blenders are in kitchens”)
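
The generation loop above reduces to a few KB lookups per room: objects in the location yield "moves", associated locations yield "exits". A sketch over an invented toy KB:

```python
# OMAdventure-style room generation from a toy commonsense KB.
# KB entries are invented for illustration.
KB = {
    ("kitchen", "contains"): ["oven", "blender"],
    ("kitchen", "near"): ["dining room", "pantry"],
    ("oven", "used_for"): ["cooking"],
    ("blender", "used_for"): ["blending"],
}

def describe_room(location):
    objects = KB.get((location, "contains"), [])
    moves = [act for obj in objects
             for act in KB.get((obj, "used_for"), [])]
    exits = KB.get((location, "near"), [])
    return {"objects": objects, "moves": moves, "exits": exits}

room = describe_room("kitchen")
print(room["moves"])  # ['cooking', 'blending']
```

A user adding "blender" at runtime would simply append to the `("kitchen", "contains")` entry, which matches the slide's point that play extends the KB.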

  10. 7. Common Sense Storytelling: StoryFighter • System and user take turns to contribute lines to a story to get from A to B, e.g., • “John is sleepy” (start) • “John is in prison” (end) • Must avoid “taboo” words (e.g., “arrest”) • CSK deduces consequences of an event • “If you commit a crime, you might go to jail” • CSK also picks obvious taboo words
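
StoryFighter's two commonsense uses can be sketched as a consequence lookup plus a taboo-word filter on contributed lines. The consequence table and taboo set are invented for illustration:

```python
# Toy versions of StoryFighter's KB calls: event -> likely consequences,
# and a legality check against taboo words. All data is invented.
CONSEQUENCES = {
    "commit a crime": ["go to jail", "get arrested"],
}
TABOO = {"arrest", "arrested", "jail"}

def legal_line(line):
    """A contributed story line is legal if it avoids every taboo word."""
    return not (set(line.lower().split()) & TABOO)

def consequences(event):
    return CONSEQUENCES.get(event, [])

print(legal_line("John was arrested"))   # False
print(consequences("commit a crime"))    # ['go to jail', 'get arrested']
```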

  11. 8. Topic Spotting • Task: Given speech, identify situation • E.g., “fries”, “lunch”, “Styrofoam” → “eating in a fast-food restaurant” • Use Bayesian inference + ConceptNet • Used in collaborative storytelling with kids • Computer starts the story • Kid continues • Computer can’t fully understand kid’s speech, but can at least identify the topic → generate plausible continuation • E.g., “bedroom” → “Jane’s parents walked into the bedroom while she was hiding under the bed”
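
The "Bayesian inference + ConceptNet" step can be sketched as naive Bayes over observed concepts: each situation assigns probabilities to the words it evokes, and the spotter picks the situation that best explains the transcript. The probabilities below are invented:

```python
# Naive Bayes topic spotting over a tiny invented concept model.
import math

# P(word | situation); DEFAULT smooths unseen words
MODEL = {
    "fast-food restaurant": {"fries": 0.3, "lunch": 0.2, "styrofoam": 0.1},
    "bedroom": {"bed": 0.4, "pillow": 0.2},
}
DEFAULT = 0.01

def spot_topic(words):
    def log_likelihood(situation):
        probs = MODEL[situation]
        return sum(math.log(probs.get(w, DEFAULT)) for w in words)
    return max(MODEL, key=log_likelihood)

print(spot_topic(["fries", "lunch", "styrofoam"]))  # fast-food restaurant
```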

  12. 9. Globuddy: A Tourist Phrasebook • Type in your situation • “I’ve just been arrested” • It retrieves and translates associated CS (?) • “If you are arrested, you should call a lawyer” • “Bail is a payment that allows an accused person to get out of jail until a trial”

  13. 10. Predictive typing/phrase completion • E.g., for a cellphone keyboard • Use ConceptNet to find next word that “makes sense” • E.g., “train st” → “train station”
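
A sketch of the completion step: given the preceding word and a typed prefix, rank candidate completions by whether they form a concept that "makes sense" after that word. The candidate lists are invented stand-ins for ConceptNet lookups:

```python
# Toy ConceptNet-backed phrase completion. CONCEPTS is an invented
# stand-in for "words that make sense after X" queries.
CONCEPTS = {"train": ["station", "ticket", "track"],
            "bus": ["stop", "ticket"]}

def complete(prev_word, prefix):
    candidates = CONCEPTS.get(prev_word, [])
    matches = [w for w in candidates if w.startswith(prefix)]
    return matches[0] if matches else None

print(complete("train", "st"))  # station
```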

  14. 11. Search: GOOSE and Reformulator as Google adjuncts • Infer user’s search goals and add keywords, e.g.: • “my cat is sick” → “Did you mean to look for veterinarians?” • Currently interactive. Later, will suggest a better query.

  15. 12. Semantic Web • Given user’s goals, find services that might accomplish subgoals, e.g., • “Schedule a doctor’s appt” → look up directory of doctors, check reputation, geographic lookup, lookup schedules, etc.

  16. 13. Knowledge Acquisition • Criticism: Many OpenMind sentences are decontextualized • “At a wedding, bride and groom exchange rings” is culturally specific • → develop a prompt-based interface to have users make context explicit.

  17. Reflections • Logic: What inferences are possible • Commonsense: What inferences are plausible • Qn: How well does OpenMind support this? E.g., • “People live in houses” • “Things fall down rather than up” • “Acid irritates skin”

  18. Same for our own database… + “There is a rocket” = ?

  19. Reflections (cont): Limitations • Spottiness of subject coverage in OpenMind • Inference is unreliable → reluctant to use it • Need new inference methods • E.g., “interleave context-sensitive inference with retrieval in a breadth-first manner” • CS suggestions may be distracting • But trials suggest otherwise (people tolerate wrong but plausible suggestions better than stupid ones)

  20. Some additional thoughts… • Domain-specific vs. domain-general applications • Domain-specific – how much CS is needed? • CycSecure • Oil exploration • etc. • Domain-general – still need task-specific algorithm • Unusual to find a domain- and task-general application • “Scenario completion” is a good task • newswire, incident reports, etc.
