

  1. COM1070: Introduction to Artificial Intelligence: week 5 Yorick Wilks Computer Science Department University of Sheffield www.dcs.shef.ac.uk/~yorick

  2. Two traditions of knowledge representation based on meaning representations • First, the Expert Systems tradition, which we can link to the meaning and knowledge representations in SHRDLU, Winograd's 1972 MIT system • Secondly, the tradition of scripts and frames as representational objects, which can be linked back to Conceptual Dependency theory (Schank, Stanford, 1971)

  3. Expert Systems SHRDLU, and the blocks microworld. Domain-specific knowledge (as opposed to domain-general knowledge). Understood a substantial subset of English by representing and reasoning about a very restricted domain: the MIT Blocksworld. The program was too complex to be extended to real-world scenarios. Expert systems also relied on depth of knowledge of a constrained domain, but were commercially exploitable. 'Real' applications: medical domains, computer layouts, etc.

  4. SHRDLU: Terry Winograd (1972) at MIT. Natural language understanding program. Unlike ELIZA and PARRY, it knows about its environment, and can reason, plan, and learn. Simulation of a robot arm and an environment of coloured play blocks. Communication via keyboard. You saw it in the video two weeks ago--it knows more than PARRY but would be a far more boring desert island companion!

  5. Typical and untypical SHRDLU • Pick up the green block and put it in the box • I HAVE DONE SO • Pick up the small red block and put it in the box • BY ‘SMALL’ I ASSUME YOU MEAN THE RED BLOCK ON THE BLUE CUBE. I HAVE DONE SO. • Put the red block on the cube in the box.

  6. Pick up the steeple. SORRY, I DON'T KNOW THE WORD 'STEEPLE'. A 'steeple' is a stack that contains two green cubes and a pyramid. I UNDERSTAND. Trade you the steeple for three red cubes. SORRY, I DON'T KNOW THE WORD 'TRADE'. Sorry, I thought you were smarter than you are. SORRY, I DON'T KNOW THE WORD 'SORRY'.

  7. Does the little block that the hatched pyramid's support supports support anything black?

  8. Winograd's method: based on logic and the notion that words point to sets of things in the world; meaning is extracted by looking at what is in common between sets of objects.

  9. Database for a toy world. 'Pick up the ball to the right of the small box' • 'to the right of the small box' = the set of objects that have an x position greater than that of the small box at (c,d), i.e. with x > c • The sentence is scanned for a known instruction (pick up) • Then look for an object that satisfies all the constraints: Ball (p,q) where p > c • If ambiguous, the program is preprogrammed to ask • Retrieve the x,y coordinates and grasp the object
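A minimal Python sketch of this kind of toy-world database lookup (the object names, coordinates, and helper functions are illustrative assumptions, not Winograd's actual code):

    # Toy-world database: each object has a type and (x, y) coordinates.
    # All names and values here are invented for illustration.
    world = {
        "small_box": {"type": "box",  "x": 4, "y": 2},
        "ball1":     {"type": "ball", "x": 7, "y": 3},
        "ball2":     {"type": "ball", "x": 1, "y": 5},
    }

    def right_of(obj, landmark):
        """'To the right of' = greater x coordinate than the landmark's."""
        return obj["x"] > landmark["x"]

    def find_referents(obj_type, landmark_name):
        """Collect every object of the given type that satisfies the constraint."""
        landmark = world[landmark_name]
        return [name for name, obj in world.items()
                if obj["type"] == obj_type and right_of(obj, landmark)]

    candidates = find_referents("ball", "small_box")
    if len(candidates) == 1:
        target = world[candidates[0]]
        print("Grasp object at", target["x"], target["y"])
    else:
        print("Ambiguous or no referent -- ask the user:", candidates)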

  10. SEMANTICS IN WINOGRAD: 'object semantic structures' for A RED CUBE. Micro-Planner code: (GOAL (IS X BLOCK) (EQDIM X) (COLOR X RED)) Features: (BLOCK MANIP PHYSOB THING)

  11. TRUTH CONDITIONS (on other screen) are questions to be asked after finding a 'big NP'. Below is what SHRDLU knows about the blocks at a given moment: Pyramid 2 is a block, is pyramidal, is green. Block 7 is a block, is blue. Pyramid 2 is above Block 7. These facts make all the truth conditions true (if X is Pyramid 2 and Y is Block 7) -- SO THE BIG NP fits this world! [the little NP wouldn't].
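A hedged sketch of how such truth conditions could be checked against a small fact base (the triples and predicate names are paraphrased from the slide, not SHRDLU's internal representation):

    # Facts SHRDLU might know at a given moment, written as simple triples.
    facts = {
        ("is-a", "pyramid2", "block"), ("shape", "pyramid2", "pyramidal"),
        ("colour", "pyramid2", "green"),
        ("is-a", "block7", "block"), ("colour", "block7", "blue"),
        ("above", "pyramid2", "block7"),
    }

    def holds(pred, *args):
        return (pred, *args) in facts

    def big_np_fits(x, y):
        """Truth conditions for the 'big NP' reading, checked as a conjunction."""
        return (holds("is-a", x, "block") and holds("colour", x, "green")
                and holds("is-a", y, "block") and holds("above", x, y))

    print(big_np_fits("pyramid2", "block7"))   # True: this reading fits the world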

  12. 'On the block' vs. 'in the box': RSS's + BLOCKS used in parsing. For 'Put the pyramid on the block in the box', BLOCKS tries to prove either (ON BLOCK PYRAMID) or (IN BOX BLOCK).

  13. Put the green pyramid on the block in the box. [Diagram: the goal (IN BOX BLOCK) is tried against World I and (ON BLOCK PYRAMID) against World II; the reading whose goal can be proved is the one chosen.]
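A small sketch of the idea of resolving the attachment ambiguity by trying to prove each reading's goal against the world (the world contents and goal format are invented for illustration):

    # Two candidate readings of the sentence, each expressed as a goal
    # to prove against the current world state (illustrative tuples).
    world = {("in", "box", "block"), ("on", "table", "pyramid")}

    readings = [
        ("'in the box' modifies 'the block'", ("in", "box", "block")),
        ("'on the block' attaches to the action", ("on", "block", "pyramid")),
    ]

    def provable(goal, world_state):
        """Trivial 'prover': a goal holds if it is literally in the database."""
        return goal in world_state

    for description, goal in readings:
        if provable(goal, world):
            print("Chosen reading:", description)
            break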

  14. Actions (verbs which are commands, not expressed as RSS's), e.g. 'grasp', i.e. no (CMEANS ...):
'grasp' -> (VB ((TRANS (#GRASP))))
'grasp' -> (DEFTHEOREM TC-GRASP (THCONSE (X Y) (#GRASP ?X) (THGOAL (#MANIP ?X)) (THCOND ((THGOAL (#GRASPING ?X))) ... (THGOAL (#CLEARTOP ?X) (THUSE TC-CLEARTOP)) ...)))
This is the influential definition of 'grasp' that carries out grasping. Note the assumption of two disjoint classes of verbs. All inferential definitions reduce to GRASP, UNGRASP, MOVETO.
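A minimal sketch (in Python, not Micro-Planner) of the control structure this theorem expresses: grasping succeeds at once if the object is already held, otherwise its top must be cleared first. The object names and world representation are assumptions for illustration.

    # Illustrative world state; not SHRDLU's representation.
    state = {
        "grasping": None,                      # object currently held, if any
        "on_top_of": {"block1": "pyramid3"},   # block1 has pyramid3 on top of it
    }

    def cleartop(obj):
        """Subgoal: remove whatever sits on top of obj (cf. THUSE TC-CLEARTOP)."""
        state["on_top_of"].pop(obj, None)
        return True

    def grasp(obj):
        """Like TC-GRASP: check preconditions, clear the top if needed, then grasp."""
        if state["grasping"] == obj:           # already grasping it: succeed
            return True
        if obj in state["on_top_of"]:          # something on top: clear it first
            if not cleartop(obj):
                return False
        state["grasping"] = obj
        return True

    print(grasp("block1"), state["grasping"])  # True block1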

  15. Microworld Approach Precursor to Expert Systems. Knowledge: possible to make a distinction between domain-specific and domain-independent knowledge. Domain-specific: expertise in a specific domain. Domain-independent: more general-purpose knowledge. Minsky supervised several students who looked at microworlds, e.g. Daniel Bobrow's STUDENT program (1967), which solved algebra story problems such as:

  16. If the number of customers Tom gets is twice the square of 20 percent of the number of advertisements he runs, and the number of advertisements he runs is 45, what is the number of customers Tom gets? Tom Evans' ANALOGY program (1968) solved geometric analogy problems from IQ tests. The blocks world is the most famous microworld: a set of solid blocks placed on a tabletop, and the task is to rearrange the blocks in a certain way using a robot hand. SHRDLU (limited domain knowledge) shows understanding through answering questions and carrying out actions.
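For reference, the arithmetic STUDENT has to set up and solve: 20 percent of 45 is 9, its square is 81, and twice that is 162 customers. A one-line check:

    advertisements = 45
    customers = 2 * (0.20 * advertisements) ** 2   # twice the square of 20% of 45
    print(customers)                               # 162.0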

  17. Impressive: the language used is like English, the conversation is interactive, and the robot 'understands' in the sense of doing what is required of it. But the SHRDLU approach relies on dealing with small sets. Even supposing it could deal with larger sets, to understand 'a large grey mammal' it would have to find the set of all grey things, all large things, and all mammals to find the common element. But humans probably immediately think of elephants.

  18. Issues of Knowledge Representation Clearly important to have stored knowledge. Problem with ELIZA and PARRY – no knowledge. Main formalisms for knowledge representation: • Predicate logic • Frames and semantic networks • Rule-based systems

  19. Lisp and Prolog Functions vs. predicates/truth functions. Contrast Mother(john) -> mary with Mother(john, mary), which is true or false. Contrast: CONVENTIONAL LANGUAGE: (A + B) / (A - B) LISP: (QUOTIENT (SUM A B) (DIFFERENCE A B)) -> ANSWER PROLOG: (SUM A, B, T1), (DIFFERENCE A, B, T2), (QUOTIENT T1, T2, T3), which is true or false for some T1, T2 and a (returned) T3
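A small Python sketch of the same contrast (names are illustrative): a function computes and returns a value, while a relation-style predicate only says whether a tuple of arguments stands in the relation.

    # Function style (Lisp-like): compute a value and return it.
    def quotient_of_sum_and_difference(a, b):
        return (a + b) / (a - b)

    # Relation style (Prolog-like): a predicate over all its arguments,
    # true or false for a given binding of a, b and the result t3.
    def quotient_relation(a, b, t3):
        t1, t2 = a + b, a - b                 # SUM(A,B,T1), DIFFERENCE(A,B,T2)
        return t2 != 0 and t3 == t1 / t2      # QUOTIENT(T1,T2,T3)

    print(quotient_of_sum_and_difference(6, 2))   # 2.0
    print(quotient_relation(6, 2, 2.0))           # True
    print(quotient_relation(6, 2, 5.0))           # False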

  20. SHRDLU was never extended and became a key example of a type of expert system that was also 'toy': very few words or rules expressing its information. Disillusionment: methods that worked for demonstrations on simple problems failed when tried on wider selections, or on more difficult problems. The Lighthill Report (Lighthill, 1973): criticisms of Artificial Intelligence.

  21. The General Problem Solver (Newell, Shaw and Simon, CMU, 1970s) GPS was designed to be a general problem solver -- it does not contain knowledge of problem domains, unlike SHRDLU. It performs means-end analysis, guided by heuristics about which subgoal should be achieved first. But many problems don't lend themselves to means-end analysis, and the method of giving programs heuristics can't be extended to larger problem domains. Its knowledge is stored in rules, plus an interpreter for running them: X --> Y + Z, or IF X THEN DO Y AND Z, like the rules of grammar and logic.
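A minimal sketch of the 'rules plus an interpreter' idea, i.e. a tiny forward-chaining production-system interpreter (the rules and working-memory items are invented for illustration, not GPS's actual operators):

    # Each rule: IF a condition is in working memory THEN add these new items.
    rules = [
        ("have flour", ["make dough"]),
        ("make dough", ["bake bread"]),
    ]

    def run(working_memory):
        """Repeatedly fire any rule whose condition holds until nothing changes."""
        changed = True
        while changed:
            changed = False
            for condition, additions in rules:
                if condition in working_memory:
                    new = [a for a in additions if a not in working_memory]
                    if new:
                        working_memory.extend(new)
                        changed = True
        return working_memory

    print(run(["have flour"]))   # ['have flour', 'make dough', 'bake bread']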

  22. Knowledge Representation, Slot and Filler structures, and Scripts Why are we capable of performing so many difficult tasks? One answer is the knowledge that we have built up of the world. For computers or robots to be intelligent, they will need to be able to understand the world, and to understand it, they must have knowledge of it. Symbolic AI: emphasis on giving computers knowledge about the world. This raises the question of how that knowledge will be represented.

  23. KNOWLEDGE AND SYNTACTIC STRUCTURES “The departure of Mr. Whitelaw from N. Ireland at this time has amazed Irish political leaders. While there was no official comment in Dublin, it would appear that the Government was not informed in advance of MR. WHITELAW’S MOVE.”

  24. The man drove down the road in a car ((The man) (drove (down the road) (in a car))) ((The man) (drove (down (the road (in a car)))))

  25. Symbolic AI – emphasis on particular kind of knowledge. Can be contrasted to different view of knowledge evident in Adaptive Behaviour approach, and in Neural Computing. Expert system – knowledge represented in form of if-then procedural rules. Clearly we have knowledge that is not represented in this manner. (Dreyfus’ criticisms of the whole expertise view of the world in ‘What computers can’t do’): * we have other kinds of knowledge, including vague knowledge * we have acquired it in certain ways

  26. What is required of a knowledge representation language? • Representational adequacy: It should allow you to represent all the knowledge you need to reason with • Inferential adequacy: It should allow new knowledge to be inferred from a basic set of facts • Inferential efficiency: Inferences should be made efficiently • Clear syntax and semantics: We should know what the allowable expressions of the language are and what they mean • Naturalness: The language should be reasonably natural and easy to use

  27. Useful form of inference: property inheritance Semantic Networks, and Frames support property inheritance. (Slot and Filler structures are a more specific form of frames). Semantic Networks and Frames use different notations but are effectively the same. They provide a simple and intuitive way of representing facts about objects, and essentially semantic networks are just diagrammatic forms of frames.

  28. [Semantic network diagram: animal has subclasses reptile and mammal; mammal has-part head; elephant is a subclass of mammal, with colour grey and size large; Clyde and Nellie are instances of elephant; Nellie likes apples.]

  29. Mammal: subclass: Animal; has_part: head
Elephant: subclass: Mammal; colour: grey; size: large
Nellie: instance: Elephant; likes: apples

  30. Mammal: subclass: Animal; has_part: head; *furry: yes
Elephant: subclass: Mammal; has_trunk: yes; *colour: grey; *size: large; *furry: no
Clyde: instance: Elephant; colour: pink; owner: Fred
Nellie: instance: Elephant; size: small

  31. We can represent subclass and instance relationships (both are sometimes called ISA), and we can represent the same idea as a frame. Properties (e.g. colour and size) can be referred to as slots, and slot values (e.g. grey, large) as slot fillers. Objects inherit all the properties of their parent class (therefore Nellie is grey and large). But there can be properties which are only typical (usually called defaults, here starred), and these can be overridden: for example, a mammal is typically furry, but this is not so for an elephant.
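A compact sketch of frames with inheritance and overridable defaults, following the Nellie/Clyde frames above (the dictionary encoding is an illustrative choice, not a standard frame library):

    # Each frame names its parent (subclass/instance link) and its own slots.
    # The starred defaults of the slides are handled simply by letting a lower
    # frame override an inherited value.
    frames = {
        "Mammal":   {"parent": "Animal",   "slots": {"has_part": "head", "furry": "yes"}},
        "Elephant": {"parent": "Mammal",   "slots": {"has_trunk": "yes", "colour": "grey",
                                                     "size": "large", "furry": "no"}},
        "Clyde":    {"parent": "Elephant", "slots": {"colour": "pink", "owner": "Fred"}},
        "Nellie":   {"parent": "Elephant", "slots": {"size": "small"}},
    }

    def get_slot(frame_name, slot):
        """Look for the slot locally, then walk up the parent chain (inheritance)."""
        while frame_name in frames:
            frame = frames[frame_name]
            if slot in frame["slots"]:
                return frame["slots"][slot]
            frame_name = frame["parent"]
        return None

    print(get_slot("Nellie", "colour"))   # grey  (default inherited from Elephant)
    print(get_slot("Clyde", "colour"))    # pink  (inherited default overridden)
    print(get_slot("Nellie", "furry"))    # no    (Elephant overrides Mammal's default)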

  32. The situation can be complicated by multiple inheritance, where an object or class may have more than one parent class. This may result in conflict: for example, if Nellie is both an elephant and a circus animal, from elephant we would expect Nellie's habitat to be the jungle, but from circus animal we would expect it to be a tent. We could set a precedence order over parents to resolve this, or we might need a further class for Circus-elephant.

  33. 'Nixon diamond' inheritance! • Nixon was a Quaker and a Republican • Quakers are usually pacifists • Republicans are usually not pacifists • What to inherit for Nixon? • He wasn't! [Diagram: nixon is an instance of both quakers and republicans; quakers default to pacifists and republicans to not-pacifists, so the value inherited for nixon is '?'.]
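One hedged way of handling the conflicts on the last two slides is an explicit precedence order over parents. The sketch below extends the earlier frame lookup to multiple parents searched in order (all names are illustrative):

    # Frames may now list several parents; earlier parents take precedence.
    frames = {
        "Quaker":     {"parents": [],                       "slots": {"pacifist": "yes"}},
        "Republican": {"parents": [],                       "slots": {"pacifist": "no"}},
        "Nixon":      {"parents": ["Quaker", "Republican"], "slots": {}},
    }

    def get_slot(name, slot):
        """Depth-first lookup through parents in their stated precedence order."""
        if name not in frames:
            return None
        frame = frames[name]
        if slot in frame["slots"]:
            return frame["slots"][slot]
        for parent in frame["parents"]:
            value = get_slot(parent, slot)
            if value is not None:
                return value
        return None

    # With this precedence Nixon 'inherits' pacifist = yes; reordering the parents
    # flips the answer, which is exactly the arbitrariness the Nixon diamond exposes.
    print(get_slot("Nixon", "pacifist"))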

  34. Semantic networks give transitive inference--but….. • Tweety is an elephant, an elephant is a mammal: Tweety is a mammal. • The US President is elected every 4 years, Bush is US President: Bush is elected every 4 years!!!! • My car is a Ford, Ford is a car company: my car is a car company!!!!
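The usual diagnosis of the last two examples is that 'is a' conflates two different links: subclass-of, which is transitive, and instance-of, which composes with subclass-of but not with another instance-of. A small sketch of inference rules that keep them apart (relations and facts invented for illustration):

    # Keep subclass-of and instance-of as separate relations.
    subclass_of = {("elephant", "mammal"), ("mammal", "animal")}
    # 'ford' is an *instance* of car_company, not a subclass of it.
    instance_of = {("tweety", "elephant"), ("my_car", "ford"), ("ford", "car_company")}

    def is_subclass(a, b):
        """subclass-of is transitive."""
        if (a, b) in subclass_of:
            return True
        return any(is_subclass(mid, b) for (x, mid) in subclass_of if x == a)

    def is_instance(x, cls):
        """instance-of composes with subclass-of, but not with another instance-of."""
        return (x, cls) in instance_of or any(
            is_subclass(c, cls) for (y, c) in instance_of if y == x)

    print(is_instance("tweety", "mammal"))       # True: valid chain of links
    print(is_instance("my_car", "car_company"))  # False: blocked, as it should be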

  35. Slot and Filler structures Semantic networks (which can be written as frames) are very general systems. They can be seen as examples of slot and filler structures, but it is possible to have slot and filler structures which embody specific notions of what types of objects and relations are permitted. Conceptual dependency and scripts are examples of strong slot-and-filler structures.

  36. Conceptual Dependency (CD): slot and filler structures used to represent the kind of knowledge about events that is usually conveyed in natural language sentences. The goal is to represent knowledge so as to • Facilitate drawing inferences from sentences • Be independent of the language in which the sentences were originally stated.

  37. Conceptual Dependency claim: • For any two sentences that are identical in meaning, regardless of language, there should be only one representation. • Any information in a sentence that is implicit must be made explicit in the representation of the meaning of that sentence.

  38. Schank CD diagrams • [Diagram: 'John ate food' drawn as John <==> INGEST, with food as the object and a D (direction) link towards John's body.]

  39. Conceptual Dependency: 11 primitive acts. ATRANS: Transfer of an abstract relationship (e.g. give) PTRANS: Transfer of the physical location of an object (e.g. go) PROPEL: Application of physical force to an object (e.g. push) MOVE: Movement of a body part by its owner (e.g. kick) GRASP: Grasping of an object by an actor (e.g. clutch) INGEST: Ingestion of an object by an animal (e.g. eat)

  40. EXPEL: Expulsion of something from the body of an animal (e.g. cry) MTRANS: Transfer of mental information (e.g. tell) MBUILD: Building new information out of old (e.g. decide) SPEAK: Production of sounds (e.g. say) ATTEND: Focusing of a sense organ towards a stimulus (e.g. listen)

  41. 4 Primitive conceptual categories to build dependency structures. ACTs: Actions PPs: Objects (picture producers) AAs: Modifiers of actions (action aiders) PAs: Modifiers of PPs (picture aiders) Dependencies among conceptualisations correspond to semantic relations among underlying concepts.

  42. Rule 1 describes the relationship between an actor and the event he or she causes. This is a two-way dependency since neither actor nor event can be considered primary. The letter p above the dependency link indicates past tense. • Rule 2 describes the relationship between a PP and a PA that is being asserted to describe it. Many state descriptions, such as height, are represented in CD as numeric scales. • Rule 3 describes the relationship between two PPs, one of which belongs to the set defined by the other. • Rule 4 describes the relationship between a PP and an attribute that has already been predicated of it. The direction of the arrow is toward the PP being described. • Rule 5 describes the relationship between two PPs, one of which provides a particular kind of information about the other. The three most common types of information to be provided in this way are possession (shown as POSS-BY), location (shown as LOC), and physical containment (shown as CONT). The direction of the arrow is again toward the concept being described.

  43. Rule 6 describes the relationship between an ACT and the PP that is the object of that ACT. The direction of the arrow is toward the ACT since the context of the specific ACT determines the meaning of the object relation. • Rule 7 describes the relationship between an ACT and the source and the recipient of the ACT. • Rule 8 describes the relationship between an ACT and the instrument with which it is performed. The instrument must always be a full conceptualisation (i.e. it must contain an ACT), not just a single physical object. • Rule 9 describes the relationship between an ACT and its physical source and destination. • Rule 10 describes the relationship between a PP and a state in which it started and another in which it ended. • Rule 11 describes the relationship between one conceptualisation and another that causes it. Notice that the arrows indicate dependency of one conceptualisation on another and so point in the opposite direction of the implication arrows. The two forms of the rule describe the cause of an action and the cause of a state change.

  44. Rule 12 describes the relationship between a conceptualisation and the time at which the event it describes occurred. • Rule 13 describes the relationship between one conceptualisation and another that is the time of the first. The example for this rule also shows how CD exploits a model of the human information processing system: see is represented as the transfer of information between the eyes and the conscious processor. • Rule 14 describes the relationship between a conceptualisation and the place at which it occurred.

  45. Advantages of Conceptual Dependency primitives • It is easier to describe the inference rules by which knowledge can be manipulated: rules can be represented once for each primitive ACT rather than for all the words that describe that ACT, e.g. Give, Take, Steal and Donate are all instances of ATRANS, so we can make the same inferences about who has the object and who once had the object. • To construct the CD representation we make explicit some information that was not stated in the text, e.g. for 'John took the book from Mary' we make explicit the information that Mary no longer has the book. This might make it easier to understand a subsequent statement, e.g. 'Mary had nothing to read.'
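A sketch of the kind of inference rule this buys: one rule attached to the ATRANS primitive covers give, take, steal and donate alike (the verb mapping and structure fields are illustrative):

    # Map several surface verbs onto the single primitive act ATRANS.
    VERB_TO_ACT = {"give": "ATRANS", "take": "ATRANS",
                   "steal": "ATRANS", "donate": "ATRANS"}

    def cd_for(verb, actor, obj, source, recipient):
        """Build a crude CD structure for a transfer-of-possession sentence."""
        return {"act": VERB_TO_ACT[verb], "actor": actor, "object": obj,
                "from": source, "to": recipient}

    def inferences(cd):
        """One inference rule, stated once for ATRANS rather than once per verb."""
        if cd["act"] == "ATRANS":
            return [f"{cd['to']} now has {cd['object']}",
                    f"{cd['from']} no longer has {cd['object']}"]
        return []

    # "John took the book from Mary"
    print(inferences(cd_for("take", "John", "the book", "Mary", "John")))
    # ['John now has the book', 'Mary no longer has the book']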

  46. Disadvantages of Conceptual Dependency primitives It requires all knowledge to be decomposed into a set of primitives, which is not always easy: e.g. it would take two pages of CD forms to represent 'John bet Sam fifty dollars that the Mets would win the World Series'. There is also an emphasis on events, not on other knowledge such as our knowledge about physical objects. And there is the problem of finding the 'right set' of primitives.

  47. Scripts Reference: Schank, R. & Abelson, R. (1977) Scripts, plans, goals and understanding. Hillsdale, New Jersey: Lawrence Erlbaum. SCRIPTS were predefined sequences of CD structures designed to capture the stereotypical sequence of events in a story. If a story could be matched against a script then inferences present in the script could be drawn to make implicit aspects of the story explicit--as a way of understanding it by machine.

  48. Sample Schank story John went to New York by bus. On the bus he talked to an old lady. When he left the bus, he thanked the driver. He took the subway to Leone’s. On the subway his pocket was picked. He got off the train and entered Leone’s. He had some lasagne. When the check came, he discovered he couldn’t pay. The management told him he would have to wash dishes. When he left, he caught a bus to New Haven.
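A rough sketch of how a story can be matched against a restaurant-style script so that unstated steps are inferred (the script steps and matching strategy are invented for illustration, not Schank and Abelson's actual formalism):

    # A script is an ordered list of stereotyped events; here simplified strings
    # stand in for the CD structures a real script would contain.
    restaurant_script = ["enter", "sit down", "order", "eat", "get check", "pay", "leave"]

    # Events explicitly mentioned in the story about Leone's.
    story_events = ["enter", "eat", "get check", "leave"]

    def fill_in(script, mentioned):
        """Assume the script ran normally: every unmentioned step is inferred."""
        return [(step, "stated" if step in mentioned else "inferred")
                for step in script]

    for step, status in fill_in(restaurant_script, story_events):
        print(f"{step:10s} {status}")
    # The inferred 'pay' step is where this story actually deviates from the
    # script (John cannot pay), which is what makes the story worth telling.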
