
Knowledge Acquisition and Problem Solving: Introduction




  1. CS 785, Fall 2001. Knowledge Acquisition and Problem Solving: Introduction. George Tecuci, tecuci@cs.gmu.edu, http://lalab.gmu.edu/. Learning Agents Laboratory, Department of Computer Science, George Mason University

  2. Overview 1. Course objective and Class introduction 2. Artificial Intelligence and intelligent agents 3. Sample intelligent agent: presentation and demo 4. Agent development: Knowledge acquisition and problem solving 5. Overview of the course

  3. 1. Course Objective Present principles and major methods of knowledge acquisition for the development of knowledge bases and problem solving agents. Major topics include: overview of knowledge engineering, general problem solving methods, ontology design and development, modeling of the problem solving process, learning strategies, rule learning and rule refinement. The course will emphasize the most recent advances in this area, such as: knowledge reuse, agent teaching and learning, knowledge acquisition directly from subject matter experts, and mixed-initiative knowledge base development. It will also discuss open issues and frontier research. The students will acquire hands-on experience with a complex, state-of-the-art methodology and tool for the end-to-end development of knowledge-based problem-solving agents.

  4. 2. Artificial Intelligence and intelligent agents
• What is Artificial Intelligence
• What is an intelligent agent
• Characteristic features of intelligent agents
• Sample tasks for intelligent agents

  5. What is Artificial Intelligence Artificial Intelligence is the Science and Engineering that is concerned with the theory and practice of developing systems that exhibit the characteristics we associate with intelligence in human behavior: perception, natural language processing, reasoning, planning and problem solving, learning and adaptation, etc.

  6. Central goals of Artificial Intelligence
• Understanding the principles that make intelligence possible (in humans, animals, and artificial agents)
• Developing intelligent machines or agents (no matter whether they operate as humans do or not)
• Formalizing knowledge and mechanizing reasoning in all areas of human endeavor
• Making working with computers as easy as working with people
• Developing human-machine systems that exploit the complementarity of human and automated reasoning

  7. What is an intelligent agent
An intelligent agent is a system that:
• perceives its environment (which may be the physical world, a user via a graphical user interface, a collection of other agents, the Internet, or another complex environment);
• reasons to interpret perceptions, draw inferences, solve problems, and determine actions; and
• acts upon that environment to realize a set of goals or tasks for which it was designed.
[Diagram: the agent receives input from the user/environment through its sensors and acts back on it through its effectors/output.]

  8. What is an intelligent agent (cont.)
Humans, with multiple, conflicting drives, multiple senses, multiple possible actions, and complex, sophisticated control structures, are at the highest end of being an agent. At the low end of being an agent is a thermostat. It continuously senses the room temperature, starting or stopping the heating system whenever the current temperature leaves a pre-defined range. The intelligent agents we are concerned with are in between: clearly not as capable as humans, but significantly more capable than a thermostat.
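
To make the thermostat example concrete, here is a minimal Python sketch (an illustration added to this transcript, not part of the original slides); the temperature thresholds and readings are assumptions.

```python
# A thermostat as the "low end" of agency: it repeatedly senses the room
# temperature and starts or stops the heating system whenever the current
# temperature leaves a pre-defined range.

class Thermostat:
    def __init__(self, low=19.0, high=22.0):
        self.low = low        # below this temperature, start heating
        self.high = high      # above this temperature, stop heating
        self.heating = False

    def step(self, temperature):
        """One perceive-act cycle: read the temperature, decide, act."""
        if temperature < self.low:
            self.heating = True      # act: start the heating system
        elif temperature > self.high:
            self.heating = False     # act: stop the heating system
        return self.heating

agent = Thermostat()
for t in [18.0, 20.5, 23.0]:
    print(t, "->", "heating on" if agent.step(t) else "heating off")
```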

  9. What is an intelligent agent (cont.)
An intelligent agent interacts with a human or with other agents via some kind of agent-communication language. It need not blindly obey commands: it may have the ability to modify requests, ask clarification questions, or even refuse to satisfy certain requests. It can accept high-level requests indicating what the user wants, and can decide how to satisfy each request with some degree of independence or autonomy, exhibiting goal-directed behavior and dynamically choosing which actions to take, and in what sequence.

  10. What an intelligent agent can do
An intelligent agent can:
• collaborate with its user to improve the accomplishment of his or her tasks;
• carry out tasks on the user's behalf, employing some knowledge of the user's goals or desires;
• monitor events or procedures for the user;
• advise the user on how to perform a task;
• train or teach the user;
• help different users collaborate.

  11. Characteristic features of intelligent agents
• Knowledge representation and reasoning
• Transparency and explanations
• Ability to communicate
• Use of huge amounts of knowledge
• Exploration of huge search spaces
• Use of heuristics
• Reasoning with incomplete or conflicting data
• Ability to learn and adapt

  12. Knowledge representation and reasoning
An intelligent agent contains an internal representation of its external application domain, where relevant elements of the application domain (objects, relations, classes, laws, actions) are represented as symbolic expressions. This mapping allows the agent to reason about the application domain by performing reasoning processes in the domain model and transferring the conclusions back into the application domain.
[Diagram: an application domain in which a book is on a cup that is on a table is mapped into a model of the domain. The model consists of an ONTOLOGY (BOOK, CUP, and TABLE as subclasses of OBJECT; BOOK1, CUP1, and TABLE1 as their instances, with BOOK1 ON CUP1 and CUP1 ON TABLE1) and a RULE.]
RULE: If an object is on top of another object that is itself on top of a third object, then the first object is on top of the third object:
∀x,y,z ∈ OBJECT, (ON x y) & (ON y z) → (ON x z)
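
As an illustration of this mapping, the following Python sketch (added to this transcript; the course's actual representation language is richer) encodes the diagram's facts and applies the ON-transitivity rule to derive a conclusion in the domain model.

```python
# Facts mirroring the ontology instances and their ON relations.
facts = {
    ("INSTANCE-OF", "BOOK1", "BOOK"),
    ("INSTANCE-OF", "CUP1", "CUP"),
    ("INSTANCE-OF", "TABLE1", "TABLE"),
    ("ON", "BOOK1", "CUP1"),
    ("ON", "CUP1", "TABLE1"),
}

def apply_on_transitivity(facts):
    """For all x, y, z in OBJECT: (ON x y) & (ON y z) => (ON x z)."""
    ons = [(x, y) for (rel, x, y) in facts if rel == "ON"]
    derived = {("ON", x, z) for (x, y1) in ons for (y2, z) in ons if y1 == y2}
    return derived - facts   # only the genuinely new conclusions

print(apply_on_transitivity(facts))   # {('ON', 'BOOK1', 'TABLE1')}
```

The derived fact (ON BOOK1 TABLE1) is the model's conclusion that the book is on the table, transferred back into the application domain.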

  13. Separation of knowledge from control
The problem solving engine implements a general method of interpreting the input problem based on the knowledge from the knowledge base. The knowledge base consists of data structures that represent the objects from the application domain, the general laws governing them, the actions that can be performed with them, etc.
[Diagram: the intelligent agent receives input from the user/environment through its sensors and produces output through its effectors; inside, the problem solving engine is kept separate from the knowledge base, which contains an ontology and rules/cases/methods.]
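
A minimal sketch of this separation, again assuming a Python encoding: the engine below is a fully general interpreter, while everything specific to books, cups, and tables lives in the knowledge base handed to it.

```python
# General problem solving engine: repeatedly applies whatever rules it is
# given until no new facts can be derived. No domain knowledge appears here.

def forward_chain(facts, rules):
    facts = set(facts)
    while True:
        new = set()
        for rule in rules:
            new |= rule(facts) - facts   # keep only genuinely new facts
        if not new:
            return facts
        facts |= new

# Knowledge base: domain facts plus a domain rule (ON is transitive).
def on_transitivity(facts):
    ons = [(x, y) for (rel, x, y) in facts if rel == "ON"]
    return {("ON", x, z) for (x, y1) in ons for (y2, z) in ons if y1 == y2}

kb_facts = {("ON", "BOOK1", "CUP1"), ("ON", "CUP1", "TABLE1")}
print(forward_chain(kb_facts, [on_transitivity]))
```

Swapping in different facts and rules changes what the agent knows without touching the engine, which is the point of the slide.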

  14. Transparency and explanations The knowledge possessed by the agent and its reasoning processes should be understandable to humans. The agent should have the ability to give explanations of its behavior, what decisions it is making and why. Without transparency it would be very difficult to accept, for instance, a medical diagnosis performed by an intelligent agent. The need for transparency shows that the main goal of artificial intelligence is to enhance human capabilities and not to replace human activity.

  15. Ability to communicate
An agent should be able to communicate with its users or with other agents. The communication language should be as natural to the human users as possible; ideally, it should be unrestricted natural language. The problem of natural language understanding and generation is very difficult because of the ambiguity of words and sentences, and because of the paraphrases, ellipses, and references used in human communication.

  16. Ambiguity of natural language
Words and sentences have multiple meanings.
Diamond:
• a mineral consisting of nearly pure carbon in crystalline form, usually colorless, the hardest natural substance known;
• a gem or other piece cut from this mineral;
• a lozenge-shaped plane figure (◊);
• in baseball, the infield or the whole playing field.
Visiting relatives can be boring.
• To visit relatives can be boring.
• The relatives that visit us can be boring.
She told the man that she hated to run alone.
• She told the man: "I hate to run alone!"
• She told the man whom she hated: "Run alone!"

  17. Other difficulties with natural language processing
Paraphrase: the same meaning may be expressed by many sentences. Ann gave Bob a cat. Ann gave a cat to Bob. Bob was given a cat by Ann. A cat was given to Bob by Ann. What Ann gave Bob was a cat. Bob received a cat from Ann.
Ellipsis: use of sentences that appear ill-formed because they are incomplete; typically the missing parts have to be recovered from the previous sentences.
Bob: What is the length of the ship USS J.F. Kennedy? John: 1072. Bob: The beam? John: 130.
Reference: entities may be referred to without giving their names.
Bob: What is the length of the ship USS J.F. Kennedy? John: 1072. Bob: Who is her commander? John: Captain Nelson.

  18. Use of huge amounts of knowledge In order to solve "real-world" problems, an intelligent agent needs a huge amount of domain knowledge in its memory (knowledge base). Example of human-agent dialog: User: The toolbox is locked. Agent: The key is in the drawer. In order to understand such sentences and to respond adequately, the agent needs to have a lot of knowledge about the user, including the goals the user might want to achieve.

  19. Use of huge amounts of knowledge (example)
User: The toolbox is locked.
Agent (reasoning): Why is he telling me this? I already know that the box is locked. I know he needs to get in. Perhaps he is telling me because he believes I can help. Getting in requires a key. He knows it, and he knows I know it. The key is in the drawer. If he knew this, he would not tell me that the toolbox is locked. So he must not realize it. To make him know it, I can tell him. I am supposed to help him.
Agent: The key is in the drawer.

  20. Exploration of huge search spaces An intelligent agent usually needs to search huge spaces in order to find solutions to problems. Example 1: A search agent on the internet Example 2: A checkers playing agent

  21. Exploration of huge search spaces: illustration
Determining the best move with minimax.
[Diagram: a game tree alternating between my moves and the opponent's moves; terminal positions are labeled win, lose, or draw, and minimax backs these values up the tree to show that the best first move leads to a win.]
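
A minimal minimax sketch (an illustration added to this transcript), assuming a tiny game tree in which values are scored from my point of view: win = 1, draw = 0, lose = -1.

```python
# Minimax over a tiny game tree represented as nested lists; strings are
# terminal positions. At my turn I take the best child value, at the
# opponent's turn the worst.

VALUE = {"win": 1, "draw": 0, "lose": -1}

def minimax(node, my_turn):
    """Return the backed-up value of a position."""
    if isinstance(node, str):                 # terminal position
        return VALUE[node]
    values = [minimax(child, not my_turn) for child in node]
    return max(values) if my_turn else min(values)

# Two moves are available to me; the opponent then chooses a terminal.
tree = [["win", "lose"], ["win", "win"]]
print(minimax(tree, my_turn=True))            # 1: the second move forces a win
```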

  22. Exploration of huge search spaces: illustration
The tree of possibilities is far too large to be fully generated and searched backward from the terminal nodes for an optimal move.
Size of the search space: a complete game tree for checkers has been estimated as having 10^40 nonterminal nodes. If one assumes that these nodes could be generated at a rate of 3 billion per second, the generation of the whole tree would still require around 10^21 centuries! Checkers is far simpler than chess which, in turn, is generally far simpler than business competitions or military games.

  23. Use of heuristics Intelligent agents generally attack problems for which no algorithm is known or feasible, problems that require heuristic methods. A heuristic is a rule of thumb, strategy, trick, simplification, or any other kind of device which drastically limits the search for solutions in large problem spaces. Heuristics do not guarantee optimal solutions. In fact they do not guarantee any solution at all. A useful heuristic is one that offers solutions which are good enough most of the time.

  24. Use of heuristics: illustration
Heuristic function for board position evaluation:
w1·f1 + w2·f2 + w3·f3 + …
where the wi are real-valued weights and the fi are board features (e.g., center control, total mobility, relative exchange advantage).
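
A sketch of such an evaluation function in Python; the feature values and weights below are illustrative placeholders, not tuned checkers parameters.

```python
# Linear board evaluation: w1*f1 + w2*f2 + w3*f3 + ...
# Higher scores mean positions judged better for the player to move.

def evaluate(features, weights):
    return sum(w * f for w, f in zip(weights, features))

features = [3, 12, 1]       # e.g. center control, total mobility, exchange advantage
weights = [0.5, 0.1, 2.0]   # hypothetical weights
print(evaluate(features, weights))   # 4.7
```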

  25. Reasoning with incomplete data
The ability to provide some solution even if not all the data relevant to the problem is available at the time a solution is required.
Examples: The reasoning of a physician in an intensive care unit ("If the EKG test results are not available, but the patient is suffering chest pains, I might still suspect a heart problem."). Planning a military course of action.

  26. Reasoning with conflicting data
The ability to take into account data items that are more or less in contradiction with one another (conflicting data, or data corrupted by errors). Example: The reasoning of a military intelligence analyst who has to cope with the deception actions of the enemy.

  27. Ability to learn
The ability to improve its competence and efficiency. An agent improves its competence if it learns to solve a broader class of problems and to make fewer mistakes in problem solving. An agent improves its efficiency if it learns to solve the problems from its area of competence more efficiently (for instance, by using less time or space).

  28. Illustration: concept learning
Learn the concept of an ill cell by comparing examples of ill cells with examples of healthy cells, and by creating a generalized description of the similarities between the ill cells.
Concept examples:
+ ((1 dark) (1 dark))
+ ((1 light) (2 dark))
+ ((1 dark) (2 dark))
- ((1 light) (2 light))
- ((1 dark) (2 light))
Learned concept: ((1 ?) (? dark))

  29. Ability to learn: classification
The learned "ill cell" concept ((1 ?) (? dark)) is used to diagnose other cells:
Is this cell ill? ((1 light) (1 light)) → No
Is this cell ill? ((1 dark) (1 light)) → Yes
This is an example of reasoning with incomplete information.
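
One way to realize these two steps in Python, assuming a simple strategy that turns every feature on which the positive examples disagree into a wildcard (a simplification of the learning methods covered later in the course):

```python
# Each cell is flattened to (nuclei1, color1, nuclei2, color2).

def generalize(positives):
    """Keep a feature value shared by all positive examples, else use '?'."""
    concept = list(positives[0])
    for ex in positives[1:]:
        concept = [c if c == e else "?" for c, e in zip(concept, ex)]
    return concept

def matches(concept, cell):
    """A cell matches if it agrees with every non-wildcard feature."""
    return all(c == "?" or c == f for c, f in zip(concept, cell))

positives = [("1", "dark", "1", "dark"),
             ("1", "light", "2", "dark"),
             ("1", "dark", "2", "dark")]
concept = generalize(positives)
print(concept)                                         # ['1', '?', '?', 'dark']
print(matches(concept, ("1", "light", "1", "light")))  # False: diagnosed healthy
```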

  30. Extended agent architecture
The learning engine implements methods for extending and refining the knowledge in the knowledge base.
[Diagram: the intelligent agent now contains both a problem solving engine and a learning engine, each connected to the knowledge base (ontology plus rules/cases/methods), with sensors receiving input from, and effectors sending output to, the user/environment.]

  31. Sample tasks for intelligent agents Planning: Finding a set of actions that achieve a certain goal. Example: Determine the actions that need to be performed in order to repair a bridge. Critiquing: Expressing judgments about something according to certain standards. Example: Critiquing a military course of action (or plan) based on the principles of war and the tenets of army operations. Interpretation: Inferring situation description from sensory data. Example: Interpreting gauge readings in a chemical process plant to infer the status of the process.

  32. Sample tasks for intelligent agents (cont.) Prediction: Inferring likely consequences of given situations. Examples: Predicting the damage to crops from some type of insect. Estimating global oil demand from the current geopolitical world situation. Diagnosis: Inferring system malfunctions from observables. Examples: Determining the disease of a patient from the observed symptoms. Locating faults in electrical circuits. Finding defective components in the cooling system of nuclear reactors. Design: Configuring objects under constraints. Example: Designing integrated circuit layouts.

  33. Sample tasks for intelligent agents (cont.) Monitoring: Comparing observations to expected outcomes. Examples: Monitoring instrument readings in a nuclear reactor to detect accident conditions. Assisting patients in an intensive care unit by analyzing data from the monitoring equipment. Debugging: Prescribing remedies for malfunctions. Examples: Suggesting how to tune a computer system to reduce a particular type of performance problem. Choosing a repair procedure to fix a known malfunction in a locomotive. Repair: Executing plans to administer prescribed remedies. Example: Tuning a mass spectrometer, i.e., setting the instrument's operating controls to achieve optimum sensitivity consistent with correct peak ratios and shapes.

  34. Sample tasks for intelligent agents (cont.) Instruction: Diagnosing, debugging, and repairing student behavior. Examples: Teaching students a foreign language. Teaching students to troubleshoot electrical circuits. Teaching medical students in the area of antimicrobial therapy selection. Control: Governing overall system behavior. Example: Managing the manufacturing and distribution of computer systems. Any useful task: Information fusion. Information assurance. Travel planning. Email management.

  35. 3. Sample intelligent agent: Presentation and demo
• Agent task: Course of action critiquing
• Knowledge representation
• Problem solving
• Demo
• Why are intelligent agents important

  36. Critiquing Critiquing means expressing judgments about something according to certain standards. Example: Critique various aspects of a military Course of Action, such as its viability (its suitability, feasibility, acceptability and completeness), its correctness (which considers the array of forces, the scheme of maneuver, and the command and control), and its strengths and weaknesses with respect to the principles of war and the tenets of army operations.

  37. Sample agent: Course of Action critiquer
Source: Challenge problem for DARPA's High Performance Knowledge Base (HPKB) program (FY97-99).
Background: A military course of action (COA) is a preliminary outline of a plan for how a military unit might attempt to accomplish a mission. After receiving orders to plan for a mission, a commander and staff analyze the mission, conceive and evaluate potential COAs, select a COA, and prepare a detailed plan to accomplish the mission based on the selected COA. The general practice is for the staff to generate several COAs for a mission and then to compare those COAs based on many factors, including the situation, the commander's guidance, the principles of war, and the tenets of army operations. The commander makes the final decision on which COA will be used to generate his or her plan, based on the recommendations of the staff and on his or her own experience with the same factors considered by the staff.
Agent task: Identify strengths and weaknesses in a COA, based on the principles of war and the tenets of army operations.

  38. COA Example – the sketch Graphical depiction of a preliminary plan. It includes enough of the high level structure and maneuver aspects of the plan to show how the actions of each unit fit together to accomplish the overall purpose.

  39. COA Example – the statement Explains what the units will do to accomplish the assigned mission.

  40. COA critiquing task
Answer each of the following questions:
• To what extent does the COA conform to the principles of war, which provide general guidance for the conduct of war at the strategic, operational, and tactical levels?
• To what extent does the COA conform to the tenets of army operations, which describe characteristics of successful operations?

  41. The Principle of Surprise (from FM100-5) Strike the enemy at a time or place or in a manner for which he is unprepared. Surprise can decisively shift the balance of combat power. By seeking surprise, forces can achieve success well out of proportion to the effort expended. Rapid advances in surveillance technology and mass communication make it increasingly difficult to mask or cloak large-scale marshaling or movement of personnel and equipment. The enemy need not be taken completely by surprise but only become aware too late to react effectively. Factors contributing to surprise include speed, effective intelligence, deception, application of unexpected combat power, operations security (OPSEC), and variations in tactics and methods of operation. Surprise can be in tempo, size of force, direction or location of main effort, and timing. Deception can aid the probability of achieving surprise.

  42. Knowledge representation: object ontology The ontology defines the objects from an application domain.

  43. Knowledge representation: problem solving rules
A rule is an ontology-based representation of an elementary problem solving process.
R$ASWCER-001
IF the task to accomplish is:
  ASSESS-SECURITY-WRT-COUNTERING-ENEMY-RECONNAISSANCE FOR-COA ?O1
Question: Is an enemy recon unit present in ?O1?
Answer: Yes, the enemy unit ?O2 is performing the action ?O3, which is a reconnaissance action.
Condition:
  ?O1 IS COA-SPECIFICATION-MICROTHEORY
  ?O2 IS MODERN-MILITARY-UNIT--DEPLOYABLE
      SOVEREIGN-ALLEGIANCE-OF-ORG ?O4
      TASK ?O3
  ?O3 IS INTELLIGENCE-COLLECTION--MILITARY-TASK
  ?O4 IS RED--SIDE
THEN accomplish the task:
  ASSESS-SECURITY-WHEN-ENEMY-RECON-IS-PRESENT FOR-COA ?O1 FOR-UNIT ?O2 FOR-RECON-ACTION ?O3
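
To show how such a rule operates, here is a hypothetical Python sketch (the actual HPKB agent machinery is far richer): the condition is checked against a tuple-encoded knowledge base, binding ?O2 and ?O3, and the IF task is reduced to the THEN task.

```python
# Knowledge base fragment encoded as (relation, arg1, arg2) tuples.
facts = {
    ("IS", "COA411", "COA-SPECIFICATION-MICROTHEORY"),
    ("IS", "RED-CSOP1", "MODERN-MILITARY-UNIT--DEPLOYABLE"),
    ("SOVEREIGN-ALLEGIANCE-OF-ORG", "RED-CSOP1", "RED--SIDE"),
    ("TASK", "RED-CSOP1", "SCREEN1"),
    ("IS", "SCREEN1", "INTELLIGENCE-COLLECTION--MILITARY-TASK"),
}

def reduce_task(coa):
    """If the rule's condition holds, return the instantiated subtask."""
    if ("IS", coa, "COA-SPECIFICATION-MICROTHEORY") not in facts:
        return None
    for (rel, unit, action) in facts:
        if (rel == "TASK"
                and ("IS", action, "INTELLIGENCE-COLLECTION--MILITARY-TASK") in facts
                and ("SOVEREIGN-ALLEGIANCE-OF-ORG", unit, "RED--SIDE") in facts):
            # Bindings: ?O1 = coa, ?O2 = unit, ?O3 = action, ?O4 = RED--SIDE
            return ("ASSESS-SECURITY-WHEN-ENEMY-RECON-IS-PRESENT",
                    "FOR-COA", coa, "FOR-UNIT", unit, "FOR-RECON-ACTION", action)
    return None

print(reduce_task("COA411"))
```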

  44. Illustration of the problem solving process
To what extent does COA411 conform to the Principle of Surprise?
Assess COA wrt Principle of Surprise for-coa COA411
  Question: Does the COA assign appropriate surprise and deception actions? I consider enemy recon.
Assess surprise wrt countering enemy reconnaissance for-coa COA411
  Question: Is an enemy reconnaissance unit present? Yes, RED-CSOP1, which is performing the reconnaissance action SCREEN1.
Assess surprise when enemy recon is present for-coa COA411 for-unit RED-CSOP1 for-recon-action SCREEN1
  Question: Is the enemy reconnaissance unit destroyed? Yes, RED-CSOP1 is destroyed by DESTROY1.
Report strength in surprise because of countering enemy recon for-coa COA411 for-unit RED-CSOP1 for-recon-action SCREEN1 for-action DESTROY1 with-importance high
Conclusion: There is a strength with respect to surprise in COA411 because it contains aggressive security / counter-reconnaissance plans, destroying enemy intelligence collection units and activities. Intelligence collection by RED-CSOP1 will be disrupted by its destruction by DESTROY1.

  45. COA critiquing demo

  46. Why are intelligent agents important
Humans have limitations that agents may alleviate (e.g., memory for details that is not affected by stress, fatigue, or time constraints). Humans and agents could engage in mixed-initiative problem solving that takes advantage of their complementary strengths and reasoning styles.

  47. Why are intelligent agents important (cont.)
The evolution of information technology is making intelligent agents essential components of our future systems and organizations. Our future computers and most of our other systems and tools will gradually become intelligent agents. We have to be able to deal with intelligent agents either as users, or as developers, or as both.

  48. Intelligent agents: Conclusion Intelligent agents are systems which can perform tasks requiring knowledge and heuristic methods. Intelligent agents are helpful, enabling us to do our tasks better. Intelligent agents are necessary to cope with the increasing challenges of the information society.

  49. Recommended reading
Tecuci G., Building Intelligent Agents, Academic Press, 1998, pp. 1-12.
Tecuci G., Boicu M., Bowman M., and Marcu D., with a commentary by Murray Burke, "An Innovative Application from the DARPA Knowledge Bases Programs: Rapid Development of a High Performance Knowledge Base for Course of Action Critiquing," invited paper for the special IAAI issue of AI Magazine, Vol. 22, No. 2, Summer 2001, pp. 43-61. http://lalab.gmu.edu/publications/data/2001/COA-critiquer.pdf
