
Flexible Reasoning with Functional Models

Flexible Reasoning with Functional Models. J. William Murdock Intelligent Decision Aids Group Navy Center for Applied Research in Artificial Intelligence Naval Research Laboratory, Code 5515 Washington, DC 20375 bill@murdocks.org http://bill.murdocks.org.


Presentation Transcript


  1. Flexible Reasoning with Functional Models J. William Murdock Intelligent Decision Aids Group Navy Center for Applied Research in Artificial Intelligence Naval Research Laboratory, Code 5515 Washington, DC 20375 bill@murdocks.org http://bill.murdocks.org Presentation at Ohio State University: April 15, 2003

  2. General Motivation • Complex dynamic environments demand quick and flexible behavior. • Specialized software is quick but not flexible. • Generative planning, reinforcement learning, etc. are very flexible but very slow. • Functional models provide the power of specialized software and the flexibility of AI.

  3. Specific Objectives (Outline) • Retrospective adaptation • A system encounters a new constraint during execution. • It uses a model of itself to redesign itself. • Proactive adaptation • A system is told to perform a new, unknown task. • It must redesign itself before it can execute. • Models for existing, similar tasks may be adapted. • Or new models may be built from scratch. • Self-explanation • A system uses models of its reasoning process and products to explain itself and justify its results. • Explanation of threats • A system has a model of enemy behavior. • It uses that model to explain a potential threat and help a user decide whether that threat is genuine.

  4. TMK (Task-Method-Knowledge) [slide diagram: accessing URLs, servers, documents, etc.; Request, Receive, Store; Remote/Local access] • TMK models encode knowledge about a process (e.g., a computer program, a military activity). • TMK encodes: • Tasks: functional specification / requirements and results • Methods: behavioral specification / composition and control • Knowledge: domain concepts and relations

  5. Partial History of TMK • Key influences: Functional Representation (Sembugamoorthy & Chandrasekaran 1986); Generic Tasks (Chandrasekaran 1986); OSU SBF models (Goel, Bhatta, & Stroulia 1997); ZD (Liver & Allemang 1995) • TMK projects (Georgia Tech): Autognostic, failure-driven learning (Stroulia & Goel 1995); Interactive Kritik, self-explanation (Goel & Murdock 1996); SIRRINE, retrospective adaptation (Murdock & Goel 2001); REM, proactive adaptation (Murdock 2001); ToRQUE, scientific cognition (Griffith 1999); ... • TMK projects (NRL): AHEAD, explanation of threats (Murdock, Aha, & Breslow 2003) • DiscoveryMachine

  6. Georgia Tech SIRRINE: Retrospective Adaptation • Self-Improving Reflective Reasoner Integrating Noteworthy Experience • A shell for adaptive software systems. • Systems encoded in SIRRINE contain a TMK model of themselves. • The model is used to automate adaptation in response to new constraints for a known task.

  7. SIRRINE Functional Architecture [diagram: Execution, Credit Assignment, and Adaptation operating over the Model and the Trace]

  8. Meeting Scheduling Agent • An example agent that schedules a regular weekly meeting. • Given a length of time and a list of schedules, it produces a time slot that fits into those schedules. • It has a set of time constraints that require that meetings be held from 9AM to 5PM, Monday through Friday.
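
The agent's core search can be sketched in Python (the original agent was implemented in LISP); the 30-minute slot granularity, data shapes, and names below are illustrative assumptions, not the actual implementation.

```python
# Hypothetical sketch of the scheduling agent's search: find the first weekly
# slot of the requested length that is free in every schedule and satisfies
# the 9AM-5PM, Monday-Friday constraint. All names are illustrative.

DAYS = ["Mon", "Tue", "Wed", "Thu", "Fri"]
DAY_START, DAY_END = 9 * 60, 17 * 60  # minutes since midnight

def find_slot(minutes_needed, busy):
    """busy: set of (day, start_minute) pairs marking occupied 30-min blocks."""
    for day in DAYS:
        start = DAY_START
        while start + minutes_needed <= DAY_END:
            blocks = range(start, start + minutes_needed, 30)
            if all((day, b) not in busy for b in blocks):
                return (day, start)
            start += 30
    return None  # mirrors the agent's failure when no slot fits

# A 90-minute meeting with Monday fully blocked:
busy = {("Mon", b) for b in range(DAY_START, DAY_END, 30)}
print(find_slot(90, busy))  # ('Tue', 540), i.e., Tuesday 9:00AM
```

The hard-coded 9AM-5PM bounds are exactly the constraint that makes the 4:30PM-6:00PM example below fail, which is what triggers adaptation.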

  9. Diagrams of TMK Models

  10. Model Animation Key

  11. Example Problem • Research group needs to schedule a 90 minute meeting. There are no available 90 minute slots between 9AM and 5PM on Mondays through Fridays. • The agent fails for this problem. Feedback is given stating that the meeting should be held on Tuesdays from 4:30PM to 6:00PM. • Credit assignment process identifies the find-next-slot task as one which could have produced the desired result. • Modification process alters that task.

  12. Knowledge for Credit Assignment • Feedback: states what the overall results should have been. • Trace: states what the results actually were; also used to localize the failure. • Models: indicate differences between what the results should have been and what they were; drive the modification process.

  13. Trace Key

  14. Credit Assignment Process • Heuristics guide search through trace • Causal proximity: temporally closest to end first • Functional abstraction: most abstract first • Model used at each step of the search • When a potential contradiction is found, a particular type of credit is assigned • task-does-not-produce-value, method-does-not-produce-value, primitive-generates-invalid-state, primitive-fails • Result: Localization of credit
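
A minimal sketch of how the two heuristics could order trace inspection, assuming each trace element carries an end time and a task/method/primitive kind; both the representation and the names are assumptions for illustration.

```python
# Sketch of the search ordering during credit assignment: prefer elements
# temporally closest to the end of the trace (causal proximity), breaking
# ties by abstraction level, most abstract first (functional abstraction).

ABSTRACTION = {"task": 0, "method": 1, "primitive": 2}  # lower = more abstract

def search_order(trace):
    """trace: list of (name, kind, end_time) elements from an execution trace."""
    return sorted(trace, key=lambda s: (-s[2], ABSTRACTION[s[1]]))

trace = [("find-next-slot", "primitive", 10),
         ("slot-search", "method", 10),
         ("schedule-meeting", "task", 10),
         ("read-schedules", "primitive", 3)]
print([name for name, _, _ in search_order(trace)])
# ['schedule-meeting', 'slot-search', 'find-next-slot', 'read-schedules']
```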

  15. Modification • Library of modification strategies indexed by the type of credit assigned and characteristics of the task or method • In the example: • Type of credit: task-does-not-produce-value • Localized to: the find-next-slot task (a primitive task implemented by a LISP procedure) • SIRRINE selects the fixed value production by task decomposition strategy.

  16. Fixed Value Production by Task Decomposition • Existing task is divided into two methods: a base method and an alternate method. • The base method invokes a single task whose behavior is identical to the existing task. • The alternate method invokes a new task: • Primitive that produces a single fixed value • Example: 4:30PM to 6:00PM on Tuesdays • Applicability conditions on the alternate method require that it be invoked only in the “same” situation. • “Same” defined by bindings in the trace that are referenced in the model of the existing task.
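
The strategy above can be sketched as a wrapper, assuming the feedback supplies the fixed value and the trace supplies the bindings that define the "same" situation; all names are hypothetical and the real system rewrites the TMK model rather than wrapping a function.

```python
# Hedged sketch of "fixed value production by task decomposition": the
# original task becomes the base method; an alternate method produces the
# fixed value from feedback, but only when the input bindings match those
# recorded in the failing trace. Names are illustrative.

def decompose_with_fixed_value(task_fn, trace_bindings, fixed_value):
    def adapted(**bindings):
        if bindings == trace_bindings:      # "same" situation as the trace
            return fixed_value              # alternate method
        return task_fn(**bindings)          # base method: original behavior
    return adapted

def find_next_slot(minutes):                # stand-in for the LISP primitive
    return None if minutes > 60 else ("Mon", "9:00")

adapted = decompose_with_fixed_value(
    find_next_slot, {"minutes": 90}, ("Tue", "16:30"))
print(adapted(minutes=90))   # fixed value for the failing case
print(adapted(minutes=30))   # original behavior otherwise
```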

  17. Model Zoom Key

  18. Relevant Portions of the Model Key

  19. Modification to the Model Key

  20. Example: Web Browsing Agent • A mock-up of web browsing software • Based on Mosaic for X Windows, version 2.4 • Imitates not only the behavior but also the internal process and information of Mosaic 2.4 [diagram: file types ps, html, pdf, txt]

  21. Example: PDF Viewer • The web agent is asked to browse the URL for a PDF file. • Mosaic 2.4 does not have any information about external viewers for PDF. • The system cannot display the file. • The user provides feedback indicating the command for the correct viewer. • Adaptation strategy: Fixed Value Production by Primitive Modification

  22. SIRRINE User Interface

  23. Georgia Tech REM: Proactive Adaptation • Reflective Evolutionary Mind • Like SIRRINE, REM is a shell for adaptive software systems using TMK models. • Unlike SIRRINE, REM is able to address new tasks. • It can retrieve and adapt methods for known tasks, or it can build new methods from scratch. • Off-the-shelf generative planning and reinforcement learning techniques are used to build new methods

  24. REM Functional Architecture [diagram: Execution, Retrieval, Credit Assignment, and Adaptation operating over the Model and the Trace]

  25. Task-Method-Knowledge Language (TMKL) • A new, powerful formalization of TMK developed for REM. • Uses LOOM, a popular off-the-shelf knowledge representation framework: concepts, relations, etc. • REM models not only the tasks of the domain but also itself in TMKL.

  26. Sample TMKL Task

  (define-task communicate-with-www-server
    :input (input-url)
    :output (server-reply)
    :makes (:and (document-at-location (value server-reply) (value input-url))
                 (document-at-location (value server-reply) local-host))
    :by-mmethod (communicate-with-server-method))

  27. Sample TMKL Method

  (define-mmethod external-display
    :provided (:not (internal-display-tag (value server-tag)))
    :series (select-display-command
             compile-display-command
             execute-display-command))

  28. Decision Making in REM: Q-Learning • Popular, simple form of reinforcement learning. • In each state, each possible decision is assigned an estimate of its potential value (“Q”). • For each decision, preference is given to higher Q values. • Each decision is reinforced, i.e., its Q value is altered based on the results of the actions. • These results include actual success or failure and the Q values of the next available decisions.
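
The update rule described above is standard Q-learning; this sketch uses assumed learning-rate and discount values, which the slides do not specify.

```python
# Textbook Q-learning update, sketching the decision mechanism the slide
# describes: each (state, decision) pair holds a Q estimate, reinforced from
# the observed reward plus the best Q value among the next state's options.

from collections import defaultdict

Q = defaultdict(float)                # (state, decision) -> Q estimate, init 0
ALPHA, GAMMA = 0.5, 0.9               # learning rate and discount (assumed)

def reinforce(state, decision, reward, next_state, next_decisions):
    best_next = max((Q[(next_state, d)] for d in next_decisions), default=0.0)
    Q[(state, decision)] += ALPHA * (reward + GAMMA * best_next
                                     - Q[(state, decision)])

reinforce("s0", "use-method-A", 1.0, "done", [])
print(Q[("s0", "use-method-A")])  # 0.5 after one positive reinforcement
```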

  29. Q-Learning in REM • Decisions are made for method selection and for selecting new transitions within a method. • A decision state is a point in the reasoning (i.e., task, method) plus the set of all decisions that have been made in the past. • Initial Q values are set to 0. • REM either decides on the option with the highest Q value or randomly selects an option with probabilities weighted by Q value (configurable). • A decision receives positive reinforcement when it leads immediately (without any other decisions) to the success of the overall task.
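
The configurable selection step might look like this sketch; the option and Q-table shapes are assumptions, not REM's actual data structures.

```python
import random

# Sketch of the two selection modes the slide names: greedy pick of the
# highest-Q option, or random sampling with probabilities weighted by Q.

def select(options, q, greedy=True, rng=random):
    if greedy:
        return max(options, key=lambda o: q.get(o, 0.0))
    weights = [max(q.get(o, 0.0), 1e-6) for o in options]  # avoid all-zero
    return rng.choices(options, weights=weights, k=1)[0]

q = {"method-A": 0.8, "method-B": 0.2}
print(select(["method-A", "method-B"], q))  # greedy pick: method-A
```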

  30. Example:Disassembly and Assembly • Software agent for disassembly operating in the domain of cameras • Information about cameras • Information about relevant actions • e.g., pulling, unscrewing, etc. • Information about the disassembly process • e.g., decide how to disconnect subsystems from each other and then decide how to disassemble those subsystems separately. • Agent now needs to assemble a camera

  31. Adaptation Using Relation Mapping • Requires a model for an existing agent which has a task similar to the desired task. • e.g., disassembly is similar to assembly • Effects (:makes slot) of the two tasks must match except for one term, and that one term must be connected by a single relation. • e.g., disassembly produces a disassembled state • assembly produces an assembled state • (inverse-of disassembled assembled) is known. • Uses the relation to alter tasks and methods
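
The matching condition can be illustrated with a sketch that flattens each :makes condition to a list of terms; this is an assumption for illustration, not REM's actual representation.

```python
# Sketch of the relation-mapping applicability test: the effects of the two
# tasks must be identical except for a single term, and a known relation
# (here inverse-of) must connect the two differing terms.

INVERSE_OF = {("disassembled", "assembled"), ("assembled", "disassembled")}

def mappable(makes_a, makes_b):
    if len(makes_a) != len(makes_b):
        return False
    diffs = [(a, b) for a, b in zip(makes_a, makes_b) if a != b]
    return len(diffs) == 1 and diffs[0] in INVERSE_OF

print(mappable(["device", "disassembled"], ["device", "assembled"]))  # True
```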

  32. Pieces of ADDAM Key to Disassembly-to-Assembly [task/method diagram: Disassemble; Plan Then Execute Disassembly; Adapt Disassembly Plan; Execute Plan; Topology Based Plan Adaptation; Hierarchical Plan Execution; Make Plan Hierarchy; Map Dependencies; Select Next Action; Execute Action; Select Dependency; Assert Dependency; Make Equivalent Plan Nodes Method; Make Equivalent Plan Node; Add Equivalent Plan Node]

  33. New Adapted Task in Disassembly-to-Assembly [task/method diagram: Assemble; COPIED Plan Then Execute Disassembly; COPIED Adapt Disassembly Plan; COPIED Execute Plan; COPIED Topology Based Plan Adaptation; COPIED Hierarchical Plan Execution; COPIED Make Plan Hierarchy; COPIED Map Dependencies; Select Next Action; INSERTED Inversion Task 2; Execute Action; COPIED Select Dependency; INVERTED Assert Dependency; COPIED Make Equivalent Plan Nodes Method; COPIED Make Equivalent Plan Node; INSERTED Inversion Task 1; COPIED Add Equivalent Plan Node]

  34. Task: Assert Dependency

  Before:
  define-task Assert-Dependency
    input: target-before-node, target-after-node
    asserts: (node-precedes (value target-before-node) (value target-after-node))

  After:
  define-task Mapped-Assert-Dependency
    input: target-before-node, target-after-node
    asserts: (node-follows (value target-before-node) (value target-after-node))

  35. Task: Make Equivalent Plan Node

  define-task make-equivalent-plan-node
    input: base-plan-node, parent-plan-node, equivalent-topology-node
    output: equivalent-plan-node
    makes: (:and (plan-node-parent (value equivalent-plan-node) (value parent-plan-node))
                 (plan-node-object (value equivalent-plan-node) (value equivalent-topology-node))
                 (:implies (plan-action (value base-plan-node))
                           (type-of-action (value equivalent-plan-node)
                                           (type-of-action (value base-plan-node)))))
    by procedure ...

  36. Task: Inserted-Reversal-Task

  define-task inserted-reversal-task
    input: equivalent-plan-node
    asserts: (type-of-action (value equivalent-plan-node)
                             (inverse-of (type-of-action (value equivalent-plan-node))))

  37. Adaptation Using Generative Planning • Does not require a pre-existing model • Requires operators and a set of facts (initial state) • Invokes Graphplan • Operators = Those primitive tasks known to the agent which can be translated into Graphplan’s operator language • Facts = Known assertions which involve relations referred to by the operators • Goal = The :makes condition of the main task • Translates the plan into a more general method by turning specific objects into parameters & propagating • Stores the method for later reuse
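
The parameterization step (turning specific objects into parameters and propagating the substitution through every plan step) can be sketched as follows; the plan representation and names are assumptions for illustration.

```python
# Sketch of generalizing a concrete Graphplan-style plan into a reusable
# method: each specific object becomes a parameter, and the substitution is
# propagated through every step of the plan. Names are hypothetical.

def generalize(plan, objects):
    """plan: list of (action, arg_tuple); objects: objects to parameterize."""
    params = {obj: f"?p{i}" for i, obj in enumerate(objects)}
    return [(action, tuple(params.get(a, a) for a in args))
            for action, args in plan]

plan = [("unscrew", ("screw-1", "cover")), ("remove", ("cover",))]
print(generalize(plan, ["screw-1", "cover"]))
# [('unscrew', ('?p0', '?p1')), ('remove', ('?p1',))]
```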

  38. Adaptation Using Situated Learning • Does not require a pre-existing model • Does not even require preconditions and postconditions of the operators • Creates a method that: • Performs any action • Checks whether the desired state is achieved • If not, loops to the start. • During execution, all decision making is done using the Q-learning policy. • Over time, the Q-learning mechanism selects actions that tend to lead to desirable results.
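
The looping method that situated learning creates can be sketched like this; a fixed stand-in policy replaces the Q-learning policy for determinism, and all names and the toy domain are illustrative assumptions.

```python
import random

# Sketch of the method structure situated learning builds: perform any action
# chosen by the policy, check whether the desired state is achieved, and loop
# back if not. REM's real policy is Q-learning; random.choice (or the fixed
# policy below) stands in for it here.

def situated_method(actions, goal_reached, state,
                    policy=random.choice, max_steps=1000):
    for _ in range(max_steps):
        if goal_reached(state):          # check whether the goal state holds
            return state
        state = policy(actions)(state)   # perform any action
    return state

# Toy domain: reach a counter value of 5; a fixed policy keeps incrementing.
actions = [lambda s: s + 1, lambda s: s - 1]
result = situated_method(actions, lambda s: s == 5, 0,
                         policy=lambda acts: acts[0])
print(result)  # 5
```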

  39. ADDAM Example: Layered Roof

  40. Roof Assembly [results chart comparing Situated Learning, Generative Planning, and Relation Mapping]

  41. Modified Roof Assembly: No Conflicting Goals [results chart comparing Situated Learning, Relation Mapping, and Generative Planning]

  42. Combining Proactive & Retrospective Adaptation in REM • Proactive adaptation techniques have been the primary focus of REM • However, REM also has facilities for retrospective adaptation • inherited from SIRRINE • REM can use SIRRINE-style analysis of traces to localize an opportunity for adaptation to a particular subtask. • It can then use a proactive technique to build a new method for that subtask.

  43. Explanation • As I have discussed, TMK models are useful for automated adaptation. • This implies that they encode important knowledge about processes. • This suggests that TMK may be an effective mechanism for explaining processes to human users. • Some TMK research has investigated this idea.

  44. Georgia Tech Interactive Kritik: Self-Explanation • Objective: Interactive explanation and justification for conceptual design of physical devices • Input: Functional specification for a device • Output: Model of a device that meets the specification and a graphical explanation of how the device was designed • Knowledge: Library of functional models of devices and a graphical model of the design process • Reasoning: Kritik2 (Goel, Bhatta, & Stroulia, 1997) performs case-based design. Interactive Kritik adds graphical presentation of the reasoning process and product.

  45. Part of a behavior of an acid cooler [diagram: water at 25° heated to 50°] • By clicking on the transition, a user can jump to the part of the acid-cooling behavior that is enabled by the heating of the water.

  46. The top-level task of Interactive Kritik is design. That task is accomplished by a reasoning method. The method decomposes the task into subtasks. Some subtasks have methods that further decompose them.

  47. NRL DARPA/EELD AHEAD: Explanation of Threats • Analogical Hypothesis Elaborator for Activity Detection • Objective: Helping intelligence analysts understand and trust hypotheses about detected hostile activity • e.g., organized crime, terrorism • Input: Hypothesis about hostile activity & related evidence. • Output: Arguments for and/or against the hypothesis. • Knowledge: TMK models encode how hostile actions are performed and what they are intended to accomplish. • Reasoning: First, MAC/FAC (Forbus & Gentner 1991) maps the hypothesis to a TMK model. Then, TMK simulation guides analysis of hypotheses using evidence.

  48. AHEAD Functional Architecture [diagram; a color key distinguishes components of AHEAD, external systems, and inputs/outputs] • Inputs: hypothesis (pattern match, from link discovery tools) and evidence (structured data, from evidence extraction tools and existing knowledge bases). • The FIRE Analogy Server from Northwestern University is used to map hypotheses to TMK models (qualitative, functional) of asymmetric hostile activities, including the intended effects of actions. • A trace extractor produces a model trace; the relationship between the evidence and the hypothesis is analyzed in the context of the model. • Output: an argument (for and/or against) that elaborates the given hypothesis. The argument generator's output is presented to the user directly via a GUI and is available to other software the analyst uses (TIA, assorted EELD tools, etc.).

  49. AHEAD User Interface [screenshot callouts] • Statement of the hypothesis (input) • Red and black icons indicate qualitative certainty for arguments and evidence • Arguments against the hypothesis have missing or contradictory evidence • Arguments for the hypothesis are backed by evidence • Hyperlinks to original sources for evidence • A key allows users to quickly see what each icon means

  50. Preliminary User Study • Partial implementation; hand-made output files. • Tested the interface and content of AHEAD. • In some trials, users were given the hypothesis & evidence (i.e., the inputs to AHEAD). • In other trials, users were also given the arguments for and against. • Users with arguments showed better performance. • The difference in judgment error was statistically significant.
