Disciple Reasoning and Learning Agents
Presentation Transcript
Slide 1

Disciple Reasoning and Learning Agents

Gheorghe Tecuci

with Mihai Boicu, Dorin Marcu, Bogdan Stanescu,

Cristina Boicu, Marcel Barbulescu

Learning Agents Center

George Mason University

Symposium on Reasoning and Learning in Cognitive Systems

Stanford, CA, 20-21 May 2004


Slide 2

Overview

Research Problem, Approach, and Application

Problem Solving Method: Task Reduction

Learnable Knowledge Representation: Plausible Version Spaces

Multistrategy Learning during Problem Solving

Agent Development Experiments

Future Directions: Life-long Continuous Learning

Teaching and Learning Demo

Acknowledgements


Slide 3

Research Problem and Approach

Research Problem: Elaborate a theory, methodology and family of tools for the development of knowledge-based agents by subject matter experts, with limited assistance from knowledge engineers.

Approach: Develop a learning agent that can be taught directly by a subject matter expert while solving problems in cooperation.

The expert teaches the agent to perform various tasks in a way that resembles how the expert would teach a person.

The agent learns from the expert, building, verifying and improving its knowledge base.

1. Mixed-initiative problem solving

2. Teaching and learning

3. Multistrategy learning

[Figure: Disciple agent architecture with an Interface, a Problem Solving module, a Learning module, and an Ontology + Rules knowledge base.]


Slide 4

Sample Domain: Center of Gravity Analysis

The center of gravity of an entity (state, alliance, coalition, or group) is the foundation of capability, the hub of all power and movement, upon which everything depends, the point against which all the energies should be directed.

Carl Von Clausewitz, On War, 1832.

The center of gravity of an entity is its primary source of moral or physical strength, power or resistance.

Joe Strange, Centers of Gravity & Critical Vulnerabilities, 1996.

If a combatant eliminates or influences the enemy’s strategic center of gravity, then the enemy will lose control of its power and resources and will eventually fall to defeat. If the combatant fails to adequately protect his own strategic center of gravity, he invites disaster.

Giles and Galvin, USAWC 1996.


Slide 5

First computational approach to COG analysis

  • Approach to center of gravity analysis based on the concepts of critical capabilities, critical requirements and critical vulnerabilities, which have been recently adopted into the joint military doctrine.

  • Application to current war scenarios (e.g. War on terror 2003, Iraq 2003) with state and non-state actors (e.g. Al Qaeda).

Identify COG candidates: Identify potential primary sources of moral or physical strength, power and resistance from: Government, Military, People, Economy, Alliances, etc.

Test COG candidates: Test each identified COG candidate to determine whether it has all the necessary critical capabilities. Which are the critical capabilities? Are the critical requirements of these capabilities satisfied? If not, eliminate the candidate. If yes, do these capabilities have any vulnerability?


Slide 6

Overview

Research Problem, Approach, and Application

Problem Solving Method: Task Reduction

Learnable Knowledge Representation: Plausible Version Spaces

Multistrategy Learning during Problem Solving

Agent Development Experiments

Future Directions: Life-long Continuous Learning

Teaching and Learning Demo

Acknowledgements


Slide 7

Problem Solving: Task Reduction

  • A complex problem solving task is performed by:
  • successively reducing it to simpler tasks;
  • finding the solutions of the simplest tasks;
  • successively composing these solutions until the solution to the initial task is obtained.

[Figure: a task reduction tree. Task T1 is reduced, guided by question Q1 and answers A11 ... A1n, to subtasks T11 ... T1n; the solutions of the subtasks (S11 ... S1n) are composed into the solution S1 of T1, and each subtask (e.g. T11b, via question Q11b and answers A11b1 ... A11bm) is reduced in the same way.]

Let T1 be the problem solving task to be performed.

Finding a solution is an iterative process where, at each step, we consider some relevant information that leads us to reduce the current task to a simpler task or to several simpler tasks.

The question Q associated with the current task identifies the type of information to be considered.

The answer A identifies that piece of information and leads us to the reduction of the current task.
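To make the paradigm concrete, here is a minimal, self-contained Python sketch of task reduction. The reduction table, the elementary solution, and the composition by simple concatenation are illustrative assumptions loosely inspired by the Sicily_1943 example on the next slide; they are not Disciple's internal representation.

```python
# Minimal sketch of the task-reduction paradigm. The toy reduction table and
# the trivial composition step are illustrative assumptions, not Disciple's
# actual knowledge representation.

REDUCTIONS = {
    # task: (question, answer, simpler subtasks)
    "Identify and test a strategic COG candidate for Sicily_1943":
        ("Which is an opposing force in the Sicily_1943 scenario?",
         "Allied_Forces_1943",
         ["Identify and test a strategic COG candidate for Allied_Forces_1943"]),
    "Identify and test a strategic COG candidate for Allied_Forces_1943":
        ("Which is a member of Allied_Forces_1943?",
         "US_1943",
         ["Identify and test a strategic COG candidate for US_1943"]),
}

ELEMENTARY_SOLUTIONS = {
    "Identify and test a strategic COG candidate for US_1943":
        "a strategic COG candidate for US_1943 has been identified and tested",
}

def solve(task):
    """Solve a task by successive reduction and composition of solutions."""
    if task in ELEMENTARY_SOLUTIONS:          # simplest tasks are solved directly
        return ELEMENTARY_SOLUTIONS[task]
    question, answer, subtasks = REDUCTIONS[task]
    print(f"{task}\n  Q: {question}\n  A: {answer}")   # trace of the reduction step
    sub_solutions = [solve(t) for t in subtasks]
    # Disciple composes subtask solutions with dedicated rules; here we simply join them.
    return "; ".join(sub_solutions)

print(solve("Identify and test a strategic COG candidate for Sicily_1943"))
```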


Slide 8

COG Analysis:

World War II at the time of Sicily 1943

We need to

Identify and test a strategic COG candidate for Sicily_1943

Which is an opposing_force in the Sicily_1943 scenario?

Allied_Forces_1943

Therefore we need to

Identify and test a strategic COG candidate for Allied_Forces_1943

Is Allied_Forces_1943 a single_member_force or a multi_member_force?

Allied_Forces_1943 is a multi_member_force

Therefore we need to

Identify and test a strategic COG candidate for Allied_Forces_1943 which is a multi_member_force

What type of strategic COG candidate should I consider for this multi_member_force?

I consider a candidate corresponding to a member of the multi_member_force

Therefore we need to

Identify and test a strategic COG candidate corresponding to a member of the Allied_Forces_1943

Which is a member of Allied_Forces_1943?

US_1943

Therefore we need to

Identify and test a strategic COG candidate for US_1943


Slide 9

Overview

Research Problem, Approach, and Application

Problem Solving Method: Task Reduction

Learnable Knowledge Representation: Plausible Version Spaces

Multistrategy Learning during Problem Solving

Agent Development Experiments

Future Directions: Life-long Continuous Learning

Teaching and Learning Demo

Acknowledgements


Slide 10

Knowledge Base: Object Ontology + Rules

Object Ontology

A hierarchical representation of the objects and types of objects.

A hierarchical representation of the types of features.


Slide 11

Knowledge Base: Object Ontology + Rules

EXAMPLE OF REASONING STEP

We need to
Identify and test a strategic COG candidate corresponding to a member of the Allied_Forces_1943
Which is a member of Allied_Forces_1943?
US_1943
Therefore we need to
Identify and test a strategic COG candidate for US_1943

LEARNED RULE

INFORMAL STRUCTURE
IF: Identify and test a strategic COG candidate corresponding to a member of the ?O1
Question: Which is a member of ?O1?
Answer: ?O2
THEN: Identify and test a strategic COG candidate for ?O2

FORMAL STRUCTURE
IF: Identify and test a strategic COG candidate corresponding to a member of a force
    The force is ?O1
Plausible Upper Bound Condition:
    ?O1 is multi_member_force, has_as_member ?O2
    ?O2 is force
Plausible Lower Bound Condition:
    ?O1 is equal_partners_multi_state_alliance, has_as_member ?O2
    ?O2 is single_state_force
THEN: Identify and test a strategic COG candidate for a force
    The force is ?O2


Slide 12

Learnable knowledge representation

Use of the object ontology as an incomplete and evolving generalization hierarchy.

Plausible version space (PVS)

Use of plausible version spaces to represent and use partially learned knowledge:

[Figure: a plausible version space. Within the universe of instances, the concept to be learned lies between a plausible lower bound and a plausible upper bound.]

  • Rules with PVS conditions

  • Tasks with PVS conditions

  • Object features with PVS concept

  • Task features with PVS concept

Feature: its domain is a PVS concept and its range is a PVS concept.
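As a rough illustration of how partially learned concepts can be represented and used, the sketch below implements a plausible version space over a toy generalization hierarchy. The hierarchy fragment and the simple subsumption test are assumptions made for the example, not Disciple's knowledge representation.

```python
# A sketch of a plausible version space (PVS) over a toy generalization
# hierarchy. The hierarchy fragment and the simple subsumption test are
# assumptions made for illustration, not Disciple's knowledge representation.

PARENT = {  # child concept or instance -> parent concept
    "single_state_force": "force",
    "multi_member_force": "force",
    "equal_partners_multi_state_alliance": "multi_member_force",
    "Allied_Forces_1943": "equal_partners_multi_state_alliance",
    "US_1943": "single_state_force",
}

def subsumed_by(entity, concept):
    """True if `concept` is an ancestor of (or equal to) `entity` in the hierarchy."""
    while entity is not None:
        if entity == concept:
            return True
        entity = PARENT.get(entity)
    return False

class PlausibleVersionSpace:
    """A partially learned concept, bracketed by two bounds."""
    def __init__(self, lower_bound, upper_bound):
        self.lower = lower_bound   # plausible lower bound (specific)
        self.upper = upper_bound   # plausible upper bound (general)

    def classify(self, entity):
        if subsumed_by(entity, self.lower):
            return "covered (within the plausible lower bound)"
        if subsumed_by(entity, self.upper):
            return "plausibly covered (between the two bounds)"
        return "not covered"

pvs = PlausibleVersionSpace("equal_partners_multi_state_alliance", "multi_member_force")
print(pvs.classify("Allied_Forces_1943"))   # covered
print(pvs.classify("US_1943"))              # not covered
```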


Slide 13

Overview

Research Problem, Approach, and Application

Problem Solving Method: Task Reduction

Learnable Knowledge Representation: Plausible Version Spaces

Multistrategy Learning during Problem Solving

Agent Development Experiments

Future Directions: Life-long Continuous Learning

Teaching and Learning Demo

Acknowledgements


Slide 14

Integrated modeling, learning and problem solving

[Figure: integration of modeling, learning and problem solving. The input task is addressed through mixed-initiative problem solving over the Ontology + Rules knowledge base. A reduction generated by the agent and accepted by the expert triggers rule generalization; a rejected reduction triggers rule specialization; when no rule applies, the expert specifies the reduction through modeling, which triggers rule learning.]


Slide 15

Disciple uses the learned rules in problem solving, and refines them based on the expert’s feedback.

1. Modeling. The expert provides an example:
We need to
Identify and test a strategic COG candidate corresponding to a member of the Allied_Forces_1943
Which is a member of Allied_Forces_1943?
US_1943
Therefore we need to
Identify and test a strategic COG candidate for US_1943

2. Learning. Disciple learns Rule_15 from this example.

3. Problem Solving. Disciple applies Rule_15 to a new task:
We need to
Identify and test a strategic COG candidate corresponding to a member of the European_Axis_1943
Which is a member of European_Axis_1943?
Germany_1943
Therefore we need to
Identify and test a strategic COG candidate for Germany_1943

4. The expert accepts the generated example.

5. Refining. Disciple refines Rule_15 based on this feedback.

Slide 16

Rule learning method

[Figure: the rule learning method. Starting from an example of a task reduction step, guided explanation (using the expert's guidance and hints) finds plausible, possibly incomplete explanations of the example in terms of the knowledge base; analogy-based generalization of the example and its explanation then produces a plausible version space rule with a plausible upper bound (PUB) and a plausible lower bound (PLB) condition.]


Slide 17

Find an explanation of why the example is correct

We need to

Identify and test a strategic COG candidate corresponding to a member of the Allied_Forces_1943

Which is a member of Allied_Forces_1943?

US_1943

Therefore we need to

Identify and test a strategic COG candidate for US_1943

The explanation is the best possible approximation of the question and the answer, in the object ontology.

Explanation: Allied_Forces_1943 has_as_member US_1943
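One way to picture this search is as a lookup of the ontology relations that directly connect the objects mentioned in the reduction step, as in the sketch below. The tiny fact base (including Britain_1943 and the second feature) is an illustrative assumption, not the actual Disciple ontology.

```python
# Sketch of searching the object ontology for a plausible explanation of a
# reduction step: look for the relations that directly link the object in the
# task to the object in the answer. The fact base is an illustrative assumption.

FACTS = [
    # (subject, feature, value)
    ("Allied_Forces_1943", "has_as_member", "US_1943"),
    ("Allied_Forces_1943", "has_as_member", "Britain_1943"),
    ("US_1943", "has_as_government", "Government_of_US_1943"),
]

def find_explanations(task_object, answer_object, facts=FACTS):
    """Return the relations that connect the two objects of the reduction step."""
    return [(s, f, v) for (s, f, v) in facts
            if {s, v} == {task_object, answer_object}]

print(find_explanations("Allied_Forces_1943", "US_1943"))
# [('Allied_Forces_1943', 'has_as_member', 'US_1943')]
```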


Slide 18

Generate the PVS rule

EXAMPLE (rewritten with formal task names)
We need to
Identify and test a strategic COG candidate corresponding to a member of a force
The force is Allied_Forces_1943
explanation: Allied_Forces_1943 has_as_member US_1943
Therefore we need to
Identify and test a strategic COG candidate for a force
The force is US_1943

The example and its explanation are rewritten as a specific condition (the explanation becomes ?O1 has_as_member ?O2):
Condition
?O1 is Allied_Forces_1943, has_as_member ?O2
?O2 is US_1943

GENERATED PVS RULE
IF: Identify and test a strategic COG candidate corresponding to a member of a force
The force is ?O1
Plausible Upper Bound Condition (most general generalization of the specific condition):
?O1 is multi_member_force, has_as_member ?O2
?O2 is force
Plausible Lower Bound Condition (most specific generalization of the specific condition):
?O1 is equal_partners_multi_state_alliance, has_as_member ?O2
?O2 is single_state_force
THEN: Identify and test a strategic COG candidate for a force
The force is ?O2

Both generalizations are guided by the definition of the feature has_as_member (domain: multi_member_force, range: force).
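A simplified sketch of how the two bounds can be produced from the single example and its explanation follows. The toy hierarchy, the feature definition, and the choice of each instance's direct concept as the most specific generalization are assumptions that happen to reproduce the bounds above; Disciple's actual generalization procedures are more elaborate.

```python
# Sketch of generating the two plausible bounds of a rule condition from one
# example and its explanation. Toy hierarchy and simplifications as noted above.

PARENT = {
    "single_state_force": "force",
    "multi_member_force": "force",
    "equal_partners_multi_state_alliance": "multi_member_force",
    "Allied_Forces_1943": "equal_partners_multi_state_alliance",
    "US_1943": "single_state_force",
}

FEATURES = {"has_as_member": {"domain": "multi_member_force", "range": "force"}}

def generate_pvs_condition(o1, feature, o2):
    """Return (plausible lower bound, plausible upper bound) for ?O1 feature ?O2."""
    lower = {"?O1": PARENT[o1], "?O2": PARENT[o2]}       # most specific generalization
    upper = {"?O1": FEATURES[feature]["domain"],         # most general generalization,
             "?O2": FEATURES[feature]["range"]}          # bounded by the feature definition
    return lower, upper

lower, upper = generate_pvs_condition("Allied_Forces_1943", "has_as_member", "US_1943")
print("Plausible Lower Bound:", lower)   # equal_partners_multi_state_alliance / single_state_force
print("Plausible Upper Bound:", upper)   # multi_member_force / force
```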


Slide 19

Rule refinement method

[Figure: the rule refinement method. Using the current PVS rule (IF <task> with a PVS Condition <condition 1> and PVS Except-When Conditions <condition 2> ... <condition n>, THEN <subtask 1> ... <subtask m>), the agent generates examples of task reductions by learning by analogy and experimentation. An example accepted as correct refines the rule through learning from examples; an incorrect example, together with its failure explanation, refines the rule through learning from explanations, both with respect to the knowledge base.]
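The effect of the expert's feedback on the two bounds can be sketched as follows. The toy hierarchy, the concept dominant_partner_multi_state_alliance, and the least-common-ancestor generalization are illustrative assumptions, not Disciple's refinement operators.

```python
# Sketch of refining a PVS rule condition from the expert's feedback on
# agent-generated examples: a correct example may generalize the plausible lower
# bound, an incorrect example may specialize the plausible upper bound or add an
# except-when condition. Toy hierarchy and assumptions as noted above.

PARENT = {
    "single_state_force": "force",
    "multi_member_force": "force",
    "equal_partners_multi_state_alliance": "multi_member_force",
    "dominant_partner_multi_state_alliance": "multi_member_force",
}

def ancestors(concept):
    chain = [concept]
    while concept in PARENT:
        concept = PARENT[concept]
        chain.append(concept)
    return chain

def minimal_generalization(concept_a, concept_b):
    """Least common ancestor of two concepts in the toy hierarchy."""
    a_ancestors = ancestors(concept_a)
    return next(c for c in ancestors(concept_b) if c in a_ancestors)

def refine_with_positive(lower_bound, example_concept):
    """Generalize the plausible lower bound just enough to cover a correct example."""
    return minimal_generalization(lower_bound, example_concept)

def refine_with_negative(upper_bound, example_concept, except_when):
    """Record an except-when condition for an incorrect example; keep the upper bound."""
    except_when.append(example_concept)
    return upper_bound

print(refine_with_positive("equal_partners_multi_state_alliance",
                           "dominant_partner_multi_state_alliance"))
# multi_member_force
```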


Slide 20

Overview

Research Problem, Approach, and Application

Problem Solving Method: Task Reduction

Learnable Knowledge Representation: Plausible Version Spaces

Multistrategy Learning during Problem Solving

Agent Development Experiments

Future Directions: Life-long Continuous Learning

Teaching and Learning Demo

Acknowledgements


Slide 21

Agent Development Methodology

Modeling the problem solving process of the subject matter expert and development of the object ontology of the agent.

Teaching of the agent by the subject matter expert.


Slide 22

Use of Disciple at the US Army War College

319jw Case Studies in Center of Gravity Analysis

Disciple helps the students to perform a center of gravity analysis of an assigned war scenario.

Disciple was taught based on the expertise of Prof. Comello in center of gravity analysis.

[Figure: the Disciple agent and its KB, interacting with the user through teaching, learning, and problem solving.]

Global evaluations of Disciple by officers from the Spring 03 course

Disciple helped me to learn to perform a strategic COG analysis of a scenario

The use of Disciple is an assignment that is well suited to the course's learning objectives

Disciple should be used in future versions of this course


Slide 23

Use of Disciple at the US Army War College

589jw Military Applications of Artificial Intelligence course

Students teach Disciple their COG analysis expertise, using sample scenarios (Iraq 2003, War on terror 2003, Arab-Israeli 1973)

Students test the trained Disciple agent based on a new scenario (North Korea 2003)

Global evaluations of Disciple by officers during three experiments

I think that a subject matter expert can use Disciple to build an agent, with limited assistance from a knowledge engineer

Spring 2001: COG identification

Spring 2002: COG identification and testing

Spring 2003: COG testing based on critical capabilities


Slide 24

Parallel development and merging of KBs

Initial KB (for COG identification for leaders): 432 concepts and features, 29 tasks, 18 rules.

Domain analysis and ontology development (KE + SME): the knowledge engineer (KE), working with all subject matter experts (SME), extended the KB with 37 acquired concepts and features for COG testing (Extended KB).

Training scenarios: Iraq 2003, Arab-Israeli 1973, War on Terror 2003.

Parallel KB development (SME assisted by KE): five teams each extended their own copy of DISCIPLE-COG, covering the critical capabilities stay informed, be irreplaceable, communicate, be influential, have support, be protected, and be driving force.

Team 1: 5 features, 10 tasks, 10 rules
Team 2: 14 tasks, 14 rules
Team 3: 2 features, 19 tasks, 19 rules
Team 4: 35 tasks, 33 rules
Team 5: 3 features, 24 tasks, 23 rules

KB merging (KE) of the learned features, tasks, and rules into an integrated KB: unified 2 features, deleted 4 rules, refined 12 rules.

Final KB: +9 features, for a total of 478 concepts and features; +105 tasks, for a total of 134 tasks; +95 rules, for a total of 113 rules.

Average training time: 5h 28min per team. Average rule learning rate: 3.53 per team.

Testing (COG identification and testing for leaders) on the North Korea 2003 scenario: correctness = 98.15%.

Slide 25

Other Disciple agents

Disciple-WA (1997-1998): Estimates the best plan of working around damage to a transportation infrastructure, such as a damaged bridge or road.

Demonstrated that a knowledge engineer can use Disciple to rapidly build and update a knowledge base, capturing knowledge from military engineering manuals and a set of sample solutions provided by a subject matter expert.

Disciple-COA (1998-1999): Identifies strengths and weaknesses in a Course of Action, based on the principles of war and the tenets of Army operations.

Demonstrated the generality of its learning methods that used an object ontology created by another group (TFS/Cycorp).

Demonstrated that a knowledge engineer and a subject matter expert can jointly teach Disciple.


Slide 26

Overview

Research Problem, Approach, and Application

Problem Solving Method: Task Reduction

Learnable Knowledge Representation: Plausible Version Spaces

Multistrategy Learning during Problem Solving

Agent Development Experiments

Future Directions: Life-long Continuous Learning

Teaching and Learning Demo

Acknowledgements


Slide 27

Life-Long Continuous Agent Learning

1. Multistrategy teaching and learning: through modeling and ontology elicitation, the implicit reasoning of the human expert is turned into explicit reasoning in natural language and ontology extensions; rule and ontology learning then produces learned rules and ontology for the learning agent. Methods include:

  • Plausible version spaces
  • Learning from instruction
  • Learning from examples
  • Learning from explanations
  • Learning by analogy
  • Analogy-based methods
  • Explanation-based methods
  • Natural language based methods
  • Abstraction-based methods

2. Mixed-initiative problem solving and learning: mixed-initiative learning combined with routine, innovative, inventive, and creative reasoning, with rule and ontology refining producing refined rules and ontology.

3. Autonomous (and interactive) multistrategy learning: automatic inductive learning, case-based learning (cases, rules), abductive learning, ontology discovery, non-disruptive learning, user model learning (user model), and exception handling (producing rules without exceptions).

4. KB maintenance and optimization: KB maintenance and KB optimization.


Slide 28

Overview

Research Problem, Approach, and Application

Problem Solving Method: Task Reduction

Learnable Knowledge Representation: Plausible Version Spaces

Multistrategy Learning during Problem Solving

Agent Development Experiments

Future Directions: Life-long Continuous Learning

Teaching and Learning Demo

Acknowledgements


Slide 29

Acknowledgements

This research was sponsored by the Defense Advanced Research Projects Agency, Air Force Research Laboratory, Air Force Materiel Command, USAF, under agreement number F30602-00-2-0546, by the Air Force Office of Scientific Research under grant number F49620-00-1-0072, and by the US Army War College.

