

7. Strong Method Problem Solving

7.0 Introduction

7.1 Overview of Expert System Technology

7.2 Rule-Based Expert Systems

7.3 Model-Based, Case-Based, and Hybrid Systems

7.4 Planning

7.5 Epilogue and References

7.6 Exercises



Chapter Objectives

  • Learn about knowledge-intensive AI applications

  • Learn about the issues in building Expert Systems: knowledge engineering, inference, providing explanations

  • Learn about the issues in building Planning Systems: writing operators, plan generation, monitoring execution

  • The agent model: Can perform “expert quality” problem solving; can generate and monitor plans



Expert systems (ESs) - motivations

  • Experts usually have a lot of knowledge, so why not build a system that incorporates a lot of knowledge in a specific area?

  • Will attempt to solve a problem that is

    • non-trivial

    • complex

    • poorly understood

  • The resulting system will be

    • fast

    • reliable

    • cheap

    • transportable

    • usable in remote sites



What is in an expert system?

  • lots of knowledge

  • a production system architecture

  • inference techniques

  • advanced features for the user

    • should make their job easier

    • explanations



Guidelines to determine whether a problem is appropriate for an ES solution

  • The need for the solution justifies the cost and effort of building an expert system.

  • Human expertise is not available in all situations where it is needed.

  • The problem may be solved using symbolic reasoning.

  • The problem domain is well structured and does not require commonsense reasoning.

  • The problem may not be solved using traditional computing methods.

  • Cooperative and articulate experts exist.

  • The problem is of proper size and scope.



Architecture of a typical expert system



The role of mental or conceptual models in problem solving.



Exploratory development cycle



A small ES for diagnosing automotive problems

Rule 1:
  if the engine is getting gas, and
     the engine will turn over,
  then the problem is spark plugs.

Rule 2:
  if the engine does not turn over, and
     the lights do not come on,
  then the problem is battery or cables.

Rule 3:
  if the engine does not turn over, and
     the lights do come on,
  then the problem is the starter motor.

Rule 4:
  if there is gas in the fuel tank, and
     there is gas in the carburetor,
  then the engine is getting gas.
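
To make the goal-driven reasoning in the next few slides concrete, here is a minimal sketch (added here, not part of the original slides) of a backward-chaining interpreter over these four rules. The Python encoding, the prompts, and the notion of "askable" base facts are illustrative assumptions.

  # Goal-driven (backward-chaining) sketch of the four automotive rules above.
  # Facts that are not the conclusion of any rule are asked of the user,
  # mirroring the consultation shown in the following slides.
  RULES = [
      (1, ["the engine is getting gas", "the engine will turn over"],
          "the problem is spark plugs"),
      (2, ["the engine does not turn over", "the lights do not come on"],
          "the problem is battery or cables"),
      (3, ["the engine does not turn over", "the lights do come on"],
          "the problem is the starter motor"),
      (4, ["there is gas in the fuel tank", "there is gas in the carburetor"],
          "the engine is getting gas"),
  ]
  CONCLUSIONS = {conclusion for _, _, conclusion in RULES}

  def solve(goal, known):
      """Try to establish goal from the rules; ask the user only for base facts."""
      if goal in known:
          return True
      for number, premises, conclusion in RULES:
          if conclusion == goal and all(solve(p, known) for p in premises):
              known.add(goal)
              print(f"Rule {number} fired, establishing: {goal}")
              return True
      if goal not in CONCLUSIONS:  # a base fact: ask the user
          if input(f"Is it true that {goal}? (y/n) ").strip().lower().startswith("y"):
              known.add(goal)
              return True
      return False

  if __name__ == "__main__":
      known = set()
      for diagnosis in ("the problem is spark plugs",
                        "the problem is battery or cables",
                        "the problem is the starter motor"):
          if solve(diagnosis, known):
              print("Conclusion:", diagnosis)
              break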


The and/or graph searched in the car diagnosis example



The production system at the start of a consultation



The production system after Rule 1 has fired



The system after Rule 4 has fired. Note the stack-based approach to goal reduction



Explanation: Answering “why” questions

  • ES: Gas in fuel tank?
    U: Yes.
    ES: Gas in carburetor?
    U: Yes.
    ES: Engine will turn over?
    U: Why?

  • ES: It has been established that
    1. the engine is getting gas,
    therefore if
    2. the engine will turn over,
    then the problem is spark plugs.

  • The user is asking “why is this information important?”



Explanation: Answering “how” questions

  • U: how the engine is getting gas

  • ES: This follows from Rule 4:
    if
      gas in fuel tank, and
      there is gas in the carburetor
    then
      the engine is getting gas.
    "gas in fuel tank" was given by the user.
    "gas in the carburetor" was given by the user.

  • The user is asking “How did the system come up with this conclusion?”
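
A minimal illustration (added here, not from the slides) of the bookkeeping behind such "how" answers: if the system records which rule established each derived fact, the explanation can be read straight off that record. The data structures and wording below are assumptions.

  # Record, for every derived fact, the rule that produced it, so that
  # "how" questions can be answered from that record afterwards.
  RULES = {
      4: (["there is gas in the fuel tank", "there is gas in the carburetor"],
          "the engine is getting gas"),
  }

  def explain_how(fact, derivation, given):
      """Print how `fact` was established, recursing into its supporting facts."""
      if fact in given:
          print(f"  '{fact}' was given by the user")
          return
      rule_no = derivation[fact]
      premises, conclusion = RULES[rule_no]
      print(f"'{conclusion}' follows from rule {rule_no}:")
      for premise in premises:
          explain_how(premise, derivation, given)

  given = {"there is gas in the fuel tank", "there is gas in the carburetor"}
  derivation = {"the engine is getting gas": 4}
  explain_how("the engine is getting gas", derivation, given)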



The production system at the start of a consultation for data-driven reasoning



The production system after evaluating the first premise of Rule 2, which then fails



After considering Rule 4, beginning its second pass through the rules


The search graph as described by the contents of WM (data-driven BFS)


ES examples - DENDRAL (Russell & Norvig, 2003)

  • DENDRAL is the earliest ES (project 1965-1980)

  • Developed at Stanford by Ed Feigenbaum, Bruce Buchanan, Joshua Lederberg, G.L. Sutherland, Carl Djerassi.

  • Problem solved: inferring molecular structure from the information provided by a mass spectrometer. This is an important problem because the chemical and physical properties of compounds are determined not just by their constituent atoms, but by the arrangement of these atoms as well.


ES examples - DENDRAL (Russell & Norvig, 2003)

  • Inputs: the elementary formula of the molecule (e.g., C6H13NO2), and the mass spectrum giving the masses of the various fragments of the molecule generated when it is bombarded by an electron beam (e.g., the mass spectrum might contain a peak at m=15, corresponding to the mass of a methyl (CH3) fragment).



ES examples - DENDRAL (cont’d)

  • Naïve version: DENDRAL stands for DENDritic ALgorithm: a procedure to exhaustively and nonredundantly enumerate all the topologically distinct arrangements of any given set of atoms. Generate all the possible structures consistent with the formula, predict what mass spectrum would be observed for each, and compare this with the actual spectrum. This is intractable for large molecules!

  • Improved version: look for well-known patterns of peaks in the spectrum that suggested common substructures in the molecule. This reduces the number of possible candidates enormously.



ES examples - DENDRAL (cont’d)

  • A rule to recognize a ketone (C=O) subgroup (which weighs 28)

  • if there are two peaks at x1 and x2 such that
      (a) x1 + x2 = M + 28 (M is the mass of the whole molecule);
      (b) x1 - 28 is a high peak;
      (c) x2 - 28 is a high peak;
      (d) at least one of x1 and x2 is high,
    then there is a ketone subgroup

(Figure: example mass spectra for cyclopropyl-methyl-ketone and dicyclopropyl-methyl-ketone.)
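
As a rough illustration only (not from the slides), the ketone rule above might be coded over a spectrum represented as a mass-to-intensity table; the "high peak" threshold and the toy numbers below are made up.

  # Hypothetical sketch of the ketone rule applied to a spectrum {mass: intensity}.
  def is_high(spectrum, mass, threshold=10.0):
      """A peak counts as 'high' if its intensity exceeds an assumed threshold."""
      return spectrum.get(mass, 0.0) > threshold

  def suggests_ketone(spectrum, M):
      """Check conditions (a)-(d) for a ketone (C=O, mass 28) subgroup."""
      peaks = list(spectrum)
      for x1 in peaks:
          for x2 in peaks:
              if (x1 + x2 == M + 28                                  # (a)
                      and is_high(spectrum, x1 - 28)                 # (b)
                      and is_high(spectrum, x2 - 28)                 # (c)
                      and (is_high(spectrum, x1) or is_high(spectrum, x2))):  # (d)
                  return True
      return False

  # Toy usage with made-up numbers:
  spectrum = {43: 100.0, 71: 40.0, 15: 25.0, 85: 5.0}
  print(suggests_ketone(spectrum, M=86))   # -> True for this invented spectrum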



ES examples - MYCIN

  • MYCIN is another well known ES.

  • Developed at Stanford by Ed Feigenbaum, Bruce Buchanan, Dr. Edward Shortliffe.

  • Problem solved: diagnose blood infections. This is an important problem because physicians usually must begin antibiotic treatment without knowing what the organism is (laboratory cultures take time). They have two choices: (1) prescribe a broad-spectrum drug, or (2) prescribe a disease-specific drug (better).




ES examples - MYCIN (cont’d)

  • Differences from DENDRAL:

  • No general theoretical model existed from which MYCIN rules could be deduced. They had to be acquired from extensive interviewing of experts, who in turn acquired them from textbooks, other experts, and direct experience of cases.

  • The rules reflected uncertainty associated with medical knowledge: certainty factors (not a sound theory)



ES examples - MYCIN (cont’d)

  • About 450 rules. One example is:

  • If
      the site of the culture is blood, and
      the Gram stain of the organism is negative, and
      the morphology of the organism is rod, and
      the burn of the patient is serious,
    then
      there is weakly suggestive evidence (0.4) that the identity of the organism is pseudomonas.
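
For concreteness, a small sketch of certainty-factor arithmetic in the MYCIN style (added here; the min-of-premises rule, the 0.2 threshold, and the combination formula are recalled from standard accounts and should be treated as an approximation, not a specification).

  # Illustrative MYCIN-style certainty-factor arithmetic.
  def cf_rule(rule_cf, premise_cfs):
      """CF contributed by one rule: the rule's CF scaled by its weakest premise."""
      weakest = min(premise_cfs)
      return rule_cf * weakest if weakest > 0.2 else 0.0   # 0.2 cut-off assumed

  def cf_combine(cf1, cf2):
      """Combine two positive CFs for the same hypothesis from independent rules."""
      return cf1 + cf2 * (1.0 - cf1)

  # Two hypothetical rules both suggesting pseudomonas:
  evidence_1 = cf_rule(0.4, [1.0, 0.9, 1.0, 0.8])   # the rule above, premises fairly certain
  evidence_2 = cf_rule(0.3, [0.7, 1.0])             # some other, made-up rule
  print(cf_combine(evidence_1, evidence_2))          # combined belief, still below 1.0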



ES examples - MYCIN (cont’d)

  • If
      the infection which requires therapy is meningitis, and
      only circumstantial evidence is available for this case, and
      the type of the infection is bacterial, and
      the patient is receiving corticosteroids,
    then
      there is evidence that the organisms which might be causing the infection are e.coli (0.4), klebsiella-pneumoniae (0.2), or pseudomonas-aeruginosa (0.1).



ES examples - MYCIN (cont’d)

  • Starting rule: "If there is an organism requiring therapy, then compute the possible therapies and pick the best one."

  • It first tries to see if the disease is known. Otherwise, it tries to find it out.



ES examples - MYCIN (cont’d)

  • Can ask questions during the process:

  • > What is the patient's name?
    John Doe.
    > Male or female?
    Male.
    > Age?
    He is 55.
    > Have you obtained positive cultures indicating general type?
    Yes.
    > What type of infection is it?
    Primary bacteremia.



ES examples - MYCIN (cont’d)

  • > Let's call the first significant organism from this culture U1. Do you know the identity of U1?
    No.
    > Is U1 a rod or a coccus or something else?
    Rod.
    > What is the gram stain of U1?
    Gram-negative.

  • In the last two questions, it is trying to ask the most general question possible, so that repeated questions of the same type do not annoy the user. The format of the KB should make the questions reasonable.



ES examples - MYCIN (cont’d)

  • Studies of its performance showed that its recommendations were as good as those of some experts, and considerably better than those of junior doctors.

  • Could calculate drug dosages very precisely.

  • Dealt well with drug interactions.

  • Had good explanation features and rule acquisition systems.

  • Was narrow in scope (not a large set of diseases). Another expert system, INTERNIST, knows about internal medicine.

  • There were also issues with doctors' egos and with legal aspects.



Asking questions to the user

  • Which questions should be asked and in what order?

  • Try to ask questions that facilitate a more comfortable dialogue. For instance, ask related questions together rather than bouncing between unrelated topics (e.g., ask for the zipcode as part of an address, or to relate the evidence to the area where the patient lives).



ES examples - R1 (or XCON)

  • The first commercial expert system (~1982).

  • Developed at Digital Equipment Corporation (DEC).

  • Problem solved: Configure orders for new computer systems. Each customer order was generally a variety of computer products not guaranteed to be compatible with one another (conversion cards, cabling, support software…)

  • By 1986, it was saving the company $40 million a year. Previously, each customer shipment had to be tested for compatibility as an assembly before being shipped. By 1988, DEC's AI group had 40 expert systems deployed.



ES examples - R1 (or XCON) (cont’d)

  • Rules to match computers and their peripherals:

  • “If the Stockman 800 printer and DPK202 computer have been selected, add a printer conversion card, because they are not compatible.”

  • Being able to change the rule base easily was an important issue because the products were always changing.

  • Over 99% of the configurations were reported to be accurate. Errors were due to lack of product information on recent products (easily correctable). Like MYCIN, it performs as well as or better than most experts.

  • 6,000 - 10,000 rules.



Expert Systems: then and now

  • The AI industry boomed from a few million dollars in 1980 to billions of dollars in 1988.

  • Nearly every major U.S. corporation had its own AI group and was either using or investigating expert systems.

  • For instance, Du Pont had 100 ESs in use and 500 in development, saving an estimated $10 million per year.

  • AAAI had 15,000 members during the “expert systems craze.”

  • Soon a period called the "AI Winter" came... Brrr...



Expert Systems: then and now (cont’d)

  • The AI industry has shifted focus and stabilized (AAAI membership 5,500-7,000)

  • Expert systems continue to save companies money

    • IBM’s San Jose facility has an ES that diagnoses problems on disk drives

    • Pac Bell’s diagnoses computer network problems

    • Boeing’s tells workers how to assemble electrical connectors

    • American Express Co’s helps in card application approvals

    • Met Life’s processes mortgage applications

  • Expert System Shells: abstract away the details to produce an inference engine that might be useful for other tasks. Many are available.



Heuristics and control in expert systems

  • organization of a rule’s premises

  • rule order

  • costs of different tests

  • which rules to select:

    • refraction

    • recency

    • specificity

  • restrict potentially usable rules
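
A minimal sketch (mine, not from the slides) of how refraction, recency, and specificity might be applied when choosing among matched rule instantiations; the data structures and weights are assumptions.

  # Illustrative conflict resolution among rule instantiations.
  # A candidate is a (rule_name, matched_facts) pair.
  RULES = {
      "A": ["p", "q"],   # rule A has two premises (more specific)
      "B": ["p"],        # rule B has one premise
  }

  def select_instantiation(candidates, already_fired, wm_timestamps):
      """Pick one instantiation using refraction, then recency, then specificity."""
      fresh = [c for c in candidates if c not in already_fired]   # refraction
      if not fresh:
          return None
      def recency(c):          # most recently added matching fact
          return max(wm_timestamps[f] for f in c[1])
      def specificity(c):      # number of premises of the matched rule
          return len(RULES[c[0]])
      return max(fresh, key=lambda c: (recency(c), specificity(c)))

  wm_timestamps = {"p": 1, "q": 2}
  candidates = [("A", ("p", "q")), ("B", ("p",))]
  print(select_instantiation(candidates, already_fired=[], wm_timestamps=wm_timestamps))
  # -> ("A", ("p", "q")): it matches the most recent fact q and is more specific.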



Model-based reasoning

  • Attempt to describe the “inner details” of the system.

  • This way, the expert system (or any other knowledge-intensive program) can revert to first principles, and can still make inferences if rules summarizing the situation are not present.

  • Include a description of:

  • each component of the device,

  • device’s internal structure,

  • observations of the device’s actual performance


The behavioral description of an adder (Davis and Hamscher, 1988)

Behaviour at the terminals of the device: e.g., C is A+B.



Taking advantage of direction of information flow (Davis and Hamscher, 1988)

Either ADD-1 is bad, or the inputs are incorrect (MULT-1 or MULT-2 is bad).



Fault diagnosis procedure

  • Generate hypotheses: identify the possibly faulty component(s), e.g., either ADD-1 is bad or its inputs are incorrect

  • Test hypotheses: Can they explain the observed behaviour?

  • Discriminate between hypotheses: What additional information is necessary to resolve conflicts?
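
A cut-down sketch (added for illustration) of this procedure on a circuit in the spirit of the adder example: two multipliers feed one adder, and single-fault hypotheses are kept only if they are consistent with the observations. The wiring and the observed values are assumptions.

  # Tiny model: MULT-1 computes X = A*C, MULT-2 computes Y = B*D, ADD-1 computes F = X+Y.
  COMPONENTS = {
      "MULT-1": ("X", lambda v: v["A"] * v["C"]),
      "MULT-2": ("Y", lambda v: v["B"] * v["D"]),
      "ADD-1":  ("F", lambda v: v["X"] + v["Y"]),
  }

  def consistent(inputs, observations, broken):
      """Can the observations be explained if only the `broken` component misbehaves?"""
      values = {**inputs, **observations}
      for name, (out, fn) in COMPONENTS.items():
          if name in broken:
              continue                      # a broken component predicts nothing
          try:
              predicted = fn(values)
          except KeyError:
              continue                      # some input of this component is unknown
          if out in values and values[out] != predicted:
              return False                  # contradiction with an observation
          values[out] = predicted
      return True

  def single_fault_candidates(inputs, observations):
      """Generate and test single-fault hypotheses (steps 1 and 2 above)."""
      return [c for c in COMPONENTS if consistent(inputs, observations, {c})]

  inputs = {"A": 3, "C": 2, "B": 2, "D": 3}
  observations = {"X": 6, "F": 10}          # F "should" be 12; X looks correct
  print(single_fault_candidates(inputs, observations))
  # -> ['MULT-2', 'ADD-1']: measuring X (step 3) has exonerated MULT-1.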


A schematic of the simplified Livingstone propulsion system (Williams and Nayak, 1996)



A model-based configuration management system (Williams and Nayak, 1996)



Case-based reasoning (CBR)

  • Allows reference to past “cases” to solve new situations.

  • Ubiquitous practice: medicine, law, programming, car repairs, …



Common steps performed by a case-based reasoner

  • Retrieve appropriate cases from memory

  • Modify a retrieved case so that it will apply to the current situation

  • Apply the transformed case

  • Save the solution, with a record of success or failure, for future use


Preference heuristics to help organize the storage and retrieval of cases (Kolodner, 1993)

  • Goal directed preference: Retrieve cases that have the same goal as the current situation

  • Salient-feature preference: Prefer cases that match the most important features or those matching the largest number of important features

  • Specificity preference: Look for as exact as possible matches of features before considering more general matches

  • Recency preference: Prefer cases used most recently

  • Ease of adaptation preference: Use first the cases most easily adapted to the current situation
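
To make these preferences concrete, here is a small weighted nearest-neighbour retrieval sketch (my own, not from the slides; the case format, feature weights, and scoring are assumptions).

  # Illustrative case retrieval reflecting the goal-directed, salient-feature,
  # and recency preferences above.
  CASE_LIBRARY = [
      {"goal": "diagnose-car", "features": {"engine-turns-over": False, "lights-on": False},
       "solution": "battery or cables", "last_used": 3},
      {"goal": "diagnose-car", "features": {"engine-turns-over": False, "lights-on": True},
       "solution": "starter motor", "last_used": 7},
  ]
  FEATURE_WEIGHTS = {"engine-turns-over": 2.0, "lights-on": 1.0}   # salience, assumed

  def score(case, goal, features, now):
      if case["goal"] != goal:                                     # goal-directed preference
          return float("-inf")
      match = sum(w for f, w in FEATURE_WEIGHTS.items()
                  if case["features"].get(f) == features.get(f))   # salient features
      recency = 1.0 / (1 + now - case["last_used"])                # recency preference
      return match + 0.1 * recency

  def retrieve(goal, features, now=10):
      return max(CASE_LIBRARY, key=lambda c: score(c, goal, features, now))

  best = retrieve("diagnose-car", {"engine-turns-over": False, "lights-on": True})
  print(best["solution"])   # -> "starter motor"; adaptation and reuse would follow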



Transformational analogy (Carbonell, 1983)



Advantages of a rule-based approach

  • Ability to directly use experiential knowledge acquired from human experts

  • Mapping of rules to state space search

  • Separation of knowledge from control

  • Possibility of good performance in limited domains

  • Good explanation facilities



Disadvantages of a rule-based approach

  • highly heuristic nature of rules not capturing the functional (or model-based) knowledge of the domain

  • brittle nature of heuristic rules

  • rapid degradation of heuristic rules

  • descriptive (rather than theoretical) nature of explanation rules

  • highly task dependent knowledge



Advantages of model-based reasoning

  • Ability to use knowledge of the function and structure of the domain

  • Robustness due to ability to resort to first principles

  • Transferable knowledge

  • Ability to provide causal explanations


Disadvantages of model-based reasoning

  • Lack of experiential (descriptive) knowledge of the domain

  • Requirement for an explicit domain model

  • High complexity

  • Inability to deal with exceptional situations



Advantages of case-based reasoning

  • Ability to encode historical knowledge directly

  • Achieving speed-up in reasoning using shortcuts

  • Avoiding past errors and exploiting past successes

  • No (strong) requirement for an extensive analysis of domain knowledge

  • Added problem-solving power via appropriate indexing strategies



Disadvantages of case-based reasoning

  • No deeper knowledge of the domain

  • Large storage requirements

  • Requirement for good indexing and matching criteria



How about combining those approaches?

  • Complex!! But nevertheless useful.

  • rule-based + case-based can

    • first check among previous cases, then engage in rule-based reasoning

    • provide a record of examples and exceptions

    • provide a record of searches done



How about combining those approaches?

  • rule-based + model-based can

    • enhance explanations with functional knowledge

    • improve robustness when rules fail

    • add heuristic search to model-based search

  • model-based + case-based can

    • give more mature explanations to the situations recorded in cases

    • first check against stored cases before proceeding with model-based reasoning

    • provide a record of examples and exceptions

    • record results of model-based inference

  • Opportunities are endless!



What is planning?

  • Planning is the task of finding a sequence of actions to accomplish a specific goal; a planner is a system that performs this task.

  • The main components of a planning problem are:

    • a description of the starting situation (the initial state),

    • a description of the desired situation (the goal state),

    • the actions available to the executing agent (operator library, aka domain theory).

  • Formally, a (classical) planning problem is a triple: <I, G, D>, where I is the initial state, G is the goal state, and D is the domain theory.

(Diagram: a planning problem is given to a planner, which produces a plan.)
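
As a small illustration (added here, with assumed names), the triple <I, G, D> can be written down directly; the operator fields anticipate the STRIPS add/delete lists discussed later in the chapter.

  # Illustrative representation of a classical planning problem <I, G, D>.
  from dataclasses import dataclass

  @dataclass(frozen=True)
  class Operator:
      name: str
      preconditions: frozenset
      add_list: frozenset
      delete_list: frozenset

  @dataclass
  class PlanningProblem:
      initial_state: frozenset    # I
      goals: frozenset            # G
      operators: tuple            # D: the domain theory (a library of Operator instances)

  problem = PlanningProblem(
      initial_state=frozenset({"at-robby rooma", "at ball1 rooma", "free left"}),
      goals=frozenset({"at ball1 roomb"}),
      operators=(),               # would be filled by a domain theory such as STRIPS operators
  )
  print(problem.goals <= problem.initial_state)   # the goal test: False at the start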



Characteristics of classical planners

  • They need a mechanism to reason about actions and the changes they inflict on the world

  • Important assumptions:

    • the agent is the only source of change in the world, otherwise the environment is static

    • all the actions are deterministic

    • the agent is omniscient: knows everything it needs to know about start state and effects of actions

    • the goals are categorical, the plan is considered successful iff all the goals are achieved



The blocks world



Represent this world using predicates

  • ontable(a), ontable(c), ontable(d), on(b,a), on(e,d), clear(b), clear(c), clear(e), gripping()



Declarative (or procedural) rules

  • If a block is clear, then there are no blocks on top of it (declarative)

  • OR

  • To make sure that a block is clear, make sure to remove all the blocks on top of it (procedural)

  • 1. (∀X) (clear(X) → ¬(∃Y) on(Y,X))

  • 2. (∀Y) (∀X) (on(Y,X) → ¬ontable(Y))

  • 3. (∀Y) (gripping() → ¬gripping(Y))



Rules for operation on the states

  • 4. (∀X) (pickup(X) → (gripping(X) ← (gripping() ∧ clear(X) ∧ ontable(X))))

  • 5. (∀X) (putdown(X) → ((gripping() ∧ ontable(X) ∧ clear(X)) ← gripping(X)))

  • 6. (∀X) (∀Y) (stack(X,Y) → ((on(X,Y) ∧ gripping() ∧ clear(X)) ← (clear(Y) ∧ gripping(X))))

  • 7. (∀X) (∀Y) (unstack(X,Y) → ((clear(Y) ∧ gripping(X)) ← (on(X,Y) ∧ clear(X) ∧ gripping())))



The format of the rules

  • A → (B ← C)

  • where A is the operator,

  • B is the "result" of the operation, and

  • C is the conditions that must be true in order for the operator to be executable.

  • The rules tell what changes when the operator is executed (or applied).


Portion of the search space for the blocks world example



But ...

  • We have no explicit notion of a “state” that changes over time as actions are performed.

  • Remember that predicate logic is “timeless”, everything refers to the same time.

  • In order to work reasoning about actions into logic, we need a way to tell that changes are happening over discrete times (or situations.)



Situation calculus

  • We need to add an additional parameter which represents the state. We’ll use s0, …, sn to represent states (aka situations).

  • Now we can say:

  • 4. (∀X) (pickup(X, s0) → (gripping(X, s1) ← (gripping( , s0) ∧ clear(X, s0) ∧ ontable(X, s0))))

  • If the pickup action was attempted in state 0, with the conditions listed holding, then in state 1, gripping will be true for X.



Introduce “holds” and “result” and generalize over states

  • 4. (X) (s) (holds (gripping( ), s)  holds (clear(X), s)  holds (ontable(X), s) ) (holds(gripping(X), result(pickup(X),s))

  • Using rules like this we can logically prove what happens as several actions are applied consecutively.

  • Notice that gripping, clear, …, are now functions.

  • Is “result” a function or a predicate?



A small “plan”

(Figure: blocks a, b, and c in their initial and goal configurations.)

result(stack(c,b),
  result(pickup(c),
    result(stack(b,a),
      result(pickup(b),
        result(putdown(c),
          result(unstack(c,b), s0))))))



Our rules will still not work, because...

  • We are making an implicit (but big) assumption: we are assuming that if nothing tells us that p has changed, then p has not changed.

  • This is important because we want to reason about change, as well as no-change.

  • For instance, block a is still clear after we move block c around (as long as we do not put c on top of block a).

  • Things are going to start to get messier because we now need frame axioms.



A frame axiom

  • Tells what doesn’t change when an action is performed.

  • For instance, if Y is “unstacked” from Z, nothing happens to X.

  • ( X) (Y) (Z) (s) (holds (ontable(X), s) (holds(ontable(X), result(unstack(Y, Z), s)

  • For our logic system to work, we’ll have to define such an axiom for each action and for each predicate.

  • This is called the frame problem.

  • Perhaps it is time to get "un-logical".



The STRIPS representation

  • No frame problem.

  • Special purpose representation.

  • An operator is defined in terms of its:

  • name, parameters, preconditions, and results.

  • A planner is a special-purpose algorithm rather than a general-purpose logic theorem prover: forward or backward chaining (state space), plan-space algorithms, and several significant others, including logic-based ones.



Four operators for the blocks world

  pickup(X)
    P: gripping() ∧ clear(X) ∧ ontable(X)
    A: gripping(X)
    D: ontable(X) ∧ gripping()

  putdown(X)
    P: gripping(X)
    A: ontable(X) ∧ gripping() ∧ clear(X)
    D: gripping(X)

  stack(X,Y)
    P: gripping(X) ∧ clear(Y)
    A: on(X,Y) ∧ gripping() ∧ clear(X)
    D: gripping(X) ∧ clear(Y)

  unstack(X,Y)
    P: gripping() ∧ clear(X) ∧ on(X,Y)
    A: gripping(X) ∧ clear(Y)
    D: on(X,Y) ∧ gripping()



Notice the simplification

  • Preconditions, add lists, and delete lists are all conjunctions. We no longer have the full power of predicate logic.

  • The same applies to goals. Goals are conjunctions of predicates.

  • A detail:

  • Why do we have two operators for picking up (pickup and unstack), and two for putting down (putdown and stack)?



A goal state for the blocks world



A state space algorithm for STRIPS operators

  • Search the space of situations (or states). This means each node in the search tree is a state.

  • The root of the tree is the start state.

  • Operators are the means of transition from each node to its children.

  • The goal test involves seeing if the set of goals is a subset of the current situation.

  • Why is the frame problem no longer a problem?
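
Putting the pieces together, here is a self-contained sketch (added for illustration; the grounding scheme and the breadth-first strategy are my assumptions) of forward state-space search with the four STRIPS operators given earlier.

  # Forward state-space search in the blocks world with STRIPS add/delete lists.
  from itertools import permutations
  from collections import deque

  BLOCKS = ("a", "b", "c")

  def ground_actions(state):
      """Yield (name, preconditions, add_list, delete_list) ground actions."""
      for x in BLOCKS:
          yield (f"pickup({x})",
                 {"gripping()", f"clear({x})", f"ontable({x})"},
                 {f"gripping({x})"},
                 {f"ontable({x})", "gripping()"})
          yield (f"putdown({x})",
                 {f"gripping({x})"},
                 {f"ontable({x})", "gripping()", f"clear({x})"},
                 {f"gripping({x})"})
      for x, y in permutations(BLOCKS, 2):
          yield (f"stack({x},{y})",
                 {f"gripping({x})", f"clear({y})"},
                 {f"on({x},{y})", "gripping()", f"clear({x})"},
                 {f"gripping({x})", f"clear({y})"})
          yield (f"unstack({x},{y})",
                 {"gripping()", f"clear({x})", f"on({x},{y})"},
                 {f"gripping({x})", f"clear({y})"},
                 {f"on({x},{y})", "gripping()"})

  def plan(initial, goals):
      """Breadth-first search in the space of states; the goal test is a subset check."""
      frontier = deque([(frozenset(initial), [])])
      visited = {frozenset(initial)}
      while frontier:
          state, steps = frontier.popleft()
          if goals <= state:
              return steps
          for name, pre, add, delete in ground_actions(state):
              if pre <= state:
                  successor = frozenset((state - delete) | add)
                  if successor not in visited:
                      visited.add(successor)
                      frontier.append((successor, steps + [name]))
      return None

  initial = {"ontable(a)", "ontable(b)", "on(c,b)", "clear(a)", "clear(c)", "gripping()"}
  goals = {"on(b,a)", "on(c,b)"}
  print(plan(initial, goals))   # e.g. unstack(c,b), putdown(c), pickup(b), stack(b,a), ...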



Now, the following graph makes much more sense



Problems in representation

  • Frame problem: List everything that does not change. It is no longer a significant problem because anything not listed as changing (via the add and delete lists) is assumed to be unchanged.

  • Qualification problem: Can we list every precondition for an action? For instance, in order for PICKUP to work, the block should not be glued to the table, it should not be nailed to the table, …

  • It still is a problem. A partial solution is to prioritize preconditions, i.e., separate out the preconditions that are worth achieving.



Problems in representation (cont’d)

  • Ramification problem: Can we list every result of an action? For instance, if a block is picked up its shadow changes location, the weight on the table decreases, ...

  • It still is a problem. A partial solution is to code rules so that inferences can be made. For instance, allow rules to calculate where the shadow would be, given the positions of the light source and the object. When the position of the object changes, its shadow changes too.


The gripper domain

The agent is a robot with two grippers (left and right)

There are two rooms (rooma and roomb)

There are a number of balls in each room

Operators:

PICK

DROP

MOVE




A “deterministic” plan

  • Pick ball1 rooma right

  • Move rooma roomb

  • Drop ball1 roomb right

  • Remember: no observability, nothing can go wrong.



The domain definition for the gripper domain

  • (define (domain gripper-strips)
      (:predicates (room ?r) (ball ?b) (gripper ?g)
                   (at-robby ?r) (at ?b ?r) (free ?g) (carry ?o ?g))

  • (:action move
      :parameters (?from ?to)
      :precondition (and (room ?from) (room ?to) (at-robby ?from))
      :effect (and (at-robby ?to) (not (at-robby ?from))))

(Annotations on the slide: "gripper-strips" is the name of the domain; "?" indicates a variable; the :effect field is the combined add and delete lists.)



The domain definition for the gripper domain (cont’d)

  • (:action pick
      :parameters (?obj ?room ?gripper)
      :precondition (and (ball ?obj) (room ?room) (gripper ?gripper)
                         (at ?obj ?room) (at-robby ?room) (free ?gripper))
      :effect (and (carry ?obj ?gripper) (not (at ?obj ?room)) (not (free ?gripper))))



The domain definition for the gripper domain (cont’d)

  • (:action drop
      :parameters (?obj ?room ?gripper)
      :precondition (and (ball ?obj) (room ?room) (gripper ?gripper)
                         (at-robby ?room) (carry ?obj ?gripper))
      :effect (and (at ?obj ?room) (free ?gripper) (not (carry ?obj ?gripper)))))



An example problem definition for the gripper domain

  • (define (problem strips-gripper2)
      (:domain gripper-strips)
      (:objects rooma roomb ball1 ball2 left right)
      (:init (room rooma) (room roomb) (ball ball1) (ball ball2)
             (gripper left) (gripper right) (at-robby rooma)
             (free left) (free right) (at ball1 rooma) (at ball2 rooma))
      (:goal (at ball1 roomb)))



Running VHPOP

  • Once the domain and problem definitions are in files gripper-domain.pddl and gripper-2.pddl respectively, the following command runs Vhpop:

  • vhpop gripper-domain.pddl gripper-2.pddl

  • The output will be:

  • ;strips-gripper2
    1:(pick ball1 rooma right)
    2:(move rooma roomb)
    3:(drop ball1 roomb right)
    Time: 0

  • PDDL is the Planning Domain Definition Language.



Why is planning a hard problem?

  • It is due to the large branching factor and the overwhelming number of possibilities.

  • There is usually no way to separate out the relevant operators. Take the previous example, and imagine that there are 100 balls, just two rooms, and two grippers. Again, the goal is to take 1 ball to the other room.

  • How many PICK operators are possible in the initial situation?

  • pick :parameters (?obj ?room ?gripper)

  • That is only one part of the branching factor; the robot could also move without picking up anything.
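
A back-of-the-envelope check (added here, assuming all 100 balls start in the robot's room and both grippers are free) of how many ground PICK actions are applicable in the initial situation:

  # Counting applicable ground PICK actions under the stated assumptions.
  balls_in_robot_room = 100   # ?obj must be a ball located in the robot's room
  rooms_matching = 1          # ?room is forced to be the robot's current room
  free_grippers = 2           # ?gripper: left or right
  applicable_picks = balls_in_robot_room * rooms_matching * free_grippers
  print(applicable_picks)     # -> 200, before even counting MOVE and DROP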



Why is planning a hard problem? (cont’d)

  • Also, goal interaction is a major problem. In planning, goal-directed search seems to make much more sense, but unfortunately it cannot address the exponential explosion. This time, the branching factor increases due to the many ways of resolving interactions.

  • When subgoals are compatible, i.e., they do not interact, they are said to be linear (or independent, or serializable).



How to deal with the exponential explosion?

  • Use goal-directed algorithms

  • Use domain-independent heuristics

  • Use domain-dependent heuristics (need a language to specify them)



The “monkey and bananas” problem


The "monkey and bananas" problem (cont'd)

  • The problem statement: A monkey is in a laboratory room containing a box, a knife and a bunch of bananas. The bananas are hanging from the ceiling out of the reach of the monkey. How can the monkey obtain the bananas?




VHPOP coding

  • (define (domain monkey-domain)
      (:requirements :equality)
      (:constants monkey box knife glass water waterfountain)
      (:predicates (on-floor) (at ?x ?y) (onbox ?x) (hasknife)
                   (hasbananas) (hasglass) (haswater) (location ?x))
      (:action go-to
        :parameters (?x ?y)
        :precondition (and (not (= ?y ?x)) (on-floor) (at monkey ?y))
        :effect (and (at monkey ?x) (not (at monkey ?y))))



VHPOP coding (cont’d)

  • (:action climb
      :parameters (?x)
      :precondition (and (at box ?x) (at monkey ?x))
      :effect (and (onbox ?x) (not (on-floor))))

  • (:action push-box
      :parameters (?x ?y)
      :precondition (and (not (= ?y ?x)) (at box ?y) (at monkey ?y) (on-floor))
      :effect (and (at monkey ?x) (not (at monkey ?y)) (at box ?x) (not (at box ?y))))



VHPOP coding (cont’d)

  • (:action getknife
      :parameters (?y)
      :precondition (and (at knife ?y) (at monkey ?y))
      :effect (and (hasknife) (not (at knife ?y))))

  • (:action grabbananas
      :parameters (?y)
      :precondition (and (hasknife) (at bananas ?y) (onbox ?y))
      :effect (hasbananas))



VHPOP coding (cont’d)

  • (:action pickglass
      :parameters (?y)
      :precondition (and (at glass ?y) (at monkey ?y))
      :effect (and (hasglass) (not (at glass ?y))))

  • (:action getwater
      :parameters (?y)
      :precondition (and (hasglass) (at waterfountain ?y) (at monkey ?y) (onbox ?y))
      :effect (haswater))



Problem 1: monkey-test1.pddl

  • (define (problem monkey-test1)
      (:domain monkey-domain)
      (:objects p1 p2 p3 p4)
      (:init (location p1) (location p2) (location p3) (location p4)
             (at monkey p1) (on-floor) (at box p2) (at bananas p3) (at knife p4))
      (:goal (hasbananas)))

  • go-to p4 p1
    get-knife p4
    go-to p2 p4
    push-box p3 p2
    climb p3
    grab-bananas p3
    time = 30 msec.



Problem 2: monkey-test2.pddl

  • (define (problem monkey-test2)
      (:domain monkey-domain)
      (:objects p1 p2 p3 p4 p6)
      (:init (location p1) (location p2) (location p3) (location p4) (location p6)
             (at monkey p1) (on-floor) (at box p2) (at bananas p3) (at knife p4)
             (at waterfountain p3) (at glass p6))
      (:goal (and (hasbananas) (haswater))))

  • go-to p4 p1
    go-to p2 p6
    get-knife p4
    push-box p3 p2
    go-to p6 p4
    climb p3
    pickglass p6
    getwater p3
    grab-bananas p3
    time = 70 msec.


The "monkey and bananas" problem (cont'd) (Russell & Norvig, 2003)

  • Suppose that the monkey wants to fool the scientists, who are off to tea, by grabbing the bananas, but leaving the box in its original place. Can this goal be solved by a STRIPS-style system?



Triangle table (execution monitoring and macro operators)



Teleo-reactive planning: combines feedback-based control and discrete actions (Klein et al., 2000)



Model-based reactive configuration management (Williams and Nayak, 1996a)

  • Intelligent space probes that autonomously explore the solar system.

  • The spacecraft needs to:

  • radically reconfigure its control regime in response to failures,

  • plan around these failures during its remaining flight.


A schematic of the simplified Livingstone propulsion system (Williams and Nayak, 1996)



A model-based configuration management system (Williams and Nayak, 1996)

ME: mode estimation; MR: mode reconfiguration



The transition system model of a valve (Williams and Nayak, 1996a)



Mode estimation (Williams and Nayak, 1996a)


Mode reconfiguration (MR) (Williams and Nayak, 1996a)



Comments on planning

  • It is a synthesis task

  • Classical planning is based on the assumptions of a deterministic and static environment

  • Algorithms to solve planning problems include:

    • forward chaining: heuristic search in state space

    • Graphplan: mutual exclusion reasoning using plan graphs

    • Partial order planning (POP): goal directed search in plan space

    • Satisfiability-based planning: convert the problem into logic

  • Non-classical planners include:

    • probabilistic planners

    • contingency planners (aka conditional planners)

    • decision-theoretic planners

    • temporal planners

