Factored Approaches for MDP & RL

(Some Slides taken from Alan Fern’s course)

Factored MDP/RL

Representations:
  • States are made of features
    • Boolean vs. continuous
  • Actions modify the features (probabilistically)
    • Representations include Probabilistic STRIPS, 2-time-slice dynamic Bayes nets, etc.
  • Reward and value functions
    • Representations include ADDs, linear weighted sums of features, etc.

Advantages:
  • Specification is far easier
  • Inference: novel "lifted" versions of value and policy iteration become possible
    • Bellman backups can be done directly in terms of ADDs
    • Policy-gradient approaches do direct search in the policy space
  • Learning: generalization possibilities
    • Q-learning etc. will now directly update the factored representation (e.g. the weights of the features), thus giving implicit generalization
    • Approaches such as FF-HOP can recognize and reuse common substructure
Problems with transition systems
  • Transition systems are a great conceptual tool to understand the differences between the various planning problems
  • …However direct manipulation of transition systems tends to be too cumbersome
    • The size of the explicit graph corresponding to a transition system is often very large
    • The remedy is to provide “compact” representations for transition systems
      • Start by explicating the structure of the “states”
        • e.g. states specified in terms of state variables
      • Represent actions not as incidence matrices but rather functions specified directly in terms of the state variables
        • An action will work in any state where some state variables have certain values. When it works, it will change the values of certain (other) state variables
State Variable Models
  • World is made up of states which are defined in terms of state variables
    • Can be boolean (or multi-ary or continuous)
  • States are complete assignments over state variables
    • So, k boolean state variables can represent how many states? (2^k)
  • Actions change the values of the state variables
    • Applicability conditions of actions are also specified in terms of partial assignments over state variables



Blocks world

State variables:
  Ontable(x), On(x,y), Clear(x), hand-empty, holding(x)

Initial state:
  Complete specification of T/F values to state variables
  --By convention, variables with F values are omitted
  (e.g. Clear(A), Clear(B), hand-empty)

Goal state:
  A partial specification of the desired state-variable/value combinations
  --desired values can be both positive and negative
  (e.g. ~clear(B), hand-empty)

Actions (if an action changes a state variable, this must be explicitly mentioned in its effects):

  Pickup(x)
    Prec: hand-empty, clear(x), ontable(x)
    Eff:  holding(x), ~ontable(x), ~hand-empty, ~clear(x)

  Putdown(x)
    Prec: holding(x)
    Eff:  ontable(x), hand-empty, clear(x), ~holding(x)

  Unstack(x,y)
    Prec: on(x,y), hand-empty, clear(x)
    Eff:  holding(x), ~clear(x), clear(y), ~hand-empty

  Stack(x,y)
    Prec: holding(x), clear(y)
    Eff:  on(x,y), ~clear(y), ~holding(x), hand-empty

All the actions here have only positive preconditions, but this is not necessary.
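For concreteness, here is a minimal executable sketch of this state-variable view (not from the slides; the fluent names and the example initial state are illustrative). A state is a set of true fluents, and a STRIPS action applies when its preconditions hold and changes only the fluents in its add/delete effects:

```python
def applicable(state, precs):
    """An action can run in any state where its precondition fluents are true."""
    return precs <= state

def apply_action(state, precs, add, delete):
    """Return the successor state; untouched fluents keep their values."""
    assert applicable(state, precs)
    return (state - delete) | add

# Example: pickup(A) from an illustrative initial state.
init   = {"ontable(A)", "ontable(B)", "clear(A)", "clear(B)", "hand-empty"}
precs  = {"hand-empty", "clear(A)", "ontable(A)"}
add    = {"holding(A)"}
delete = {"ontable(A)", "hand-empty", "clear(A)"}

print(apply_action(init, precs, add, delete))
# {'ontable(B)', 'clear(B)', 'holding(A)'}
```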

Why is the STRIPS representation compact? (compared to explicit transition systems)
  • In explicit transition systems, actions are represented as state-to-state transitions, where each action is an incidence matrix of size |S|x|S|
  • In the state-variable model, actions are represented only in terms of the state variables whose values they care about, and whose values they affect.
  • Consider a state space of 1024 states. It can be represented by log2(1024) = 10 state variables. If an action needs variable v1 to be true and makes v7 false, it can be represented by just 2 bits (instead of a 1024x1024 matrix)
    • Of course, if the action has a complicated mapping from states to states, in the worst case the action rep will be just as large
    • The assumption being made here is that the actions will have effects on a small number of state variables.










Factored Representations for MDPs: Actions
  • Actions can be represented directly in terms of their effects on the individual state variables (fluents). The CPTs of the BNs can be represented compactly too!
    • Write a Bayes network relating the values of the fluents at the state before and after the action
      • Bayes networks representing fluents at different time points are called "dynamic Bayes networks"
      • We look at 2TBNs (2-time-slice dynamic Bayes nets)
  • Go further by using the STRIPS assumption
    • Fluents not affected by the action are not represented explicitly in the model
    • Called the Probabilistic STRIPS Operator (PSO) model
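A minimal sketch of such a factored action (illustrative, not from the slides; fluent names and probabilities are hypothetical). Each next-step fluent has a small CPT conditioned on parent fluents at the previous step, and fluents with no entry are left unchanged, which is the STRIPS assumption:

```python
import random

# Noisy "pickup": P(fluent'=True | hand_empty, clear, ontable)
pickup = {
    "holding":    {"parents": ("hand_empty", "clear", "ontable"),
                   "cpt": {(True, True, True): 0.9}},
    "hand_empty": {"parents": ("hand_empty", "clear", "ontable"),
                   "cpt": {(True, True, True): 0.1}},
}

def sample_next_state(state, action_model):
    nxt = dict(state)                                   # unaffected fluents persist
    for fluent, node in action_model.items():
        key = tuple(state[p] for p in node["parents"])
        # Parent combinations not listed in the CPT leave the fluent unchanged.
        p_true = node["cpt"].get(key, 1.0 if state[fluent] else 0.0)
        nxt[fluent] = random.random() < p_true
    return nxt

s0 = {"hand_empty": True, "clear": True, "ontable": True, "holding": False}
print(sample_next_state(s0, pickup))
```

This sketch samples each fluent independently given its parents; correlated effects would need extra (synchronic) arcs in the 2TBN.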
Factored Representations: Reward, Value and Policy Functions
  • Reward functions can be represented in factored form too. Possible representations include
    • Decision trees (made up of fluents)
    • ADDs (Algebraic decision diagrams)
  • Value functions are like reward functions (so they too can be represented similarly)
  • Bellman update can then be done directly using factored representations..
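As a toy illustration of the decision-tree idea (hypothetical fluent names, not from the slides), a factored reward function can be written as a small tree over fluents rather than a table over all states:

```python
# Reward depends on only two fluents, so the "tree" stays tiny even though
# the full state space is exponential in the number of fluents.
def reward(state):
    if state["on(A,B)"]:        # goal fluent
        return 10.0
    if state["holding(A)"]:     # partial progress
        return 1.0
    return 0.0

print(reward({"on(A,B)": False, "holding(A)": True}))   # 1.0
```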
Ideas for Efficient Algorithms..

  • Use heuristic search (and reachability information)
  • Use execution and/or simulation
    • "Actual execution": reinforcement learning (the main motivation for RL is to "learn" the model)
    • "Simulation": simulate the given model to sample possible futures
      • Policy rollout, hindsight optimization, etc.
  • Use "factored" representations
    • Factored representations for actions, reward functions, values and policies
    • Directly manipulating factored representations during the Bellman update
Probabilistic Planning

--The competition (IPPC)

--The action language:
    PPDDL was based on the PSO model
    A newer standard, RDDL, is based on 2-TBNs




Reducing Heuristic Computation Cost by exploiting factored representations
  • The heuristics computed for a state might give us an idea about the heuristic value of other “similar” states
    • Similarity is possible to determine in terms of the state structure
  • Exploit overlapping structure of heuristics for different states
    • E.g. SAG idea for McLUG
    • E.g. Triangle tables idea for plans (c.f. Kolobov)
A Plan is a Terrible Thing to Waste
  • Suppose we have a plan
    • s0—a0—s1—a1—s2—a2—s3…an—sG
    • We realized that this tells us not just the estimated value of s0, but also of s1,s2…sn
    • So we don’t need to compute the heuristic for them again
  • Is that all?
    • If we have states and actions in factored representation, then we can explain exactly which aspects of si are relevant for the plan's success.
      • The "explanation" is a proof of correctness of the plan
        • It can be based on regression (if the plan is a sequence) or on a causal proof (if the plan is partially ordered)
      • The explanation will typically be just a subset of the literals making up the state
        • That means the plan suffix from si is actually relevant in many more states: all those consistent with that explanation
Triangle Table Memoization
  • Use triangle tables / memoization







If the above problem is solved, then we don’t need to call FF again for the below:





Explanation-based Generalization (of Successes and Failures)
  • Suppose we have a plan P that solves a problem [S, G].
  • We can first find out what aspects of S this plan actually depends on
    • Explain (prove) the correctness of the plan, and see which parts of S actually contribute to this proof
    • Now you can memoize this plan for just that subset of S
Relaxations for Stochastic Planning
  • Determinizations can also be used as a basis for heuristics to initialize the V for value iteration [mGPT; GOTH etc]
  • Heuristics come from relaxation
  • We can relax along two separate dimensions:
    • Relax –ve interactions
      • Consider +ve interactions alone using relaxed planning graphs
    • Relax uncertainty
      • Consider determinizations
    • Or a combination of both!
--Factored TD and Q-learning

--Policy search (has to be factored..)

Large State Spaces
  • When a problem has a large state space we can no longer represent the V or Q functions as explicit tables
  • Even if we had enough memory
    • Never enough training data!
    • Learning takes too long
  • What to do??

[Slides from Alan Fern]

Function Approximation
  • Never enough training data!
    • Must generalize what is learned from one situation to other “similar” new situations
  • Idea:
    • Instead of using large table to represent V or Q, use a parameterized function
      • The number of parameters should be small compared to number of states (generally exponentially fewer parameters)
    • Learn parameters from experience
    • When we update the parameters based on observations in one state, then our V or Q estimate will also change for other similar states
      • I.e. the parameterization facilitates generalization of experience
Linear Function Approximation
  • Define a set of state features f1(s), …, fn(s)
    • The features are used as our representation of states
    • States with similar feature values will be considered to be similar
  • A common approximation is to represent V(s) as a weighted sum of the features (i.e. a linear approximation):  V̂θ(s) = θ0 + θ1·f1(s) + … + θn·fn(s)
  • The approximation accuracy is fundamentally limited by the information provided by the features
  • Can we always define features that allow for a perfect linear approximation?
    • Yes. Assign each state an indicator feature (i.e. the i'th feature is 1 iff the current state is the i'th state, and θi represents the value of the i'th state)
    • Of course this requires far too many features and gives no generalization.


  • Consider a grid problem with no obstacles and deterministic actions U/D/L/R (49 states)
  • Features for state s=(x,y): f1(s)=x, f2(s)=y (just 2 features)
  • V(s) = θ0 + θ1·x + θ2·y
  • Is there a good linear approximation?
    • Yes.
    • θ0 = 10, θ1 = -1, θ2 = -1
    • (note upper right is origin)
  • V(s) = 10 - x - y   (subtracts Manhattan distance from the goal reward)
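A quick sanity check of this (illustrative; it assumes the goal sits at the origin with reward 10, as the slide's note suggests): with θ = (10, -1, -1), the linear value equals the goal reward minus the Manhattan distance on all 49 states.

```python
theta = (10.0, -1.0, -1.0)                     # theta0, theta1, theta2

def v_hat(x, y, th=theta):
    return th[0] + th[1] * x + th[2] * y       # V(s) = theta0 + theta1*x + theta2*y

for x in range(7):
    for y in range(7):
        assert v_hat(x, y) == 10 - (abs(x) + abs(y))   # 10 - Manhattan distance to (0,0)
print("matches on all 49 states")
```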





But What If We Change Reward …
  • V(s) = θ0 + θ1·x + θ2·y
  • Is there a good linear approximation?
    • No.



But What If We Change Reward …
  • V(s) = θ0 + θ1·x + θ2·y
  • Is there a good linear approximation?
    • No.



But What If…
  • Include a new feature z
    • z = |3-x| + |3-y|
    • z is the distance to the goal location
  • V(s) = θ0 + θ1·x + θ2·y + θ3·z
  • Does this allow a good linear approximation?
    • Yes: θ0 = 10, θ1 = θ2 = 0, θ3 = -1





Feature Engineering….

Linear Function Approximation
  • Define a set of features f1(s), …, fn(s)
    • The features are used as our representation of states
    • States with similar feature values will be treated similarly
    • More complex functions require more complex features
  • Our goal is to learn good parameter values (i.e. feature weights) that approximate the value function well
    • How can we do this?
    • Use TD-based RL and somehow update parameters based on each experience.
TD-based RL for Linear Approximators
  1. Start with initial parameter values
  2. Take action according to an explore/exploit policy (should converge to greedy policy, i.e. GLIE)
  3. Update estimated model
  4. Perform TD update for each parameter
  5. Goto 2

What is a “TD update” for a parameter?

Aside: Gradient Descent
  • Given a function f(θ1,…, θn) of n real values θ = (θ1,…, θn), suppose we want to minimize f with respect to θ
  • A common approach to doing this is gradient descent
  • The gradient of f at point θ, denoted ∇θ f(θ), is an n-dimensional vector that points in the direction where f increases most steeply at point θ
  • Vector calculus tells us that ∇θ f(θ) is just the vector of partial derivatives:  ∇θ f(θ) = [∂f(θ)/∂θ1, …, ∂f(θ)/∂θn]
  • We can decrease f by moving in the negative gradient direction

(This will be used again with graphical models)

Aside: Gradient Descent for Squared Error
  • Suppose that we have a sequence of states and target values for each state
    • E.g. produced by the TD-based RL loop
  • Our goal is to minimize the sum of squared errors between our estimated function and each target value:
        Ej(θ) = ½ (vj − V̂θ(sj))²    (squared error of example j, where vj is the target value for the j'th state and V̂θ(sj) is our estimated value for the j'th state)
  • After seeing the j'th state, the gradient descent rule tells us that we can decrease the error by updating the parameters by:
        Δθi = α (vj − V̂θ(sj)) · ∂V̂θ(sj)/∂θi    (α is the learning rate)

Aside: continued
  • For a linear approximation function:
        V̂θ(s) = θ0 + θ1·f1(s) + … + θn·fn(s),  so  ∂V̂θ(sj)/∂θi = fi(sj)    (this term depends on the form of the approximator)
  • Thus the update becomes:
        θi ← θi + α (vj − V̂θ(sj)) · fi(sj)
  • For linear functions this update is guaranteed to converge to the best approximation for a suitable learning-rate schedule
TD-based RL for Linear Approximators
  1. Start with initial parameter values
  2. Take action according to an explore/exploit policy (should converge to greedy policy, i.e. GLIE); transition from s to s'
  3. Update estimated model
  4. Perform TD update for each parameter
  5. Goto 2

What should we use for the "target value" v(s)?
  • Use the TD prediction based on the next state s'
    • This is the same as the previous TD method, only with approximation
  • Note that we are generalizing w.r.t. possibly faulty data (the neighbor's value may not be correct yet)
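A minimal sketch of that update (assumed form, consistent with the slide text; not the original slide's code): the TD prediction r + γ·V̂θ(s') serves as the target value for the linear approximator.

```python
import numpy as np

def td_update(theta, f_s, f_s_next, r, alpha=0.1, gamma=0.95):
    """One TD(0) update for V_hat(s) = theta . f(s)."""
    v_s    = theta @ f_s                         # current estimate V_hat(s)
    target = r + gamma * (theta @ f_s_next)      # TD target from next state s'
    return theta + alpha * (target - v_s) * f_s  # move each theta_i along f_i(s)
```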

TD-based RL for Linear Approximators
  1. Start with initial parameter values
  2. Take action according to an explore/exploit policy (should converge to greedy policy, i.e. GLIE)
  3. Update estimated model
  4. Perform TD update for each parameter
  5. Goto 2
  • Step 2 requires a model to select greedy action
  • For applications such as Backgammon it is easy to get a simulation-based model
  • For others it is difficult to get a good model
  • But we can do the same thing for model-free Q-learning
Q-learning with Linear Approximators
  (Features are now a function of states and actions.)
  1. Start with initial parameter values
  2. Take action a according to an explore/exploit policy (should converge to greedy policy, i.e. GLIE), transitioning from s to s'
  3. Perform TD update for each parameter
  4. Goto 2
  • For both Q and V, these algorithms converge to the closest linear approximation to optimal Q or V.
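A sketch of this loop's core pieces (illustrative; ε-greedy stands in for the GLIE explore/exploit policy, and `features(s, a)` is a hypothetical function returning the feature vector for a state-action pair):

```python
import numpy as np

def q_hat(theta, features, s, a):
    return theta @ features(s, a)                    # Q_hat(s,a) = theta . f(s,a)

def choose_action(theta, features, s, actions, eps=0.1):
    if np.random.rand() < eps:                       # explore
        return np.random.choice(actions)
    return max(actions, key=lambda a: q_hat(theta, features, s, a))  # exploit

def q_update(theta, features, s, a, r, s_next, actions, alpha=0.1, gamma=0.95):
    # Model-free TD target: best estimated Q value at the next state
    target = r + gamma * max(q_hat(theta, features, s_next, b) for b in actions)
    delta  = target - q_hat(theta, features, s, a)
    return theta + alpha * delta * features(s, a)    # update each theta_i along f_i(s,a)
```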
Policy Gradient Ascent
  • Let η(θ) be the expected value of policy πθ.
    • η(θ) is just the expected discounted total reward for a trajectory of πθ.
    • For simplicity assume each trajectory starts at a single initial state.
  • Our objective is to find a θ that maximizes η(θ)
  • Policy gradient ascent tells us to iteratively update the parameters via:  θ ← θ + α·∇θ η(θ)
  • Problem: η(θ) is generally very complex, and it is rare that we can compute a closed form for the gradient of η(θ).
  • We will instead estimate the gradient based on experience
Gradient Estimation
  • Concern: Computing or estimating the gradient of discontinuous functions can be problematic.
  • For our example parametric policy, is η(θ) continuous?
  • No.
    • There are values of θ where arbitrarily small changes cause the policy to change.
    • Since different policies can have different values, this means that changing θ can cause a discontinuous jump in η(θ).
Example: Discontinuous η(θ)
  • Consider a problem with initial state s and two actions a1 and a2
    • a1 leads to a very large terminal reward R1
    • a2 leads to a very small terminal reward R2
  • Fixing θ2 to a constant, we can plot the ranking assigned to each action by Q and the corresponding value η(θ)

(Figure: discontinuity in η(θ) when the ordering of a1 and a2 changes)






Probabilistic Policies
  • We would like to avoid policies that drastically change with small parameter changes, leading to discontinuities
  • A probabilistic policy πθ takes a state as input and returns a distribution over actions
    • Given a state s, πθ(s,a) returns the probability that πθ selects action a in s
  • Note that η(θ) is still well defined for probabilistic policies
    • Now the uncertainty over trajectories comes from both the environment and the policy
    • Importantly, if πθ(s,a) is continuous relative to changing θ, then η(θ) is also continuous relative to changing θ
  • A common form for probabilistic policies is the softmax function or Boltzmann exploration function

Aka Mixed Policy
(not needed for
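A sketch of the softmax / Boltzmann policy form mentioned above, assuming a linear action-value approximator Q̂θ(s,a) = θ·f(s,a) (the exact parameterization on the original slide may differ; `features(s, a)` is the same hypothetical helper as before):

```python
import numpy as np

def boltzmann_policy(theta, features, s, actions, temperature=1.0):
    """Return a probability distribution over actions for state s."""
    prefs = np.array([theta @ features(s, a) for a in actions]) / temperature
    prefs -= prefs.max()                      # numerical stability
    probs = np.exp(prefs)
    return probs / probs.sum()
```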


Empirical Gradient Estimation
  • Our first approach to estimating ∇θ η(θ) is to simply compute empirical gradient estimates
  • Recall that θ = (θ1,…, θn) and ∇θ η(θ) = [∂η(θ)/∂θ1, …, ∂η(θ)/∂θn], so we can compute the gradient by empirically estimating each partial derivative
  • For small ε we can estimate the partial derivatives by:
        ∂η(θ)/∂θi ≈ (η(θ1,…, θi+ε,…, θn) − η(θ)) / ε
  • This requires estimating n+1 values: η(θ) and η(θ1,…, θi+ε,…, θn) for each i
Empirical Gradient Estimation
  • How do we estimate the quantities η(θ) and η(θ1,…, θi+ε,…, θn)?
  • For each set of parameters, simply execute the policy for N trials/episodes and average the values achieved across the trials
  • This requires a total of N(n+1) episodes to get gradient estimate
    • For stochastic environments and policies the value of N must be relatively large to get good estimates of the true value
    • Often we want to use a relatively large number of parameters
    • Often it is expensive to run episodes of the policy
  • So while this can work well in many situations, it is often not a practical approach computationally
  • Better approaches try to use the fact that the stochastic policy is differentiable.
    • Can get the gradient by just running the current policy multiple times

Doable without permanent damage if there is a simulator
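A sketch of this empirical (finite-difference) gradient estimate; `estimate_value(theta, n_episodes)` is a hypothetical routine that runs the policy with parameters theta for N episodes (e.g. in a simulator) and returns the average total reward:

```python
import numpy as np

def empirical_gradient(theta, estimate_value, eps=1e-2, n_episodes=100):
    base = estimate_value(theta, n_episodes)             # eta(theta)
    grad = np.zeros_like(theta)
    for i in range(len(theta)):                          # n more value estimates
        perturbed = theta.copy()
        perturbed[i] += eps
        grad[i] = (estimate_value(perturbed, n_episodes) - base) / eps
    return grad                                          # N*(n+1) episodes in total
```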

Applications of Policy Gradient Search
  • Policy gradient techniques have been used to create controllers for difficult helicopter maneuvers
  • For example, inverted helicopter flight.
  • A planner called FPG also “won” the 2006 International Planning Competition
    • If you don’t count FF-Replan
Policy Gradient Recap
  • When policies have much simpler representations than the corresponding value functions, direct search in policy space can be a good idea
    • Allows us to design complex parametric controllers and optimize details of parameter settings
  • For the baseline algorithm the gradient estimates are unbiased (i.e. they will converge to the right value) but have high variance
    • Can require a large N to get reliable estimates
    • OLPOMDP can trade off bias and variance via its discount parameter [Baxter & Bartlett, 2000]
  • Can be prone to finding local maxima
    • Many ways of dealing with this, e.g. random restarts.
Gradient Estimation: Single Step Problems
  • For stochastic policies it is possible to estimate ∇θ η(θ) directly from trajectories of just the current policy πθ
    • Idea: take advantage of the fact that we know the functional form of the policy
  • First consider the simplified case where all trials have length 1
    • For simplicity assume each trajectory starts at a single initial state and the reward only depends on the action choice
    • η(θ) is just the expected reward of the action selected by πθ:
        η(θ) = Σa πθ(s0, a)·R(a),   where s0 is the initial state and R(a) is the reward of action a
  • The gradient of this becomes:
        ∇θ η(θ) = Σa ∇θ πθ(s0, a)·R(a)
  • How can we estimate this by just observing the execution of πθ?
Gradient Estimation: Single Step Problems
  • Rewriting, using ∇θ πθ(s0,a) = πθ(s0,a)·∇θ log πθ(s0,a):
        ∇θ η(θ) = Σa πθ(s0,a)·g(s0,a)·R(a),   where g(s0,a) = ∇θ log πθ(s0,a)  (we can get g(s0,a) in closed form)
  • The gradient is just the expected value of g(s0,a)·R(a) over execution trials of πθ
    • Can estimate it by executing πθ for N trials and averaging the samples:
        ∇θ η(θ) ≈ (1/N) Σj g(s0, aj)·R(aj),   where aj is the action selected by the policy on the j'th episode
    • Only requires executing πθ for a number of trials that need not depend on the number of parameters
Gradient Estimation: General Case
  • So for the case of length-1 trajectories we got:
        ∇θ η(θ) ≈ (1/N) Σj g(s0, aj)·R(aj)
  • For the general case, where trajectories have length greater than one and the reward depends on the state, we can do some work and get:
        ∇θ η(θ) ≈ (1/N) Σj Σt g(sj,t, aj,t)·Rj(t)
    where N is the number of trajectories of the current policy, t ranges over the length of trajectory j, and Rj(t) is the observed total reward in trajectory j from step t to the end
  • sj,t is the t'th state of the j'th episode, aj,t is the t'th action of episode j
  • The derivation of this is straightforward but messy.
How to interpret the gradient expression?
  • g(sj,t, aj,t) is the direction to move the parameters in order to increase the probability that the policy selects aj,t in state sj,t
  • Rj(t) is the total reward observed after taking aj,t in state sj,t
  • So the overall gradient is a reward-weighted combination of individual gradient directions
    • For large positive Rj(t), the update will increase the probability of aj,t in sj,t
    • For negative Rj(t), the update will decrease the probability of aj,t in sj,t
  • Intuitively, this increases the probability of taking actions that are typically followed by good reward sequences
Basic Policy Gradient Algorithm
  • Repeat until a stopping condition is met:
    • Execute πθ for N trajectories while storing the state, action, reward sequences
    • Compute the gradient estimate ∇θ η(θ) from those trajectories (as on the previous slide)
    • Update the parameters:  θ ← θ + α·∇θ η(θ)
  • One disadvantage of this approach is the small number of updates per amount of experience
    • It also requires a notion of trajectory rather than an infinite sequence of experience
  • Online policy gradient algorithms perform updates after each step in the environment (and often learn faster)
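A sketch of one iteration of this batch loop (REINFORCE-style; it assumes the grad-log-policy helper sketched below and a hypothetical `run_episode(theta)` that returns a list of (s, a, r) tuples):

```python
import numpy as np

def policy_gradient_step(theta, run_episode, grad_log_pi, n_traj=20, alpha=0.01):
    grad = np.zeros_like(theta)
    for _ in range(n_traj):                         # execute pi_theta for N trajectories
        episode = run_episode(theta)
        rewards = [r for (_, _, r) in episode]
        for t, (s, a, _) in enumerate(episode):
            R_t = sum(rewards[t:])                  # total reward from step t to the end
            grad += grad_log_pi(theta, s, a) * R_t  # g(s_t, a_t) * R(t)
    return theta + alpha * grad / n_traj            # gradient ascent update
```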
Computing the Gradient of the Policy
  • Both algorithms require computation of g(s,a) = ∇θ log πθ(s,a)
  • For the Boltzmann distribution with a linear approximation we have:
        πθ(s,a) = exp(Σi θi·fi(s,a)) / Σa' exp(Σi θi·fi(s,a'))
  • Here the partial derivatives needed for g(s,a) are:
        ∂ log πθ(s,a) / ∂θi = fi(s,a) − Σa' πθ(s,a')·fi(s,a')
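A sketch of g(s,a) for this Boltzmann policy with a linear Q̂, matching the partial-derivative expression above; `features(s, a)` and the action set are the same hypothetical helpers used in the earlier sketches:

```python
import numpy as np

def grad_log_boltzmann(theta, features, s, a, actions):
    feats = {b: features(s, b) for b in actions}
    prefs = np.array([theta @ feats[b] for b in actions])
    prefs -= prefs.max()                              # numerical stability
    probs = np.exp(prefs) / np.exp(prefs).sum()       # pi_theta(s, .)
    expected_f = sum(p * feats[b] for p, b in zip(probs, actions))
    return feats[a] - expected_f                      # f(s,a) - E_a'[f(s,a')]
```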