
Chapter 4: Dynamic Programming


Presentation Transcript


  1. Chapter 4: Dynamic Programming. Objectives of this chapter: • Overview of a collection of classical solution methods for MDPs known as dynamic programming (DP) • Show how DP can be used to compute value functions, and hence, optimal policies • Discuss the efficiency and utility of DP. R. S. Sutton and A. G. Barto: Reinforcement Learning: An Introduction

  2. Policy Evaluation: for a given policy π, compute the state-value function Vπ. Recall the definition of Vπ and its Bellman equation (reproduced below).
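The two formulas on this slide appear only as images in the transcript; the following is a reconstruction in the notation of the book's first edition (π(s,a) for action probabilities, P^a_ss' and R^a_ss' for the one-step dynamics, γ for the discount rate):

```latex
% Definition of the state-value function for policy \pi, and its Bellman equation:
V^{\pi}(s) \;=\; E_{\pi}\!\left[\,\sum_{k=0}^{\infty} \gamma^{k} r_{t+k+1} \;\Big|\; s_t = s \right]
           \;=\; \sum_{a} \pi(s,a) \sum_{s'} \mathcal{P}^{a}_{ss'}
                 \left[ \mathcal{R}^{a}_{ss'} + \gamma\, V^{\pi}(s') \right]
```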

  3. Iterative Methods. A "sweep" consists of applying a backup operation to each state. The full policy evaluation backup is reproduced below.
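The backup formula itself is an image on the slide; here is the standard iterative form, where V_k denotes the k-th approximation and one application per state constitutes a sweep:

```latex
% Full policy evaluation backup, applied to every state s in each sweep:
V_{k+1}(s) \;\leftarrow\; \sum_{a} \pi(s,a) \sum_{s'} \mathcal{P}^{a}_{ss'}
                          \left[ \mathcal{R}^{a}_{ss'} + \gamma\, V_{k}(s') \right]
```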

  4. Iterative Policy Evaluation
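The algorithm box on this slide is an image; the sketch below is a minimal Python rendering of iterative policy evaluation, not the book's exact pseudocode. The data layout (policy and transitions as nested dicts), the function name, and the stopping threshold theta are assumptions of this sketch.

```python
def iterative_policy_evaluation(states, actions, policy, transitions, gamma=1.0, theta=1e-6):
    """Sweep-based, in-place iterative policy evaluation (a sketch, not the book's pseudocode).

    policy[s][a]      -- probability of taking action a in state s
    transitions[s][a] -- list of (probability, next_state, reward) triples
    `states` lists only the nonterminal states; unknown (terminal) states get value 0.
    """
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:                                   # one full sweep over the state set
            v_old = V[s]
            V[s] = sum(policy[s][a] * sum(p * (r + gamma * V.get(s2, 0.0))
                                          for p, s2, r in transitions[s][a])
                       for a in actions)
            delta = max(delta, abs(v_old - V[s]))
        if delta < theta:                                  # stop when the largest change is tiny
            return V
```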

  5. A Small Gridworld • An undiscounted episodic task • Nonterminal states: 1, 2, ..., 14 • One terminal state (shown twice, as the shaded squares) • Actions that would take the agent off the grid leave the state unchanged • Reward is –1 until the terminal state is reached

  6. Iterative Policy Evaluation for the Small Gridworld
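As a usage illustration, the gridworld of the previous slide can be fed to the sketch above. The equiprobable random policy comes from the book's example rather than from the transcript text, and the state/action encoding below is purely illustrative.

```python
# Small gridworld: states 0..15 on a 4x4 board, with the two shaded corner squares
# treated as a single terminal state; nonterminal states are 1..14.
N, S, E, W = -4, 4, 1, -1                 # moves expressed as offsets in the state number
actions = [N, S, E, W]
states = list(range(1, 15))

def step(s, a):
    """Deterministic move; actions that would leave the grid leave the state unchanged."""
    col, nxt = s % 4, s + a
    if (a == E and col == 3) or (a == W and col == 0) or not 0 <= nxt <= 15:
        nxt = s
    return 0 if nxt in (0, 15) else nxt   # map both shaded squares onto terminal state 0

policy = {s: {a: 0.25 for a in actions} for s in states}     # equiprobable random policy
transitions = {s: {a: [(1.0, step(s, a), -1.0)] for a in actions} for s in states}

V = iterative_policy_evaluation(states, actions, policy, transitions, gamma=1.0)
# V should approach the random-policy values shown in the slide's figure,
# e.g. roughly -14 next to the terminal corner and -22 in the far corner.
```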

  7. Policy Improvement. Suppose we have computed Vπ for a deterministic policy π. For a given state s, would it be better to do an action a ≠ π(s)?
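The criterion shown as an image on this slide is the usual one-step comparison; in the book's notation it reads:

```latex
% It is better to switch to action a in state s whenever its action value exceeds V^{\pi}(s):
Q^{\pi}(s,a) \;=\; \sum_{s'} \mathcal{P}^{a}_{ss'}
                   \left[ \mathcal{R}^{a}_{ss'} + \gamma\, V^{\pi}(s') \right]
             \;>\; V^{\pi}(s)
```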

  8. Policy Improvement Cont.

  9. Policy Improvement Cont.

  10. Policy Iteration: alternate policy evaluation and policy improvement ("greedification"), as in the sequence below.
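The diagram on this slide alternates the two steps; written out as the usual sequence (E = evaluation step, I = improvement step):

```latex
\pi_0 \xrightarrow{\;E\;} V^{\pi_0} \xrightarrow{\;I\;} \pi_1 \xrightarrow{\;E\;} V^{\pi_1}
      \xrightarrow{\;I\;} \pi_2 \xrightarrow{\;E\;} \cdots \xrightarrow{\;I\;} \pi^{*} \xrightarrow{\;E\;} V^{*}
```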

  11. Policy Iteration
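The algorithm box here is again an image; below is a minimal Python sketch of policy iteration that reuses the iterative_policy_evaluation sketch from slide 4. The greedy_action helper and the deterministic-policy representation are assumptions of this sketch.

```python
def greedy_action(s, V, actions, transitions, gamma):
    """Action with the largest one-step lookahead value (ties broken arbitrarily)."""
    return max(actions,
               key=lambda a: sum(p * (r + gamma * V.get(s2, 0.0))
                                 for p, s2, r in transitions[s][a]))

def policy_iteration(states, actions, transitions, gamma=1.0):
    """Policy iteration sketch: evaluate the current policy, then greedify, until stable."""
    pi = {s: actions[0] for s in states}              # arbitrary initial deterministic policy
    while True:
        # 1. Policy evaluation (represent the deterministic policy as action probabilities)
        probs = {s: {a: 1.0 if a == pi[s] else 0.0 for a in actions} for s in states}
        V = iterative_policy_evaluation(states, actions, probs, transitions, gamma)
        # 2. Policy improvement ("greedification")
        stable = True
        for s in states:
            best = greedy_action(s, V, actions, transitions, gamma)
            if best != pi[s]:
                pi[s], stable = best, False
        if stable:                                     # no state changed its greedy action
            return pi, V
```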

  12. Jack's Car Rental • Jack owns two car-rental locations • Customers arrive at both locations at random, with Poisson distributions: rental requests have λ = 3 and 4 and returns have λ = 3 and 2 at the two locations • Jack earns $10 for each car rented; moving a car from one location to the other overnight costs $2 • At most 20 cars at each location • At most 5 cars can be moved overnight • Formulate as an MDP: states are the numbers of cars at each location, actions are the numbers of cars moved each night, and the starting policy never moves any cars (a small helper sketch follows)
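A small sketch of the Poisson ingredients needed to set this problem up as an MDP. The truncation of the Poisson sum at 30, the constant names, and the expected_rentals helper are assumptions of this sketch, not part of the slide.

```python
from math import exp, factorial

def poisson_pmf(n, lam):
    """P(N = n) for a Poisson random variable with mean lam."""
    return lam ** n * exp(-lam) / factorial(n)

# Rates and costs from the slide: rental requests ~ Poisson(3) and Poisson(4),
# returns ~ Poisson(3) and Poisson(2); $10 per rental, $2 per car moved overnight.
RENTAL_RATES, RETURN_RATES = (3, 4), (3, 2)
RENT_REWARD, MOVE_COST = 10, 2
MAX_CARS, MAX_MOVE = 20, 5

def expected_rentals(cars_available, lam):
    """Expected number of cars actually rented when only cars_available are on the lot."""
    return sum(min(n, cars_available) * poisson_pmf(n, lam) for n in range(30))
```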

  13. Jack's Car Rental

  14. Value Iteration. Recall the full policy evaluation backup, and compare it with the full value iteration backup; both are reproduced below.
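Both backups appear as images on the slide; in the book's notation they are:

```latex
% Full policy evaluation backup (expectation over the actions chosen by \pi):
V_{k+1}(s) \;\leftarrow\; \sum_{a} \pi(s,a) \sum_{s'} \mathcal{P}^{a}_{ss'}
                          \left[ \mathcal{R}^{a}_{ss'} + \gamma\, V_{k}(s') \right]

% Full value iteration backup (the expectation over actions becomes a max):
V_{k+1}(s) \;\leftarrow\; \max_{a} \sum_{s'} \mathcal{P}^{a}_{ss'}
                          \left[ \mathcal{R}^{a}_{ss'} + \gamma\, V_{k}(s') \right]
```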

  15. Value Iteration Cont.
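A minimal Python sketch of value iteration, in the same spirit as the earlier sketches. Here transitions[s] is assumed to list only the actions available in state s (which matters for the gambler's problem two slides below); the function name and layout are illustrative.

```python
def value_iteration(transitions, gamma=1.0, theta=1e-9):
    """Value iteration sketch: like policy evaluation, but with a max over actions.

    transitions[s][a] -- list of (probability, next_state, reward) triples;
    the keys of transitions[s] are exactly the actions available in state s.
    """
    V = {s: 0.0 for s in transitions}
    while True:
        delta = 0.0
        for s, acts in transitions.items():
            v_old = V[s]
            V[s] = max(sum(p * (r + gamma * V.get(s2, 0.0)) for p, s2, r in outcomes)
                       for outcomes in acts.values())
            delta = max(delta, abs(v_old - V[s]))
        if delta < theta:
            break
    # Deterministic greedy policy with respect to the converged value function
    pi = {s: max(acts, key=lambda a: sum(p * (r + gamma * V.get(s2, 0.0))
                                         for p, s2, r in acts[a]))
          for s, acts in transitions.items()}
    return V, pi
```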

  16. Gambler's Problem • A gambler bets on a sequence of coin flips; if the coin comes up heads he wins as much as he staked, otherwise he loses the stake • The game ends when he reaches $100 or loses all his money • The state is his capital and the actions are his bets • The optimal policy maximizes the probability of reaching the goal • If the probability of heads is p = 0.4, the optimal policy is as shown (a value iteration sketch follows)
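Applying the value_iteration sketch above to the gambler's problem. The reward convention (+1 only on the transition that reaches the goal, undiscounted) is the book's formulation; everything else here is illustrative.

```python
# Gambler's problem: capital s in 1..99, bets of 1..min(s, 100 - s) dollars,
# heads with probability 0.4 doubles the stake, tails loses it.
P_HEADS, GOAL = 0.4, 100

transitions = {
    s: {bet: [(P_HEADS,     s + bet, 1.0 if s + bet == GOAL else 0.0),
              (1 - P_HEADS, s - bet, 0.0)]
        for bet in range(1, min(s, GOAL - s) + 1)}
    for s in range(1, GOAL)
}

V, policy = value_iteration(transitions, gamma=1.0)
# V[s] approximates the probability of reaching $100 from capital s;
# policy[s] is one optimal stake (the optimal policy is not unique).
```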

  17. Asynchronous DP • All the DP methods described so far require exhaustive sweeps of the entire state set. • Asynchronous DP does not use sweeps. Instead it works like this: repeat, until a convergence criterion is met, pick a state at random and apply the appropriate backup (a sketch follows below). • Still needs lots of computation, but does not get locked into hopelessly long sweeps. • Can you select states to back up intelligently? YES: an agent's experience can act as a guide.
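A minimal sketch of the random-state backup loop described above, reusing the transitions layout of the value_iteration sketch; the fixed backup count stands in for a real convergence test and is an assumption of the sketch.

```python
import random

def async_value_backups(V, transitions, gamma=1.0, n_backups=100_000):
    """Asynchronous DP sketch: back up one randomly chosen state at a time, in place,
    instead of sweeping the entire state set."""
    states = list(transitions)
    for _ in range(n_backups):                    # a real run would test a convergence criterion
        s = random.choice(states)                 # pick a state at random ...
        V[s] = max(sum(p * (r + gamma * V.get(s2, 0.0)) for p, s2, r in outcomes)
                   for outcomes in transitions[s].values())   # ... and apply a value backup
    return V
```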

  18. Generalized Policy Iteration. Generalized Policy Iteration (GPI): any interaction of policy evaluation and policy improvement, independent of their granularity. A geometric metaphor for convergence of GPI:

  19. Efficiency of DP • Finding an optimal policy is polynomial in the number of states… • BUT, the number of states is often astronomical, e.g., often growing exponentially with the number of state variables (what Bellman called “the curse of dimensionality”). • In practice, classical DP can be applied to problems with a few million states. • Asynchronous DP can be applied to larger problems, and is appropriate for parallel computation. • It is surprisingly easy to come up with MDPs for which DP methods are not practical.

  20. Summary • Policy evaluation: backups without a max • Policy improvement: form a greedy policy, if only locally • Policy iteration: alternate the above two processes • Value iteration: backups with a max • Full backups (to be contrasted later with sample backups) • Generalized Policy Iteration (GPI) • Asynchronous DP: a way to avoid exhaustive sweeps • Bootstrapping: updating estimates based on other estimates
