
Reinforcement Learning



  1. Reinforcement Learning
• Control learning
• Control policies that choose optimal actions
• Q learning
• Convergence

  2. Control Learning
Consider learning to choose actions, e.g.,
• Robot learning to dock on battery charger
• Learning to choose actions to optimize factory output
• Learning to play Backgammon
Note several problem characteristics:
• Delayed reward
• Opportunity for active exploration
• Possibility that state is only partially observable
• Possible need to learn multiple tasks with the same sensors/effectors

  3. One Example: TD-Gammon (Tesauro, 1995)
Learn to play Backgammon
Immediate reward:
• +100 if win
• -100 if lose
• 0 for all other states
Trained by playing 1.5 million games against itself
Now approximately equal to best human player

  4. Reinforcement Learning Problem
[Diagram: the agent observes state st, takes action at, and receives reward rt from the environment, producing the sequence s0, a0, r0, s1, a1, r1, s2, a2, r2, ...]
Goal: learn to choose actions that maximize r0 + γr1 + γ²r2 + …, where 0 ≤ γ < 1
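
To make the discounted-return objective concrete, here is a minimal sketch (not part of the original slides) that computes r0 + γr1 + γ²r2 + … for a finite reward sequence; the reward values and γ = 0.9 are invented for illustration.

```python
# Discounted return: r0 + gamma*r1 + gamma^2*r2 + ...
def discounted_return(rewards, gamma=0.9):
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

# Hypothetical three-step episode: reward 100 arrives only on the last step.
print(discounted_return([0, 0, 100], gamma=0.9))  # 0 + 0 + 0.81*100 = 81.0
```

Because 0 ≤ γ < 1, reward received sooner counts for more than the same reward received later.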

  5. Markov Decision Process
Assume:
• finite set of states S
• set of actions A
• at each discrete time, agent observes state st ∈ S and chooses action at ∈ A
• then receives immediate reward rt
• and state changes to st+1
• Markov assumption: st+1 = δ(st, at) and rt = r(st, at)
• i.e., rt and st+1 depend only on current state and action
• functions δ and r may be nondeterministic
• functions δ and r not necessarily known to agent
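
One way to picture δ and r in a small deterministic world is as lookup tables. The states, actions, and rewards below are hypothetical, chosen only to mirror the slide's definitions; later sketches assume tables of this shape.

```python
# A toy deterministic MDP: delta(s, a) gives the next state, r(s, a) the reward.
STATES = ["s1", "s2", "goal"]
ACTIONS = ["left", "right"]

DELTA = {
    ("s1", "right"): "s2",     ("s1", "left"): "s1",
    ("s2", "right"): "goal",   ("s2", "left"): "s1",
    ("goal", "right"): "goal", ("goal", "left"): "goal",
}

# Reward 100 only for the action that enters the goal state, 0 otherwise.
REWARD = {(s, a): 0 for s in STATES for a in ACTIONS}
REWARD[("s2", "right")] = 100
```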

  6. Agent’s Learning Task
Execute actions in environment, observe results, and
• learn action policy π : S → A that maximizes E[rt + γrt+1 + γ²rt+2 + …] from any starting state in S
• here 0 ≤ γ < 1 is the discount factor for future rewards
Note something new:
• target function is π : S → A
• but we have no training examples of form <s, a>
• training examples are of form <<s, a>, r>

  7. Value Function
To begin, consider deterministic worlds…
For each possible policy π the agent might adopt, we can define an evaluation function over states:
Vπ(s) ≡ rt + γrt+1 + γ²rt+2 + …
where rt, rt+1, … are generated by following policy π starting at state s
Restated, the task is to learn the optimal policy π*:
π* ≡ argmax_π Vπ(s), (∀s)
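
For a deterministic world, Vπ(s) can be approximated by simply rolling the policy out and summing discounted rewards. The sketch below assumes δ and r are lookup tables like the toy ones above and that π is a function from state to action; truncating at a finite horizon is an approximation that works because 0 ≤ γ < 1.

```python
# Approximate V^pi(s): follow policy pi from s, accumulating discounted reward.
def v_pi(s, pi, delta, reward, gamma=0.9, horizon=100):
    total, discount = 0.0, 1.0
    for _ in range(horizon):
        a = pi(s)
        total += discount * reward[(s, a)]
        discount *= gamma
        s = delta[(s, a)]
    return total

# Example with the toy tables above: always move right from s1.
# v_pi("s1", lambda s: "right", DELTA, REWARD)  ->  0 + 0.9*100 = 90.0
```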

  8. [Figure: a simple deterministic grid world with absorbing goal state G and discount factor γ = 0.9, showing the r(s, a) immediate reward values (100 for actions entering G, 0 otherwise), the corresponding Q(s, a) values (72, 81, 90, 100), the V*(s) values (81, 90, 100), and one optimal policy]

  9. What to Learn
We might try to have the agent learn the evaluation function Vπ* (which we write as V*)
We could then do a lookahead search to choose the best action from any state s because
π*(s) = argmax_a [r(s, a) + γV*(δ(s, a))]
A problem:
• This works well if agent knows δ : S × A → S, and r : S × A → ℜ
• But when it doesn’t, we can’t choose actions this way
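
The lookahead rule π*(s) = argmax_a [r(s, a) + γV*(δ(s, a))] is easy to write in code, but note that it needs the model δ and r. A minimal sketch, assuming v_star is a dict of state values and delta/reward are tables like the toy ones above:

```python
# Greedy one-step lookahead using V*: requires knowing both delta and r.
def best_action_with_model(s, actions, delta, reward, v_star, gamma=0.9):
    return max(actions,
               key=lambda a: reward[(s, a)] + gamma * v_star[delta[(s, a)]])
```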

  10. Q Function
Define a new function very similar to V*:
Q(s, a) ≡ r(s, a) + γV*(δ(s, a))
If agent learns Q, it can choose optimal action even without knowing δ!
π*(s) = argmax_a Q(s, a)
Q is the evaluation function the agent will learn
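
By contrast, once Q is learned, action selection reduces to an argmax over the table and needs no model at all. A minimal sketch, assuming q is a dict keyed by (state, action):

```python
# pi*(s) = argmax_a Q(s, a): no delta or r needed.
def best_action_from_q(s, actions, q):
    return max(actions, key=lambda a: q[(s, a)])
```

This is exactly the point of learning Q rather than V*: the one-step lookahead through δ and r is folded into the table.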

  11. Training Rule to Learn Q
Note Q and V* are closely related:
V*(s) = max_a' Q(s, a')
Which allows us to write Q recursively as
Q(st, at) = r(st, at) + γV*(δ(st, at)) = r(st, at) + γ max_a' Q(st+1, a')
Let Q̂ denote the learner’s current approximation to Q. Consider the training rule
Q̂(s, a) ← r + γ max_a' Q̂(s', a')
where s' is the state resulting from applying action a in state s
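
A single application of this deterministic training rule looks like the following; q is assumed to be a dict that defaults to 0 for unseen (state, action) pairs.

```python
# One deterministic Q-learning update:
# Q_hat(s, a) <- r + gamma * max_a' Q_hat(s', a')
def q_update(q, s, a, r, s_next, actions, gamma=0.9):
    q[(s, a)] = r + gamma * max(q[(s_next, a2)] for a2 in actions)
```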

  12. Q Learning for Deterministic Worlds
For each s, a initialize table entry Q̂(s, a) ← 0
Observe current state s
Do forever:
• Select an action a and execute it
• Receive immediate reward r
• Observe the new state s'
• Update the table entry for Q̂(s, a) as follows: Q̂(s, a) ← r + γ max_a' Q̂(s', a')
• s ← s'
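
Putting the whole loop together for a deterministic world gives a short program. This is a sketch only: the tiny two-state environment, the purely random exploration policy, and the episode restart are assumptions added for illustration, not part of the slide.

```python
import random
from collections import defaultdict

# Tiny deterministic world: moving right from s1 reaches the goal, reward 100.
DELTA = {("s1", "right"): "goal", ("s1", "left"): "s1",
         ("goal", "right"): "goal", ("goal", "left"): "goal"}
REWARD = {("s1", "right"): 100, ("s1", "left"): 0,
          ("goal", "right"): 0, ("goal", "left"): 0}
ACTIONS = ["left", "right"]
GAMMA = 0.9

q = defaultdict(float)              # initialize every Q_hat(s, a) to 0
s = "s1"
for _ in range(1000):
    a = random.choice(ACTIONS)      # select an action (random exploration)
    r = REWARD[(s, a)]              # receive immediate reward
    s_next = DELTA[(s, a)]          # observe the new state
    q[(s, a)] = r + GAMMA * max(q[(s_next, a2)] for a2 in ACTIONS)
    s = s_next
    if s == "goal":                 # restart so every (s, a) keeps being visited
        s = "s1"

print(q[("s1", "right")], q[("s1", "left")])  # approaches 100.0 and 90.0
```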

  13. Updating Q̂
[Figure: grid-world example with initial state s1 and next state s2; taking action aright from s1 yields reward r = 0, and the existing Q̂ values for the actions available in s2 are 63, 81, and 100]
Q̂(s1, aright) ← r + γ max_a' Q̂(s2, a') = 0 + 0.9 max{63, 81, 100} = 90
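
The single update on this slide can be checked in a couple of lines; the three Q̂ values for s2 (63, 81, 100) and the reward of 0 are taken from the figure as reconstructed above.

```python
gamma = 0.9
r = 0
q_s2 = [63, 81, 100]              # current Q_hat values for s2's actions
print(r + gamma * max(q_s2))      # 90.0, the new Q_hat(s1, a_right)
```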

  14. Convergence
Theorem: Q̂ converges to Q. Consider the case of a deterministic world where each <s, a> is visited infinitely often.
Proof: Define a full interval to be an interval during which each <s, a> is visited. During each full interval the largest error in the Q̂ table is reduced by a factor of γ.
Let Q̂n be the table after n updates, and Δn be the maximum error in Q̂n; that is
Δn = max_{s,a} |Q̂n(s, a) − Q(s, a)|

  15. Convergence (cont.)
For any table entry Q̂n(s, a) updated on iteration n+1, the error in the revised estimate Q̂n+1(s, a) is
|Q̂n+1(s, a) − Q(s, a)| = |(r + γ max_a' Q̂n(s', a')) − (r + γ max_a' Q(s', a'))|
= γ |max_a' Q̂n(s', a') − max_a' Q(s', a')|
≤ γ max_a' |Q̂n(s', a') − Q(s', a')|
≤ γ Δn
So the error in every updated entry is at most γ Δn, and over each full interval the maximum error shrinks by a factor of γ.

  16. Nondeterministic Case
What if reward and next state are non-deterministic?
We redefine V, Q by taking expected values:
Vπ(s) ≡ E[rt + γrt+1 + γ²rt+2 + …]
Q(s, a) ≡ E[r(s, a) + γV*(δ(s, a))]
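
In the nondeterministic case these expectations can be approximated by sampling. The sketch below estimates the expected discounted return by averaging many rollouts; the step function's interface (returning a sampled next state and reward) is an assumption added here for illustration.

```python
# Monte Carlo estimate of E[r_t + gamma*r_{t+1} + ...] under a stochastic
# environment. `step(s, a)` is assumed to sample (next_state, reward).
def estimate_v_pi(s0, pi, step, gamma=0.9, horizon=50, n_rollouts=1000):
    total = 0.0
    for _ in range(n_rollouts):
        s, ret, discount = s0, 0.0, 1.0
        for _ in range(horizon):
            a = pi(s)
            s, r = step(s, a)
            ret += discount * r
            discount *= gamma
        total += ret
    return total / n_rollouts
```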

  17. Nondeterministic Case (cont.)
Q learning generalizes to nondeterministic worlds
Alter training rule to
Q̂n(s, a) ← (1 − αn) Q̂n−1(s, a) + αn [r + γ max_a' Q̂n−1(s', a')]
where
αn = 1 / (1 + visitsn(s, a))
Can still prove convergence of Q̂ to Q [Watkins and Dayan, 1992]
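
The altered rule with the decaying learning rate αn = 1/(1 + visitsn(s, a)) can be sketched as follows; the visit-count bookkeeping is the only piece added beyond what the slide states.

```python
from collections import defaultdict

q = defaultdict(float)       # Q_hat table
visits = defaultdict(int)    # visits_n(s, a)

def q_update_stochastic(s, a, r, s_next, actions, gamma=0.9):
    visits[(s, a)] += 1
    alpha = 1.0 / (1 + visits[(s, a)])
    target = r + gamma * max(q[(s_next, a2)] for a2 in actions)
    # Move the estimate part-way towards the sampled target.
    q[(s, a)] = (1 - alpha) * q[(s, a)] + alpha * target
```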

  18. Temporal Difference Learning
Q learning: reduce discrepancy between successive Q estimates
One-step time difference:
Q^(1)(st, at) ≡ rt + γ max_a Q̂(st+1, a)
Why not two steps?
Q^(2)(st, at) ≡ rt + γrt+1 + γ² max_a Q̂(st+2, a)
Or n?
Q^(n)(st, at) ≡ rt + γrt+1 + … + γ^(n−1) rt+n−1 + γ^n max_a Q̂(st+n, a)
Blend all of these:
Q^λ(st, at) ≡ (1 − λ) [Q^(1)(st, at) + λQ^(2)(st, at) + λ²Q^(3)(st, at) + …]
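
A forward-view sketch of the n-step return Q^(n) and the λ-weighted blend, computed from a recorded finite trajectory. Truncating the geometric series at the end of the trajectory is an approximation added here for illustration; the code also assumes states holds one more entry than rewards (the final state reached).

```python
def n_step_return(rewards, states, t, n, q, actions, gamma=0.9):
    # Q^(n)(s_t, a_t) = r_t + gamma*r_{t+1} + ... + gamma^(n-1)*r_{t+n-1}
    #                   + gamma^n * max_a Q_hat(s_{t+n}, a)
    g = sum((gamma ** i) * rewards[t + i] for i in range(n))
    bootstrap = max(q[(states[t + n], a)] for a in actions)
    return g + (gamma ** n) * bootstrap

def q_lambda_return(rewards, states, t, q, actions, gamma=0.9, lam=0.5):
    # Q_lambda = (1 - lam) * [Q^(1) + lam*Q^(2) + lam^2*Q^(3) + ...],
    # truncated at the number of steps remaining in the trajectory.
    n_max = len(rewards) - t
    blend = sum((lam ** (n - 1)) *
                n_step_return(rewards, states, t, n, q, actions, gamma)
                for n in range(1, n_max + 1))
    return (1 - lam) * blend
```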

  19. Temporal Difference Learning (cont.)
Equivalent expression:
Q^λ(st, at) = rt + γ [(1 − λ) max_a Q̂(st+1, a) + λ Q^λ(st+1, at+1)]
TD(λ) algorithm uses the above training rule
• Sometimes converges faster than Q learning
• converges for learning V* for any 0 ≤ λ ≤ 1 (Dayan, 1992)
• Tesauro’s TD-Gammon uses this algorithm

  20. Subtleties and Ongoing Research
• Replace Q̂ table with neural network or other generalizer
• Handle case where state only partially observable
• Design optimal exploration strategies
• Extend to continuous action, state
• Learn and use δ̂ : S × A → S, an approximation to δ
• Relationship to dynamic programming

  21. RL Summary
Reinforcement learning (RL):
• control learning
• delayed reward
• possible that the state is only partially observable
• possible that the relationship between states/actions is unknown
Temporal Difference Learning:
• learns by reducing discrepancies between successive estimates
• used in TD-Gammon
V(s) - state value function:
• needs known reward/state transition functions

  22. RL Summary (cont.)
Q(s, a) - state/action value function:
• related to V
• does not need reward/state transition functions
• training rule related to dynamic programming
• measures the actual reward received for the action plus the future value estimated with the current Q function
• deterministic case - replace the existing estimate
• nondeterministic case - move the table estimate towards the measured estimate
• convergence - can be shown in both cases
