
Reinforcement Learning (RL)


Presentation Transcript


  1. Reinforcement Learning (RL)
  • Consider an “agent” embedded in an environment
  • Task of the agent, repeated forever:
    • sense the world
    • reason
    • choose an action to perform
  CS 760 – Machine Learning (UW-Madison)

  2. Definition of RL
  • Assume the world (i.e., the environment) periodically provides rewards or punishments (“reinforcements”)
  • Based on the reinforcements received, learn how to better choose actions

  3. Sequential Decision Problems (courtesy of A.G. Barto, April 2000)
  • Decisions are made in stages
  • The outcome of each decision is not fully predictable but can be observed before the next decision is made
  • The objective is to maximize a numerical measure of total reward (or equivalently, to minimize a measure of total cost)
  • Decisions cannot be viewed in isolation: need to balance the desire for immediate reward with the possibility of high future reward

  4. Reinforcement Learning vs. Supervised Learning
  • How would we use SL to train an agent in an environment?
    • Show the action to choose in a sample of world states: “I/O pairs”
  • RL requires much less of the teacher
    • Must set up the “reward structure”
    • The learner “works out the details”, i.e., writes a program to maximize the rewards received

  5. Embedded Learning Systems: Formalization
  • S_E = the set of states of the world
    • e.g., an N-dimensional vector (“sensors”)
  • A_E = the set of possible actions an agent can perform (“effectors”)
  • W = the world
  • R = the immediate reward structure
  W and R are the environment; they can be probabilistic functions

  6. Embedded Learning Systems (Formalization)
  • W: S_E × A_E → S_E
    The world maps a state and an action and produces a new state
  • R: S_E × A_E → reals
    Provides rewards (a number) as a function of state and action (as in the textbook). Can equivalently be formalized as a function of the (next) state alone.
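A minimal sketch of this formalization in Python (the function names and the toy world below are invented for illustration; they are not from the slides):

    # Illustrative sketch: the environment as two functions,
    # W: (state, action) -> next state, and R: (state, action) -> reward.
    def step(world, reward, s, a):
        """One interaction: apply action a in state s, return (new state, reward)."""
        s_new = world(s, a)          # W: S_E x A_E -> S_E
        r = reward(s, a)             # R: S_E x A_E -> reals
        return s_new, r

    # Hypothetical deterministic world on integer states:
    def toy_world(s, a):             # hypothetical W
        return s + a
    def toy_reward(s, a):            # hypothetical R: reward 1 for reaching state 3
        return 1.0 if s + a == 3 else 0.0

    print(step(toy_world, toy_reward, 1, 2))    # -> (3, 1.0)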

  7. A Graphical View of RL
  • Note that both the world and the agent can be probabilistic, so W and R could produce probability distributions.
  • For now, assume deterministic problems.
  [Diagram: the agent sends an action to the real world, W; the world returns sensory info plus a reward R (a scalar), which acts as an indirect teacher.]

  8. Common Confusion
  • State need not be solely the current sensor readings
  • Markov assumption: the value of a state is independent of the path taken to reach that state
  • Can have memory of the past
    • Can always create a Markovian task by remembering the entire past history

  9. Need for Memory: Simple Example
  “Out of sight, but not out of mind”
  [Diagram: at T=1 the learning agent can see the opponent near a wall; at T=2 the opponent has moved behind the wall, out of the agent’s sight.]
  • Seems reasonable to remember the opponent recently seen

  10. State vs. Current Sensor Readings
  Remember: state is what is in one’s head (past memories, etc.), not ONLY what one currently sees/hears/smells/etc.

  11. Policies
  The agent needs to learn a policy
    π_E : S_E → A_E
  • Given a world state in S_E, which action in A_E should be chosen?
  • The policy function is π_E
  Remember: the agent’s task is to maximize the total reward received during its lifetime

  12. Policies (cont.)
  To construct π_E, we will assign a utility U (a number) to each state:
    U(s) ≡ Σ_{t=1..∞} γ^(t-1) R(s, π_E, t)
  • γ is a positive constant < 1
  • R(s, π_E, t) is the reward received at time t, assuming the agent follows policy π_E and starts in state s at t = 0
  • Note: future rewards are discounted by γ^(t-1)
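A quick worked example (mine, not from the slides), using the utility definition above: suppose γ = 2/3 and following π_E from s yields rewards 0, 0, 3 at t = 1, 2, 3 and 0 thereafter. Then

    U(s) = (2/3)^0 · 0 + (2/3)^1 · 0 + (2/3)^2 · 3 = (4/9) · 3 = 4/3

so the reward of 3, arriving two steps later than the first reward, contributes only 4/3 to the utility of s.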

  13. The Action-Value Function
  We want to choose the “best” action in the current state
  So, pick the one that leads to the best next state (and include any immediate reward)
  Let
    Q(s, a) ≡ R(s, a) + γ U(W(s, a))
  where R(s, a) is the immediate reward received for going to state W(s, a), and γ U(W(s, a)) is the future reward from further actions (discounted due to the 1-step delay)

  14. The Action-Value Function (cont.)
  If we can accurately learn Q (the action-value function), choosing actions is easy
  Choose a, where
    a = argmax_{a'} Q(s, a')
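A minimal sketch of this greedy choice over a tabular Q (illustrative only; here Q is assumed to be a Python dict keyed by (state, action) pairs):

    # Hypothetical tabular Q: dict mapping (state, action) -> estimated value.
    def greedy_action(Q, s, actions):
        """Pick the action with the largest Q(s, a); unvisited pairs default to 0."""
        return max(actions, key=lambda a: Q.get((s, a), 0.0))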

  15. Q vs. U, Visually
  • U’s are “stored” on states
  • Q’s are “stored” on arcs (state-action pairs)
  [Diagram: a small state graph; the states are labeled U(1) ... U(6), and the three action arcs leaving state 1 are labeled Q(1, i), Q(1, ii), Q(1, iii).]

  16. Q-Learning (Watkins PhD, 1989)
  Let Q_t be our current estimate of the optimal Q
  • Our current policy is such that
      π_t(s) ≡ argmax_a Q_t(s, a)
  • Our current utility-function estimate is
      U_t(s) ≡ max_a Q_t(s, a)
  • Hence, the U table is embedded in the Q table and we don’t need to store both

  17. Q-Learning (cont.)
  • Assume we are in state S_t
  • “Run the program”(1) for a while (n steps)
  • Determine the actual reward and compare it to the predicted reward
  • Adjust the prediction to reduce the error
  (1) I.e., follow the current policy

  18. How Many Actions Should We Take Before Updating Q?
  Why not do so after each action?
  • “1-step Q-learning”
  • Most common approach

  19. Exploration vs. Exploitation
  • In order to learn about better alternatives, we can’t always follow the current policy (“exploitation”)
  • Sometimes we need to try “random” moves (“exploration”)

  20. Exploration vs. Exploitation (cont.)
  Approaches
  1) p percent of the time, make a random move; could let p decrease over time
  2) Prob(picking action a in state s) = k^Q(s, a) / Σ_{a'} k^Q(s, a')   (k > 0)
  • Exponentiating gets rid of negative values
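A short sketch of both approaches (illustrative only; Q is again assumed to be a tabular dict keyed by (state, action), and the constants p and base are made up):

    import math
    import random

    def epsilon_greedy(Q, s, actions, p=0.1):
        """Approach 1: with probability p make a random move, else act greedily."""
        if random.random() < p:
            return random.choice(list(actions))
        return max(actions, key=lambda a: Q.get((s, a), 0.0))

    def boltzmann(Q, s, actions, base=math.e):
        """Approach 2: Prob(a | s) proportional to base**Q(s, a); exponentiating
        keeps every weight positive even when Q values are negative."""
        weights = [base ** Q.get((s, a), 0.0) for a in actions]
        return random.choices(list(actions), weights=weights, k=1)[0]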

  21. One-Step Q-Learning Algorithm
  0. S ← initial state
  1. If random # ≤ p then a ← random choice, else a ← π_t(S)
  2. S_new ← W(S, a); R_immed ← R(S_new)    [act on the world and get the reward]
  3. Q(S, a) ← R_immed + γ max_{a'} Q(S_new, a')
  4. S ← S_new
  5. Go to 1
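Putting the pieces together, here is a compact runnable sketch of steps 0-5 above for a tabular Q. The toy world W, its rewards R, and all constants are invented for illustration; they are not the lecture's example.

    import random
    from collections import defaultdict

    # Hypothetical deterministic world W: (state, action) -> next state, rewards R(state).
    W = {("s0", "left"): "s1", ("s0", "right"): "s2",
         ("s1", "left"): "s0", ("s1", "right"): "s2",
         ("s2", "left"): "s0", ("s2", "right"): "s1"}
    R = {"s0": 0.0, "s1": 1.0, "s2": -1.0}
    ACTIONS = ["left", "right"]

    GAMMA = 2.0 / 3.0    # discount factor
    P_EXPLORE = 0.2      # probability of a random move

    Q = defaultdict(float)           # Q(S, a), initialized to 0
    S = "s0"                         # step 0: initial state
    for _ in range(10_000):
        # step 1: explore or follow the current greedy policy
        if random.random() <= P_EXPLORE:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(S, act)])
        # step 2: act on the world and get the reward
        S_new = W[(S, a)]
        R_immed = R[S_new]
        # step 3: one-step Q update (deterministic world, so no learning rate needed)
        Q[(S, a)] = R_immed + GAMMA * max(Q[(S_new, a2)] for a2 in ACTIONS)
        # step 4: move on, then repeat
        S = S_new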

  22. A Simple Example (of Q-learning, with updates after each step, i.e., n = 1)
  Let γ = 2/3 (deterministic world, so α = 1)
  Algorithm: pick a state + action, update, repeat
  [Diagram: a small deterministic world with states S0 (R = 0), S1 (R = 1), S2 (R = -1), S3 (R = 0), and S4 (R = 3); every arc’s Q value starts at 0.]

  23. A Simple Example (Step 1): S0 → S2
  [Same diagram after taking S0 → S2: that arc’s Q value becomes -1; all other Q values are still 0.]

  24. A Simple Example (Step 2): S2 → S4
  [Same diagram after taking S2 → S4: that arc’s Q value becomes 3; the Q value for S0 → S2 is still -1.]

  25. A Simple Example (Step …)
  [Same diagram after further updates: the Q value for S0 → S1 is 1, for S0 → S2 is 1, for S2 → S4 is 3, and the remaining Q values are 0.]
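Reading the Q values off these diagrams, each change follows the one-step update from the algorithm above (this arithmetic is a reconstruction of the example, not text from the slides):

    Step 1, S0 → S2:     Q(S0, →S2) ← R(S2) + γ · max_a Q(S2, a) = -1 + (2/3) · 0 = -1
    Step 2, S2 → S4:     Q(S2, →S4) ← R(S4) + (2/3) · 0 = 3
    Revisiting S0 → S2:  Q(S0, →S2) ← -1 + (2/3) · 3 = 1
    Taking S0 → S1:      Q(S0, →S1) ← 1 + (2/3) · 0 = 1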

  26. Q-Learning: Implementation Details
  Remember, conceptually we are filling in a huge table whose rows are the states S0, S1, S2, ..., Sn and whose columns are the actions a, b, c, ..., z; the entry Q(S2, c) is one cell of this table.
  Tables are a very verbose representation of a function.

  27. Q-Learning: Convergence Proof
  • Applies to Q tables and deterministic, Markovian worlds. Initialize Q’s to 0 or to random finite values.
  • Theorem: if every state-action pair is visited infinitely often, 0 ≤ γ < 1, and |rewards| ≤ C (some constant), then for all s, a the approximate Q table (Q̂) converges to the true Q table (Q)

  28. Q-Learning Convergence Proof (cont.)
  • Consider the max error in the approximate Q table at step t:
      Δ_t ≡ max_{s,a} | Q̂_t(s, a) − Q(s, a) |
  • The max is finite since |r| ≤ C, so max |Q| ≤ C / (1 − γ)
  • Since Q̂_0 is finite, we have Δ_0 finite, i.e., the initial max error is finite

  29. Q-Learning Convergence Proof (cont.)
  Let s' be the state that results from doing action a in the current state s. Consider what happens when we visit s and do a at step t + 1:
      | Q̂_{t+1}(s, a) − Q(s, a) |
        = | (r + γ max_{a'} Q̂_t(s', a')) − (r + γ max_{a''} Q(s', a'')) |
  (the first term is by the Q-learning rule (one step), the second by the definition of Q; notice the best a' in s' might be different for Q̂_t and Q)

  30. Q-Learning Convergence Proof (cont.)
        = γ | max_{a'} Q̂_t(s', a') − max_{a''} Q(s', a'') |          (by algebra)
        ≤ γ max_{a'''} | Q̂_t(s', a''') − Q(s', a''') |               (trickiest step; can prove by contradiction)
        ≤ γ max_{s'', a'''} | Q̂_t(s'', a''') − Q(s'', a''') |        (since the max at s' ≤ the max over any s)
        = γ Δ_t                                                       (plugging in the definition of Δ_t)

  31. Q-Learning Convergence Proof (cont.)
  • Hence, every time after t that we visit an <s, a>, its Q value differs from the correct answer by no more than γ Δ_t
  • Let T_0 = t_0 (i.e., the start) and let T_N be the first time since T_{N-1} by which every <s, a> has been visited at least once
  • Call the time between T_{N-1} and T_N a complete interval
  • Clearly, Δ_{T_N} ≤ γ Δ_{T_{N-1}}

  32. Q-Learning Convergence Proof (concluded)
  • That is, over every complete interval, Δ_t is reduced by at least a factor of γ
  • Since we assumed every <s, a> pair is visited infinitely often, we will have an infinite number of complete intervals
  • Hence, lim_{t→∞} Δ_t = 0

  33. Representing Q Functions More Compactly
  We can use some other function representation (e.g., a neural net) to compactly encode this big table
  [Diagram: a network whose inputs are an encoding of the state S (each input unit encodes a property of the state, e.g., a sensor value) and whose outputs are Q(S, a), Q(S, b), ..., Q(S, z), one per action; the second argument of Q is a constant per output.]
  • Or could have one net for each possible action
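A tiny illustrative forward pass for such a Q net in plain NumPy (a sketch only: the layer sizes match the next slide's example, the weights are random, and no training code is shown):

    import numpy as np

    n_features, n_hidden, n_actions = 100, 100, 10   # sizes from the next slide
    rng = np.random.default_rng(0)

    # One hidden layer: weights input -> hidden and hidden -> output (one output per action).
    W1 = rng.normal(scale=0.1, size=(n_features, n_hidden))
    W2 = rng.normal(scale=0.1, size=(n_hidden, n_actions))

    def q_values(state_features):
        """Forward pass: one Q estimate per action for the encoded state."""
        hidden = np.tanh(state_features @ W1)
        return hidden @ W2

    # Usage: encode a (random) Boolean state and pick the greedy action.
    s = rng.integers(0, 2, size=n_features).astype(float)
    best_action = int(np.argmax(q_values(s)))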

  34. Q Tables vs. Q Nets
  Given: 100 Boolean-valued features, 10 possible actions
  • Size of the Q table: 10 × 2^100   (2^100 = number of possible states)
  • Size of the Q net (100 HUs): 100 × 100 + 100 × 10 = 11,000
    (weights between the inputs and the HUs, plus weights between the HUs and the outputs)

  35. Why Use a Compact Q-Function?
  • The full Q table may not fit in memory for realistic problems
  • Can generalize across states, thereby speeding up convergence
    (i.e., one example “fills” many cells in the Q table)
