
Reinforcement Learning


Presentation Transcript


  1. Reinforcement Learning • Presented by: Kyle Feuz

  2. Outline • Motivation • MDPs • RL • Model-Based • Model-Free • Q-Learning • SARSA • Challenges

  3. Examples • Pac-Man • Spider

  4. MDPs • 4-tuple (States, Actions, Transitions, Rewards)
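
A minimal Python sketch of one way to encode this 4-tuple; the two-state example and all of its numbers are illustrative assumptions, not from the slides.

```python
# Hypothetical two-state MDP encoded as the slide's 4-tuple.
states = ["s0", "s1"]
actions = ["stay", "move"]

# Transitions: T[(s, a)] -> list of (next_state, probability) pairs.
T = {
    ("s0", "stay"): [("s0", 1.0)],
    ("s0", "move"): [("s1", 0.9), ("s0", 0.1)],
    ("s1", "stay"): [("s1", 1.0)],
    ("s1", "move"): [("s0", 0.9), ("s1", 0.1)],
}

# Rewards: R[(s, a)] -> immediate expected reward.
R = {
    ("s0", "stay"): 0.0,
    ("s0", "move"): 1.0,
    ("s1", "stay"): 0.5,
    ("s1", "move"): 0.0,
}
```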

  5. Important Terms • Policy • Reward Function • Value Function • Model

  6. Model-Based RL • Learn transition function • Learn expected rewards • Compute the optimal policy
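
Model-based RL separates learning from planning: once T and R have been estimated from experience, the optimal policy can be computed offline, classically by value iteration. A sketch using the dictionary encoding assumed above:

```python
def value_iteration(states, actions, T, R, gamma=0.9, tol=1e-6):
    """Compute optimal state values from a learned model (T, R)."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(
                R[(s, a)] + gamma * sum(p * V[s2] for s2, p in T[(s, a)])
                for a in actions
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

def greedy_policy(V, states, actions, T, R, gamma=0.9):
    """Read the optimal policy off V with a one-step lookahead."""
    return {
        s: max(actions, key=lambda a: R[(s, a)]
               + gamma * sum(p * V[s2] for s2, p in T[(s, a)]))
        for s in states
    }
```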

  7. Model-Free RL • Learn expected rewards/values • Skip learning the transition function • Trade-offs?

  8. Basic Equations
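
The transcript did not capture this slide's body; assuming it showed the standard Bellman equations that the later update rules approximate, they read:

```latex
% Value of s under a fixed policy \pi, and the Bellman optimality equation
V^{\pi}(s) = \sum_{s'} T\bigl(s, \pi(s), s'\bigr)\,\bigl[R(s, s') + \gamma V^{\pi}(s')\bigr]
V^{*}(s)   = \max_{a} \sum_{s'} T(s, a, s')\,\bigl[R(s, s') + \gamma V^{*}(s')\bigr]
```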

  9. Examples • Pac-Man • Spider • Mario

  10. Q-Learning Q(s, a) ← (1 − α)Q(s, a) + α[R(s, s′) + γ max_a′ Q(s′, a′)]
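
A tabular Python sketch of this update; the learning rate α = 0.1, discount γ = 0.9, and the dictionary encoding of Q are assumptions for illustration.

```python
from collections import defaultdict

def q_learning_step(Q, s, a, r, s2, actions, alpha=0.1, gamma=0.9):
    """Off-policy TD update: bootstrap from the best action in s',
    regardless of which action the agent will actually take next."""
    best_next = max(Q[(s2, a2)] for a2 in actions)
    Q[(s, a)] = (1 - alpha) * Q[(s, a)] + alpha * (r + gamma * best_next)

Q = defaultdict(float)  # tabular Q(s, a), initialized to zero
```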

  11. Q-Learning • Demo Video

  12. SARSA Q(s, a) ← (1 − α)Q(s, a) + α[R(s, s′) + γ Q(s′, a′)]
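
The only change from Q-learning is the bootstrap target: SARSA uses the value of the action a′ the policy actually takes in s′ (on-policy) rather than the max over actions (off-policy). A matching sketch, with the same assumed α and γ:

```python
def sarsa_step(Q, s, a, r, s2, a2, alpha=0.1, gamma=0.9):
    """On-policy TD update: bootstrap from the action a2 the agent
    actually selected in s2, exploration steps included."""
    Q[(s, a)] = (1 - alpha) * Q[(s, a)] + alpha * (r + gamma * Q[(s2, a2)])
```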

  13. Challenges • Explore vs. Exploit (ε-greedy sketch below) • State Space Representation • Training Time • Multiagent Learning • Moving Target • Competitive or Cooperative
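
The explore-vs-exploit trade-off is commonly handled with ε-greedy action selection; a minimal sketch, with ε = 0.1 as an assumed setting:

```python
import random

def epsilon_greedy(Q, s, actions, epsilon=0.1):
    """Explore a random action with probability epsilon,
    otherwise exploit the current greedy action."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(s, a)])
```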

  14. Transfer Learning for Reinforcement Learning on a Physical Robot • Applied TL and RL on a Nao robot • TL uses the Q-value reuse approach • RL uses a SARSA variant • State space is represented via a CMAC • Neural network inspired by the cerebellum • Acts as an associative memory • Allows agents to generalize the state space
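
The slides do not show the CMAC details; a minimal 1-D tile-coding sketch of the idea follows: several slightly offset tilings each activate one cell, nearby inputs share cells, and the value is the sum of the active cells' weights, so updates generalize locally. The tiling count and tile width are assumptions.

```python
def active_tiles(x, n_tilings=8, tile_width=0.5):
    """Map a 1-D input to one active cell per offset tiling (CMAC)."""
    tiles = []
    for t in range(n_tilings):
        offset = t * tile_width / n_tilings  # each tiling is shifted a little
        tiles.append((t, int((x + offset) // tile_width)))
    return tiles

def q_value(weights, x):
    """Value of x = sum of the weights of its active cells
    (the 'associative memory' reading of a CMAC)."""
    return sum(weights.get(tile, 0.0) for tile in active_tiles(x))
```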

  15. Agent Model

  16. SARSA Update Rule Q(s, a) ← (1 − α)Q(s, a) + α[R(s, s′) + γ e(s, a) Q(s′, a′)]
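
This variant folds the eligibility trace e(s, a) into the bootstrap term. For reference, the textbook SARSA(λ) formulation it is presumably based on spreads the TD error over all recently visited pairs in proportion to their traces; a sketch, with λ = 0.9 assumed:

```python
def sarsa_lambda_step(Q, e, s, a, r, s2, a2, alpha=0.1, gamma=0.9, lam=0.9):
    """SARSA(lambda): Q and e map (s, a) -> float, e.g. defaultdict(float)."""
    delta = r + gamma * Q[(s2, a2)] - Q[(s, a)]  # one-step TD error
    e[(s, a)] += 1.0                             # accumulate the current trace
    for key in list(e):
        Q[key] += alpha * delta * e[key]         # credit by trace magnitude
        e[key] *= gamma * lam                    # traces decay every step
```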

  17. Q-Value Reuse Q(s, a) = Q_source(χ_X(s), χ_A(a)) + Q_target(s, a)
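
Here χ_X and χ_A are the inter-task mappings from target states and actions back to the source task; during target-task learning only Q_target is updated, so the transferred source values act as a fixed starting estimate. A sketch with those names assumed:

```python
def q_with_reuse(Q_source, Q_target, chi_X, chi_A, s, a):
    """Total value = transferred source estimate (through the inter-task
    mappings) + the correction learned in the target task."""
    return Q_source[(chi_X(s), chi_A(a))] + Q_target[(s, a)]
```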

  18. Experimental Setup • Seated Nao robot • Hit the ball at a 45° angle • 5 actions in the source task – 9 actions in the target task

  19. Robot Results

  20. Simulator Results

  21. Advanced Combinations

  22. Examples • Pac-Man • Spider • Mario • Q-Learning • Penalty Kick • Others

  23. References and Resources • RL Repository • RL-Community • RL on PBWorks • RL Warehouse • Reinforcement Learning: An Introduction (Sutton & Barto) • Artificial Intelligence: A Modern Approach (Russell & Norvig) • How to Make Software Agents Do the Right Thing

  24. Questions?
