Tópicos Especiais em Aprendizagem

Prof. Reinaldo Bianchi

Centro Universitário da FEI

2007

Objective of this Lecture
  • Introduction to Reinforcement Learning:
    • Introduction.
    • Evaluating actions.
    • The RL problem.
    • Dynamic Programming.
  • Today's lecture: chapters 1 to 4 of Sutton & Barto.

The Reinforcement Learning Problem

Chapter 3 of Sutton & Barto.

Objective
  • Describe the RL problem we will be studying for the remainder of the course.
  • Present an idealized form of the RL problem for which we have precise theoretical results.
  • Introduce key components of the mathematics: value functions and Bellman equations.
  • Describe trade-offs between applicability and mathematical tractability.
The Agent-Environment Interface

[Figure: the agent-environment interaction loop. At each step t the agent observes the state $s_t$, selects an action $a_t$, receives the reward $r_{t+1}$, and finds itself in the next state $s_{t+1}$, producing the trajectory $s_t, a_t, r_{t+1}, s_{t+1}, a_{t+1}, r_{t+2}, s_{t+2}, a_{t+2}, r_{t+3}, \dots$]
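
This loop can be read as a few lines of code; the sketch below is a minimal illustration, where the agent and environment objects and their methods (`act`, `observe`, `reset`, `step`) are hypothetical placeholders, not part of the slides:

```python
# Minimal sketch of the agent-environment interaction loop.
# `agent` and `env` are hypothetical objects: env.step returns
# (next_state, reward, done) and the agent maps states to actions
# and can learn from each observed transition.

def run_episode(agent, env, max_steps=1000):
    state = env.reset()                              # s_0
    total_return = 0.0
    for t in range(max_steps):
        action = agent.act(state)                    # a_t
        next_state, reward, done = env.step(action)  # s_{t+1}, r_{t+1}
        agent.observe(state, action, reward, next_state)
        total_return += reward
        state = next_state
        if done:                                     # terminal state reached
            break
    return total_return
```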
The Agent Learns a Policy
  • Reinforcement learning methods specify how the agent changes its policy as a result of experience (a formal definition of a policy is given below).
  • Roughly, the agent’s goal is to get as much reward as it can over the long run.
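
In the book's notation, a policy at time $t$ maps each state to the probability of selecting each action:

$$\pi_t(s, a) = \Pr\{\, a_t = a \mid s_t = s \,\}$$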
Getting the Degree of Abstraction Right
  • Time steps need not refer to fixed intervals of real time.
  • Actions can be low level (e.g., voltages to motors), or high level (e.g., accept a job offer), “mental” (e.g., shift in focus of attention), etc.
  • States can be low-level “sensations”, or they can be abstract, symbolic, based on memory, or subjective (e.g., the state of being “surprised” or “lost”).
Getting the Degree of Abstraction Right
  • An RL agent is not like a whole animal or robot, which may consist of many RL agents as well as other components.
  • The environment is not necessarily unknown to the agent, only incompletely controllable.
  • Reward computation is in the agent’s environment because the agent cannot change it arbitrarily.
Goals and Rewards
  • Is a scalar reward signal an adequate notion of a goal?—maybe not, but it is surprisingly flexible.
  • A goal should specify what we want to achieve, not how we want to achieve it.
  • A goal must be outside the agent’s direct control—thus outside the agent.
  • The agent must be able to measure success:
    • explicitly;
    • frequently during its lifespan.
Returns

Episodic tasks: interaction breaks naturally into episodes, e.g., plays of a game, trips through a maze.

In this case the return is simply the sum of rewards, $R_t = r_{t+1} + r_{t+2} + \cdots + r_T$, where $T$ is a final time step at which a terminal state is reached, ending an episode.

Returns for Continuing Tasks

Continuing tasks: interaction does not have natural episodes.

Discounted return: $R_t = r_{t+1} + \gamma r_{t+2} + \gamma^2 r_{t+3} + \cdots = \sum_{k=0}^{\infty} \gamma^k r_{t+k+1}$, where $\gamma$, $0 \le \gamma \le 1$, is the discount rate.

An Example

Avoid failure: the pole falling beyond a critical angle, or the cart hitting the end of the track.

As an episodic task where the episode ends upon failure: reward = +1 for each step before failure, so the return is the number of steps before failure.

As a continuing task with discounted return: reward = -1 upon failure and 0 otherwise, so the return is $-\gamma^k$, for $k$ steps before failure.

In either case, the return is maximized by avoiding failure for as long as possible.

Another Example

Get to the top of the hill as quickly as possible.

Reward: -1 for each step on which the car is not at the top of the hill. The return is maximized by minimizing the number of steps taken to reach the top of the hill.

A Unified Notation
  • In episodic tasks, we number the time steps of each episode starting from zero.
  • We usually do not have to distinguish between episodes, so we write $s_t$ instead of $s_{t,j}$ for the state at step $t$ of episode $j$.
  • Think of each episode as ending in an absorbing state that always produces a reward of zero.
A Unified Notation
  • We can cover all cases by writing the return as $R_t = \sum_{k=0}^{\infty} \gamma^k r_{t+k+1}$,
  • where $\gamma$ can be 1 only if a zero-reward absorbing state is always reached.
The Markov Property
  • By “the state” at step t, the book means whatever information is available to the agent at step t about its environment.
  • The state can include immediate “sensations,” highly processed sensations, and structures built up over time from sequences of sensations.
The Markov Property
  • Ideally, a state should summarize past sensations so as to retain all “essential” information, i.e., it should have the Markov Property:
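
In the book's notation, the Markov property requires that, for all $s'$, $r$, and all possible histories:

$$\Pr\{\, s_{t+1} = s',\ r_{t+1} = r \mid s_t, a_t, r_t, s_{t-1}, a_{t-1}, \dots, r_1, s_0, a_0 \,\} = \Pr\{\, s_{t+1} = s',\ r_{t+1} = r \mid s_t, a_t \,\}$$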
Markov Decision Processes
  • If a reinforcement learning task has the Markov Property, it is basically a Markov Decision Process (MDP).
  • If state and action sets are finite, it is a finite MDP.
Defining a Markov Decision Process
  • To define a finite MDP, you need to give:
    • state and action sets;
    • one-step “dynamics” defined by transition probabilities;
    • reward expectations.
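
In the book's notation, the transition probabilities and reward expectations are:

$$P^{a}_{ss'} = \Pr\{\, s_{t+1} = s' \mid s_t = s,\ a_t = a \,\}$$

$$R^{a}_{ss'} = E\{\, r_{t+1} \mid s_t = s,\ a_t = a,\ s_{t+1} = s' \,\}$$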
Finite MDP Example: Recycling Robot
  • At each step, the robot has to decide whether it should (1) actively search for a can, (2) wait for someone to bring it a can, or (3) go to home base and recharge.
  • Searching is better but runs down the battery; if the robot runs out of power while searching, it has to be rescued (which is bad).
  • Decisions are made on the basis of the current energy level: high, low.
  • Reward = number of cans collected (a sketch of this MDP is given below).
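
A minimal sketch of how this MDP could be written down, following the book's formulation; the probabilities `alpha` and `beta` and the rewards `r_search`, `r_wait`, and `r_rescue` are free parameters, and the concrete values below are illustrative assumptions only:

```python
# Recycling-robot MDP as a table mapping (state, action) to a list of
# (probability, next_state, expected_reward) triples.
# alpha: P(battery stays high while searching from 'high')
# beta:  P(battery stays low while searching from 'low')
alpha, beta = 0.9, 0.6           # illustrative values
r_search, r_wait = 2.0, 1.0      # expected cans found per step
r_rescue = -3.0                  # penalty for running flat and being rescued

P = {
    ('high', 'search'):   [(alpha, 'high', r_search), (1 - alpha, 'low', r_search)],
    ('high', 'wait'):     [(1.0, 'high', r_wait)],
    ('low',  'search'):   [(beta, 'low', r_search), (1 - beta, 'high', r_rescue)],
    ('low',  'wait'):     [(1.0, 'low', r_wait)],
    ('low',  'recharge'): [(1.0, 'high', 0.0)],
}
```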
Value Functions
  • The value of a state is the expected return starting from that state; it depends on the agent’s policy.
  • State-value function for policy $\pi$: $V^{\pi}(s) = E_{\pi}\{\, R_t \mid s_t = s \,\}$.
Value Functions
  • The value of taking an action in a state under policy $\pi$ is the expected return starting from that state, taking that action, and thereafter following $\pi$.
  • Action-value function for policy $\pi$: $Q^{\pi}(s, a) = E_{\pi}\{\, R_t \mid s_t = s,\ a_t = a \,\}$.
Bellman Equation for a Policy $\pi$

The basic idea: $R_t = r_{t+1} + \gamma r_{t+2} + \gamma^2 r_{t+3} + \cdots = r_{t+1} + \gamma R_{t+1}$.

So: $V^{\pi}(s) = E_{\pi}\{\, R_t \mid s_t = s \,\} = E_{\pi}\{\, r_{t+1} + \gamma V^{\pi}(s_{t+1}) \mid s_t = s \,\}$.

Or, without the expectation operator: $V^{\pi}(s) = \sum_{a} \pi(s, a) \sum_{s'} P^{a}_{ss'} \left[ R^{a}_{ss'} + \gamma V^{\pi}(s') \right]$.

More on the Bellman Equation

This is a set of equations (in fact, linear), one for each state.

The value function for $\pi$ is its unique solution.

Backup diagrams: [figure: backup diagrams for $V^{\pi}$ and $Q^{\pi}$]
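
Because the Bellman equations for a fixed policy are linear, $V^{\pi}$ can also be obtained by a direct linear solve. The sketch below is a minimal illustration, not from the slides, under assumed array shapes: `P[s, a, s2]` holds $P^{a}_{ss'}$, `R[s, a, s2]` holds $R^{a}_{ss'}$, and `pi[s, a]` holds $\pi(s, a)$.

```python
import numpy as np

def evaluate_policy_exact(P, R, pi, gamma):
    """Solve (I - gamma * P_pi) V = r_pi for the state-value function V^pi.

    P:  (S, A, S) transition probabilities P[s, a, s']
    R:  (S, A, S) expected rewards        R[s, a, s']
    pi: (S, A)    action probabilities    pi[s, a]
    """
    S = P.shape[0]
    # Policy-weighted transitions: P_pi[s, s'] = sum_a pi(s, a) * P[s, a, s']
    P_pi = np.einsum('sa,sat->st', pi, P)
    # Expected one-step reward under pi: r_pi[s] = sum_{a, s'} pi * P * R
    r_pi = np.einsum('sa,sat,sat->s', pi, P, R)
    return np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)
```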

Example: Gridworld
  • Actions: north, south, east, west; deterministic.
  • If an action would take the agent off the grid: no move, but reward = –1.
  • Other actions produce reward = 0, except actions that move the agent out of the special states A and B, as shown.

[Figure: state-value function for the equiprobable random policy, with $\gamma = 0.9$]

Example: Golf
  • State is ball location
  • Reward of –1 for each stroke until the ball is in the hole
  • Value of a state?
  • Actions:
    • putt (use putter)
    • driver (use driver)
  • putt succeeds anywhere on the green
Optimal Value Functions
  • For finite MDPs, policies can be partially ordered: $\pi \ge \pi'$ if and only if $V^{\pi}(s) \ge V^{\pi'}(s)$ for all $s$.
  • There is always at least one policy (and possibly many) that is better than or equal to all the others.
    • This is an optimal policy.
    • We denote them all $\pi^{*}$.
Optimal Value Functions
  • Optimal policies share the same optimal state-value function: $V^{*}(s) = \max_{\pi} V^{\pi}(s)$ for all $s$.
  • Optimal policies also share the same optimal action-value function: $Q^{*}(s, a) = \max_{\pi} Q^{\pi}(s, a)$ for all $s$ and $a$.
    • This is the expected return for taking action $a$ in state $s$ and thereafter following an optimal policy.
Optimal Value Function for Golf
  • We can hit the ball farther with the driver than with the putter, but with less accuracy.
  • $Q^{*}(s, \text{driver})$ gives the value of using the driver first, then using whichever actions are best afterwards.
Bellman Optimality Equation for V*

The value of a state under an optimal policy must equal the expected return for the best action from that state:

$$V^{*}(s) = \max_{a} \sum_{s'} P^{a}_{ss'} \left[ R^{a}_{ss'} + \gamma V^{*}(s') \right]$$

[Figure: the relevant backup diagram for $V^{*}$]

$V^{*}$ is the unique solution of this system of nonlinear equations.

Bellman Optimality Equation for Q*

$$Q^{*}(s, a) = \sum_{s'} P^{a}_{ss'} \left[ R^{a}_{ss'} + \gamma \max_{a'} Q^{*}(s', a') \right]$$

[Figure: the relevant backup diagram for $Q^{*}$]

$Q^{*}$ is the unique solution of this system of nonlinear equations.

Why Optimal State-Value Functions are Useful

Any policy that is greedy with respect to $V^{*}$ is an optimal policy.

Therefore, given $V^{*}$, one-step-ahead search produces the long-term optimal actions.

E.g., back to the gridworld: [figure: $V^{*}$ and an optimal policy for the gridworld]

What About Optimal Action-Value Functions?

Given $Q^{*}$, the agent does not even have to do a one-step-ahead search: it simply picks a greedy action, $\pi^{*}(s) = \arg\max_{a} Q^{*}(s, a)$.

Solving the Bellman Optimality Equation
  • Finding an optimal policy by solving the Bellman Optimality Equation requires the following:
    • accurate knowledge of the environment dynamics;
    • enough space and time to do the computation;
    • the Markov Property.
Solving the Bellman Optimality Equation
  • How much space and time do we need?
    • Polynomial in the number of states (via dynamic programming methods; Chapter 4),
    • BUT the number of states is often huge (e.g., backgammon has about $10^{20}$ states).
  • We usually have to settle for approximations.
  • Many RL methods can be understood as approximately solving the Bellman Optimality Equation.

Dynamic Programming

Chapter 4 of Sutton & Barto.

Objectives
  • Overview of a collection of classical solution methods for MDPs known as dynamic programming (DP).
  • Show how DP can be used to compute value functions and, hence, optimal policies.
  • Discuss the efficiency and utility of DP.
Iterative Methods

A “sweep” consists of applying a backup operation to each state.

A full policy-evaluation backup:

$$V_{k+1}(s) \leftarrow \sum_{a} \pi(s, a) \sum_{s'} P^{a}_{ss'} \left[ R^{a}_{ss'} + \gamma V_{k}(s') \right]$$

A Small Gridworld
  • An undiscounted episodic task.
  • Nonterminal states: 1, 2, ..., 14.
  • One terminal state (shown twice, as the shaded squares).
  • Actions that would take the agent off the grid leave the state unchanged.
  • Reward is –1 until the terminal state is reached.
  • (An iterative policy-evaluation sketch for this gridworld is given below.)
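
A minimal sketch of iterative policy evaluation for this 4×4 gridworld under the equiprobable random policy; the cell numbering and helper names are assumptions for illustration, not taken from the slides:

```python
import numpy as np

# 4x4 gridworld: cells 0..15, where cells 0 and 15 play the role of the
# single terminal state. Equiprobable random policy over four actions,
# reward -1 per step, undiscounted.
N = 4
TERMINALS = {0, N * N - 1}
MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(s, move):
    """Deterministic move; actions that would leave the grid keep the state unchanged."""
    row, col = divmod(s, N)
    nr, nc = row + move[0], col + move[1]
    return nr * N + nc if 0 <= nr < N and 0 <= nc < N else s

def evaluate_random_policy(theta=1e-4):
    V = np.zeros(N * N)
    while True:
        delta = 0.0
        for s in range(N * N):
            if s in TERMINALS:
                continue
            # Full backup: average the four equiprobable actions.
            v_new = sum(0.25 * (-1.0 + V[step(s, m)]) for m in MOVES)
            delta = max(delta, abs(v_new - V[s]))
            V[s] = v_new
        if delta < theta:
            return V.reshape(N, N)

print(evaluate_random_policy())  # converges to V^pi for the random policy
```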
Policy Improvement

Suppose we have computed $V^{\pi}$ for a deterministic policy $\pi$.

For a given state $s$, would it be better to take an action $a \neq \pi(s)$?

The value of taking action $a$ in state $s$ and thereafter following $\pi$ is $Q^{\pi}(s, a) = \sum_{s'} P^{a}_{ss'} \left[ R^{a}_{ss'} + \gamma V^{\pi}(s') \right]$; it is better to switch to $a$ whenever $Q^{\pi}(s, a) > V^{\pi}(s)$.
Policy Iteration

$$\pi_0 \xrightarrow{E} V^{\pi_0} \xrightarrow{I} \pi_1 \xrightarrow{E} V^{\pi_1} \xrightarrow{I} \pi_2 \xrightarrow{E} \cdots \xrightarrow{I} \pi^{*} \xrightarrow{E} V^{*}$$

where E denotes policy evaluation and I denotes policy improvement (“greedification”).
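
A compact sketch of policy iteration under the same assumed array shapes as the earlier evaluation sketch (`P[s, a, s']`, `R[s, a, s']`); it reuses the hypothetical `evaluate_policy_exact` helper defined above:

```python
import numpy as np

def policy_iteration(P, R, gamma, tol=1e-8):
    """Alternate exact policy evaluation and greedy policy improvement."""
    S, A, _ = P.shape
    pi = np.full((S, A), 1.0 / A)       # start from the uniform random policy
    while True:
        V = evaluate_policy_exact(P, R, pi, gamma)            # policy evaluation
        Q = np.einsum('sat,sat->sa', P, R) + gamma * (P @ V)  # one-step lookahead
        greedy = np.argmax(Q, axis=1)                         # policy improvement
        new_pi = np.eye(A)[greedy]                            # deterministic greedy policy
        if np.allclose(new_pi, pi, atol=tol):
            return greedy, V
        pi = new_pi
```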

Example 4.2: Jack's Car Rental 
  • Jack manages two locations for a nationwide car rental company.
  • Each day, some number of customers arrive at each location to rent cars.
    • If Jack has a car available, he rents it out and is credited $10 by the national company.
    • If he is out of cars at that location, then the business is lost.
    • Cars become available for renting the day after they are returned.
    • To help ensure that cars are available where they are needed, Jack can move them between the two locations overnight, at a cost of $2 per car moved.
Example 4.2: Jack's Car Rental
  • We assume that:
    • the numbers of cars requested and returned at each location are Poisson random variables;
    • there can be no more than 20 cars at each location;
    • a maximum of five cars can be moved from one location to the other in one night.
Example 4.2: Jack's Car Rental
  • We take the discount rate to be 0.9 and formulate this as a continuing finite MDP, where:
    • the time steps are days,
    • the state is the number of cars at each location at the end of the day, and
    • the actions are the net numbers of cars moved between the two locations overnight.
Example 4.2: Jack's Car Rental

The figure shows the sequence of policies found by policy iteration, starting from the policy that never moves any cars.

Value Iteration

Recall the full policy-evaluation backup:

$$V_{k+1}(s) \leftarrow \sum_{a} \pi(s, a) \sum_{s'} P^{a}_{ss'} \left[ R^{a}_{ss'} + \gamma V_{k}(s') \right]$$

Here is the full value-iteration backup:

$$V_{k+1}(s) \leftarrow \max_{a} \sum_{s'} P^{a}_{ss'} \left[ R^{a}_{ss'} + \gamma V_{k}(s') \right]$$

Example 4.3: Gambler's Problem
  • A gambler has the opportunity to make bets on the outcomes of a sequence of coin flips.
    • If the coin comes up heads, he wins as many dollars as he has staked on that flip;
    • if it is tails, he loses his stake.
  • The game ends when the gambler wins by reaching his goal of $100, or loses by running out of money.
Example 4.3: Gambler's Problem
  • On each flip, the gambler must decide what portion of his capital to stake, in integer numbers of dollars.
  • This problem can be formulated as an undiscounted, episodic, finite MDP.
Example 4.3: Gambler's Problem
  • The state is the gambler's capital: $s \in \{1, 2, \dots, 99\}$.
  • The actions are stakes: $a \in \{0, 1, \dots, \min(s, 100 - s)\}$.
  • The reward is zero on all transitions except those on which the gambler reaches his goal, when it is +1.
  • (A value-iteration sketch for this problem is given below.)
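
A minimal value-iteration sketch for the gambler's problem as described above. The probability of heads, `p_h`, is a free parameter (0.4 here is only an illustrative choice), and the function and variable names are assumptions, not from the slides:

```python
import numpy as np

GOAL = 100

def gamblers_value_iteration(p_h=0.4, theta=1e-9):
    """Value iteration for the gambler's problem (undiscounted, episodic)."""
    V = np.zeros(GOAL + 1)   # V[0] = V[GOAL] = 0; reaching GOAL yields reward +1
    while True:
        delta = 0.0
        for s in range(1, GOAL):
            stakes = range(1, min(s, GOAL - s) + 1)
            # Backup: expected value of winning or losing each possible stake.
            action_values = [
                p_h * ((1.0 if s + a == GOAL else 0.0) + V[s + a])
                + (1 - p_h) * V[s - a]
                for a in stakes
            ]
            best = max(action_values)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < theta:
            return V

V = gamblers_value_iteration()
print(V[50])  # value of holding $50: the probability of reaching the goal under optimal play
```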
Example 4.3: Gambler's Problem

[Figure: the value function found by successive sweeps of value iteration.]

Example 4.3: Gambler's Problem

[Figure: the final policy found by value iteration.]

Asynchronous DP
  • All the DP methods described so far require exhaustive sweeps of the entire state set.
  • Asynchronous DP does not use sweeps. Instead it works like this:
    • Repeat until convergence criterion is met:
      • Pick a state at random and apply the appropriate backup
  • Still needs lots of computation, but does not get locked into hopelessly long sweeps.
  • Can you select states to backup intelligently? YES: an agent’s experience can act as a guide.
Generalized Policy Iteration

Generalized Policy Iteration (GPI): any interaction of policy evaluation and policy improvement, independent of their granularity.

[Figure: a geometric metaphor for the convergence of GPI.]

Efficiency of DP
  • Finding an optimal policy is polynomial in the number of states…
  • BUT, the number of states is often astronomical, e.g., often growing exponentially with the number of state variables (what Bellman called “the curse of dimensionality”).
  • In practice, classical DP can be applied to problems with a few million states.
  • Asynchronous DP can be applied to larger problems, and is appropriate for parallel computation.
  • It is surprisingly easy to come up with MDPs for which DP methods are not practical.
Conclusion
  • Agent-environment interaction
    • States
    • Actions
    • Rewards
  • Policy: stochastic rule for selecting actions
  • Return: the function of future rewards the agent tries to maximize
  • Episodic and continuing tasks
  • Markov Property
  • Markov Decision Process
    • Transition probabilities
    • Expected rewards
Conclusion
  • Value functions
    • State-value function for a policy
    • Action-value function for a policy
    • Optimal state-value function
    • Optimal action-value function
  • Optimal value functions
  • Optimal policies
  • Bellman Equations
  • The need for approximation
Conclusion
  • Policy evaluation: backups without a max
  • Policy improvement: form a greedy policy, if only locally
  • Policy iteration: alternate the above two processes
  • Value iteration: backups with a max
  • Full backups (to be contrasted later with sample backups)
Conclusion
  • Generalized Policy Iteration (GPI)
  • Asynchronous DP: a way to avoid exhaustive sweeps
  • Bootstrapping: updating estimates based on other estimates.
Basic References
  • “Reinforcement Learning: An Introduction”, by Sutton & Barto:
    • http://envy.cs.umass.edu/~rich/book/the-book.html
  • “Reinforcement Learning: A Survey”, by Kaelbling & Littman:
    • http://www.cs.cmu.edu/afs/cs/project/jair/pub/volume4/kaelbling96a-html/rl-survey.html
  • Chapter 13 of the book Machine Learning, by Tom Mitchell.
  • Chapter 20 of the book Artificial Intelligence, by Russell & Norvig.
Presentation References
  • This presentation is based on:
    • the slides for the book “Reinforcement Learning: An Introduction”, by Sutton & Barto;
    • the lecture slides for the book Machine Learning, by Tom Mitchell.
Useful Links
  • Reinforcement Learning Repository:
    • http://www-anw.cs.umass.edu/rlr/
  • Machine Learning and Friends at CMU:
    • http://www.cs.cmu.edu/~learning/
  • Machine Learning at Sciencemag:
    • http://www.sciencemag.org/feature/data/compsci/machine_learning.dtl