LECTURE 24: COMBINING CLASSIFIERS
Objectives: Cross-Validation; ML and Bayesian Model Comparison; Combining Classifiers
Resources: AM: Cross-Validation; CV: Bayesian Model Averaging; VISP: Classifier Combination; Learning With Queries
The Agent-Environment Interface
Time steps need not refer to fixed intervals of real time (e.g., each new training pattern can be considered a time step).
Actions can be low-level (e.g., voltages to motors), high-level (e.g., accepting a job offer), or “mental” (e.g., shifting the focus of attention). Actions can also be rule-based (e.g., a user expresses a preference) or computational (e.g., assigning a class label or updating a probability).
States can be low-level “sensations”, or they can be abstract, symbolic, based on memory, or subjective (e.g., the state of being “surprised” or “lost”).
States can be hidden or observable.
A reinforcement learning (RL) agent is not like a whole animal or robot, which may consist of many RL agents as well as other components.
The environment is not necessarily unknown to the agent, only incompletely controllable.
Reward computation is in the agent’s environment because the agent cannot change it arbitrarily.
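To make this interface concrete, here is a minimal Python sketch of the agent-environment loop (my illustration, not part of the lecture); the CoinFlipEnv environment, its reset/step methods, and the random policy are all hypothetical.

```python
import random

class CoinFlipEnv:
    """Toy environment (illustrative): guess a coin flip each step.
    The episode ends (absorbing state) after a fixed number of steps."""

    def __init__(self, horizon=3):
        self.horizon = horizon
        self.t = 0

    def reset(self):
        self.t = 0
        return self.t  # the "state" here is just the time step

    def step(self, action):
        coin = random.choice([0, 1])
        reward = 1.0 if action == coin else 0.0  # scalar reward signal
        self.t += 1
        done = self.t >= self.horizon  # episode over: absorbing state reached
        return self.t, reward, done

# The agent-environment loop: at each time step the agent observes a
# state, chooses an action, and receives a reward plus the next state.
env = CoinFlipEnv()
state, done, rewards = env.reset(), False, []
while not done:
    action = random.choice([0, 1])          # a trivial random "policy"
    state, reward, done = env.step(action)  # environment responds
    rewards.append(reward)
print("episode rewards:", rewards)
```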
Getting the Degree of Abstraction Right
Is a scalar reward signal an adequate notion of a goal? Perhaps not, but it is surprisingly flexible.
A goal should specify what we want to achieve, not how we want to achieve it.
A goal must be outside the agent’s direct control — thus outside the agent.
The agent must be able to measure success explicitly and frequently during its lifespan.
Returns
What do we want to maximize? In general, we want to maximize the expected return, E{R_t}, for each time step t. In episodic tasks, the return is the sum of rewards to the end of the episode, R_t = r_{t+1} + r_{t+2} + … + r_T; in continuing tasks, we use the discounted return R_t = r_{t+1} + γ r_{t+2} + γ² r_{t+3} + …, with discount rate 0 ≤ γ ≤ 1.
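As a quick illustration of these definitions (a sketch, not from the slides), the hypothetical helper below computes the return from a reward sequence; with gamma = 1 it reduces to the episodic sum.

```python
def discounted_return(rewards, gamma=1.0):
    """Return R_t computed from time step t onward:
    R_t = r_{t+1} + gamma * r_{t+2} + gamma^2 * r_{t+3} + ..."""
    R = 0.0
    for k, r in enumerate(rewards):
        R += (gamma ** k) * r
    return R

# Episodic task: finite sum with gamma = 1.
print(discounted_return([1, 1, 1]))        # 3.0
# Continuing task: discounting keeps the infinite sum finite;
# here we truncate to a finite horizon for illustration.
print(discounted_return([1] * 100, 0.9))   # ~1/(1 - 0.9) = 10 (truncated)
```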
Example
A Unified Notation
In episodic tasks, we number the time steps of each episode starting from zero.
We usually do not have to distinguish between episodes, so we write st instead of st,j for the state s at step t of episode j.
Think of each episode as ending in an absorbing state that always produces a reward of zero:
[Diagram: a three-step episode with rewards r1 = +1, r2 = +1, r3 = +1, ending in an absorbing state that yields r4 = 0, r5 = 0, … forever.]
We can cover all cases by writing R_t = Σ_{k=0}^{∞} γ^k r_{t+k+1}, where γ = 1 is allowed only if a zero-reward absorbing state is always reached.
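A short sketch (assuming the hypothetical unified_return helper below) checks that the unified formula reproduces the episodic return from the diagram above when γ = 1, and remains finite under discounting.

```python
def unified_return(rewards, gamma=1.0, tail_len=50):
    """R_t = sum over k of gamma^k * r_{t+k+1}, treating the episode as
    continuing forever in a zero-reward absorbing state (truncated here)."""
    padded = list(rewards) + [0.0] * tail_len  # absorbing state: reward 0
    return sum((gamma ** k) * r for k, r in enumerate(padded))

# Rewards from the diagram: +1, +1, +1, then the absorbing state.
# gamma = 1 is safe here only because the absorbing state pays zero reward.
print(unified_return([1, 1, 1], gamma=1.0))  # 3.0, same as the episodic return
print(unified_return([1, 1, 1], gamma=0.9))  # 1 + 0.9 + 0.81 = 2.71
```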
Summary
An RL agent interacts with its environment through states, actions, and a scalar reward signal; goals are specified by rewards outside the agent’s direct control, and episodic and continuing tasks are unified by the discounted return with zero-reward absorbing states.