
Probabilistic Reasoning over Time

CMSC 671 – Fall 2010

Class #19 – Monday, November 8

Russell and Norvig, Chapter 15.1-15.2

material from Lise Getoor, Jean-Claude Latombe, and Daphne Koller


Temporal Probabilistic Agent

[Diagram: the agent perceives the environment through sensors and acts on it through actuators, repeating this perceive/act loop over time steps t_1, t_2, t_3, …]


Time and Uncertainty

  • The world changes; we need to track and predict it

  • Examples: diabetes management, traffic monitoring

  • Basic idea:

    • Copy state and evidence variables for each time step

    • Model uncertainty in change over time, incorporating new observations as they arrive

    • Reminiscent of situation calculus, but for stochastic domains

  • X_t – the set of unobservable state variables at time t

    • e.g., BloodSugar_t, StomachContents_t

  • E_t – the set of observable evidence variables at time t

    • e.g., MeasuredBloodSugar_t, PulseRate_t, FoodEaten_t

  • Assumes discrete time steps
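
A minimal sketch of the "copy the variables for each time step" idea in Python; the structure mirrors the variables named above, but the concrete values are illustrative assumptions, not from the slides:

    # Two time slices of the diabetes example: each time step gets its own copy
    # of the hidden state variables X_t and the observed evidence variables E_t.
    # The concrete values below are illustrative assumptions only.
    time_slices = [
        {   # t = 1
            "X": {"BloodSugar": "high", "StomachContents": "full"},                   # X_1 (unobservable)
            "E": {"MeasuredBloodSugar": 182, "PulseRate": 88, "FoodEaten": "pasta"},  # E_1 = e_1 (observed)
        },
        {   # t = 2
            "X": {"BloodSugar": "normal", "StomachContents": "empty"},                # X_2
            "E": {"MeasuredBloodSugar": 110, "PulseRate": 72, "FoodEaten": "none"},   # E_2 = e_2
        },
    ]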


States and Observations

  • The process of change is viewed as a series of snapshots, each describing the state of the world at a particular time

  • Each time slice is represented by a set of random variables indexed by t:

    • the set of unobservable state variables X_t

    • the set of observable evidence variables E_t

  • The observation at time t is E_t = e_t for some set of values e_t

  • The notation X_{a:b} denotes the set of variables from X_a to X_b


    Stationary Process/Markov Assumption

    • Markov Assumption:

      • X_t depends only on a bounded subset of the previous states X_{0:t-1}

    • First-order Markov process: P(X_t | X_{0:t-1}) = P(X_t | X_{t-1})

    • kth-order Markov process: X_t depends on the previous k time steps

    • Sensor Markov assumption: P(E_t | X_{0:t}, E_{0:t-1}) = P(E_t | X_t)

      • That is: The agent’s observations depend only on the actual current state of the world

    • Assume stationary process:

      • Transition model P(X_t | X_{t-1}) and sensor model P(E_t | X_t) are time-invariant (i.e., they are the same for all t)

      • That is, the changes in the world state are governed by laws that do not themselves change over time


    Complete Joint Distribution

    • Given:

      • Transition model: P(X_t | X_{t-1})

      • Sensor model: P(E_t | X_t)

      • Prior probability: P(X_0)

    • Then we can specify the complete joint distribution:

      P(X_{0:t}, E_{1:t}) = P(X_0) ∏_{i=1..t} P(X_i | X_{i-1}) P(E_i | X_i)
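
    With these three pieces in hand, the joint probability of any concrete state/evidence sequence is just a product of factors. A minimal Python sketch using the umbrella example; the 0.5/0.7/0.9-style numbers are the usual textbook parameters, assumed here rather than taken from this slide:

      # Joint probability of one concrete sequence under the factorization
      #   P(x_0..x_t, e_1..e_t) = P(x_0) * prod_i P(x_i | x_{i-1}) * P(e_i | x_i)
      # Umbrella-world numbers are assumed standard textbook values, not from this slide.
      prior  = {True: 0.5, False: 0.5}                    # P(Rain_0)
      trans  = {True:  {True: 0.7, False: 0.3},           # P(Rain_i | Rain_{i-1})
                False: {True: 0.3, False: 0.7}}
      sensor = {True:  {True: 0.9, False: 0.1},           # P(Umbrella_i | Rain_i)
                False: {True: 0.2, False: 0.8}}

      def joint_probability(states, observations):
          """states = [x_0, x_1, ..., x_t]; observations = [e_1, ..., e_t]."""
          p = prior[states[0]]
          for i, e in enumerate(observations, start=1):
              p *= trans[states[i - 1]][states[i]]        # P(x_i | x_{i-1})
              p *= sensor[states[i]][e]                   # P(e_i | x_i)
          return p

      # e.g., P(Rain_0=T, Rain_1=T, Rain_2=T, Umbrella_1=T, Umbrella_2=T)
      print(joint_probability([True, True, True], [True, True]))   # 0.5*0.7*0.9*0.7*0.9 = 0.19845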


    Example

    [Diagram: umbrella-world Bayesian network. State chain Rain_{t-1} → Rain_t → Rain_{t+1}; each Rain node has an observed child Umbrella_{t-1}, Umbrella_t, Umbrella_{t+1}.]

    This should look a lot like a finite state automaton (since it is one...)


    Inference Tasks

    • Filtering or monitoring: P(X_t | e_1, …, e_t). Compute the current belief state, given all evidence to date

    • Prediction: P(X_{t+k} | e_1, …, e_t). Compute the probability of a future state (see the sketch after this list)

    • Smoothing: P(X_k | e_1, …, e_t) for 0 ≤ k < t. Compute the probability of a past state (hindsight)

    • Most likely explanation: argmax_{x_1, …, x_t} P(x_1, …, x_t | e_1, …, e_t). Given a sequence of observations, find the sequence of states that is most likely to have generated those observations
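
    A minimal sketch of the prediction task: push a belief state forward through the transition model with no new evidence. The transition numbers are assumed standard umbrella-world values, not from this slide:

      # k-step prediction: repeatedly apply the transition model to a belief state.
      # Transition numbers are assumed (standard umbrella-world values).
      STATES  = [True, False]                             # Rain / not Rain
      P_TRANS = {True:  {True: 0.7, False: 0.3},          # P(X_{t+1} | X_t)
                 False: {True: 0.3, False: 0.7}}

      def predict(belief, k):
          """belief maps state -> P(state | e_1:t); returns P(X_{t+k} | e_1:t)."""
          for _ in range(k):
              belief = {x1: sum(belief[x0] * P_TRANS[x0][x1] for x0 in STATES)
                        for x1 in STATES}
          return belief

      # Predicting further ahead drifts the belief toward the 0.5/0.5 stationary distribution.
      print(predict({True: 0.818, False: 0.182}, 2))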


    Examples

    • Filtering: What is the probability that it is raining today, given all of the umbrella observations up through today?

    • Prediction: What is the probability that it will rain the day after tomorrow, given all of the umbrella observations up through today?

    • Smoothing: What is the probability that it rained yesterday, given all of the umbrella observations through today?

    • Most likely explanation: If the umbrella appeared the first three days but not on the fourth, what is the most likely weather sequence to produce these umbrella sightings?


    Filtering

    • We use recursive estimation to compute P(X_{t+1} | e_{1:t+1}) as a function of e_{t+1} and P(X_t | e_{1:t})

    • We can write this as follows:

      P(X_{t+1} | e_{1:t+1}) = α P(e_{t+1} | X_{t+1}) Σ_{x_t} P(X_{t+1} | x_t) P(x_t | e_{1:t})

    • This leads to a recursive definition:

      • f_{1:t+1} = FORWARD(f_{1:t}, e_{t+1})

    QUIZLET: What is α?
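
    A minimal Python sketch of the FORWARD update for the umbrella world, assuming the standard textbook transition and sensor probabilities (0.7/0.3 and 0.9/0.2; they are not stated on this slide). α shows up as the final normalization constant:

      # One FORWARD (filtering) step for a two-state world (Rain / not Rain).
      # Model numbers are assumed standard textbook values, not from this slide.
      STATES   = [True, False]                            # Rain_t = true / false
      P_TRANS  = {True:  {True: 0.7, False: 0.3},         # P(Rain_{t+1} | Rain_t)
                  False: {True: 0.3, False: 0.7}}
      P_SENSOR = {True:  {True: 0.9, False: 0.1},         # P(Umbrella_t | Rain_t)
                  False: {True: 0.2, False: 0.8}}

      def forward(f, evidence):
          """f = P(X_t | e_1:t); evidence = e_{t+1}. Returns P(X_{t+1} | e_1:t+1)."""
          # Predict: sum over x_t of P(X_{t+1} | x_t) P(x_t | e_1:t)
          predicted = {x1: sum(P_TRANS[x0][x1] * f[x0] for x0 in STATES) for x1 in STATES}
          # Update: weight each predicted value by P(e_{t+1} | X_{t+1})
          unnormalized = {x1: P_SENSOR[x1][evidence] * predicted[x1] for x1 in STATES}
          # alpha is just the normalization constant that makes the result sum to 1
          alpha = 1.0 / sum(unnormalized.values())
          return {x1: alpha * unnormalized[x1] for x1 in STATES}

      def filter_sequence(prior, observations):
          """Apply FORWARD once per observation, starting from P(X_0)."""
          f = dict(prior)
          for e in observations:
              f = forward(f, e)
          return f

    For example, filter_sequence({True: 0.5, False: 0.5}, [True, True]) performs exactly the two updates needed for the question on the next slide.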


    Filtering Example

    [Diagram: the same umbrella-world network as before: Rain_{t-1} → Rain_t → Rain_{t+1}, each with its observed Umbrella child.]

    What is the probability of rain on Day 2, given a uniform prior over rain on Day 0, U_1 = true, and U_2 = true?
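
    For reference, a step-by-step numeric walk-through under the same assumed textbook parameters (0.7/0.3 transition, 0.9/0.2 sensor, uniform prior); with those numbers the filtered probability of rain on Day 2 comes out to about 0.883:

      # Worked filtering for the umbrella example; 0.7/0.3 and 0.9/0.2 are assumed textbook values.
      # Day 1: predict from the uniform prior, then weight by the sensor model for U_1 = true.
      pred_rain1    = 0.7 * 0.5 + 0.3 * 0.5                          # = 0.5
      pred_no_rain1 = 0.3 * 0.5 + 0.7 * 0.5                          # = 0.5
      up_rain1, up_no_rain1 = 0.9 * pred_rain1, 0.2 * pred_no_rain1  # 0.45, 0.10
      f_rain1 = up_rain1 / (up_rain1 + up_no_rain1)                  # ~0.818

      # Day 2: same two steps, starting from the Day-1 belief, with U_2 = true.
      pred_rain2    = 0.7 * f_rain1 + 0.3 * (1 - f_rain1)            # ~0.627
      pred_no_rain2 = 0.3 * f_rain1 + 0.7 * (1 - f_rain1)            # ~0.373
      up_rain2, up_no_rain2 = 0.9 * pred_rain2, 0.2 * pred_no_rain2  # ~0.565, ~0.075
      print(up_rain2 / (up_rain2 + up_no_rain2))                     # ~0.883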

