Tea – Time - Talks

Embedded HMMs
Radford Neal, Matt Beal, Sam Roweis (University of Toronto)
Tea – Time - Talks

Every Friday 3.30 pm

ICS 432


We Need Speakers (you)! Please volunteer.

Philosophy: a TTT (tea-time talk) should take approximately 15 mins (extract the essence only).

Email: [email protected]

Question: Can we sample efficiently in non-linear state-space models with hidden variables (e.g. the non-linear Kalman filter)?

One option is Gibbs sampling. However, if the random variables are tightly coupled, the Markov chain mixes very slowly, because a large number of variables would have to change simultaneously.
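To make the slow-mixing point concrete, here is a minimal sketch (not from the talk) of single-site Gibbs sampling in a linear-Gaussian chain; the model parameters `phi`, `q`, `r` are illustrative assumptions. With `phi` near 1 the states are tightly coupled and single-site updates move very little per sweep:

```python
import numpy as np

def gibbs_chain(y, phi=0.99, q=0.1, r=0.1, n_sweeps=50, rng=None):
    """Single-site Gibbs sampling for the linear-Gaussian chain
    x_t ~ N(phi * x_{t-1}, q),  y_t ~ N(x_t, r).
    Each sweep resamples one x_t at a time from its full conditional;
    with phi near 1 (tightly coupled states) the chain mixes slowly.
    """
    rng = np.random.default_rng(rng)
    T = len(y)
    x = np.zeros(T)
    for _ in range(n_sweeps):
        for t in range(T):
            # Accumulate precision and precision-weighted mean from the
            # observation and from the left/right neighbours.
            prec = 1.0 / r
            mean_num = y[t] / r
            if t > 0:                       # prior term from x_{t-1}
                prec += 1.0 / q
                mean_num += phi * x[t - 1] / q
            if t < T - 1:                   # likelihood term from x_{t+1}
                prec += phi**2 / q
                mean_num += phi * x[t + 1] / q
            x[t] = rng.normal(mean_num / prec, np.sqrt(1.0 / prec))
    return x
```

Each update conditions on the two neighbouring states, so a global shift of the whole trajectory can only happen through many tiny local moves; this is the pathology the embedded HMM is designed to avoid.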

Idea: Embed an HMM!

1. Choose a distribution ρt at every time slice t.

i) Define a forward kernel Rt(x → x') and a backward kernel Lt(x → x') such that:

ρt(x) Rt(x → x') = ρt(x') Lt(x' → x)

(note: not necessarily detailed balance, since Rt and Lt may differ)

The kernels will be used to sample K states embedded in the continuous domain of xt.

Idea: Embed an HMM!

1. Choose a distribution ρt at every time slice t.

2. Sample K states from that distribution as follows:

i) Define a forward kernel Rt(x → x') and a backward kernel Lt(x → x'), and take the current state sequence as the starting point.

ii) Pick a number J uniformly at random between 0 and K-1.

iii) Apply the forward kernel J times, starting at the current state, and apply the backward kernel K-J-1 times, also starting at the current state.

3. Sample from the `embedded HMM' using "forward-backward".
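The pool-building step above can be sketched as follows. This is an illustrative assumption, not the authors' code: it uses a random-walk Metropolis kernel that leaves ρt invariant, the special case where the forward and backward kernels coincide (detailed balance); `log_rho` and `step` are hypothetical inputs:

```python
import numpy as np

def embed_pool(x_current, log_rho, K, step=0.5, rng=None):
    """Build the K embedded states at one time slice.
    Uses a symmetric random-walk Metropolis kernel targeting rho_t,
    so forward and backward kernels are the same (detailed balance).
    """
    rng = np.random.default_rng(rng)

    def metropolis(x):
        # One Metropolis step that leaves rho_t invariant.
        prop = x + step * rng.normal()
        if np.log(rng.random()) < log_rho(prop) - log_rho(x):
            return prop
        return x

    J = rng.integers(K)          # uniform on {0, ..., K-1}
    pool = [x_current]           # the current state is always in the pool
    x = x_current
    for _ in range(J):           # apply the forward kernel J times
        x = metropolis(x)
        pool.append(x)
    x = x_current
    for _ in range(K - J - 1):   # apply the backward kernel K-J-1 times
        x = metropolis(x)
        pool.insert(0, x)
    return pool                  # exactly K states
```

Because the current state is placed in the pool at a uniformly random position, the construction treats it symmetrically with the other K-1 candidates, which is what makes the overall move reversible.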

Sampling from the eHMM


1. Starting at the current state sequence, sample K states at each time slice by applying the forward kernel J times (J chosen uniformly at random from {0, ..., K-1}) and the backward kernel K-J-1 times. This defines the embedded state space.

2. Sample a state sequence using the forward-backward algorithm from the following distribution:

P(x1, ..., xT) ∝ ∏t [ p(yt | xt) p(xt | xt-1) / ρt(xt) ]

(x and y are now discrete!)

Proof of detailed balance: see the paper.


Note: the probabilities of the HMM are not normalized, so it should be treated as an undirected graphical model.
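A forward-filter / backward-sample pass over the pools can be sketched like this. It is a minimal sketch under stated assumptions, not the authors' implementation: `pools`, `log_trans`, `log_emit`, and `log_rho` are hypothetical interfaces, and the unnormalized potentials are normalized locally in the forward pass, exactly as for an undirected chain:

```python
import numpy as np

def logsumexp(a):
    # Numerically stable log(sum(exp(a))) for the forward recursion.
    m = a.max()
    return m + np.log(np.exp(a - m).sum())

def sample_path(pools, log_trans, log_emit, log_rho, rng=None):
    """Forward-filter / backward-sample over the embedded HMM.
    pools[t] holds the K candidate states at time t; a path has
    unnormalized potential prod_t p(y_t|x_t) p(x_t|x_{t-1}) / rho_t(x_t).
    """
    rng = np.random.default_rng(rng)
    T, K = len(pools), len(pools[0])

    # Forward pass in log space: alpha[t, k] up to an additive constant.
    alpha = np.empty((T, K))
    for k in range(K):
        alpha[0, k] = log_emit(0, pools[0][k]) - log_rho(0, pools[0][k])
    for t in range(1, T):
        for k in range(K):
            prev = alpha[t - 1] + np.array(
                [log_trans(t, pools[t - 1][j], pools[t][k]) for j in range(K)])
            alpha[t, k] = (log_emit(t, pools[t][k]) - log_rho(t, pools[t][k])
                           + logsumexp(prev))

    def draw(logw):
        # Sample an index proportional to exp(logw), stably.
        w = np.exp(logw - logw.max())
        return rng.choice(len(w), p=w / w.sum())

    # Backward sampling of one discrete path through the pools.
    idx = [0] * T
    idx[-1] = draw(alpha[-1])
    for t in range(T - 2, -1, -1):
        logw = alpha[t] + np.array(
            [log_trans(t + 1, pools[t][j], pools[t + 1][idx[t + 1]])
             for j in range(K)])
        idx[t] = draw(logw)
    return [pools[t][idx[t]] for t in range(T)]
```

The 1/ρt(xt) factor is the importance correction for having drawn the pool states from ρt rather than from the model; everything else is the standard forward-backward recursion on a K-state chain.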