Convergence of Sequential Monte Carlo Methods
Presentation Transcript

  1. Convergence of Sequential Monte Carlo Methods Dan Crisan, Arnaud Doucet

  2. Problem Statement • X: signal process, Y: observation process • X evolves according to a Markov transition (state) equation • Y is related to X through an observation equation
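A minimal sketch of the discrete-time state-space setup usually assumed in this line of work (the notation K for the signal transition kernel and g for the observation likelihood is an assumption, not taken from the slide):

X_t \mid (X_{t-1} = x_{t-1}) \sim K(x_{t-1}, dx_t), \qquad t \ge 1,
Y_t \mid (X_t = x_t) \sim g(y_t \mid x_t)\, dy_t.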

  3. Bayes’ recursion • Prediction • Updating
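Under the assumed notation above, with \pi_{t|t-1} the predicted and \pi_{t|t} the updated (posterior) distribution, the two steps of the recursion read (a sketch, following the standard formulation):

Prediction:  \pi_{t|t-1}(dx_t) = \int K(x_{t-1}, dx_t)\, \pi_{t-1|t-1}(dx_{t-1})
Updating:    \pi_{t|t}(dx_t) = \frac{g(y_t \mid x_t)\, \pi_{t|t-1}(dx_t)}{\int g(y_t \mid x)\, \pi_{t|t-1}(dx)}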

  4. A Sequential Monte Carlo Method • Empirical measure of the particle system • Transition kernel of the signal • Importance (proposal) distribution • The importance distribution and the transition kernel are assumed mutually absolutely continuous, with a strictly positive Radon–Nikodym derivative • The target is then also absolutely continuous with respect to the proposal, so the importance weights are well defined
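For concreteness, a hedged sketch in the same assumed notation: the empirical measure associated with N particles, and the form the importance weight takes when the transition kernel K admits a density with respect to the proposal q:

\pi^N(dx) = \frac{1}{N} \sum_{i=1}^{N} \delta_{x^{(i)}}(dx), \qquad
w_t^{(i)} \propto g\big(y_t \mid x_t^{(i)}\big)\, \frac{dK(x_{t-1}^{(i)}, \cdot)}{dq(x_{t-1}^{(i)}, y_t, \cdot)}\big(x_t^{(i)}\big)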

  5. Algorithm • Step 1: Sequential importance sampling • Sample each particle from the importance distribution • Evaluate the normalized importance weights and let them define the weighted empirical measure (a sketch in code follows)
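A minimal Python sketch of Step 1 for a one-dimensional toy model; the Gaussian random-walk proposal, the Gaussian observation likelihood, and the function and parameter names are illustrative assumptions rather than the model on the slides:

import numpy as np

def importance_sampling_step(particles, y, rng, sigma_x=1.0, sigma_y=0.5):
    # Step 1: sample each particle from the importance distribution; here the
    # proposal is the signal transition kernel itself (the "bootstrap" choice),
    # modelled as a toy Gaussian random walk.
    new_particles = particles + rng.normal(0.0, sigma_x, size=particles.shape)
    # With that choice the importance weight reduces to the likelihood
    # g(y_t | x_t), taken here to be Gaussian.
    log_w = -0.5 * ((y - new_particles) / sigma_y) ** 2
    w = np.exp(log_w - log_w.max())   # subtract the max for numerical stability
    weights = w / w.sum()             # normalized importance weights
    return new_particles, weights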

  6. Step 2: Selection step • Multiply/discard particles with high/low importance weights to obtain N equally weighted particles, and let the associated empirical measure denote the result • Step 3: MCMC step • Sample each particle from a Markov kernel K whose invariant distribution is the current target measure, and let the resulting empirical measure define the new approximation
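Continuing the same toy sketch, Steps 2 and 3: multinomial resampling for the selection step, and one random-walk Metropolis-Hastings move per particle as a simple example of a Markov kernel; the target used in the move (the Gaussian likelihood alone) is an illustrative stand-in for the invariant distribution required on the slide:

import numpy as np

def selection_step(particles, weights, rng):
    # Step 2: multiply/discard particles according to their normalized weights
    # (multinomial resampling), keeping N equally weighted particles.
    n = len(particles)
    idx = rng.choice(n, size=n, p=weights)
    return particles[idx]

def mcmc_step(particles, y, rng, sigma_y=0.5, step=0.3):
    # Step 3: one random-walk Metropolis-Hastings move per particle, targeting
    # a density proportional to the Gaussian likelihood g(y | x).
    proposal = particles + rng.normal(0.0, step, size=particles.shape)
    log_alpha = (-0.5 * ((y - proposal) / sigma_y) ** 2
                 + 0.5 * ((y - particles) / sigma_y) ** 2)
    accept = np.log(rng.uniform(size=particles.shape)) < log_alpha
    return np.where(accept, proposal, particles)

Chaining importance_sampling_step, selection_step, and mcmc_step over time with a shared rng = np.random.default_rng() gives one pass of the three-step algorithm described on these two slides (toy model only).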

  7. Convergence Study • Notation: the particle approximation and the true posterior distribution • First, convergence to 0 of the average mean square error is shown under quite general conditions • Then (almost sure) convergence of the particle approximation toward the true posterior is proved under more restrictive conditions
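In the assumed notation, with \pi^N_{t|t} the particle approximation, \pi_{t|t} the true posterior, and f a bounded test function, the two results announced here read:

\lim_{N\to\infty} \mathbb{E}\big[\big(\pi^N_{t|t}(f) - \pi_{t|t}(f)\big)^2\big] = 0,
and, under more restrictive conditions,
\lim_{N\to\infty} \pi^N_{t|t}(f) = \pi_{t|t}(f) \quad \text{a.s.}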

  8. Bounds for mean square errors • Assumptions • Assumption 1: Importance distribution and weights • The absolute continuity assumption on the importance distribution holds for all arguments • The importance weight is a bounded function of its arguments; its bound is used in what follows

  9. • There exists a constant such that, for all measures, the sampling kernel depends continuously (in the stated sense) on the measure variable • There exists a constant such that the importance weights depend continuously on the measure variable in an analogous sense

  10. Assumption 2: Resampling/selection scheme
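A hedged rendering of the kind of condition usually imposed on the selection scheme in this setting (notation assumed): writing \tilde\pi^N_{t|t} for the weighted measure before selection and \hat\pi^N_{t|t} for the empirical measure after it, there exists a constant C such that, for every bounded test function f,

\mathbb{E}\big[\big(\hat\pi^N_{t|t}(f) - \tilde\pi^N_{t|t}(f)\big)^2 \,\big|\, \tilde\pi^N_{t|t}\big] \le C\, \frac{\|f\|^2}{N}.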

  11. The first assumption ensures that • the importance function is chosen so that the corresponding importance weights are bounded above • the sampling kernel and importance weights depend "continuously" on the measure variable • The second assumption ensures that • the selection scheme does not introduce too strong a "discrepancy"

  12. Lemma 1 • Assume that the mean square error bound holds at the previous time step for any bounded test function; then, after Step 1, a bound of the same form holds for any bounded test function • Lemma 2 • Assume that the bound holds for any bounded test function; then a bound of the same form holds for any bounded test function

  13. Lemma 3 • Assume that the bound holds for any bounded test function; then, after Step 2, a bound of the same form holds for any bounded test function • Lemma 4 • Assume that the bound holds for any bounded test function; then a bound of the same form holds for any bounded test function

  14. Theorem 1 • For all t, there exists a constant independent of the number of particles N such that, for any bounded test function, the mean square error of the particle approximation is bounded as in the sketch below
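In the assumed notation, the bound described here takes the standard form: for all t there exists a constant C_t, independent of N, such that for any bounded test function f,

\mathbb{E}\big[\big(\pi^N_{t|t}(f) - \pi_{t|t}(f)\big)^2\big] \le C_t\, \frac{\|f\|^2}{N}.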