Using particle filters to fit state-space models of wildlife population dynamics




  1. I always wanted to be a model Using particle filters to fit state-space models of wildlife population dynamics Len Thomas, Stephen Buckland, Ken Newman, John Harwood University of St Andrews 16th September 2004

  2. Outline • 1. Introduction / background / recap • State space models • Motivating example – grey seal model • Methods of fitting the models • Types of inference from time series data • 2. Tutorial: nonlinear particle filtering / sequential importance sampling • importance sampling • sequential importance sampling (SIS) • tricks • 3. Results and discussion • Potential of particle filtering in ecology and epidemiology

  3. References • This talk: ftp://bat.mcs.st-and.ac.uk/pub/len/emstalk.zip • See also talk by David Elston, tomorrow • Books: • Doucet, A., N. de Freitas and N. Gordon (eds). 2001. Sequential Monte Carlo Methods in Practice. Springer-Verlag. • Liu, J.S. 2001. Monte Carlo Strategies in Scientific Computing. Springer-Verlag. • Papers from the St Andrews group – see abstract

  4. 1. Introduction

  5. State-space models • Describe the evolution of two parallel time series • nt – vector of true states (unobserved) • yt – vector of observations on the states • t = 1, ..., T • State process density gt(nt | nt-1; Θ) • Observation process density ft(yt | nt; Θ) • Initial state density g0(n0; Θ) • States can be defined at the level of the individual • Time steps don't need to be equally spaced
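As a concrete illustration (mine, not from the talk), here is a minimal Python sketch of the two parallel series, assuming a scalar state with stochastic exponential growth for the state process and a Poisson count for the observation process:

    import numpy as np

    rng = np.random.default_rng(1)

    def simulate_ssm(T=20, n0=30, theta=1.05):
        """Simulate a toy state-space model.
        State process g_t:  n_t ~ Poisson(theta * n_{t-1})  (unobserved)
        Observation  f_t:   y_t ~ Poisson(n_t)              (observed count)
        Both densities are illustrative stand-ins, not the seal model."""
        n = np.empty(T + 1, dtype=int)
        y = np.empty(T + 1, dtype=int)
        n[0] = n0
        y[0] = rng.poisson(n[0])
        for t in range(1, T + 1):
            n[t] = rng.poisson(theta * n[t - 1])  # state transition
            y[t] = rng.poisson(n[t])              # observation of the hidden state
        return n, y

    true_states, observations = simulate_ssm()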

  6. Hidden process models • An extension of state-space models • state process can be > 1st order Markov • observation process can depend on previous states and observations

  7. Example: British Grey Seal

  8. Surveying seals • Hard to survey outside of breeding season: 80% of time at sea, 90% of this time underwater • Aerial surveys of breeding colonies since 1960s used to estimate pup production • (Other data: intensive studies, radio tracking, genetic, photo-ID, counts at haul-outs) • ~6% per year overall increase in pup production

  9. Estimated pup production

  10. Orkney example colonies

  11. Questions? • What is the current population size? • What is the future population trajectory? • Biological interest in population processes • E.g., movement of recruiting females • (What sort of data should be collected to most efficiently answer the above questions?)

  12. Grey seal model: states • 7 age classes • pups (n0) • age 1 – age 5 females (n1-n5) • age 6+ females (n6+) = breeders • 48 colonies – aggregated into 4 regions

  13. State process • time step of 1 year, starting just after the breeding season • 4 sub-processes: survival, age incrementation and sexing, movement of recruiting females, breeding • Schematic: na,r,t-1 –survival→ us,a,r,t –age/sexing→ ui,a,r,t –movement→ um,a,r,t –breeding→ na,r,t

  14. Survival • density independent adult survival • density dependent pup survival

  15. Age incrementation and sexing • ui,1,r,t ~ Bin(us,0,r,t, 0.5) • ui,a+1,r,t = us,a,r,t for a = 1, ..., 4 • ui,6+,r,t = us,5,r,t + us,6+,r,t

  16. Movement of recruiting females • only age 5 females move • movement is fitness dependent: females move if expected offspring survival is higher elsewhere • expected proportion moving is proportional to • the difference in juvenile survival rates • the inverse of the distance between colonies • the inverse of site faithfulness

  17. Breeding • density independent • ub,0,r,t ~ Bin (um,6+,r,t, α)

  18. Matrix representation E(nt | nt-1, Θ) ≈ B Mt A St nt-1

  19. Matrix representation E(nt | nt-1, Θ) ≈ Lt nt-1, where Lt = B Mt A St

  20. Life cycle graph (single region): pup → 1 → 2 → 3 → 4 → 5 → 6+, with density dependence acting on the pup stage

  21. Life cycle graph (four regions: North Sea, Inner Hebrides, Outer Hebrides, Orkneys): each region has its own pup → 1 → ... → 6+ cycle with density dependent pup survival; movement between regions depends on distance, density dependence and site faithfulness

  22. Observation process • pup production estimates normally distributed, with variance proportional to expectation

  23. Grey seal model: summary • Parameters: • survival : φa, φpmax, β1 ,..., β4 • breeding: α • movement: γdd, γdist, γsf • observation CV: ψ • total 7 + 4 = 11 • States: • 7 age classes per region per year • total 7 x 4 = 28 per year

  24. Fitting state-space models • Analytic approaches • Kalman filter (Gaussian linear model) – Takis Besbeas • Extended Kalman filter (Gaussian nonlinear model – approximate) • Numerical maximization of the likelihood • Monte Carlo approximations • Likelihood-based (Geyer 1996) • Bayesian – Carmen Fernández • Rejection sampling – Damien Clancy • Markov chain Monte Carlo (MCMC) – Philip O’Neill • Monte Carlo particle filtering – Me!

  25. Inference tasks for time series data • Observe data y1:t = (y1, ..., yt) • We wish to infer the unobserved states n1:t = (n1, ..., nt) and parameters Θ • Fundamental inference tasks: • Smoothing p(n1:t, Θ | y1:t) • Prediction p(nt+x | y1:t) • Filtering p(nt, Θt | y1:t)

  26. Filtering inference can be fast! • Suppose we have p(nt | y1:t) • A new observation yt+1 arrives. We want to update to p(nt+1 | y1:t+1) • Can use the filtering recursion: p(nt+1 | y1:t+1) ∝ ft+1(yt+1 | nt+1) ∫ gt+1(nt+1 | nt) p(nt | y1:t) dnt
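In the notation of slide 5, the recursion is often written as separate prediction and update (correction) steps; the standard statement (reconstructed, since the slide's own formula is not in the transcript) is:

    \[
    p(n_{t+1} \mid y_{1:t}) = \int g_{t+1}(n_{t+1} \mid n_t;\Theta)\, p(n_t \mid y_{1:t})\, dn_t
    \qquad \text{(prediction)}
    \]
    \[
    p(n_{t+1} \mid y_{1:t+1}) =
      \frac{f_{t+1}(y_{t+1} \mid n_{t+1};\Theta)\, p(n_{t+1} \mid y_{1:t})}
           {\int f_{t+1}(y_{t+1} \mid n_{t+1};\Theta)\, p(n_{t+1} \mid y_{1:t})\, dn_{t+1}}
    \qquad \text{(update)}
    \]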

  27. Monte-Carlo particle filters:online inference for evolving datasets • Particle filtering used when fast online methods required to produce updated (filtered) estimates as new data arrives: • Tracking applications in radar, sonar, etc. • Finance • Stock prices, exchange rates arrive sequentially. Online update of portfolios. • Medical monitoring • Online monitoring of ECG data for sick patients • Digital communications • Speech recognition and processing

  28. 2. Monte Carlo Particle Filtering Variants/Synonyms: Sequential Importance Sampling (SIS) Sequential Importance Sampling with Resampling (SISR) Bootstrap Filter Interacting Particle Filter Auxiliary Particle Filter

  29. Importance sampling • Want to make inferences about some distribution p(), but cannot sample from it directly • Solution: • Sample from another distribution q() (the importance function) which has the same support as p() • Correct using importance weights

  30. Importance sampling algorithm • Want to make inferences about p(nt+1 | y1:t+1) • Prediction step: make K random draws nt+1(i), i = 1, ..., K, from the importance function q() • Correction step: calculate the weights wt+1(i) ∝ p(nt+1(i) | y1:t+1) / q(nt+1(i)) • Normalize the weights so that Σi wt+1(i) = 1 • Approximate the target function by the weighted set of draws (particles): p(nt+1 | y1:t+1) ≈ Σi wt+1(i) δ(nt+1 – nt+1(i))
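A minimal numerical sketch of the idea (my own illustration; the target, proposal and sample size are arbitrary choices): estimate the mean of a Gamma target by drawing from a wider exponential proposal and reweighting.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    K = 100_000

    # Target p(): Gamma(shape=3, scale=2), which we pretend we cannot sample from.
    # Proposal q(): Exponential(mean=10), which covers the target's support.
    draws = rng.exponential(scale=10.0, size=K)                    # "prediction" step
    w = stats.gamma.pdf(draws, a=3, scale=2) / stats.expon.pdf(draws, scale=10.0)
    w /= w.sum()                                                   # "correction" step

    print(np.sum(w * draws))   # importance-sampling estimate of E_p[X]; true value is 6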

  31. Sequential importance sampling • SIS is just repeated application of importance sampling at each time step • Basic sequential importance sampling: • Proposal distribution q() = gt+1(nt+1 | nt) • Leads to weights wt+1(i) ∝ wt(i) ft+1(yt+1 | nt+1(i)) • To do basic SIS, need to be able to: • Simulate forward from the state process • Evaluate the observation process density (the likelihood)

  32. Basic SIS algorithm • Generate K “particles” from the prior on {n0, Θ} and give them equal weights 1/K • For each time period t = 1, ..., T • For each particle i = 1, ..., K • Prediction step: draw nt(i) from the state process density gt(nt | nt-1(i); Θ(i)) • Correction step: set wt(i) ∝ wt-1(i) ft(yt | nt(i); Θ(i)) and normalize the weights (a code sketch follows slide 33)

  33. Example of basic SIS • State-space model of exponential population growth • State model: the population nt grows stochastically at rate Θ • Observation model: the count yt is observed with error • Priors on n0 and Θ
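The exact densities from the slide are not reproduced in the transcript, so the sketch below assumes, purely for illustration, a Poisson state transition nt ~ Poisson(Θ nt-1), a Poisson observation yt ~ Poisson(nt), and rough priors; it runs the basic SIS algorithm of slide 32 on the two observations (12 and 14) used in the worked example.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    y = np.array([12, 14])      # the observations used on slides 34-35
    K = 10_000                  # number of particles (the slides use 10 for display)

    # Priors on the initial state and the growth rate (illustrative choices).
    n = rng.poisson(lam=12, size=K).astype(float)
    theta = rng.normal(loc=1.05, scale=0.1, size=K)
    w = np.full(K, 1.0 / K)

    for y_t in y:
        # Prediction step: simulate each particle forward through the state process.
        n = rng.poisson(np.clip(theta * n, 0, None)).astype(float)
        # Correction step: multiply the weights by the observation density, renormalize.
        w *= stats.poisson.pmf(y_t, mu=n)
        w /= w.sum()

    print("filtered mean of n_2:", np.sum(w * n))
    print("filtered mean of theta:", np.sum(w * theta))
    print("effective sample size:", 1.0 / np.sum(w ** 2))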

  34. Example of basic SIS, t = 1 (observation y1 = 12). Ten particles are drawn from the prior (n0, Θ0, w0 = 0.1 each), projected forward (Predict) to give the prior at t = 1 (n1, Θ0, w0), then reweighted (Correct) using f(y1 | n1) to give the posterior at t = 1 (n1, Θ1 = Θ0, w1):
  particle   n0    Θ0      w0    n1    f(y1|n1)   w1
  1          11    1.055   0.1   17    0.028      0.063
  2          12    1.107   0.1   18    0.012      0.034
  3          14    1.195   0.1   11    0.201      0.558
  4          13    0.974   0.1   15    0.073      0.202
  5          16    0.936   0.1   20    0.038      0.010
  6          16    1.029   0.1   17    0.029      0.063
  7          20    1.081   0.1   17    0.029      0.063
  8          14    1.201   0.1   7     0.000      0.000
  9          9     1.000   0.1   6     0.000      0.000
  10         16    0.958   0.1   22    0.012      0.003

  35. Example of basic SIS, t = 2 (observation y2 = 14). The posterior at t = 1 (n1, Θ1, w1) is projected forward (Predict) to give the prior at t = 2 (n2, Θ1, w1), then reweighted (Correct) using f(y2 | n2), so w2 ∝ w1 f(y2 | n2):
  particle   n1    Θ1      w1      n2    f(y2|n2)   w2
  1          17    1.055   0.063   15    0.160      0.105
  2          18    1.107   0.034   14    0.190      0.068
  3          11    1.195   0.558   12    0.112      0.691
  4          15    0.974   0.202   10    0.008      0.015
  5          20    0.936   0.010   11    0.046      0.005
  6          17    1.029   0.063   15    0.160      0.105
  7          17    1.081   0.063   21    0.011      0.007
  8          7     1.201   0.000   9     0.000      0.000
  9          6     1.000   0.000   11    0.046      0.000
  10         22    0.958   0.003   20    0.007      0.000

  36. Problem: particle depletion • Variance of the weights increases with time, until a few particles have almost all the weight • Results in large Monte Carlo error in the approximation • Can quantify with the effective sample size, ESS = 1 / Σi (wt(i))² • From the previous example: the ESS falls from 10 (equal weights) to about 2.7 after the first observation and about 2.0 after the second
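Those ESS figures can be checked directly from the weights in the tables above (a quick computation of mine):

    import numpy as np

    w1 = np.array([0.063, 0.034, 0.558, 0.202, 0.010, 0.063, 0.063, 0.000, 0.000, 0.003])
    w2 = np.array([0.105, 0.068, 0.691, 0.015, 0.005, 0.105, 0.007, 0.000, 0.000, 0.000])

    def ess(w):
        """Effective sample size: 1 over the sum of squared normalized weights."""
        w = w / w.sum()
        return 1.0 / np.sum(w ** 2)

    print(ess(w1), ess(w2))   # roughly 2.7 and 2.0, compared with 10 for equal weights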

  37. Problem: particle depletion • Worse when: • Observation error is small • Lots of data at any one time point • State process has little stochasticity • Priors are diffuse or not congruent with observations • State process model incorrect (e.g., time varying) • Outliers in the data

  38. Particle depletion: solutions • Pruning: throw out “bad” particles (rejection) • Enrichment: boost “good” particles (resampling) • Directed enrichment (auxiliary particle filter) • Mutation (kernel smoothing) • Other stuff • Better proposals • Better resampling schemes • …

  39. Rejection control • Idea: throw out particles with low weights • Basic algorithm, at time t: • Have a pre-determined threshold ct, where 0 < ct <= 1 • For i = 1, ..., K, accept particle i with probability min(1, wt(i) / ct) • If particle i is accepted, update its weight to max(wt(i), ct) • Now we have fewer than K samples • Can make up the full K samples by sampling from the priors, projecting forward to the current time point and repeating the rejection control (see the sketch below)
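A minimal sketch of a single rejection-control pass, assuming the min(1, w/c) acceptance rule and max(w, c) weight update above (replenishment from the prior is only indicated in a comment):

    import numpy as np

    rng = np.random.default_rng(3)

    def rejection_control(particles, w, c):
        """One rejection-control pass at a check point.
        particles: array of particle states/parameters; w: normalized weights;
        c: threshold (e.g. the mean weight). Returns survivors and their
        adjusted, renormalized weights."""
        accept = rng.random(len(w)) < np.minimum(1.0, w / c)   # accept with prob min(1, w_i/c)
        new_w = np.maximum(w[accept], c)                       # accepted weights become max(w_i, c)
        return particles[accept], new_w / new_w.sum()

    # Rejected particles would then be replaced by fresh draws from the priors,
    # projected forward to time t and passed through the same rejection-control step.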

  40. Rejection control - discussion • Particularly useful at t=1 with diffuse priors • Can have a sequence of control points (not necessarily every year) • Check points don’t need to be fixed – can trigger when variance of weights gets too high • Thresholds, ct, don’t need to be set in advance but can be set adaptively (e.g., mean of weights) • Instead of restarting at time t=0, can restart by sampling from particles at previous check point (= partial rejection control)

  41. Resampling: pruning and enrichment • Idea: allow “good” particles to amplify themselves while killing off “bad” particles • Algorithm, before and/or after each time step (not necessarily every time step): • For j = 1, ..., K • Sample independently from the set of particles according to resample probabilities αt(i) (next slide) • Assign each resampled particle a new weight proportional to wt(i) / αt(i) (so resampling on the weights themselves gives equal weights 1/K) • Reduces particle depletion of the states, as “children” particles with the same “parent” now evolve independently
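A minimal sketch of the most common special case, multinomial resampling on the weights themselves (residual and stratified resampling, mentioned on slide 46, are lower-variance alternatives):

    import numpy as np

    rng = np.random.default_rng(4)

    def multinomial_resample(particles, w):
        """Draw K particles with replacement, in proportion to their weights,
        and reset all weights to 1/K."""
        K = len(w)
        idx = rng.choice(K, size=K, replace=True, p=w / w.sum())
        return particles[idx], np.full(K, 1.0 / K)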

  42. Resample probabilities • Should be related to the weights, e.g. proportional to (wt(i))^α • α = 1 as in the bootstrap filter • α could vary according to the variance of the weights • α = ½ has been suggested • or probabilities related to the “future trend” – as in the auxiliary particle filter

  43. Directed resampling: auxiliary particle filter • Idea: pre-select particles likely to have high weights in the future • Example algorithm: • For j = 1, ..., K • Sample independently from the set of particles, with probabilities proportional to the current weight times the likelihood of the next observation at a point prediction of the next state • Predict: simulate the selected particle forward through the state process • Correct: reweight using the observation density, correcting for the pre-selection probabilities • If “future” observations are available, can extend to look >1 time step ahead – e.g., a protein folding application
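A sketch of one auxiliary-particle-filter step in the style of Pitt and Shephard, reusing the illustrative Poisson model from the SIS sketch above (so the transition mean Θn serves as the point prediction); this is an assumption-laden illustration, not the seal-model implementation:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)

    def apf_step(n, theta, w, y_next):
        """One auxiliary particle filter step for the toy model
        n_{t+1} ~ Poisson(theta * n_t), y ~ Poisson(n)."""
        K = len(w)
        mu = np.clip(theta * n, 0, None)                      # point prediction of n_{t+1}
        stage1 = w * stats.poisson.pmf(y_next, mu=mu)         # pre-selection probabilities
        stage1 /= stage1.sum()
        idx = rng.choice(K, size=K, replace=True, p=stage1)   # pick promising parents
        n_new = rng.poisson(np.clip(theta[idx] * n[idx], 0, None)).astype(float)
        # Second-stage weights correct for having pre-selected on the point predictions.
        w_new = stats.poisson.pmf(y_next, mu=n_new) / stats.poisson.pmf(y_next, mu=mu[idx])
        return n_new, theta[idx], w_new / w_new.sum()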

  44. Kernel smoothing: enrichment of parameters through mutation • Idea: introduce small “mutations” into parameter values when resampling • Algorithm: • Given particles {nt(i), Θt(i), wt(i)} • Let Vt be the variance matrix of the Θt(i) • For i = 1, ..., K • Sample a new Θt(i) from a normal kernel centred at the old value with variance h²Vt, where h controls the size of the perturbations • Variance of the parameters is now (1 + h²)Vt, so shrinkage is needed to preserve the first 2 moments
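A minimal sketch for a scalar parameter, using the Liu and West shrinkage factor a = sqrt(1 − h²) so that the weighted mean and variance are preserved after the jitter (an illustration, not the talk's own code):

    import numpy as np

    rng = np.random.default_rng(6)

    def kernel_smooth(theta, w, h=0.2):
        """Shrink parameter particles towards their weighted mean, then add
        Gaussian jitter with variance h^2 * V_t, leaving the first two
        weighted moments unchanged."""
        mean = np.sum(w * theta)
        var = np.sum(w * (theta - mean) ** 2)
        a = np.sqrt(1.0 - h ** 2)                 # shrinkage factor
        shrunk = a * theta + (1.0 - a) * mean
        return shrunk + rng.normal(scale=h * np.sqrt(var), size=len(theta))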

  45. Kernel smoothing - discussion • Previous algorithm does not preserve the relationship between parameters and states • Leads to poor smoothing inference • Possibly unreliable filtered inference? • Pragmatically – use as small a value of h as possible • Extensions: • Kernel smooth states as well as parameters • Local kernel smoothing

  46. Other “tricks” • Better proposals: • Rao-Blackwellization – the Gibbs sampling equivalent • Importance sampling (e.g., from priors) • Better resampling: • Residual resampling • Stratified resampling • Alternative “mutation” algorithms: • MCMC within SIS • Gradual focussing on the posterior: • Tempering/annealing • …

  47. 3. Results and discussion I’ll make you fit into my model!!!

  48. Example: Grey seal model • Used pup production estimates for 1984-2002 • Informative priors on parameters; 1984 data used with sampled parameters to provide priors for states • Algorithm: • Auxiliary particle filter with kernel smoothing (h=0.9) and rejection control (c=99th %ile) at the first time step • 400,000 particles • Took ~6hrs to run on a PC (in SPlus)

  49. Posterior parameter estimates

  50. Coplots
