More on Model Building and Selection (Observation and process error; simulation testing and diagnostics)

Fish 458, Lecture 15


Observation and Process Error

  • Reminder:

    • Process uncertainty impacts the dynamics of the population (e.g. recruitment variability, natural mortality variability, birth-death processes).

    • Observation uncertainty impacts how we observe the population (e.g. CVs for abundance estimates).


Observation and Process Error (the Dynamic Schaefer model)

  • Let us generalize the Schaefer model to allow for both observation and process error:

    $B_{t+1} = \left[B_t + r B_t\left(1 - B_t/K\right) - C_t\right]e^{w_t}, \qquad w_t \sim N(0, \sigma_w^2)$

    $I_t = q B_t e^{v_t}, \qquad v_t \sim N(0, \sigma_v^2)$

    where $B_t$ is the biomass at the start of year $t$, $C_t$ is the catch, and $I_t$ is the abundance index (e.g. CPUE).

    • $\sigma_w$ determines the extent of process error, and

    • $\sigma_v$ determines the extent of observation error.

  • Often we assume that one of the two types of error dominates and hence set the other to zero. (A simulation sketch of the full model follows below.)
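A minimal Python sketch of these dynamics, using made-up parameter values purely for illustration (the function name, arguments and numbers below are not from the lecture):

```python
import numpy as np

def simulate_schaefer(r, K, q, catches, sigma_w=0.1, sigma_v=0.2, B0=None, seed=0):
    """Simulate the dynamic Schaefer model with multiplicative
    process error (w) and observation error (v)."""
    rng = np.random.default_rng(seed)
    n = len(catches)
    B = np.empty(n + 1)
    B[0] = K if B0 is None else B0        # start at carrying capacity by default
    I = np.empty(n)                       # observed abundance index (e.g. CPUE)
    for t in range(n):
        v = rng.normal(0.0, sigma_v)      # observation error
        w = rng.normal(0.0, sigma_w)      # process error
        I[t] = q * B[t] * np.exp(v)
        surplus = r * B[t] * (1.0 - B[t] / K)
        B[t + 1] = max((B[t] + surplus - catches[t]) * np.exp(w), 1e-6)
    return B, I

# Example: 25 years of a constant catch of 200 from a stock with K = 3000
biomass, index = simulate_schaefer(r=0.4, K=3000.0, q=1e-4, catches=np.full(25, 200.0))
```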


Process error only - I

  • We assume here that $w_t \sim N(0, \sigma_w^2)$ and continue under the assumption that $v = 0$. Under this assumption the index measures biomass exactly ($B_t = I_t/q$), and the process-error residual for year $t$ is:

    $w_t = \ln B_{t+1} - \ln\left[B_t + r B_t\left(1 - B_t/K\right) - C_t\right]$

  • If we assume that $w$ is normally distributed, the likelihood function becomes (see the sketch below):

    $L = \prod_t \frac{1}{\sqrt{2\pi}\,\sigma_w}\exp\left(-\frac{w_t^2}{2\sigma_w^2}\right)$
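A sketch of the corresponding negative log-likelihood (additive constants dropped), assuming a complete index series, a catchability parameter q, and that sigma_w is estimated; all names are illustrative rather than taken from the lecture:

```python
import numpy as np

def process_error_nll(params, index, catches):
    """Negative log-likelihood for the process-error-only estimator.
    With v = 0 the index gives biomass directly (B_t = I_t / q), and the
    residual for each year is the process error w_t needed to reach the
    next year's observed biomass."""
    r, K, q, sigma_w = params
    B = np.asarray(index) / q                           # "observed" biomass
    pred = B[:-1] + r * B[:-1] * (1.0 - B[:-1] / K) - np.asarray(catches)[:-1]
    if np.any(pred <= 0) or sigma_w <= 0:
        return 1e10                                     # penalize infeasible parameters
    w = np.log(B[1:]) - np.log(pred)                    # process-error residuals
    return len(w) * np.log(sigma_w) + np.sum(w**2) / (2.0 * sigma_w**2)
```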


Process error only - II

  • Issues to consider:

    • A process error estimator can only estimate biomass for years for which index information is available.

    • The choice of where to place the process error term in the dynamics equation is arbitrary (what “process” is really being modeled?)

    • You need a continuous time-series of data to compute all the residuals.

    • What do you do if there are two series of abundance estimates?


Observation error only - I

  • We assume here that $v_t \sim N(0, \sigma_v^2)$ and continue under the assumption that $w = 0$. Under this assumption the biomass trajectory is deterministic given $r$, $K$ and the initial biomass:

    $B_{t+1} = B_t + r B_t\left(1 - B_t/K\right) - C_t$

    and the observation-error residual for year $t$ is $v_t = \ln I_t - \ln(q B_t)$.

  • If we assume that $v$ is normally distributed, the likelihood function becomes (see the sketch below):

    $L = \prod_t \frac{1}{\sqrt{2\pi}\,\sigma_v}\exp\left(-\frac{v_t^2}{2\sigma_v^2}\right)$
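A matching sketch for the observation-error-only negative log-likelihood (again with constants dropped and illustrative names); note the extra parameter for the initial biomass:

```python
import numpy as np

def observation_error_nll(params, index, catches):
    """Negative log-likelihood for the observation-error-only estimator.
    With w = 0 the biomass trajectory is deterministic given (r, K, B1),
    and all lack of fit is assigned to the index observations."""
    r, K, q, sigma_v, B1 = params
    n = len(catches)
    B = np.empty(n)
    B[0] = B1                                  # often fixed at K rather than estimated
    for t in range(n - 1):
        B[t + 1] = B[t] + r * B[t] * (1.0 - B[t] / K) - catches[t]
        if B[t + 1] <= 0:
            return 1e10                        # penalize trajectories that crash
    if sigma_v <= 0:
        return 1e10
    v = np.log(np.asarray(index)) - np.log(q * B)   # observation-error residuals
    return n * np.log(sigma_v) + np.sum(v**2) / (2.0 * sigma_v**2)
```

Either function could be handed to a general-purpose minimizer such as scipy.optimize.minimize; with this formulation, years with missing index values would simply contribute no residual, which is one reason the observation-error estimator does not need a continuous data series.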


Observation error only - II

  • Issues to consider:

    • An observation error estimator can estimate biomass for all years.

    • The choice of where to place the observation error term is fairly easy.

    • There is no need for a continuous time-series of data, and multiple series of abundance estimates can be handled straightforwardly!

    • There is a need to estimate an additional parameter (the initial biomass - often we assume that $B_1 = K$).


Comparing approaches (Cape Hake)

Note that we can't compare these models directly because the likelihood functions are different. So what can we say about these two analyses?


Comparing approaches (Residuals)

The residuals about the fit of the process error estimator seem more correlated. Formally, a runs test could be conducted (a sketch of such a test follows below).

Perhaps also plot the residuals against the predicted values and look for a lack of normality.
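A sketch of a simple runs test on the signs of the residuals (a Wald-Wolfowitz-style test; this particular implementation is an assumption, not necessarily the one used in the lecture):

```python
import numpy as np
from scipy.stats import norm

def runs_test(residuals):
    """Runs test on the signs of the residuals: too few runs suggests
    positive autocorrelation.  Assumes both signs are present."""
    r = np.asarray(residuals)
    signs = np.sign(r)[np.sign(r) != 0]
    n_pos = int(np.sum(signs > 0))
    n_neg = int(np.sum(signs < 0))
    runs = 1 + int(np.sum(signs[1:] != signs[:-1]))
    mu = 2.0 * n_pos * n_neg / (n_pos + n_neg) + 1.0    # expected number of runs
    var = (mu - 1.0) * (mu - 2.0) / (n_pos + n_neg - 1.0)
    z = (runs - mu) / np.sqrt(var)
    return z, 2.0 * norm.sf(abs(z))                     # two-sided p-value
```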


Comparing approaches (Retrospective analyses)

We re-run the analysis several times, each time leaving the last few CPUE data points out of the analysis - there seems to be a pattern here! (A sketch of this procedure follows below.)

Analyses along these lines could have saved northern cod!
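A small sketch of this procedure; `fit_model` stands in for whichever estimator is being examined and is assumed to return the quantity of interest (e.g. current depletion):

```python
def retrospective(fit_model, index, catches, max_drop=5):
    """Refit the model repeatedly, each time dropping one more of the
    most recent data points, and collect the resulting estimates so a
    retrospective pattern can be looked for."""
    estimates = []
    for k in range(max_drop + 1):
        n = len(index) - k
        estimates.append(fit_model(index[:n], catches[:n]))
    return estimates
```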


Simulation testing

  1. Fit the model to the data.

  2. Run the model forward with process error.

  3. Add some observation noise to the predicted CPUE.

  4. Fit the observation and process error models.

  5. Compare the estimates from the observation and the process error estimators with the true values.

  6. Repeat steps 2-5 many times.

  (A sketch of this loop is given below.)
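A sketch of steps 2-6 in Python, assuming the operating-model parameters (r, K, q and the error standard deviations) come from the step-1 fit and that `fit_estimator` returns an estimate of current depletion; all names are illustrative:

```python
import numpy as np

def simulation_test(fit_estimator, r, K, q, catches,
                    sigma_w=0.1, sigma_v=0.2, n_trials=200, seed=1):
    """Generate data from a 'true' model with process error, add
    observation noise to the predicted CPUE, fit the estimator, and
    record the error in estimated current depletion (B_final / K)."""
    rng = np.random.default_rng(seed)
    n = len(catches)
    errors = []
    for _ in range(n_trials):
        # Steps 2-3: project forward with process error, observe with noise
        B = np.empty(n + 1)
        B[0] = K
        I = np.empty(n)
        for t in range(n):
            I[t] = q * B[t] * np.exp(rng.normal(0.0, sigma_v))
            surplus = r * B[t] * (1.0 - B[t] / K)
            B[t + 1] = max((B[t] + surplus - catches[t]) * np.exp(rng.normal(0.0, sigma_w)), 1e-6)
        # Steps 4-5: fit the estimator and compare with the true depletion
        errors.append(fit_estimator(I, catches) - B[-1] / K)
    return np.asarray(errors)
```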


Simulation testing (Cape hake)

  • The simulation testing framework was applied assuming a process error variation of 0.1 and an observation error variation of 0.2.

  • The results were summarized by the distribution of the difference between the true and estimated current depletion (the ratio of current biomass to K); a sketch of such a summary follows this list.

  • For this scenario, the observation error estimator is both more precise and less biased. Unless there is good evidence for high process error variability (there isn't for Cape hake), we would therefore prefer the observation error estimator.
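A small sketch of how the errors from the simulation loop above might be summarized; the particular statistics shown (mean, standard deviation, central 90% interval) are generic choices, not necessarily those used in the lecture:

```python
import numpy as np

def summarize(errors):
    """Summarize simulation-test errors: bias is the average error,
    while the spread of the errors reflects precision."""
    errors = np.asarray(errors)
    return {
        "bias": float(np.mean(errors)),
        "sd": float(np.std(errors, ddof=1)),
        "90% interval": np.percentile(errors, [5, 95]),
    }
```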


Simulation testing (Cape hake)

Bias (the average error) isn't zero.


Simulation Testing - Recap

  • The reasons for using simulation testing include:

    • we know the correct answer for each generated data set – this is not the case in the real world; and

    • there is no restriction on the types of models that can be compared (e.g. they need not use the same data).

  • The results of simulation testing depend, of course, on the model assumed for the true situation.


Readings

  • Haddon (2001), Chapter 10.

  • Hilborn and Mangel (1997), Chapter 7.

  • Polacheck et al. (1993).

