
Likelihood Methods in Ecology

April 25–29, 2011

Granada, Spain

Instructors:

Dr. Charles Canham

Dr. Luis Cayuela

Daily Schedule
  • Morning
    • 9:00 – 10:00 Lecture
    • 10:00 – 10:45 Case Study and Discussion
    • 10:45 – 11:15 Break
    • 11:15 – 1:00 Lab
  • Lunch 1:00 – 3:00
  • Afternoon
    • 3:00 – 4:00 Lecture
    • 4:00 – 6:00 Lab
Course Outline: Statistical Inference Using Likelihood
  • Principles and practice of maximum likelihood estimation
  • Know your data – choosing appropriate likelihood functions
  • Formulate statistical models as alternate hypotheses
  • Find the ML estimates of the parameters of your models
  • Compare alternate models and choose the most parsimonious
  • Evaluate individual models
  • Advanced topics

Likelihood is much more than a statistical method...

(it can completely change the way you ask and answer questions…)

Lecture 1: An Introduction to Likelihood Estimation
  • Probability and probability density functions
  • Maximum likelihood estimates (versus traditional “method of moment” estimates)
  • Statistical inference
  • Classical “frequentist” statistics: limitations and mental gyrations...
  • The “likelihood” alternative: Basic principles and definitions
  • Model comparison as a generalization of hypothesis testing
A simple definition of probability for discrete events...

“...the ratio of the number of events of type A to the total number of all possible events (outcomes)...”

The enumeration of all possible outcomes is called the sample space (S).

If there are n possible outcomes in a sample space, S, and m of those are favorable for event A, then the probability of event A is given as

P{A} = m/n
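As a quick illustration of the m/n definition, a sketch in R (the die and event are my own example, not from the course):

```r
# Discrete probability as m/n: the chance of rolling an even
# number with a fair six-sided die.
S <- 1:6                      # sample space: all possible outcomes
A <- S[S %% 2 == 0]           # outcomes favorable for event A (even rolls)
p_A <- length(A) / length(S)  # P{A} = m/n
p_A
# [1] 0.5
```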

Probability defined more generally...
  • Consider an outcome X from some process that has a set of possible outcomes S:
    • If the outcomes are discrete, then P{X} is the number of outcomes equal to X divided by the total number of outcomes in S
    • If X is continuous, then the probability has to be defined in the limit:

P{x₁ ≤ X ≤ x₂} = ∫ g(x) dx (integrated from x₁ to x₂)

where g(x) is a probability density function (PDF)

The Normal Probability Density Function (PDF)

prob(x) = (1 / (σ√(2π))) exp(−(x − μ)² / (2σ²))

μ = mean

σ² = variance

  • Properties of a PDF:
  • (1) prob(x) ≥ 0
  • (2) ∫ prob(x) dx = 1
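Both properties can be checked numerically in R for the standard normal, using the built-in density function `dnorm` and numerical integration:

```r
# Property (1): the normal density is never negative.
x <- seq(-10, 10, by = 0.01)
stopifnot(all(dnorm(x) >= 0))

# Property (2): the density integrates to 1 over the whole real line.
total <- integrate(dnorm, -Inf, Inf)$value
total
# [1] 1
```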
Common PDFs...
  • For continuous data:
    • Normal
    • Lognormal
    • Gamma
  • For discrete data:
    • Poisson
    • Binomial
    • Multinomial
    • Negative Binomial

See McLaughlin (1993) “A compendium of common probability distributions” in the reading list
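Each of these distributions has a corresponding density function in R's `d*` family; the parameter values below are arbitrary examples chosen just to show the calls:

```r
# Densities for the common PDFs, evaluated at example points.
dnorm(1, mean = 0, sd = 1)            # Normal
dlnorm(1, meanlog = 0, sdlog = 1)     # Lognormal
dgamma(1, shape = 2, rate = 1)        # Gamma
dpois(3, lambda = 2)                  # Poisson
dbinom(16, size = 20, prob = 0.8)     # Binomial
dnbinom(3, size = 2, prob = 0.5)      # Negative binomial
dmultinom(c(2, 1, 1), prob = c(0.5, 0.3, 0.2))  # Multinomial
```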

Why are PDFs important?

Answer: because they are used to calculate likelihood…

(And in that case, they are called “likelihood functions”)

Statistical “Estimators”

A statistical estimator is a function applied to a sample of data and used to estimate an unknown population parameter.

(An “estimate” is just the result of applying an “estimator” to a particular sample.)

Properties of Estimators
  • Some desirable properties of “point estimators” (functions to estimate a fixed parameter)
    • Bias: if the average error is zero, the estimate is unbiased
    • Efficiency: an estimate with the minimum variance is the most efficient (note: the most efficient estimator is often biased)
    • Consistency: As sample size increases, the probability of the estimate being close to the parameter increases
    • Asymptotically normal: a consistent estimator whose distribution around the true parameter θ approaches a normal distribution with standard deviation shrinking in proportion to 1/√n as the sample size n grows

Maximum likelihood (ML) estimates


Method of moments (MOM) estimates

Bottom line:

MOM was born in the era before computers, and it was adequate;

ML needs computing power, but has more desirable properties…

What’s wrong with MOM’s way?
  • Nothing, if all you are interested in is calculating properties of your sample…
  • But MOM’s formulas are generally not the best way¹ to infer estimates of the statistical properties of the population from which the sample was drawn…

For example, the population variance: the MOM estimate is the second central moment, (1/n) Σ (xᵢ − x̄)², which is a biased underestimate of the population variance (the unbiased estimate divides by n − 1 instead of n).

¹ …in the formal terms of bias, efficiency, consistency, and asymptotic normality
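The bias is easy to see by simulation. A sketch (the population parameters and sample size here are my own assumptions, chosen to make the bias visible):

```r
# Draw many small samples from a known normal population and compare
# the divide-by-n (second central moment) and divide-by-(n-1) estimators.
set.seed(42)
n <- 5
true_var <- 4
samples <- replicate(20000, rnorm(n, mean = 0, sd = sqrt(true_var)))

# MOM-style estimate: the second central moment (divide by n)
mom_var <- apply(samples, 2, function(s) mean((s - mean(s))^2))
# Unbiased estimate: R's var() divides by n - 1
unb_var <- apply(samples, 2, var)

mean(mom_var)  # averages near true_var * (n - 1) / n = 3.2: biased low
mean(unb_var)  # averages near the true value, 4
```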

The Maximum Likelihood alternative…

Going back to PDFs: in plain language, a PDF allows you to calculate the probability that an observation will take on a value (x), given the underlying (true?) parameters of the population

But there’s a problem…
  • The PDF defines the probability of observing an outcome (x), given that you already know the true population parameter (θ)
  • But we want to generate an estimate of θ, given our data (x)
  • And, unfortunately, the two are not identical: P{x | θ} ≠ P{θ | x}
Fisher and the concept of “Likelihood”...

The “Likelihood Principle”

L(θ | x) ∝ P(x | θ)

In plain English: “The likelihood (L) of the parameter estimates (θ), given a sample (x), is proportional to the probability of observing the data, given the parameters...”

{and this probability is something we can calculate, using the appropriate underlying probability model (i.e. a PDF)}

R.A. Fisher (1890–1962)

“Likelihood and Probability in R. A. Fisher’s Statistical Methods for Research Workers” (John Aldrich)

A good summary of the evolution of Fisher’s ideas on probability, likelihood, and inference… Contains links to PDFs of Fisher’s early papers…

A second page shows the evolution of his ideas through changes in successive editions of Fisher’s books…

(Photo: Fisher at age 22)

Calculating Likelihood and Log-Likelihood for Datasets

From basic probability theory:

If two events (A and B) are independent, then P(A,B) = P(A)P(B)

More generally, for i = 1..n independent observations and a vector X of observations (xᵢ):

L(θ | X) = ∏ᵢ g(xᵢ | θ)

where g is the appropriate PDF.

But logarithms are easier to work with, so...

ln L(θ | X) = Σᵢ ln g(xᵢ | θ)

A simple example…

A sample of 10 observations…

Assume they are normally distributed, with an unknown population mean and standard deviation.

What is the (log) likelihood that the mean is 4.5 and the standard deviation is 1.2?
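In R, this is one line: sum the log densities across the sample. The ten values below are hypothetical (the slide's actual data are not reproduced in this transcript):

```r
# Hypothetical sample of 10 observations, assumed normally distributed.
obs <- c(4.1, 5.2, 3.8, 4.9, 5.5, 4.4, 3.9, 5.0, 4.7, 4.3)

# Log-likelihood that mu = 4.5 and sigma = 1.2:
# the sum of the log densities of each observation.
logL <- sum(dnorm(obs, mean = 4.5, sd = 1.2, log = TRUE))
logL
```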

Likelihood “Surfaces”

The variation in likelihood across a set of parameter values defines a likelihood “surface”...

For a model with just 1 parameter, the surface is simply a curve:

(aka a “likelihood profile”)

“Support” and “Support Limits”

Log-likelihood = “Support” (Edwards 1992)
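A common convention (assumed here, not stated on this slide) takes the "support limits" as the parameter values whose support falls within 2 log-likelihood units of the maximum. A sketch, using the single-quadrat binomial example that appears later (16 of 20 seedlings surviving):

```r
# Support (log-likelihood) profile for the survival probability p.
p <- seq(0.001, 0.999, by = 0.001)
support <- dbinom(16, size = 20, prob = p, log = TRUE)

mle <- p[which.max(support)]                    # ML estimate: 16/20 = 0.8
limits <- range(p[support > max(support) - 2])  # 2-unit support interval
mle
limits
```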

Another (still somewhat trivial) example…
  • MOM vs ML estimates of the probability of survival for a population:
    • Data: a quadrat in which 16 of 20 seedlings survived during a census interval. (Note that in this case, the quadrat is the unit of observation…, so sample size = 1)

i.e. Given N=20, x = 16, what is p?

x <- seq(0, 1, 0.005)    # candidate values of p

y <- dbinom(16, 20, x)   # likelihood of each p, given x = 16 of N = 20

x[which.max(y)]          # ML estimate of p: 16/20 = 0.8

A more realistic example

# Create some data (5 quadrats)

N <- c(11, 14, 8, 22, 50)   # seedlings censused in each quadrat

x <- c(8, 7, 5, 17, 35)     # seedlings that survived

# Calculate the log-likelihood for each candidate

# probability of survival

p <- seq(0, 1, 0.005)

log_likelihood <- rep(0, length(p))

for (i in 1:length(p))

{ log_likelihood[i] <- sum(dbinom(x, N, p[i], log = TRUE)) }

# Plot the likelihood profile

plot(p, log_likelihood, type = "l", xlab = "Probability of survival", ylab = "Log-likelihood")

# What probability of survival maximizes the log-likelihood?

p[which.max(log_likelihood)]   # the grid value nearest the pooled MLE, 72/105 ≈ 0.686

# How does this compare to the average across the 5 quadrats?

mean(x / N)

Focus in on the MLE…

# What is the log-likelihood at the MLE?

max(log_likelihood)

[1] -9.46812

  • Things to note about log-likelihoods:
  • For discrete data they should always be negative, since probabilities cannot exceed 1 (if not, you have a problem with your likelihood function)
  • The absolute magnitude of the log-likelihood increases as sample size increases
An example with continuous data…

The normal PDF:

prob(x) = (1 / (σ√(2π))) exp(−(x − μ)² / (2σ²))

x = observed value

μ = mean

σ² = variance

In R:

dnorm(x, mean = 0, sd = 1, log = FALSE)

> dnorm(2,2.5,1)

[1] 0.3520653

> dnorm(2,2.5,1,log=T)

[1] -1.043939


Problem: Now there are TWO unknowns needed to calculate likelihood (the mean and the variance)!

Solution: treat the variance just like another parameter in the model, and find the ML estimate of the variance just like you would any other parameter…

(this is exactly what you’ll do in the lab this morning…)
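A minimal sketch of that approach, assuming hypothetical data (this is one standard way to do it in R, not necessarily the exact method used in the lab): treat σ as a second parameter and let `optim()` search for the pair that maximizes the log-likelihood.

```r
# Hypothetical sample, assumed normal with unknown mean and sd.
obs <- c(4.1, 5.2, 3.8, 4.9, 5.5, 4.4, 3.9, 5.0, 4.7, 4.3)

# Negative log-likelihood as a function of both parameters;
# optim() minimizes, so we negate the sum of log densities.
negLL <- function(par) -sum(dnorm(obs, mean = par[1], sd = par[2], log = TRUE))

# Start the search from the sample mean and sd.
fit <- optim(par = c(mean(obs), sd(obs)), fn = negLL)
fit$par   # joint ML estimates: c(mean, sd)
```

Note that the ML estimate of the standard deviation divides the sum of squares by n rather than n − 1, so it comes out slightly smaller than `sd(obs)`.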