
Chapter 3: Part c – Parameter Estimation


Presentation Transcript


  1. Chapter 3: Part c – Parameter Estimation
  • We will be discussing:
  • Nonlinear Parameter Estimation
  • Maximum Likelihood Parameter Estimation
  • (These topics are needed for Chapters 9, 12, 14, and 15)

  2. Why Do We Need Nonlinear Parameter Estimation? With the linear model, y = Xβ + e, we end up with a closed-form, algebraic solution. Sometimes there is no algebraic solution for the unknowns in a marketing model. Suppose the data depend in a nonlinear way on an unknown parameter θ, let's say y = f(θ) + e. To minimize e′e, we need to find the spot at which de′e/dθ = 0. But if there is no way to get θ by itself on one side of an equation and stuff that we know on the other, we have to search for the solution numerically.
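To see the problem concretely, here is a minimal Python sketch (not from the slides) using a hypothetical model y = exp(θx) + e and made-up data; because θ sits inside the exponential, setting de′e/dθ = 0 has no algebraic solution:

```python
import numpy as np

# Hypothetical nonlinear model y = exp(theta * x) + e with made-up data;
# the chapter's f() is left unspecified, so this is only an illustration.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([1.4, 1.9, 2.8, 4.1])

def sse(theta):
    """The least-squares objective e'e at a candidate theta."""
    e = y - np.exp(theta * x)
    return e @ e

# d(e'e)/d(theta) = -2 * sum(e_i * x_i * exp(theta * x_i)): theta is
# trapped inside exp(), so the equation "derivative = 0" has no
# algebraic solution and must be solved by numerical search.
print(sse(0.30), sse(0.35))  # comparing candidate values by hand
```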

  3. Steps to the Algorithm of Nonlinear Estimation
  • We take a stab at the unknown, inventing a starting value for it.
  • We assess the derivative of the objective function at the current value of θ. If the derivative is not zero, we modify θ by moving it in the direction in which the derivative gets closer to 0.
  • We keep repeating this step until the derivative arrives at zero (a code sketch of the loop follows below).
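A minimal sketch of that loop in Python, reusing the hypothetical exp(θx) model from the previous sketch; the derivative width, step size, tolerance, and starting value are all illustrative choices, not values from the chapter:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])   # same made-up model as the
y = np.array([1.4, 1.9, 2.8, 4.1])   # previous sketch

def sse(theta):
    e = y - np.exp(theta * x)
    return e @ e

def estimate(objective, theta, step=0.001, tol=1e-8, max_iter=10_000):
    """Start from a guess, check the derivative, nudge theta, repeat."""
    h = 1e-6  # width for the central-difference derivative
    for _ in range(max_iter):
        deriv = (objective(theta + h) - objective(theta - h)) / (2 * h)
        if abs(deriv) < tol:
            break  # the derivative has (effectively) arrived at zero
        theta -= step * deriv  # positive derivative -> move left, and vice versa
    return theta

print(estimate(sse, theta=0.5))  # 0.5 is an arbitrary starting value
```

With a fixed step size this is plain gradient descent; production routines (Gauss-Newton, BFGS) choose the step adaptively, but the derivative-following idea is the same.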

  4. f 2 1  A Picture of Nonlinear Estimation If the derivative is positive, we should move to the left (go more negative) If the derivative is negative, we should move to the right (go more positive) This suggests the rule:

  5. A Brief Introduction to Maximum Likelihood
  • ML is an alternative philosophy to least squares.
  • If ML estimators exist, they will be consistent.
  • If ML estimators exist, they will be asymptotically normally distributed.
  • If ML estimators exist, they will be asymptotically efficient.
  • ML leads to a chi-square test of the model.
  • The covariance matrix for ML estimators can be calculated from the second-order derivatives of the log-likelihood (a sketch follows below).
  • Marketing scientists really like ML estimators.
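The covariance bullet can be checked numerically. A minimal sketch, assuming a normal log-likelihood with known σ = 1 and made-up data, where minus the inverse of the second derivative at the estimate should reproduce the known answer σ²/n:

```python
import numpy as np

# Made-up data; sigma is assumed known (= 1) to keep the example scalar.
x = np.array([2.0, 3.0, 7.0])
sigma = 1.0

def log_lik(mu):
    """Normal log-likelihood of the sample at mean mu."""
    return np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                  - (x - mu)**2 / (2 * sigma**2))

mu_hat = x.mean()  # the ML estimate of a normal mean is the sample mean

# Numerical second derivative of the log-likelihood at the estimate;
# minus its inverse approximates Var(mu_hat).
h = 1e-4
d2 = (log_lik(mu_hat + h) - 2 * log_lik(mu_hat) + log_lik(mu_hat - h)) / h**2
print(-1.0 / d2)  # ~0.3333, matching the textbook sigma**2 / n
```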

  6. The Likelihood Principle
  We wish to maximize the probability of the data given the model. We will start with the example of estimating the population mean, μ.
  [Figure: a probability density Pr(x), on a vertical axis from 0 to 1.0, plotted against x and centered at μ]
  Assume we draw a sample of 3 values: x1 = 4, x2 = 5, and x3 = 6.

  7. The Likelihood of the Sample
  What would be the likelihood of observing x1, x2, and x3 given that μ = 212? How about if μ = 5? With ML we choose an estimate for μ that maximizes the likelihood of the sample. The sample that we observed was presumably more likely, on average, than the samples that we did not observe. We should make its probability as large as possible.
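A quick numerical check of this comparison; the slides do not fix the density, so treating the xi as independent normal draws with σ = 1 is an assumption made here for illustration:

```python
import numpy as np
from math import prod

def normal_pdf(x, mu, sigma=1.0):
    """Normal density; sigma = 1 is an assumption for illustration."""
    return np.exp(-(x - mu)**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)

sample = [4.0, 5.0, 6.0]

def likelihood(mu):
    # Independent draws, so the likelihood is the product of the densities.
    return prod(normal_pdf(x, mu) for x in sample)

print(likelihood(212.0))  # underflows to 0.0 -- astronomically unlikely
print(likelihood(5.0))    # ~0.023; mu = 5 (the sample mean) is the maximizer
```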

  8. Steps to ML Estimation
  • Derive the probability of an observation given the parameters, Pr(yi | θ).
  • Derive the likelihood of the sample, which typically involves multiplication when we assume independent sampling: L(θ) = Pr(y1 | θ) × Pr(y2 | θ) × … × Pr(yn | θ).
  • Derive the likelihood of the sample under the general alternative that the data are arbitrary, Lalt.
  • Pick the elements of the unknown parameter vector θ so that the chi-square quantity −2 ln[L(θ) / Lalt] is as small as possible, which is the same as making L(θ) as large as possible. A sketch of these steps follows below.
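A minimal end-to-end sketch of steps 1, 2, and 4 for the running normal-mean example (step 3, the general alternative, is omitted); σ = 1 and the use of scipy's scalar minimizer are assumptions, not the chapter's prescription:

```python
import numpy as np
from scipy.optimize import minimize_scalar

sample = np.array([4.0, 5.0, 6.0])

# Step 1: log of Pr(yi | theta) for one observation (normal, sigma = 1 assumed)
def log_pr(y, mu):
    return -0.5 * np.log(2 * np.pi) - 0.5 * (y - mu)**2

# Step 2: independent sampling turns the product of probabilities into a
# sum of logs; we negate it so that "maximize" fits a minimizing optimizer
def neg_log_lik(mu):
    return -np.sum(log_pr(sample, mu))

# Step 4: choose mu to make the likelihood of the sample as large as possible
result = minimize_scalar(neg_log_lik)
print(result.x)  # ~5.0, the sample mean
```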
