
Point estimation, interval estimation



- Point estimation
- Desirable properties of point estimators
- Interval estimations
- Confidence intervals

Assume that we have a sample (x1, x2, ..., xn) from a given population. All parameters of the population are known except some parameter θ. We want to determine the unknown parameter θ from the given observations. In other words, we want to find a number, or a range of numbers, derived from the observations that can be taken as the value of θ.

Estimator – a method of estimation.

Estimate – the result of applying an estimator.

Point estimation – as the name suggests, the estimation of a population parameter by a single number.

The problem of statistics is not to find estimates but to find estimators. An estimator is not rejected because it gives one bad result for one sample; it is rejected when it gives bad results in the long run, i.e. for many, many samples. An estimator is accepted or rejected depending on its sampling properties: it is judged by the properties of the distribution of the estimates it gives rise to.

Since an estimator gives rise to an estimate that depends on the sample points (x1, x2, ..., xn), the estimate is a function of the sample points. The sample points are random variables, therefore the estimate is a random variable and has a probability distribution. We want an estimator to have several desirable properties, such as:

- Consistency
- Unbiasedness
- Minimum variance
In general it is not possible for an estimator to have all these properties.

Note that an estimator is a sample statistic, i.e. a function of the sample elements.

For many estimators the variance of the sampling distribution decreases as the sample size increases. We would like the estimator to stay as close as possible to the parameter it estimates as the sample size increases.

We want to estimate θ, and tn is an estimator. If tn tends to θ in probability as n increases, then the estimator is called consistent. I.e. for any given ε > 0 and δ > 0 there is an integer number n0 so that for all samples of size n > n0 the following condition is satisfied:

P(|tn − θ| < ε) > 1 − δ

The property of consistency is a limiting property. It does not require any behaviour of the estimator for a finite sample size.
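As an illustrative sketch (the population and the numbers here are our own assumptions, not from the slides), the consistency of the sample mean can be seen in a short simulation: the fraction of samples whose mean lands within ε of the population mean approaches 1 as n grows.

```python
import random
import statistics

# Sketch: the sample mean as a consistent estimator. The probability that it
# lands within epsilon of the population mean mu approaches 1 as n grows.
# Normal population with mu = 5, sigma = 2 is an assumption for illustration.
random.seed(0)
mu, sigma = 5.0, 2.0
epsilon = 0.1

def hit_rate(n, trials=500):
    """Fraction of samples whose mean lands within epsilon of mu."""
    hits = 0
    for _ in range(trials):
        sample = [random.gauss(mu, sigma) for _ in range(n)]
        if abs(statistics.fmean(sample) - mu) < epsilon:
            hits += 1
    return hits / trials

rates = {n: hit_rate(n) for n in (10, 100, 1000, 5000)}
```

For these settings the hit rate climbs from roughly 0.1 at n = 10 towards 1 at n = 5000, matching the limiting nature of the definition.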

If there is one consistent estimator then you can construct infinitely many others. For example, if tn is consistent then (n/(n−1))·tn is also consistent.

Example: (1/n)Σxi and (1/(n−1))Σxi are both consistent estimators of the population mean.

If an estimator tn estimates θ then the difference between them, tn − θ, is called the estimation error. The bias of the estimator is defined as the expectation value of this difference:

B = E(tn − θ) = E(tn) − θ

If the bias is equal to zero then the estimator is called unbiased. For example, the sample mean is an unbiased estimator:

E(x̄) = E((1/n)Σxi) = (1/n)ΣE(xi) = (1/n)·nμ = μ

Here we used the fact that expectation and summation can be interchanged (remember that expectation is integration for continuous random variables and summation for discrete ones) and that the expectation of each sample point is equal to the population mean.

Knowledge of the population distribution was not necessary to derive the unbiasedness of the sample mean. This fact is true for samples taken from a population with any distribution for which the first moment exists.
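The distribution-free nature of this fact can be checked by simulation; as an assumed example (not from the slides) we take an exponential population, whose first moment exists, and average the sample mean over many repeated samples.

```python
import random
import statistics

# Sketch: the sample mean is unbiased for any population with a finite first
# moment. Here the population is exponential with rate 0.5 (our own choice),
# so the true population mean is 1/0.5 = 2.0.
random.seed(1)
lam = 0.5
n = 20
trials = 20000

means = []
for _ in range(trials):
    sample = [random.expovariate(lam) for _ in range(n)]
    means.append(statistics.fmean(sample))

avg_of_means = statistics.fmean(means)   # close to the true mean 2.0
```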

Given a sample of size n from a population with unknown mean (μ) and variance (σ²), we estimate the mean as we already know, and the variance (intuitively) as:

tn = (1/n)Σ(xi − x̄)²

What is the bias of this estimator? We could derive the distribution of tn and then use it to find its expectation value. If the population has a normal distribution, then (n/σ²)tn has a χ² distribution with n−1 degrees of freedom. Let us use a direct approach instead:

E(tn) = E((1/n)Σ(xi − x̄)²) = ((n−1)/n)σ²

So the divide-by-n sample variance is not an unbiased estimator of the population variance. That is why, when the mean and variance are both unknown, the following equation is used for the sample variance:

s² = (1/(n−1))Σ(xi − x̄)²
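A quick Monte Carlo check (the normal population and its parameters are our own assumptions) shows the divide-by-n estimator settling on (n−1)/n times the true variance, while the divide-by-(n−1) version is unbiased:

```python
import random

# Sketch: dividing the sum of squared deviations by n underestimates sigma^2
# by the factor (n-1)/n, while dividing by n-1 is unbiased.
# Normal population with sigma^2 = 4 assumed for illustration.
random.seed(2)
sigma = 2.0          # population variance sigma^2 = 4.0
n, trials = 5, 50000

biased, unbiased = [], []
for _ in range(trials):
    xs = [random.gauss(0.0, sigma) for _ in range(n)]
    xbar = sum(xs) / n
    ss = sum((x - xbar) ** 2 for x in xs)
    biased.append(ss / n)          # expectation (n-1)/n * 4.0 = 3.2
    unbiased.append(ss / (n - 1))  # expectation 4.0

mean_biased = sum(biased) / trials
mean_unbiased = sum(unbiased) / trials
```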

The expectation value of the square of the difference between the estimator and its expectation is called its variance:

Var(tn) = E((tn − E(tn))²)

Exercise: What is the variance of the sample mean?

As we noted, if the estimator for θ is tn then the difference between them is the error of the estimation. The expectation value of this error is the bias. The expectation value of the square of this error is called the mean square error (m.s.e.):

m.s.e. = E((tn − θ)²)

It can be expressed through the bias and the variance of the estimator:

m.s.e. = E((tn − θ)²) = Var(tn) + B²

The m.s.e. is equal to the square of the estimator's bias plus the variance of the estimator. If the bias is 0 then the m.s.e. is equal to the variance. In estimation there is usually a trade-off between unbiasedness and minimum variance. In an ideal world we would like to have a minimum-variance unbiased estimator; it is not always possible.
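The decomposition can be verified numerically for the divide-by-n variance estimator (the standard normal population here is an assumption for illustration):

```python
import random

# Numerical sketch of m.s.e. = bias^2 + variance for the divide-by-n variance
# estimator t_n. Standard normal population assumed, so sigma^2 = 1 and the
# theoretical bias is -sigma^2/n = -0.1 for n = 10.
random.seed(3)
sigma2 = 1.0
n, trials = 10, 100000

estimates = []
for _ in range(trials):
    xs = [random.gauss(0.0, 1.0) for _ in range(n)]
    xbar = sum(xs) / n
    estimates.append(sum((x - xbar) ** 2 for x in xs) / n)

mean_t = sum(estimates) / trials
mse = sum((t - sigma2) ** 2 for t in estimates) / trials
bias = mean_t - sigma2
variance = sum((t - mean_t) ** 2 for t in estimates) / trials
# identity: mse equals bias**2 + variance (up to floating-point rounding)
```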

One estimator is the plug-in estimator. It has only an intuitive basis. If the parameter we want to estimate is expressed as θ = t(F), then the estimator is taken as θ̂ = t(F̂), where F is the population distribution and F̂ is its sample equivalent (the empirical distribution).

Example: the population mean is calculated as:

μ = ∫ x f(x) dx

Since the sample is from the population with density f(x), the sample mean is the plug-in estimator for the population mean.

Exercise: What is the plug-in estimator for the population variance? What is the plug-in estimator for the covariance? Hint: the population variance and covariance are calculated as:

σ² = ∫ (x − μ)² f(x) dx,  cov(x, y) = ∫∫ (x − μx)(y − μy) f(x, y) dx dy

Replace the integration with summation and divide by the number of elements in the sample. Since the sample was drawn from the population with the given distribution, it is not necessary to multiply by f(x).
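In code the recipe looks as follows; the function names are our own, not from the slides:

```python
# Sketch of plug-in estimators: the integral over the population density is
# replaced by an average over the n sample points (divide by n, not n-1).
def plugin_variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def plugin_covariance(xs, ys):
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
```

For example, plugin_variance([1, 2, 3, 4]) gives 1.25, which is exactly the divide-by-n (biased) sample variance discussed above.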

Another well-known and popular estimator is the least-squares estimator. If we have a sample and we think (because of some knowledge we had before) that all parameters of interest enter through the mean value of the population, then the least-squares method estimates them by minimising the weighted sum of squared differences between the observations and the mean value:

S = Σ wi (xi − μ)²

Exercise: Verify that if the only unknown parameter is the mean of the population and all the wi are equal to each other, then the least-squares estimator results in the sample mean.
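The claim in the exercise can be checked numerically; this sketch uses a brute-force grid search over μ (the data values and function name are our own assumptions, chosen purely for illustration):

```python
# Sketch: with equal weights w_i, the criterion S(mu) = sum(w_i * (x_i - mu)^2)
# is minimised by the sample mean. We check this with a grid search rather
# than calculus, just to illustrate the claim.
def ls_estimate(xs, ws, step=1e-4):
    lo, hi = min(xs), max(xs)
    grid = [lo + i * step for i in range(int((hi - lo) / step) + 1)]
    return min(grid, key=lambda mu: sum(w * (x - mu) ** 2 for w, x in zip(ws, xs)))

xs = [1.0, 2.0, 6.0]
est = ls_estimate(xs, [1.0, 1.0, 1.0])   # close to the sample mean 3.0
```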

Estimating the parameter alone is not sufficient. It is necessary to analyse how confident we can be about this particular estimate. One way of doing this is to define confidence intervals. If we have estimated θ, we want to know if the "true" parameter is close to our estimate. In other words, we want to find an interval that satisfies the following relation:

P(GL ≤ θ ≤ GU) ≥ 1 − α

I.e. the probability that the "true" parameter is in the interval (GL, GU) is at least 1 − α. An actual realisation of this interval, (gL, gU), is called a 100(1 − α)% confidence interval; the limits of the interval are called the lower and upper confidence limits. 1 − α is called the confidence level.

Example: If the population variance σ² is known and we estimate the population mean, then

Z = (x̄ − μ)/(σ/√n)

has the standard normal distribution.

We can find from the table that the probability that Z is greater than 1 is 0.1587, and the probability that Z is less than −1 is again 0.1587. These values come from tables of the standard normal distribution.
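Instead of a printed table, the same values can be reproduced with the standard normal CDF available in the Python standard library (statistics.NormalDist, Python 3.8+):

```python
from statistics import NormalDist

# Reproduce the quoted standard normal table values with the CDF.
Z = NormalDist()                        # mean 0, standard deviation 1
p_upper = 1 - Z.cdf(1.0)                # P(Z > 1), about 0.1587
p_lower = Z.cdf(-1.0)                   # P(Z < -1), about 0.1587
p_within = Z.cdf(1.0) - Z.cdf(-1.0)     # P(-1 < Z < 1), about 0.6827
```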

Now we can find a confidence interval for the mean. Since

P(−1 < Z < 1) = 1 − 2 × 0.1587 = 0.6826,

then for μ we can write

P(x̄ − σ/√n < μ < x̄ + σ/√n) = 0.6826

The confidence level that the "true" value is within 1 standard error (the standard deviation of the sampling distribution) of the sample mean is 0.6826. The probability that the "true" value is within 2 standard errors of the sample mean is 0.9545.
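A coverage simulation makes the meaning of these numbers concrete; the population parameters here are our own example values, not from the slides:

```python
import random
from statistics import fmean

# Coverage sketch: with known sigma, the interval xbar +/- 2*sigma/sqrt(n)
# should contain the true mean in about 95.45% of repeated samples.
# Population N(10, 3^2) and n = 25 assumed for illustration.
random.seed(4)
mu, sigma, n, trials = 10.0, 3.0, 25, 20000
half_width = 2 * sigma / n ** 0.5

covered = 0
for _ in range(trials):
    xbar = fmean(random.gauss(mu, sigma) for _ in range(n))
    if xbar - half_width < mu < xbar + half_width:
        covered += 1

coverage = covered / trials             # near 0.9545
```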

What we did here was to find the sampling distribution and use it to define confidence intervals. Here we used a two-sided symmetric interval. Intervals don't have to be two-sided or symmetric; under some circumstances non-symmetric intervals might be better. For example, it might be better to diagnose a patient for a particular treatment than not: if the doctor makes an error and does not treat the patient, the patient might die, but if the doctor makes a mistake and starts the treatment, it can be stopped and corrected at some later time.

Above we considered the case when the population variance is known in advance. That is rarely the case in real life. When both the population mean and variance are unknown we can still find confidence intervals. In this case we calculate the sample mean and variance and then consider the distribution of the statistic:

t = (x̄ − μ)/(s/√n)

Here s2 is the sample variance.

Since it is the ratio of a standard normal random variable to the square root of a χ² random variable with n−1 degrees of freedom divided by its degrees of freedom, this statistic has Student's t distribution with n−1 degrees of freedom. In this case we can use a table of the t distribution to find confidence levels.

It is not surprising that when we do not know the population variance, the confidence intervals for the same confidence level become larger. That is the price we pay for what we do not know.
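This price can be seen in a simulation (the example values are our own): if we ignore it and plug s into the normal critical value 1.96, the resulting interval under-covers for small n, which is exactly why the wider Student's t interval is needed.

```python
import random
from statistics import NormalDist, fmean, stdev

# Sketch: using the normal critical value with the estimated s gives
# under-coverage for small n. Standard normal population and n = 5 assumed.
random.seed(5)
mu, sigma, n, trials = 0.0, 1.0, 5, 40000
z95 = NormalDist().inv_cdf(0.975)       # 1.96 for a nominal 95% interval

covered = 0
for _ in range(trials):
    xs = [random.gauss(mu, sigma) for _ in range(n)]
    xbar, s = fmean(xs), stdev(xs)
    half = z95 * s / n ** 0.5
    if xbar - half < mu < xbar + half:
        covered += 1

coverage = covered / trials             # clearly below 0.95 for n = 5
```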

If the number of degrees of freedom becomes large, the t distribution is well approximated by the normal distribution. For n > 100 we can use the normal distribution to find confidence levels and intervals.