Week 7 October 13-17

Three Mini-Lectures

QMM 510

Fall 2014


Confidence Interval ML 7.1

For a Proportion ()

Chapter 8

  • A proportion is a mean of data whose only values are 0 or 1.
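This is easy to verify numerically. A minimal Python sketch (illustrative only; the data are made up, not from the original slides):

```python
import numpy as np

# Hypothetical sample coded 1 = "success", 0 = "failure"
x = np.array([1, 0, 1, 1, 0, 1, 0, 1, 1, 1])

p = x.mean()            # the mean of 0/1 data ...
print(p)                # 0.7, identical to x.sum() / len(x), i.e., p = x/n
```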

Confidence Interval for a Proportion (π)

Chapter 8

Applying the CLT

  • The distribution of a sample proportion p = x/n is symmetric if π = .50.
  • The distribution of p approaches normality as n increases, for any π.

Confidence Interval for a Proportion (π)

Chapter 8

When Is It Safe to Assume Normality of p?

  • Rule of Thumb: The sample proportion p = x/n may be assumed to be normal if both np ≥ 10 and n(1 − p) ≥ 10.

Sample size to assume normality:

Rule: It is safe to assume normality of p = x/n if we have at least 10 “successes” and 10 “failures” in the sample.

Table 8.9
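Table 8.9 itself is not reproduced here, but the rule of thumb is easy to check directly. A small helper (an illustrative sketch, not from the slides):

```python
def normality_ok(successes: int, n: int) -> bool:
    """Rule of thumb: p = x/n may be treated as normal when the sample
    contains at least 10 successes and at least 10 failures."""
    return successes >= 10 and (n - successes) >= 10

print(normality_ok(successes=12, n=50))   # True: 12 successes, 38 failures
print(normality_ok(successes=4, n=200))   # False: only 4 successes
```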


Confidence Interval for a Proportion (π)

Chapter 8

Confidence Interval for π

  • The confidence interval for the unknown π (assuming a large sample) is based on the sample proportion p = x/n.
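The slide's formula appears as an image in the original deck; the usual large-sample (Wald) interval is p ± z √(p(1 − p)/n). A short Python sketch with hypothetical numbers:

```python
from math import sqrt
from scipy.stats import norm

x, n = 42, 120                     # hypothetical: 42 successes in 120 trials
p = x / n                          # sample proportion
z = norm.ppf(0.975)                # z for 95% confidence, about 1.96
half_width = z * sqrt(p * (1 - p) / n)

print(f"95% CI for the population proportion: "
      f"{p - half_width:.3f} to {p + half_width:.3f}")
```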

Estimating from a Finite Population

Chapter 8

N = population size; n = sample size

  • The FPCF narrows the confidence interval somewhat.
  • When the sample is small relative to the population, the FPCF has little effect. If n/N < .05, it is reasonable to omit it (FPCF ≈ 1).
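A quick sketch of the correction (assuming the usual factor FPCF = √((N − n)/(N − 1)), which multiplies the standard error):

```python
from math import sqrt

def fpcf(N: int, n: int) -> float:
    """Finite population correction factor applied to the standard error."""
    return sqrt((N - n) / (N - 1))

print(fpcf(N=1000, n=40))   # ~0.98: n/N = .04 < .05, so FPCF is close to 1
print(fpcf(N=200, n=80))    # ~0.78: the interval narrows noticeably
```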
Sample Size Determination ML 7-2

Chapter 8

Sample Size to Estimate μ

To estimate a population mean with a precision of ± E (allowable error), you would need a sample of size n = (zσ/E)².
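A minimal sketch of that calculation (assuming z is taken from the chosen confidence level and the result is rounded up to a whole observation):

```python
from math import ceil
from scipy.stats import norm

def sample_size_for_mean(sigma: float, E: float, conf: float = 0.95) -> int:
    """n = (z*sigma/E)^2, rounded up so the desired precision is met."""
    z = norm.ppf(1 - (1 - conf) / 2)      # two-sided z value
    return ceil((z * sigma / E) ** 2)

print(sample_size_for_mean(sigma=15, E=2))   # hypothetical sigma and error
```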

Sample Size Determination for a Mean

Chapter 8

How to Estimate σ?

  • Method 1: Take a Preliminary Sample: Take a small preliminary sample and use the sample s in place of σ in the sample size formula.
  • Method 2: Assume a Uniform Population: Estimate rough upper and lower limits a and b and set σ = [(b − a)²/12]½.
  • Method 3: Assume a Normal Population: Estimate rough upper and lower limits a and b and set σ = (b − a)/4. This assumes normality with most of the data within μ ± 2σ, so the range is 4σ.
  • Method 4: Poisson Arrivals: In the special case when μ is a Poisson arrival rate, then σ = √μ.

Sample Size Determination for a Mean

Chapter 8

Using MegaStat

For example, how large a sample is needed to estimate the population mean age of college professors with 95 percent confidence and a precision of ± 2 years (i.e., an allowable error of 2 years), assuming ages range from 25 to 70 years? To estimate σ, we assume a uniform distribution of ages from 25 to 70 (Method 2); the calculation is sketched below.
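The MegaStat screen is not reproduced here, but the arithmetic can be replicated directly (a sketch using Method 2 above, with z ≈ 1.96 for 95 percent confidence):

```python
from math import ceil, sqrt

a, b = 25, 70                    # assumed age range
sigma = (b - a) / sqrt(12)       # uniform-population estimate, about 12.99 years
z, E = 1.96, 2                   # 95 percent confidence, +/- 2 years

n = (z * sigma / E) ** 2         # about 162.1
print(ceil(n))                   # 163: sample at least 163 professors
```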

Sample Size Determination for a Proportion

Chapter 8

To estimate a population proportion with a precision of ± E (allowable error), you would need a sample of size n = (z/E)² π(1 − π).

  • Since π is a number between 0 and 1, the allowable error E is also between 0 and 1.
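A sketch of that formula (z from the chosen confidence level, with a planning value supplied for π; the π = .20 value below is purely hypothetical):

```python
from math import ceil
from scipy.stats import norm

def sample_size_for_proportion(pi: float, E: float, conf: float = 0.95) -> int:
    """n = (z/E)^2 * pi * (1 - pi), rounded up."""
    z = norm.ppf(1 - (1 - conf) / 2)
    return ceil((z / E) ** 2 * pi * (1 - pi))

print(sample_size_for_proportion(pi=0.20, E=0.02))   # 1537 for this planning value
```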
Sample Size Determination for a Proportion

Chapter 8

How to Estimate π?

  • Method 1: Assume that π = .50. This conservative method ensures the desired precision; however, the sample may end up being larger than necessary.
  • Method 2: Take a Preliminary Sample: Take a small preliminary sample and use the sample p in place of π in the sample size formula.
  • Method 3: Use a Prior Sample or Historical Data: How often are such samples available? And π might be different enough to make it a questionable assumption.


Sample Size Determination for a Proportion

Chapter 8

Using MegaStat

For example, how large a sample is needed to estimate the population proportion with 95 percent confidence and a precision of ± .02 (i.e., 2% allowable error)? The calculation is sketched below.
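MegaStat's output is not shown, but the number can be checked by hand (a sketch assuming the conservative planning value π = .50 from Method 1):

```python
z, E = 1.96, 0.02          # 95 percent confidence, +/- .02 allowable error
pi = 0.50                  # conservative planning value

n = (z / E) ** 2 * pi * (1 - pi)
print(n)                   # ~2401, so sample about 2,401 respondents (round up)
```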

One-Sample Hypothesis Tests ML 7-3

Chapter 9

Learning Objectives

LO9-1: List the steps in testing hypotheses.

LO9-2: Explain the difference between H0 and H1.

LO9-3: Define Type I error, Type II error, and power.

LO9-4: Formulate a null and alternative hypothesis for μ or π.

Logic of Hypothesis Testing

Chapter 9

LO9-2: Explain the difference between H0 and H1.

  • Hypotheses are a pair of mutually exclusive, collectively exhaustive statements about some fact about a population.
  • One statement or the other must be true, but they cannot both be true.
  • H0: Null hypothesis; H1: Alternative hypothesis
  • These two statements are hypotheses because the truth is unknown.

State the Hypothesis

Logic of Hypothesis Testing

Chapter 9

State the Hypothesis

  • Efforts will be made to reject the null hypothesis.
  • If H0 is rejected, we tentatively conclude H1 to be the case.
  • H0 is sometimes called the maintained hypothesis.
  • H1 is called the action alternative because action may be required if we reject H0 in favor of H1.

Can Hypotheses Be Proved?

  • We cannot accept a null hypothesis; we can only fail to reject it.

Role of Evidence

  • The null hypothesis is assumed true and a contradiction is sought.
Logic of Hypothesis Testing

Chapter 9

LO9-3: Define Type I error, Type II error, and power.

Types of Error

  • Type I error: Rejecting the null hypothesis when it is true. This occurs with probability α (the level of significance). Also called a false positive.
  • Type II error: Failure to reject the null hypothesis when it is false. This occurs with probability β. Also called a false negative.
Logic of Hypothesis Testing

Chapter 9

Probability of Type I and Type II Errors

  • If we choose α = .05, we expect to commit a Type I error about 5 times in 100.
  • β cannot be chosen in advance because it depends on α and the sample size.
  • A small β is desirable, other things being equal.
Logic of Hypothesis Testing

Chapter 9

Power of a Test

  • A low β risk means high power.
  • Larger samples lead to increased power.
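A rough illustration of both points (an illustrative sketch for a right-tailed z test with known σ; all numbers are hypothetical):

```python
from math import sqrt
from scipy.stats import norm

def power_right_tailed_z(mu0, mu_true, sigma, n, alpha=0.05):
    """Power of a right-tailed z test of H0: mu = mu0 when the true mean is mu_true."""
    z_crit = norm.ppf(1 - alpha)                    # critical z for this alpha
    xbar_crit = mu0 + z_crit * sigma / sqrt(n)      # decision threshold for x-bar
    return 1 - norm.cdf((xbar_crit - mu_true) / (sigma / sqrt(n)))

# Power (1 - beta) rises as the sample size grows
for n in (25, 50, 100):
    print(n, round(power_right_tailed_z(mu0=50, mu_true=52, sigma=8, n=n), 3))
```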
Logic of Hypothesis Testing

Chapter 9

Relationship Between α and β

  • Both a small α and a small β are desirable.
  • For a given type of test and fixed sample size, there is a trade-off between α and β.
  • The larger critical value needed to reduce α risk makes it harder to reject H0, thereby increasing β risk.
  • Both α and β can be reduced simultaneously only by increasing the sample size.
Logic of Hypothesis Testing

Chapter 9

Consequences of Type I and Type II Errors

  • The consequences of these two errors are quite different, and the costs are borne by different parties.
  • Example: Type I error is convicting an innocent defendant, so the costs are borne by the defendant. Type II error is failing to convict a guilty defendant, so the costs are borne by society if the guilty person returns to the streets.
  • Firms are increasingly wary of Type II error (failing to recall a product as soon as sample evidence begins to indicate potential problems).
Statistical Hypothesis Testing

Chapter 9

LO9-4: Formulate a null and alternative hypothesis for μ or π.

  • A statistical hypothesis is a statement about the value of a population parameter.
  • A hypothesis test is a decision between two competing, mutually exclusive, and collectively exhaustive hypotheses about the value of the parameter.
  • When testing a mean we can choose between three tests.
Statistical Hypothesis Testing

Chapter 9

One-Tailed and Two-Tailed Tests

  • The direction of the test is indicated by H1:

> indicates a right-tailed test

< indicates a left-tailed test

≠ indicates a two-tailed test

Statistical Hypothesis Testing

Chapter 9

Decision Rule

  • A test statistic shows how far the sample estimate is from its expected value, in terms of its own standard error.
  • The decision rule uses the known sampling distribution of the test statistic to establish the critical value that divides the sampling distribution into two regions.
  • Reject H0 if the test statistic lies in the rejection region.
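A compact sketch of this logic for a two-tailed test of a mean with known σ (hypothetical numbers; a t statistic would be used if σ had to be estimated):

```python
from math import sqrt
from scipy.stats import norm

# Hypothetical one-sample z test of H0: mu = 100 versus H1: mu != 100
mu0, sigma = 100, 15          # hypothesized mean and known population sigma
xbar, n = 104.2, 36           # sample mean and sample size
alpha = 0.05

z_stat = (xbar - mu0) / (sigma / sqrt(n))   # distance from mu0 in standard errors
z_crit = norm.ppf(1 - alpha / 2)            # two-tailed critical value, about 1.96

print(f"z = {z_stat:.2f}, critical values = +/-{z_crit:.2f}")
print("Reject H0" if abs(z_stat) > z_crit else "Fail to reject H0")
```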
Statistical Hypothesis Testing

Chapter 9

Decision Rule for Two-Tailed Test

  • Reject H0 if the test statistic < left-tail critical value or if the test statistic > right-tail critical value.
Statistical Hypothesis Testing

Chapter 9

When to Use a One- or Two-Sided Test

  • A two-sided hypothesis test (i.e., µ ≠ µ0) is used when direction (< or >) is of no interest to the decision maker.
  • A one-sided hypothesis test is used when the consequences of rejecting H0 are asymmetric, or when one tail of the distribution is of special importance to the researcher.
  • Rejection in a two-sided test guarantees rejection in a one-sided test, other things being equal.
Statistical Hypothesis Testing

Chapter 9

Decision Rule for Left-Tailed Test

  • Reject H0 if the test statistic < left-tail critical value.

Figure 9.2

Statistical Hypothesis Testing

Chapter 9

Decision Rule for Right-Tailed Test

  • Reject H0 if the test statistic > right-tail critical value.
Statistical Hypothesis Testing

Chapter 9

Type I Error

also called a false positive

  • A reasonably small level of significance α is desirable, other things being equal.
  • Chosen in advance, common choices for α are .10, .05, .025, .01, and .005 (i.e., 10%, 5%, 2.5%, 1%, and .5%).
  • The α risk is the area under the tail(s) of the sampling distribution.
  • In a two-sided test, the α risk is split with α/2 in each tail since there are two ways to reject H0.
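A short sketch of how the critical values come from α (two-tailed case, α/2 in each tail):

```python
from scipy.stats import norm

# Two-tailed critical values: place alpha/2 in each tail
for alpha in (0.10, 0.05, 0.01):
    z = norm.ppf(1 - alpha / 2)               # 1.645, 1.960, 2.576
    print(f"alpha = {alpha:.2f}: reject H0 if |z| > {z:.3f}")
```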