What is Statistics?


What is statistics

What is Statistics?

  • “Statistics is a way to get information from data”

Statistics

Data

Information

Data: Facts, especially numerical facts, collected together for reference or information.

Information: Knowledge communicated concerning some particular fact.

Statistics is a tool for creating new understanding from a set of numbers.

Definitions: Oxford English Dictionary


Key statistical concepts

Key Statistical Concepts…

  • Population

  • — a population is the group of all items of interest to a statistics practitioner.

  • — frequently very large; sometimes infinite.

  • E.g. All 5 million Florida voters, per Example 12.5

  • Sample

  • — A sample is a set of data drawn from the population.

  • — Potentially very large, but less than the population.

  • E.g. a sample of 765 voters exit polled on election day.


Key statistical concepts1

Key Statistical Concepts…

  • Parameter

  • — A descriptive measure of a population.

  • Statistic

  • — A descriptive measure of a sample.


Key statistical concepts2

Key Statistical Concepts…

Population

  • Populations have Parameters,

  • Samples have Statistics.

Sample

Subset

Statistic

Parameter


Descriptive statistics

Descriptive Statistics…

  • …are methods of organizing, summarizing, and presenting data in a convenient and informative way. These methods include:

    • Graphical Techniques (Chapter 2), and

    • Numerical Techniques (Chapter 4).

  • The actual method used depends on what information we would like to extract. Are we interested in…

    • • measure(s) of central location? and/or

    • • measure(s) of variability (dispersion)?

  • Descriptive Statistics helps to answer these questions…


Statistical inference

Statistical Inference…

  • Statistical inference is the process of making an estimate, prediction, or decision about a population based on a sample.

Population

Sample

Inference

Statistic

Parameter

What can we infer about a Population’s Parameters

based on a Sample’s Statistics?


Definitions

Definitions…

  • A variable is some characteristic of a population or sample.

  • E.g. student grades.

  • Typically denoted with a capital letter: X, Y, Z…

  • The values of the variable are the range of possible values for a variable.

  • E.g. student marks (0..100)

  • Data are the observed values of a variable.

  • E.g. student marks: {67, 74, 71, 83, 93, 55, 48}


Interval data

Interval Data…

  • Interval data

  • • Real numbers, e.g. heights, weights, prices, etc.

  • • Also referred to as quantitative or numerical.

  • Arithmetic operations can be performed on interval data, thus it's meaningful to talk about 2*Height, or Price + $1, and so on.


Nominal data

Nominal Data…

  • Nominal Data

  • • The values of nominal data are categories.

  • E.g. responses to questions about marital status, coded as:

  • Single = 1, Married = 2, Divorced = 3, Widowed = 4

  • Because the numbers are arbitrary, arithmetic operations don't make any sense (e.g. does Widowed ÷ 2 = Married?!)

  • Nominal data are also called qualitative or categorical.


Ordinal data

Ordinal Data…

  • Ordinal Data appear to be categorical in nature, but their values have an order; a ranking to them:

  • E.g. College course rating system:

  • poor = 1, fair = 2, good = 3, very good = 4, excellent = 5

  • While it's still not meaningful to do arithmetic on this data (e.g. does 2*fair = very good?!), we can say things like:

  • excellent > poor or fair < very good

  • That is, order is maintained no matter what numeric values are assigned to each category.


Graphical tabular techniques for nominal data

Graphical & Tabular Techniques for Nominal Data…

  • The only allowable calculation on nominal data is to count the frequency of each value of the variable.

  • We can summarize the data in a table that presents the categories and their counts, called a frequency distribution.

  • A relative frequency distribution lists the categories and the proportion with which each occurs.

  • Refer to Example 2.1
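The counting step is easy to reproduce in software. A minimal Python sketch (the list of responses below is hypothetical, standing in for survey data like the marital-status example above):

```python
from collections import Counter

# Hypothetical nominal responses (marital-status categories from the earlier slide)
responses = ["Single", "Married", "Married", "Divorced", "Single", "Widowed", "Married"]

counts = Counter(responses)                            # frequency distribution
n = len(responses)
rel_freq = {cat: c / n for cat, c in counts.items()}   # relative frequency distribution

for cat, c in counts.items():
    print(f"{cat:9s} frequency={c} relative={rel_freq[cat]:.3f}")
```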


Nominal data tabular summary

Nominal Data (Tabular Summary)


Nominal data frequency

Nominal Data (Frequency)

Bar Charts are often used to display frequencies…


Nominal data1

Nominal Data

It's all the same information (based on the same data); just a different presentation.


Graphical techniques for interval data

Graphical Techniques for Interval Data

  • There are several graphical methods that are used when the data are interval (i.e. numeric, non-categorical).

  • The most important of these graphical methods is the histogram.

  • The histogram is not only a powerful graphical technique used to summarize interval data, but it is also used to help explain probabilities.


Building a histogram

Building a Histogram…

  • Collect the Data 

  • Create a frequency distribution for the data. 

  • Draw the Histogram. 
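A minimal sketch of these three steps in Python with matplotlib (the data here are synthetic, standing in for any interval data set such as the telephone bills used later):

```python
import numpy as np
import matplotlib.pyplot as plt

# Step 1: collect the data (synthetic interval data for illustration)
rng = np.random.default_rng(1)
bills = rng.normal(loc=45, scale=15, size=200)

# Steps 2 and 3: hist() bins the data (the frequency distribution) and draws the bars
counts, bin_edges, _ = plt.hist(bills, bins=8, edgecolor="black")
plt.xlabel("Telephone bill ($)")
plt.ylabel("Frequency")
plt.title("Histogram")
plt.show()
```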


Histogram and stem leaf

Histogram and Stem & Leaf…


Ogive

Ogive…

  • An ogive is a graph of a cumulative frequency distribution.

  • We create an ogive in three steps…

  • 1) Calculate relative frequencies. 

  • 2) Calculate cumulative relative frequencies by adding the current class’ relative frequency to the previous class’ cumulative relative frequency.

  • (For the first class, its cumulative relative frequency is just its relative frequency.)

  • 3) Graph the cumulative relative frequencies against the upper limits of the classes.


Cumulative relative frequencies

Cumulative Relative Frequencies…

first class…

next class: .355+.185=.540

:

:

last class: .930+.070=1.00
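A short Python sketch of step 2 (the .355, .540 and 1.00 values above come from the slide; the middle relative frequencies are hypothetical fill-ins so the total is 1.00):

```python
import numpy as np

# Relative frequencies by class; first two and last are from the slide, the rest are hypothetical
rel_freq = np.array([0.355, 0.185, 0.150, 0.120, 0.090, 0.030, 0.070])

cum_rel_freq = np.cumsum(rel_freq)   # running total of relative frequencies
print(cum_rel_freq)                  # [0.355 0.54 0.69 0.81 0.90 0.93 1.00]
# Plotting cum_rel_freq against the class upper limits gives the ogive.
```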


Ogive1

Ogive…

The ogive can be used to answer questions like:

What telephone bill value is at the 50th percentile?

“around $35”

(Refer also to Fig. 2.13 in your textbook)


Scatter diagram

Scatter Diagram…

  • Example 2.9 A real estate agent wanted to know to what extent the selling price of a home is related to its size…

  • Collect the data 

  • Determine the independent variable (X – house size) and the dependent variable (Y – selling price) 

  • Use Excel to create a “scatter diagram”…
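Outside of Excel, the same chart takes a few lines of Python; the (size, price) pairs below are hypothetical, standing in for the Example 2.9 data file:

```python
import matplotlib.pyplot as plt

# Hypothetical (house size, selling price) pairs
size_sqft = [1500, 1800, 2100, 2300, 2600, 3000]
price_k   = [210, 240, 265, 280, 310, 345]

plt.scatter(size_sqft, price_k)      # X = house size (independent), Y = price (dependent)
plt.xlabel("House size (sq. ft.)")
plt.ylabel("Selling price ($000s)")
plt.title("Scatter diagram")
plt.show()
```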


Scatter diagram1

Scatter Diagram…

  • It appears that in fact there is a relationship, that is, the greater the house size the greater the selling price…


Patterns of scatter diagrams

Patterns of Scatter Diagrams…

  • Linearity and Direction are two concepts we are interested in

Positive Linear Relationship

Negative Linear Relationship

Weak or Non-Linear Relationship


Time series data

Time Series Data…

  • Observations measured at the same point in time are called cross-sectional data.

  • Observations measured at successive points in time are called time-series data.

  • Time-series data are graphed on a line chart, which plots the value of the variable on the vertical axis against the time periods on the horizontal axis.


Numerical descriptive techniques

Numerical Descriptive Techniques…

  • Measures of Central Location

    • Mean, Median, Mode

  • Measures of Variability

    • Range, Standard Deviation, Variance, Coefficient of Variation

  • Measures of Relative Standing

    • Percentiles, Quartiles

  • Measures of Linear Relationship

    • Covariance, Correlation, Least Squares Line


Measures of central location

Measures of Central Location…

  • The arithmetic mean, a.k.a. average, shortened to mean, is the most popular & useful measure of central location.

  • It is computed by simply adding up all the observations and dividing by the total number of observations:

Mean = (Sum of the observations) / (Number of observations)


Arithmetic mean

Arithmetic Mean…

Sample Mean: x̄ = (Σxᵢ) / n

Population Mean: μ = (Σxᵢ) / N
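As a quick check in Python, using the student marks listed on the earlier Definitions slide:

```python
from statistics import mean

marks = [67, 74, 71, 83, 93, 55, 48]   # data from the Definitions slide
x_bar = sum(marks) / len(marks)        # sum of the observations / number of observations
print(x_bar, mean(marks))              # both print 70.142857...
```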


Statistics is a pattern language

Statistics is a pattern language…


The arithmetic mean

The Arithmetic Mean…

  • …is appropriate for describing measurement data, e.g. heights of people, marks of student papers, etc.

  • …is seriously affected by extreme values called “outliers”. E.g. as soon as a billionaire moves into a neighborhood, the average household income increases beyond what it was previously!


Measures of variability

Measures of Variability…

  • Measures of central location fail to tell the whole story about the distribution; that is, how much are the observations spread out around the mean value?

For example, two sets of class grades are shown. The mean (=50) is the same in each case…

But, the red class has greater variability than the blue class.


Range

Range…

  • The range is the simplest measure of variability, calculated as:

  • Range = Largest observation – Smallest observation

  • E.g.

  • Data: {4, 4, 4, 4, 50}  →  Range = 46

  • Data: {4, 8, 15, 24, 39, 50}  →  Range = 46

  • The range is the same in both cases,

  • but the data sets have very different distributions…


Statistics is a pattern language1

Statistics is a pattern language…


Variance

Variance…

  • The variance of a population is: σ² = Σ(xᵢ − μ)² / N

  • The variance of a sample is: s² = Σ(xᵢ − x̄)² / (n − 1)

where μ is the population mean, N is the population size, and x̄ is the sample mean.

Note! The denominator of the sample variance is the sample size (n) minus one!


Application

Application…

  • Example 4.7. The following sample consists of the number of jobs six randomly selected students applied for: 17, 15, 23, 7, 9, 13.

  • Find its mean and variance.

  • What are we looking to calculate? The sample mean x̄ and the sample variance s²…

…as opposed to the population parameters μ or σ²


Sample mean variance

Sample Mean & Variance…

Sample Mean: x̄ = (Σxᵢ) / n

Sample Variance: s² = Σ(xᵢ − x̄)² / (n − 1)

Sample Variance (shortcut method): s² = [Σxᵢ² − (Σxᵢ)² / n] / (n − 1)


Standard deviation

Standard Deviation…

  • The standard deviation is simply the square root of the variance, thus:

  • Population standard deviation: σ = √σ²

  • Sample standard deviation: s = √s²
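A minimal Python sketch for Example 4.7 (the statistics module uses the n − 1 denominator for variance, matching the sample formulas above):

```python
from statistics import mean, variance, stdev

jobs = [17, 15, 23, 7, 9, 13]   # Example 4.7: number of jobs applied for by six students

x_bar = mean(jobs)       # 14.0
s2 = variance(jobs)      # sample variance (n - 1 denominator): 33.2
s = stdev(jobs)          # sample standard deviation: sqrt(33.2) ≈ 5.76
print(x_bar, s2, s)
```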


Standard deviation1

Standard Deviation…

  • Consider Example 4.8 where a golf club manufacturer has designed a new club and wants to determine if it is hit more consistently (i.e. with less variability) than with an old club.

  • Using Tools > Data Analysis… > Descriptive Statistics in Excel (the Analysis ToolPak may need to be added in), we produce the following tables for interpretation…

You get more consistent distance with the new club.


The empirical rule if the histogram is bell shaped

The Empirical Rule… If the histogram is bell shaped

  • Approximately 68% of all observations fall

  • within one standard deviation of the mean.

  • Approximately 95% of all observations fall

  • within two standard deviations of the mean.

  • Approximately 99.7% of all observations fall

  • within three standard deviations of the mean.


Chebysheff s theorem not often used because interval is very wide

Chebysheff’s Theorem…Not often used because interval is very wide.

  • A more general interpretation of the standard deviation is derived from Chebysheff’s Theorem, which applies to all shapes of histograms (not just bell shaped).

  • The proportion of observations in any sample that lie within k standard deviations of the mean is at least 1 − 1/k² (for k > 1).

For k=2 (say), the theorem states that at least 3/4 of all observations lie within 2 standard deviations of the mean. This is a “lower bound” compared to Empirical Rule’s approximation (95%).


Box plots

Box Plots…

  • These box plots are based on data in Xm04-15.

  • Wendy’s service time is shortest and least variable.

  • Hardee’s has the greatest variability, while Jack-in-the-Box has the longest service times.


Methods of collecting data

Methods of Collecting Data…

  • There are many methods used to collect or obtain data for statistical analysis. Three of the most popular methods are:

  • • Direct Observation

  • • Experiments, and

  • • Surveys.


Sampling

Sampling…

  • Recall that statistical inference permits us to draw conclusions about a population based on a sample.

  • Sampling (i.e. selecting a sub-set of a whole population) is often done for reasons of cost (it’s less expensive to sample 1,000 television viewers than 100 million TV viewers) and practicality (e.g. performing a crash test on every automobile produced is impractical).

  • In any case, the sampled population and the target population should be similar to one another.


Sampling plans

Sampling Plans…

  • A sampling plan is just a method or procedure for specifying how a sample will be taken from a population.

  • We will focus our attention on these three methods:

  • Simple Random Sampling,

  • Stratified Random Sampling, and

  • Cluster Sampling.


Simple random sampling

Simple Random Sampling…

  • A simple random sample is a sample selected in such a way that every possible sample of the same size is equally likely to be chosen.

  • Drawing three names from a hat containing all the names of the students in the class is an example of a simple random sample: any group of three names is as likely to be picked as any other group of three names.


Stratified random sampling

Stratified Random Sampling…

  • After the population has been stratified, we can use simple random sampling to generate the complete sample:

If we only have sufficient resources to sample 400 people total,

we would draw 100 of them from the low income group…

…if we are sampling 1000 people, we’d draw

50 of them from the high income group.


Cluster sampling

Cluster Sampling…

  • A cluster sample is a simple random sample of groups or clusters of elements (vs. a simple random sample of individual objects).

  • This method is useful when it is difficult or costly to develop a complete list of the population members or when the population elements are widely dispersed geographically.

  • Cluster sampling may increase sampling error due to similarities among cluster members.


Sampling error

Sampling Error…

  • Sampling error refers to differences between the sample and the population that exist only because of the observations that happened to be selected for the sample.

  • Another way to look at this is: the differences in results for different samples (of the same size) are due to sampling error:

  • E.g. Two samples of size 10 of 1,000 households. If we happened to get the highest income level data points in our first sample and all the lowest income levels in the second, this delta is due to sampling error.


Nonsampling error

Nonsampling Error…

  • Nonsampling errors are more serious and are due to mistakes made in the acquisition of data or due to the sample observations being selected improperly. Three types of nonsampling errors:

  • Errors in data acquisition,

  • Nonresponse errors, and

  • Selection bias.

  • Note: increasing the sample size will not reduce this type of error.


Approaches to assigning probabilities

Approaches to Assigning Probabilities…

  • There are three ways to assign a probability, P(Oi), to an outcome, Oi, namely:

  • Classical approach: make certain assumptions (such as equally likely, independence) about the situation.

  • Relative frequency: assigning probabilities based on experimentation or historical data.

  • Subjective approach: Assigning probabilities based on the assignor’s judgment.


Interpreting probability

Interpreting Probability…

  • One way to interpret probability is this:

  • If a random experiment is repeated an infinite number of times, the relative frequency for any given outcome is the probability of this outcome.

  • For example, the probability of heads in a flip of a balanced coin is .5, determined using the classical approach. The probability is interpreted as being the long-term relative frequency of heads if the coin is flipped an infinite number of times.


Conditional probability

Conditional Probability…

  • Conditional probability is used to determine how two events are related; that is, we can determine the probability of one event given the occurrence of another related event.

  • Conditional probabilities are written as P(A | B), read as “the probability of A given B”, and are calculated as: P(A | B) = P(A and B) / P(B)


Independence

Independence…

  • One of the objectives of calculating conditional probability is to determine whether two events are related.

  • In particular, we would like to know whether they are independent, that is, if the probability of one event is not affected by the occurrence of the other event.

  • Two events A and B are said to be independent if

  • P(A|B) = P(A)

  • or

  • P(B|A) = P(B)


Complement rule

Complement Rule…

  • The complement of an event A is the event that occurs when A does not occur.

  • The complement rule gives us the probability of an event NOT occurring. That is:

  • P(AC) = 1 – P(A)

  • For example, in the simple roll of a die, the probability of the number “1” being rolled is 1/6. The probability that some number other than “1” will be rolled is 1 – 1/6 = 5/6.


Multiplication rule

Multiplication Rule…

  • The multiplication rule is used to calculate the joint probability of two events. It is based on the formula for conditional probability defined earlier:

If we multiply both sides of the equation by P(B) we have:

P(A and B) = P(A | B)•P(B)

Likewise, P(A and B) = P(B | A) • P(A)

If A and B are independent events, then P(A and B) = P(A)•P(B)


Addition rule

Addition Rule…

  • Recall: the addition rule was introduced earlier to provide a way to compute the probability of event A or B or both A and B occurring; i.e. the union of A and B.

  • P(A or B) = P(A) + P(B) – P(A and B)

  • Why do we subtract the joint probability P(A and B) from the sum of the probabilities of A and B?

P(A or B) = P(A) + P(B) – P(A and B)


Addition rule for mutually excusive events

Addition Rule for Mutually Exclusive Events

  • If A and B are mutually exclusive, the occurrence of one event makes the other one impossible. This means that

  • P(A and B) = 0

  • The addition rule for mutually exclusive events is

  • P(A or B) = P(A) + P(B)

  • We often use this form when we add some joint probabilities calculated from a probability tree


Two types of random variables

Two Types of Random Variables…

  • Discrete Random Variable

  • – one that takes on a countable number of values

  • – E.g. values on the roll of dice: 2, 3, 4, …, 12

  • Continuous Random Variable

  • – one whose values are not discrete, not countable

  • – E.g. time (30.1 minutes? 30.10000001 minutes?)

  • Analogy:

  • Integers are Discrete, while Real Numbers are Continuous


Laws of expected value

Laws of Expected Value…

  • E(c) = c

    • The expected value of a constant (c) is just the value of the constant.

  • E(X + c) = E(X) + c

  • E(cX) = cE(X)

    • We can “pull” a constant out of the expected value expression (either as part of a sum with a random variable X or as a coefficient of random variable X).


Laws of variance

Laws of Variance…

  • V(c) = 0

    • The variance of a constant (c) is zero.

  • V(X + c) = V(X)

    • The variance of a random variable and a constant is just the variance of the random variable (per 1 above).

  • V(cX) = c2V(X)

    • The variance of a random variable and a constant coefficient is the coefficient squared times the variance of the random variable.


Binomial distribution

Binomial Distribution…

  • The binomial distribution is the probability distribution that results from doing a “binomial experiment”. Binomial experiments have the following properties:

  • Fixed number of trials, represented as n.

  • Each trial has two possible outcomes, a “success” and a “failure”.

  • P(success)=p (and thus: P(failure)=1–p), for all trials.

  • The trials are independent, which means that the outcome of one trial does not affect the outcomes of any other trials.


Binomial random variable

Binomial Random Variable…

  • The binomial random variable counts the number of successes in n trials of the binomial experiment. It can take on values 0, 1, 2, …, n. Thus, it's a discrete random variable.

  • To calculate the probability associated with each value we use combinatorics:

P(x) = [n! / (x!(n − x)!)] p^x (1 − p)^(n − x),  for x = 0, 1, 2, …, n
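A short scipy sketch of these binomial calculations, using the quiz setting from the next slides (n = 10 questions, P(success) = .20):

```python
from scipy.stats import binom

n, p = 10, 0.20

print(binom.pmf(2, n, p))        # P(X = 2) ≈ 0.3020
print(binom.cdf(4, n, p))        # P(X ≤ 4) ≈ 0.9672 (the probability Pat fails)

# Mean and variance of a binomial random variable
print(n * p, n * p * (1 - p))    # mean = np = 2.0, variance = np(1 - p) = 1.6
```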


Binomial table

Binomial Table…

  • “What is the probability that Pat fails the quiz”?

  • i.e. what is P(X ≤ 4), given P(success) = .20 and n=10 ?

P(X ≤ 4) = .967


Binomial table1

Binomial Table…

  • “What is the probability that Pat gets two answers correct?”

  • i.e. what is P(X = 2), given P(success) = .20 and n=10 ?

P(X = 2) = P(X≤2) – P(X≤1) = .678 – .376 = .302

remember, the table shows cumulative probabilities…


Binomdist excel function

=BINOMDIST() Excel Function…

  • There is a binomial distribution function in Excel that can also be used to calculate these probabilities. For example:

  • What is the probability that Pat gets two answers correct?

=BINOMDIST(# successes, # trials, P(success), cumulative), where cumulative = TRUE returns P(X≤x)

=BINOMDIST(2, 10, .20, FALSE) returns P(X=2) = .3020


Binomdist excel function1

=BINOMDIST() Excel Function…

  • There is a binomial distribution function in Excel that can also be used to calculate these probabilities. For example:

  • What is the probability that Pat fails the quiz?

=BINOMDIST(# successes, # trials, P(success), cumulative), where cumulative = TRUE returns P(X≤x)

=BINOMDIST(4, 10, .20, TRUE) returns P(X≤4) = .9672


Binomial distribution1

Binomial Distribution…

  • As you might expect, statisticians have developed general formulas for the mean, variance, and standard deviation of a binomial random variable. They are: μ = np, σ² = np(1 − p), and σ = √(np(1 − p)).


Poisson distribution

Poisson Distribution…

  • Named for Simeon Poisson, the Poisson distribution is a discrete probability distribution and refers to the number of events (a.k.a. successes) within a specific time period or region of space. For example:

    • The number of cars arriving at a service station in 1 hour. (The interval of time is 1 hour.)

    • The number of flaws in a bolt of cloth. (The specific region is a bolt of cloth.)

    • The number of accidents in 1 day on a particular stretch of highway. (The interval is defined by both time, 1 day, and space, the particular stretch of highway.)


The poisson experiment

The Poisson Experiment…

  • Like a binomial experiment, a Poisson experiment has four defining characteristic properties:

  • The number of successes that occur in any interval is independent of the number of successes that occur in any other interval.

  • The probability of a success in an interval is the same for all equal-size intervals

  • The probability of a success is proportional to the size of the interval.

  • The probability of more than one success in an interval approaches 0 as the interval becomes smaller.


Poisson distribution1

Poisson Distribution…

  • The Poisson random variable is the number of successes that occur in a period of time or an interval of space in a Poisson experiment.

  • E.g. On average, 96 trucks arrive at a border crossing

  • every hour.

  • E.g. The number of typographic errors in a new textbook edition averages 1.5 per 100 pages.

successes

time period

successes (?!)

interval


Poisson probability distribution

Poisson Probability Distribution…

  • The probability that a Poisson random variable assumes a value of x is given by:

P(x) = (e^(−μ) μ^x) / x!,  for x = 0, 1, 2, …

  • where μ is the mean number of successes in the interval or region, and e is the natural logarithm base.

  • FYI: e ≈ 2.71828


Example 7 12

Example 7.12…

  • The number of typographical errors in new editions of textbooks varies considerably from book to book. After some analysis, an instructor concludes that the number of errors is Poisson distributed with a mean of 1.5 per 100 pages. The instructor randomly selects 100 pages of a new book. What is the probability that there are no typos?

  • That is, what is P(X=0) given that μ = 1.5?

“There is about a 22% chance of finding zero errors”


Poisson distribution2

Poisson Distribution…

  • As mentioned on the Poisson experiment slide:

  • The probability of a success is proportional to the size of the interval

  • Thus, knowing an error rate of 1.5 typos per 100 pages, we can determine a mean value for a 400 page book as:

  • μ = 1.5(4) = 6 typos per 400 pages.


Example 7 13

Example 7.13…

  • For a 400 page book, what is the probability that there are

  • no typos?

  • P(X=0) = e^(−6) · 6^0 / 0! = e^(−6) ≈ .0025

“there is a very small chance there are no typos”


Example 7 131

Example 7.13…

  • …Excel is an even better alternative:


Probability density functions

Probability Density Functions…

  • Unlike a discrete random variable which we studied in Chapter 7, a continuous random variable is one that can assume an uncountable number of values.

  •  We cannot list the possible values because there is an infinite number of them.

  •  Because there is an infinite number of values, the probability of each individual value is virtually 0.


Point probabilities are zero

Point Probabilities are Zero

  • Because there is an infinite number of values, the probability of each individual value is virtually 0.

    Thus, we can determine the probability of a range of values only.

  • E.g. with a discrete random variable like tossing a die, it is meaningful to talk about P(X=5), say.

  • In a continuous setting (e.g. with time as a random variable), the probability the random variable of interest, say task length, takes exactly 5 minutes is infinitesimally small, hence P(X=5) = 0.

  • It is meaningful to talk about P(X ≤ 5).


Probability density function

Probability Density Function…

  • A function f(x) is called a probability density function (over the range a ≤ x ≤ b) if it meets the following requirements:

  • f(x) ≥ 0 for all x between a and b, and

  • The total area under the curve between a and b is 1.0

[Figure: a density curve f(x) over the interval from a to b, with total area under the curve equal to 1]


The normal distribution

The Normal Distribution…

  • The normal distribution is the most important of all probability distributions. The probability density function of a normal random variable is given by:

f(x) = (1 / (σ√(2π))) e^(−(x − μ)² / (2σ²)),  −∞ < x < ∞

  • It looks like this:

  • Bell shaped,

  • Symmetrical around the mean μ…


The normal distribution1

The Normal Distribution…

  • Important things to note:

The normal distribution is fully defined by two parameters:

its mean μ and its standard deviation σ.

The normal distribution is bell shaped and

symmetrical about the mean.

Unlike the range of the uniform distribution (a ≤ x ≤ b),

normal distributions range from minus infinity to plus infinity.


Standard normal distribution


Standard Normal Distribution…

  • A normal distribution whose mean is zero and standard deviation is one is called the standard normal distribution.

  • As we shall see shortly, any normal distribution can be converted to a standard normal distribution with simple algebra. This makes calculations much easier.


Calculating normal probabilities

Calculating Normal Probabilities…

  • We can use the following function to convert any normal random variable to a standard normal random variable: Z = (X − μ) / σ

Some advice: always draw a picture!


Calculating normal probabilities1

Calculating Normal Probabilities…

  • Example: The time required to build a computer is normally distributed with a mean of 50 minutes and a standard deviation of 10 minutes:

  • What is the probability that a computer is assembled in a time between 45 and 60 minutes?

  • Algebraically speaking, what is P(45 < X < 60) ?



Calculating normal probabilities2

Calculating Normal Probabilities…

  • P(45 < X < 60) = P((45 − 50)/10 < Z < (60 − 50)/10) = P(−.5 < Z < 1)

…mean of 50 minutes and a standard deviation of 10 minutes…


Calculating normal probabilities3

Calculating Normal Probabilities…

  • We can use Table 3 in

  • Appendix B to look-up

  • probabilities P(0 < Z < z)

  • We can break up P(–.5 < Z < 1) into:

  • P(–.5 < Z < 0) + P(0 < Z < 1)

  • The distribution is symmetric around zero, so we have:

  • P(–.5 < Z < 0) = P(0 < Z < .5)

  • Hence: P(–.5 < Z < 1) = P(0 < Z < .5) + P(0 < Z < 1)


Calculating normal probabilities4

Calculating Normal Probabilities…

  • How to use Table 3…

This table gives probabilities P(0 < Z < z)

First column = integer + first decimal

Top row = second decimal place

P(0 < Z < 0.5)

P(0 < Z < 1)

P(–.5 < Z < 1) = .1915 + .3413 = .5328
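The same probability can be checked in Python without a table; a minimal scipy sketch of the computer-assembly example:

```python
from scipy.stats import norm

mu, sigma = 50, 10                          # assembly time: mean 50 min, sd 10 min

# Standardize: P(45 < X < 60) = P(-0.5 < Z < 1)
z_lo, z_hi = (45 - mu) / sigma, (60 - mu) / sigma
print(norm.cdf(z_hi) - norm.cdf(z_lo))      # ≈ 0.5328

# Or let scipy handle the location and scale directly
print(norm.cdf(60, loc=mu, scale=sigma) - norm.cdf(45, loc=mu, scale=sigma))
```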


Using the normal table table 3

Using the Normal Table (Table 3)…

  • What is P(Z > 1.6) ?

P(0 < Z < 1.6) = .4452

z

0

1.6

P(Z > 1.6) = .5 – P(0 < Z < 1.6)

= .5 – .4452

= .0548


Using the normal table table 31

Using the Normal Table (Table 3)…

  • What is P(Z < -2.23) ?

P(0 < Z < 2.23)

P(Z < -2.23)

P(Z > 2.23)

z

0

-2.23

2.23

P(Z < -2.23) = P(Z > 2.23)

= .5 – P(0 < Z < 2.23)

= .0129


Using the normal table table 32

Using the Normal Table (Table 3)…

  • What is P(Z < 1.52) ?

P(0 < Z < 1.52)

P(Z < 0) = .5

z

0

1.52

P(Z < 1.52) = .5 + P(0 < Z < 1.52)

= .5 + .4357

= .9357


Using the normal table table 33

Using the Normal Table (Table 3)…

  • What is P(0.9 < Z < 1.9) ?

P(0 < Z < 0.9)

P(0.9 < Z < 1.9)

z

0

0.9

1.9

P(0.9 < Z < 1.9) = P(0 < Z < 1.9) – P(0 < Z < 0.9)

=.4713 – .3159

= .1554


Finding values of z

Finding Values of Z…

  • Other Z values are

  • Z.05 = 1.645

  • Z.01 = 2.33


Using the values of z

Using the values of Z

  • Because z.025 = 1.96 and −z.025 = −1.96, it follows that we can state

  • P(-1.96 < Z < 1.96) = .95

  • Similarly

  • P(-1.645 < Z < 1.645) = .90
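These critical values come from the inverse of the standard normal CDF; a quick sketch with scipy:

```python
from scipy.stats import norm

# z_A is the value with area A in the right tail: z_A = norm.ppf(1 - A)
print(norm.ppf(1 - 0.025))   # z.025 ≈ 1.960
print(norm.ppf(1 - 0.05))    # z.05  ≈ 1.645
print(norm.ppf(1 - 0.01))    # z.01  ≈ 2.326 (the table rounds this to 2.33)

# Check the two-sided statements above
print(norm.cdf(1.96) - norm.cdf(-1.96))     # ≈ 0.95
print(norm.cdf(1.645) - norm.cdf(-1.645))   # ≈ 0.90
```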


Other continuous distributions

Other Continuous Distributions…

  • Three other important continuous distributions which will be used extensively in later sections are introduced here:

  • Student t Distribution,

  • Chi-Squared Distribution, and

  • F Distribution.


Student t distribution

Student t Distribution…

  • Here the letter t is used to represent the random variable, hence the name. The density function for the Student t distribution is as follows:

f(t) = [Γ((ν + 1)/2) / (√(νπ) Γ(ν/2))] (1 + t²/ν)^(−(ν + 1)/2)

  • ν (nu) is called the degrees of freedom, and

  • Γ (the Gamma function) is Γ(k) = (k−1)(k−2)…(2)(1) = (k−1)! for integer k


Student t distribution1

Student t Distribution…

  • In much the same way that μ and σ define the normal distribution, ν, the degrees of freedom, defines the Student

  • t Distribution:

  • As the number of degrees of freedom increases, the t distribution approaches the standard normal distribution.

Figure 8.24


Determining student t values

Determining Student t Values…

  • The Student t distribution is used extensively in statistical inference. Table 4 in Appendix B lists values of t_A.

  • That is, values of a Student t random variable with ν degrees of freedom such that: P(t > t_A) = A

  • The values for A are pre-determined

  • “critical” values, typically in the

  • 10%, 5%, 2.5%, 1% and 1/2% range.


Using the t table table 4 for values

Using the t table (Table 4) for values…

  • For example, if we want the value of t with 10 degrees of freedom such that the area under the Student t curve is .05:

Area under the curve value (tA) : COLUMN

t.05,10

t.05,10=1.812

Degrees of Freedom : ROW
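The same lookup can be done with scipy instead of Table 4; a minimal sketch:

```python
from scipy.stats import t

# t_(A, df): the value with area A in the right tail of a Student t distribution
print(t.ppf(1 - 0.05, df=10))   # t.05,10 ≈ 1.812, matching the table
print(t.sf(1.812, df=10))       # right-tail area beyond 1.812 ≈ 0.05 (sf = 1 - cdf)
```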


F distribution

F Distribution…

  • The F density function is defined for F > 0. Two parameters define this distribution, and as we've already seen these are again degrees of freedom:

  • ν₁ is the “numerator” degrees of freedom and

  • ν₂ is the “denominator” degrees of freedom.


Determining values of f

Determining Values of F…

  • For example, what is the value of F for 5% of the area under the right hand “tail” of the curve, with a numerator degree of freedom of 3 and a denominator degree of freedom of 7?

  • Solution: use the F look-up (Table 6)

There are different tables

for different values of A.

Make sure you start with

the correct table!!

F.05,3,7=4.35

F.05,3,7

Denominator Degrees of Freedom : ROW

Numerator Degrees of Freedom : COLUMN
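A scipy sketch of the same lookup, including the left-tail relationship described on the next slide:

```python
from scipy.stats import f

# F with area A = .05 in the right tail, numerator df = 3, denominator df = 7
print(f.ppf(1 - 0.05, dfn=3, dfd=7))       # F.05,3,7 ≈ 4.35

# Left-tail values via the reciprocal relationship (note the swapped degrees of freedom)
print(f.ppf(0.05, dfn=3, dfd=7))           # F.95,3,7
print(1 / f.ppf(1 - 0.05, dfn=7, dfd=3))   # the same value: 1 / F.05,7,3
```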


Determining values of f1

Determining Values of F…

  • For areas under the curve on the left hand side of the curve, we can leverage the following relationship: F(1−A, ν₁, ν₂) = 1 / F(A, ν₂, ν₁)

Pay close attention to the order of the degrees of freedom!


Chapter 9

Chapter 9

Sampling Distributions


Sampling distribution of the mean

Sampling Distribution of the Mean…

  • A fair die is thrown infinitely many times,

  • with the random variable X = # of spots on any throw.

  • The probability distribution of X is: P(x) = 1/6, for x = 1, 2, 3, 4, 5, 6

  • …and the mean and variance are calculated as well: μ = 3.5, σ² = 2.92


Sampling distribution of two dice

Sampling Distribution of Two Dice

  • A sampling distribution is created by looking at

  • all samples of size n=2 (i.e. two dice) and their means…

  • While there are 36 possible samples of size 2, there are only 11 values for x̄, and some (e.g. x̄ = 3.5) occur more frequently than others (e.g. x̄ = 1).


Sampling distribution of two dice1

[Chart: the probability distribution P(x̄) of the 11 possible sample means, with probabilities rising from 1/36 up to 6/36 and falling back to 1/36]

Sampling Distribution of Two Dice…

  • The sampling distribution of x̄ is shown below:


Compare

Compare…

  • Compare the distribution of X…

  • …with the sampling distribution of x̄.

  • As well, note that: μ_x̄ = μ and σ²_x̄ = σ²/n


Central limit theorem

Central Limit Theorem…

  • The sampling distribution of the mean of a random sample drawn from any population is approximately normal for a sufficiently large sample size.

  • The larger the sample size, the more closely the sampling distribution of x̄ will resemble a normal distribution.


Central limit theorem1

Central Limit Theorem…

  • If the population is normal, then x̄ is normally distributed for all values of n.

  • If the population is non-normal, then x̄ is approximately normal only for larger values of n.

  • In many practical situations, a sample size of 30 may be sufficiently large to allow us to use the normal distribution as an approximation for the sampling distribution of x̄.


Sampling distribution of the sample mean

Sampling Distribution of the Sample Mean

  • 1. μ_x̄ = μ

  • 2. σ²_x̄ = σ²/n (so the standard error of the mean is σ_x̄ = σ/√n)

  • 3. If X is normal, x̄ is normal. If X is nonnormal, x̄ is approximately normal for sufficiently large sample sizes.

  • Note: the definition of “sufficiently large” depends on the extent of nonnormality of X (e.g. heavily skewed; multimodal)


Example 9 1 a

Example 9.1(a)…

  • The foreman of a bottling plant has observed that the amount of soda in each “32-ounce” bottle is actually a normally distributed random variable, with a mean of 32.2 ounces and a standard deviation of .3 ounce.

  • If a customer buys one bottle, what is the probability that the bottle will contain more than 32 ounces?


Example 9 1 a1

Example 9.1(a)…

  • We want to find P(X > 32), where X is normally distributed with μ = 32.2 and σ = .3:
P(X > 32) = P(Z > (32 − 32.2)/.3) = P(Z > −.67) = .7486

  • “there is about a 75% chance that a single bottle of soda contains more than 32oz.”


Example 9 1 b

Example 9.1(b)…

  • The foreman of a bottling plant has observed that the amount of soda in each “32-ounce” bottle is actually a normally distributed random variable, with a mean of 32.2 ounces and a standard deviation of .3 ounce.

  • If a customer buys a carton of four bottles, what is the probability that the mean amount of the four bottles will be greater than 32 ounces?


Example 9 1 b1

Example 9.1(b)…

  • We want to find P(x̄ > 32), where x̄ is the mean of a sample of n = 4 bottles and the fills are normally distributed

  • with μ = 32.2 and σ = .3

  • Things we know:

  • X is normally distributed, therefore so will x̄ be.

  • μ_x̄ = μ = 32.2 oz., and σ_x̄ = σ/√n = .3/√4 = .15 oz.


Example 9 1 b2

Example 9.1(b)…

  • If a customer buys a carton of four bottles, what is the probability that the mean amount of the four bottles will be greater than 32 ounces?

  • “There is about a 91% chance the mean of the four bottles will exceed 32oz.”


Graphically speaking

Graphically Speaking…

mean=32.2

what is the probability that one bottle will contain more than 32 ounces?

what is the probability that the mean of four bottles will exceed 32 oz?


Sampling distribution difference of two means

Sampling Distribution: Difference of two means

  • The final sampling distribution introduced is that of the difference between two sample means. This requires:

  • independent random samples be drawn from each of two normal populations

    If this condition is met, then the sampling distribution of the difference between the two sample means, i.e. x̄₁ − x̄₂,

    will be normally distributed.

    (note: if the two populations are not both normally distributed, but the sample sizes are “large” (>30), the distribution of x̄₁ − x̄₂ is approximately normal)


Sampling distribution difference of two means1

Sampling Distribution: Difference of two means

  • The expected value and variance of the sampling distribution of x̄₁ − x̄₂ are given by:

  • mean: E(x̄₁ − x̄₂) = μ₁ − μ₂

  • standard deviation: √(σ₁²/n₁ + σ₂²/n₂)

  • (also called the standard error of the difference between two means)


Estimation

Estimation…

  • There are two types of inference: estimation and hypothesis testing; estimation is introduced first.

  • The objective of estimation is to determine the approximate value of a population parameter on the basis of a sample statistic.

  • E.g., the sample mean (x̄) is employed to estimate the population mean (μ).


Estimation1

Estimation…

  • The objective of estimation is to determine the approximate value of a population parameter on the basis of a sample statistic.

  • There are two types of estimators:

  • Point Estimator

  • Interval Estimator


Point interval estimation

Point & Interval Estimation…

  • For example, suppose we want to estimate the mean summer income of a class of business students. For n=25 students,

  • x̄ is calculated to be 400 $/week.

  • Point estimate: the mean income is 400 $/week.

  • Interval estimate (an alternative statement):

  • The mean income is between 380 and 420 $/week.


Estimating when is known

Estimating μ when σ is known…

the confidence interval

  • We established in Chapter 9 that x̄ is (approximately) normally distributed with mean μ and standard deviation σ/√n.

  • Thus, the probability that the interval: x̄ ± z_(α/2) σ/√n

  • contains the population mean is 1 − α. This is a confidence interval estimator for μ.

the sample mean is in the center of the interval…


Four commonly used confidence levels

Four commonly used confidence levels…

  • Confidence Level    α      z_(α/2)
       90%             .10    1.645
       95%             .05    1.96
       98%             .02    2.33
       99%             .01    2.575

cut & keep handy!

Table 10.1


Example 10 1

Example 10.1…

  • A computer company samples demand during lead time over 25 time periods:

  • It is known that the standard deviation of demand over lead time is 75 computers. We want to estimate the mean demand over lead time with 95% confidence in order to set inventory levels…


Example 10 11

Example 10.1…

CALCULATE

  • In order to use our confidence interval estimator, we need the following pieces of data: x̄ = 370.16, z_(α/2) = z_(.025) = 1.96, σ = 75, n = 25

  • therefore: x̄ ± z_(α/2) σ/√n = 370.16 ± 1.96 (75/√25) = 370.16 ± 29.40

  • The lower and upper confidence limits are 340.76 and 399.56.

Calculated from the data…

Given
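A minimal Python sketch of this interval (the sample mean of 370.16 is implied by the limits quoted above rather than read directly off the data listing):

```python
from math import sqrt
from scipy.stats import norm

x_bar, sigma, n = 370.16, 75, 25                # sample mean, known sigma, sample size
alpha = 0.05                                    # 95% confidence

z = norm.ppf(1 - alpha / 2)                     # ≈ 1.96
half_width = z * sigma / sqrt(n)                # ≈ 29.4
print(x_bar - half_width, x_bar + half_width)   # ≈ (340.76, 399.56)
```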


Example 10 12

Example 10.1…

INTERPRET

  • The estimate of the mean demand during lead time lies between 340.76 and 399.56; we can use this as input in developing an inventory policy.

  • That is, we estimated that the mean demand during lead time falls between 340.76 and 399.56, and this type of estimator is correct 95% of the time. That also means that 5% of the time the estimator will be incorrect.

  • Incidentally, the media often refer to the 95% figure as “19 times out of 20,” which emphasizes the long-run aspect of the confidence level.


Interval width

Interval Width…

  • A wide interval provides little information.

  • For example, suppose we estimate with 95% confidence that an accountant’s average starting salary is between $15,000 and $100,000.

  • Contrast this with: a 95% confidence interval estimate of starting salaries between $42,000 and $45,000.

  • The second estimate is much narrower, providing accounting students more precise information about starting salaries.


Interval width1

Interval Width…

  • The width of the confidence interval estimate is a function of the confidence level, the population standard deviation, and the sample size…


Selecting the sample size

Selecting the Sample Size…

  • We can control the width of the interval by determining the sample size necessary to produce narrow intervals.

  • Suppose we want to estimate the mean demand “to within 5 units”; i.e. we want the interval estimate to be: x̄ ± 5

  • Since the interval estimate is: x̄ ± z_(α/2) σ/√n

  • It follows that: z_(α/2) σ/√n = 5

Solve for n to get the requisite sample size!


Selecting the sample size1

Selecting the Sample Size…

  • Solving the equation… n = (z_(α/2) σ / 5)² = (1.96 × 75 / 5)² = 864.36, rounded up to 865

  • that is, to produce a 95% confidence interval estimate of the mean (±5 units), we need to sample 865 lead time periods (vs. the 25 data points we have currently).


Sample size to estimate a mean

Sample Size to Estimate a Mean…

  • The general formula for the sample size needed to estimate a population mean with an interval estimate of: x̄ ± W

  • Requires a sample size of at least this large: n = (z_(α/2) σ / W)²
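A small helper function (not from the textbook, just a sketch of the formula above) reproduces both sample-size answers in this chapter:

```python
from math import ceil
from scipy.stats import norm

def sample_size(sigma, W, confidence):
    """n = (z_(alpha/2) * sigma / W) ** 2, rounded up to the next whole observation."""
    z = norm.ppf(1 - (1 - confidence) / 2)
    return ceil((z * sigma / W) ** 2)

print(sample_size(sigma=75, W=5, confidence=0.95))   # 865 lead-time periods
print(sample_size(sigma=6, W=1, confidence=0.99))    # 239 trees (Example 10.2)
```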


Example 10 2

Example 10.2…

  • A lumber company must estimate the mean diameter of trees to determine whether or not there is sufficient lumber to harvest an area of forest. They need to estimate this to within 1 inch at a confidence level of 99%. The tree diameters are normally distributed with a standard deviation of 6 inches.

  • How many trees need to be sampled?


Example 10 21


Example 10.2…

  • Things we know:

  • Confidence level = 99%, therefore α = .01

  • We want to estimate the mean to within 1 inch, hence W = 1.

  • We are given that σ = 6.


Example 10 22


Example 10.2…

  • We compute… n = (z_(.005) σ / W)² = (2.575 × 6 / 1)² ≈ 239

  • That is, we will need to sample at least 239 trees to have a

  • 99% confidence interval of x̄ ± 1


Nonstatistical hypothesis testing

Nonstatistical Hypothesis Testing…

  • A criminal trial is an example of hypothesis testing without the statistics.

  • In a trial a jury must decide between two hypotheses. The null hypothesis is

  • H0: The defendant is innocent

  • The alternative hypothesis or research hypothesis is

  • H1: The defendant is guilty

  • The jury does not know which hypothesis is true. They must make a decision on the basis of evidence presented.


Nonstatistical hypothesis testing1

Nonstatistical Hypothesis Testing…

  • There are two possible errors.

  • A Type I error occurs when we reject a true null hypothesis. That is, a Type I error occurs when the jury convicts an innocent person.

  • A Type II error occurs when we don’t reject a false null hypothesis. That occurs when a guilty defendant is acquitted.


Nonstatistical hypothesis testing2

Nonstatistical Hypothesis Testing…

  • The probability of a Type I error is denoted as α (Greek letter alpha). The probability of a type II error is β (Greek letter beta).

  • The two probabilities are inversely related. Decreasing one increases the other.


Nonstatistical hypothesis testing3

Nonstatistical Hypothesis Testing…

  • The critical concepts are these:

  • 1. There are two hypotheses, the null and the alternative hypotheses.

  • 2. The procedure begins with the assumption that the null hypothesis is true.

  • 3. The goal is to determine whether there is enough evidence to infer that the alternative hypothesis is true.

  • 4. There are two possible decisions:

  • Conclude that there is enough evidence to support the alternative hypothesis.

  • Conclude that there is not enough evidence to support the alternative hypothesis.


Nonstatistical hypothesis testing4

Nonstatistical Hypothesis Testing…

  • 5. Two possible errors can be made.

  • Type I error: Reject a true null hypothesis

  • Type II error: Do not reject a false null hypothesis.

  • P(Type I error) = α

  • P(Type II error) = β


Concepts of hypothesis testing 1

Concepts of Hypothesis Testing (1)…

  • There are two hypotheses. One is called the null hypothesis and the other the alternative or research hypothesis. The usual notation is:

  • H0: — the ‘null’ hypothesis

  • H1: — the ‘alternative’ or ‘research’ hypothesis

  • The null hypothesis (H0) will always state that the parameter equals the value specified in the alternative hypothesis (H1)

pronounced

H “nought”


Concepts of hypothesis testing

Concepts of Hypothesis Testing…

  • Consider Example 10.1 (mean demand for computers during assembly lead time) again. Rather than estimate the mean demand, our operations manager wants to know whether the mean is different from 350 units. We can rephrase this request into a test of the hypothesis:

  • H0: μ = 350

  • Thus, our research hypothesis becomes:

  • H1: μ ≠ 350

This is what we are interested in determining…


Concepts of hypothesis testing 4

Concepts of Hypothesis Testing (4)…

  • There are two possible decisions that can be made:

  • Conclude that there is enough evidence to support the alternative hypothesis

    (also stated as: rejecting the null hypothesis in favor of the alternative)

  • Conclude that there is not enough evidence to support the alternative hypothesis

    (also stated as: not rejecting the null hypothesis in favor of the alternative)

    NOTE: we do not say that we accept the null hypothesis…


Concepts of hypothesis testing1

Concepts of Hypothesis Testing…

  • Once the null and alternative hypotheses are stated, the next step is to randomly sample the population and calculate a test statistic (in this example, the sample mean).

  • If the test statistic’s value is inconsistent with the null hypothesis we reject the null hypothesis and infer that the alternative hypothesis is true.

  • For example, if we’re trying to decide whether the mean is not equal to 350, a large value of (say, 600) would provide enough evidence. If is close to 350 (say, 355) we could not say that this provides a great deal of evidence to infer that the population mean is different than 350.


Types of errors

Types of Errors…

  • A Type I error occurs when we reject a true null hypothesis (i.e. Reject H0 when it is TRUE)

  • A Type II error occurs when we don’t reject a false null hypothesis (i.e. Do NOT reject H0 when it is FALSE)


Recap i

Recap I…

  • 1) Two hypotheses: H0 & H1

  • 2) ASSUME H0 is TRUE

  • 3) GOAL: determine if there is enough evidence to infer that H1 is TRUE

  • 4) Two possible decisions:

  • Reject H0 in favor of H1

  • NOT Reject H0 in favor of H1

  • 5) Two possible types of errors:

  • Type I: reject a true H0 [P(Type I) = α]

  • Type II: not reject a false H0 [P(Type II) = β]


Example 11 1

Example 11.1…

  • A department store manager determines that a new billing system will be cost-effective only if the mean monthly account is more than $170.

  • A random sample of 400 monthly accounts is drawn, for which the sample mean is $178. The accounts are approximately normally distributed with a standard deviation of $65.

  • Can we conclude that the new system will be cost-effective?


Example 11 11

Example 11.1…

  • The system will be cost effective if the mean account balance for all customers is greater than $170.

  • We express this belief as a our research hypothesis, that is:

  • H1: μ > 170 (this is what we want to determine)

  • Thus, our null hypothesis becomes:

  • H0: μ = 170 (this specifies a single value for the parameter of interest)


Example 11 12

Example 11.1…

  • What we want to show:

  • H1: μ > 170

  • H0: μ = 170 (we’ll assume this is true)

  • We know:

  • n = 400,

  • x̄ = 178, and

  • σ = 65

  • Hmm. What to do next?!


Example 11 13

Example 11.1…

  • To test our hypotheses, we can use two different approaches:

  • The rejection region approach (typically used when computing statistics manually), and

  • The p-value approach (which is generally used with a computer and statistical software).

  • We will explore both in turn…


Example 11 1 rejection region

Example 11.1… Rejection Region…

  • The rejection region is a range of values such that if the test statistic falls into that range, we decide to reject the null hypothesis in favor of the alternative hypothesis.

x̄_L is the critical value of x̄ needed to reject H0.


Example 11 14

Example 11.1…

  • All that’s left to do is calculate and compare it to 170.

we can calculate this based on any level of significance ( ) we want…


Example 11 15

Example 11.1…

  • At a 5% significance level (i.e. α = 0.05), we get: x̄_L = 170 + z_(.05) (65/√400) = 170 + 1.645(3.25)

  • Solving, we compute x̄_L = 175.34

  • Since our sample mean (178) is greater than the critical value we calculated (175.34), we reject the null hypothesis in favor of H1, i.e. that μ > 170, and conclude that it is cost effective to install the new billing system


Example 11 1 the big picture

H1: μ > 170

H0: μ = 170

Example 11.1… The Big Picture…

x̄_L = 175.34

x̄ = 178

Reject H0 in favor of H1


Standardized test statistic

Standardized Test Statistic…

  • An easier method is to use the standardized test statistic: z = (x̄ − μ) / (σ/√n) = (178 − 170) / (65/√400) = 2.46

  • and compare its result to z_α (rejection region: z > z_α)

  • Since z = 2.46 > 1.645 (z.05), we reject H0 in favor of H1…
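A minimal Python sketch of this test, including the p-value used on the next slides:

```python
from math import sqrt
from scipy.stats import norm

x_bar, mu0, sigma, n = 178, 170, 65, 400
alpha = 0.05

z = (x_bar - mu0) / (sigma / sqrt(n))   # ≈ 2.46
z_crit = norm.ppf(1 - alpha)            # ≈ 1.645 for a right-tail test
p_value = norm.sf(z)                    # ≈ 0.0069

print(z, z_crit, p_value)
print("reject H0" if z > z_crit else "do not reject H0")
```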


Plot power curve

PLOT POWER CURVE


P value

p-Value

  • The p-value of a test is the probability of observing a test statistic at least as extreme as the one computed given that the null hypothesis is true.

  • In the case of our department store example, what is the probability of observing a sample mean at least as extreme as the one already observed (i.e. x̄ = 178), given that the null hypothesis (H0: μ = 170) is true?

p-value


Interpreting the p value

Interpreting the p-value…

  • The smaller the p-value, the more statistical evidence exists to support the alternative hypothesis.

  • If the p-value is less than 1%, there is overwhelming evidence that supports the alternative hypothesis.

  • If the p-value is between 1% and 5%, there is strong evidence that supports the alternative hypothesis.

  • If the p-value is between 5% and 10%, there is weak evidence that supports the alternative hypothesis.

  • If the p-value exceeds 10%, there is no evidence that supports the alternative hypothesis.

  • We observe a p-value of .0069, hence there is overwhelming evidence to support H1: μ > 170.


Interpreting the p value1

Interpreting the p-value…

  • Compare the p-value with the selected value of the significance level:

  • If the p-value is less than α, we judge the p-value to be small enough to reject the null hypothesis.

  • If the p-value is greater than α, we do not reject the null hypothesis.

  • Since p-value = .0069 < α = .05, we reject H0 in favor of H1


Chapter opening example

Chapter-Opening Example…

  • The objective of the study is to draw a conclusion about the mean payment period. Thus, the parameter to be tested is the population mean. We want to know whether there is enough statistical evidence to show that the population mean is less than 22 days. Thus, the alternative hypothesis is

  • H1:μ < 22

  • The null hypothesis is

  • H0:μ = 22


Chapter opening example1

Chapter-Opening Example…

  • The test statistic is: z = (x̄ − μ) / (σ/√n)

  • We wish to reject the null hypothesis in favor of the alternative only if the sample mean and hence the value of the test statistic is small enough. As a result we locate the rejection region in the left tail of the sampling distribution.

  • We set the significance level at 10%.


Chapter opening example2

Chapter-Opening Example…

  • Rejection region: z < −z_(.10) = −1.28

  • From the data in SSA we compute the sample mean, and from it the test statistic: z = −.91

  • and

  • p-value = P(Z < −.91) = .5 − .3186 = .1814


Chapter opening example3

Chapter-Opening Example…

  • Conclusion: There is not enough evidence to infer that the mean is less than 22.

  • There is not enough evidence to infer that the plan will be profitable.

  • Since z = −.91 > −z.10 = −1.28,

  • we fail to reject H0: μ = 22

  • at a 10% level of significance.


Plot power curve1

PLOT POWER CURVE


Right tail testing

Right-Tail Testing…

  • Calculate the critical value of the mean (x̄_L) and compare it against the observed value of the sample mean (x̄)…


Left tail testing

Left-Tail Testing…

  • Calculate the critical value of the mean (x̄_L) and compare it against the observed value of the sample mean (x̄)…


Two tail testing

Two–Tail Testing…

  • Two tail testing is used when we want to test a research hypothesis that a parameter is not equal (≠) to some value


Example 11 2

Example 11.2…

  • AT&T’s argues that its rates are such that customers won’t see a difference in their phone bills between them and their competitors. They calculate the mean and standard deviation for all their customers at $17.09 and $3.87 (respectively).

  • They then sample 100 customers at random and recalculate a monthly phone bill based on competitor’s rates.

  • What we want to show is whether or not:

  • H1: ≠ 17.09. We do this by assuming that:

  • H0: = 17.09


Example 11 21

Example 11.2…

  • The rejection region is set up so we can reject the null hypothesis when the test statistic is large or when it is small.

  • That is, we set up a two-tail rejection region. The total area in the rejection region must sum to α, so we divide this probability by 2.

stat is “small”

stat is “large”


Example 11 22

Example 11.2…

  • At a 5% significance level (i.e. α = .05), we have

  • α/2 = .025. Thus, z.025 = 1.96 and our rejection region is:

  • z < –1.96 -or- z > 1.96

z

-z.025

+z.025

0


Example 11 23

Example 11.2…

  • From the data, we calculate x̄ = 17.55

  • Using our standardized test statistic: z = (x̄ − μ) / (σ/√n) = (17.55 − 17.09) / (3.87/√100)

  • We find that: z = 1.19

  • Since z = 1.19 is not greater than 1.96, nor less than –1.96, we cannot reject the null hypothesis in favor of H1. That is, “there is insufficient evidence to infer that there is a difference between the bills of AT&T and the competitor.”
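The same two-tail test as a short Python sketch:

```python
from math import sqrt
from scipy.stats import norm

x_bar, mu0, sigma, n = 17.55, 17.09, 3.87, 100
alpha = 0.05

z = (x_bar - mu0) / (sigma / sqrt(n))   # ≈ 1.19
z_crit = norm.ppf(1 - alpha / 2)        # ≈ 1.96
p_value = 2 * norm.sf(abs(z))           # two-tail p-value ≈ 0.23

print(z, z_crit, p_value)
print("reject H0" if abs(z) > z_crit else "cannot reject H0")
```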


Plot power curve2

PLOT POWER CURVE


Summary of one and two tail tests

Summary of One- and Two-Tail Tests…


Inference about a population sigma unknown

Inference About A Population…[SIGMA UNKNOWN]

  • We will develop techniques to estimate and test three population parameters:

  • Population Mean μ

  • Population Variance σ²

  • Population Proportion p


Inference with variance unknown

Inference With Variance Unknown…

  • Previously, we looked at estimating and testing the population mean when the population standard deviation (σ) was known or given:

  • But how often do we know the actual population variance?

  • Instead, we use the Student t-statistic, given by: t = (x̄ − μ) / (s/√n)


Testing when is unknown

Testing μ when σ is unknown…

  • When the population standard deviation is unknown and the population is normal, the test statistic for testing hypotheses about μ is: t = (x̄ − μ) / (s/√n)

  • which is Student t distributed with ν = n–1 degrees of freedom. The confidence interval estimator of μ is given by: x̄ ± t_(α/2) s/√n


Example 12 1

Example 12.1…

  • Will new workers achieve 90% of the level of experienced workers within one week of being hired and trained?

  • Experienced workers can process 500 packages/hour, thus if our conjecture is correct, we expect new workers to be able to process .90(500) = 450 packages per hour.

  • Given the data, is this the case?


Example 12 11

Example 12.1…

IDENTIFY

  • Our objective is to describe the population of the numbers of packages processed in 1 hour by new workers, that is we want to know whether the new workers’ productivity is more than 90% of that of experienced workers. Thus we have:

  • H1: μ > 450

  • Therefore we set our usual null hypothesis to:

  • H0: μ = 450


Example 12 12

Example 12.1…

COMPUTE

  • Our test statistic is: t = (x̄ − μ) / (s/√n)

  • With n=50 data points, we have n–1=49 degrees of freedom. Our hypothesis under question is:

  • H1: μ > 450

  • Our rejection region becomes: t > t_(α,ν) = t_(.05,49) ≈ 1.68

  • Thus we will reject the null hypothesis in favor of the alternative if our calculated test statistic falls in this region.


Example 12 13

Example 12.1…

COMPUTE

  • From the data, we calculate x̄ = 460.38, s = 38.83 and thus: t = (460.38 − 450) / (38.83/√50) = 1.89

  • Since 1.89 > 1.68,

  • we reject H0 in favor of H1, that is, there is sufficient evidence to conclude that the new workers are producing at more than 90% of the average of experienced workers.
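A minimal sketch of the same t-test from the summary statistics (if the raw 50 observations were available, scipy.stats.ttest_1samp could be used instead):

```python
from math import sqrt
from scipy.stats import t

x_bar, s, n, mu0 = 460.38, 38.83, 50, 450
alpha = 0.05
df = n - 1

t_stat = (x_bar - mu0) / (s / sqrt(n))   # ≈ 1.89
t_crit = t.ppf(1 - alpha, df)            # ≈ 1.68 for a right-tail test with 49 df
p_value = t.sf(t_stat, df)               # ≈ 0.03

print(t_stat, t_crit, p_value)
print("reject H0" if t_stat > t_crit else "do not reject H0")
```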


Example 12 2

Example 12.2…

IDENTIFY

  • Can we estimate the return on investment for companies that won quality awards?

  • We are given a random sample of n = 83 such companies. We want to construct a 95% confidence interval for the mean return, i.e. what is: x̄ ± t_(α/2) s/√n ?


Example 12 21

Example 12.2…

COMPUTE

  • From the data, we calculate x̄ and s.

  • For this interval, t_(α/2,ν) = t_(.025,82)

  • and so the 95% interval estimate is: x̄ ± t_(.025,82) s/√83


Check requisite conditions

Check Requisite Conditions…

  • The Student t distribution is robust, which means that if the population is nonnormal, the results of the t-test and confidence interval estimate are still valid provided that the population is “not extremely nonnormal”.

  • To check this requirement, draw a histogram of the data and see how “bell shaped” the resulting figure is. If a histogram is extremely skewed (say, in the case of an exponential distribution), that could be considered “extremely nonnormal” and hence t-statistics would not be valid in this case.


Inference about population variance

Inference About Population Variance…

  • If we are interested in drawing inferences about a population’s variability, the parameter we need to investigate is the population variance σ².

  • The sample variance (s²) is an unbiased, consistent and efficient point estimator for σ². Moreover,

  • the statistic

  χ² = (n – 1)s² / σ²

  • has a chi-squared distribution with n–1 degrees of freedom.


Testing estimating population variance

Testing & Estimating Population Variance

  • Combining this statistic:

  χ² = (n – 1)s² / σ²

  • With the probability statement:

  P( χ²_(1–α/2) < χ² < χ²_(α/2) ) = 1 – α

  • Yields the confidence interval estimator for σ²:

  lower confidence limit: LCL = (n – 1)s² / χ²_(α/2)

  upper confidence limit: UCL = (n – 1)s² / χ²_(1–α/2)
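
A small sketch of this estimator using scipy’s chi-squared quantiles; note that the textbook’s χ²_(α/2) denotes the upper-tail point, which is `chi2.ppf(1 – α/2, ν)` here.

from scipy import stats

def variance_ci(s2, n, conf=0.95):
    """Return (LCL, UCL) for sigma^2 from sample variance s2 and sample size n."""
    alpha = 1 - conf
    nu = n - 1
    chi2_upper = stats.chi2.ppf(1 - alpha / 2, nu)   # textbook chi^2_{alpha/2}
    chi2_lower = stats.chi2.ppf(alpha / 2, nu)       # textbook chi^2_{1-alpha/2}
    return nu * s2 / chi2_upper, nu * s2 / chi2_lower

# Example: the 99% interval for the filling-machine data used below (s^2 = .8088, n = 25)
print(variance_ci(s2=0.8088, n=25, conf=0.99))       # roughly (0.43, 1.96)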


Example 12 3

Example 12.3…

IDENTIFY

  • Consider a container filling machine. Management wants the machine to fill 1 liter (1,000 cc) so that the variance of the fills is less than 1 cc². A random sample of n = 25 one-liter fills was taken. Does the machine perform as it should at the 5% significance level?

  • We want to show that:

  • H1: σ² < 1  (the variance is less than 1 cc²)

  • (so our null hypothesis becomes: H0: σ² = 1). We will use this test statistic:

  χ² = (n – 1)s² / σ²


Example 12 31

Example 12.3…

COMPUTE

  • Since our alternative hypothesis is phrased as:

  • H1: σ² < 1

  • We will reject H0 in favor of H1 if our test statistic falls into this rejection region:

  χ² < χ²_(1–α, n–1) = χ²_(0.95, 24) = 13.85

  • We compute the sample variance to be s² = .8088,

  • and thus our test statistic takes on this value:

  χ² = (n – 1)s² / σ² = 24(.8088) / 1 = 19.41

  • Compare: 19.41 does not fall below 13.85, so it is not in the rejection region.
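
The same comparison, recomputed from the figures on the slide (n = 25, s² = .8088, α = .05):

from scipy import stats

n, s2, sigma2_0, alpha = 25, 0.8088, 1.0, 0.05
nu = n - 1

chi2_stat = nu * s2 / sigma2_0            # = 19.41
chi2_crit = stats.chi2.ppf(alpha, nu)     # lower-tail critical value, ≈ 13.85
p_value = stats.chi2.cdf(chi2_stat, nu)   # lower-tail p-value

print(f"chi2 = {chi2_stat:.2f}, critical value = {chi2_crit:.2f}, p-value = {p_value:.3f}")
# chi2 is not below the critical value, so H0 is not rejected.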


Example 12 4

Example 12.4…

  • As we saw, we cannot reject the null hypothesis in favor of the alternative. That is, there is not enough evidence to infer that the claim is true.

  • Note: the result does not say that the variance is greater than 1, rather it merely states that we are unable to show that the variance is less than 1.

  • We could estimate (at, say, 99% confidence) the variance of the fills…


Example 12 41

Example 12.4…

COMPUTE

  • In order to create a confidence interval estimate of the variance, we need these formulae:

  LCL = (n – 1)s² / χ²_(α/2)  and  UCL = (n – 1)s² / χ²_(1–α/2)

  • we know (n–1)s² = 19.41 from our previous calculation, and we have from Table 5 in Appendix B:

  χ²_(0.005, 24) ≈ 45.6 and χ²_(0.995, 24) ≈ 9.89

  lower confidence limit: LCL = 19.41 / 45.6 ≈ 0.43

  upper confidence limit: UCL = 19.41 / 9.89 ≈ 1.96


Comparing two populations

Comparing Two Populations…

  • Previously we looked at techniques to estimate and test parameters for one population:

  • Population Mean µ, Population Variance σ²

  • We will still consider these parameters when we are looking at two populations, however our interest will now be:

  •  The difference between two means.

  •  The ratio of two variances.


Difference of two means

Difference of Two Means…

  • In order to test and estimate the difference between two population means, we draw random samples from each of two populations. Initially, we will consider independent samples, that is, samples that are completely unrelated to one another.

  • Because we are comparing two population means, we use the statistic x̄1 – x̄2.


Sampling distribution of

Sampling Distribution of x̄1 – x̄2…

  • 1. x̄1 – x̄2 is normally distributed if the original populations are normal –or– approximately normal if the populations are nonnormal and the sample sizes are large (n1, n2 > 30)

  • 2. The expected value of x̄1 – x̄2 is µ1 – µ2

  • 3. The variance of x̄1 – x̄2 is σ1²/n1 + σ2²/n2

  • and the standard error is: √(σ1²/n1 + σ2²/n2)
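
These three properties can be checked with a quick simulation (entirely illustrative; the population parameters below are made up):

import numpy as np

rng = np.random.default_rng(0)
mu1, mu2, sigma1, sigma2 = 10.0, 8.0, 3.0, 4.0   # made-up population parameters
n1, n2, reps = 40, 50, 20_000

x1_bar = rng.normal(mu1, sigma1, (reps, n1)).mean(axis=1)
x2_bar = rng.normal(mu2, sigma2, (reps, n2)).mean(axis=1)
diffs = x1_bar - x2_bar

print("simulated mean of x1_bar - x2_bar:", round(diffs.mean(), 3))   # ≈ mu1 - mu2 = 2
print("theoretical variance:", sigma1**2 / n1 + sigma2**2 / n2)       # = 0.545
print("simulated variance:  ", round(diffs.var(), 3))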


Making inferences about

Making Inferences About µ1 – µ2

  • Since x̄1 – x̄2 is normally distributed if the original populations are normal –or– approximately normal if the populations are nonnormal and the sample sizes are large (n1, n2 > 30), then:

  z = [ (x̄1 – x̄2) – (µ1 – µ2) ] / √(σ1²/n1 + σ2²/n2)

  • is a standard normal (or approximately normal) random variable. We could use this to build test statistics or confidence interval estimators for µ1 – µ2…


Making inferences about1

Making Inferences About µ1 – µ2

  • …except that, in practice, the z statistic is rarely used since the population variances are unknown.

  • Instead we use a t-statistic. We consider two cases for the unknown population variances: when we believe they are equal and conversely when they are not equal.



When are variances equal

When are variances equal?

  • How do we know when the population variances are equal?

  • Since the population variances are unknown, we can’t know for certain whether they’re equal, but we can examine the sample variances and informally judge their relative values to determine whether we can assume that the population variances are equal or not.


Test statistic for equal variances

Test Statistic for µ1 – µ2 (equal variances)

  • Calculate s_p² – the pooled variance estimator – as:

  s_p² = [ (n1 – 1)s1² + (n2 – 1)s2² ] / (n1 + n2 – 2)

  • …and use it here:

  t = [ (x̄1 – x̄2) – (µ1 – µ2) ] / √( s_p² (1/n1 + 1/n2) )

  with ν = n1 + n2 – 2 degrees of freedom


Ci estimator for equal variances

CI Estimator for µ1 – µ2 (equal variances)

  • The confidence interval estimator for µ1 – µ2 when the population variances are equal is given by:

  (x̄1 – x̄2) ± t_(α/2) √( s_p² (1/n1 + 1/n2) )

  with ν = n1 + n2 – 2 degrees of freedom, where s_p² is the pooled variance estimator
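
A sketch implementing the pooled (equal-variance) test statistic and interval from summary statistics; the numbers passed in at the bottom are purely hypothetical, not the textbook data.

import math
from scipy import stats

def pooled_t(x1_bar, s1_sq, n1, x2_bar, s2_sq, n2, alpha=0.05):
    """Equal-variance test statistic and CI for mu1 - mu2 (H0 difference = 0)."""
    nu = n1 + n2 - 2
    sp_sq = ((n1 - 1) * s1_sq + (n2 - 1) * s2_sq) / nu        # pooled variance
    se = math.sqrt(sp_sq * (1 / n1 + 1 / n2))                 # standard error
    t_stat = (x1_bar - x2_bar) / se
    t_crit = stats.t.ppf(1 - alpha / 2, nu)
    ci = (x1_bar - x2_bar - t_crit * se, x1_bar - x2_bar + t_crit * se)
    return t_stat, nu, ci

# Hypothetical summary statistics, for illustration only:
print(pooled_t(x1_bar=6.3, s1_sq=0.9, n1=25, x2_bar=6.0, s2_sq=1.3, n2=25))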


Test statistic for unequal variances

Test Statistic for µ1 – µ2 (unequal variances)

  • The test statistic for µ1 – µ2 when the population variances are unequal is given by:

  t = [ (x̄1 – x̄2) – (µ1 – µ2) ] / √( s1²/n1 + s2²/n2 )

  • Likewise, the confidence interval estimator is:

  (x̄1 – x̄2) ± t_(α/2) √( s1²/n1 + s2²/n2 )

  with degrees of freedom ν = ( s1²/n1 + s2²/n2 )² / [ (s1²/n1)²/(n1–1) + (s2²/n2)²/(n2–1) ]
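
For the unequal-variance case, scipy’s Welch test gives the statistic and a two-tail p-value directly; the two samples below are simulated for illustration only.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
sample1 = rng.normal(10, 2, 30)   # simulated data, for illustration only
sample2 = rng.normal(9, 5, 40)

t_stat, p_value = stats.ttest_ind(sample1, sample2, equal_var=False)   # Welch's t-test
print(f"t = {t_stat:.3f}, two-tail p-value = {p_value:.4f}")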


Example 13 2

Example 13.2…

IDENTIFY

  • Two methods are being tested for assembling office chairs. Assembly times are recorded (25 times for each method). At a 5% significance level, do the assembly times for the two methods differ?

  • That is, H1: µ1 – µ2 ≠ 0

  • Hence, our null hypothesis becomes: H0: µ1 – µ2 = 0

  • Reminder: This is a two-tailed test.


Example 13 21

Example 13.2…

COMPUTE

  • The assembly times for each of the two methods are recorded and preliminary data is prepared…

The sample variances are similar, hence we will assume that the population variances are equal…


Example 13 22

Example 13.2…

COMPUTE

  • Recall, we are doing a two-tailed test, hence the rejection region will be:

  | t | > t_(α/2, ν)

  • The number of degrees of freedom is:

  ν = n1 + n2 – 2 = 25 + 25 – 2 = 48

  • Hence our critical values of t (and our rejection region) become:

  t < –2.011 or t > 2.011  (t_(0.025, 48) ≈ 2.011)


Example 13 23

Example 13.2…

COMPUTE

  • In order to calculate our t-statistic, we first need to calculate the pooled variance estimator, and then the t-statistic itself…


Example 13 24

Example 13.2…

INTERPRET

  • Since our calculated t-statistic does not fall into the rejection region, we cannot reject H0 in favor of H1, that is, there is not sufficient evidence to infer that the mean assembly times differ.


Example 13 25

Example 13.2…

INTERPRET

  • Excel, of course, also provides us with this information: compare the calculated t statistic with the critical value, or simply look at the p-value.


Confidence interval

Confidence Interval…

  • We can compute a 95% confidence interval estimate for the difference in mean assembly times as:

  • That is, we estimate that the mean difference between the two assembly methods lies between –.36 and .96 minutes. Note: zero is included in this confidence interval, which is consistent with our inability to reject H0…


Matched pairs experiment

Matched Pairs Experiment…

  • Previously when comparing two populations, we examined independent samples.

  • If, however, an observation in one sample is matched with an observation in a second sample, this is called a matched pairs experiment.

  • To help understand this concept, let’s consider Example 13.4.


Identifying factors

Identifying Factors…

  • Factors that identify the t-test and estimator of :


Inference about the ratio of two variances

Inference about the ratio of two variances

  • So far we’ve looked at comparing measures of central location, namely the mean of two populations.

  • When looking at two population variances, we consider the ratio of the variances, i.e. the parameter of interest to us is σ1²/σ2².

  • The sampling statistic F = [ s1²/σ1² ] / [ s2²/σ2² ] is F distributed with

  • ν1 = n1 – 1 and ν2 = n2 – 1 degrees of freedom.


Inference about the ratio of two variances1

Inference about the ratio of two variances

  • Our null hypothesis is always:

  • H0: σ1²/σ2² = 1

  • (i.e. the variances of the two populations will be equal, hence their ratio will be one)

  • Therefore, our statistic simplifies to:

  F = s1²/s2²

  • with df1 = n1 – 1 and df2 = n2 – 1 degrees of freedom
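
A short sketch of the resulting two-tail F test from summary statistics; the variances and sample sizes at the bottom are hypothetical.

from scipy import stats

def f_test_two_tail(s1_sq, n1, s2_sq, n2, alpha=0.05):
    """Two-tail F test of H0: sigma1^2 / sigma2^2 = 1 from sample variances."""
    f_stat = s1_sq / s2_sq
    df1, df2 = n1 - 1, n2 - 1
    f_lower = stats.f.ppf(alpha / 2, df1, df2)        # lower critical value
    f_upper = stats.f.ppf(1 - alpha / 2, df1, df2)    # upper critical value
    p_value = 2 * min(stats.f.cdf(f_stat, df1, df2),
                      stats.f.sf(f_stat, df1, df2))   # two-tail p-value
    return f_stat, (f_lower, f_upper), p_value

# Hypothetical sample variances, for illustration only:
print(f_test_two_tail(s1_sq=4.0, n1=31, s2_sq=9.0, n2=41))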


Example 13 6

Example 13.6…

IDENTIFY

  • In example 13.1, we looked at the variances of the samples of people who consumed high fiber cereal and those who did not and assumed they were not equal. We can use the ideas just developed to test if this is in fact the case.

  • We want to show: H1: σ1²/σ2² ≠ 1

  • (the variances are not equal to each other)

  • Hence we have our null hypothesis: H0: σ1²/σ2² = 1


Example 13 61

Example 13.6…

CALCULATE

  • Since our research hypothesis is: H1: σ1²/σ2² ≠ 1

  • We are doing a two-tailed test, and our rejection region is:

  F < F_(1–α/2, ν1, ν2) = .58  or  F > F_(α/2, ν1, ν2) = 1.61


Example 13 62

Example 13.6…

CALCULATE

  • Our test statistic is F = s1²/s2². Comparing it with the critical values .58 and 1.61, it falls in the rejection region.

  • Hence there is sufficient evidence to reject the null hypothesis in favor of the alternative; that is, there is a difference in the variance between the two populations.


Example 13 63

Example 13.6…

INTERPRET

  • We may need to work with the Excel output before drawing conclusions…

  • Our research hypothesis H1: σ1²/σ2² ≠ 1 requires two-tail testing, but Excel only gives us values for one-tail testing…

  • If we double the one-tail p-value Excel gives us, we have the p-value of the test we’re conducting (i.e. 2 × 0.0004 = 0.0008). Refer to the text and CD Appendices for more detail.
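
The same doubling can be done directly from an F statistic and its degrees of freedom; the numbers below are arbitrary placeholders, not the values from this example.

from scipy import stats

f_stat, df1, df2 = 2.5, 20, 24    # arbitrary placeholder values
one_tail = min(stats.f.cdf(f_stat, df1, df2), stats.f.sf(f_stat, df1, df2))
two_tail = 2 * one_tail           # the "double the one-tail p-value" rule

print(f"one-tail p-value = {one_tail:.4f}, two-tail p-value = {two_tail:.4f}")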

