
# The t-test - PowerPoint PPT Presentation


### The t-test

• How are the distributions of z and t related?

• Given the null hypothesis and its sampling distribution, construct a rejection region. Draw a picture to illustrate.

• What is the standard error of the difference between means? What are the factors that influence its size?

• What are the main uses of the t-test?

• Give a concrete example of the use of the {one sample, independent samples, dependent samples} t-test. State why the particular test is the right one to choose.

• What is the importance of variance accounted for?

• For large samples (N > 100) we can use z.

• Suppose we test H0: μ = μ0 against a two-tailed alternative.

• Then z = (M − μ0)/(σ/√N), where σ/√N is the standard error of the mean.

• If |z| ≥ 1.96, reject H0 at α = .05.

The t Distribution

We use t when the population variance is unknown (the usual case) and sample size is small (N<100, the usual case). If you use a stat package for testing hypotheses about means, you will use t.

The t distribution is a short, fat relative of the normal. The shape of t depends on its df. As N becomes infinitely large, t becomes normal.

For the t distribution, degrees of freedom are always a simple function of the sample size, e.g., (N-1).

One way of explaining df: if we know the total (or mean) and all but one of the N scores, the last score is not free to vary; it is fixed by the other scores. With 4+3+2+X = 10, X must be 1. That leaves N−1 scores free to vary.

With a small sample size, we compute the same numbers as we did for z, but we compare them to the t distribution instead of the z distribution.

tcrit = t(.05, 24) = 2.064 (c.f. z = 1.96).

Observed t = 1 < 2.064, n.s.

Interval = M ± tcrit × SE of the mean.

The interval is about 9 to 13 and contains 10, so n.s.
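The small-sample test and confidence interval above can be sketched in Python. The scores here are made up for illustration; only the hypothesized mean of 10 and the two-tailed .05 criterion come from the slides.

```python
import math
from scipy import stats

# Hypothetical sample (made-up data); H0: mu = 10
scores = [12, 9, 11, 14, 8, 13, 10, 12]
n = len(scores)
mean = sum(scores) / n
sd = math.sqrt(sum((x - mean) ** 2 for x in scores) / (n - 1))  # sample SD
se = sd / math.sqrt(n)                 # standard error of the mean
t = (mean - 10) / se                   # same numerator as z, compared to t

t_crit = stats.t.ppf(0.975, df=n - 1)  # two-tailed .05 critical value
print(f"t = {t:.3f}, tcrit = {t_crit:.3f}")

# Confidence interval: mean +/- tcrit * SE; n.s. if it contains 10
ci = (mean - t_crit * se, mean + t_crit * se)
print(f"CI: {ci[0]:.2f} to {ci[1]:.2f}")
```

With these made-up scores the interval contains 10, so the result is n.s., mirroring the slide's example.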

• How are the distributions of z and t related?

• Given the null hypothesis and its sampling distribution, construct a rejection region. Draw a picture to illustrate.

• Most studies have at least 2 groups (e.g., M vs. F, Experimental vs. Control) [1- vs. 2-sample].

• If we want to know the difference in population means, our best guess is the difference in sample means.

• Unbiased: E(M1 − M2) = μ1 − μ2.

• Variance of the difference: σ²(M1 − M2) = σ²M1 + σ²M2 = σ1²/N1 + σ2²/N2.

• Standard error: SE(M1 − M2) = √(σ1²/N1 + σ2²/N2).

• We can estimate the standard error of the difference between means.

• For large samples, can use z

• Looks just like z: t = (M1 − M2)/SE(M1 − M2).

• df = (N1 − 1) + (N2 − 1) = N1 + N2 − 2.

• If the population SDs are equal, the estimate is based on a pooled variance.

Pooled variance estimate is a weighted average: s²pooled = [(N1 − 1)s1² + (N2 − 1)s2²]/(N1 + N2 − 2).

Pooled standard error of the difference (computed): SE(M1 − M2) = √[s²pooled(1/N1 + 1/N2)].

Independent Samples t (2)

tcrit = t(.05,10)=2.23
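The pooled computation can be sketched with two hypothetical groups of n = 6 each, chosen so df = 10 and tcrit = 2.23 as on the slide. The group scores are made up for illustration.

```python
import math
from scipy import stats

# Two hypothetical groups of n = 6 each (made-up data), so df = 6 + 6 - 2 = 10
g1 = [5, 7, 6, 8, 9, 7]
g2 = [4, 5, 6, 5, 7, 3]
n1, n2 = len(g1), len(g2)
m1, m2 = sum(g1) / n1, sum(g2) / n2
v1 = sum((x - m1) ** 2 for x in g1) / (n1 - 1)  # sample variance, group 1
v2 = sum((x - m2) ** 2 for x in g2) / (n2 - 1)  # sample variance, group 2

# Pooled variance: weighted average of the two sample variances
v_pool = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
# Pooled standard error of the difference between means
se_diff = math.sqrt(v_pool * (1 / n1 + 1 / n2))
t = (m1 - m2) / se_diff
t_crit = stats.t.ppf(0.975, df=n1 + n2 - 2)  # = 2.23 for df = 10
print(f"t = {t:.3f}, tcrit = {t_crit:.2f}")
```

The hand computation agrees with `scipy.stats.ttest_ind`, which pools variances by default (`equal_var=True`).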

• The t-test is based on assumptions of normality and homogeneity of variance.

• You can test for both these (make sure you learn the SAS methods).

• As long as the samples in each group are large and nearly equal in size, the t-test is robust, that is, still good even though its assumptions are not met.

• What is the standard error of the difference between means? What are the factors that influence its size?

• What are the assumptions of the t-test?

• Scientific purpose is to predict or explain variation.

• Our variable Y has some variance that we would like to account for. There are statistical indexes of how well our IV accounts for variance in the DV. These are measures of how strongly or closely associated our IVs and DVs are.

• Variance accounted for: how much of the variance in Y is associated with the IV?

Compare the first (left-most) pair of curves with the pair in the middle and the pair on the right. In each case, how much of the variance in Y is associated with the IV, group membership? More in the second comparison: as the mean difference gets bigger, so does the variance accounted for.
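For the two-group case, a standard index of variance accounted for is the squared point-biserial correlation, which can be computed directly from t and df. The sketch below uses hypothetical t values to show the point made above: a bigger mean difference (bigger t at the same df) means more variance accounted for.

```python
# Squared point-biserial correlation: proportion of variance in Y
# associated with group membership, computed from t and its df.
def r_squared(t, df):
    return t ** 2 / (t ** 2 + df)

# Hypothetical values for illustration
print(r_squared(1.0, 10))  # small mean difference -> small r squared
print(r_squared(4.0, 10))  # large mean difference -> large r squared
```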

• Power increases with association (effect size) and sample size.

• Effect size: d = (μ1 − μ2)/σ.

• Significance = effect size × sample size.

Increasing sample size does not increase effect size (strength of association). It decreases the standard error so power is greater. Widely misunderstood.
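The claim that a larger N raises power without changing effect size can be checked by simulation. This is a sketch under assumed values: a true effect size of d = 0.5 and normal populations, neither of which comes from the slides.

```python
import random
from scipy import stats

random.seed(1)

def power_sim(n_per_group, d=0.5, alpha=0.05, reps=2000):
    """Fraction of simulated two-group experiments that reject H0."""
    hits = 0
    for _ in range(reps):
        g1 = [random.gauss(0, 1) for _ in range(n_per_group)]
        g2 = [random.gauss(d, 1) for _ in range(n_per_group)]  # true diff = d
        if stats.ttest_ind(g1, g2).pvalue < alpha:
            hits += 1
    return hits / reps

# Same effect size; larger N -> smaller standard error -> more power
print(power_sim(20))   # roughly .3
print(power_sim(100))  # roughly .9
```

The effect size is identical in both runs; only the standard error shrinks, so rejections become more frequent.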

• If the null is false, the statistic is no longer distributed as t, but rather as noncentral t. This makes power computation difficult.

• Hays (p. 334) presents an alternative method based on strength of association, that is, on the proportion of variance accounted for.

• Based on Hays’s method, we find:

Suppose alpha is .01, power desired is .90, and variance accounted for is .25. What is n per group? It's 24 (23?) per group, or 48 altogether. (Hays advises adding one more person per group: "it's wise.")

Same problem, but variance accounted for is .10: need 68 per group.

Same again, but .15: need 43 per group. What if alpha = .05?
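The slide's sample sizes can be approximated without Hays's tables: convert variance accounted for (r²) to an effect size via d = 2√(r²/(1 − r²)), then apply the large-sample formula n per group ≈ 2(zα/2 + zβ)²/d². This is a normal-approximation sketch, not Hays's exact noncentral-t method; the exact answers run a person or so higher, which is consistent with the "24 (23?)" on the slide.

```python
import math
from scipy.stats import norm

def n_per_group(r2, alpha=0.01, power=0.90):
    """Normal-approximation n per group for a two-sided independent t."""
    d = 2 * math.sqrt(r2 / (1 - r2))   # effect size from variance accounted for
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return math.ceil(2 * z ** 2 / d ** 2)

print(n_per_group(0.25))  # 23; slide says 24 (23?)
print(n_per_group(0.10))  # 67; slide says 68
print(n_per_group(0.15))  # 43, matching the slide
```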

Dependent t (1)

Observations come in pairs: brother and sister, or a repeated measure on the same person.

The problem is solved by finding the differences between pairs, Di = yi1 − yi2, then treating the Di as a one-sample problem.

df = N(pairs) − 1

df = 2; n.s.
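The paired procedure reduces to a one-sample t on the difference scores, as the slide describes. A sketch with made-up pairs (e.g., pre/post measures on the same people):

```python
import math
from scipy import stats

# Hypothetical paired observations (made-up data)
y1 = [10, 12, 9, 14, 11]
y2 = [8, 11, 9, 12, 10]

# Reduce to a one-sample problem on the differences D_i = y_i1 - y_i2
d = [a - b for a, b in zip(y1, y2)]
n = len(d)                                  # number of pairs
mean_d = sum(d) / n
sd_d = math.sqrt(sum((x - mean_d) ** 2 for x in d) / (n - 1))
t = mean_d / (sd_d / math.sqrt(n))          # df = N(pairs) - 1 = 4
print(f"t = {t:.3f}")
```

This hand computation matches `scipy.stats.ttest_rel`, which performs the same reduction internally.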

• What are the main uses of the t-test?

• Give a concrete example of the use of the {one sample, independent samples, dependent samples} t-test. State why the particular test is the right one to choose.

• What is the importance of variance accounted for?