
Statistics # 2

The central limit theorem and sampling distributions. Abraham de Moivre, French Huguenot refugee in London, originator of the Central Limit Theorem.


Presentation Transcript


  1. Statistics # 2: The central limit theorem and sampling distributions. Abraham de Moivre, French Huguenot refugee in London, originator of the Central Limit Theorem.

  2. The Central Limit Theorem. Our evaluation of a t score for statistical significance depends on sample size:
  • Larger samples yield more "normal", tighter distributions (less error variance).
  • With smaller samples we use more conservative assumptions about the sampling distribution.

  3. The normal distribution. [Figure: the normal curve, segmented into t units from -3 to +3.] Here is the sampling distribution. This is the normal distribution, segmented into t units (similar to Z units or standard deviations). Each t unit (e.g., between t = 0 and t = 1) represents a fixed percentage of cases: 34.13% of cases between 0 and ±1, 13.59% between ±1 and ±2, and roughly 2.3% beyond ±2 in each tail. Central Limit Theorem: our assumptions about t values have to change depending upon the size of our sample.
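  These segment percentages can be checked directly from the standard normal CDF; here is a minimal sketch using scipy.stats.norm (not part of the original slides):

```python
# Reproduce the fixed segment percentages of the normal curve.
from scipy.stats import norm

# Between 0 and 1 standard units
print(norm.cdf(1) - norm.cdf(0))   # ~0.3413 -> 34.13% of cases
# Between 1 and 2 standard units
print(norm.cdf(2) - norm.cdf(1))   # ~0.1359 -> 13.59% of cases
# Beyond 2 standard units (one tail)
print(1 - norm.cdf(2))             # ~0.0228 -> roughly 2.3% of cases
```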

  4. The Central Limit Theorem: small samples. [Figure: a handful of individual scores scattered widely around the true population M under the "true" normal distribution; axis runs from smaller M to larger M.]
  • With few scores in the sample, a few extreme or "deviant" values have a large effect.
  • The distribution is "flat", or has high variance.

  5. The Central Limit Theorem: larger samples. [Figure: more scores, clustered more closely around the true population M; axis runs from smaller M to larger M.]
  • With more scores, the effect of extreme or "deviant" values is offset by other values.
  • The distribution has less variance and is more normal.

  6. The Central Limit Theorem: large samples. [Figure: many scores densely packed around the true population M; axis runs from smaller M to larger M.]
  • With many scores, "deviant" values are completely offset by other values.
  • The distribution is normal, with low(er) variance.
  • The sampling distribution better approximates the population distribution.
  A quincunx (Galton's "bean machine") demonstration is at http://www.mathsisfun.com/data/quincunx.html
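  The quincunx is easy to imitate in a few lines. A rough sketch (assumptions: 12 rows of pegs, each ball bouncing left or right with equal probability), showing the bell-shaped pile the CLT predicts:

```python
# Quincunx sketch: a ball's final bin is its number of rightward
# bounces, i.e. a Binomial(rows, 0.5) draw.
import numpy as np

rng = np.random.default_rng(0)
rows, balls = 12, 10_000
bins = rng.binomial(rows, 0.5, size=balls)      # final bin per ball
counts = np.bincount(bins, minlength=rows + 1)  # balls per bin
for k, c in enumerate(counts):
    print(f"bin {k:2d}: {'#' * (c // 100)}")    # crude text histogram
```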

  7. Central limit theorem & evaluating t scores. The same logic applies to the samples we use to test hypotheses.
  • If the groups are small, the M score for each group reflects a lot of error variance.
  • This increases the likelihood that error variance, not an experimental effect, led to differences between Ms.
  • Since smaller samples (lower df) = more variance, t must be larger for us to consider it statistically significant (< 5% likely to have occurred by chance alone).
  • We evaluate t vis-à-vis a sampling distribution based on the df for the experiment.
  • The critical value for t at p < .05 thus goes up or down depending upon sample size (df).

  8. The Central Limit Theorem: small samples. Central Limit Theorem applied to a sampling distribution: how well do small samples reflect the "true" population? Imagine we calculate the M for each of 50 samples, each n = 10. [Figure: 50 sample Ms (each n = 10) spread widely around the M of sample Ms, which approximates the population M; axis runs from smaller M to larger M.] Since small samples have a lot of error, a distribution of small-sample Ms is relatively "flat" (a lot of variance); many sample Ms may be far from the M of sample Ms.

  9. The Central Limit Theorem: larger samples. Central Limit Theorem & sampling distributions, larger samples. Now we collect another 50 samples, but each n = 25. [Figure: 50 sample Ms (each n = 25) clustered more tightly around the "true" M of sample Ms; axis runs from smaller M to larger M.] The M for each sample has less error (since it has a larger n), so the distribution will be "cleaner" and more normal; it is less likely that any individual sample M would be far from the M of sample Ms.

  10. The Central Limit Theorem: large samples. Central Limit Theorem & sampling distributions, large samples. Our third set of samples are each fairly large, say n = 50. [Figure: 50 sample Ms (each n = 50) packed closely around the "true" M of sample Ms; axis runs from smaller M to larger M.] Since each individual sample has low error, a distribution of large-sample Ms will have low variance; it is unlikely for a sample M to far exceed the M of the sample Ms by chance alone.
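  The shrinking spread across slides 8-10 can be simulated directly. A minimal sketch (the population M = 100 and SD = 15 are illustrative assumptions, not from the slides):

```python
# Draw 50 samples at each n and watch the SD of the sample Ms shrink.
import numpy as np

rng = np.random.default_rng(1)
for n in (10, 25, 50):
    sample_means = rng.normal(100, 15, size=(50, n)).mean(axis=1)
    print(f"n = {n:2d}: SD of sample Ms = {sample_means.std(ddof=1):.2f}")
# Theory: the standard error is SD / sqrt(n), so larger n -> tighter Ms.
```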

  11. Central limit theorem: critical values.
  • Central limit theorem: when df > 120 we assume a perfectly normal distribution (here Z = t; no compensation for sample size). With smaller samples, we assume more error in each group. When df < 120 we use t to estimate a sampling distribution based on the total df (i.e., the ns of the groups being sampled).
  • Alpha [α]: the probability criterion for "statistical significance," typically p < .05.
  • Critical value: the cut-off point for alpha on the distribution. With df > 120 the critical value for p < .05 = ±1.98 (Z = t). With df < 120 we adjust the critical value based on the sampling distribution we use. As df goes down we assume a more conservative sampling distribution, and use a larger critical value for p < .05.

  12. Sampling distributions and critical values. [Figure: normal curve over Z scores (standard deviation units) from -2 to +2, with 2.4% of cases above +1.98 and 2.4% of cases below -1.98.]
  • The critical value for p < .05 = 1.98; 95% of cases (critical ratios, differences between Ms) fall between -1.98 and +1.98. This sampling distribution assumes n > 120; the following graphs show what happens as sample size decreases.
  • Z or t(120) > ±1.98 will occur by chance < 5% of the time.
  • A distribution with n > 120 is "normal".

  13. Sampling distributions: critical values when df = 18. [Figure: flatter curve over Z scores from -2 to +2, with 2.4% of cases above +2.10 and 2.4% of cases below -2.10.] Here group sizes are small: Group 1 n = 10, Group 2 n = 10; df = (10-1) + (10-1) = 18.
  • With a smaller df we estimate a flatter, more "errorful" curve.
  • At df = 18 the critical value for p < .05 = 2.10, a more conservative test.

  14. Critical values, n = 10. [Figure: flat curve over Z scores from -2 to +2, with 2.4% of cases above +2.30 and 2.4% of cases below -2.30.] This sampling distribution assumes 10 participants: Group 1 n = 5, Group 2 n = 5; df = (5-1) + (5-1) = 8. With only 8 df we estimate a flat, conservative curve. Here the critical value for p < .05 = 2.30.

  15. Central Limit Theorem: variations in sampling distributions. [Figure: three overlaid curves over Z scores from -2 to +2, with 2.4% of cases beyond the critical value in each tail: N > 120, t > ±1.98, p < .05; df = 18, t > ±2.10, p < .05; df = 8, t > ±2.30, p < .05.]
  • As sample sizes (df) go down, the estimated sampling distributions of t scores based on them have more variance, giving a "flatter" distribution.
  • This increases the critical value for p < .05.
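  The three critical values on this slide can be reproduced with scipy's t distribution; a quick sketch (two-tailed alpha = .05, so we take the 97.5th percentile):

```python
# Critical t rises as df falls; t converges to Z for large df.
from scipy.stats import norm, t

for df in (8, 18, 120):
    print(f"df = {df:3d}: critical t = {t.ppf(0.975, df):.2f}")
# df =   8: critical t = 2.31
# df =  18: critical t = 2.10
# df = 120: critical t = 1.98
print(f"normal (Z): {norm.ppf(0.975):.2f}")   # 1.96 for comparison
```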

  16. A t-table contains:
  • Degrees of freedom (df): the size of the research samples, (ngroup1 - 1) + (ngroup2 - 1).
  • Alpha levels: the % likelihood of a t occurring by chance.
  • Critical values: the value t must exceed to be statistically significant [not occurring by chance] at a given alpha.

  Critical values of t, by alpha level:
  df      0.10    0.05    0.02    0.01    0.001
  8      1.860   2.306   2.896   3.355   5.041
  9      1.833   2.262   2.821   3.250   4.781
  10     1.812   2.228   2.764   3.169   4.587
  11     1.796   2.201   2.718   3.106   4.437
  12     1.782   2.179   2.681   3.055   4.318
  13     1.771   2.160   2.650   3.012   4.221
  14     1.761   2.145   2.624   2.977   4.140
  15     1.753   2.131   2.602   2.947   4.073
  20     1.725   2.086   2.528   2.845   3.850
  25     1.708   2.060   2.485   2.787   3.725
  30     1.697   2.042   2.457   2.750   3.646
  40     1.684   2.021   2.423   2.704   3.551
  60     1.671   2.000   2.390   2.660   3.460
  120    1.658   1.980   2.358   2.617   3.373
  ∞      1.645   1.960   2.326   2.576   3.291

  17. Critical values of t (2-tailed test). [The t-table from slide 16 again, with three cells highlighted: alpha = .05, df = 10 (2.228); alpha = .02, df = 13 (2.650); alpha = .05, df = 120 (1.980).]
  • The critical value of t is read across the row for the df in your study, to the column for your alpha.
  • p < .05 is the most typical alpha.
  • A lower alpha (.02, .001; a more conservative test) requires a higher critical value.
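  The whole two-tailed table can be generated rather than memorized; a minimal sketch using scipy.stats.t, with the alphas and dfs taken from the slide:

```python
# Regenerate the t-table: two-tailed, so use the 1 - alpha/2 percentile.
from scipy.stats import t

alphas = (0.10, 0.05, 0.02, 0.01, 0.001)
dfs = (8, 9, 10, 11, 12, 13, 14, 15, 20, 25, 30, 40, 60, 120)
print("df   " + "".join(f"{a:>8}" for a in alphas))
for df in dfs:
    row = "".join(f"{t.ppf(1 - a / 2, df):8.3f}" for a in alphas)
    print(f"{df:<5}{row}")
```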

  18. Determining if a result is "statistically significant". Assumptions:
  • Null hypothesis: the difference between Ms [or the correlation, chi square, etc.] differs from 0 by chance alone.
  • Statistical question: is the effect in your experiment different from 0 by more than chance alone?
  • "More than chance alone" means occurring by chance less than 5% of the time [p < .05].
  Steps:
  • Derive the t value for the difference between groups.

  19. Steps cont.: statistical significance.
  • Figure out what distribution to compare your t value to, using the degrees of freedom (df): df = (ngroup1 - 1) + (ngroup2 - 1).
  • The Central Limit Theorem tells us to assume there is more error (a "flatter" distribution) as df goes down.
  • Use the usual criterion [alpha value] for "statistical significance" of p < .05 (unless you have good reason to use another).
  • Find the value in the t-table that corresponds to your df at your alpha. This is the critical value that your t must exceed to be considered "statistically significant".
  • Compare your t to the critical value, using the absolute value of t. (A code sketch of these steps follows.)
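  Here is a minimal sketch of those steps as one function. The name two_sample_t is illustrative, not from the slides, and it assumes the pooled-variance independent-samples t:

```python
# Compute t for two groups, find df, and compare |t| to the critical value.
import numpy as np
from scipy.stats import t as t_dist

def two_sample_t(group1, group2, alpha=0.05):
    g1, g2 = np.asarray(group1, float), np.asarray(group2, float)
    n1, n2 = len(g1), len(g2)
    df = (n1 - 1) + (n2 - 1)                      # degrees of freedom
    # Pooled variance estimate across both groups
    sp2 = ((n1 - 1) * g1.var(ddof=1) + (n2 - 1) * g2.var(ddof=1)) / df
    t_val = (g1.mean() - g2.mean()) / np.sqrt(sp2 * (1 / n1 + 1 / n2))
    crit = t_dist.ppf(1 - alpha / 2, df)          # two-tailed critical value
    return t_val, df, crit, abs(t_val) > crit     # last item: significant?
```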

  20. Testing t. [The t-table from slide 16 again, now including the row for df = 18: 1.734, 2.101, 2.552, 2.878, 3.922.]
  • Use p < .05 (unless you want to be more conservative by using a lower alpha, i.e., a higher critical value).
  • Look up your df to see what sampling distribution to compare your results to. With n = 10 per group, df = (10-1) + (10-1) = 18.
  • Compare your t to the critical value from the table (at df = 18 and alpha = .05, that is 2.101).
  • If the absolute value of t > the critical value, your effect is statistically significant at p < .05.
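  A usage sketch for this slide's case, reusing the two_sample_t function from the slide-19 example (the two groups are simulated, hypothetical data):

```python
# Two groups of n = 10 each -> df = 18, critical value 2.101.
import numpy as np

rng = np.random.default_rng(2)
group1 = rng.normal(105, 15, 10)   # assumed treatment scores
group2 = rng.normal(100, 15, 10)   # assumed control scores
t_val, df, crit, sig = two_sample_t(group1, group2)
print(f"t({df}) = {t_val:.2f}, critical value = {crit:.3f}, "
      f"significant: {sig}")
```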
