
Psyc 235: Introduction to Statistics


Presentation Transcript


  1. Psyc 235: Introduction to Statistics DON’T FORGET TO SIGN IN FOR CREDIT! http://www.psych.uiuc.edu/~jrfinley/p235/

  2. Announcements (1of2) • Early Informal Feedback • https://webtools.uiuc.edu/formBuilder/Secure?id=9748379 • Open until Sat March 15th • Special Lecture Thurs March 13th: Conditional Probability (incl. Law of Total Prob., Bayes’ Theorem) • Mandatory for invited students • Anyone can come • No OH; Go to lab for Qs/help.

  3. Announcements (2of2) • Target Dates: STAY ON TARGET! • You should be finishing the Distributions slice • VoD: “5. Normal Calculations,” “17. Binomial Distributions,” and “18. The Sample Mean and Control Charts” • Quiz 3: Thurs-Fri March 13th-14th

  4. [Diagram: a Population (mean μ, std. dev. σ), samples of size n drawn from it, and the resulting Sampling Distribution (of the mean). The sample statistic is a random variable, and the std. dev. of the sampling distribution is the “Standard Error.”]

  5. Shape of the Sampling Distribution? • If population distribution is normal: sampling distribution is normal (for any n) • If sample size (n) is large: sampling distribution approaches normal (Central Limit Theorem) • As sample size (n) increases: sampling distribution becomes more normal, and its variance (and thus std. dev.) decreases
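A minimal sketch (not from the slides) of the Central Limit Theorem at work: repeatedly draw samples from a clearly non-normal population and watch the sampling distribution of the mean become more normal and less variable as n grows. The exponential population and the sample sizes here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
pop_sd = 1.0                                   # exponential(scale=1) has mean 1, sd 1

for n in (2, 10, 50):
    # 10,000 samples of size n; each row is one sample
    samples = rng.exponential(scale=1.0, size=(10_000, n))
    means = samples.mean(axis=1)               # the sampling distribution of the mean
    print(f"n={n:3d}  sd of sample means={means.std(ddof=1):.3f}  "
          f"theory sigma/sqrt(n)={pop_sd / np.sqrt(n):.3f}")
```

The simulated standard deviations track σ/√n, and a histogram of `means` looks increasingly bell-shaped as n increases.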

  6. Great, Normal Distributions! • Can now calculate probabilities (e.g., that a sample mean falls above/below some value) • Just convert values of interest to z scores (standard normal distribution) • And then look up probabilities for that z score in ALEKS (calculator) • Or vice versa…
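Where the slides use the ALEKS calculator for the lookup, the same convert-then-look-up step can be sketched with scipy's standard normal; the population numbers below are made up for illustration.

```python
from scipy.stats import norm

mu, sigma, n = 100, 15, 25          # hypothetical population mean, sd, and sample size
se = sigma / n ** 0.5               # std. dev. of the sampling distribution of the mean
x_bar = 105                         # value of interest (a sample mean)

z = (x_bar - mu) / se               # convert the value of interest to a z score
print("P(sample mean > 105) =", 1 - norm.cdf(z))

# "Or vice versa": from a probability back to a value
z_crit = norm.ppf(0.95)             # z score cutting off the top 5%
print("95th percentile of sample means:", mu + z_crit * se)
```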

  7. So far… • We’ve been doing things like: • Given a certain population, what’s prob of getting a sample statistic above/below a certain value? • Population--->Sample • How can we shift to … • Using our Sample to reason about the POPULATION? • Sample--->Population

  8. INFERENTIAL STATISTICS! • Estimating a population parameter (e.g., the mean of the pop.: μ) • How to do it: • Take a random sample from the pop. • Calculate sample statistic (e.g., the mean of the sample: x̄) • That’s your estimate. • Class dismissed.

  9. No, wait! • The sample statistic is a point estimate of the population parameter μ • It could be off, by a little, or by a lot!

  10. [Diagram: Population (mean μ) and the Sampling Distribution (of the mean) for samples of size n.] We only have one sample statistic, and we don’t know where in the sampling distribution it falls.

  11. Interval Estimate • Point estimate (sample statistic) gives us no idea of how close we might be to the true population parameter. • We want to be able to specify some interval around our point estimate that will have a high prob. of containing the true pop parameter.

  12. Confidence Interval • An interval around the sample statistic that would capture the true population parameter a certain percent of the time (e.g., 95%) in the long run. • (i.e., over all samples of the same size, from the same population)

  13. [Diagram: one sample mean plotted against the true population mean.] This is the mean from one sample. Let’s put a 90% Confidence Interval around it. Note: the true Population Parameter is constant! Note that this particular interval captures the true mean! Let’s consider other possible samples (of the SAME SIZE).

  14. The mean from another possible sample. This one captures the true mean too. So does this one. And this one. This one too. Yep. This interval misses the true mean! But this one’s alright. …

  15. A 90% Confidence Interval means that for 90% of all possible samples (of the same size), that interval around the sample statistic will capture the true population parameter (e.g., mean). Only sample statistics in the outer 10% of the sampling distribution have confidence intervals that “miss” the true population parameter. …

  16. … But, remember…

  17. But, remember… all that we have is our sample (sample size = n).

  18. Still, a Confidence Interval is more useful in estimating the population parameter than is a mere point estimate alone. So, how do we make ‘em?

  19. CONFIDENCE INTERVAL: a (1 − α)% confidence interval for a population parameter is: point estimate ± (critical value) · (std. dev. of point estimate). The “±” term is the Margin of Error, and the std. dev. of the point estimate is the std. dev. of the sampling distribution of the sample statistic (aka the “Standard Error”); ex: for the mean, x̄ ± (critical value) · σ/√n. P(C.I. encloses true population parameter) = 1 − α. Note: α = P(Confidence Interval misses true population parameter), i.e., the proportion of times such a CI misses the population parameter.

  20. Decision Tree for Confidence Intervals (choosing the Critical Score): • Is the Population Standard Deviation (σ) known? • Yes: if the Pop. Distribution is normal, or n is large (CLT) → z-score (standard normal distribution); otherwise → can’t do it • No: if the Pop. Distribution is normal, or n is large (CLT) → t-score (t distribution); otherwise → can’t do it • Note: ALEKS…
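A minimal sketch of that decision tree as code (the function and argument names are invented for illustration; the branching mirrors the slide).

```python
def critical_score(sigma_known: bool, pop_normal: bool, n_large: bool) -> str:
    """Which critical score a CI for the mean should use, per the decision tree."""
    if not (pop_normal or n_large):
        return "can't do it"        # small sample from a non-normal population
    if sigma_known:
        return "z-score (standard normal distribution)"
    return "t-score (t distribution)"

print(critical_score(sigma_known=True, pop_normal=True, n_large=False))   # z-score
print(critical_score(sigma_known=False, pop_normal=False, n_large=True))  # t-score
```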

  21. C.I. using Standard Normal Distribution (when σ is known), for the Population Mean. First, choose an α level. For ex., α = .05 gives us a 95% confidence interval. The interval: point estimate ± (critical value) · (std. dev. of point estimate), where (critical value) · (std. dev. of point estimate) is the Margin of Error.

  22. The same formula written for the mean: x̄ ± (critical value) · (std. dev. of point estimate).

  23. The std. dev. of the point estimate (the mean) is σ/√n, so the interval is x̄ ± (critical value) · σ/√n.

  24. The critical value, z(α/2), is a lookup value (ALEKS calculator, Z tables): x̄ ± z(α/2) · σ/√n.

  25. Handy Zs (Thanks, Standard Normal Distribution!)
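Those “handy” z values can be looked up directly from the standard normal distribution; here is a minimal sketch (scipy instead of ALEKS or Z tables), followed by a full σ-known confidence interval using made-up numbers.

```python
from scipy.stats import norm

# Critical z values for common confidence levels
for conf in (0.90, 0.95, 0.99):
    alpha = 1 - conf
    print(f"{conf:.0%}: z(alpha/2) = {norm.ppf(1 - alpha / 2):.3f}")
# prints roughly 1.645, 1.960, 2.576

# 95% CI for the population mean when sigma is known (numbers are made up)
x_bar, sigma, n = 103.2, 15, 36
se = sigma / n ** 0.5                      # std. dev. of the point estimate
moe = norm.ppf(0.975) * se                 # margin of error
print(f"95% CI: ({x_bar - moe:.2f}, {x_bar + moe:.2f})")
```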

  26. C.I. using Standard Normal Distribution (when σ is known), for the Population Mean: x̄ ± z(α/2) · σ/√n. Remember: the sample mean x̄ is a random variable.

  27. C.I. using t Distribution (when σ is unknown!), for the Population Mean: x̄ ± (critical t value) · s/√n.

  28. C.I. using t Distribution (when σ is unknown!), for the Population Mean: x̄ ± (critical t value) · s/√n. We use the standard deviation from our sample (s) to estimate the population std. dev. (σ). The “n − 1” (in the formula for s) is an adjustment to make s an unbiased estimator of the population std. dev.

  29. C.I. using t Distribution (when σ is unknown!), for the Population Mean: x̄ ± (critical t value) · s/√n. The critical value is taken from a t distribution, not the standard normal. The goodness of our estimate of σ will depend on our sample size (n). So the exact shape of any given t distribution depends on degrees of freedom (which is derived from sample size: n − 1, here). Fortunately, we can still just LOOK UP the critical values… (just need to additionally plug in degrees of freedom)
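A minimal sketch of the σ-unknown case with made-up data: compute s from the sample, look up the t critical value with n − 1 degrees of freedom, and build the interval.

```python
import numpy as np
from scipy.stats import t

sample = np.array([98.0, 104.5, 101.2, 95.8, 107.3, 99.9, 102.1, 100.4])  # made-up data
n = len(sample)
x_bar = sample.mean()
s = sample.std(ddof=1)                  # sample std. dev. (the n-1 adjustment)
se = s / np.sqrt(n)

t_crit = t.ppf(0.975, df=n - 1)         # 95% CI -> alpha/2 = .025, df = n-1
moe = t_crit * se                       # margin of error
print(f"95% CI: ({x_bar - moe:.2f}, {x_bar + moe:.2f})")
```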

  30. Behavior of C.I. (recall the std. dev. of the sampling distribution of the mean is σ/√n) • As Confidence (1 − α) goes UP, intervals get WIDER (ex: 90% vs 99%) • As Population Std. Dev. (σ) goes UP, intervals get WIDER • As Sample Size (n) goes UP, intervals get NARROWER
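A quick numerical illustration of those three effects (all numbers invented): the margin of error z(α/2)·σ/√n widens with higher confidence or larger σ, and narrows with larger n.

```python
from scipy.stats import norm

def margin_of_error(conf: float, sigma: float, n: int) -> float:
    """Half-width of a z-based CI for the mean: z(alpha/2) * sigma / sqrt(n)."""
    return norm.ppf(1 - (1 - conf) / 2) * sigma / n ** 0.5

print(margin_of_error(0.90, sigma=10, n=25))    # baseline
print(margin_of_error(0.99, sigma=10, n=25))    # higher confidence -> wider
print(margin_of_error(0.90, sigma=20, n=25))    # larger sigma      -> wider
print(margin_of_error(0.90, sigma=10, n=100))   # larger n          -> narrower
```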

  31. C.I. for Differences (e.g., of Population Means) • Same approach. • Key is: treat the DIFFERENCE between sample means as a single random variable, with its own sampling distribution & everything. • The difference between population means is a constant (unknown to us).
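One common way to apply that (not spelled out on the slide) is a z-based interval for the difference of two population means when both σs are known: treat x̄1 − x̄2 as the single random variable, with standard error √(σ1²/n1 + σ2²/n2). The sample summaries below are made up.

```python
from scipy.stats import norm

x_bar1, sigma1, n1 = 72.0, 8.0, 40      # hypothetical sample 1 summary
x_bar2, sigma2, n2 = 68.5, 9.0, 50      # hypothetical sample 2 summary

diff = x_bar1 - x_bar2                                  # the single random variable
se_diff = (sigma1**2 / n1 + sigma2**2 / n2) ** 0.5      # std. error of the difference
moe = norm.ppf(0.975) * se_diff                         # 95% margin of error
print(f"95% CI for mu1 - mu2: ({diff - moe:.2f}, {diff + moe:.2f})")
```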

  32. Remember • Early Informal Feedback • Special Lecture Thursday • No OH; Go to lab for Qs/help. • Stay on target • Finish Distributions • VoDs • Quiz 3

  33.
