
INF397C Introduction to Research in Information Studies Spring, 2005 Day 13


Presentation Transcript


  1. INF397C Introduction to Research in Information Studies, Spring 2005, Day 13

  2. Confidence Intervals • We calculate a confidence interval for a population parameter. • The mean of a random sample from a population is a point estimate of the population mean. • But there’s variability! (SE tells us how much.) • What is the range of scores between which we’re 95% confident that the population mean falls? • Think about it – the larger the interval we select, the larger the likelihood it will “capture” the true (population) mean. • CI = M ± (t.05)(SE) • See Box 12.2 on “margin of error.” NOTE: In the box they arrive at a 95% confidence that the poll has a margin of error of 5%. It is just coincidence that these two numbers add up to 100%.

  3. CI about a mean -- example • CI = M ± (t.05)(SE) • Establish the level of α (two-tailed) for the CI. (.05) • M = 15.0, s = 5.0, N = 25 • Use Table A.2 to find the critical value associated with the df. • t.05(24) = 2.064 • CI = 15.0 ± 2.064(5.0/SQRT 25) = 15.0 ± 2.064 = 12.936 – 17.064 “The odds are 95 out of 100 that the population mean falls between 12.936 and 17.064.” (NOTE: This is NOT the same as “95% of the scores fall within this range”!!!)
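A quick way to check that arithmetic is to compute the interval directly. This is a minimal sketch using the numbers from the slide (M = 15.0, s = 5.0, N = 25); scipy is used only as a convenient way to look up the critical t value.

```python
import math
from scipy import stats

M, s, N = 15.0, 5.0, 25                  # sample mean, sample SD, sample size (from the slide)
se = s / math.sqrt(N)                    # standard error of the mean = 1.0
t_crit = stats.t.ppf(0.975, df=N - 1)    # two-tailed .05 critical value, df = 24 (about 2.064)

lower, upper = M - t_crit * se, M + t_crit * se
print(f"95% CI: {lower:.3f} to {upper:.3f}")   # roughly 12.936 to 17.064
```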

  4. Another CI example • Hinton, p. 89. • t-test not sig. • What if we did this via confidence intervals?

  5. Limitations of t tests • Can compare only two samples at a time • Only one IV at a time (with two levels) • But you say, “Why don’t I just run a bunch of t tests?” • It’s a pain in the butt. • You multiply your chances of making a Type I error.
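To see why running many t tests inflates the Type I error rate: with k independent tests each at α = .05, the chance of at least one false positive is 1 − (1 − .05)^k. A minimal sketch of that calculation (the choice of k values is just for illustration):

```python
alpha = 0.05
for k in (1, 3, 6, 10):                       # number of independent t tests, each at alpha = .05
    familywise = 1 - (1 - alpha) ** k         # P(at least one Type I error across the k tests)
    print(f"{k:2d} tests -> familywise Type I error = {familywise:.2f}")
# 1 test -> 0.05, 3 tests -> 0.14, 6 tests -> 0.26, 10 tests -> 0.40
```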

  6. ANOVA • Analysis of variance, or ANOVA, or F tests, were designed to overcome these shortcomings of the t test. • An ANOVA with ONE IV with only two levels is the same as a t test (in that case, F = t²).

  7. ANOVA (cont’d.) • Remember back to when we first busted out some scary formulas, and we calculated the standard deviation. • We subtracted the mean from each score, to get a feel for how spread out a distribution was – how DEVIANT each score was from the mean. How VARIABLE the distribution was. • Then we realized if we added up all these deviation scores, they necessarily added up to zero. • So we had two choices: we coulda taken the absolute value, or we coulda squared ‘em. And we squared ‘em. Σ(X – M)²

  8. ANOVA (cont’d.) • Σ(X – M)² • This is called the Sum of Squares (SS). When we average those squared deviations (well – divide by N – 1), we get S² (the “variance”). • We take the square root of that and we have S (the “standard deviation”).
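As a concrete illustration of those three quantities, here is a small sketch that computes SS, S², and S for a set of scores. The scores are invented for illustration, not taken from Hinton:

```python
import math

scores = [3, 5, 7, 9, 11]                  # hypothetical scores
N = len(scores)
M = sum(scores) / N                        # mean = 7.0

SS = sum((x - M) ** 2 for x in scores)     # sum of squared deviations, SS = 40.0
variance = SS / (N - 1)                    # S² = 10.0 (dividing by N − 1)
sd = math.sqrt(variance)                   # S ≈ 3.162

print(M, SS, variance, sd)
```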

  9. ANOVA (cont’d.) • Let’s work through the Hinton example on p. 111.

  10. F is . . . • F is the variance ratio. • F = between-conditions variance / error variance • = (systematic differences + error variance) / error variance • = between-conditions variance / within-conditions variance (This from Hinton, p. 112, p. 119.)
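A minimal sketch of that ratio for one IV with three levels. The data are made up for illustration (not Hinton's example); scipy's f_oneway is included only to show it reports the same F.

```python
from scipy import stats

# Hypothetical scores for one IV with three levels (five participants per condition)
groups = {
    "A": [4, 5, 6, 5, 4],
    "B": [6, 7, 8, 7, 6],
    "C": [9, 8, 10, 9, 9],
}

all_scores = [x for g in groups.values() for x in g]
grand_mean = sum(all_scores) / len(all_scores)

# Between-conditions and within-conditions sums of squares
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups.values())
ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups.values() for x in g)

df_between = len(groups) - 1                   # conditions minus 1
df_within = len(all_scores) - len(groups)      # total N minus number of conditions

F = (ss_between / df_between) / (ss_within / df_within)   # the variance ratio
print(f"F({df_between}, {df_within}) = {F:.2f}")

print(stats.f_oneway(*groups.values()))        # scipy reports the same F plus a p value
```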

  11. Check out . . . • ANOVA summary table on p. 120. This is for a ONE FACTOR ANOVA (i.e., one IV). (Maybe MANY levels.) • Sample ANOVA summary table on p. 124. • Don’t worry about unequal sample sizes – interpretation of the summary table is the same. • The only thing you need to realize in Chapter 13 is that for repeated measures ANOVA, we also tease out the between-subjects variation from the error variance. (See pp. 146 and 150.) • Note, in Chapter 15, that as factors (IVs) increase, the comparisons (the number of F ratios) multiply. See pp. 167 and 174. • What happens when you have 3 levels of an IV, and you get a significant F? • Memorize the table on p. 177. (No, I’m only kidding.)

  12. Interaction effects • Here’s what I want you to understand about interaction effects: • They’re WHY we run studies with multiple IVs. • A significant interaction effect means the effect of one IV on the DV differs depending on the level of the other IV. • You can have significant main effects and non-significant interactions, or vice versa (or both sig., or both not sig.) (See pp. 157, 158.)

  13. Correlation • With correlation, we return to DESCRIPTIVE statistics. (This is counterintuitive. To me.) (Well, it’s BOTH descriptive and inferential.) • We are describing the strength and direction of the relationship between two variables. • And how much one variable predicts the other.

  14. Correlation • Formula – • Hinton, p. 259, or • S, Z, & Z, p. 393 • Two key points: • How much predictability one variable provides for another. • NOT causation. • Let’s work two trivial examples.
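A minimal sketch of computing Pearson's r and its square (the proportion of variance one variable "predicts" in the other). The paired scores below are invented for illustration, not one of the worked examples:

```python
from scipy import stats

# Hypothetical paired scores (say, hours studied and exam score)
x = [1, 2, 3, 4, 5, 6]
y = [52, 55, 61, 60, 68, 71]

r, p = stats.pearsonr(x, y)
print(f"r = {r:.2f}, r squared = {r * r:.2f}, p = {p:.4f}")
# r gives strength and direction; r squared is the proportion of variance
# in one variable "predicted" by the other. Nothing here implies causation.
```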

  15. Let’s talk about the final • Here’s what you’ve read: • Huff (How to Lie with Statistics) • Dethier (To Know a Fly) • Hinton: Ch. 1 – 15, 20 • S, Z, & Z: Ch. 1-8, 10-13 • Several other articles

  16. For the final, EMPHASIZE… • Descriptive stat • Measures of central tendency, dispersion • Z scores (both ways!) • Frequency distributions, tables, graphs • Correlation (interpret, not calculate) • Inferential stat • Hypothesis testing • Standard error of the mean • t-test (calculate one, for one sample; interpret others) • Confidence intervals (maybe calculate one) • Chi square (maybe one) • ANOVA – interpret summary table • Type I and II errors • Effect size (book only – concept, not calculate)

  17. Emphasize . . . • Experimental design • IV, DV, controls, confounds, counterbalancing • Repeated measures, Independent groups • Sampling • Operational definitions • Individual differences variable • Ethics of human study • Possible sources of bias and error variance and how to minimize/eliminate • Qualitative methods • Per Rice Lively, Gracy, Doty • Survey generation (from SZZ, Ch. 5)

  18. De-emphasize • Complicated probability calculations • APA ethical standard (S,Z, & Z, Ch. 3) • Content analysis (SZZ, Ch. 6) • Calculating an ANOVA. • Nonequivalent control group design (SZZ, Ch. 11) (Indeed, de-emphasize all Ch. 11) • Hinton, Ch. 12

  19. Sample Problems • With answers – hand out.

  20. Experiment 1 I’ll present 10 words, one at a time. Presented visually. After the 10th I’ll say “go” and you’ll write down as many as you can. Don’t have to remember them in order. Pencils down. Ready?

  21. balloon doorknob minivan meatloaf teacher zebra pillow barn sidewalk coffin

  22. balloon doorknob minivan meatloaf teacher zebra pillow barn sidewalk coffin

  23. Experiment 2 • Now, 10 new words. • Same task -- recall them. • After the 10th one I’ll say “Go,” write down as many of the 10 words as you can. • Again, don’t have to remember them in order. • Pencils down. • Ready?

  24. forget interest anger imagine fortitude smart peace effort hunt focus

  25. forget interest anger imagine fortitude smart peace effort hunt focus

  26. Course Evaluation

  27. See you Monday! • Don’t hesitate to call or shoot me an email if you have any questions.
