
Confidence Intervals, Effect Size and Power


Presentation Transcript


  1. Confidence Intervals, Effect Size and Power Chapter 8

  2. Hyde, Fennema & Lamon (1990) • Mean gender differences in mathematical reasoning were very small. • When extreme tails of the distributions were removed, differences were smaller and reversed direction (favored girls) • The gender differences depended on task • Females were better with computation. • Males were better with problem solving.

  3. Men, Women, and Math • An accurate understanding of gender differences may not be found in “significant” effects. • Each distribution has variability. • Each distribution has overlap.

  4. A Gender Difference in Mathematics Performance – amount of overlap as reported by Hyde (1990)

  5. Confidence Intervals • Point estimate: summary statistic – one number as an estimate of the population • e.g., mean • Interval estimate: based on our sample statistic, range of sample statistics we would expect if we repeatedly sampled from the same population • e.g., Confidence interval

  6. Confidence Intervals • An interval estimate that would include the sample mean a certain percentage of the time if we repeatedly sampled from the same population • Typically set at 95% • The range around the mean when we add and subtract a margin of error • Confirms the findings of hypothesis testing and adds more detail

  7. Calculating Confidence Intervals • Step 1: Draw a picture of a distribution that will include confidence intervals • Step 2: Indicate the bounds of the CI on the drawing • Step 3: Determine the z statistics that fall at each line marking the middle 95% • Step 4: Turn the z statistic back into raw means • Step 5: Check that the CIs make sense

  8. Confidence Interval for a z Test • M_lower = −z(σ_M) + M_sample • M_upper = +z(σ_M) + M_sample • Think about it: Why do we need two calculations?
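The steps in slides 7 and 8 can be sketched in Python using only the standard library. The numbers in the example (sample mean 105, population SD 15, n = 36) are hypothetical, not from the slides:

```python
from statistics import NormalDist
import math

def z_confidence_interval(m_sample, sigma, n, confidence=0.95):
    """Confidence interval around a sample mean for a z test.

    Follows the slide's steps: find the critical z that marks off the
    middle `confidence` proportion of the distribution, scale it by the
    standard error, then convert back to raw means.
    """
    sigma_m = sigma / math.sqrt(n)                        # standard error of the mean
    z_crit = NormalDist().inv_cdf((1 + confidence) / 2)   # ≈ 1.96 for 95%
    m_lower = -z_crit * sigma_m + m_sample                # M_lower
    m_upper = +z_crit * sigma_m + m_sample                # M_upper
    return m_lower, m_upper

# Hypothetical values: sample mean 105, population SD 15, n = 36
lower, upper = z_confidence_interval(105, 15, 36)
```

Note the answer to the slide's question: two calculations are needed because the interval extends the same distance below and above the sample mean, giving a lower and an upper bound.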

  9. A 95% Confidence Interval, Part I

  10. A 95% Confidence Interval, Part II

  11. A 95% Confidence Interval, Part III

  12. Think About It • Which do you think more accurately represents the data: point estimate or interval estimate? • Why? • What are the advantages and disadvantages of each method?

  13. Effect Size: Just How Big Is the Difference? • Significant = Big? • Significant = Different? • Increasing sample size will make us more likely to find a statistically significant effect

  14. What is effect size? • Size of a difference that is unaffected by sample size • Standardization across studies

  15. Effect Size & Mean Differences Imagine both represent significant effects: Which effect is bigger?

  16. Effect Size and Standard Deviation Note the spread in the two figures: Which effect size is bigger?

  17. Increasing Effect Size • Decrease the amount of overlap between two distributions: • 1. Their means are farther apart • 2. The variation within each population is smaller

  18. Calculating Effect Size • Cohen’s d estimates effect size • It assesses the difference between means using the standard deviation instead of the standard error
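As a minimal sketch of the point on this slide, Cohen's d for a z test divides the mean difference by the population standard deviation (not the standard error), so sample size drops out. The example values are hypothetical:

```python
def cohens_d(m_sample, mu, sigma):
    """Cohen's d for a z test: the difference between means divided by
    the population standard deviation, so the result does not change
    with sample size."""
    return (m_sample - mu) / sigma

# Hypothetical values: sample mean 105, population mean 100, SD 15
d = cohens_d(105, 100, 15)  # a small-to-medium effect by Cohen's conventions
```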

  19. p_rep • Probability of replicating an effect given a particular population and sample size • To calculate: • Step 1: Find the p value associated with our statistic (in Appendix B) • Step 2: Use the following equation in Excel, with the actual p value in place of P: =NORMSDIST(NORMSINV(1-P)/(SQRT(2)))
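The slide's Excel formula can be translated directly into Python with the standard library's `NormalDist`, where `NORMSDIST` is the standard normal CDF and `NORMSINV` its inverse:

```python
from statistics import NormalDist
import math

def p_rep(p_value):
    """Python equivalent of the slide's Excel formula:
    =NORMSDIST(NORMSINV(1-P)/(SQRT(2)))."""
    nd = NormalDist()  # standard normal distribution
    return nd.cdf(nd.inv_cdf(1 - p_value) / math.sqrt(2))
```

For example, a result with p = .05 gives p_rep of roughly .88.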

  20. Statistical Power • The measure of our ability to reject the null hypothesis, given that the null is false • The probability that we will reject the null when we should • The probability that we will avoid a Type II error

  21. Calculating Power • Step 1: Determine the information needed to calculate power: population mean, population standard deviation, sample mean, sample size, and standard error (based on sample size) • Step 2: Determine a critical z value and raw mean in order to calculate power • Step 3: Calculate power: the percentage of the distribution of means for population 2 that falls above the critical value
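The three steps above can be sketched for a one-tailed z test. All numbers in the example (null mean 100, alternative mean 105, SD 15, n = 36, α = .05) are hypothetical:

```python
from statistics import NormalDist
import math

def power_one_tailed(mu_null, mu_alt, sigma, n, alpha=0.05):
    """Power of a one-tailed z test, following the slide's steps:
    1) compute the standard error from sigma and n,
    2) find the critical z and the raw critical mean under the null,
    3) find the proportion of the alternative (population 2)
       distribution of means that falls above that cutoff.
    """
    nd = NormalDist()
    sigma_m = sigma / math.sqrt(n)                       # step 1: standard error
    m_crit = mu_null + nd.inv_cdf(1 - alpha) * sigma_m   # step 2: raw critical mean
    return 1 - nd.cdf((m_crit - mu_alt) / sigma_m)       # step 3: power

# Hypothetical values: null mean 100, alternative mean 105, SD 15, n = 36
power = power_one_tailed(100, 105, 15, 36)
```

Rerunning this with a larger n or a bigger mean difference illustrates the factors listed on slide 27: both raise power.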

  22. Statistical Power: The Whole Picture

  23. Increasing Alpha

  24. Two-Tailed Versus One-Tailed Tests

  25. Increasing Sample Size or Decreasing Standard Deviation

  26. Increasing the Difference between the Means

  27. Factors Affecting Power • Larger sample size increases power • Alpha level • Higher level increases power (e.g., from .05 to .10) • One-tailed tests have more power than two-tailed tests • Decrease standard deviation • Increase difference between the means

  28. Meta-Analysis • Meta-analysis considers many studies simultaneously. • Allows us to think of each individual study as just one data point in a larger study.

  29. The Logic of Meta-Analysis • STEP 1: Select the topic of interest • STEP 2: Locate every study that has been conducted and meets the criteria • STEP 3: Calculate an effect size, often Cohen’s d, for every study • STEP 4: Calculate statistics and create appropriate graphs
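In its simplest form, Step 4 treats each study's effect size as one data point and summarizes across them. A minimal sketch with hypothetical effect sizes (real meta-analyses typically weight studies, e.g. by sample size):

```python
from statistics import mean, stdev

# Hypothetical Cohen's d values from five located studies (Step 3)
study_ds = [0.15, -0.05, 0.20, 0.10, 0.05]

# Step 4, unweighted: the mean effect size across studies,
# plus the spread of effects from study to study
mean_d = mean(study_ds)
sd_d = stdev(study_ds)
```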

  30. Meta-Analysis and Electronic Databases

  31. Think About It • Why would it be important to know the statistical power of your study?
