
But…does it work? Do students truly learn the material better?


Presentation Transcript


  1. But…does it work? Do students truly learn the material better?

  2. Nathan Tintle, Dordt College • Small liberal arts college: 1350 undergraduate students • Statistician within Department of Math, Stat and CS • Class size: Stat 131 (30 students), 5-6 sections per year • 3 hours per week in computer or tech-enabled classroom

  3. Overview • What we know about randomization approaches • What we don’t • What it means

  4. Our approach • Tintle et al. flavor (2013 version) • Unit 1. Inference (Single proportion) • Unit 2. Comparing two groups • Means, proportions, paired data • Descriptives, simulation/randomization, asymptotic • Unit 3. Other data contexts • Multiple means, multiple proportions, two quantitative variables • Descriptives, simulation/randomization, asymptotic

  5. What we know about it • Qualitative • Momentum: • Attendance at conference sessions, workshops • Publishers agreeing to publish the books • Class testers/inquiries • People doing this in their classrooms (clients, colleagues) • Repeat users • Appealing “in principle” and based on testimonials to date

  6. What we know about it • Quantitative assessment • Tintle et al. (2011, 2012) • Compared an early version of the curriculum (2009) to the traditional curriculum at the same institution as well as a national sample • 40-question CAOS test • Results • Better student learning outcomes in some areas (design and inference); little evidence of declines

  7. What we know about it • Example #1. Proportion of students correctly identifying that researchers want small p-values if they hope to show statistical significance • Pre-test: 50-60% correct • Sample sizes: Hope ~200 per group; national sample 760 • P&lt;0.001 between cohorts
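
For readers curious what a between-cohort comparison like the P&lt;0.001 above looks like in practice, here is a minimal sketch using a chi-square test on a 2x2 table of correct/incorrect counts; the counts are invented placeholders, not the study's data.

```python
# Hypothetical sketch of a between-cohort comparison on a single
# CAOS item: the counts below are illustrative placeholders only.
from scipy.stats import chi2_contingency

# Rows: cohorts (randomization curriculum, consensus curriculum)
# Columns: answered the item correctly / incorrectly
table = [[170, 30],   # e.g., 170 of 200 correct in one cohort
         [120, 80]]   # e.g., 120 of 200 correct in the other

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p-value = {p_value:.4g}")
```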

  8. What we know about it • 2012-13 results • 14 instructors, 7 institutions • Total combined sample size of 783

  9. What we know about it

  10. What we know about it • Institutional diversity in student background (pre-test) • Post-test performance very good for most (over 90%) • A couple of exceptions • Both were first-time instructors with the curriculum who will use it again this year

  11. What we know about it • Example 1 (continued). • First quiz, 2.5 weeks into course; simulation for a single proportion • 119 people played rock-paper-scissors (RPS); 11.8% picked scissors • Evidence that scissors are picked less than 1/3 of the time in the long run?

  12. What we know about it • The following graph shows the 1000 different “could have been” sample proportions choosing scissors for samples of 119 people assuming scissors is chosen 1/3 of the time in the long run.
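
The simulation behind that graph is straightforward to reproduce. Below is a minimal sketch in Python (not the classroom applet itself) that generates 1000 "could have been" sample proportions under the null hypothesis that scissors is chosen 1/3 of the time, and estimates the one-sided p-value for the observed 11.8%.

```python
# Sketch of the single-proportion simulation: 1000 "could have been"
# sample proportions for n = 119 under the null that scissors is
# chosen 1/3 of the time in the long run.
import numpy as np

rng = np.random.default_rng(0)
n, p_null, reps = 119, 1 / 3, 1000
observed = 0.118  # 11.8% of the 119 players picked scissors

# Each rep: simulate 119 choices, record the proportion of scissors.
sim_props = rng.binomial(n, p_null, size=reps) / n

# One-sided p-value: how often chance alone gives a proportion
# as small as (or smaller than) the observed one.
p_value = np.mean(sim_props <= observed)
print(f"Estimated p-value: {p_value:.3f}")
```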

  13. What we know about it • Would you consider the results of this study to be convincing evidence that scissors are chosen less often in the long run than expected?

  14. What we know about it • Suppose the study had only involved 50 people but with the same sample proportion picking scissors. How would the p-value change? • Single instructor (me), 92 students, across 4 sections and 2 semesters
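
As a quick check on the intended answer, the same comparison can be made with an exact binomial calculation; this sketch is illustrative and is not part of the course materials. With the same sample proportion, the smaller sample yields a larger p-value, since smaller samples produce more chance variability.

```python
# Exact binomial check: same observed proportion (11.8% scissors),
# two different sample sizes. Smaller n -> larger p-value.
from scipy.stats import binom

for n in (119, 50):
    k = round(0.118 * n)              # observed count of scissors picks
    p_value = binom.cdf(k, n, 1 / 3)  # P(X <= k) under the null
    print(f"n = {n:3d}: k = {k:2d}, one-sided p-value = {p_value:.2g}")
```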

  15. What we know about it • Example #2. Moving beyond a specific item to sets of related items and retention • Tintle et al. 2012 (SERJ) + JSE • Improvement in Data Collection and Design, Tests of Significance, and Probability (Simulation) on post-test • Data Collection and Design and Tests of Significance improvements were retained significantly better than in the consensus curriculum

  16. What we know about it • Retention significantly better (p=0.02)

  17. What we know about it • Example #3. How are weak students doing?

  18. What we know about it • 2012-2013: All changes are highly significant using paired t-tests (p&lt;0.001) • **Among those who completed the course; anecdotally, we're seeing a lower drop-out rate now than with the consensus curriculum
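
For concreteness, here is a minimal sketch of the kind of paired t-test referenced above, run on invented pre/post percent-correct scores; the numbers are placeholders, not the study's data.

```python
# Illustrative paired t-test on pre/post percent-correct scores.
# The scores below are invented placeholders, not the study's data.
import numpy as np
from scipy.stats import ttest_rel

pre  = np.array([45, 52, 38, 60, 49, 55, 41, 47])   # pre-test %
post = np.array([58, 63, 50, 71, 62, 64, 49, 60])   # post-test %

t_stat, p_value = ttest_rel(post, pre)
print(f"mean gain = {np.mean(post - pre):.1f} points, "
      f"t = {t_stat:.2f}, p = {p_value:.4g}")
```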

  19. What we know about it • Example #4. Do students understand new data contexts? • Old AP Statistics question: 10 randomly selected laptop batteries were tested and the hours they lasted were measured

  20. What we know about it • To investigate whether the shape of the sample data distribution was simply due to chance or if it actually provides evidence that the population distribution of battery lifetimes is skewed to the right, the engineers at the company decided to take 100 random samples of lifetimes, each of size 10, sampled from a perfectly symmetric, normally distributed population with a mean of 2.6 hours and standard deviation of 0.29 hours. For each of those 100 samples, the statistic sample mean divided by the sample median was calculated. A dotplot of the 100 simulated skewness ratios is shown below.
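
The engineers' simulation is easy to replicate. Here is a minimal sketch assuming NumPy (not the original AP materials) that draws 100 samples of size 10 from a Normal(2.6, 0.29) population and computes the mean/median skewness ratio for each.

```python
# Replicating the engineers' simulation: 100 samples of size 10
# from a symmetric Normal(mean=2.6, sd=0.29) population; for each
# sample compute the skewness ratio = sample mean / sample median.
import numpy as np

rng = np.random.default_rng(0)
samples = rng.normal(loc=2.6, scale=0.29, size=(100, 10))

ratios = samples.mean(axis=1) / np.median(samples, axis=1)

# Under a symmetric population the ratios cluster around 1;
# an observed ratio far above 1 (mean > median) would suggest
# the real lifetime distribution is skewed right.
print(f"min = {ratios.min():.3f}, max = {ratios.max():.3f}, "
      f"mean = {ratios.mean():.3f}")
```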

  21. What we know about it • What is the explanation for why the engineers carried out the process above?

  22. What we know about it • Analysis of all (free-response) class tests is ongoing • Do students integrate the observed statistic and simulated values to draw a conclusion?

  23. What we know about it • Summary • Preliminary and current versions showed improved performance in understanding of tests of significance, design and probability (simulation) post-course, and improved retention in these areas • These results appear stable across lower-performing students with older and newer versions of the curriculum • Some evidence of student ability to apply the framework of inference (3-S) to novel situations

  24. What we know about it • Summary • Some instructor differences, but also preliminary validation of “transferability” of findings across different institutions/instructors; new instructors? • **Note: Some evidence of weaker performance in descriptive stats in this earlier curriculum; substantial changes have been made to the descriptive statistics approach to combat this.

  25. What we don’t know • What’s driving the change? • Content? • Pedagogy? • Repetition? • How much randomization before you see a change? • Are there differences in student performance based on curricula? Are they important?

  26. What we don’t know • What are the developmental learning trajectories for inference (do they understand what we mean by ‘simulation’)? Other topics? • Low-performing students: promising (ACT, GPA) • Does improved performance transfer across institutions/instructors? What kind of instructor training/support is needed to be successful? • Using CAOS (or adapted CAOS) questions, but do we still all agree these are the “right” questions? Is knowing what a small p-value means enough? What level of understanding are they attaining? • Why do students in both curricula tend to do poorly on descriptive statistics questions? Or in areas where we see little difference between curricula?

  27. What it means • Preliminary indications continue to be positive • You can cite similar or improved performance on nationally standardized/normed tests for the approach • Tag line for peers and clients: we are improving some areas (the important ones?) and doing no harm elsewhere • Still lots of room for better understanding and continued improvement of the approach • Student engagement (talk yesterday) • Next steps: a larger, more comprehensive assessment effort coordinated between users of the randomization-based curriculum and those who use other approaches. If you are interested, let me know.

  28. Acknowledgements • Author team (Beth Chance, George Cobb, Allan Rossman, Soma Roy, Todd Swanson and Jill VanderStoep) • Class testers • NSF funding

  29. References • Tintle NL, VanderStoep J, Holmes V-L, Quisenberry B, and Swanson T, “Development and assessment of a preliminary randomization-based introductory statistics curriculum,” Journal of Statistics Education 19(1), 2011 • Tintle NL, Topliff K, VanderStoep J, Holmes V-L, and Swanson T, “Retention of statistical concepts in a preliminary randomization-based introductory statistics curriculum,” Statistics Education Research Journal, 2012
