
3 Questions You Should Never Ask in Evaluation/Assessment: …And a Few You Should! (CNI 2014)


Presentation Transcript


  1. 3 Questions You Should Never Ask in Evaluation/ Assessment: …And a few you should! Presenters (in order of appearance) • Chris Bourg, Stanford University, mchris@stanford.edu • Joshua Morrill, Morrill Solutions Research (MSR), joshua@morrillsolutions.com • Glenda Morgan, University of Illinois Urbana, gmorgan@illinois.edu CNI - 2014

  2. Question #1 The ambiguous question CNI - 2014

  3. Ambiguity in … • Wording • Meaning • Threshold

  4. Double-barreled questions • “Buy-back programs are a great idea for decreasing gun ownership.” • “Within the next 5 years, the use of e-books will be so prevalent among faculty & students that it will not be necessary to maintain library collections of hard copy books.”

  5. Ambiguous meaning

  6. Less ambiguous

  7. Ambiguous thresholds • How much is enough? • Fake data is your friend
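One way to read "fake data is your friend": mock up plausible results before fielding anything and decide in advance what number would actually change your decision. A minimal sketch in Python (hypothetical scenario and numbers, not from the talk):

```python
# Simulate plausible survey outcomes under two assumed scenarios and ask,
# for each, whether that number would change the decision at hand.
import numpy as np

rng = np.random.default_rng(1)
n_respondents = 200  # assumed survey size

for label, true_rate in [("low e-book uptake", 0.35), ("high e-book uptake", 0.65)]:
    fake_share = rng.binomial(1, true_rate, n_respondents).mean()
    print(f"{label}: {fake_share:.0%} adopters -- would this change our decision?")
```

If neither fake result would change what you do, the question is probably not worth asking; if one would, you have found your threshold.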

  8. Question #2 Significance-Shmignificance …Is it meaningful? CNI - 2014

  9. Overpowered Tests • Faculty use of digital resources (4400+ responses!) • First order tests, EVERYTHING IS SIGNIFICANT! (= Slow…) • Implications for Learning Analytics

  10. Overpowered Tests • Faculty use of digital resources (4400+ responses!) • First order tests, EVERYTHING IS SIGNIFICANT! (= Slow…) • Implications for Learning Analytics If the phenomenon we are trying to measure is the earthquake, the size of our sample determines how sensitive our seismograph is. Too FEW cases and we can only pick up the BIG earthquakes. Too MANY cases and every minor bump registers as an earthquake.

  11. Overpowered Tests • Faculty use of digital resources (4400+ responses!) • First order tests, EVERYTHING IS SIGNIFICANT! (= Slow…) • Implications for Learning Analytics If the phenomenon we are trying to measure is the earthquake, the size of our sample determines how sensitive our seismograph is. Too FEW cases and we can only pick up the BIG earthquakes. Too MANY cases and every minor bump registers as an earthquake. GOLD NUGGET We are smarter than our tools. We can decide whether to sound the alarm or not, but we need to make that decision mindfully!
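The seismograph analogy is easy to reproduce. A hedged sketch with synthetic data (not the faculty survey): the same trivially small effect goes from invisible to "significant" purely because the sample grows.

```python
# An overpowered test in miniature: a 0.02-SD effect is undetectable at
# n=50 but registers as highly significant at n=500,000.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
true_effect = 0.02  # tiny mean difference, in standard-deviation units

for n in (50, 5_000, 500_000):
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(true_effect, 1.0, n)
    t, p = stats.ttest_ind(a, b)
    verdict = "SIGNIFICANT" if p < 0.05 else "not significant"
    print(f"n={n:>7,}  p={p:.4f}  {verdict}")
```

Nothing about the effect changed between runs; only the sensitivity of the instrument did.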

  12. False Precision • False precision (the bane of 1-10 scales) Is there REALLY a difference between 8s & 9s? 6s & 7s?

  13. False Precision • False precision (the bane of 1-10 scales) Is there REALLY a difference between 8s & 9s? 6s & 7s? There is error in every measurement! More scale precision does not necessarily mean more accuracy. Good researchers are empathetic! How are the people participating in your research seeing/understanding the process?
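A small simulation makes the false-precision point concrete (assuming, hypothetically, a latent 1-10 attitude observed with about one point of response noise):

```python
# With ~1 point of response noise, the latent attitudes behind an "8"
# and a "9" on a 1-10 scale overlap heavily.
import numpy as np

rng = np.random.default_rng(0)
latent = rng.uniform(1, 10, 100_000)        # true underlying attitude
noise = rng.normal(0, 1.0, latent.size)     # response error
observed = np.clip(np.round(latent + noise), 1, 10)

for score in (8, 9):
    group = latent[observed == score]
    print(f"latent attitude behind a '{score}': {group.mean():.2f} +/- {group.std():.2f}")
```

The two groups' latent attitudes differ by less than their spread, so treating 8s and 9s as meaningfully different reads noise as signal.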

  14. Lots of things influence the statistical significance / non-significance of a finding. Chiefly… • The Sample Size • The Effect Size

  15. Lots of things influence the statistical significance / non-significance of a finding. Chiefly… • The Sample Size • The Effect Size In most evaluations these are NOT optimal for significance testing…

  16. Lots of things influence the statistical significance / non-significance of a finding. Chiefly… • The Sample Size • The Effect Size In most evaluations these are NOT optimal for significance testing… But --- EVEN WITHOUT SIGNIFICANCE --- evaluation results can still be important and meaningful.
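The interplay can be seen in a standard power calculation. A sketch using statsmodels' power tools (the Cohen's d values are conventional small/medium/large benchmarks, not the talk's data):

```python
# Respondents per group needed for 80% power at alpha = 0.05,
# across small / medium / large effect sizes (Cohen's d).
from statsmodels.stats.power import TTestIndPower

power_calc = TTestIndPower()
for d in (0.2, 0.5, 0.8):
    n = power_calc.solve_power(effect_size=d, alpha=0.05, power=0.8)
    print(f"effect d={d}: ~{n:.0f} respondents per group")
```

Small effects need hundreds of respondents per group, a bar most evaluations cannot clear.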

  17. For Example…An Underpowered Test “The library provides a vital service to campus.” [Chart: % Agree & Strongly Agree, by group] Something going on here? …maybe, but not statistically…

  18. For Example…An Underpowered Test “The library provides a vital service to campus.” [Chart: % Agree & Strongly Agree, by group] Something going on here? …maybe, but not statistically… At its best, research is a small flashlight in a dark room... Data-driven decision making ≠ relinquishing personal judgment.
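The underpowered case can also be sketched (hypothetical counts, not the survey behind the slide): a 20-point gap in agreement that a significance test shrugs off because n is small.

```python
# 70% vs 90% "agree" looks like a real gap, but with only 20 respondents
# per group the test is inconclusive.
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts: [agree, disagree] per group.
table = np.array([[14, 6],    # group A: 70% agree
                  [18, 2]])   # group B: 90% agree
chi2, p, dof, expected = chi2_contingency(table)
print(f"observed gap: 70% vs 90% agree; p = {p:.3f}")
```

A p-value above 0.05 here does not prove the gap is noise; it only says the flashlight is too dim to tell, and judgment has to take over.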

  19. Approach data --especially underpowered data-- with these questions • Is this data suggesting something important for my core business or service? • Do I need more information in order to take action? Do I REALLY need it? • What --if anything-- do I need to know in order to take meaningful action?

  20. Question #3 Questions you SHOULDN’T ask? …Probably about half of them CNI - 2014

  21. The Length vs. Empathy Relationship [Chart: as research length / time investment for participants rises from short to long, empathy for / with participants falls from high to low]

  22. Do you REALLY need to know that?

  23. Problems with too much data?

  24. The (often) false allure of longitudinal data

  25. A Few Suggested Readings • From Chris: Evaluation: A Systematic Approach by Rossi, Lipsey & Freeman • From Josh: How to Measure Anything by D. Hubbard • From Glenda: Improving Survey Questions by F. Fowler and Research Methods in Anthropology by R. Bernard

  26. Questions for us? Thank you! Presenters (in order of appearance) • Chris Bourg, Stanford University, mchris@stanford.edu • Joshua Morrill, Morrill Solutions Research (MSR), joshua@morrillsolutions.com • Glenda Morgan, University of Illinois Urbana, gmorgan@illinois.edu CNI - 2014
