
Reliability




Presentation Transcript


  1. Session A: Psychometrics 101: The Foundations and Terminology of Quality Assessment Design. Date: May 14th from 3-5 PM
Session B: Psychometrics 101: Test Blueprints (Standards Alignment, Working Backwards, and Documentation). Date: May 16th from 3-5 PM
Session C: Psychometrics 101: Understanding and Decreasing Threats to Validity, AKA What’s the purpose of my assessment? Date: May 30th from 3-5 PM
Session D: Psychometrics 101: Understanding and Decreasing Threats to Reliability, AKA What does it mean to have noisy assessments? Date: June 4th from 3-5 PM
Session E: Designing Quality Qualitative Measures: Understanding, Interpreting, and Using Survey Data. Date: June 18th from 3-5 PM
Session G: Putting the cart behind the horse: Designing action research and inquiry questions to inform teaching and learning. Date: June 25th from 3-5 PM

  2. Reliability: An indication of how consistently an assessment measures its intended target and the extent to which scores are relatively free of error. Low reliability means that scores cannot be trusted for decision making. Reliability is a necessary but not sufficient condition for validity.

  3. How consistent are my assessment results? Reliability refers to the strength, or consistency, of the assessment’s results when it is given at different times, scored by different teachers, or given in a different way.

  4. There are multiple ways of assessing reliability: alternate-form reliability, split-half reliability coefficients, the Spearman-Brown double-length formula, the Kuder-Richardson reliability coefficient, the Pearson product-moment correlation coefficient, etc.
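As a sketch of two of the statistics named above, the split-half coefficient with the Spearman-Brown double-length correction can be computed from a Pearson correlation between the two test halves. The function names, the odd/even item split, and the sample data are illustrative assumptions, not from the slides:

```python
import statistics

def pearson(x, y):
    """Pearson product-moment correlation coefficient."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def split_half_reliability(scores):
    """Split-half reliability for one test administration.
    `scores` is a list of per-student item-score lists; items are split
    into odd- and even-numbered halves (one common convention)."""
    odd = [sum(s[0::2]) for s in scores]
    even = [sum(s[1::2]) for s in scores]
    r_half = pearson(odd, even)
    # Spearman-Brown double-length formula: estimate the reliability
    # of the full-length test from the correlation of its two halves.
    return 2 * r_half / (1 + r_half)

# Hypothetical item scores for three students on a four-item test:
print(split_half_reliability([[1, 1, 1, 1], [0, 0, 0, 0], [1, 1, 0, 0]]))
```

Because the odd and even halves agree perfectly in this toy data, the half-test correlation is 1.0 and the corrected coefficient is also 1.0; noisier data would yield lower values.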

  5. and three general ways to collect evidence of reliability. Stability: How consistent are the results of an assessment when given on two time-separated occasions? Alternate Form: How consistent are the results of an assessment when given in two different forms? Internal Consistency: How consistently do the test’s items function?
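Internal consistency, the third kind of evidence above, is often quantified for right/wrong items with the Kuder-Richardson Formula 20 mentioned earlier. A minimal sketch, assuming dichotomous (0/1) item scoring and population variance; the function name and data are illustrative:

```python
def kr20(item_responses):
    """Kuder-Richardson Formula 20: internal-consistency reliability
    for dichotomously scored (0/1) items. `item_responses` is a list
    of per-student lists of 0/1 item scores."""
    k = len(item_responses[0])          # number of items
    n = len(item_responses)             # number of students
    totals = [sum(s) for s in item_responses]
    mean_t = sum(totals) / n
    var_t = sum((t - mean_t) ** 2 for t in totals) / n  # population variance
    pq = 0.0
    for i in range(k):
        p = sum(s[i] for s in item_responses) / n       # proportion correct
        pq += p * (1 - p)
    return (k / (k - 1)) * (1 - pq / var_t)

# Hypothetical results: four students, three items
print(kr20([[1, 1, 1], [1, 1, 0], [1, 0, 0], [0, 0, 0]]))
```

Values near 1.0 indicate items that function consistently; values near 0 indicate noisy, inconsistent items.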

  6. Noise

  7. 1. Formatting Do students have enough space to write their response?

  8. Text or features that pull the student out of the test create noise. Question stem on one page, choices on another Three choices on one page, fourth choice on second page

  9. 2. Typos Typos popped up in every department. They happen. “Final Eyes” are the best way to avoid them.

  10. Test from Period 1 Test from Period 2

  11. What accommodations can be made to ensure there is quality control?

  12. 3. Having to hunt for the right answer

  13. Compare with . . .

  14. Using the question to answer the question Two options in the word bank were two-word phrases, so I know they are the right answer for one of these two items

  15. Don’t need to know the answer to know it’s not her . . . or her . . . and we can be pretty sure the president of France isn’t like Bono

  16. 5. Not having one clear answer

  17. 6. Unclear Questions As compared to what? If a student needs to infer what you want, there’s noise.

  18. One assessment does not an assessment system make.

  19. Fairness and Bias Fair tests are accessible and enable all students to show what they know. Bias emerges when features of the assessment itself impede students’ ability to demonstrate their knowledge or skills.

  20. In 1876, General George Custer and his troops fought Lakota and Cheyenne warriors at the Battle of the Little Big Horn. If there had been a scoreboard on hand, which of the following scoreboard representations would have been most accurate at the end of that battle? Soldiers > Indians; Soldiers = Indians; Soldiers < Indians; All of the above scoreboards are equally accurate

  21. My mother’s field is court reporting. Choose the sentence below in which the word field means the same as it does in the sentence above. The first baseman knew how to field his position. Farmer Jones added fertilizer to his field. What field will you enter when school is complete? The doctor checked my field of vision.

  22. What are other attributes of quality assessments?

  23. Implications Generally speaking, schools should perform at least two statistical tests to document evidence of reliability: a correlation coefficient and the SEM. Nitko and Brookhart (2011) recommend 0.85-0.95 for multiple-choice-only tests and 0.65-0.80 for extended response. In the example above, the user will need to understand that a coefficient of 0.80 is low for multiple choice but high for extended response.

  24. Standard Error of Measurement An estimate of how much a student’s score would vary if the student had retaken the test innumerable times.
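The SEM described above is conventionally computed from the score standard deviation and the reliability coefficient as SD × √(1 − reliability). A minimal sketch; the function name and the example numbers are illustrative, not from the slides:

```python
def sem(score_sd, reliability):
    """Standard Error of Measurement: SEM = SD * sqrt(1 - reliability).
    Roughly 68% of a student's hypothetical retakes would fall within
    one SEM of the observed score."""
    return score_sd * (1 - reliability) ** 0.5

# Hypothetical test: score SD of 10 points, reliability of 0.84
print(sem(10, 0.84))  # SEM of 4.0 points
```

Note how the two statistics interact: the higher the reliability, the smaller the SEM, so a score band of "observed score plus or minus one SEM" tightens as the assessment gets less noisy.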
