
Creating Assessments



Presentation Transcript


  1. Creating Assessments AKA how to write a test

  2. Creating Assessments • All good assessments have three key features: • Validity • Reliability • Usability

  3. Reliability Next to validity, reliability is the most important characteristic of assessment results. Why? 1. It provides the consistency to make validity possible. 2. It indicates the degree to which various kinds of generalizations are justifiable.

  4. Reliability • re·li·a·ble adj. Capable of being relied on; dependable. —re·li·a·bil·i·ty or re·li·a·ble·ness n. —re·li·a·bly adv. (American Heritage Dictionary)

  5. Reliability Reliability: the consistency of measurement, i.e. how consistent test scores or other assessment results are from one measurement to another.

  6. Reliability Which is more reliable?

  7. Reliability Classical Test Theory: X = T + e Where: X = observed score T = “true score” e = error

  8. Reliability X = observed score: The score the student receives on the exam. T = “true score”: What the student “really” knows.

  9. Reliability e = error Error variance is the variability that exists in a set of scores and is due to factors other than the one being assessed. • Systematic: errors that are consistent. • Random: errors that have no pattern.
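The X = T + e model and the two kinds of error can be sketched in a short simulation. The numbers here (a true score of 80, a systematic bias of −3, a random spread of 4 points) are made up for illustration; the point is that random error averages out over many measurements while systematic error does not.

```python
import random

random.seed(0)

TRUE_SCORE = 80          # T: what the student "really" knows (hypothetical)
SYSTEMATIC_ERROR = -3    # consistent bias, e.g. a confusingly worded item

def observed_score():
    """X = T + e, where e combines systematic and random error."""
    random_error = random.gauss(0, 4)   # random error: no pattern, centers on zero
    return TRUE_SCORE + SYSTEMATIC_ERROR + random_error

scores = [observed_score() for _ in range(10_000)]
mean_x = sum(scores) / len(scores)

# Averaging many observed scores cancels the random error but not the
# systematic error, so the mean lands near T - 3 (about 77), not near T.
print(round(mean_x, 1))
```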

  10. Reliability e = error Positive error (i.e. raises score): • Lucky guesses. • Items that give clues to the answer. • Cheating (students, aides, teachers).

  11. Reliability e = error Negative error (i.e. lowers score): • Not following directions. • Mis-marking items. • Room climate/atmosphere. • Hunger, fatigue, illness, “need to go potty”. • Assemblies, ball games, fire drills, etc. • Break-up of a relationship.

  12. Circle the figures that are half shaded.

  13. Reliability Determining Reliability: • Test-retest method • Equivalent forms • Split half method • KR-20 method • Interrater reliability • Intrarater reliability
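One of the methods above, KR-20, can be computed directly from a table of right/wrong item responses: KR-20 = (k / (k − 1)) × (1 − Σpq / σ²), where k is the number of items, p is the proportion answering each item correctly, q = 1 − p, and σ² is the variance of students' total scores. A minimal sketch, using invented response data:

```python
from statistics import pvariance

# Each row: one student's item responses (1 = correct, 0 = incorrect).
# These data are made up for illustration.
responses = [
    [1, 1, 1, 0, 1],
    [1, 0, 1, 0, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 1, 0, 0],
    [0, 1, 1, 1, 1],
    [1, 1, 0, 1, 0],
]

def kr20(data):
    """Kuder-Richardson 20 reliability estimate for dichotomous items."""
    k = len(data[0])                            # number of items
    totals = [sum(student) for student in data]
    var_total = pvariance(totals)               # variance of total scores
    sum_pq = 0.0
    for item in range(k):
        p = sum(student[item] for student in data) / len(data)  # proportion correct
        sum_pq += p * (1 - p)
    return (k / (k - 1)) * (1 - sum_pq / var_total)

print(round(kr20(responses), 3))
```

Values closer to 1 indicate more internally consistent results; like all the methods listed, this describes the scores from one administration, not the test itself.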

  14. Reliability Standard Error of Measurement (SEM)= the estimated amount of variation expected in a score.

  15. Reliability Example: If Sara scored 78 on a standardized test with a SEM of 6 we can be: • 68% certain her true score is between 72 and 84 • 95% certain her true score is between 66 and 90 • 99% certain her true score is between 60 and 96
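The arithmetic behind Sara's example is just the normal-curve rule of thumb: about 68%, 95%, and 99% of true scores fall within 1, 2, and 3 SEMs of the observed score.

```python
# Sara's observed score and the test's SEM, from the example above.
score, sem = 78, 6

# Band = observed score plus/minus z standard errors of measurement.
for z, pct in [(1, 68), (2, 95), (3, 99)]:
    low, high = score - z * sem, score + z * sem
    print(f"{pct}% certain the true score is between {low} and {high}")
```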

  16. Reliability Summation of Reliability: • Reliability refers to the results and not to the instrument itself. • Reliability is a necessary but not sufficient condition for validity. • The more reliable the assessment, the better.

  17. Usability The practical aspects of a test cannot be neglected: • Ease of administration • Time • Administration • Scoring • Ease of Interpretation • Availability of equivalent forms • Cost
