
Reliability
Mr. Rajesh Gunesh


Presentation Transcript


  1. Reliability

  2. Reliability
  • Reliability means repeatability or consistency
  • A measure is considered reliable if it would give us the same result over and over again (assuming that what we are measuring isn’t changing!)

  3. Definition of Reliability
  • Reliability usually “refers to the consistency of scores obtained by the same persons when they are reexamined with the same test on different occasions, or with different sets of equivalent items, or under other variable examining conditions” (Anastasi & Urbina, 1997)
  • Dependable, consistent, stable, constant
  • Gives the same result over and over again

  4. Validity vs Reliability

  5. Variability and reliability
  • What is the acceptable range of error in measurement?
  • Bathroom scale: ±1 kg
  • Body thermometer: ±0.2 °C
  • Baby weight scale: ±20 g
  • Clock with hands: ±5 min
  • Outside thermometer: ±1 °C

  6. Variability and reliability
  We are completely comfortable with a bathroom scale accurate to ±1 kg, since we know that individual weights vary over far greater ranges than this, and typical changes from day to day are about the same order of magnitude.

  7. Reliability
  • True score theory
  • Measurement error
  • Theory of reliability
  • Types of reliability
  • Standard error of measurement

  8. True Score Theory

  9. True Score Theory
  • Every measurement is an additive composite of two components:
  • True ability (or the true level) of the respondent on that measure
  • Measurement error

  10. True Score Theory
  • Individual differences in test scores reflect:
  • “True” differences in the characteristic being assessed
  • “Chance” or random errors

  11. True Score Theory
  • What might be considered error variance in one situation may be true variance in another (e.g., anxiety)

  12. Can we observe the true score?
  X = T + e_X
  • We only observe the measurement X; we don’t observe what’s on the right-hand side of the equation (only God knows what those values are)

  13. True Score Theory
  var(X) = var(T) + var(e_X)
  • The variability of the measure is the sum of the variability due to the true score and the variability due to random error
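
A quick numerical check of this decomposition (a minimal sketch; the normal distributions and variance values are illustrative assumptions, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.normal(50, 10, 100_000)   # true scores: var(T) ~ 100
e = rng.normal(0, 5, 100_000)     # random error, independent of T: var(e) ~ 25
X = T + e                         # observed scores

print(X.var())             # ~125
print(T.var() + e.var())   # ~125, confirming var(X) = var(T) + var(e_X)
```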

  14. What is error variance?
  Conditions irrelevant to the purpose of the test:
  • Environment (e.g., quiet vs. noisy)
  • Instructions (e.g., written vs. verbal)
  • Time limits (e.g., limited vs. unlimited)
  • Rapport with the test taker
  • All test scores have error variance

  15. Measurement Error
  • Measurement error can be:
  • Random
  • Systematic

  16. Measurement Error

  17. Measurement Error
  • Random error: effects are NOT consistent across the whole sample; they elevate some scores and depress others
  • Random error only adds noise; it does not affect the mean score

  18. Measurement Error
  • Systematic error: effects are generally consistent across a whole sample
  • Example: environmental conditions for group testing (e.g., temperature of the room)
  • Generally either consistently positive (elevates scores) or negative (depresses scores)
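
A small simulation contrasting the two error types (illustrative values, not from the slides): systematic error shifts the mean, while random error inflates the variance.

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.normal(50, 10, 100_000)           # true scores

X_random = T + rng.normal(0, 5, T.size)   # random error: noise that cancels on average
X_system = T + 3.0                        # systematic error: e.g., a too-warm room adds 3 points

print(T.mean(), X_random.mean(), X_system.mean())  # only systematic error shifts the mean
print(T.var(), X_random.var(), X_system.var())     # only random error inflates the variance
```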

  19. Measurement Error

  20. Measurement Error

  21. Theory of Reliability

  22. Reliability
  Reliability = (variance of the true score) / (variance of the measure)
  Reliability = Var(T) / Var(X)

  23. Reliability = subject variability / (subject variability + measurement error)
  Reliability = Var(T) / (Var(T) + Var(e)) = Var(T) / Var(X)
  How big is an estimate of reliability? Since 0 ≤ Var(T) ≤ Var(X), the ratio lies between 0 and 1.

  24. We can’t compute reliability directly because we can’t calculate the variance of the true score, but we can get an estimate of it.

  25. Estimate of Reliability
  • Observations are related to each other to the degree that they share true scores. For example, consider the correlation between two measurements X1 and X2:

  26. [Figure: two observed measurements X1 and X2 sharing the same true score]
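
A sketch of why this works (illustrative simulation, not from the slides): two parallel measurements that share the same true scores correlate at Var(T)/Var(X), which is exactly the reliability.

```python
import numpy as np

rng = np.random.default_rng(2)
T = rng.normal(50, 10, 100_000)      # shared true scores
X1 = T + rng.normal(0, 5, T.size)    # first measurement
X2 = T + rng.normal(0, 5, T.size)    # second measurement, independent error

print(np.corrcoef(X1, X2)[0, 1])     # ~0.80
print(T.var() / X1.var())            # ~0.80 = Var(T)/Var(X), the reliability
```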

  27. Types of Reliability
  • Test-Retest Reliability: used to assess the consistency of a measure from one time to another
  • Alternate-Form Reliability: used to assess the consistency of the results of two tests constructed the same way from the same content domain

  28. Types of Reliability
  • Split-Half Reliability: used to assess the consistency of results across items within a test by splitting them into two equivalent halves
  • Kuder-Richardson Reliability: used to assess the extent to which items are homogeneous when items have a dichotomous response, e.g., “yes/no” items

  29. Types of Reliability
  • Cronbach’s Alpha (α) Reliability: compares the consistency of responses across all items on the scale (Likert scale or linear graphic response format)
  • Inter-Rater or Inter-Scorer Reliability: used to assess the concordance between two or more observers’ scores of the same event or phenomenon for observational data

  30. Test-Retest Reliability
  • Definition: the same test is administered to the same individual (or sample) on two different occasions

  31. Test-Retest Reliability: used to assess the consistency of a measure from one time to another

  32. Test-Retest Reliability
  • Statistics used: Pearson r or Spearman rho
  • Warning: correlation decreases over time because error variance INCREASES (and may change in nature)
  • The closer in time the two scores were obtained, the more the factors that contribute to error variance are the same
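
A minimal sketch of computing these statistics with SciPy (the paired scores below are hypothetical):

```python
from scipy.stats import pearsonr, spearmanr

# hypothetical scores for the same ten people tested on two occasions
time1 = [12, 15, 11, 18, 14, 20, 16, 13, 17, 19]
time2 = [13, 14, 12, 17, 15, 19, 15, 14, 18, 18]

r, _ = pearsonr(time1, time2)      # test-retest reliability (interval data)
rho, _ = spearmanr(time1, time2)   # rank-based alternative
print(r, rho)
```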

  33. Test-Retest Reliability
  • Warning: circumstances may be different for both the test-taker and the physical environment
  • Transfer effects such as practice and memory might play a role on the second testing occasion

  34. Alternate-Form Reliability
  • Definition: two equivalent forms of the same measure are administered to the same group on two different occasions

  35. Alternate-Form Reliability: used to assess the consistency of the results of two tests constructed the same way from the same content domain

  36. Alternate-Form Reliability
  • Statistic used: Pearson r or Spearman rho
  • Warning: even when randomly chosen, the two forms may not be truly parallel
  • It is difficult to construct equivalent tests

  37. Alternate-Form Reliability
  • Warning: even when randomly chosen, the two forms may not be truly parallel
  • It is difficult to construct equivalent tests
  • The tests should have the same number of items, the same scoring procedure, uniform content, and comparable item difficulty

  38. Split-Half Reliability
  • Definition: randomly divide the test into two forms; calculate scores for Forms A and B; calculate Pearson r as the index of reliability

  39. Split-Half Reliability

  40. Split-Half Reliability (Spearman-Brown formula)
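
The formula image did not survive the transcript; the standard Spearman-Brown correction, presumably what this slide showed, projects the half-test correlation r_hh up to the full test length:

```latex
r_{SB} = \frac{2\,r_{hh}}{1 + r_{hh}}
```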

  41. Split-Half Reliability
  • Warning: the correlation between the odd and even scores is generally an underestimate of the reliability coefficient because it is based on only half the test.
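
A sketch of the full split-half procedure, including the Spearman-Brown correction above (the simulated dichotomous responses are illustrative assumptions, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(3)
ability = rng.normal(0, 1, (200, 1))                             # 200 simulated examinees
items = (ability + rng.normal(0, 1, (200, 20)) > 0).astype(int)  # 20 dichotomous items

odd = items[:, 0::2].sum(axis=1)     # score on odd-numbered items
even = items[:, 1::2].sum(axis=1)    # score on even-numbered items

r_hh = np.corrcoef(odd, even)[0, 1]  # half-test correlation (the underestimate)
r_sb = 2 * r_hh / (1 + r_hh)         # Spearman-Brown corrected full-length estimate
print(r_hh, r_sb)
```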

  42. Cronbach’s Alpha & Kuder-Richardson 20
  • Both measure the extent to which items on a test are homogeneous; each is equivalent to the mean of all possible split-half combinations
  • Kuder-Richardson 20 (KR-20): for dichotomous data
  • Cronbach’s alpha: for non-dichotomous data
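
A sketch of how this coefficient can be computed (not from the slides; assumes scores arranged as a NumPy examinees-by-items matrix):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Coefficient alpha for an examinees-by-items score matrix.

    For 0/1 (dichotomous) items this reduces to KR-20.
    """
    k = items.shape[1]
    sum_item_vars = items.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of total test scores
    return (k / (k - 1)) * (1 - sum_item_vars / total_var)
```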

  43. Cronbach’s alpha (α)

  44. Cronbach’s alpha (α) (coefficient alpha)
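
The formula image is missing from the transcript; the standard coefficient alpha for a k-item test, presumably what the slide showed, is:

```latex
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_i^2}{\sigma_X^2}\right)
```

where σ_i² is the variance of item i and σ_X² is the variance of the total scores.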

  45. Kuder-Richardson (KR-20)
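
Again the image is missing; the standard KR-20 formula, presumably what the slide showed, is:

```latex
KR\text{-}20 = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} p_i q_i}{\sigma_X^2}\right)
```

where p_i is the proportion answering item i correctly and q_i = 1 − p_i.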

  46. Inter-Rater or Inter-Observer Reliability: used to assess the degree to which different raters or observers give consistent estimates of the same phenomenon

  47. Inter-Rater Reliability
  • Definition: measures the extent to which multiple raters or judges agree when providing a rating of behavior

  48. Inter-Rater Reliability
  • Statistics used:
  • Nominal/categorical data: kappa statistic
  • Ordinal data: Kendall’s tau, to see if the pairs of ranks for each of several individuals are related
  • Example: two judges rate 20 elementary school children on an index of hyperactivity and rank-order them

  49. Inter-Rater Reliability
  • Statistics used:
  • Interval or ratio data: Pearson r, using data obtained from the hyperactivity index (see the sketch below)
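
A sketch covering all three cases with SciPy and scikit-learn (the judges’ ratings are hypothetical):

```python
from scipy.stats import kendalltau, pearsonr
from sklearn.metrics import cohen_kappa_score

# hypothetical ratings of the same eight children by two judges
judge1 = [2, 3, 1, 4, 2, 5, 3, 4]
judge2 = [2, 3, 2, 4, 1, 5, 3, 4]

kappa = cohen_kappa_score(judge1, judge2)  # nominal/categorical agreement
tau, _ = kendalltau(judge1, judge2)        # ordinal (rank) agreement
r, _ = pearsonr(judge1, judge2)            # interval or ratio data
print(kappa, tau, r)
```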

  50. Factors affecting Reliability
  • Whether a measure is speeded
  • Variability in individual scores
  • Ability level
