
Measurement




  1. Measurement MANA 4328 Dr. George Benson benson@uta.edu

  2. Basic Concepts • The Normal Curve • Many people taking a test • One person taking the test many times • 95% Confidence Intervals • Variability and comparing test scores • Mean / Standard Deviation • Z scores and Percentiles • Correlation coefficients • Standard Error of Measurement

  3. The Normal Curve (figure, not to scale) Rounded percentiles at each z score: z = -3: 0.1%, z = -2: 2%, z = -1: 16%, z = 0: 50%, z = +1: 84%, z = +2: 98%, z = +3: 99.9%

  4. Variability • How did an individual score compare to others? • How to compare scores across different tests?


  7. Z Score or “Standard” Score Z Score = (Score – Mean) / Std. Dev.
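As a quick illustration of the formula, here is a minimal sketch in Python; the test mean of 70 and standard deviation of 8 are made-up values, not taken from the slides.

```python
def z_score(score, mean, std_dev):
    """How many standard deviations a raw score sits above (or below) the mean."""
    return (score - mean) / std_dev

# Hypothetical test with mean 70 and standard deviation 8
print(z_score(80, 70, 8))   # 1.25 -> 1.25 SDs above the mean
print(z_score(62, 70, 8))   # -1.0 -> 1 SD below the mean
```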

  8. The Normal Curve (figure, not to scale, showing where Jim, Bob, Linda, and Sue fall on the curve)

  9. Z scores and Percentiles • Look up z scores on a “standard normal table” • Corresponds to the proportion of area under the normal curve • Linda has a z score of 1.25 • Standard normal table value = .8944 • Percentile score of 89.44% • Linda scored better than 89.44% of test takers
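Instead of a printed standard normal table, the same lookup can be done with the standard normal CDF. A minimal sketch using Python's built-in math.erf; Linda's z score of 1.25 comes from the slide, everything else is illustrative.

```python
import math

def percentile_from_z(z):
    """Standard normal CDF: proportion of test takers scoring below z."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

linda_z = 1.25
print(round(percentile_from_z(linda_z), 4))  # ~0.8944 -> roughly the 89th percentile
```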

  10. Proportion Under the Normal Curve (figure, not to scale, comparing Jim, Bob, Linda, and Sue)

  11. Correlation • How strongly are two variables related? • Correlation coefficient (r) • Ranges from -1.00 to 1.00 • Shared variation = r² • If two variables are correlated at r = .6, then they share .6² = .36, or 36%, of the total variance. • Illustrated using scatter plots • Used to test the consistency and accuracy of a measure
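A minimal sketch of computing r and r² with Python's statistics module (requires Python 3.10+); the paired test scores and performance ratings are invented for illustration.

```python
from statistics import correlation  # Python 3.10+

# Hypothetical paired data: applicants' test scores and later job performance ratings
test_scores = [52, 61, 70, 74, 80, 88]
performance = [2.9, 3.1, 3.4, 3.3, 3.9, 4.2]

r = correlation(test_scores, performance)
print(round(r, 2))       # correlation coefficient, between -1.00 and 1.00
print(round(r ** 2, 2))  # shared variance (r squared)
```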

  12. Correlation Scatterplots Figure 5.3

  13. EEOC Uniform Guidelines Reliability – consistency of the measure: If the same person takes the test again, will he/she earn the same score? Potential contaminations: • Test taker’s physical or mental state • Environmental factors • Test forms • Multiple raters

  14. Reliability: Basic Concepts • Observed score = true score + error • Error is anything that impacts test scores that is not the characteristic being measured • Reliability measures error • The lower the error, the better the measure • Things that can be observed are easier to measure than things that are inferred

  15. Reliability Test Methods • Test – retest • Alternate or parallel form • Inter-rater • Internal consistency • Methods of calculating correlations between test items, administrations, or scoring.
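For example, test-retest reliability can be estimated as the correlation between two administrations of the same test. A minimal sketch with invented scores (statistics.correlation requires Python 3.10+).

```python
from statistics import correlation  # Python 3.10+

# Hypothetical scores from the same six applicants tested on two occasions
first_administration = [48, 55, 63, 70, 74, 81]
second_administration = [50, 53, 65, 68, 76, 80]

print(round(correlation(first_administration, second_administration), 2))
# a value near 1.00 indicates a consistent (reliable) measure
```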

  16. Summary of Types of Reliability

  17. Standard Error of Measure (SEM) • Estimate of the potential error for an individual test score • Uses variability AND reliability to establish a confidence interval around a score • A 95% Confidence Interval (CI) means that if one person took the test 100 times, 95 of the scores would fall within the upper and lower bounds. SEM = SD * √(1 - reliability) • A score falling outside the CI has only a 5% chance of occurring by chance, so the difference is considered “significant”.

  18. Standard Error of Measure (SEM) SEM = SD * √(1 - reliability) Assume a mathematical ability test has a reliability of .9 and a standard deviation of 10: SEM = 10 * √(1 - .9) = 3.16 If an applicant scores a 50, the SEM is the degree to which the score would vary if she were retested on another day. Plus or minus 2 SEM gives a ~95% confidence interval: 50 + 2(3.16) = 56.32 and 50 - 2(3.16) = 43.68
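The worked example above can be reproduced in a few lines; a minimal sketch, using only the SD of 10, reliability of .9, and score of 50 given on the slide.

```python
import math

def sem(std_dev, reliability):
    """Standard error of measurement: SEM = SD * sqrt(1 - reliability)."""
    return std_dev * math.sqrt(1 - reliability)

error = sem(10, 0.9)                           # about 3.16
lower, upper = 50 - 2 * error, 50 + 2 * error  # ~95% confidence interval
print(round(error, 2), round(lower, 2), round(upper, 2))  # 3.16 43.68 56.32
```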

  19. Standard Error of Measure • The difference between two scores should not be considered significant unless the difference is at least twice the standard error. • If an applicant scores 2 points above a passing score and the SEM is 3.16, then there is a good chance of making a bad selection choice. • If two applicants score within 2 points of one another and the SEM is 3.16, then it is possible that the difference is due to chance.

  20. Standard Error of Measure • The higher the reliability, the lower the SEM
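To see this relationship, the sketch below holds the SD at 10 and varies the reliability; all numbers are illustrative.

```python
import math

# With SD fixed at 10, SEM shrinks as reliability rises
for reliability in (0.50, 0.70, 0.80, 0.90, 0.95):
    print(reliability, round(10 * math.sqrt(1 - reliability), 2))
# 0.5 -> 7.07, 0.7 -> 5.48, 0.8 -> 4.47, 0.9 -> 3.16, 0.95 -> 2.24
```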

  21. Confidence Intervals Do the applicants differ when SEM = 2? Do the applicants differ when SEM = 4?
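One way to answer questions like these is to apply the twice-the-SEM rule from slide 19; a minimal sketch with hypothetical applicant scores of 84 and 79.

```python
def differ_meaningfully(score_a, score_b, sem):
    """Treat a score difference as significant only if it exceeds twice the SEM."""
    return abs(score_a - score_b) > 2 * sem

# Hypothetical applicants scoring 84 and 79
print(differ_meaningfully(84, 79, sem=2))  # True: 5-point gap exceeds 2 * 2 = 4
print(differ_meaningfully(84, 79, sem=4))  # False: 5-point gap is within 2 * 4 = 8
```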

  22. Validity • Accuracy of the measure: Are you measuring what you intend to measure? OR Does the test measure a characteristic related to job performance? • Types of test validity • Criterion – test predicts job performance • Predictive or Concurrent • Content – test is representative of the job

  23. Tests of Criterion-Related Validity • Predictive validity (“Future Employee or Follow-up Method”): test applicants at Time 1, then measure the performance of those hired 6-12 months later at Time 2 • Concurrent validity (“Present Employee Method”): test existing employees AND measure their performance at the same time (Time 1)

  24. Types of Validity (diagram): criterion-related validity links selection tests to job performance; content-related validity links selection tests to job duties and KSA’s

  25. Reliability vs. Validity • Validity Coefficients • Reject below .11 • Very useful above .21 • Rarely exceed .40 • Reliability Coefficients • Reject below .70 • Very useful above .90 • Rarely approach 1.00 Why the difference?
