This guide explores the importance of reliability and validity in measurement concepts, including test-retest reliability, internal consistency reliability, interrater reliability, and construct validity. Learn about different reliability coefficients and validity indicators in research. The text also covers measurement scales and their significance in research methodologies.
Ch. 5 Measurement Concepts
Reliability of measures
• Reliability: the consistency or stability of a measure of behavior
• Reliability is a necessary but not sufficient condition for validity
• Measure (observed score) = true score + measurement error
• Unreliable measures contain a large amount of measurement error
• The stability of a measure is assessed using correlation coefficients
• Pearson product-moment correlation coefficient (r): ranges from -1.00 to +1.00
• A positive r indicates a positive relationship; a negative r indicates a negative relationship
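For reference, the Pearson product-moment correlation coefficient named on this slide is conventionally defined for paired scores X and Y as follows (a standard formula, not given in the original slides):

```latex
r = \frac{\sum_{i=1}^{n} (X_i - \bar{X})(Y_i - \bar{Y})}
         {\sqrt{\sum_{i=1}^{n} (X_i - \bar{X})^2}\;\sqrt{\sum_{i=1}^{n} (Y_i - \bar{Y})^2}}
```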
Reliability of measures
• Test-retest reliability: measuring the same individuals at two points in time and obtaining similar results
• Similar results across administrations suggest the scores reflect true scores rather than measurement error
• Reliability coefficients should be at least 0.80
• Drawback: the measure must be administered twice, and scores at the second administration can be influenced by the first (e.g., people may remember their earlier responses)
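A minimal sketch of how a test-retest coefficient could be computed; the scores for eight hypothetical individuals measured twice are invented for illustration:

```python
# Hypothetical sketch: test-retest reliability as the correlation between
# scores from the same people measured at two points in time.
from statistics import correlation  # Python 3.10+

time1 = [12, 18, 25, 30, 22, 15, 28, 20]   # scores at time 1
time2 = [14, 17, 27, 29, 21, 16, 26, 22]   # same individuals at time 2

r = correlation(time1, time2)  # Pearson product-moment correlation
print(f"Test-retest reliability: r = {r:.2f}")  # values of at least .80 are desirable
```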
Reliability of measures
• Internal consistency reliability: assessment of reliability using responses at only one point in time
• Devised to avoid the sources of error that arise in test-retest designs
• Depends on how consistently the items on the test measure what they are supposed to measure
• Split-half reliability: comparing scores on one half of the items with scores on the other half
• The halves can be formed, for example, by comparing odd-numbered items with even-numbered items
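A minimal split-half sketch, assuming a hypothetical 6-item scale and invented responses; the two halves are formed from odd- and even-numbered items as described above:

```python
# Hypothetical sketch of split-half reliability: correlate the odd-item
# total with the even-item total for each respondent.
from statistics import correlation  # Python 3.10+

# Each inner list holds one person's responses to a 6-item scale (invented data).
responses = [
    [4, 5, 3, 4, 5, 4],
    [2, 1, 2, 2, 1, 3],
    [5, 4, 5, 5, 4, 4],
    [3, 3, 2, 3, 3, 2],
    [1, 2, 1, 2, 2, 1],
]

odd_totals  = [sum(p[0::2]) for p in responses]  # items 1, 3, 5
even_totals = [sum(p[1::2]) for p in responses]  # items 2, 4, 6

half_r = correlation(odd_totals, even_totals)
print(f"Split-half correlation: r = {half_r:.2f}")
```

In practice the half-test correlation is often adjusted upward (for example with the Spearman-Brown formula) to estimate the reliability of the full-length test; the slides do not cover that adjustment.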
Reliability of measures
• Interrater reliability: the extent to which raters agree in their observations
• High interrater reliability means different raters make similar observations
__________________________________________
• Reactivity of measures
• A potential problem when measuring behavior: awareness of being measured changes an individual's behavior
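One simple way to quantify interrater agreement is the proportion of observations on which two raters assign the same code; the sketch below uses invented codes from two hypothetical observers:

```python
# Hypothetical sketch of interrater reliability as simple percent agreement:
# two observers independently code the same 10 behaviors (invented data).
rater_a = ["aggressive", "neutral", "aggressive", "neutral", "neutral",
           "aggressive", "neutral", "aggressive", "aggressive", "neutral"]
rater_b = ["aggressive", "neutral", "aggressive", "aggressive", "neutral",
           "aggressive", "neutral", "aggressive", "neutral", "neutral"]

agreements = sum(a == b for a, b in zip(rater_a, rater_b))
agreement_rate = agreements / len(rater_a)
print(f"Percent agreement: {agreement_rate:.0%}")  # higher agreement -> higher interrater reliability
```

Percent agreement is the simplest index; researchers often report chance-corrected statistics such as Cohen's kappa instead, which the slides do not discuss.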
Construct validity of measures
• Construct validity: the adequacy of the operational definition of a variable; a measure has construct validity if it measures what it is supposed to measure
• Indicators of construct validity
• Face validity: the measure appears, on its face, to measure what it is supposed to measure
• Criterion-oriented validity: the relationship between scores on the measure and some criterion
Construct validity of measures
• Criterion-oriented validity
• Predictive validity: the extent to which the measure predicts behaviors it should be able to predict
  e.g., the GRE was developed to predict performance in graduate programs
• Concurrent validity: whether two or more groups of people differ on the measure in expected ways; a measure with concurrent validity allows a researcher to distinguish between people at the present time
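A small sketch of a concurrent-validity check, assuming a hypothetical shyness scale and invented scores for two groups expected to differ at the present time:

```python
# Hypothetical sketch of concurrent validity: groups expected to differ on the
# construct should differ on the measure now (invented data, higher = shyer).
from statistics import mean

clinically_shy   = [42, 38, 45, 40, 44, 39]   # group expected to score high
non_shy_controls = [21, 25, 19, 27, 23, 24]   # group expected to score low

print(f"Shy group mean:     {mean(clinically_shy):.1f}")
print(f"Control group mean: {mean(non_shy_controls):.1f}")
# A clear difference in the expected direction is consistent with concurrent
# validity; a formal analysis would use an inferential test such as a t test.
```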
Construct validity of measures
• Criterion-oriented validity
• Convergent validity: the extent to which scores on the measure are related to scores on other measures of the same or similar constructs; measures that should be related are, in fact, related
• Discriminant validity: demonstrated when the measure is not related to variables with which it should not be related; measures that should NOT be related are, in fact, NOT related
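A sketch of how convergent and discriminant validity might show up as a pattern of correlations; the scale names and all scores below are invented for illustration:

```python
# Hypothetical sketch: convergent and discriminant validity as correlations
# between a new measure and (a) a related measure, (b) an unrelated variable.
from statistics import correlation  # Python 3.10+

new_anxiety_scale   = [30, 22, 35, 18, 27, 33, 20, 25]
established_anxiety = [28, 20, 36, 19, 25, 31, 22, 24]  # same construct
shoe_size           = [10, 10, 10, 10,  8,  8,  9,  7]  # unrelated variable

convergent_r   = correlation(new_anxiety_scale, established_anxiety)
discriminant_r = correlation(new_anxiety_scale, shoe_size)

print(f"Convergent validity (should be high):        r = {convergent_r:.2f}")
print(f"Discriminant validity (should be near zero): r = {discriminant_r:.2f}")
```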
Variables and measurement scales
• Nominal scales
• Ordinal scales
• Interval scales
• Ratio scales
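As an illustration (the example variables are common textbook ones, not taken from the slides), the four scales can be summarized as follows:

```python
# Hypothetical illustration: an example variable at each scale of measurement,
# with the key property each scale adds (not from the original slides).
measurement_scales = {
    "nominal":  {"example": "eye color",              "property": "categories only, no order"},
    "ordinal":  {"example": "class rank",             "property": "ordered categories, unequal intervals"},
    "interval": {"example": "temperature in Celsius", "property": "equal intervals, no true zero"},
    "ratio":    {"example": "reaction time in ms",    "property": "equal intervals and a true zero"},
}

for scale, info in measurement_scales.items():
    print(f"{scale:>8}: {info['example']:<25} ({info['property']})")
```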
Importance of measurement scales
• The scale used determines the amount of information provided by a particular measure
• Interval and ratio scales allow the researcher to make quantitative distinctions, such as statements about the size of the difference between two scores