CH. 9 MEASUREMENT: SCALING, RELIABILITY, VALIDITY

  1. CH. 9 MEASUREMENT: SCALING, RELIABILITY, VALIDITY

  2. Scaling • Scaling is a procedure for the assignment of numbers (or other symbols) to a property of objects in order to impart some of the characteristics of numbers to the properties in question.
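
A minimal sketch (with hypothetical property values and numeric codes) of what such an assignment of numbers to a property looks like:

```python
# Sketch of scaling: assigning numbers to a property of objects so that
# characteristics of numbers (here, order) are imparted to the property.
# The property values and numeric codes below are hypothetical examples.
size_codes = {"small": 1, "medium": 2, "large": 3}

packages = ["medium", "small", "large", "large"]
scaled = [size_codes[p] for p in packages]
print(scaled)       # [2, 1, 3, 3]
print(max(scaled))  # the numeric codes now support ordering and comparison
```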

  3. Methods of Scaling • Rating scales • Have several response categories and are used to elicit responses with regard to the object, event, or person studied. • Ranking scales • Make comparisons between or among objects, events, or persons and elicit the preferred choices and rankings among them.

  4. Rating Scales • Dichotomous scale • Is used to elicit a Yes or No answer. • Nominal scale

  5. Dichotomous Scale • Do you own a car? • Yes • No

  6. Rating Scales (Cont’d) • Category scale • Uses multiple items to elicit a single response. • Nominal scale

  7. Category Scale • Where in northern California do you reside? • North Bay • South Bay • East Bay • Peninsula • Other (specify:_____________)

  8. Rating Scales (Cont’d) • Likert scale • Is designed to examine how strongly subjects agree or disagree with statements on a 5-point scale. • Interval scale

  9. Likert Scale • My work is very interesting • Strongly disagree • Disagree • Neither agree nor disagree • Agree • Strongly agree
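
A sketch, assuming the usual 1-5 coding of the categories above, of how Likert responses might be converted into a summated score; the second item is a hypothetical addition for illustration:

```python
# Sketch: coding 5-point Likert responses (1 = Strongly disagree ... 5 = Strongly agree)
# and combining items into a summated scale score. Items are illustrative.
LIKERT_CODES = {
    "Strongly disagree": 1,
    "Disagree": 2,
    "Neither agree nor disagree": 3,
    "Agree": 4,
    "Strongly agree": 5,
}

respondent = {
    "My work is very interesting": "Agree",
    "I am proud of the work I do": "Strongly agree",  # hypothetical second item
}

item_scores = [LIKERT_CODES[answer] for answer in respondent.values()]
print(sum(item_scores), sum(item_scores) / len(item_scores))  # total and per-item mean
```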

  10. Rating Scales (Cont’d) • Semantic differential scale • Several bipolar attributes are identified at the extremes of the scale, and respondents are asked to indicate their attitudes. • Interval scale

  11. Semantic Differential

  12. Rating Scales (Cont’d) • Numerical scale • Similar to the semantic differential scale, with the difference that numbers on a 5-point or 7-point scale are provided, with bipolar adjectives at both ends. • Interval scale

  13. Numerical Scale • How pleased are you with your new real estate agent? • Extremely Pleased 7 6 5 4 3 2 1 Extremely Displeased

  14. Rating Scales (Cont’d) • Itemized rating scale • A 5-point or 7-point scale with anchors, as needed, is provided for each item and the respondent states the appropriate number on the side of each item, or circles the relevant number against each item. • Interval scale

  15. Itemized Rating Scale • 1 = Very Unlikely 2 = Unlikely 3 = Neither Unlikely Nor Likely 4 = Likely 5 = Very Likely • 1. I will be changing my job within the next 12 months

  16. Rating Scales (Cont’d) • Fixed or constant sum scale • Respondents are asked to distribute a fixed number of points across various items. • Ordinal scale

  17. Fixed or Constant-Sum Scales
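
A minimal sketch, assuming the points to be distributed total 100, of how a constant-sum response could be checked:

```python
# Sketch: validating a constant-sum response (assumed fixed total: 100 points).
# The attributes are hypothetical examples.
allocation = {"price": 40, "quality": 35, "brand": 15, "service": 10}

total = sum(allocation.values())
if total != 100:
    raise ValueError(f"Points must sum to 100, got {total}")
print("Valid constant-sum response:", allocation)
```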

  18. Rating Scales (Cont’d) • Stapel scale • This scale simultaneously measures both the direction and the intensity of the attitude toward the items under study. • Interval scale

  19. Stapel Scales

  20. Rating Scales (Cont’d) • Graphic rating scale • A graphical representation helps the respondents to indicate their answers to a particular question by placing a mark at the appropriate point on the line. • Ordinal scale

  21. Graphic Rating Scales

  22. Ranking Scales • Paired Comparison • Used when, among a small number of objects, respondents are asked to choose between two objects at a time.

  23. Paired-Comparison Scale
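
A sketch of how the pairs in a paired-comparison task can be enumerated; with n objects there are n(n - 1)/2 pairs, which is why the method suits only a small number of objects. The objects listed are hypothetical.

```python
# Sketch: enumerating all pairs for a paired-comparison task.
# With n objects there are n * (n - 1) / 2 pairs, so the task grows quickly.
from itertools import combinations

objects = ["Brand A", "Brand B", "Brand C", "Brand D"]  # hypothetical objects
pairs = list(combinations(objects, 2))

print(len(pairs))  # 6 pairs for 4 objects (4 * 3 / 2)
for a, b in pairs:
    print(f"Which do you prefer: {a} or {b}?")
```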

  24. Ranking Scales (Cont’d) • Forced Choice • Enables respondents to rank objects relative to one another, among the alternatives provided.

  25. Forced Choice

  26. Ranking Scales (Cont’d) • Comparative Scale • Provides a benchmark or a point of reference to assess attitudes toward the current object, event, or situation under study.

  27. Comparative Scale

  28. Goodness of Measures • Reliability • Indicates the extent to which the measure is without bias (error free) and hence ensures consistent measurement across time and across the various items in the instrument.

  29. Reliability • Stability of measures: • Test-retest reliability • Parallel-form reliability • Correlation • Internal consistency of measures: • Interitem consistency reliability • Cronbach’s alpha • Split-half reliability • Correlation
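
A minimal sketch of interitem consistency reliability via Cronbach's alpha, computed from the standard formula alpha = (k / (k - 1)) * (1 - sum of item variances / variance of the total score); the score matrix is hypothetical.

```python
# Sketch: Cronbach's alpha for a respondents-by-items score matrix.
# alpha = (k / (k - 1)) * (1 - sum(item variances) / variance(total scores))
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5 respondents x 3 Likert items, coded 1-5
scores = np.array([
    [4, 5, 4],
    [3, 3, 2],
    [5, 5, 5],
    [2, 2, 3],
    [4, 4, 4],
])
print(round(cronbach_alpha(scores), 3))  # values near 1 indicate high internal consistency
```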

  30. Goodness of Measures (Cont’d) • Validity • Ensures the ability of a scale to measure the intended concept. • Content validity • Criterion-related validity • Construct validity

  31. Validity • Content validity • Ensures that the measure includes an adequate and representative set of items that tap the concept. • Can be assessed by a panel of judges

  32. Validity (Cont’d) • Criterion-related validity • Is established when the measure differentiates individuals on a criterion it is expected to predict. • Concurrent validity: established when the scale differentiates individuals who are known to be different. • Predictive validity: indicates the ability of the measuring instrument to differentiate among individuals with reference to a future criterion. • Correlation
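
A sketch of how criterion-related (predictive) validity might be examined by correlating scale scores with a criterion measured later; the scores and criterion values are hypothetical.

```python
# Sketch: predictive validity as the correlation between scale scores now
# and a criterion measured later. All data are hypothetical.
from scipy.stats import pearsonr

aptitude_scores = [72, 65, 88, 90, 55, 78]        # measure at selection time
job_performance = [3.4, 3.1, 4.5, 4.2, 2.8, 3.9]  # criterion one year later

r, p_value = pearsonr(aptitude_scores, job_performance)
print(f"r = {r:.2f}, p = {p_value:.3f}")  # a strong correlation supports predictive validity
```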

  33. Validity (Cont’d) • Construct validity • Testifies to how well the results obtained from the use of the measure fit the theories around which the test is designed. • Convergent validity: established when the scores obtained with two different instruments measuring the same concept are highly correlated. • Discriminant validity: established when, based on theory, two variables are predicted to be uncorrelated, and the scores obtained by measuring them are indeed empirically found to be so. • Correlation, factor analysis, convergent-discriminant techniques, multitrait-multimethod analysis
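
A sketch, using simple correlations, of the convergent/discriminant pattern described above: two instruments measuring the same concept should correlate highly, while theoretically unrelated variables should not. All scores are hypothetical.

```python
# Sketch: convergent vs. discriminant validity via correlations (hypothetical data).
import numpy as np

instrument_a = np.array([10, 14, 9, 16, 12, 15])   # concept X, instrument A
instrument_b = np.array([11, 15, 10, 17, 12, 14])  # concept X, instrument B
unrelated_y  = np.array([6, 3, 4, 6, 7, 4])        # theoretically unrelated variable

convergent   = np.corrcoef(instrument_a, instrument_b)[0, 1]  # expected to be high
discriminant = np.corrcoef(instrument_a, unrelated_y)[0, 1]   # expected to be near zero
print(f"convergent r = {convergent:.2f}, discriminant r = {discriminant:.2f}")
```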

  34. Understanding Validity and Reliability
