
Developing a Measure: scales, validity and reliability


Presentation Transcript


  1. Developing a Measure: scales, validity and reliability

  2. Types of Measures • Observational • Physiological and Neuroscientific • Self-report --majority of social & behavioral science research

  3. Self-report measures • People’s replies to written questionnaires or interviews • Can measure: • thoughts (cognitive self-reports) • feelings (affective self-reports) • actions (behavioral self-reports)

  4. Self-Report Self-reported momentary emotions: the Positive and Negative Affect Schedule (PANAS) (Watson, Clark & Tellegen, 1988)

  5. Scales of Measurement [Slide figure: examples of each scale type] • Nominal: Hot = 1, Warm = 3, Cold = 2 • Ordinal: 1st Place Sample, 2nd Place Sample, 3rd Place Sample, 4th Place Sample, 5th Place Sample • Interval and Ratio: illustrated on the slide (figure only)

  6. Scales of Measurement: Four Types Distinction between scales is due to the meaning of the numbers • Nominal Scale—numbers assigned are only labels. • Ordinal Scale—a rank ordering. • Interval Scale—each number is equidistant from the next, but there is no true zero point (the majority of measures). • Ratio Scale—each number is equidistant and there is a true zero point.

  7. Scales of Measurement Type of Scale Determines Statistics and Power
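A minimal illustration of the point above (the numbers are hypothetical, not from the slides): ordinal ranks support a median but not a meaningful mean, while interval data support a mean but not meaningful ratios.

```python
import statistics

# Ordinal data: finishing places. Ranks order the samples, but the gaps
# between ranks need not be equal, so the median is the safer summary.
places = [1, 2, 3, 4, 5]
print(statistics.median(places))   # 3

# Interval data: temperatures in Celsius. Gaps are equal, so the mean is
# meaningful; but there is no true zero, so "20 is twice as hot as 10" is not.
temps_c = [10.0, 20.0, 30.0]
print(statistics.mean(temps_c))    # 20.0
```

Choosing a statistic the scale cannot support (e.g., averaging nominal codes like Hot = 1, Cold = 2) produces numbers with no interpretation, which is why scale type constrains both the statistics and the power available.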

  8. Attributes of Good Measures • Valid: the measure assesses the construct it is intended to assess and is not influenced by other factors. • Reliable: the measure is consistent, providing the same result repeatedly.

  9. Reliability and Validity • Reliable but not Valid: a dependable measure that doesn't measure what it should. Example: arm length as a measure of self-esteem. • Valid but not Reliable: measures what it should, but not dependably. Example: a stone as a measure of weight in Great Britain.

  10. Reliability vs. Validity Visual [Slide figure: target diagram] Central dot = the construct we are seeking to measure

  11. Reliability Assessments 1 • Test-Retest Reliability: the measure is administered at two points in time to assess consistency. Works best for constructs that do not change over time (e.g., intelligence). • Internal Consistency Reliability: consistency of results across items within the same test administration session. 1. Intercorrelation: Cronbach's α (> .65 is preferred) 2. Split-halves reliability
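Both internal-consistency indices above can be computed directly. A sketch in Python, assuming NumPy and a hypothetical respondents-by-items array (the function names are mine, not from the slides):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) array:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score).
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def split_half(items):
    """Split-halves reliability: correlate the sums of the odd-position
    and even-position items, then apply the Spearman-Brown correction
    to estimate the reliability of the full-length scale."""
    items = np.asarray(items, dtype=float)
    odd = items[:, 0::2].sum(axis=1)
    even = items[:, 1::2].sum(axis=1)
    r = np.corrcoef(odd, even)[0, 1]
    return 2 * r / (1 + r)
```

Cronbach's α rises with both the number of items and their intercorrelation, which is one reason the advice later in these slides to use five or more similarly worded items pays off.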

  12. Types of Validity • Content Validity: Does the measure represent the range of possible items it should cover, based on the meaning of the construct? • Predictive Validity: Does the measure predict criterion measures assessed at a later time? Ex: Does an aptitude assessment predict later success? • Construct Validity: Does the measure actually tap into the intended construct?

  13. Developing Items for a New Measure • Guided spontaneous responses from individuals in the sample population (thought listings, essay questions…) • Face-valid items: develop items that appear to measure your construct. • Pilot test a larger set of items and choose those that are more reliable & valid. • Reverse-coded items indicate whether participants are paying attention.
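Reverse-coded items must be re-scored before they are summed or averaged with the rest of the scale. A minimal sketch (the function name and example item are hypothetical; the 0-9 range matches the Likert format on the next slide):

```python
def reverse_code(response, low=0, high=9):
    """Reverse-score one item on a low..high response scale.

    A reverse-worded item (e.g., "I often doubt myself" on a
    self-esteem scale) scored 9 becomes 0, so that every item points
    in the same conceptual direction before scores are combined.
    """
    return high + low - response
```

This is also how reverse-coded items catch inattention: a participant who answers 9 to everything ends up "strongly agreeing" with both an item and its reversal.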

  14. Use common response scale types • Likert Scale: To what extent do you agree with the following statement… (0 to 9, strongly disagree-strongly agree) • Semantic Differential: What is your response to (insert person, object, place, issue)? (-5 to +5, good-bad, like-dislike, warm-cold)

  15. Pitfalls of New Measures • The measure exists already in the literature • Restriction of range: responses either at high or low end of scale (skew). • Can you trust responses? Social desirability, demand characteristics & satisficing.

  16. Simple things I have learned. 1. Develop subjective and objective versions of a new scale • Example: Contact with Blacks scale: Objective: % of your neighborhood growing up Subjective: No Blacks—a lot of Blacks 2. Using 5+ items worded similarly provides greatly increased reliability and likelihood of success. 3. Human targets are rarely evaluated below the midpoint of the scale, so use more scale points (9 instead of 5 points).

  17. **Most Important** If you have a larger study ready and a great idea for a new scale comes up, build something and give it a shot!

  18. A Few Types of Non-scale measures • Response time measures • Physiological measures • Neuroscience: fMRI and other brain imaging • Indirect measures: projective tests, etc. • Facial and other behavior coding schemes (verbal/nonverbal) • Cognitive measures: (memory, perception…) • Task performance: academic, physical… • Game theory: prisoner’s dilemma…

  19. SPSS: Reliability Cronbach's α: Analyze → Scale → Reliability Analysis. Pull over all scale items. Click Statistics, select inter-item correlations, then OK. Try the Van Camp, Barden & Sloan (2010) data file, items Centrality1–Centrality8; compare to the manuscript. Many other reliability analyses involve correlations (test-retest, split halves) or probabilities (inter-rater reliability).

  20. SPSS-Output

  21. END

  22. Advanced Scale Development Techniques • Factor Analysis: determines the factor structure of measures (does your measure assess one construct or multiple constructs? Is your proposed construct coherent?) • Multi-trait Multi-method Matrix: uses a combination of existing measures and manipulations to establish convergent/divergent validity for the measure.
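A full factor analysis requires dedicated software, but a rough unidimensionality check can be sketched with NumPy alone: if one construct drives all items, the first eigenvalue of the inter-item correlation matrix dominates. The function name and interpretation thresholds below are my own illustrative choices, not from the slides:

```python
import numpy as np

def dominant_factor_share(items):
    """Share of total variance carried by the largest eigenvalue of the
    inter-item correlation matrix. Values near 1/k (k = number of items)
    suggest unrelated items; values well above 1/k suggest one dominant
    factor. A principal-components stand-in, not a full factor analysis.
    """
    corr = np.corrcoef(np.asarray(items, dtype=float), rowvar=False)
    eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]  # descending
    return eigvals[0] / eigvals.sum()
```

For a coherent six-item measure this share typically lands well above the 1/6 baseline expected for unrelated items.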

  23. Reliability Assessments 2 • Inter-rater Reliability: independent judges score participant responses, and the percentage of agreement is assessed to indicate reliability. Used particularly for measures requiring coding (video coding, spontaneous responses…).
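The percent-agreement statistic described above is straightforward to compute. A sketch in Python (the function name and example codes are hypothetical):

```python
def percent_agreement(rater_a, rater_b):
    """Proportion of coded units on which two independent raters
    assigned the same code. For agreement corrected for chance,
    Cohen's kappa is the usual next step."""
    if len(rater_a) != len(rater_b):
        raise ValueError("raters must code the same number of units")
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)
```

For example, two coders who assign the same code to three of four video clips agree 75% of the time.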
