
Instrument Validation: Data Types, Scales, & Instruments



  1. Instrument Validation: Data Types, Scales, & Instruments. Detmar Straub, Georgia State University. Graphics available at: detmarstraub.com. A short course in ensuring that measurement error is within acceptable scientific bounds.

  2. Agenda
  • What are the different types of data and why it matters
  • Scales
  • Instruments and CMV
  • Instrument validity versus other critical validities
  • Formative versus reflective measures
  • Validity roadmap
  • Content validity
  • Reliability
  • Construct validity
  • Why we should care about having valid instruments
  Presentation is available for downloading at: detmarstraub.com

  3. 1. What are the different types of data and why it matters. Which came first, the chicken or the egg? (The design of a study, or the choice of data type/statistical test?)

  4. 1. What are the different types of data and why it matters
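The slide's data-type table is not reproduced in the transcript, so here is a minimal sketch in Python of why the distinction matters: the measurement level of a variable (the classic nominal/ordinal/interval/ratio typology) constrains which summary statistics and tests are meaningful. The mapping is illustrative, not exhaustive.

```python
# Sketch: measurement level -> admissible statistics and example tests.
# The entries are illustrative shorthand, not a complete decision table.

LEVEL_TO_STATS = {
    "nominal":  {"central tendency": "mode",   "example tests": ["chi-square"]},
    "ordinal":  {"central tendency": "median", "example tests": ["Mann-Whitney U", "Spearman rho"]},
    "interval": {"central tendency": "mean",   "example tests": ["t-test", "Pearson r"]},
    "ratio":    {"central tendency": "mean",   "example tests": ["t-test", "Pearson r", "ratio comparisons"]},
}

def admissible_stats(level: str) -> dict:
    """Look up which summary statistics and tests suit a given data type."""
    return LEVEL_TO_STATS[level.lower()]

if __name__ == "__main__":
    # An ordinal Likert item, for instance, strictly licenses median-based tests.
    print(admissible_stats("ordinal"))
```

The chicken-and-egg point follows directly: if the study design produces only ordinal data, the choice of statistical test was effectively made when the instrument was designed.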

  5. 2. Scales [Examples are Likert scales (agree/disagree scales like #1-#4), fully anchored scales (#5), and semantic differentials (end points anchored by phrases at the bottom of the page, #1-#2)]. Demonstration via a security questionnaire used in previous studies.
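As a rough illustration of how such scales are handled in analysis (the security questionnaire itself is not reproduced here, so the items and anchors below are invented), Likert responses are typically coded to integers, with negatively worded items reverse-scored:

```python
import pandas as pd

# Map verbal anchors of an assumed 5-point Likert scale to integers.
ANCHORS = {"Strongly disagree": 1, "Disagree": 2, "Neutral": 3,
           "Agree": 4, "Strongly agree": 5}

# Made-up responses; q2_neg stands for a negatively worded item.
responses = pd.DataFrame({
    "q1":     ["Agree", "Strongly agree", "Neutral"],
    "q2_neg": ["Disagree", "Strongly disagree", "Agree"],
})

coded = responses.replace(ANCHORS)
coded["q2_neg"] = 6 - coded["q2_neg"]   # reverse-score on a 5-point scale
print(coded)
```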

  6. 3. Instruments and CMV (Common Methods Variance or Bias) [Slide graphic: the Technology Acceptance Model, with reactions ranging from "I love 'em!" to "I hate 'em!"]

  7. Theoretical model: The Technology Acceptance Model. [Diagram: PEOU and PU → Attitudes → Intentions → USE, with all paths positive.] PU = Perceived Usefulness (of IT); PEOU = Perceived Ease of Use (of IT). Adapted from Davis' Technology Acceptance Model (Davis, 1986, 1989).

  8. [The same TAM diagram, repeated as a presentation build with the positive signs on each path emphasized.]

  9. Instruments and CMV (Common Methods Variance or Bias). Constructs: Intention to Use System, Perceived Usefulness, Perceived Ease of Use. [Table 2: Item-Ordering Threats to Construct Validity through Common Methods Bias]
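The slide diagnoses CMV through item ordering; a common complementary first check, not shown in the deck, is Harman's single-factor test: pool all scale items into one unrotated factor extraction and see how much variance the first factor absorbs. A minimal sketch on simulated data, using PCA as a stand-in for the extraction:

```python
import numpy as np
from sklearn.decomposition import PCA

# Harman's single-factor test (a blunt but common CMV diagnostic):
# if a single factor explains most of the variance across ALL items,
# common methods bias is suspected. Data here are simulated.

rng = np.random.default_rng(42)
items = rng.normal(size=(200, 9))          # 200 respondents x 9 items (simulated)

pca = PCA().fit(items)
first_factor_share = pca.explained_variance_ratio_[0]
print(f"First factor explains {first_factor_share:.1%} of variance")
# A single factor explaining the majority of variance (e.g., > 50%)
# is usually read as a warning sign of common methods bias.
```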

  10. 4. Instrument validity versus the other critical validities Based on: Straub, D., Boudreau, M.-C., and Gefen, D. 2004. "Validation Guidelines for IS Positivist Research," Communications of the AIS (13:24), 380-427.

  11. GIGO (garbage in, garbage out)

  12. 5. Formative versus reflective measures Source: Petter, S., Straub, D., and Rai, A. 2007. "Specifying Formative Constructs in IS Research," MIS Quarterly (31:4, December), 623-656.

  13. A gingerbread-man cookie analogy: reflective measures are interchangeable (with estimated error); formative measures are not interchangeable.

  14. Servqual: Usually Specified as Reflective ... Is It? If, in their factor analysis, the researchers had forced the SPSS software to extract more factors, the construct would have started to break apart.

  15. One construct (Omission of Security Actions), measured both reflectively and formatively. [Slide panels: Reflective Measures of Omission of Security Actions; Formative Measures of Omission of Security Actions]

  16. 6. A validity roadmap

  17. Roadmap

  18. Roadmap

  19. 6a. Content validity

  20. 6a. Content validity
  • Content validity is about the elements or dimensions that you are capturing in your scale items: whether you have captured the essence of the construct or left something out.
  • Generally assessed through expert panels, but Lawshe (1975) offers a quantitative approach. [Lawshe, C.H. 1975. "A Quantitative Approach to Content Validity," Personnel Psychology (28), pp. 563-575.]
  • The essential content validity question is: Is Lawshe's CVR (content validity ratio) statistically significant at some alpha protection level like .05? Significance is interpreted to mean that more than 50% of the panelists rate the item as either essential or important. (A sketch of the computation follows below.)
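A minimal sketch of the CVR computation, with made-up panel counts. Lawshe's formula is CVR = (n_e − N/2) / (N/2), where n_e is the number of panelists rating the item essential and N is the panel size:

```python
# Lawshe's content validity ratio (CVR); panel numbers below are invented.

def content_validity_ratio(n_essential: int, n_panelists: int) -> float:
    """CVR = (n_e - N/2) / (N/2); ranges from -1 to +1."""
    half = n_panelists / 2
    return (n_essential - half) / half

# Example: 9 of 12 panelists rate an item "essential".
cvr = content_validity_ratio(9, 12)
print(f"CVR = {cvr:.2f}")   # 0.50
# Lawshe (1975) tabulates minimum CVRs by panel size; an item is retained
# only if its CVR exceeds the critical value at the chosen alpha level.
```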

  21. 6b. Reliability [Figure 6: Pictorial Model of Reliability — latent constructs A, B, and C, measured reflectively and formatively, each with several measures (Measures 1-8). Incidentally, the arrows are backwards in the Straub et al. (2004) article.]

  22. 6b. Reliability
  • Reliability is all about the consistency of scale items with each other within the same construct (reflective scales only); it has nothing to do with other constructs.
  • A reliable reflective scale is one whose items are highly correlated with each other (demonstrates high multicollinearity).
  • The essential scale reliability questions are:
  • Is the Cronbach's alpha higher than .6 for exploratory scales or .7 for confirmatory scales? See Nunnally, 1967. [Nunnally, J.C. 1967. Psychometric Theory. New York: McGraw-Hill.]
  • Can we improve the reliability by dropping items? This is a tradeoff: a higher number of items almost always yields a higher Cronbach's alpha, but respondents get tired when the questionnaire is too long. (A sketch of both checks follows below.)
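A sketch of both reliability checks on simulated data. The actual demonstration uses SPSS and the TAMAVG.sav dataset, whose column names are not known here, so the item names and data below are invented stand-ins:

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Simulate four items driven by one latent construct (so alpha should be high).
rng = np.random.default_rng(0)
latent = rng.normal(size=300)
data = pd.DataFrame({f"pu{i}": latent + rng.normal(scale=0.8, size=300)
                     for i in range(1, 5)})

print(f"alpha = {cronbach_alpha(data):.3f}")

# "Alpha if item deleted" makes the drop-an-item tradeoff visible:
for col in data.columns:
    print(col, f"{cronbach_alpha(data.drop(columns=col)):.3f}")
```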

  23. 6b. Reliability Demonstration exercise using SPSS and TAMAVG.sav dataset

  24. 6c. Construct validity. Notice anything wrong with this research model? [Figure 3: Pictorial Model of Construct Validity]

  25. 6c. Construct validity
  • Construct validity is all about the stickiness of the measures that have been chosen.
  • Construct validity means that scale items that are supposed to be related to each other DO relate to each other (convergent validity), and those that are NOT supposed to be related to each other do NOT relate to each other (discriminant validity).
  • The essential factorial validity questions are:
  • In the presence of other items related to different constructs, do scale items that we expect to be related to each other load on the same factor at high levels? (Convergent validity)
  • In the presence of other items related to different constructs, do scale items that we do not expect to relate to each other load onto different factors? (Discriminant validity) (See the sketch below.)
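A sketch of this factorial validity check on simulated data. The construct and item names are invented (the real demonstration uses SPSS and TAMAVG.sav): items for two constructs should load on their own factor and show small cross-loadings.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis

# Simulate items for two latent constructs ("pu" and "peou" as placeholders).
rng = np.random.default_rng(1)
n = 300
f1, f2 = rng.normal(size=n), rng.normal(size=n)

cols = {f"pu{i}": f1 + rng.normal(scale=0.7, size=n) for i in range(1, 4)}
cols.update({f"peou{i}": f2 + rng.normal(scale=0.7, size=n) for i in range(1, 4)})
items = pd.DataFrame(cols)

# Two-factor extraction with varimax rotation, then inspect loadings.
fa = FactorAnalysis(n_components=2, rotation="varimax").fit(items)
loadings = pd.DataFrame(fa.components_.T, index=items.columns,
                        columns=["Factor1", "Factor2"])
print(loadings.round(2))
# Convergent validity: pu items load highly together on one factor.
# Discriminant validity: peou items load on the other, with low cross-loadings.
```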

  26. 6c. Construct validity. What do you compare against in stage 2 if you only have one DV? Control variables like firm size; demographics like age of respondent or work experience of subject... any construct that should NOT correlate with the DV! Notice anything wrong with this research model? [Slide panels: Stage 1, Stage 2] (A sketch of this check follows below.)
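A minimal sketch of that stage-2 check: correlate the single DV with constructs that should not relate to it. All variable names and data are invented placeholders:

```python
import numpy as np
import pandas as pd

# Simulated dataset: one DV plus controls/demographics that should NOT
# correlate with it if discriminant validity holds.
rng = np.random.default_rng(2)
n = 300
df = pd.DataFrame({
    "use":       rng.normal(size=n),             # the single DV
    "firm_size": rng.normal(size=n),             # control variable
    "age":       rng.integers(20, 65, size=n),   # demographic
})

# Near-zero correlations here support discriminant validity of the DV.
print(df.drop(columns="use").corrwith(df["use"]).round(3))
```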

  27. 6c. Construct validity Demonstration exercise using SPSS and TAMAVG.sav dataset

  28. 6c. Construct validity Demonstration exercise using SPSS and TAMAVG.sav dataset

  29. 6c. Construct validity
  • Former President of the APA (American Psychological Association) and one of the greatest methodologists of the 20th century, Donald Campbell (1960) says on page 548 that validation is:
  • "…symmetric and egalitarian."
  • In other words, it works in both directions.
  • Methods validate each other but cannot show that one is superior to the other.
  [Campbell lived November 20, 1916 - May 5, 1996. Campbell, D.T. 1960. "Recommendations for APA Test Standards Regarding Construct, Trait, Discriminant Validity," American Psychologist (15:August), pp. 546-553.]

  30. 7. Why we should care about having valid instruments. GIGO. Standards are generally rising across the social sciences, and one could be viewed negatively (i.e., as being in the backwaters) if one is not aware of and practicing these validations. It could be personally and professionally embarrassing if an article is published but colleagues begin to doubt its worth because the data were never certified as measurement-error controlled (or at least accounted for).

  31. Thank you! Any Questions?
