
Measuring Two Scales for Quality Performance



Presentation Transcript


  1. Measuring Two Scales for Quality Performance Authors: Winston G. Lewis, Kit F. Pun and Terrence R.M. Lalla

  2. Stages of Scale Development and Variable Measurement • Item development – generation of individual items • Scale development – manner in which items are combined to form scales • Scale evaluation – psychometric examination of the new measure

  3. Stage 1: Item Generation • Items must adequately capture the specific domain of interest • Items must not have extraneous content • Two approaches are used - deductive - inductive

  4. Deductive Approach • Utilizes a classification schema • Requires an understanding of the phenomenon, gained through literature review • Develop a theoretical definition of the construct • The definition is used as a guide for item development

  5. Stage 2: Scale Development Step 1 – Design of Developmental Study Step 2 – Scale Construction Step 3 – Reliability Assessment

  6. Inductive Approach • Little theory is involved at the onset • Researchers develop items based on ethnography

  7. Step 1: Design of Developmental Study • The researcher has identified a potential set of items for the construct(s) under consideration • Administration of these items is required to determine how well they confirm expectations about the structure of the measure

  8. Administration of Scale Items • Adequate sample that is representative of the population - description, sampling, response rates, questionnaire administration • Wording of items e.g. reverse scoring • Number of items per measure • Scaling of items e.g. Likert scales • Sample size
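As an illustration of reverse scoring, here is a minimal Python sketch using pandas; the item names and responses are hypothetical:

    import pandas as pd

    # Hypothetical 5-point Likert responses; "q2_rev" is a negatively
    # worded item that must be reverse scored before analysis.
    df = pd.DataFrame({
        "q1": [5, 4, 4, 2, 5],
        "q2_rev": [1, 2, 1, 4, 2],
    })

    # Reverse score a k-point Likert item: new = (k + 1) - old
    K = 5
    df["q2"] = (K + 1) - df["q2_rev"]
    print(df[["q1", "q2"]])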

  9. Step 2: Scale Construction • Involves data reduction and refining constructs • Two main techniques are used - Exploratory Factor Analysis - Confirmatory Factor Analysis

  10. Exploratory Factor Analysis (EFA) • Technique used for uncovering the underlying structure (constructs) of a large set of items (variables) • Reduces a large set of variables to a smaller set of constructs • Easy to use • Useful when there are many survey questions • Provides a basis for further analyses, e.g. regression analysis with factor scores • Easy to combine with other techniques, e.g. confirmatory factor analysis
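The sketch below runs an EFA with scikit-learn's FactorAnalysis (varimax rotation requires scikit-learn 0.24 or later); the simulated data and the two-construct structure are assumptions for illustration only:

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(0)
    # Simulate 200 respondents answering 6 items driven by 2 latent constructs.
    latent = rng.normal(size=(200, 2))
    loadings = np.array([[0.8, 0.0], [0.7, 0.1], [0.9, 0.0],
                         [0.0, 0.8], [0.1, 0.7], [0.0, 0.9]])
    X = latent @ loadings.T + 0.3 * rng.normal(size=(200, 6))

    fa = FactorAnalysis(n_components=2, rotation="varimax")
    fa.fit(X)
    # Each row gives one item's loadings on the two extracted factors;
    # items cluster on the construct they were written to measure.
    print(np.round(fa.components_.T, 2))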

  11. Confirmatory Factor Analysis (CFA) • Seeks to statistically test the significance of an a priori specified theoretical model • Works best when you have measures that have been carefully developed and have been subjected to (and survived) EFA • The researcher specifies a certain number of constructs, which constructs are correlated, and which items measure each construct
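A minimal CFA sketch, assuming the third-party semopy package; the item names q1..q6, the construct names, and the data file are all hypothetical. The lavaan-style description fixes the number of constructs and which items measure each, exactly as the slide describes:

    import pandas as pd
    from semopy import Model  # third-party SEM package; assumed available

    # Two correlated constructs, each measured by three items.
    desc = """
    Leadership =~ q1 + q2 + q3
    Planning   =~ q4 + q5 + q6
    """

    df = pd.read_csv("survey_items.csv")  # hypothetical file with columns q1..q6

    model = Model(desc)
    model.fit(df)
    print(model.inspect())  # loadings, factor covariance, significance tests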

  12. Step 3: Reliability Assessment (1) • The degree to which the observed instrument measures the “true” value, i.e. is free from measurement error • A reliable measure provides consistent results when administered repeatedly to the same group of people • Usually considered part of the testing stage of a newly developed measure

  13. Step 3: Reliability Assessment (2) • However, many researchers delete items to increase coefficient alpha values, so it is also considered part of the development stage • Two basic concerns - Internal consistency of items within a construct - Stability of the construct over time

  14. Internal Consistency Reliability • Commonly called ‘inter-item reliability’ • Assessed with the Cronbach alpha coefficient • Standardized Cronbach alpha is a function of the average correlation of each item with every other item: alpha = k*r / (1 + (k - 1)*r), where k is the number of items and r the average inter-item correlation • Values less than 0.7 – delete items
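The raw coefficient alpha can also be computed directly from the item variances and the variance of the scale total, as in this Python sketch (the response data are hypothetical):

    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """Raw Cronbach's alpha for an (n_respondents, k_items) array."""
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)        # variance of each item
        total_var = items.sum(axis=1).var(ddof=1)    # variance of the scale total
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Hypothetical 5-point Likert responses from 6 people on a 4-item construct.
    X = np.array([[4, 5, 4, 4],
                  [3, 3, 4, 3],
                  [5, 5, 5, 4],
                  [2, 2, 3, 2],
                  [4, 4, 4, 5],
                  [3, 4, 3, 3]])

    print(f"alpha = {cronbach_alpha(X):.2f}")  # below 0.7 would prompt item deletion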

  15. Stage 3: Scale Evaluation The scale could be further evaluated by testing its validity

  16. Validity • Extent to which a measure or set of measures correctly represents the concept under study • There are three types of validity • Content validity • Criterion-related validity • Construct validity

  17. Content Validity • Adequacy with which a measure assesses the domain of interest • It is the judgment of experts, based on item content, of the extent to which a scale truly measures what is intended • Scales are based on theory derived from an extensive literature review, or • Utilize existing scales

  18. Criterion Validity • Pertains to the relationship between a measure and another independent measure • Examines the empirical relationship between the scores on the test instrument (predictor) and an objective outcome (criterion) • A high multiple correlation coefficient between predictor and criterion indicates that the scale has criterion validity
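A sketch of estimating the multiple correlation R between scale scores (predictors) and an objective criterion; the simulated data stand in for real survey scores and outcomes:

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(1)
    # Hypothetical scale scores (predictors) and an objective outcome (criterion).
    scores = rng.normal(size=(100, 3))
    criterion = scores @ np.array([0.6, 0.4, 0.3]) + 0.5 * rng.normal(size=100)

    reg = LinearRegression().fit(scores, criterion)
    r_squared = reg.score(scores, criterion)
    multiple_r = np.sqrt(r_squared)  # multiple correlation coefficient R
    print(f"R = {multiple_r:.2f}")   # a high R supports criterion validity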

  19. Construct Validity • Concerned with the relationship of the measure to the underlying attributes it is attempting to assess • Provides psychometric evidence of convergent validity, discriminant validity and trait and method effects.

  20. Convergent Validity • Correlations between items of the same trait (construct) using different methods (instruments) • Should be in the range of 0.85 to 0.95 or higher

  21. Discriminant Validity • Correlations between items of different constructs using the same instrument • Should be lower than the convergent validity coefficients
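The sketch below simulates two constructs measured by two items each and prints the inter-item correlation matrix: same-construct correlations (convergent) should exceed cross-construct correlations (discriminant). All names and data are illustrative:

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(2)
    # Simulate two latent traits, each measured by two noisy items.
    trait_a, trait_b = rng.normal(size=(2, 300))
    df = pd.DataFrame({
        "a1": trait_a + 0.3 * rng.normal(size=300),
        "a2": trait_a + 0.3 * rng.normal(size=300),
        "b1": trait_b + 0.3 * rng.normal(size=300),
        "b2": trait_b + 0.3 * rng.normal(size=300),
    })

    print(df.corr().round(2))
    # Convergent validity: corr(a1, a2) and corr(b1, b2) should be high;
    # discriminant validity: a* vs b* correlations should be lower.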
