
Instrumentation



Presentation Transcript


  1. Instrumentation

  2. Instruments • Questionnaires • Surveys • Interviews • How do you know what to ask?

  3. “…a characteristic or attribute of an individual or an organization that can be measured or observed by the researcher and varies among individuals or organizations studied” (Chp 5, p. 84) • Variables are measured using a measurement instrument, also called a tool, a data collection instrument, or simply an instrument

  4. Hierarchical: NOMINAL (lowest level) → ORDINAL → INTERVAL → RATIO (highest level) • The level of measurement determines what statistical analysis can be used Levels of measurement

  5. Names, nomenclature, or labels used to classify • Two requirements: • Categories must be mutually exclusive • Categories have to be exhaustive • Nominal measures do not convey any value to what is measured …just name it (e.g. race, sex, class) • Tests of difference (nonparametric): • Chi-square • Measure of association (correlation): • Contingency coefficient NOMINAL (categorical)
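As a concrete illustration of the chi-square test of difference named above, here is a minimal sketch that computes the chi-square statistic for a 2x2 contingency table of nominal data; all counts are invented for illustration:

```python
# Hypothetical 2x2 contingency table of nominal data
# (e.g. group membership x yes/no response); counts are invented.
observed = [[30, 10],   # group A: yes / no
            [20, 40]]   # group B: yes / no

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand_total = sum(row_totals)

# Chi-square = sum over cells of (observed - expected)^2 / expected,
# where expected = row total * column total / grand total.
chi_square = 0.0
for i, row in enumerate(observed):
    for j, count in enumerate(row):
        expected = row_totals[i] * col_totals[j] / grand_total
        chi_square += (count - expected) ** 2 / expected

print(round(chi_square, 2))
```

The statistic would then be compared against a chi-square distribution with (rows − 1)(columns − 1) degrees of freedom to test whether the groups differ.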

  6. Three requirements: • Categories have to be mutually exclusive • Categories must be exhaustive • Categories allow rank ordering • Categories represent relatively more or less of something; however, the distance between categories cannot be measured • How would you describe your level of health? • Excellent • Good • Fair • Poor Ordinal (categorical)

  7. Nonparametric statistics • Correlational coefficients • Spearman rho • Kendall tau Ordinal cont.
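A minimal sketch of the Spearman rank correlation for ordinal data, computed as a Pearson correlation on ranks (the health-rating data below are invented; tied values receive average ranks, as statistical packages do):

```python
# Hypothetical ordinal data: health ratings from two occasions,
# coded 1 (Poor) .. 4 (Excellent). All values are invented.
x = [4, 3, 2, 1, 3]
y = [4, 2, 3, 1, 2]

def ranks(values):
    # Assign 1-based ranks; tied values share the average of their ranks.
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_rho(x, y):
    # Spearman rho = Pearson correlation of the rank-transformed data.
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

print(round(spearman_rho(x, y), 3))
```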

  8. Category Rules: • Mutually exclusive • Exhaustive • Rank order • Widths of categories must be the same which allows for the distance between categories to be measured • No absolute zero (e.g. temperature) • Parametric statistics: • Means, standard deviations • Pearson correlations • t-test • F-test Interval (continuous)

  9. Category Rules: • Mutually exclusive • Exhaustive • Rank order • Widths of categories must be the same which allows for the distance between categories to be measured • Scale has an absolute zero (e.g. age, height, weight, time) • Sometimes referred to as numerical • All statistical tests Ratio (continuous)

  10. Accuracy = validity Consistency = reliability Fairness = appropriate for participants Psychometric properties

  11. Has traditionally been given the least attention • No one way to measure fairness • Is the instrument “fair” for individuals of various ethnic groups, educational levels, gender, etc. Examples: • Cultural and language sensitivity • Assessment of conceptual and linguistic equivalence of other language versions of an instrument • Literacy • Evaluate the reading level of the instrument Fairness

  12. Does the instrument accurately measure what it intends to measure? • Establishing Validity of an Instrument: • Criterion-related validity • Construct validity • Content validity • Face validity Validity = accuracy

  13. Data from a measurement instrument is correlated with data generated from a measure (criterion) of the concept being studied (usually an individual’s performance or behavior). • Use of a second measure of a concept as a criterion by which the validity of the new measure can be checked. • Two types: • Predictive validity – measure used will be correlated with a future performance or behavior • LSAT scores accurately predict success in law school in the future • SAT and ACT scores predict future college success Criterion-related validity

  14. Concurrent validity – a new instrument and an established (valid) instrument that measure the same thing are given to the same sample and the results of the new instrument correlate with the results of the established instrument • Beck’s Depression Inventory (established) and the BDI II (new) • High positive correlation between scores is evidence of concurrent validity Criterion-related validity
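The "high positive correlation" evidence for concurrent validity can be sketched with a hand-rolled Pearson correlation between the two instruments' scores; the scores below are invented for illustration:

```python
# Hypothetical concurrent-validity check: the same sample completes
# an established instrument and a new instrument. Scores are invented.
established = [12, 18, 25, 31, 40]
new_scores  = [10, 20, 24, 33, 38]

def pearson_r(x, y):
    # Pearson r = covariance / (product of standard deviations).
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) *
           sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

r = pearson_r(established, new_scores)
print(round(r, 3))  # high positive r = evidence of concurrent validity
```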

  15. If there is no existing instrument to compare to, or the concept being measured is more abstract in nature, then construct validity is useful. The degree to which a measure correlates with other measures it is theoretically expected to correlate with (driven by theory) • Are measures positively or negatively associated with each other as would be expected by theory? Construct validity

  16. Two types: • Convergent – degree to which two measures which purport to be measuring the same topic positively correlate (converge) • Theory: What would we expect the relationship to be? • Instrument #1 measures person’s self-efficacy for regular exercise • Instrument #2 measures person’s exercise behavior • Discriminant – construct measured (self-efficacy for regular exercise) should not correlate with dissimilar variables (based on theory) • Measure of person’s self-efficacy for regular exercise would not be expected to positively correlate with a person’s inactivity Construct validity

  17. Content validity is established during an instrument’s early development, not after completion. Assessment of the correspondence between the items that make up an instrument and the content domain from which the items were selected Step-by-step process Content validity

  18. Weakest form of validity Can be a good first step to establishing validity, but it should not replace other means for establishing validity “On the face” the instrument appears to measure what it says it measures Established by having individuals familiar with the concept look over the instrument to see if it appears to cover the concept it seeks to measure Face validity

  19. Sensitivity – ability of the test to correctly identify all screened individuals who actually have the disease/condition: sensitivity = a / (a + c) = true positives / (true positives + false negatives) • Specificity – ability of the test to identify only nondiseased individuals who actually do not have the disease/condition: specificity = d / (b + d) = true negatives / (false positives + true negatives) Validity of screening and diagnostic tests
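The two formulas worked through with invented counts from a 2x2 screening table, where a = true positives, b = false positives, c = false negatives, and d = true negatives:

```python
# Hypothetical screening-test results against a gold standard;
# all counts are invented for illustration.
tp = 90   # a: has disease, test positive
fp = 15   # b: no disease,  test positive
fn = 10   # c: has disease, test negative
tn = 85   # d: no disease,  test negative

sensitivity = tp / (tp + fn)   # a / (a + c)
specificity = tn / (fp + tn)   # d / (b + d)

print(sensitivity, specificity)
```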

  20. If an instrument does not measure what it is supposed to (validity), then it does not matter if it is reliable. Validity is most important

  21. Consistency – extent to which a measure will produce the same or nearly the same results each time it is used • Reliability is estimated by computing a correlation coefficient (r) …the closer the correlation coefficient is to 1.00, the greater the reliability • Values of r between .6 and .8 = moderate • Values of r above .8 = substantial correlation Reliability

  22. Parallel Forms • Internal Consistency • Test-Retest • Rater • Interrater • Intrarater Methods for determining reliability

  23. Also called equivalent form or alternate form Create different forms of the same measure that when given to the same participants will produce similar results (means, SD, item correlations) Example: SAT and ACT exams Parallel Forms

  24. One of the most common methods used • Intercorrelations among individual items • Correlate each individual item and the total score • The greater the correlation, the higher the reliability • Statistical Measures: • Cronbach’s alpha • Kuder-Richardson (KR) 20 or 21 coefficient • Spearman-Brown split-half reliability Internal consistency
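A minimal sketch of Cronbach's alpha, the most common of the internal-consistency statistics named above; the item scores below are invented:

```python
# Hypothetical internal-consistency check: Cronbach's alpha for a
# 3-item scale answered by 5 respondents. All scores are invented.
items = [
    [3, 4, 2, 5, 3],  # item 1 scores across respondents
    [2, 4, 3, 5, 3],  # item 2
    [3, 5, 2, 4, 2],  # item 3
]

def variance(xs):
    # Sample variance (n - 1 denominator), as most stats packages use.
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

k = len(items)
totals = [sum(scores) for scores in zip(*items)]  # each respondent's total

# alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
alpha = k / (k - 1) * (1 - sum(variance(i) for i in items) / variance(totals))
print(round(alpha, 3))
```

Alpha rises as the items intercorrelate more strongly, which is exactly the "greater the correlation, the higher the reliability" point above.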

  25. Also called stability reliability as it provides evidence of stability over time Same instrument, same group, same conditions, at two different points in time Data from two administrations of the instrument are used to calculate a correlation coefficient How much time should there be between administrations? Test-retest

  26. Inter = Consistency of an observed event by different raters (e.g. three research team members are identifying themes from interview transcripts as part of a qualitative study) Intra = consistent measurement or rating by the same person (e.g. I take blood pressure measures of community members) Inter- and intra- rater reliability
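Inter-rater consistency for categorical judgments like theme coding is often summarized as percent agreement, or as Cohen's kappa (a statistic not named on the slide), which corrects agreement for what would be expected by chance. The sketch below uses invented theme labels from two raters:

```python
# Hypothetical inter-rater check: two raters assign each of 10
# transcript segments to a theme. All labels are invented.
rater1 = ["A", "A", "B", "B", "A", "C", "B", "A", "C", "B"]
rater2 = ["A", "B", "B", "B", "A", "C", "A", "A", "C", "B"]

n = len(rater1)
# Simple percent agreement: fraction of segments coded identically.
observed = sum(a == b for a, b in zip(rater1, rater2)) / n

# Chance agreement: product of each category's marginal proportions.
categories = set(rater1) | set(rater2)
expected = sum(
    (rater1.count(c) / n) * (rater2.count(c) / n) for c in categories
)

# Cohen's kappa: agreement beyond chance, scaled to a maximum of 1.
kappa = (observed - expected) / (1 - expected)
print(round(observed, 2), round(kappa, 2))
```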
