
Validity


Presentation Transcript


  1. Validity: Introduction to Communication Research, School of Communication Studies, James Madison University. Dr. Michael Smilowitz

  2. Reliability and Validity • Remember the distinction: • Reliability assesses the internal property of the consistency of the measurement. • Validity assesses the external property of the accuracy of the measurement. A measuring instrument is considered valid when it truly and accurately measures the construct it purports to measure.

  3. What is a valid measurement? • Validity cannot be claimed for an unreliable measurement instrument. • But demonstrating the reliability of an instrument does not also demonstrate its validity. • The validity of a measuring instrument is therefore assessed separately from the assessment of its reliability.
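
A toy simulation can make this distinction concrete (the instrument, numbers, and units below are invented purely for illustration): a miscalibrated scale can be highly reliable, giving nearly identical readings every time, while still being invalid because every reading is wrong.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

true_value = 70.0  # the quantity the instrument purports to measure (kg)

# A miscalibrated scale: very consistent, but systematically 5 kg too high.
readings = rng.normal(loc=true_value + 5.0, scale=0.1, size=20)

print(f"consistency (reliability): sd   = {np.std(readings):.2f}")  # tiny spread
print(f"accuracy   (validity):     mean = {np.mean(readings):.2f} vs true {true_value}")
```

The small standard deviation shows consistency (reliability); the 5-unit bias shows the instrument is nonetheless inaccurate (invalid), which is why the two properties must be assessed separately.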

  4. How is validity assessed? • There are different types of validity assessments. • Each type differs in what is emphasized as important to achieving accurate measurement. • Each type differs in the methods of doing the assessment. (The material to follow is drawn largely from Wimmer and Dominick, 1994). • There are three broad categories of validity assessments: (1) Judgment-based validity, (2) Criterion-based validity, and (3) Theory-based validity.

  5. Judgment-Based Validity Assessments There are two types of judgment-based validity assessments: 1. Content validity (sometimes called face validity) 2. Expert jury validity

  6. Judgment-Based Validity Assessments • Content validity • Assesses whether the instrument, on face value, provides an adequate number of representative empirical indicators. • Researchers provide arguments to support their claims that an instrument is measuring the appropriate empirical indicators.

  7. Judgment-Based Validity Assessments Content validity When students complain that a final exam contained material that was not covered in the course, they are complaining about the exam’s content validity. Why? Because the valid empirical indicators would be their responses to questions based on what was covered in the course. If researchers believe that communication competence requires both knowledge of communication principles and the ability to demonstrate effective skills, a paper-and-pencil measure of someone’s communication knowledge could not be said to meet the assessment of content validity. Why?

  8. Judgment-Based Validity Assessments Content validity (continued) Because if both knowledge and skills are necessary, the paper-and-pencil measure fails to provide adequate empirical indicators: it captures knowledge but not demonstrated skill.

  9. Judgment-Based Validity Assessments • Expert jury validity • Very similar to content validity. • The researcher asks a group of experts on the subject matter to examine the measuring instrument and judge whether, in their opinion, the instrument accurately measures what it purports to measure.

  10. Judgment-Based Validity Assessments Expert jury validity (continued) Let’s say a researcher is interested in measuring “work group cohesiveness.” To do so, the researcher develops five questions to measure group members’ perceptions of the group’s (1) friendliness, (2) helpfulness, (3) expressions of personal interest, (4) level of trust, and (5) willingness to work together. The researcher sends the questionnaire to five experts on group communication and asks them to evaluate whether the questionnaire will provide adequate and representative empirical indicators of work group cohesiveness. In reporting the research, the researcher indicates that a panel of expert judges regarded the instrument as a valid measuring device.
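
The slides leave the jury’s judgment qualitative, but it is often quantified; one common option (an assumption here, not part of the lecture) is Lawshe’s content validity ratio (CVR), computed per item from how many experts rate the item essential. A minimal sketch with five hypothetical experts rating the five cohesiveness items above:

```python
# Hypothetical ratings: 1 = expert judged the item "essential", 0 = not.
# Rows are the five experts; columns are the five cohesiveness items.
ratings = [
    [1, 1, 1, 0, 1],
    [1, 1, 0, 1, 1],
    [1, 0, 1, 1, 1],
    [1, 1, 1, 1, 0],
    [1, 1, 1, 1, 1],
]

n_experts = len(ratings)
for item in range(len(ratings[0])):
    n_essential = sum(row[item] for row in ratings)
    # Lawshe's CVR ranges from -1 (no expert agrees) to +1 (all agree).
    cvr = (n_essential - n_experts / 2) / (n_experts / 2)
    print(f"item {item + 1}: CVR = {cvr:+.1f}")
```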

  11. Criterion-based Validity Assessments • Criterion-based validity assessments involve assessing an instrument’s relation to some criterion assumed relevant to the construct being measured. • Three methods for assessing criterion-based validity. • Predictive validity. • Concurrent validity. • Known-groups validity.

  12. Criterion-based Validity Assessments • Predictive validity • The assessment is based on the instrument’s ability to accurately predict important behavioral manifestations of the construct being measured. • Does the SAT (Scholastic Aptitude Test) accurately predict a student’s academic success at college? • Do responses to public opinion polls accurately predict voter behavior? • Do responses to corporate image inventories accurately predict behaviors such as purchasing, favorable press mentions, and public support?
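
In practice, predictive validity claims of this kind rest on the correlation between test scores and the later criterion. A minimal sketch of that computation, using fabricated SAT and first-year GPA numbers (the data and effect size are invented for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)

# Fabricated scores for 100 hypothetical students: the test, and the
# later behavioral criterion it is supposed to predict.
sat = rng.normal(1100, 150, size=100)
gpa = 1.0 + 0.002 * sat + rng.normal(0, 0.4, size=100)  # built-in linear relation

r, p = stats.pearsonr(sat, gpa)
print(f"predictive validity coefficient: r = {r:.2f} (p = {p:.3g})")
```

A strong, significant correlation between scores and the criterion supports the predictive validity claim; a weak one undermines it.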

  13. Criterion-based Validity Assessments • Concurrent Validity • Concurrent validity arguments are based on the correlation between a new measure and an already validated measure of a similar construct. • Concurrent validity is frequently used when researchers are extending a particular research area with updated measures.

  14. Criterion-based Validity Assessments Concurrent Validity (continued) Let’s say Smilowitz wants to develop a new instrument to measure the willingness of work team members to engage in constructive conflict with their colleagues. He reviews the literature and finds that his notion of constructive conflict is similar to (but distinct from) Infante’s (1992) argumentativeness scale. To argue for the validity of his instrument, he asks a number of work team members to complete his new questionnaire and also complete Infante’s scale. If Smilowitz finds that the two instruments are highly correlated, he can make an argument for the criterion-based validity of his instrument.
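
A minimal sketch of the concurrent validity computation Smilowitz would run; the scores are wholly simulated, and the variable names merely stand in for his new questionnaire and Infante’s scale:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=2)
n = 80  # hypothetical number of work team respondents

# Simulated scores: the established scale, and a new scale built (here,
# by construction) to covary with it, plus measurement noise.
argumentativeness = rng.normal(50, 10, size=n)  # stand-in for Infante's scale
constructive_conflict = 0.8 * argumentativeness + rng.normal(0, 5, size=n)

r, p = stats.pearsonr(constructive_conflict, argumentativeness)
print(f"concurrent validity: r = {r:.2f} (p = {p:.3g})")
```

A high correlation with the already-validated scale is the evidence behind the concurrent validity argument.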

  15. Criterion-based Validity Assessments • Known-groups validity • This technique is similar to predictive validity assessments. • The instrument is administered to a group of subjects known to have the empirical indicators associated with the construct under investigation, and a group that is known to lack those same indicators. • If the results of the measurements are statistically significantly different, the instrument is said to validly discriminate.

  16. Criterion-based Validity Assessments Known-groups validity (continued) A researcher is developing an instrument to measure aggressiveness in elementary-age children. A group of children who are frequently detained after school and a group of students identified by their teachers as “model children” are both measured by the instrument. If a significant difference is found, the researcher claims known-groups validity.
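
The statistical test behind the known-groups claim is typically a simple between-groups comparison. A minimal sketch with simulated scores for the two groups described above (group means, spreads, and sizes are invented):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=3)

# Simulated aggressiveness scores for the two known groups.
detained = rng.normal(70, 8, size=30)  # group expected to score high
model = rng.normal(50, 8, size=30)     # group expected to score low

t, p = stats.ttest_ind(detained, model)
print(f"known-groups comparison: t = {t:.2f}, p = {p:.3g}")
```

A significant difference in the expected direction is what licenses the claim that the instrument validly discriminates.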

  17. Theory-based Validity Assessments • Generally, theory-based validity assessments are known as construct validity. • There are, however, different conceptions of the meaning and application of construct validity.

  18. Theory-based Validity Assessments • Construct validity Smith (1988) describes construct validity as a more comprehensive and robust concept than the other types of validity assessments. “Because of its global emphasis on the goodness-of-fit between a measuring instrument and a construct’s theoretical properties, construct validity is by far the most important....Indeed, if an instrument has construct validity, it is considered valid from content and criterion perspectives as well.”

  19. Theory-based Validity Assessments • Construct Validity Reinard (1998) also regards construct validity as the most important, but treats its application as more methodological than theoretical. “Construct validation requires that a new measure be administered to subjects along with at least two other measures, one of which is a valid measure of a construct that is known conceptually to be directly related to the new measure, and another one which should be a valid measure of a construct that is known conceptually to be inversely related to the construct of interest.”
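
Reinard’s design translates directly into two correlations: the new measure should correlate positively with the conceptually related measure and negatively with the conceptually inverse one. A minimal sketch with simulated scores (all three “instruments” here are fabricated):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=4)
n = 100

# Simulated scores following Reinard's design: the new measure, plus one
# validated related measure and one validated inversely related measure.
new_measure = rng.normal(0, 1, size=n)
related = 0.7 * new_measure + rng.normal(0, 0.7, size=n)   # expect r > 0
inverse = -0.7 * new_measure + rng.normal(0, 0.7, size=n)  # expect r < 0

r_pos, _ = stats.pearsonr(new_measure, related)
r_neg, _ = stats.pearsonr(new_measure, inverse)
print(f"convergent: r = {r_pos:+.2f}   divergent: r = {r_neg:+.2f}")
```

Both correlations coming out in the predicted directions is the construct validity evidence under this approach.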

  20. Theory-based Validity Assessments • Construct Validity Wimmer and Dominick (1994) liken construct validity to discriminant validity. “...construct validity involves relating a measuring instrument to some overall theoretic framework to ensure that the measurement is actually related to other concepts in the framework. Ideally, a researcher should be able to suggest various relationships between the property being measured and other variables.”
