

  1. Chapter 8 Flashcards

  2. systematic process that involves assigning labels (usually numbers) to characteristics of people, objects, or events using explicit and consistent rules so, ideally, the labels accurately represent the characteristic measured Measurement

  3. abstraction that symbolizes a class of people (e.g., female), objects (e.g., chair), or events (e.g., baseball game) that have one or more characteristics in common Concept

  4. definition that assigns meaning to a concept in terms of other concepts, such as in a dictionary, instead of in terms of the activities or operations used to measure it. (Contrast with Operational definition.) Conceptual definition

  5. definition that assigns meaning to a concept in terms of the activities or operations used to measure it, ideally in a way that contains relevant features of the concept and excludes irrelevant features. (Contrast with Conceptual definition.) Operational definition

  6. discrepancies between measured and actual (true) values of a variable caused by flaws in the measurement process (e.g., characteristics of clients or other respondents, measurement conditions, properties of measures). See also Random measurement errors and Systematic measurement errors. Measurement errors

  7. discrepancies between measured and actual (true) values of a variable that are equally likely to be higher or lower than the actual values because they are caused by chance fluctuations in the measurement process. Because they are due to chance, they tend to cancel each other out and average to zero, but they increase the variability of measured values. Also known as unsystematic measurement errors. (Contrast with Systematic measurement errors.) Random measurement errors

  8. discrepancies between measured and actual (true) values of a variable that tend to be consistently higher or lower than the actual values. They are caused by flaws in the measurement process and lead to over- or underestimates of the actual values of a variable. Also known as bias in measurement. (Contrast with Random measurement errors.) Systematic measurement errors
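
  To make the contrast between items 7 and 8 concrete, here is a minimal simulation sketch (Python, with hypothetical numbers not taken from the chapter): chance error averages to roughly zero but inflates the spread of measured values, while a constant bias shifts every measured value in the same direction.

```python
import numpy as np

rng = np.random.default_rng(0)
true_scores = rng.normal(50, 10, size=1000)   # hypothetical "actual" values

# Random error: chance fluctuations with mean ~0
random_error = rng.normal(0, 5, size=1000)
# Systematic error (bias): every score measured, say, 3 points too high
systematic_error = 3.0

measured_random = true_scores + random_error
measured_biased = true_scores + systematic_error

print(round(random_error.mean(), 2))                         # ~0: random errors average out
print(round(measured_random.std() - true_scores.std(), 2))   # > 0: variability increases
print(round((measured_biased - true_scores).mean(), 2))      # ~3: a consistent overestimate
```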

  9. statistic that indicates whether and how two variables are related. A correlation can range from –1.0 to +1.0. A positive correlation means that people with higher values on one variable tend to have higher values on the other; a negative correlation means that people with higher values on one variable tend to have lower values on the other. A correlation of 0 means there is no linear relationship between the two variables. The absolute value of a correlation (i.e., the number, ignoring the plus or minus sign) indicates the strength of the relationship: the larger the absolute value, the stronger the relationship. Correlation
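
  The deck does not single out a particular coefficient, but the statistic usually meant here is the Pearson product-moment correlation r, which for paired scores (x_i, y_i) can be written as

    r_{xy} = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i - \bar{x})^2}\;\sqrt{\sum_{i=1}^{n}(y_i - \bar{y})^2}}

  where \bar{x} and \bar{y} are the sample means.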

  10. general term for the consistency of measurements; unreliability is inconsistency caused by random measurement errors. See also Internal-consistency reliability, Inter-rater reliability, and Test–retest reliability Reliability

  11. degree to which scores on a measure are consistent over time Test–retest reliability
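
  In practice, test–retest reliability is commonly estimated as the correlation between scores from two administrations of the same measure to the same people. A minimal sketch (Python, with made-up scores for illustration):

```python
import numpy as np

# Hypothetical scores for the same eight people, measured on two occasions
time1 = np.array([12, 15, 9, 20, 17, 11, 14, 18])
time2 = np.array([13, 14, 10, 19, 18, 10, 15, 17])

# Test–retest reliability estimated as the Pearson correlation between occasions
r_test_retest = np.corrcoef(time1, time2)[0, 1]
print(round(r_test_retest, 2))
```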

  12. degree to which responses to a set of items on a standardized scale measure the same construct consistently Internal-consistency reliability

  13. statistic typically used to quantify the internal-consistency reliability of a standardized scale. Also known as Cronbach’s alpha and, when items are dichotomous, Kuder-Richardson 20, KR20, or KR-20 Coefficient alpha
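
  One standard way to write coefficient alpha for a scale with k items is

    \alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^2_i}{\sigma^2_X}\right)

  where \sigma^2_i is the variance of item i and \sigma^2_X is the variance of the total (summed) scores; alpha approaches 1 as the items covary more strongly.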

  14. degree of consistency in ratings or observations across raters, observers, or judges (e.g., a second opinion from a health care professional, judges in an Olympic competition). Also known as interobserver or interjudge reliability or agreement Inter-rater reliability
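
  The deck does not name a statistic for inter-rater reliability; simple percent agreement is one common choice (Cohen's kappa and intraclass correlations are frequently used alternatives). A minimal sketch with hypothetical ratings:

```python
# Hypothetical categorical ratings of the same six cases by two raters
ratings_a = ["yes", "no", "yes", "yes", "no", "yes"]
ratings_b = ["yes", "no", "no", "yes", "no", "yes"]

# Percent agreement: proportion of cases on which the raters give the same rating
agreements = sum(a == b for a, b in zip(ratings_a, ratings_b))
print(agreements / len(ratings_a))  # 5 of 6 cases agree, about 0.83
```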

  15. general term for the degree to which accumulated evidence and theory support interpretations and uses of scores derived from a measure. See also Concurrent validity, Construct validity, Content validity, Convergent validity, Criterion validity, Discriminant validity, Face validity, Predictive validity, and Sensitivity to change Measurement validity

  16. degree to which a measure appears to measure a given construct or other variable, in the opinion of clients, other respondents, and other users of the measure Face validity

  17. degree to which questions, behaviors, or other types of content represent a given construct comprehensively (e.g., the full range of relevant content is represented, and irrelevant content is not) Content validity

  18. degree to which scores on a measure can predict performance or status on another measure that serves as a standard (i.e., the criterion, sometimes called a gold standard). See also Concurrent validity and Predictive validity Criterion validity

  19. degree to which scores on a measure can predict a contemporaneous criterion. (Contrast with Predictive validity.) See also Criterion validity Concurrent validity

  20. degree to which scores on a measure can predict a criterion measured at a future point in time. (Contrast with Concurrent validity.) See also Criterion validity Predictive validity

  21. complex concept (e.g., intelligence, well-being, depression) that is inferred or derived from a set of interrelated attributes (e.g., behaviors, experiences, subjective states, attitudes) of people, objects, or events; typically embedded in a theory; and oftentimes not directly observable but measured using multiple indicators Construct

  22. degree to which scores on a measure can be interpreted as representing a given construct, as evidenced by theoretically predicted patterns of associations with measures of related and unrelated variables, group differences, and changes over time. Also refers to the accuracy of conclusions, based on evidence and reasoning, about the degree to which the cause and effect variables as operationalized in a study represent the constructs of interest (e.g., whether an intervention as implemented or an outcome as measured contains all of the relevant features and excludes irrelevant features). See also Convergent validity and Discriminant validity Construct validity

  23. degree to which scores derived from a measure of a construct are correlated in the predicted way with other measures of the same or related constructs or variables. (Contrast with Discriminant validity.) See also Construct validity Convergent validity

  24. degree to which scores derived from a measure of a construct are uncorrelated with, or otherwise distinct from, theoretically dissimilar or unrelated constructs or other variables. (Contrast with Convergent validity.) See also Construct validity Discriminant validity

  25. degree to which a measure detects genuine change in the variable measured. Also known as responsiveness to change Sensitivity to change
