
Defining and Measuring Variables




    1. Defining and Measuring Variables

    2. Outline: Defining and measuring variables Observations, constructs, and theories Inductive and deductive reasoning IVs and DVs Operational definitions Hypotheses

    3. Steps in the research process Find a research idea Convert the idea into a hypothesis Define and measure variables Identify participants Select a research strategy Conduct the study Evaluate the data Report the results Refine/reformulate your research idea

    8. Psychological science is characterized by the back-and-forth use of inductive and deductive reasoning. Inductive reasoning Particular → general Infer constructs/theories from empirical observations Deductive reasoning General → particular Use constructs/theories to make predictions regarding empirical observations

    10. Independent and dependent variables Independent variable (IV) What the researcher wants to see the effect of Variable that is manipulated (circumstances) The IV is divided into levels or conditions (e.g., experimental and control conditions) Variable that is independent of the participant’s behavior Dependent variable (DV) What the researcher uses to measure the effect of the IV Variable that is measured (behavior) Variable that is dependent on the participant’s behavior

    11. Independent and dependent variables

    12. Operational definitions A construct cannot be directly measured or observed, but we can measure or observe external factors associated with the construct. E.g., alertness or vigilance could be operationally defined as being quick to respond to the appearance of a target.

    13. Operational definitions Researchers translate a construct into a study-specific variable They specify the operations required to manipulate or measure the construct They provide enough detail so that other researchers can replicate the research

    14. Operationally defining IVs and DVs Example: Television violence (IV) At least 75% of judges rate a TV show as violent Show has four of ten items from a violence checklist Levels of violence (Levels of the IV) Violent level: Shows that at least 75% of judges rated as violent Non-violent level: Shows that 0% of judges rated as violent Aggressive behavior (DV) Judges’ rating of aggressive behavior (1-7) during one hour of free-play Percentage of time spent playing with toys classified as aggressive (e.g., toy guns, knives, tanks) versus non-aggressive (trucks, tools, dolls)
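
The slide's operational definition of the IV can be sketched as a small classification rule. The function name and the ratings below are made up for illustration; the 75%/0% thresholds follow the slide.

```python
def classify_show(judge_ratings):
    """Classify a show by the fraction of judges who rated it violent.

    judge_ratings: list of booleans (True = judge rated the show violent).
    Returns the IV level per the slide's operational definition.
    """
    pct = sum(judge_ratings) / len(judge_ratings)
    if pct >= 0.75:
        return "violent"        # violent level of the IV
    if pct == 0.0:
        return "non-violent"    # non-violent level of the IV
    return "excluded"           # falls between the two defined levels

print(classify_show([True, True, True, False]))    # 75% of judges -> 'violent'
print(classify_show([False, False, False, False]))  # 0% of judges -> 'non-violent'
```

Note that shows rated violent by, say, 40% of judges belong to neither level, which is why the two levels are defined at the extremes.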

    15. Hypotheses A hypothesis is an educated guess regarding the relationship between circumstances and behavior A hypothesis is a statement regarding the expected relationship between two variables A hypothesis can be expressed at both the construct level and the operational level

    16. Hypotheses: Three dimensions Null or research Directional or nondirectional Conceptual (construct level) or operational

    17. Hypotheses Null hypothesis A statement of equality Predicts no relationship between two variables The hypothesis to be proven false Research hypothesis A statement of inequality Predicts a relationship between two variables The hypothesis in search of support by evidence

    20. Sample hypotheses Construct level Children exposed to violent television will display more aggressive play behavior than children exposed to non-violent television. Operational level Children who watch 5 hours of television programming that is rated as violent will spend more time during a one-hour free-play period playing with toys categorized as aggressive than children who watch 5 hours of television rated as nonviolent.
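
A minimal sketch of how the operational-level hypothesis above could be pitted against the null hypothesis, using a simple permutation test. The aggression ratings are invented for illustration; the slides do not specify an analysis method.

```python
import random
import statistics

random.seed(0)

# Hypothetical aggression ratings (1-7 scale) for two groups of children.
violent_tv = [5, 6, 4, 6, 5, 7, 5, 6]
nonviolent_tv = [3, 4, 2, 4, 3, 5, 3, 4]

observed = statistics.mean(violent_tv) - statistics.mean(nonviolent_tv)

# Null hypothesis: no relationship between TV condition and aggression,
# so the group labels are interchangeable. Estimate how often random
# relabeling produces a difference at least as large as the observed one.
pooled = violent_tv + nonviolent_tv
count = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[:8]) - statistics.mean(pooled[8:])
    if diff >= observed:
        count += 1
p_value = count / trials
print(f"observed difference = {observed:.2f}, p ≈ {p_value:.4f}")
```

A small p-value means the observed difference would be rare if the null hypothesis were true, supporting the research hypothesis.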

    21. Is the hypothesis written at the construct or the operational level? 1. When asked to make a list of their friends, ten-year-olds who attend public school will list more friends than ten-year-olds who are home-schooled. 2. Children who attend public school will have more highly developed social skills than children who are home-schooled. 3. Practice improves memory performance. 4. Participants who read a list of words 3 times will recall more words than participants who read a list of words once.

    23. Professor Sullivan wants to determine how students’ study habits affect their performance on an exam. She divides her class in half by having them draw numbers out of a hat and then puts the groups in adjoining rooms. She tells the students in Room 1 to study their notes individually, without talking to one another. She tells the students in Room 2 to get into groups of four and discuss the topics with each other. After allowing both rooms of students one hour of study time, she administers the exam and records the scores.

    24. Conclusions (so far) In psychological science, researchers move between the levels of observation and construct/theory. Inductive reasoning is used to formulate constructs and theories from observations. Deductive reasoning is used to formulate hypotheses from constructs and theories. A hypothesis is an educated guess regarding the relationship between circumstances and behavior. Independent variables and dependent variables specify the circumstances and behavior under investigation. Variables must be operationally defined (made observable/measurable) before a hypothesis can be tested.

    25. Outline: Selecting a measurement Modalities of measurement Scales/levels of measurement

    26. Measurement Measurement A systematic way of assigning specific values (numbers or names) to objects or behaviors and their attributes.

    27. Measurement Measurement When the variable of interest is a hypothetical construct, we must use an operational definition as a measurement procedure. An operational definition is really an indirect method of measuring something that can't be measured directly.

    28. Modalities of measurement We need to be confident that the measurements obtained from an operational definition actually represent the hypothetical construct. The first decision in developing a measurement procedure (operational definition) is to determine what type of external expression of the hypothetical construct should be used to define and measure it.

    29. Modalities of measurement Self-report measures Questionnaires Surveys Physiological measures Behavioral measures Direct observation Indirect observation

    30. Self-report measures Self-report People report their own behavior, thoughts, feelings, knowledge, or opinions Questionnaires and surveys are two types of self-report measures

    31. Self-report measures Types of self-report items Open-ended e.g., "What values are most important to you in life?" Advantage: poses few restrictions, allows participants to truly express their thoughts Disadvantage: can present difficulties for comparing responses, analyzing statistically

    32. Self-report measures Types of self-report items Closed-ended (restricted) e.g., multiple choice questions Which value is most important to you: (a) wealth (b) family (c) health (d) intellectual growth Advantages and disadvantages?

    33. Self-report measures Types of self-report items Rating scale e.g., Likert Scale

    34. Self-report measures Types of self-report items Rating scale Advantages: - give a numeric score - are quick & easy - allow for degrees of agreement Disadvantage: - response set

    35. Self-report measures Advantages of Self-report measures: - they are probably the most direct way to assess a construct - each individual is in a unique position of knowledge and awareness of their own internal state - an answer to a direct question seems to be more valid than a measure of something that is only indirectly related

    36. Self-report measures Disadvantages - participants may not tell the truth - participants may not be accurate in identifying their internal states - responses may be subtly influenced by the presence of the researcher, the wording of the questions, etc. - the "hello-goodbye effect"

    37. Physiological measures Physical measures of a psychological construct Heart rate, perspiration, blood flow in the brain, electrical activity in the brain

    38. Physiological measures Advantage - this type of measurement is objective -- it provides accurate, reliable, and well-defined information Disadvantages - may not provide a valid measure of the construct - inconvenience & expense of equipment - unnatural situation may change subjects' reactions

    39. Behavioral measures Observation and measurement of behaviors Two types of behavioral measures Direct observation Indirect observation

    40. Behavioral measures Direct observation The behavior measured may be the actual variable of interest, or it may be considered an indicator of a hypothetical construct E.g., - disruptive behavior in the classroom - reaction time as a measure of alertness

    41. Behavioral measures Advantages - provide a vast number of options for defining and measuring a construct - if the behavior is the actual variable of interest, no hypothetical construct is needed Disadvantage - a behavior may be only a temporary or situational indicator of an underlying construct

    42. Behavioral measures Behavioral Observation Methods frequency -- How many occurrences in a fixed time period? duration -- How much time does an individual engage in a behavior? interval counts -- divide the observation period into a series of intervals, then record whether the behavior occurs during each interval
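
The three observation methods on the slide can be sketched against one hypothetical behavior log: a list of (start, end) times, in seconds, at which the target behavior occurred during a 60-second observation period. The episode data are made up.

```python
# Hypothetical log of behavior episodes during a 60-second observation.
episodes = [(3, 7), (15, 16), (31, 40), (55, 58)]

# Frequency: how many occurrences in the fixed time period?
frequency = len(episodes)

# Duration: how much time does the individual engage in the behavior?
duration = sum(end - start for start, end in episodes)

# Interval counts: divide the period into 10-second intervals, then
# record whether the behavior occurs during each interval.
interval_len = 10
intervals = []
for i in range(0, 60, interval_len):
    occurred = any(start < i + interval_len and end > i
                   for start, end in episodes)
    intervals.append(occurred)

print(frequency, duration, sum(intervals))  # 4 occurrences, 17 s, 4 intervals
```

The same raw log yields different scores under each method, which is why the choice of observation method is itself part of the operational definition.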

    43. Behavioral measures Indirect observation Participant does not need to be present to be observed Types of indirect observation Content analysis (e.g., literature) Archival research (historical records)

    44. Outline: Selecting a measurement Modalities of measurement Scales/levels of measurement Nominal Ordinal Interval Ratio

    46. LOM Summary Nominal: You can say: One thing is qualitatively different from another You can’t say: One thing is more or less than another Ordinal: You can say: One thing is more or less than another You can’t say: How much more or less Interval: You can say: How much of a difference exists between things You can’t say: One thing is twice or half as much as another Ratio: You can say: One thing is half or twice as much as another

    47. Levels of Measurement Exercise What is the LOM (nominal, ordinal, interval, or ratio)? Spelling test score Neighborhood Age in years Time for 100-yard dash Choice of after-school clubs Place in beauty pageant GPA High school rank

    48. Outline: Selecting a measurement Modalities of measurement Scales/levels of measurement Validity and reliability of measurement Other aspects of measurement

    49. Reliability and Validity How can we be sure that the measurements obtained from an operational definition actually represent the intangible construct we're interested in? Researchers have developed two general criteria for evaluating the quality of measurement procedures.

    50. Reliability and Validity Reliability The consistency or stability of a measure Does it measure the same thing each time? Validity The truthfulness of a measure Does it measure what it intends to measure?

    51. Reliability of measurement Reliability The consistency or stability of a measure Does it measure the same thing each time? A measurement is reliable if repeated measurements of the same individual under the same conditions produce identical (or nearly identical) values.

    52. Reliability of measurement Reliability Includes the notion that each individual measurement has an element of error. Measured score = True score + Error A measurement is reliable if measurement error is small

    53. Reliability of measurement Measured score = True score + Error E.g., IQ score: your measured score is determined partially by your level of intelligence (your true score) … … but it is also influenced by other factors like your mood, health, luck in guessing (error).

    54. Reliability of measurement Measured score = True score + Error As long as error is small, reliability is good. (e.g., IQ tests) If the error component is large, the measurement is not reliable. (e.g., reaction time tests)
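
The Measured score = True score + Error idea can be simulated directly: one person with a fixed true score is "measured" repeatedly under small and large error. The true score and error sizes are arbitrary illustration values.

```python
import random
import statistics

random.seed(1)

# Measured score = True score + Error, simulated for one individual.
true_score = 110  # hypothetical true IQ

# 20 repeated measurements each, with small vs. large random error.
small_error = [true_score + random.gauss(0, 2) for _ in range(20)]
large_error = [true_score + random.gauss(0, 15) for _ in range(20)]

# When error is small, repeated measurements cluster tightly around the
# true score (reliable); when error is large, they scatter (unreliable).
print(f"small error: sd = {statistics.stdev(small_error):.1f}")
print(f"large error: sd = {statistics.stdev(large_error):.1f}")
```

The spread of the repeated measurements, not their average, is what reliability is about: both sets center near 110, but only the first is consistent.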

    55. Reliability of measurement Common Sources of Error 1. Observer error Simple human error (such as lack of precision) in measuring. E.g., four people with stopwatches recording the winner's time in a race -- differences in judgment and reaction time.

    56. Reliability of measurement Common Sources of Error 2. Environmental changes It's not really possible to measure the same individual at different times under identical circumstances. Small environmental changes can influence measurement (e.g., temperature, time of day, background noise)

    57. Reliability of measurement Common Sources of Error 3. Participant changes The participant can change between measurements E.g., mood, body temperature, hunger, fatigue

    58. Reliability of measurement Forms of reliability Test-retest reliability Split-half reliability Inter-rater reliability

    59. Reliability Test-retest reliability A test of stability over time Test people with exactly the same test (or equivalent version of the test) at two time points and compare scores Potential problems First testing contaminates second Participants change over time

    60. Reliability Split-half reliability A measure of consistency within a measure Relatedness of items on a test or questionnaire When we give a multi-item test, we assume that the different questions measure a part or aspect of the construct. If this is true, there should be some consistency among the items. Researchers split the set of items in half, and then evaluate whether they are in agreement.
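
Split-half reliability can be sketched with a hypothetical 6-item test taken by 5 participants: split the items in half, total each half per person, and correlate the halves. All scores below are made up.

```python
# Rows = participants, columns = items (each scored 1-5); made-up data.
scores = [
    [4, 5, 4, 4, 5, 4],
    [2, 2, 3, 2, 2, 3],
    [5, 4, 5, 5, 4, 5],
    [3, 3, 3, 3, 4, 3],
    [1, 2, 1, 2, 1, 2],
]

# Split the items in half (odd vs. even positions) and total each half.
half_a = [sum(row[0::2]) for row in scores]
half_b = [sum(row[1::2]) for row in scores]

# Pearson correlation between the halves: high agreement suggests the
# items consistently measure the same construct.
n = len(half_a)
mean_a, mean_b = sum(half_a) / n, sum(half_b) / n
cov = sum((a - mean_a) * (b - mean_b) for a, b in zip(half_a, half_b))
var_a = sum((a - mean_a) ** 2 for a in half_a)
var_b = sum((b - mean_b) ** 2 for b in half_b)
r = cov / (var_a * var_b) ** 0.5
print(f"split-half correlation r = {r:.2f}")
```

Here each participant gets nearly the same total on both halves, so r is close to 1.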

    61. Reliability Inter-rater reliability A measure of agreement between two observers of the same behaviors High reliability demonstrates strong definitions of behavioral categories
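
A minimal sketch of inter-rater reliability as percent agreement: two observers code the same 10 behavior intervals as aggressive ('A') or not ('N'). The codes are invented for illustration.

```python
# Two observers' codes for the same 10 intervals (made-up data).
rater_1 = ['A', 'N', 'A', 'A', 'N', 'N', 'A', 'N', 'A', 'N']
rater_2 = ['A', 'N', 'A', 'N', 'N', 'N', 'A', 'N', 'A', 'N']

# Percent agreement: fraction of intervals where both raters gave the
# same code. High agreement suggests well-defined behavioral categories.
agreements = sum(a == b for a, b in zip(rater_1, rater_2))
percent_agreement = agreements / len(rater_1)
print(f"agreement = {percent_agreement:.0%}")  # raters agree on 9 of 10
```

Percent agreement is the simplest index; it does not correct for agreement expected by chance, which more refined indices (such as Cohen's kappa) do.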

    62. Validity of measurement Validity The truthfulness of a measure Does it measure what it intends to measure? E.g., IQ: Think of the absent-minded professor with an IQ of 160 who functions poorly in everyday life.

    63. Validity of measurement Types of Validity Face validity Concurrent validity Predictive validity Construct validity Convergent validity Divergent validity

    64. Validity Face validity Appearance of validity Based on subjective judgment Difficult to quantify Little scientific value Potential problem with high face validity: you don't always want participants to know what you're measuring.

    65. Validity Concurrent validity Scores obtained from a new measure covary with (are consistent with) scores from an established measure of the same construct But … caution: the fact that two sets of measurements are related doesn't mean that they are measuring the same thing. E.g., measuring people's height by weighing them

    66. Validity Predictive validity Scores obtained from a measure accurately predict future behavior Most theories make predictions about how different values of a construct will affect behavior. If people's behavior is consistent with the results of a test, the measure has predictive validity. E.g., "need for achievement" score predicted children's behavior in a game

    67. Validity Construct validity How well a test assesses the underlying construct Is this measurement of the variable consistent with all the things we already know about the variable and its relationship to other variables? E.g., measuring people's height by weighing them is ok in terms of concurrent validity (height and weight are correlated), but not in terms of construct validity (height is not influenced by food deprivation but weight is).

    68. Construct validity Two types of Construct Validity Convergent validity Divergent validity

    69. Construct validity Convergent validity Scores on the measure are related to scores on measures of related constructs Use two methods to measure the same construct and then show that the scores are strongly related **Note: There is a difference between convergent and concurrent validity. It's a subtle difference. Concurrent validity has to do with comparing a new measure to established measures. Please see p. 83 of your text.

    70. Construct validity Divergent validity The test does not measure something it was not intended to measure Demonstrate that we are measuring one specific construct (and not a mixture of two)

    71. Construct validity Divergent validity Use two methods to measure two constructs, then demonstrate: convergent validity for each construct no relationship between scores for the two constructs when measured by the same method.

    72. Construct validity Convergent and divergent validity E.g.: want to measure aggression in children, but worried that your measure might reflect general activity level get observations of aggression and activity, and get teacher ratings of aggression and activity aggression measures should match, activity measures should match, the two observational measures should not match, and the two ratings measures should not match
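
The aggression/activity example can be sketched as a correlation check: the two methods should agree within each construct (convergent), while different constructs measured the same way should be unrelated (divergent). All scores and the `pearson` helper are made up for illustration.

```python
def pearson(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# Hypothetical scores for 5 children, two constructs x two methods.
obs_aggression  = [7, 2, 5, 9, 3]   # direct observation of aggression
rate_aggression = [6, 3, 5, 8, 2]   # teacher rating of aggression
obs_activity    = [6, 4, 8, 5, 7]   # direct observation of activity
rate_activity   = [7, 3, 8, 4, 6]   # teacher rating of activity

# Convergent: same construct, different methods -> should match.
print(f"aggression, two methods: r = {pearson(obs_aggression, rate_aggression):.2f}")
print(f"activity, two methods:   r = {pearson(obs_activity, rate_activity):.2f}")
# Divergent: different constructs, same method -> should NOT match.
print(f"obs aggression vs obs activity: r = {pearson(obs_aggression, obs_activity):.2f}")
```

If the cross-construct correlation were also high, the aggression measure would likely be contaminated by general activity level.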

    73. Reliability and validity Indicate whether the researcher is interested in demonstrating reliability or validity. Specify the type of reliability or validity being described. Dr. Chang has written a test to measure extroversion. He is worried that his test might simply measure social desirability (i.e., people will respond to the items the way they think they should respond, instead of being honest). To make sure his test does not measure social desirability, he gives 100 research participants his measure of extroversion and a measure of social desirability and calculates the correlation between the two scales. Dr. Chang is trying to demonstrate ______________________________. Dr. Diefenbacher has written two versions of her Religiosity Scale. Each version is 20 items in length. She gives all 40 items to 100 research participants. Then she examines the correlation between each set of 20 items. Dr. Diefenbacher is interested in demonstrating _____________________________.

    74. Other aspects of measurement Sensitivity and range effects Ceiling effects Floor effects

    75. Other aspects of measurement Participant reactivity Social desirability Demand characteristics Observer bias Expectancy effects

    76. Other aspects of measurement Participant reactivity Social desirability Demand characteristics Observer bias Expectancy effects Potential solutions Deception Blinding (single-blind & double-blind studies)
