
Results of Ch 2 Quiz

This text discusses the definitions, characteristics, and measurement of variables in research. It explores the concepts of validity, reliability, and different scales of measurement.


Presentation Transcript


  1. Results of Ch 2 Quiz

  2. Defining and Measuring Variables

  3. Definitions

  4. Variables • Characteristics or conditions that change or have different values for different individuals • Age • Gender • Score • Elapsed Time

  5. Variables in research Usually, researchers are interested in how variables are affected by different conditions or how variables differ from one group of individuals to another. • How do depression scores change in response to therapy? • How much difference is there in reading scores between third-grade and fourth-grade children?

  6. Variables & Constructs • Variables are well defined, easily observed, and easily measured. • age, time, gender, score • Constructs are intangible, abstract attributes such as , intelligence, motivation, or self- esteem.

  7. Operational Definition An operational definition specifies a measurement procedure (a set of operations) for measuring a construct.

  8. Validity & Reliability

  9. Validity & Reliability Researchers have developed two general criteria for evaluating the quality of any measurement procedure: validity and reliability

  10. Example

  11. Validity • To establish validity, you must demonstrate that the measurement procedure is actually measuring what it claims to be measuring.

  12. Types of Validity • Face validity • Criterion-based validity (Concurrent validity, Predictive validity) • Construct validity (Convergent, Divergent)

  13. Face validity • Face validity is the simplest and least scientific definition of validity. • Face validity concerns the superficial appearance, or face value, of a measurement procedure.

  14. Concurrent Validity • The scores obtained from the new measurement technique are directly related to the scores obtained from another, better-established procedure for measuring the same variable. Examples • Teacher’s agreement • Standard tests
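
  A minimal sketch of how concurrent validity is often quantified: correlate scores from the new procedure with scores from the better-established one. This assumes Python with SciPy available, and the score lists below are invented purely for illustration.

    # Hypothetical illustration: concurrent validity as the correlation between
    # a new measure and an established measure of the same variable.
    # The scores are made-up example data, not real results.
    from scipy.stats import pearsonr

    new_measure = [12, 15, 9, 20, 17, 11, 14, 18]   # scores from the new procedure
    established = [30, 38, 25, 49, 44, 28, 35, 46]  # scores from the established test

    r, p = pearsonr(new_measure, established)
    print(f"Concurrent validity coefficient: r = {r:.2f} (p = {p:.3f})")
    # A strong positive correlation suggests both procedures measure the same variable.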

  15. Predictive Validity • When the measurements of a construct accurately predict behavior (according to the theory), the measurement procedure is said to have predictive validity. • A medical science test that predicts passing the Medical Board Exam

  16. Construct validity • Construct validity is demonstrated by showing that your measure matches what theories and other studies say about that variable. • For example, you would need to study all the past research on aggression and show that the measurement procedure produces scores that behave in accordance with everything that is known about the construct “aggression.”

  18. Aggression • You search for all the symptoms or questions that have been used in earlier research to measure aggression, and then you run a factor analysis to see which questions are not related to the construct (see the sketch below).
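
  As a rough sketch of that step, the snippet below runs an exploratory factor analysis with scikit-learn's FactorAnalysis. The item responses are randomly generated placeholders and the single-factor assumption is only for illustration; in practice you would load real questionnaire data.

    # Hypothetical illustration: exploratory factor analysis on aggression items.
    # Rows = respondents, columns = questionnaire items (placeholder data).
    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(0)
    responses = rng.integers(1, 6, size=(200, 10)).astype(float)  # 200 people, 10 items

    fa = FactorAnalysis(n_components=1)   # assume a single "aggression" factor
    fa.fit(responses)

    # Items whose loadings are near zero relate weakly to the common factor and
    # are candidates for removal from the aggression measure.
    for item, loading in enumerate(fa.components_[0], start=1):
        print(f"Question {item}: loading = {loading:+.2f}")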

  19. Convergent / Divergent • Convergent validity involves creating two different methods to measure the same construct, then showing a strong relationship between the measures obtained from the two methods.

  20. Convergent / Divergent • Divergent validity, on the other hand, involves demonstrating that we are measuring one specific construct and not combining two different constructs in the same measurement process. • Example: a self-esteem measure should not also be tapping IQ or math ability.

  21. Reliability

  22. Reliability • A measurement procedure is said to have reliability if it produces identical (or nearly identical) results when it is used repeatedly to measure the same individual under the same conditions.

  23. Three types of reliability • Successive measurements (test-retest, parallel-forms reliability) • Simultaneous measurements (inter-rater reliability) • Internal consistency (split-half reliability, Cronbach’s alpha, and the Kuder-Richardson formulas); see the sketch below.
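
  As one concrete internal-consistency example, here is a minimal sketch of Cronbach's alpha, assuming Python with NumPy; the response matrix is invented for illustration.

    # Cronbach's alpha = (k / (k - 1)) * (1 - sum(item variances) / variance(total score))
    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """items: 2-D array, rows = respondents, columns = test items."""
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)      # variance of each item
        total_var = items.sum(axis=1).var(ddof=1)  # variance of each person's total score
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Made-up responses from 5 people on a 4-item scale, for illustration only.
    scores = np.array([[3, 4, 3, 4],
                       [2, 2, 3, 2],
                       [5, 4, 5, 5],
                       [1, 2, 1, 2],
                       [4, 4, 4, 5]])
    print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")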

  24. The Relationship Between Reliability and Validity • These two factors are partially related and partially independent. • Reliability is a prerequisite for validity • The consistency of measurement is no guarantee of validity.

  25. SCALES OF MEASUREMENT / Modes of Measurement

  26. SCALES OF MEASUREMENT • In very general terms, measurement is a procedure for classifying individuals. The set of categories used for classification is called the scale of measurement. • Nominal: categories simply represent qualitative (not quantitative) differences in the variable measured. • Ordinal: a series of ranks, or verbal labels such as small, medium, and large. • Interval & Ratio: the categories are organized sequentially and all categories are the same size.
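
  To make the scale types concrete, here is a small sketch, assuming pandas, of how nominal, ordinal, and interval/ratio variables might be represented in a data frame; the values are invented.

    # Hypothetical illustration of the scales of measurement as pandas dtypes.
    import pandas as pd

    df = pd.DataFrame({
        "gender": pd.Categorical(["F", "M", "F"]),             # nominal: unordered categories
        "size": pd.Categorical(["small", "large", "medium"],
                               categories=["small", "medium", "large"],
                               ordered=True),                   # ordinal: ranked categories
        "temp_c": [36.5, 37.0, 38.2],                           # interval: equal units, arbitrary zero
        "age_yrs": [21, 34, 19],                                # ratio: equal units, true zero
    })
    print(df.dtypes)
    print(df["size"].min())  # ordered categories support ranking (prints "small")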

  27. Modes of Measurement The external expressions of a construct are traditionally classified into three categories: • Self-report • Physiological • Behavioral

  28. Self-report Advantage • No one knows more about an individual than the individual. Disadvantages • A participant may deliberately lie to create a better self-image. • Responses may be influenced subtly by the presence of the researcher, by the wording of the questions, or by other aspects of the research situation.

  29. Physiological Measures • Fear, for example, reveals itself through an increased heart rate. • Brain imaging techniques such as positron emission tomography (PET).

  30. Advantage • One advantage of physiological measures is that they are extremely objective.

  31. Disadvantages • One disadvantage of such measures is that they typically require equipment that may be expensive or unavailable. • In addition, the presence of monitoring devices creates an unnatural situation that may cause participants to react differently. • Example: a lie detector.

  32. Behavioral The behaviors may be completely natural events such as laughing, playing, eating, sleeping, arguing, or speaking.

  33. Multiple Measures • One method of obtaining a more complete measure of a construct is to use two (or more) different procedures to measure the same variable. • For example, we could record both heart rate and behavior as measures of fear.

  34. Sensitivity and Range Effects

  35. Sensitivity and Range Effects • In general, if we expect fairly small, subtle changes in a variable, then the measurement procedure must be sensitive enough to detect the changes.

  36. Example Which one is more sensitive? • Pass-Fail • A-B-C-D • 1-10 • 1-100

  37. Experimenter Bias and Participant Reactivity

  38. Experimenter Bias • Typically, a researcher knows the predicted outcome of a research study and is in a position to influence the results, either intentionally or unintentionally.

  39. How? Even the most trained interviewers can influence participants • by paralinguistic cues (variations in tone of voice) that lead participants to give the expected or desired responses • by kinesthetic cues (body posture or facial expressions) • by verbal reinforcement of expected or desired responses

  40. Participant Reactivity • If we observe or measure an inanimate object such as a table or a block of wood, we do not expect the object to have any response such as “Whoa! I’m being watched. I had better be on my best behavior.” • Unfortunately, this kind of reactivity can happen with human participants.

  41. Four types of subjects Four different subject roles have been identified: • The good subject role (tries to give the responses the researcher wants). • The negativistic subject role (works against the researcher). • The apprehensive subject role (worries about being evaluated and responds in a socially desirable way). • The faithful subject role (follows instructions scrupulously for the sake of science).
