
Methods of Observation


Presentation Transcript


  1. Methods of Observation PS 204A, Week 2

  2. What is Science? Science is: (think Ruse) • Based on natural laws/empirical regularities. • Makes predictions. • Collections of laws that generate predictions that are empirically confirmed constitute “explanations.” • Must be falsifiable. • Is always tentative (move from grossly wrong to more subtly wrong theories).

  3. The Scientific Enterprise

  4. Theory Analogy • Theories always possess a “theoretical notion” or analogy that simplifies reality. • This analogy is embodied in the assumptions or premises of the theory. Assumptions are themselves unobservable – and known to be simplifications (e.g., individuals are rational, states are unitary actors). Prefer plausible over less plausible premises. • Since premises are never “true” or, at least, are unobservable, theories are never true, only more or less useful. • Utility is defined by the number of empirically supported propositions the theory generates.

  5. Plausibility of Premises • If a theory’s premises are unobservable and not things we agree on, how do we assess their plausibility? • By the utility of their predictions (Friedman). • By their accordance with natural laws. • By transforming premises into objects of investigation that are themselves the subjects of theories (e.g., a theory of rationality; a theory of unitary states).

  6. Science is a series of “boxes within boxes” • Balance of power theory: the international system is anarchic and composed of unitary states wishing only to survive; the theory predicts that states act to check the power of other states. • Within any given “box,” we take the premises as “given.” But any premise may itself become an object of investigation in another “box.” • The nested premises: anarchy (no common authority) → unitary states wishing to survive → internal hierarchy (the state “speaks” with a single voice). • If testing leads to a revision of a premise, we still need to plug the revised premise back into the original theory and retest.

  7. Hypotheses • Propositions are general statements that follow logically from the premises. • Hypotheses are propositions that contain only observable variables (i.e., if X, then Y, where both X and Y can be observed). • The central issue is deductive validity: does the hypothesis follow logically and axiomatically from the premises?

  8. Tests • We test theories by examining whether the hypotheses they generate are supported by the evidence: we make the observations the theories imply. • Conclusion validity: is there a relationship between X and Y? • Internal validity: is the relationship causal? • Construct validity: do the observable measures capture the concepts in the theory appropriately?

  9. Internal Validity • Is there a causal relationship in the model? You have evidence that YOUR treatment (IV, intervention, program) caused the outcome (DV). • It is possible to have internal validity without construct validity: internal validity does not tell you that you measured your intervention or outcomes well (e.g., a “reading program” whose real active ingredient is adult attention is internally valid but not construct valid).
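
To make the reading-program example concrete, here is a minimal Python sketch (all numbers and names are hypothetical, not from the lecture): random assignment licenses a causal claim about the treatment as delivered, but the label “reading program” misnames the active ingredient, which in this simulation is adult attention.

```python
import random

random.seed(3)

# Hypothetical setup: 1,000 children, half randomly assigned to a "reading
# program" that is delivered bundled with extra adult attention. In this
# simulation the attention, not the curriculum, adds ~5 points on average.
n = 1_000
treated = [i < n // 2 for i in range(n)]
scores = [random.gauss(50, 10) + (5 if t else 0) for t in treated]

treated_mean = sum(s for s, t in zip(scores, treated) if t) / (n // 2)
control_mean = sum(s for s, t in zip(scores, treated) if not t) / (n - n // 2)

# Random assignment makes the ~5-point gap causally credible (internal
# validity), but the construct is mislabeled: the cause is adult attention.
print(treated_mean - control_mean)
```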

  10. Construct (Measurement) Validity • Assuming there is a causal relationship in the study, can you claim that the IV (intervention) reflected your idea of the construct well? Did you implement the IV you intended to implement, and did you measure the outcome you wanted to measure? Did you operationalize the ideas of cause and effect well?

  11. Conclusion Validity • Is there a relationship between the two variables? You might infer a positive relationship, a negative relationship, or no relationship at all.
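
One common way to check conclusion validity is a permutation test, a technique not named on the slide but standard for this question. A minimal Python sketch with fabricated data (statistics.correlation requires Python 3.10+):

```python
import random
from statistics import correlation  # Python 3.10+

random.seed(2)

# Fabricated data with a modest built-in positive relationship.
x = [random.gauss(0, 1) for _ in range(200)]
y = [0.4 * xi + random.gauss(0, 1) for xi in x]

observed = correlation(x, y)

# Permutation test: shuffling y destroys any real X-Y relationship, so the
# shuffled correlations show what chance alone can produce.
null = []
for _ in range(1_000):
    random.shuffle(y)
    null.append(abs(correlation(x, y)))

p_value = sum(c >= abs(observed) for c in null) / len(null)
print(f"r = {observed:.2f}, permutation p = {p_value:.3f}")
```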

  12. Explanation v. Prediction • Theory offers an explanation for observed facts and predicts new facts that, once confirmed, are also explained. • Theories must be potentially falsifiable. Popper/Hempel insist that known facts cannot falsify a theory; therefore, prediction is the goal of all science. • Alternatively, Snyder argues that if scientific evidence is objective, evidence is evidence independent of the timing of its discovery relative to the theory.

  13. Who’s Right? • All evidence helps corroborate a theory, even known facts. • Predictions are more “valuable” than explanations in providing evidence for a theory.

  14. Generalization and Possible Refinements • Can we generalize our observations to larger populations? • The key issue here is external validity (i.e., will conclusions hold for other people at other times?). • Testing may lead us to refine our theories further, propelling the cycle another round. • Science is interactive: tests suggest refinements to theories, which then generate new predictions and tests. A conversation between theory and evidence.

  15. External Validity • Assuming a causal relationship in this study between the constructs of the cause and effect, can you generalize this effect to other persons, places, and times?

  16. Deductive v. Inductive Reasoning • The hypothetico-deductive method begins with theory, then generates tests. From observations, we draw inferences about the theory, which in turn lead to generalizations about unobserved populations. • The inductive approach begins with observations and draws inferences about unobserved populations. May or may not lead to theory. (It cuts into the cycle at observations, then draws inferences. Can be predictive.) • Induction can be science: a body of replicated and confirmed laws that are predictive. But falsifiability is always an issue.

  17. What can we learn by “observing”?

  18. Observation and Inference • Observation is central to both deduction and induction. • How do we draw inferences about unobservable phenomena, premises, or populations from observable phenomena? • How do we learn about what we can’t see from what we can? • This applies equally to inherently unobservable traits (what goes on in people’s heads), future events (predictions), and true populations (alternative worlds).

  19. Descriptive Inference: Or how do we know what we saw? • What is this a case of? What is the class of which you observe one or more members? • Many categories are question or theory dependent. • If observation is unique, no generalizations are possible. • If observation is exhaustive (all members of class), what can you generalize to?

  20. Descriptive Inference II • Probabilistic v. Deterministic Events: all events have systematic and non-systematic components. • Probabilistic events occur with some probability < 1.0; if replayed under identical conditions, the observed result would differ (more or less). On average, we would get the same result (the non-systematic component is random). • Deterministic events occur with certainty (p = 1.0): the special case in which the non-systematic component is zero.
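
A minimal Python sketch of the systematic/non-systematic distinction (the 0.6 systematic component and the noise level are invented for illustration): each “replay” of the event differs, but the random part averages out.

```python
import random

random.seed(1)

def replay(systematic=0.6, noise_sd=0.2):
    """One realization of an event: a fixed systematic component plus noise."""
    return systematic + random.gauss(0, noise_sd)

# Replaying the same event under identical conditions gives a different
# result each time, but the average recovers the systematic component.
results = [replay() for _ in range(10_000)]
print(results[:3])                  # individual replays differ
print(sum(results) / len(results))  # close to 0.6 on average
```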

  21. Descriptive Inference III • With a probabilistic event, how are we to classify a particular instance? • What should we infer from a war in which country A loses? We observe the loss, but can we generalize to other similar cases? If the probability of victory was .25, what can we infer from the actual loss? • Inference is harder the smaller the number of cases. • Problem arises when we mistake a probabilistic event for a deterministic event. • Incorrect to infer any “pathology” or “mistake” in such instances.
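
To see why a single loss licenses so little inference, compare how likely that loss is under two hypotheses. The .25 probability of victory is from the slide; the “blunder” probability below is an invented comparison point:

```python
# Suppose a sound strategy still loses 75% of the time (P(victory) = .25),
# while a genuine blunder would lose 90% of the time (invented number).
p_loss_sound, p_loss_blunder = 0.75, 0.90

# Likelihood ratio for the single observed loss: how much more probable is
# the loss under "blunder" than under "sound strategy"?
print(p_loss_blunder / p_loss_sound)  # 1.2 -- one loss is very weak evidence
```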

  22. Induction I: Empirical Laws • Analysis is limited to observable phenomena only. • An empirical law is a robust “regularity” (e.g., the democratic peace). • By extending empirical laws, we can make predictions (inferences) about future events. • But: concepts do not exist independent of theory; correlations may be spurious; correlation does not equal causation. • We may “explain” events by empirical laws, but such laws do not imply cause or constitute a causal test.
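
To see how a robust regularity can be spurious, here is a minimal Python simulation (invented variables; statistics.correlation requires Python 3.10+): an unobserved common cause produces a stable X-Y correlation even though neither variable causes the other.

```python
import random
from statistics import correlation  # Python 3.10+

random.seed(0)

n = 5_000
z = [random.gauss(0, 1) for _ in range(n)]  # unobserved common cause
x = [zi + random.gauss(0, 1) for zi in z]   # X is driven by Z, never by Y
y = [zi + random.gauss(0, 1) for zi in z]   # Y is driven by Z, never by X

# A robust, replicable "empirical law" relating X and Y appears even though
# there is no causal link between them.
print(correlation(x, y))  # roughly 0.5
```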

  23. Induction II: Thick Description as a Data Collection Method • “Thick description” as detailed casework. • May discover correlations (e.g., Darwin). • Uses observables to derive unobservable traits: discriminating between a wink and a twitch by looking at the reactions of others, the context, etc. • Generates an interpretation that we can think of as an inductive theory.

  24. Induction III: Thick Description as Science • Use observables to infer unobservable phenomena, typically motives, intent, or purpose. Then explain observed outcomes in terms of the actor’s self-understanding. • Geertz: “not an experimental science in search of law but an interpretive one in search of meaning.” • Risk of circularity: observables are used to infer unobservables, which are then used to explain the observables. • Not falsifiable. • External validity? Geertz: the object is “not to generalize across cases but to generalize within them.” • Interpretation is not science, because it is inductive and non-falsifiable.

  25. Conclusion • Observation is central to the scientific enterprise. There can be no science without observation. • Observation by itself can never demonstrate cause. To explain a phenomenon requires an empirically supported theory. • Nonetheless, much of what we do in political science is observe.
