
Presentation Transcript


  1. CONCEPTUALIZING AND MEASURING DEMOCRACY Evaluating Alternative Indices GERARDO L. MUNCK - JAY VERKUILEN University of Illinois at Urbana-Champaign

  2. “The study of democracy increasingly has drawn on sophisticated statistical methods of causal inference. This is a welcome development, and the contributions of this quantitative literature are significant. However, quantitative researchers have paid sparse attention to the quality of the data on democracy that they analyze. Indeed, the assessments that have been carried out are usually restricted to fairly informal discussions of alternative data sets and somewhat superficial examinations of correlations among aggregate data. To a large extent, problems of causal inference have overshadowed the equally important problems of conceptualization and measurement.”

  3. I: THE CHALLENGE OF CONCEPTUALIZATION: ATTRIBUTES AND LOGICAL ORGANIZATION
  The authors provide a systematic assessment of the large-N data sets on democracy that are most frequently used in current statistical research. Beyond a concern with empirical scope...

  4. ...toward a thorough comparison and assessment of these data sets that tackles a range of methodological issues. Munck and Verkuilen address the distinctively methodological task of constructing a comprehensive and integrated framework for the analysis of data.

  5. [Diagram: direction of error risk between disaggregation and aggregation]

  6. Conceptualization, measurement, and aggregation (not only empirical scope) affect the generation of a data set used in statistical research.
  CRUCIAL POINT: these tasks constitute a methodologically scientific process; the index builder must assume the burden of proof of justifying and testing each particular choice.
  Both to make explicit and to justify the criteria used to evaluate alternative democracy indices, the article addresses the distinctively methodological task of constructing a comprehensive and integrated framework for the analysis of data.

  7. I.I: THE CHALLENGE OF CONCEPTUALIZATION: ATTRIBUTES
  • Specification of the meaning of the concept (its attributes).
  • The problem of maximalist definitions:
   - a decrease in usefulness (a concept without empirical referents);
   - little analytical use (restricting the range of empirical questions that can be studied).
  • The problem of minimalist definitions:
   - difficulty in capturing relevant theoretical concerns and in discriminating among cases.
  • Evaluating the existing indices...
  • Freedom House index: the inclusion of attributes such as “socioeconomic rights,” “freedom from gross socioeconomic inequalities,” “property rights,” and “freedom from war.”
  • Dahl’s influence.

  8. The problem of minimalist definitions is quite widespread in existing indices of democracy.
  • Omission of the “participation” attribute:
   - Polity index: scope of the data (the 19th- and early-20th-century experience vs. the late 20th century: the gradual expansion of the right to vote);
   - ACLP index and Coppedge and Reinicke Polyarchy index;
   - Freedom House index;
   - Bollen.
  • Omission of the “offices” attribute (ACLP index): the set of government offices that are filled through elections has varied independently of the extent to which elections were contested and inclusive.
   - Also the Coppedge & Reinicke, Gasiorowski, and Vanhanen indices.
  • Omission of the “agenda-setting power of elected officials” attribute: Arat (1991), Bollen (1980), Hadenius (1992); “constraints on the chief executive” in the Polity IV data set (Marshall & Jaggers): who exercises power? A measurement problem: the unreliability of “legislative power.”

  9. I.II: THE CHALLENGE OF CONCEPTUALIZATION: LOGICAL ORGANIZATION
  Second step: how the attributes are related to each other, that is, the vertical organization of attributes by level of abstraction, which isolates the “leaves of the concept tree” (see the small sketch after this slide).
  Conceptual logic and its problems:
   - less abstract attributes should be placed on the proper branch of the concept tree, that is, immediately subordinate to the more abstract attribute they help flesh out and make more concrete (the problem of conflation);
   - attributes at the same level of abstraction should tap into mutually exclusive aspects of the attribute at the immediately superior level of abstraction (the problem of redundancy).
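
  A minimal sketch of the “concept tree” idea: the tree is stored as nested dicts and the terminal attributes (the leaves, which ultimately receive scores) are collected. The attribute names are illustrative only, not the coding scheme of any particular index.

  ```python
  def leaves(tree):
      """Return the terminal attributes ("leaves") of a nested concept tree."""
      out = []
      for name, subtree in tree.items():
          if subtree:                      # has subordinate attributes: descend
              out.extend(leaves(subtree))
          else:                            # empty dict marks a leaf
              out.append(name)
      return out

  # Hypothetical concept tree for "democracy" with two attributes and their components.
  democracy = {
      "contestation": {"right to form parties": {}, "fairness of elections": {}},
      "participation": {"extent of suffrage": {}},
  }

  print(leaves(democracy))
  # ['right to form parties', 'fairness of elections', 'extent of suffrage']
  ```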

  10. All indices of democracy carefully distinguish the levels of abstraction of their attributes and thus clearly isolate the leaves of their concept trees. BUT these indices do not avoid basic problems of conceptual logic.
  Redundancy:
   - Polity IV: competitiveness and regulation of participation; competitiveness and openness of executive recruitment;
   - Hadenius: “openness of elections” and “political freedoms.”
  Conflation:
   - Arat index: offices and agenda-setting power of elected officials;
   - Bollen index; Hadenius’s index; Freedom House index.

  11. II: THE CHALLENGE OF MEASUREMENT: INDICATORS AND LEVELS OF MEASUREMENT
  From concept to measurement: the process of measurement is completed with the assignment of scores to each of the leaves of the concept tree.
  Two tasks: the selection of indicators and the selection of the measurement level.
  One standard of assessment: the validity of the measures.

  12. II.I: Selection of indicators and the impact of two pitfalls on the validity of measures.
  1. The failure to recognize the manifold empirical manifestations of a conceptual attribute, and hence the failure to properly use multiple indicators. Q: are the indices able to reflect differences in context? The need is to maximize the validity of indicators by selecting multiple indicators, but to do so in a way that explicitly addresses the need to establish the cross-system equivalence of these indicators.
  2. The failure to appreciate the inescapable nature of measurement error. The need is to maximize the validity of indicators by selecting indicators that are less likely to be affected by bias and that can be cross-checked through the use of multiple sources (a minimal cross-checking sketch follows this slide).
  Existing indices of democracy: + Alvarez et al. and Hadenius.
  Vanhanen: a defence of the use of “simple quantitative indicators”: the attribute “competition” in terms of the percentage of votes going to the largest party and the attribute “participation” in terms of voter turnout.
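
  A minimal sketch of cross-checking one indicator against multiple sources. The source names and turnout values are hypothetical; the point is only to flag country-years where independent sources disagree, so that coding decisions can be documented rather than silently averaged away.

  ```python
  turnout_source_a = {"Country X 1990": 62.0, "Country Y 1990": 48.5}
  turnout_source_b = {"Country X 1990": 61.5, "Country Y 1990": 55.0}

  def flag_disagreements(a, b, tolerance=2.0):
      """Return cases where two sources differ by more than `tolerance` points."""
      flagged = {}
      for case in a.keys() & b.keys():      # only cases covered by both sources
          if abs(a[case] - b[case]) > tolerance:
              flagged[case] = (a[case], b[case])
      return flagged

  print(flag_disagreements(turnout_source_a, turnout_source_b))
  # {'Country Y 1990': (48.5, 55.0)}
  ```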

  13. II.II: Selection of the measurement level and the VALIDITY OF MEASURES.
  There is no a priori correct measurement level: the selection is not an easy or mechanical task, but an attempt to avoid the excesses of introducing distinctions that are either too fine-grained or too coarse-grained.
  The selection of a measurement level should:
   - be driven by the goal of maximizing homogeneity within measurement classes with the minimum number of necessary distinctions (see the discretization sketch after this slide);
   - be seen as a process that requires both theoretical justification and empirical testing (it must remain open to testing).
  CRUCIAL POINT: it is a methodologically scientific process (the index builder must assume the burden of proof of justifying and testing a particular choice). Existing democracy indices did not do this. -- Bollen vs. Alvarez.
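
  A minimal sketch of comparing candidate measurement levels for the same underlying scores: which set of cutpoints yields more homogeneous classes with fewer distinctions? The scores and cutpoints are hypothetical.

  ```python
  from statistics import pvariance

  scores = [0.05, 0.10, 0.15, 0.45, 0.50, 0.55, 0.85, 0.90, 0.95]

  def within_class_variance(scores, cutpoints):
      """Average within-class variance after discretizing at the given cutpoints."""
      bounds = [0.0] + list(cutpoints) + [1.0 + 1e-9]
      classes = [[s for s in scores if lo <= s < hi]
                 for lo, hi in zip(bounds, bounds[1:])]
      classes = [c for c in classes if len(c) > 1]   # ignore empty/singleton classes
      return sum(pvariance(c) for c in classes) / len(classes)

  print(within_class_variance(scores, [0.5]))        # dichotomy
  print(within_class_variance(scores, [1/3, 2/3]))   # trichotomy: more homogeneous here
  ```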

  14. Reliability of measures: the prospect that the same data collection process would always produce the same data. Tests of reliability: multiple coders produce the same codings (see the agreement sketch after this slide).
  Replicability of measures: the reproducibility of the process through which the data were generated. In addressing the formation of measures, analysts should record and make public (a) the coding rules, (b) the coding process, and (c) the disaggregate data generated on all indicators.
  Assessing the existing democracy indices:
   ++ Alvarez, Hadenius, and Polity IV (Marshall & Jaggers, 2001a);
   -- Gasiorowski and Freedom House.
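
  A minimal sketch of an intercoder reliability check: simple percent agreement and Cohen's kappa for two coders scoring the same cases. The codings below are hypothetical.

  ```python
  from collections import Counter

  coder_1 = ["dem", "dem", "aut", "aut", "dem", "aut", "dem", "aut"]
  coder_2 = ["dem", "dem", "aut", "dem", "dem", "aut", "aut", "aut"]

  def cohen_kappa(a, b):
      """Chance-corrected agreement between two coders."""
      n = len(a)
      observed = sum(x == y for x, y in zip(a, b)) / n
      freq_a, freq_b = Counter(a), Counter(b)
      expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in set(a) | set(b))
      return (observed - expected) / (1 - expected)

  print("agreement:", sum(x == y for x, y in zip(coder_1, coder_2)) / len(coder_1))
  print("kappa:", round(cohen_kappa(coder_1, coder_2), 3))
  ```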

  15. III: THE CHALLENGE OF AGGREGATION: LEVELS AND RULES OF AGGREGATION
  Q: whether and how to reverse the process of disaggregation that was carried out during the conceptualization stage.
  The selection of the level of aggregation (whether): it is necessary to recognize the potential benefits and costs involved in the choice to proceed to a higher level of aggregation. The standard of parsimony (an aggregate concept is likely to be more analytically tractable and to facilitate theorizing and testing) vs. the concern with underlying dimensionality and differentiation. The problem: loss of information, and consequently of validity. Methodologically, there is a need for testing.
  ++ The exception is Coppedge and Reinicke, who tackle the process of aggregation by constructing a Guttman scale (see the scale-check sketch after this slide). The advantage: no loss of information without having to assign weights to each component. The problem: a Guttman scale can only be constructed if the multiple components move in tandem and measure the same underlying dimension, a demanding condition for the Coppedge and Reinicke index.
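
  A minimal sketch of a Guttman-scale check: a coefficient of reproducibility that counts how often a case's item responses deviate from the perfect cumulative pattern implied by its total score. The response patterns are hypothetical, and this is only one of several conventional ways to count scale errors.

  ```python
  def reproducibility(responses):
      """responses: list of 0/1 item vectors, items ordered from easiest to hardest."""
      errors, total = 0, 0
      for row in responses:
          score = sum(row)
          ideal = [1] * score + [0] * (len(row) - score)  # perfect cumulative pattern
          errors += sum(r != i for r, i in zip(row, ideal))
          total += len(row)
      return 1 - errors / total

  cases = [
      [1, 1, 1],  # satisfies all three components
      [1, 1, 0],
      [1, 0, 0],
      [0, 1, 0],  # a scale "error": harder item passed without the easier one
  ]
  print(round(reproducibility(cases), 3))  # values near 1.0 suggest a usable scale
  ```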

  16. The selection of the aggregation rule (how). Notice:
   - it is a way of formalizing the theoretical understanding of the links between attributes;
   - it is a two-step process: ex ante, making explicit the theory concerning the relationship between attributes; ex post, ensuring that the aggregation rule is actually the equivalent formal expression of the posited relationship (see the example after this slide);
   - conceptual logic is the prerequisite.
  CRUCIAL POINT: it is a methodologically scientific process (the index builder must assume the burden of proof of justifying and testing a particular choice).
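
  A minimal sketch of how different aggregation rules formalize different theories about two attributes (scores assumed to lie in [0, 1]; attribute names are illustrative). An additive rule treats the attributes as substitutable; a multiplicative rule treats each as necessary, so a zero on either attribute drives the index to zero.

  ```python
  def additive(contestation, participation):
      """Substitutable attributes: a high score on one can offset a low score on the other."""
      return (contestation + participation) / 2

  def multiplicative(contestation, participation):
      """Necessary attributes: the index collapses if either attribute is absent."""
      return contestation * participation

  print(additive(0.9, 0.0))        # 0.45 -- high contestation compensates
  print(multiplicative(0.9, 0.0))  # 0.0  -- participation is treated as necessary
  ```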

  17. Existing data sets on democracy are less than adequate.
  • Freedom House (2000) index:
   + the selected aggregation rule is clear and explicit;
   - no theoretical justification;
   - problem: equal weighting of components;
   - problem: replicability.
  • Vanhanen index (see the sketch after this slide):
   + a clear and simple aggregation rule;
   - no theoretical justification;
   - no tests;
   + replicability.
  • Polity IV index:
   - an explicit but convoluted aggregation rule;
   - no justification;
   - the index’s problems of conceptual logic;
   + replicability.
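
  A minimal sketch of Vanhanen-style aggregation as it is commonly described: Competition (100 minus the largest party's vote share) multiplied by Participation (voters as a share of the total population), divided by 100. The exact operationalization should be checked against Vanhanen's own codebook; the numbers below are hypothetical.

  ```python
  def vanhanen_index(largest_party_share, voters, population):
      """Simple multiplicative aggregation of two quantitative indicators."""
      competition = 100.0 - largest_party_share          # % of votes not won by the largest party
      participation = 100.0 * voters / population        # turnout as % of total population
      return competition * participation / 100.0

  print(round(vanhanen_index(largest_party_share=45.0,
                             voters=4_000_000,
                             population=10_000_000), 1))  # 22.0
  ```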

  18. • Arat index:
   + a formal aggregation rule that is quite complex;
   - no justification;
   - never tested;
   - not testable.
  • Alvarez index: +++
  • Hadenius’s (1992) index:
   ++ a very complex aggregation rule, justified and formalized;
   ++ tests of the robustness of the proposed aggregation rule;
   ++ replicability.
  The biggest problem is that most index constructors have simply assumed that it is appropriate and desirable to move up to the highest level of aggregation, that is, to a one-dimensional index.

  19. CONCLUSION: AN OVERVIEW AND A CALL FOR EVALUATIONS OF DATA SETS
  Index creators have demonstrated widely divergent levels of sophistication in tackling the challenges of conceptualization, measurement, and aggregation. BUT this review shows that no single index offers a satisfactory response to all three challenges.

  20. The most common comparison among indices, via simple correlation tests on aggregate data, has consistently shown a very high level of correlation among indices. Q: does this mean they capture the same reality equally well?
  The authors say “no”:
   - indices have relied, in some cases quite heavily, on the same sources and even the same precoded data (source bias);
   - these correlation tests do not give a sense of the validity of the data but only of their reliability;
   - all correlation tests have been performed with highly aggregated data and leave unresolved the critical issue of the potential multidimensionality of the data.
  The authors’ test: a nonlinear principal components method (a simplified dimensionality check follows this slide).
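
  A minimal sketch of a dimensionality check on aggregate index scores. The authors use a nonlinear principal components method; this simplified stand-in uses ordinary (linear) PCA on the correlation matrix, and the scores below are hypothetical.

  ```python
  import numpy as np

  # rows = countries, columns = hypothetical scores from three different indices
  scores = np.array([
      [0.9, 0.8, 0.85],
      [0.2, 0.3, 0.25],
      [0.6, 0.7, 0.40],
      [0.1, 0.2, 0.30],
      [0.8, 0.9, 0.70],
  ])

  corr = np.corrcoef(scores, rowvar=False)              # correlation matrix of the indices
  eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1] # descending eigenvalues
  explained = eigenvalues / eigenvalues.sum()           # share of variance per component

  print(np.round(explained, 2))
  # If the first component falls well short of 1.0, the indices are unlikely
  # to be tapping a single underlying dimension.
  ```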

  21. Conclusions
  • P: “analysts many times overlook the fact that mathematical statistics—which develops the relationship between theory, data, and inference—presumes that the relationship between theory, data, and observation has been well established. Thus one cannot slight the task of measurement hoping that mathematical statistics will somehow offer a solution to a problem it is not designed to tackle.”
  • The authors aim to offer a methodological framework for the generation and analysis of data.
  • They identify areas where the quality of the data has to be improved.
