
Model-based Inquiry: Epistemology, Modeling Skills, Assessment, & Research



  1. Model-based Inquiry: Epistemology, Modeling Skills, Assessment, & Research. Janice Gobert, The Concord Consortium. mac.concord.org mtv.concord.org. Based on work from 1) Making Thinking Visible (NSF #9980600) and 2) Modeling Across the Curriculum (IERI #0115699). All opinions expressed are those of the author and do not necessarily reflect the views of the granting agencies.

  2. How are you defining “scientific practice” in your design and empirical work? The scientific practice is modeling; this includes model-based reasoning, model-based inquiry, etc. • Model-based learning (MBL) is a theory of science learning that integrates research in cognitive psychology and science education (Gobert & Buckley, 2000). • Its tenets are that understanding requires the construction of mental models, and that all subsequent problem-solving, inferencing, or reasoning is done by means of manipulating or ‘running’ these mental models (Johnson-Laird, 1983). • Model-based reasoning also involves the testing, and subsequent reinforcement, revision, or rejection of mental models. • Modeling research at the Concord Consortium organizes learning activities, assessment, and research around model-based learning.

  3. MBR involves both internal and external models: cognitive processes act on the mental model, and external models (i.e., hypermodels) interact with it. Assumes students’ epistemologies influence model-based reasoning (Gobert & Discenna, 1997; Gobert & Pallant, 2004).

  4. Other research literature… In addition to drawing on students’ pre-instruction models in designing the unit, we (J. Gobert, Jim Slotta, Amy Pallant) drew on current findings from: • causal models (White, 1993; Schauble et al., 1991; Raghavan & Glaser, 1995), • model-based teaching and learning (Gilbert, S., 1991; Gilbert, J., 1993), • model revising (Clement, 1989, 1993; Stewart & Hafner, 1991), • diagram generation and comprehension (Gobert, 1994; Gobert & Frederiksen, 1988; Kindfield, 1993; Larkin & Simon, 1987; Lowe, 1989, 1993), • the integration of text and diagrams (Hegarty & Just, 1993), and • text comprehension (van Dijk & Kintsch, 1983; Kintsch, 1998).

  5. How is it being supported? (from Making Thinking Visible Project, mtv.concord.org) • Scaffold drawing of their own models of plate tectonics phenomena based on progressive model-building principles (model pieces acquisition). • Scaffold an on-line “field trip” to explore differences between the East and West coasts in terms of earthquakes, volcanoes, and mountains (beginning with the most salient differences to support knowledge building around the driving question; model pieces acquisition). • Pose a question about their current model (to scaffold model pieces integration and model-building). • Learn about the location of the earth’s plates (to scaffold the relationship between plate boundaries and plate tectonic phenomena; model pieces integration). • Reify important spatial and dynamic knowledge (model pieces integration) about transform, divergent, collisional, and convergent boundaries. • Learn about causal mechanisms involved in plate tectonics, i.e., convection & subduction (scaffolded by reflection activities to integrate spatial, causal, dynamic, and temporal aspects of the domain; model pieces integration).

  6. Pedagogical support (cont’d) (from Making Thinking Visible Project, mtv.concord.org) • Students are scaffolded to critique learning partners’ models using prompts in WISE (reconstruct, reify, & reflect). Prompts include: 1. Are the most important features, in terms of what causes this geologic process, depicted in this model? 2. Would this model be useful to teach someone who had never studied this geologic process before? 3. What important features are included in this model? Explain why you gave the model this rating. 4. What do you think should be added to this model in order to make it better for someone who had never studied this geologic process before? • Reflect on how their model was changed and what it now helps explain (reconstruct, reify, & reflect). Prompts include: • “I changed my original model of.... because it did not explain or include....” • “My model is now more useful for someone to learn from because it now includes….” • Transfer what they have learned in the unit to answer intriguing points (reconstruct, reify, & reflect): • Why are there mountains on the East coast when there is no plate boundary there? • How will the coast of California look in the future?

  7. How do you know when you see it? • Examples to follow….

  8. Comments on example 1…. • Original model- focus on crustal layer, no causal mechanisms for what causes mountain formation. • W. coast partners’ critique requested labels. • Revised model-includes labels, and a cut away view of the interior of the earth which includes convection in the mantle.

  9. Comments on example 2… • Original model- cross section, no causal mechanisms for what causes mountain formation. • W. coast partners’ critique requested information about direction of plate movement. • Revised model-includes a cross section with plate movement, added the mantle as an interior layer.

  10. Does it affect students’ epistemologies?* • Data to follow…. * Acknowledging the problem with assessing epistemologies with surveys.

  11. WISE Period 1 - sig. Epistemological gains

  12. WISE Period 2 - sig. Epistemological gains

  13. WISE Period 3 - sig. Epistemological gains

  14. WISE Period 4 - sig. Epistemological gains

  15. WISE Period 5 - sig. Epistemological gains

  16. Modeling Across the Curriculum Team. Principal & Co-Principal Investigators: Paul Horwitz, Concord Consortium, Principal Investigator; Janice Gobert, Concord Consortium, Co-PI & Research Director; Robert Tinker, Concord Consortium, Co-PI; Uri Wilensky, Northwestern University, Co-PI. Other senior personnel: Barbara Buckley, Concord Consortium; Chris Dede, Harvard University; Ken Bell, Concord Consortium; Sharona Levy, University of Haifa; Trudi Lord, Concord Consortium; Jaclyn Scobo (intern), Northeastern University. mac.concord.org; IERI #0115699; www.concord.org; http://ccl.northwestern.edu

  17. Design of Activities, Scaffolding, & Research are based on… model-based learning (Gobert & Buckley, 2000) as well as other literature: • cognitive and perceptual affordances of learning with technology-based representations (Gobert, 2005; Larkin & Simon, 1987) • progressive model-building (White & Frederiksen, 1990; Raghavan & Glaser, 1995) • students’ difficulties in learning with models (Sweller et al., 1990; Gobert, 1994; Lowe, 1989; Head, 1984). Thus, scaffolding is designed to… • guide search, support perceptual cues, and support inference-making from perceptual cues (Larkin & Simon, 1987) • elicit prior knowledge, support integration with new knowledge, and support reification & reflection of knowledge. Theory driving our analyses is based on… • expert problem-solving for estimating solutions (Paige & Simon, 1966) • experts’ vs. novices’ search and knowledge acquisition strategies (Gobert, 1994, 1999; Thorndyke & Stasz, 1980).

  18. Model-Based Learning in situ. Intrinsic Teacher Factors: epistemology of models (adapted from Grosslight et al., 1991), teaching experience, background (adapted from Fishman, 1999). Intrinsic Learner Factors: epistemology of models (SUMS; Treagust et al., 2002), because students’ epistemologies influence both knowledge integration (Songer & Linn, 1991) and model-based reasoning (Gobert & Discenna, 1997). Classroom Factors: implementation of MAC activity use (logged), teacher practices (reported via Classroom Communique).

  19. What is the model for the pedagogical support of the practice? What kinds of designs put this model into effect? Scaffolds from the MAC project include: • Representational Competence: view and understand a representation or representational features of the domain. • Model pieces acquisition: understand & reason with pieces of models (spatial, causal, functional, temporal). • Model pieces integration: combine model components in order to come to a deeper understanding of how they work together as a causal system. • Model based reasoning: reasoning with models or pieces of models. • Reconstruct, Reify, & Reflect: reify knowledge and transfer it to another context or level of understanding.

  20. Technology & Affordances for Research & Assessment with Models. Log files on students’ interactions with models capture students’: • data on duration and sequence • actions and choices with models • what info or help they seek • responses to questions. Embedded & performance assessments with models & questions generate a profile for students at “pivotal” points in the curriculum. Implementation data show which activities were used and the pattern of use (consecutive or intermittent days) at the classroom & student level. Finer-grained log data can be used to measure students’ systematicity and inquiry skills, and to test for interactions with prior knowledge & epistemology. These data are used to derive student reports: formative assessments, summative assessments, performance assessments.
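The log-file measures listed above (duration, sequence of actions, choices) can be sketched as a small aggregation step. This is a hedged illustration only: the event tuple format and action names are hypothetical, since the actual MAC log schema is not shown here.

```python
# Hypothetical raw log: (student_id, seconds_into_session, action).
# The real MAC log format is not specified in this presentation.
from collections import defaultdict

events = [
    ("s01", 0,  "open_model"),
    ("s01", 12, "set_slider"),
    ("s01", 30, "run_model"),
    ("s01", 95, "answer_question"),
    ("s02", 0,  "open_model"),
    ("s02", 4,  "seek_help"),
]

# Aggregate per-student duration (time of last logged event) and the
# ordered action sequence -- two of the measures named on this slide.
profiles = defaultdict(lambda: {"duration": 0, "sequence": []})
for student, t, action in events:
    p = profiles[student]
    p["duration"] = max(p["duration"], t)
    p["sequence"].append(action)

print(profiles["s01"]["duration"])
print(profiles["s02"]["sequence"])
```

Per-student records like these are the raw material from which the finer-grained systematicity and inquiry-skill measures described later in the deck could be computed.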

  21. Drill down to performance assessment with logs. Currently we are focusing on log files as indices of: • domain-specific model-based reasoning, by investigating “hot spots” • domain-general inquiry skills (“DoGI” spots, similar to NSES inquiry strands). This allows us to assess inquiry development both within (hot spots) and across domains (DoGI spots), to assess transfer from one domain to another, and to assess how a student’s inquiry skills are progressing “independent” of content learning. Since our activities are enacted over multiple days and in three domains, we avoid the problems faced by earlier studies of inquiry in which there were not enough data to get at students’ inquiry skills (Shavelson et al., 1999).

  22. Inquiry “Hot Spots”: tasks or parts of tasks that contain multiple components of model-based inquiry; these, by definition, require deep reasoning. MAC supports 5 strands of model-based inquiry. These are more specific than the NSES (1996) inquiry standards, which are not specific to current technology-based learning, nor are the NSES strands specific to modeling tasks. • Representational competence: view and understand a representation or features of the domain. • Model pieces acquisition: understand & reason with pieces of models (spatial, causal, functional, temporal). • Model pieces integration: combine model components in order to come to a deeper understanding of how they work together as a causal system. • Model-based reasoning: reasoning with models or pieces of models. • Reconstruct, Reify, & Reflect: reify knowledge and transfer it to another context or level of understanding. Fine-grained analysis, one hot spot at a time, is necessary in order for us to code the various process variables we plan to aggregate and focus on.

  23. Hot spot from Collisions task 5: Student sets mass of two balls • The challenge: adjust the masses of the two balls to make the orange ball move as fast as possible after the collision.
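The winning setting for this challenge can be checked with a short sketch, under the assumption (ours, not stated in the deck) that the Collisions model simulates a one-dimensional elastic collision with the orange ball initially at rest, so the orange ball's post-collision speed is 2·m_blue/(m_blue + m_orange)·v_blue. The function name and the 1.0–11.0 slider range (taken from the logged trials shown later) are illustrative.

```python
# Assumed physics: 1-D elastic collision, orange ball initially at rest.
# Post-collision speed of the (initially resting) orange ball:
#   v_orange' = 2 * m_blue / (m_blue + m_orange) * v_blue
def orange_speed_after(m_blue, m_orange, v_blue=1.0):
    return 2.0 * m_blue / (m_blue + m_orange) * v_blue

# Search the slider range seen in the student logs (1.0 to 11.0)
# for the mass pair that maximizes the orange ball's speed.
masses = [float(m) for m in range(1, 12)]
best = max(((mb, mo) for mb in masses for mo in masses),
           key=lambda pair: orange_speed_after(*pair))
print(best)  # (11.0, 1.0): heaviest blue ball, lightest orange ball
```

Under this assumption, the speed grows with the blue mass and shrinks with the orange mass, which is why (11.0, 1.0) appears as the correct answer in the student logs on the following slides.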

  24. Strategies for Inquiry • Preliminary analysis based on human coding identified 2 different inquiry patterns: • haphazard • systematic. (Also, there are students who got it correct on the first trial, sometimes with an explicit test.) These are consistent with the literature: ~ experts’ vs. novices’ search and knowledge acquisition strategies (Thorndyke & Stasz, 1980; Gobert, 1994, 1999) ~ expert problem-solving for estimating solutions (Paige & Simon, 1966). Examples follow…

  25. Haphazard strategy: this student obtained the correct answer (11.0, 1.0) on trials 2, 10, (& 15) but did not know it! Student 12116 made 15 trials (Blue ball, Orange ball): (11.0, 11.0), (11.0, 1.0), (11.0, 3.0), (11.0, 4.0), (1.0, 1.0), (1.0, 11.0), (8.0, 7.0), (11.0, 2.0), (11.0, 11.0), (11.0, 1.0), (11.0, 5.0), (3.0, 5.0), (1.0, 5.0), (1.0, 8.0), (11.0, 1.0).

  26. Systematic strategy, e.g., vary one ball at a time (a good strategy in the absence of prior knowledge). Student 18115 had a plan (Blue ball, Orange ball): (11.0, 11.0), (5.0, 11.0), (10.0, 11.0), (11.0, 1.0).
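These two patterns can also be detected automatically from the logged trials. A minimal sketch, borrowing the repeated-trial criterion that the Monohybrid scoring (slide 31) uses as an indication of haphazard behavior; the function name and data layout are ours, and this is one codable proxy, not the project's actual coding scheme.

```python
# One machine-codable proxy for haphazard inquiry: re-running an exact
# setting already tried suggests the student is not tracking trials.
def classify_trials(trials):
    """trials: list of (blue_mass, orange_mass) pairs, in log order."""
    return "haphazard" if len(set(trials)) < len(trials) else "systematic"

# Trial sequences transcribed from the two example slides.
student_12116 = [(11.0, 11.0), (11.0, 1.0), (11.0, 3.0), (11.0, 4.0),
                 (1.0, 1.0), (1.0, 11.0), (8.0, 7.0), (11.0, 2.0),
                 (11.0, 11.0), (11.0, 1.0), (11.0, 5.0), (3.0, 5.0),
                 (1.0, 5.0), (1.0, 8.0), (11.0, 1.0)]
student_18115 = [(11.0, 11.0), (5.0, 11.0), (10.0, 11.0), (11.0, 1.0)]

print(classify_trials(student_12116))  # haphazard (repeats settings)
print(classify_trials(student_18115))  # systematic (no repeats)
```

On these two logs the criterion agrees with the human coding: student 12116 repeated (11.0, 11.0) and (11.0, 1.0) without recognizing success, while student 18115 never repeated a setting.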

  27. Another Hot Spot from Dynamica: “What settings cause the blue ball to stop when it collides with the orange ball?” • Track students’ iterations of this as an index of systematicity in inquiry. Interface elements: input sliders, numerical data from the run, constructed text response.

  28. CC’s approach for Task 3: additional categories for coding & 4 students’ data. Additional categories (in addition to CMU’s) are the % of trials in which the student: ~ set the masses as equal ~ set the masses as extremes ~ moved closer to the goal or further from the goal ~ made goal flips.
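Two of these categories can be sketched as per-student percentages. The precise definitions below are our assumptions, since the slide only names the categories; the slider endpoints (1.0 and 11.0) are taken from the logged trials shown earlier.

```python
# Assumed slider range, based on the mass values seen in the logs.
SLIDER_MIN, SLIDER_MAX = 1.0, 11.0

def pct_equal(trials):
    """% of trials in which the two masses were set equal."""
    return 100.0 * sum(b == o for b, o in trials) / len(trials)

def pct_extremes(trials):
    """% of trials in which both masses sat at a slider endpoint
    (one plausible reading of 'set the masses as extremes')."""
    ends = {SLIDER_MIN, SLIDER_MAX}
    return 100.0 * sum(b in ends and o in ends for b, o in trials) / len(trials)

trials = [(11.0, 11.0), (5.0, 11.0), (10.0, 11.0), (11.0, 1.0)]
print(pct_equal(trials))     # 25.0
print(pct_extremes(trials))  # 50.0
```

The closer-to-goal / further-from-goal and goal-flip categories would additionally need each trial's outcome relative to the target, which the slide does not specify, so they are omitted here.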

  29. Hot Spot from BioLogica: Monohybrid (Task 3): produce only 2-legged offspring. Requires changing both Legs alleles of one parent. Interface elements: arrow tool, cross tool, snip tool, chromosome tool, dragon genome chart, Punnett square pad.

  30. Monohybrid Task 3 Subtasks & Data collected • Predict whether a pair of dragons can have only 2-legged offspring • Multiple choice question • Describe the necessary parental genotypes. • Full text response • Change alleles of one parent to homozygous recessive • Cross parents • Success = making right cross • Number of crosses made • List of crosses made

  31. Data: Monohybrid performances • Student performance is scored by computer based on: • prediction • success • number of attempts • whether they repeated any crosses (an indication of haphazard behavior). • Performances can be grouped into: • systematic & correct • systematic & incorrect • haphazard & correct • haphazard & incorrect.
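The grouping described above can be sketched directly from the scored variables. The function name and the cross encoding are hypothetical; the repeated-cross criterion for haphazard behavior is the one stated on this slide.

```python
def group_performance(succeeded, crosses):
    """Assign one of the four performance groups.
    succeeded: whether the student made the right cross.
    crosses: list of cross identifiers, in the order made; a repeated
    cross is taken as the indication of haphazard behavior."""
    style = "haphazard" if len(set(crosses)) < len(crosses) else "systematic"
    outcome = "correct" if succeeded else "incorrect"
    return f"{style} & {outcome}"

# Hypothetical examples (cross notation is illustrative only):
print(group_performance(True, ["Ll x ll", "ll x ll"]))   # systematic & correct
print(group_performance(False, ["Ll x Ll", "Ll x Ll"]))  # haphazard & incorrect
```

Crossing the two binary variables this way yields exactly the four groups listed on the slide.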

  32. Systematic vs. haphazard performance and pre/post gains. Data are based on 649 students in 10 member schools; 54.2% were in ‘regular’ classes. An ANCOVA with pre-test score as covariate indicates the pre-test is significant (p ≤ .001), as is the two-level predictor variable (systematic versus haphazard). Together, the pre-test scores and the systematicity variable account for 25.2% of the variance in the post-test scores. Students who are systematic at this task outperform, on the post-test, students who are not, irrespective of whether they succeeded at the inquiry task. Thus, systematic inquiry is facilitating knowledge building (as measured by the post-test).
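The analysis reported here can be illustrated with synthetic data (the real MAC dataset is not shown): an ANCOVA of this form is equivalent to regressing post-test score on pre-test score plus a systematic/haphazard dummy, and the variance-accounted-for figure corresponds to the full model's R². All numbers below are fabricated for illustration.

```python
import numpy as np

# Synthetic stand-in for the pre/post data (not the real MAC sample).
rng = np.random.default_rng(0)
n = 100
pre = rng.uniform(0, 20, n)          # pre-test scores (covariate)
systematic = rng.integers(0, 2, n)   # 1 = systematic, 0 = haphazard
post = 5 + 0.4 * pre + 3.0 * systematic + rng.normal(0, 4, n)

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit of y on design matrix X
    (X must include an intercept column)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

ones = np.ones(n)
r2_covariate_only = r_squared(np.column_stack([ones, pre]), post)
r2_full = r_squared(np.column_stack([ones, pre, systematic]), post)
print(round(r2_covariate_only, 3), round(r2_full, 3))
```

Adding the systematicity dummy can only increase R² (the models are nested); the size of that increase is what the ANCOVA's significance test for the two-level predictor evaluates.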

  33. Overview of Data Analysis with Hot Spots. We are aggregating hot spots and testing their relationship to: • conceptual learning measurements, i.e., pre-post content tests • measures of students’ epistemologies of models and views of science, since students’ epistemologies influence learning (Songer & Linn, 1991; Gobert & Discenna, 1997). With these data, we can: • track students’ systematicity in learning with models as one important facet of inquiry skills and conceptual learning. To us, inquiry skills co-evolve with content learning, but each can be measured separately (sort of). • test for development of inquiry strategies across time and across domains ~ complicated by task difficulty increasing over time ~ complicated by the co-evolution of domain knowledge and inquiry strategies ~ complicated by the likelihood that students build knowledge in small conceptual pieces (e.g., about acceleration or velocity). In the future, using log files we seek to identify at-risk students, i.e., students whose inquiry strategies are buggy.

  34. Domain-General Inquiry Spots (“DoGI” spots) 1. Making predictions with models 2. Interpreting data from a representation (e.g., model/graph, pedigree, etc.) 3. Making explanations (about models, etc.) 4. Mathematizing with models: filling in an equation, solving an equation, reasoning with an equation 5. Designing and/or conducting an experiment with models. Thus, if a student can do these types of tasks, they are doing model-based inquiry.
