1. An analysis of generative dialogue patterns across interactive learning environments: Explanation, elaboration, and co-construction
Robert G. M. Hausmann
Pittsburgh Science of Learning Center (PSLC)
Learning Research and Development Center
University of Pittsburgh
2. Top-level Goal
Three paths to the same outcome
Try to integrate the social and the cognitive. I know virtually nothing about the affective, but that may be where lessons from the gaming community play the most important part.
1. The goal is either to explain how a strong student learned the material,
2. or to design an intervention that helps a student traverse the trajectory from non-understanding to robust understanding.
3. Outline
Introduction
Thesis
Definitions
Methodology
Evidence
Individual learning
Human tutoring (novice & expert)
Peer collaboration
Observing Tutorial Dialogs Collaboratively
Discussion
Integration with serious games
4. Thesis
Part 1: There are several paths toward learning.
Some paths are better suited for the acquisition of different representations (Nokes & Ohlsson, 2005).
Part 2: Generative interactions produce deep learning.
Increasing the probability that a generative interaction occurs should increase the probability of robust learning.
Part 3: Different interactive learning environments differentially support generative interactions.
Learning with understanding vs. performance orientation (Schauble, 1990; Schauble & Glaser, 1990). Constructive interactions are DEEP.
5. Definitions
Learning
Revision of a mental model
Application of conceptual knowledge
Reduction of errors during the acquisition of procedural knowledge
Interaction
Dialog: situation-relevant response that occurs between two or more agents (human or computer).
Monolog: statements uttered out loud that reveal the individual’s understanding processes.
Generative
Produce inferences (e.g., the lungs are the site of oxygenation)
Apply knowledge to a problem (e.g., applying Newton’s second law)
6. Methodology: high-level description
Collect & transcribe a corpus of learning interactions
Categorize statements (Chi, 1997)
Assess learning at multiple levels of depth
Correlate statements with shallow and deep learning
Follow-up study: experimentally manipulate the interaction type (return to step 1)
TRANS: How do we go about garnering evidence for these hypotheses?
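Step 4 of this pipeline (correlating coded statements with learning) can be sketched in a few lines of Python. The per-student statement counts and gain scores below are hypothetical, purely to illustrate the computation:

```python
# Sketch of step 4 of the methodology: correlating per-student counts of
# coded statements with learning gains. All data here are hypothetical,
# not values from the studies reported in this talk.
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical counts of self-explanations and normalized gain scores.
self_explanations = [2, 5, 7, 3, 9, 12, 4, 8]
deep_gain = [0.10, 0.25, 0.40, 0.15, 0.55, 0.70, 0.20, 0.45]

r = pearson_r(self_explanations, deep_gain)
print(round(r, 2))
```

In a real analysis the counts would come from the coded transcripts (Chi, 1997) and the gains from the pre/post assessments; the correlation itself is the same computation.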
7. Study 1: Self-explaining vs. Paraphrasing
Procedure
Domain: Circulatory system
Pretest
Prompting intervention (41.1 min.)
Posttest
Participants
University of Pittsburgh undergraduates (N = 40)
Course credit
Research Questions
Can a computer interface use generic prompts to inspire students to self-explain?
If so, what is the effect on learning?
TRANS: Let me demonstrate how this methodology can be applied to a real example.
8. Here is a screenshot of our system: the automatic prompting system (APS)
Features:
Presents text
Provides generic prompts (i.e., no content)
A field for the student to type their thoughts, notes, and, hopefully, their free-form self-explanations
9. Results: Self-explanation Frequency (Exp. 2)
Paraphrasing dominates the types of coded statements, F(2, 57) = 19.345, p < .001
TRANS: How did it affect learning?
10. Results: Correlations with Learning (Exp. 2)
Verbatim: explicitly stated in the text
Integrated: integrates the text with prior material or background knowledge
Strong self-explainers: the upper quartile (M = 12.00, SD = 2.55) produced more self-explanations than the lower three quartiles (M = 4.67, SD = 2.38), t(18) = 5.87, p < .0001
11. Study 2: Coverage vs. Generation
Method
Domain: Electrostatics
Procedure: Alternate between problem-solving and example-studying (110 min.)
Design: Example (complete vs. incomplete) x Study Strategy (self-explain vs. paraphrase)
Participants
U.S. Naval Academy midshipmen (N = 104)
Course credit
Research Questions
Does learning depend on the type of processing or the completeness of the examples?
Does prompting for self-explaining also work in the classroom?
More direct evidence (i.e., not correlational): experimentally manipulate SE vs. paraphrase prompting
Issue from the last study: are self-explanations more complete?
The computer prompted at the end of each “step,” as defined by the experimenter (some intelligence)
If we move out of the lab and into the classroom, do the results still hold?
12. Is Andes a serious game? No, it is a full-fledged ITS.
Point out: bottom-out hints
13. Method: Timeline
I wish I had a picture of this, but we put headphones with a noise-cancellation microphone on 24 mids in one laboratory.
Each problem simultaneously posttests the previous concepts & pretests the next set of concepts.
14. Results: Bottom-out Help
ME: Paraphrase > Self-explain, F(3, 100) = 14.57, p < .001
15. Results: Assistance Score
Assistance score = hints + errors
Average assistance score per mid per 4 problems
ME: Paraphrase > Self-explain, F(1, 100) = 3.70, p < .06
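The assistance-score measure defined above can be sketched as follows. The student ids and log records are hypothetical, invented only to show the computation:

```python
# Sketch of the assistance-score measure from this slide:
# assistance = hints requested + errors made, averaged per student
# over blocks of four problems. All ids and log records are hypothetical.

def assistance_scores(log, block_size=4):
    """Map student id -> mean (hints + errors) per block of problems.

    `log` maps student id -> list of (hints, errors) tuples, one per problem.
    """
    scores = {}
    for student, problems in log.items():
        totals = [hints + errors for hints, errors in problems]
        # Average over consecutive blocks of `block_size` problems.
        blocks = [sum(chunk) / len(chunk)
                  for chunk in (totals[i:i + block_size]
                                for i in range(0, len(totals), block_size))]
        scores[student] = blocks
    return scores

log = {
    "mid_01": [(2, 1), (0, 3), (1, 0), (1, 1), (0, 0), (2, 2), (1, 1), (0, 1)],
    "mid_02": [(4, 2), (3, 1), (2, 2), (1, 0), (3, 3), (2, 1), (1, 2), (2, 0)],
}
print(assistance_scores(log))
```

Lower scores mean the student needed less assistance; the slide's result is that self-explainers needed marginally less than paraphrasers.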
16. Study 3: Novice, Human Tutoring
Procedure
Domain: circulatory system
Pretest (w/ textbook)
Intervention (120 min.)
Posttest
Participants
Tutors: Nursing Students (N = 11)
Tutee: Eighth-Grade Students (N = 11)
Paid volunteers
Research Questions
How do novice tutors naturally interact with students?
Can tutors be trained to interact in a specific way, and what impact do alternative tutorial dialogs have on learning?
TRANS: In the previous studies, students explained to themselves. That sets up a baseline. What about interacting with a more knowledgeable person (i.e., a tutor)?
17. Results: Learning
Knowledge pieces:
Study 1: Pretest 13%; Posttest 46%, p < .001
Study 2: Pretest 22%; Posttest 45%, p < .001
Mental model:
Study 1: Pretest 0%; Posttest 73%
Study 2: Pretest 0%; Posttest 64%
Students learned from pre- to post-test:
At the knowledge-piece level
At the mental-model level
18. Results: Learning
Question-answering:
Notice two things:
Lack of difference between Study 1 and Study 2
As problems get harder, performance decreases
Shallow learning: Categories 1+2
Deep learning: Categories 3+4
19. Results (Exp. 1): Types of tutor moves and student responses
Correlation:
Shallow learning: student responses to scaffolding; tutor explanations
Deep learning: reflection
20. Results (Exp. 2): Types of tutor moves and student responses
Shallow follow-ups: 54 => 67
Deep follow-ups: 19 => 29
Non-constructive: 48 => 8
Increase in deep scaffolding episodes: Mtut = 0.91, Mprompt = 3.91, t(20) = 52.52, p < .05
Point: instructing the tutor to interact in a specific way can have an impact on the student’s contributions.
21. Study 4: Comparison of Multiple Interactive Learning Environments
Procedure
Domain: Newtonian Mechanics
Pretest (w/ textbook)
Instructional Intervention (next 5 slides)
Posttest (w/o textbook)
Participants
University of Pittsburgh undergraduates (N = 70)
Paid volunteers
Research Questions
What types of interactions are related to learning from tutoring and collaboratively observing tutoring?
Why do peers learn from collaboration?
TRANS: Can we design an effective intervention that harnesses the effects of tutoring and collaboration?
22. Experimental Design Intervention 1: Tutoring
Learning Resource: Expert human tutor
One student (n = 10; videotaped)
23. Experimental Design Intervention 2: Observing Collaboratively
Learning Resource: Peer & Videotape
Two students (n = 10; yoked design)
24. Experimental Design Intervention 3: Observing Alone
Learning Resource: Videotape
One student (n = 10; yoked design)
25. Experimental Design Intervention 4: Collaborating
Learning Resource: Peer & Text
Two students (n = 10)
26. Experimental Design Intervention 5: Studying Alone
Learning Resource: Text
One student (n = 10)
27. Results: Condition Differences
Collaborating, Observing Collaboratively, & Tutoring were different from Studying Alone & Observing Alone, but not from each other.
The largest gains were for Tutoring and Observing Collaboratively.
Condition, controlling for pretest: F(4, 64) = 2.596, p = .044
Tutoring (d = .815) = Observing Collaboratively (d = .613) = Collaborating (d = .326)
Top three > Observing Alone, F(1, 66) = 5.111, p = .026, d = .532
Top three > Studying Alone, F(1, 66) = 6.448, p = .044, d = .522
TRANS: The question is why?
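The condition contrasts above are reported as Cohen's d effect sizes. A minimal sketch of that computation for two independent groups, using the pooled standard deviation and hypothetical gain scores:

```python
# Cohen's d for two independent groups: mean difference divided by the
# pooled standard deviation. The gain scores below are hypothetical,
# not data from the study on this slide.
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Cohen's d effect size for two independent samples."""
    na, nb = len(group_a), len(group_b)
    sa, sb = stdev(group_a), stdev(group_b)
    pooled = (((na - 1) * sa**2 + (nb - 1) * sb**2) / (na + nb - 2)) ** 0.5
    return (mean(group_a) - mean(group_b)) / pooled

tutored = [0.62, 0.71, 0.55, 0.80, 0.66]   # hypothetical gain scores
studied = [0.41, 0.50, 0.38, 0.57, 0.44]
print(round(cohens_d(tutored, studied), 2))
```

By convention, d around .2 is a small effect, .5 medium, and .8 large, which is why the tutoring contrast (d = .815) reads as a large effect.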
28. Results: Interaction Analysis
Correlations are with deep learning
Hypothesis: why is scaffolding correlated with the observers’ learning?
Higher coherence
The observer can answer along, even though the explanation is not tailored to his or her confusion
29. Results: Condition Differences
30. Results: Collaborative Dyads
TRANS: Why did the collaboration condition demonstrate a significant gain from pre- to post-test (26%, controlling for pretest knowledge, p = .002)?
Three possible reasons: explaining the material to the peer, jointly constructing a solution, and figuring it out for yourself while the partner listens.
We found evidence for all three mechanisms.
31. Results: Collaborative Dyads
TRANS: Decompose the co-construction episodes into two sub-categories:
Elaborative construction (5/12 = 42%); Gain: 3/5 = 60%
Critical construction (7/12 = 58%); Gain: 5/7 = 71%
32. Results: Collaborative Dyads
TRANS: Did the speakers and listeners learn equally from reciprocal peer tutoring and self-directed explaining?
33. Results: Collaborative Dyads
Peer tutoring listeners: Gain 5/11 = 45.4%
Peer tutoring speakers: Reapply 9/11 = 82%
SE speakers: Gain 12/17 = 71%
SE listeners: Gain 5/17 = 29%
34. Study 5: Interaction Training
Procedure
Domain: Conceptual Engineering (bridge design)
Pretest (w/ textbook)
Intervention (5 min.)
Problem-solving Task (30 min.)
Posttest
Participants
University of Pittsburgh undergraduates (N = 136)
Course credit
Research Questions
Can undergraduate dyads be trained to interact effectively (i.e., co-construct)?
What effect do certain dialog types have on problem solving and learning?
TRANS: Correlation vs. causation: experimentally manipulate the dialogue
35. Serious Game? Yes, students learn a bit about the effect of materials and configuration on cost and strength (and the tradeoffs).
39. Results: Manipulation Check
The elaboration condition generated marginally fewer clarification questions than the control condition, F(1, 22) = 3.19, p = .09, d = .79.
There was a trend for the elaborative condition to generate more elaborations than the control condition, F(1, 22) = 2.69, p = .12, d = .73.
40. Results: Problem Solving
There was a marginal effect of condition on optimization score, F(2, 75) = 2.60, p = .08. Post-hoc analyses revealed a reliable difference between the elaborative dyads and the control dyads, d = .61.
41. Results: Learning
A higher score for the elaborative dyads than the control dyads, F(2, 133) = 3.08, p < .05
ADD: We tried to train an evaluative condition, but they were no different from control in production.
42. Summary
Individual Learning (Studies 1 & 2)
Paraphrasing => shallow learning
Self-explaining => deep learning
Human Tutoring (Studies 3 & 4)
Listening to tutor explain => shallow learning
Receiving scaffolding => deep learning
Reflective comments => deep learning
Peer collaboration (Studies 4 & 5)
Listening to peer explain => shallow learning
Giving an explanation to peer => deep learning
Co-constructing knowledge => deep learning
Observing Tutoring Collaboratively (Study 4)
Observing the tutor explain is not correlated with deep learning
Observing a student receive scaffolding => deep learning
At best, paraphrasing correlates with shallow learning
Even when controlling for content, self-explaining is a better interaction with the material than paraphrasing, even though paraphrasing may be more prominent.
43. Integration with Serious Games
What is the implication of these results for the design of serious games?
How can a game inspire explanation, elaboration, or even co-construction?
Can a system know when to prompt?
Can peers prompt each other?
Can the listener be motivated to be more generative while listening to an explanation?
44. Acknowledgements
Funding Agencies
Support @ LRDC
Gary Wild
Shari Kubitz
Eric Fussenegger
45. References
Study 1: Hausmann, R. G. M., & Chi, M. T. H. (2002). Can a computer interface support self-explaining? Cognitive Technology, 7(1), 4-15.
Study 2: Hausmann, R. G. M., & VanLehn, K. (in prep). The effect of generation on robust learning.
Study 3: Chi, M. T. H., Siler, S., Jeong, H., Yamauchi, T., & Hausmann, R. G. (2001). Learning from human tutoring. Cognitive Science, 25(4), 471-533.
Study 4a: Chi, M. T. H., Roy, M., & Hausmann, R. G. M. (accepted). Observing tutorial dialogues collaboratively: Insights about tutoring effectiveness from vicarious learning. Cognitive Science, x, xxx-xxx.
Study 4b: Hausmann, R. G. M., Chi, M. T. H., & Roy, M. (2004). Learning from collaborative problem solving: An analysis of three hypothesized mechanisms. 26th Annual Meeting of the Cognitive Science Society, Chicago, IL.
Study 5: Hausmann, R. G. M. (2006). Why do elaborative dialogs lead to effective problem solving and deep learning? Poster presented at the 28th Annual Meeting of the Cognitive Science Society, Vancouver, Canada.
46. Inferential Mechanisms
Simulation of a mental model (Norman, 1983)
Category membership (Chi, Hutchinson, & Robin, 1989)
Analogical reasoning (Markman, 1997)
Integration of the situation and text model (Graesser, Singer, & Trabasso, 1994)
Logical reasoning (Rips, 1990)
Self-explanation (Chi, Bassok, Lewis, Reimann, & Glaser, 1989)
What are some of the cognitive factors that lead to robust learning?
47. Cognitive + Social Factors
Different types of interaction lead to different representations:
Non-generative interactions lead to shallow learning.
Definition: does not modify material in any meaningful way.
Generative interactions lead to deep learning.
Definition: significantly modifies the material in a meaningful way.