This research explores the challenges of massive open online courses (MOOCs), focusing on improving student retention and engagement through collaborative learning techniques. By implementing structured group activities, peer discussions, and active participation, studies indicate significant boosts in comprehension and performance. The analysis presents data from various experiments demonstrating that interaction among students leads to better learning outcomes, particularly when group work is thoughtfully designed to promote positive interdependence and accountability. The findings suggest practical strategies for enhancing peer learning in large-scale online courses.
Towards Collaborative Learning @ Scale
Marti A. Hearst, UC Berkeley
Joint work with Bjorn Hartmann, Armando Fox, Derrick Coetzee, Taek Lim
Sponsored in part by a Google Social Interactions Grant
MOOC Drawbacks
• Retention
• Learning (?)
• Isolation (?)
Collaborative Learning
• “Quick Thinks”
• Structured Groups
Active & Peer Learning: The Evidence (Large Courses)
• Pausing frequently during lecture for 2-minute discussions leads to better comprehension (1-2 grade points higher) [Ruhl et al., Journal of Teacher Education, 1987]
• A meta-analysis over 60 physics courses and 6,500 students found improvements of almost 2 standard deviations [Hake, Am. J. Physics, 1998]
• A controlled experiment with > 500 physics students found improved attendance, engagement, and more than twice the learning [Deslauriers et al., Science, 2011]
Active & Peer Learning: The Evidence (Large Courses)
• Even if no one in the group knows the answer, discussing improves results (genetics) [Smith et al., Science 323, 2 Jan 2009]
Peer Learning Example
• From Deslauriers et al.:
• Pre-class reading assignments and quizzes
• (CQ) In-class clicker questions with student-student discussion
• (GT) Small-group active learning tasks; turn in an individual written response
• (IF) Targeted in-class instructor feedback
• Typical schedule for a 50-min class:
• CQ1, 2 min; IF, 4 min
• CQ2, 2 min; IF, 4 min; CQ2 (continued), 3 min; IF, 5 min; revote CQ2, 1 min
• CQ3, 3 min; IF, 6 min
• GT1, 6 min; IF with a demonstration, 6 min; GT1 (continued), 4 min; IF, 3 min
Results for Controlled Experiment
• From Deslauriers et al., for a one-week intervention
Peer Learning Core Ideas
• Students learn better by explaining to others
• Extended group work must be structured
• Must promote both:
• Positive interdependence
• Individual accountability
• Group makeup:
• Best if heterogeneous
• Groups can change frequently
What Can Be Improved? More short assignments!
Project Goal: MOOCs + Peer Learning
How to do it?
First Step: Try MTurk
• Hypothesis:
• People in groups will get answers right more often than those working alone
• Expectations:
• The chats will be on topic
• People will try to solve the problems
First Step: Try MTurk
• Issues?
• How to motivate the workers?
• How to coordinate the workers?
• What kinds of questions to use?
• How to structure the conversation?
How To Motivate?
• Experimental manipulation: if the entire group gets the right answer, everyone gets a bonus
• Control group: no mention of a bonus (no incentive for helping others)
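A minimal sketch of the all-or-nothing bonus rule described above; the function name and dict shape are illustrative assumptions, not from the talk.

```python
def award_bonus(answers: dict[str, bool]) -> set[str]:
    """Return the worker ids to pay: everyone in the group, but only
    if the whole group answered correctly (positive interdependence)."""
    return set(answers) if all(answers.values()) else set()

# One wrong answer means nobody in the group earns the bonus.
print(award_bonus({"w1": True, "w2": True, "w3": False}))  # set()
print(award_bonus({"w1": True, "w2": True}))               # both workers paid
```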
System Workflow
• Real-time crowdsourcing: Lasecki et al., CSCW 2013; Bernstein et al., UIST 2011
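The coordination step is not spelled out on the slide; below is a hypothetical sketch, assuming arriving workers are batched into chat groups of up to 3, with a wait window after which stragglers proceed with whoever has arrived (GROUP_SIZE, MAX_WAIT, and all names are assumptions). Under those assumptions it would produce the mix of solo, 2-person, and 3-person sessions reported later.

```python
from dataclasses import dataclass

GROUP_SIZE = 3    # target discussion size (assumed)
MAX_WAIT = 60.0   # seconds a worker waits for partners (assumed)

@dataclass
class Worker:
    worker_id: str
    arrived_at: float  # seconds since experiment start

def form_groups(workers: list[Worker]) -> list[list[Worker]]:
    """Greedily batch workers by arrival time into groups of GROUP_SIZE,
    closing a group early once its oldest member has waited MAX_WAIT."""
    groups: list[list[Worker]] = []
    pending: list[Worker] = []
    for w in sorted(workers, key=lambda w: w.arrived_at):
        # Close the pending group if its first member has waited too long.
        if pending and w.arrived_at - pending[0].arrived_at > MAX_WAIT:
            groups.append(pending)
            pending = []
        pending.append(w)
        if len(pending) == GROUP_SIZE:
            groups.append(pending)
            pending = []
    if pending:
        groups.append(pending)
    return groups

demo = [Worker("w1", 0.0), Worker("w2", 10.0), Worker("w3", 95.0)]
print(form_groups(demo))  # w1+w2 time out as a pair; w3 proceeds solo
```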
Interaction: Small-Group Chat
• The computer-mediated communication (CMC) literature suggests the affordances are appropriate
• Video on next slide
Experimental Setup
• 226 worker sessions, lasting 12.8 minutes on average (15.0 minutes excluding solo workers): 169 solo workers, 25 discussions of size 2, and 73 discussions of size 3
• Each session consisted of 2 questions: 2 minutes alone, 5 minutes in discussion, and 20 seconds for the final answer choice
• 56% of the 452 attempts to answer questions were answered correctly
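The session structure above can be read as a simple timing configuration; a sketch follows (the dict layout is an assumption, only the numbers come from the slide).

```python
# Session timing from this slide, expressed as a config a session
# controller might consume (the structure itself is an assumption).
SESSION = {
    "questions_per_session": 2,
    "phases_per_question": [
        ("solo_attempt", 120),      # 2 minutes working alone
        ("group_discussion", 300),  # 5 minutes of chat (skipped by solo workers)
        ("final_answer", 20),       # 20 seconds to commit a final choice
    ],
}

# Timed portion of a full group session: 2 * (120 + 300 + 20) s,
# roughly in line with the ~15-minute group sessions reported above.
total = SESSION["questions_per_session"] * sum(
    t for _, t in SESSION["phases_per_question"]
)
print(total / 60)  # 14.67 minutes
```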
Results
• All hypotheses confirmed:
• Engaging in discussion leads to more correct answers
• The bonus incentive leads to more correct changed answers
• The participants have substantive discussions
• Of interest, but not a result: more discussion is correlated with more correct answers
Results
• 138 workers (61%) kept their original choices unchanged on both questions
• 74 (33%) changed one answer after the discussion
• 14 (6%) changed both
• 50% of workers who changed their answers improved their score; 18% lowered their score
• 86% of workers who changed both answers improved their score
Results
• Engaging in discussion leads to more correct answers
• The mean percentage of correct responses is higher in chatrooms with more than one student (Fisher’s exact test, p < 0.01)
Results
• Bonus incentive leads to more correct answers:
• In the control condition, participants changed 33 out of 121 answers (27%); in the bonus condition, they changed 44 out of 139 (32%). No significant difference (Fisher’s exact test, two-tailed p = 0.50)
• However, 14 of the changed answers (12% of the 121 control answers) went from incorrect to correct, versus 31 (22% of 139) in the bonus condition, a significant difference (Fisher’s exact test, two-tailed p < 0.04)
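A minimal sketch reproducing these two Fisher’s exact tests from the counts on this slide, assuming SciPy is available.

```python
from scipy.stats import fisher_exact

# Did workers change answers at different rates? (33/121 vs. 44/139)
_, p_change = fisher_exact([[33, 121 - 33], [44, 139 - 44]],
                           alternative="two-sided")
print(f"changed-answer rate: p = {p_change:.2f}")   # ~0.50 per the slide

# Did changes go incorrect -> correct more often with the bonus?
# (14/121 vs. 31/139)
_, p_correct = fisher_exact([[14, 121 - 14], [31, 139 - 31]],
                            alternative="two-sided")
print(f"incorrect-to-correct: p = {p_correct:.3f}")  # < 0.04, significant
```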
Results
• Participants have substantive discussions
• 3 independent raters scored each discussion on a scale of 1 to 4
• 73 of 98 discussions (74%) were rated 4 by all raters; 80 (82%) had a median rating of 4 (inter-rater agreement: Spearman’s rho = 0.65)
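For reference, a minimal sketch of how pairwise inter-rater agreement like the rho above can be computed with SciPy; the rating vectors here are made-up placeholders, not the study’s data.

```python
from scipy.stats import spearmanr

rater_a = [4, 4, 3, 4, 2, 4, 4, 1]  # placeholder ratings, 1-4 scale
rater_b = [4, 3, 3, 4, 2, 4, 4, 2]
rho, p = spearmanr(rater_a, rater_b)
print(f"Spearman's rho = {rho:.2f} (p = {p:.3f})")
```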
Next Steps
• Put this into MOOCs!
• We have an experiment underway right now
Other MOOC Projects
• Forum Usage
• Role of Instructor
• Untangling Correlation from Causation
• MOOC Instructor Dashboards
Thank you!
Marti A. Hearst, UC Berkeley
Joint work with Bjorn Hartmann, Armando Fox, Derrick Coetzee, Taek Lim
Sponsored in part by a Google Social Interactions Grant