Presentation Transcript

  1. Evaluating Online and Blended Learning Environments

  2. What does evaluation mean to you? • Analysis • Critique • Judgment • Feedback • Audit • Reflection • Improvement • Client perspective • Satisfaction

  3. Agenda • Clarify the challenges of evaluating online and blended learning. • Introduce an evaluation model. • Present case studies. • Engage in a planning exercise.

  4. Planning Exercise Part 1 • Sketch an evaluation plan for a new online course titled “21st Century Communication Skills.” • The course jointly enrolls students from around the globe. • It is designed for participants from multiple cultures and various fields of study.

  5. “It would be very surprising if even 10 percent of organizations using e-learning actually conducted any well-structured and executed evaluations.” http://www.alleninteractions.com/

  6. “An evaluation can first and foremost determine whether the distance learning version worked as effectively as, or better than, the standard instructional approach – teaching students face to face.” - The ASTD Distance Learning Handbook

  7. We already know that online learning works as well as face-to-face instruction.

  8. Despite 50 years of “no significant differences” between media, people persist in trying to find them.

  9. Dr. Ken Allen, NYU

  10. Allen, K., Galvis, D., & Katz, R. (2004). Evaluation of CDs and chewing gum in teaching dental anatomy. Journal of Dental Research, 83.

  11. Evaluation Is the Key to Effective, Successful Online Learning Initiatives

  12. A major evaluation challenge is determining what your stakeholders will regard as credible evidence.

  13. Evaluation Paradigms • People hold different evaluation paradigms. • We should recognize our own paradigm and those of others. • Try to avoid paradigm wars.

  14. Experimental (Quantitative) Paradigm • There are facts with an objective reality that exist regardless of our beliefs. • The goal of evaluation is to detect the causes of changes in phenomena through measurement and quantitative analysis. • Experimental designs are best because they reduce “error” which hides the truth. • Detachment is the ideal state.

  15. Interpretive (Qualitative) Paradigm • Reality is socially constructed through collective definitions of phenomena. • The goal of evaluation is to interpret phenomena from multiple perspectives. • Ethnographic methods such as observation and interviews are best because they provide the basis for sharing interpretations. • Immersion is the ideal state.

  16. Postmodern (Critical) Paradigm • Reality is individually constructed based upon experience, gender, culture, etc. • The goal of evaluation is to improve the status of under-privileged minorities. • Critical theory that deconstructs phenomena is best because it reveals the “hidden curriculum” or other power agendas in technological innovations. • Political engagement is the ideal state.

  17. Pragmatic (Eclectic) Paradigm • Reality is complex, and many phenomena are chaotic and unpredictable. • The goal of evaluation is to provide decision-makers with the information they need to make better decisions. • Methods and tools should be selected on the basis of their potential for enhancing the quality of decision-making. • Wonder and skepticism are the ideal states.

  18. Experimental Evaluation Flaws • There is over-reliance on comparative designs using inadequate measures. • No significant differences are the most common result in media comparisons. • Learning is difficult to measure in most cases, especially in higher education.

  19. We don’t know enough about the outcomes of teaching and learning in higher education. • It is convenient for everyone involved to pretend that high quality, relevant teaching and learning are occurring.

  20. “Quality” ratings of universities & colleges by commercial entities have enormous impact in the USA today.

  21. The criteria used for these rankings are surprisingly dubious.

  22. Film Clip from “Declining by Degrees” by John Merrow and Learning Matters

  23. Film Clip from “Declining by Degrees” by John Merrow and Learning Matters

  24. Interpretive Evaluation Flaws • Administrators often express disdain for “anecdotal evidence.” • Observations and interviews can be expensive and time-consuming. • Qualitative interpretations are open to bias.

  25. The Failure of Educational Research • Vast resources going into education research are wasted. • They [educational researchers] employ weak research methods, write turgid prose, and issue contradictory findings.

  26. The Failure of Educational Research • Too much useless work is done under the banner of qualitative research. • Qualitative research … [yields] … little that can be generalized beyond the classrooms in which it is conducted.

  27. Postmodern Evaluation Flaws • It is easier to criticize than to propose solutions. • Extreme subjectivity is not widely accepted, especially outside higher education. • Whose power agenda should be given precedence?

  28. The Trouble with Postmodernists • Write critiques in a language inaccessible to decision makers. • Regard technologies as inherently evil.

  29. Pragmatic Evaluation Flaws • Requires larger commitment of resources to the evaluation enterprise. • Mixed-methods can be expensive and time-consuming. • Sometimes, decision-makers ignore even the best evidence.

  30. The Lesson of the Vasa http://www.vasamuseet.se/

  31. So what are some better ideas about evaluating online and blended learning courses?

  32. Three core starting points. • Plan up front. • Align anticipated decisions with evaluation questions. • Use multiple criteria and multiple data collection methods.

  33. Decision-Making and Evaluation • We must make decisions about how we go about designing and using e-learning. • Information from evaluation is a better basis for decision-making than habit, intuition, superstition, politics, prejudice, or just plain ignorance.

  34. Planning is the key! • A major challenge is getting stakeholders to identify the decisions they face. • Clear decisions drive the rest of the planning. • Evaluation questions emerge from decisions. • Methods emerge from questions.

  35. Conducting Evaluations - Step 1 • Identify decisions that must be made about e-learning. • adopt • expand • improve • abandon • reallocate funding

  36. Conducting Evaluations - Step 2 • Clarify questions that must be addressed to guide decisions. • Who is enrolled in e-learning and why? • How can the course management system (CMS) be improved? • What is the impact on access? • What is the impact on performance?

  37. Conducting Evaluations - Step 3 • Select methods. • Observations • Interviews • Focus Groups • Questionnaires • Data log analysis • Expert Review • Usability Studies

  38. Conducting Evaluations - Step 4 • Collect the data. • Triangulate. • Revise data collection strategies as needed. • Accept limitations, but aim for quality. • Be sensitive to demands on all participants.

  39. Conducting Evaluations - Step 5 • Report findings so that they influence decisions in time. • Report early and often • Use multiple formats • Engage stakeholders in focus groups • Don’t hide complexity
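
The five planning steps above can be sketched as a small data structure that ties decisions (Step 1) to questions (Step 2) to methods (Step 3), with a triangulation check for Step 4. All names and sample entries here are hypothetical illustrations, not part of the original presentation.

```python
# Hypothetical evaluation plan: decisions drive questions, questions drive methods.
DECISIONS = ["adopt", "expand", "improve", "abandon", "reallocate funding"]

plan = {
    "improve": {
        "How can the CMS be improved?": ["usability study", "data log analysis"],
        "Who is enrolled in e-learning and why?": ["questionnaire", "interviews"],
    },
    "expand": {
        "What is the impact on access?": ["data log analysis", "questionnaire"],
    },
}

def check_triangulation(plan):
    """Flag questions addressed by fewer than two methods (Step 4: triangulate)."""
    weak = []
    for decision, questions in plan.items():
        for question, methods in questions.items():
            if len(methods) < 2:
                weak.append(question)
    return weak

print(check_triangulation(plan))  # [] — every question uses at least two methods
```

The point of the sketch is the dependency order: a question with no decision behind it, or a method with no question behind it, has no place in the plan.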

  40. Selected Criteria for Evaluation • Consistency • Economy • Learning • Flexibility • Efficiency • Safety
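
One way to use multiple criteria like these is a simple weighted decision matrix. The weights and 1–5 ratings below are hypothetical placeholders, not figures from the presentation:

```python
# Hypothetical weights for the six criteria (sum to 1.0).
criteria_weights = {
    "learning": 0.30, "consistency": 0.15, "economy": 0.20,
    "flexibility": 0.15, "efficiency": 0.10, "safety": 0.10,
}

# Illustrative stakeholder ratings (1-5) for two delivery options.
scores = {
    "online":       {"learning": 3, "consistency": 5, "economy": 4,
                     "flexibility": 5, "efficiency": 4, "safety": 5},
    "face-to-face": {"learning": 3, "consistency": 2, "economy": 2,
                     "flexibility": 2, "efficiency": 3, "safety": 3},
}

def weighted_score(option):
    """Weighted sum of an option's ratings across all criteria."""
    return sum(criteria_weights[c] * scores[option][c] for c in criteria_weights)

for option in scores:
    print(option, round(weighted_score(option), 2))  # online 4.1, face-to-face 2.5
```

A matrix like this makes the stakeholders' value judgments explicit in the weights, which is exactly where paradigm differences tend to surface.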

  41. In comparison to traditional instructor-led methods, e-learning may show statistically significant, but modest, learning gains as measured by most standardized tests. Developing reliable, valid measures of the most important outcomes is difficult and expensive. [Criterion gauge: Learning, High–Low scale]

  42. Who do we want our learners to become? • Better problem-solvers and communicators • Capable of working collaboratively as well as independently • Knowledgeable • Highly skilled

  43. Who do we want our learners to become? • Experts who possess robust mental models specific to the professions in which they work • Lifelong learners who value personal and professional development

  44. Developing reliable and valid online tests is expensive.

  45. In addition, many, if not most, important outcomes are difficult to assess with traditional measures.

  46. In comparison to traditional instructor-dependent methods, e-learning can be more consistent, providing each learner with equivalent exposure to content, interactions, and assessment strategies, all of which can be reliably documented. [Criterion gauge: Consistency, High–Low scale]

  47. In comparison to traditional classroom instruction, e-learning can be more economical. Unfortunately, valid examples of ROI evaluations for e-learning are still rare, especially in higher education. [Criterion gauge: Economy, High–Low scale]

  48. In comparison to many types of laboratory or field activities, e-learning can be safer for both people and equipment. Safety is an increasingly important criteria in higher education as well as in business and industry. Safety High Low