
Are Your e-Learners Learning? How to develop online level 2 evaluations quickly and effectively


Presentation Transcript


  1. Are Your e-Learners Learning? How to develop online level 2 evaluations quickly and effectively Gus Prestera, PhD, CPT President, effectPerformance, Inc. Instructional Design Consultant April 26, 2004

  2. Why assess learning?

  3. Are your PALs aligned? Learning Context (e.g., Classroom) Performance Context (e.g., Workplace) Learning Task Performance Task Assessment Task

  4. Agenda • Rapid Prototyping • Before Test Development… • 4-Step Test Development Process • Practice • Discussion

  5. Rapid Prototyping Rapid Prototyping: Develop a functional prototype quickly, test/refine it until it is accepted, and then proceed with full development

  6. Rapid Prototyping • Reverse-engineering • Develop → User test → Refine • Minimal upfront analysis • Iterative and incremental approach • Continuous improvement • Progressive refinement • User centric • Reliant on user input and user feedback • Testing under realistic conditions

  7. Thiagi’s Rapid ID Model • Strategy 1. Speed up the process • Strategy 2. Use a partial process • Strategy 3. Incorporate existing instructional materials • Strategy 4. Incorporate existing non-instructional materials • Strategy 5. Use templates • Strategy 6. Use computers and recording devices • Strategy 7. Involve more people • Strategy 8. Make efficient use of subject matter experts • Strategy 9. Involve trainees in speeding up instruction • Strategy 10. Use performance support systems Thiagarajan (1999)

  8. My Approach • Prioritize - Spend time on what matters most • Produce - Move from abstract to concrete fast… • Pilot - Don’t guess, just see if it works • Learn - Creative processes are iterative • Listen - Involve learners early and often • Leverage - Use technology, templates, EPSS tools • Streamline - Reduce process complexity, inefficiencies, and redundancies to cut cycle time and costs • Align - Maintain PAL alignment

  9. Before Test Development… Source: Prestera, 2004a

  10. Front-End Analysis • What are the performance gaps or opportunities? • What are the root causes? • What interventions will close those gaps? • Which gaps are skill gaps, i.e., caused by gaps in knowledge, skills, or attitudes? • Which skill gaps can/should be addressed through training?

  11. Training Needs Assessment (TNA) • Identify critical skills • Prioritize skill set • Difficulty of implementation • Potential of impact • Type of cognitive process • Type of knowledge (Krathwohl, 2002) • Survey skill needs (Prestera, 2004b)

  12. Tool: IRC Worksheet High-IRC skills are more difficult to implement, have a high potential impact on the organization, and require the most instructional resources to develop/influence.
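
As a rough illustration of the kind of prioritization the IRC Worksheet supports, here is a minimal Python sketch (not the worksheet itself). The 1-5 rating scales, the example skills, and the multiply-the-ratings scoring rule are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class Skill:
    name: str
    difficulty: int  # difficulty of implementation, rated 1 (low) to 5 (high) -- assumed scale
    impact: int      # potential impact on the organization, rated 1 (low) to 5 (high) -- assumed scale

def irc_score(skill: Skill) -> int:
    """Combine the two ratings into a single priority score.
    Higher scores flag skills that are harder to implement and higher impact,
    i.e., the ones deserving the most instructional resources."""
    return skill.difficulty * skill.impact

skills = [
    Skill("Process coupon transactions", difficulty=2, impact=4),
    Skill("Conduct a job interview", difficulty=4, impact=5),
    Skill("Recall product specifications", difficulty=1, impact=2),
]

# Rank skills from highest to lowest priority
for s in sorted(skills, key=irc_score, reverse=True):
    print(f"{s.name}: IRC score {irc_score(s)}")
```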

  13. Tool: TNA Survey This tool automates survey development and analysis, quickly showing you which skills have a high perceived training need and which have a low perceived need and could be addressed through non-training interventions, if at all.
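
Along the same lines, the sketch below shows how survey ratings might be rolled up to flag high versus low perceived training need. It is not the TNA Survey Tool; the 1-5 rating scale, the cutoff values, and the example skills are all assumptions.

```python
import statistics

# Hypothetical survey data: each respondent rates perceived training need
# per skill on a 1 (no need) to 5 (critical need) scale.
responses = {
    "Distinguish valid vs. invalid coupons": [4, 5, 4, 3, 5],
    "Process coupon transactions":           [2, 3, 2, 2, 3],
    "Apply hiring laws and regulations":     [5, 4, 5, 5, 4],
}

HIGH_NEED = 4.0  # assumed cutoffs; a real tool would let you set these parameters
LOW_NEED = 2.5

for skill, ratings in responses.items():
    mean = statistics.mean(ratings)
    if mean >= HIGH_NEED:
        verdict = "high perceived training need -> candidate for training"
    elif mean <= LOW_NEED:
        verdict = "low perceived need -> consider non-training interventions"
    else:
        verdict = "moderate need -> investigate further"
    print(f"{skill}: mean {mean:.1f} ({verdict})")
```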

  14. Test Development: A 4-Step Process Source: Prestera, 2004a

  15. Step 1: Identify Criteria • Form panel (3-7 people) • Exemplar workers • Subject matter experts (SMEs) • Review skill set • Brainstorm assessment criteria

  16. Step 2: Develop Test • Is the skill well-defined or ill-defined? • Is there a set of right and wrong ways of doing things? • Or is right/wrong more dependent upon perspective, degrees of rightness, and context? • Does the skill need to be: • Physically performed (motor, psychomotor, and some procedural tasks) • Mentally performed (decision-making, problem-solving, remembering, analyzing, synthesizing, evaluating) • Write test items and test instructions • Review with an SME for content (Nitko, 1996) • Review for grammar, spelling, etc.

  17. Test Format Matrix Link to test writing guidelines: http://taesig.8m.com/createcon.html • Columns: Performance (performance tests, simulations, projects, apprenticeships) vs. Knowledge (MC, TF, matching, fill-in, short answer, essay, report) • Rows: Objective scoring (correct/incorrect) vs. Subjective scoring (rating scales) • Cells: Objective/Performance, Objective/Knowledge, Subjective/Performance, Subjective/Knowledge • Note: There is an element of objectivity in almost any “subjective” judgment and some subjectivity in any “objective” judgment, and the same overlap exists between “performance” and “knowledge” tasks, so do not get hung up on the labels.

  18. Practice: What format would you use? • Cashier’s ability to distinguish between valid and invalid coupons • Cashier’s ability to process transactions involving coupons at the cash register • Sales person’s product knowledge • Designer’s ability to select the right test format • Manager’s ability to apply laws and regulations governing hiring practices • Manager’s ability to conduct a job interview • Sales person’s ability to use product knowledge to help customers make good product decisions

  19. Remember Your PALs Learning Context (e.g., Classroom) Performance Context (e.g., Workplace) Learning Task Performance Task Assessment Task

  20. Step 3: Pilot Test (Practice Set 1) • Difficult to write good test items • But so easy to write bad ones • Use a random sample of actual learners • Alternative: two-group approach, using a group of average learners with no training and a group of exemplars • After data collection, copy the data into our Item Analysis Tool • Set the parameters and you’re ready!
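
The sketches in the following slides assume pilot data laid out as a learner-by-item matrix of 1s (correct) and 0s (incorrect), the typical input for classical item analysis. This is an illustrative layout, not necessarily the exact format the Item Analysis Tool expects, and the data are made up.

```python
# Hypothetical pilot data: one row per learner, one column per test item,
# scored 1 for a correct response and 0 for an incorrect one.
responses = [
    # item1, item2, item3, item4, item5
    [1, 1, 0, 1, 1],   # learner 1
    [1, 0, 0, 1, 0],   # learner 2
    [1, 1, 1, 1, 1],   # learner 3
    [0, 0, 0, 1, 0],   # learner 4
    [1, 1, 0, 1, 1],   # learner 5
    [0, 1, 0, 0, 0],   # learner 6
]

# Each learner's total score is simply the row sum
total_scores = [sum(row) for row in responses]
print(total_scores)
```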

  21. Set the Parameters

  22. After these preliminary steps, you are ready to interpret the results of the Test and Item Analysis

  23. Step 4: Revise Test • Interpret indicators: • Reliability estimates • Item difficulty (p) • Item discrimination (d) • Revisit criteria • Revise test items • Pilot again

  24. Test Reliability • Is it measuring consistently? • How often is your watch accurate? • Would you use it if it were accurate 50% of the time? • Reliability estimates: • KR-20, KR-21 (Kuder & Richardson, 1937) • Alpha (Cronbach, 1951)
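
To make the reliability estimate concrete, here is a minimal Python sketch of KR-20 (Kuder & Richardson, 1937) computed from a dichotomously scored learner-by-item matrix; for 0/1 items with population variances, Cronbach’s alpha reduces to the same value. The sample data are invented, and this is an illustration rather than the Item Analysis Tool’s actual computation.

```python
import statistics

# Hypothetical pilot responses: rows are learners, columns are items (1 = correct, 0 = incorrect)
responses = [
    [1, 1, 0, 1, 1],
    [1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0],
    [1, 1, 0, 1, 1],
    [0, 1, 0, 0, 0],
]

def kr20(matrix):
    """Kuder-Richardson formula 20 for dichotomously scored items."""
    k = len(matrix[0])                        # number of items
    n = len(matrix)                           # number of learners
    totals = [sum(row) for row in matrix]     # each learner's total score
    var_total = statistics.pvariance(totals)  # variance of total scores
    # p = proportion correct per item, q = 1 - p
    pq_sum = sum((p := sum(row[j] for row in matrix) / n) * (1 - p) for j in range(k))
    return (k / (k - 1)) * (1 - pq_sum / var_total)

print(f"KR-20 reliability estimate: {kr20(responses):.2f}")
```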

  25. Item Discrimination (d) • Is it measuring accurately? (Discriminant validity) • Does the question differentiate between those who know their stuff and those who don’t? • If your watch were reliable but consistently told you the wrong time, would you keep it? • d is the key indicator (Sullivan, Wircenski & Major, 1999) • d > .1: good question • 0 < d < .1: weak question • d < 0: bad question
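
A minimal sketch of the classic upper-minus-lower computation of d, labeling each item with the thresholds from the slide. The top/bottom-half split (larger samples often use the top and bottom 27%) and the sample data are assumptions for illustration.

```python
# Hypothetical pilot responses (1 = correct, 0 = incorrect), one row per learner
responses = [
    [1, 1, 0, 1, 1],
    [1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0],
    [1, 1, 0, 1, 1],
    [0, 1, 0, 0, 0],
]

def discrimination(matrix, item):
    """Upper-group minus lower-group proportion correct for one item.
    Learners are ranked by total score and split into top and bottom halves."""
    ranked = sorted(matrix, key=sum, reverse=True)
    half = len(ranked) // 2
    upper, lower = ranked[:half], ranked[-half:]
    p_upper = sum(row[item] for row in upper) / len(upper)
    p_lower = sum(row[item] for row in lower) / len(lower)
    return p_upper - p_lower

for item in range(len(responses[0])):
    d = discrimination(responses, item)
    verdict = "good" if d > 0.1 else ("weak" if d >= 0 else "bad")
    print(f"Item {item + 1}: d = {d:+.2f} ({verdict} question)")
```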

  26. Item Difficulty (p) • How difficult was that question? • What are the odds that a learner will get it right in the future? • Good questions are challenging but feasible • Too easy – Is training even necessary for that skill? • Too hard – Is current training for that skill effective?
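
Item difficulty p is simply the proportion of pilot learners who answered the item correctly. In the sketch below, the “too easy”/“too hard” cutoffs (0.90 and 0.30) are illustrative choices, not fixed rules, and the data are made up.

```python
# Hypothetical pilot responses (1 = correct, 0 = incorrect), one row per learner
responses = [
    [1, 1, 0, 1, 1],
    [1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0],
    [1, 1, 0, 1, 1],
    [0, 1, 0, 0, 0],
]

n_learners = len(responses)
for item in range(len(responses[0])):
    p = sum(row[item] for row in responses) / n_learners
    if p > 0.9:          # illustrative cutoff for "too easy"
        note = "very easy -- is training even necessary for this skill?"
    elif p < 0.3:        # illustrative cutoff for "too hard"
        note = "very hard -- is current training (or the item itself) effective?"
    else:
        note = "challenging but feasible"
    print(f"Item {item + 1}: p = {p:.2f} ({note})")
```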

  27. Charts: Item Discrimination (d) and Item Difficulty (p)

  28. So what’s in a curve? This distribution is skewed, with scores piling up at the low end, because so many items are extremely difficult. This distribution is not “Normal.”

  29. A “Normal” Distribution
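
One quick way to judge whether pilot scores look roughly “Normal” is to compare the mean and median; Pearson’s second skewness coefficient does exactly that. A minimal sketch with made-up total scores (the statistic and the data are illustrative, not part of the presented tools):

```python
import statistics

# Hypothetical total scores from a pilot (max possible score assumed to be 20)
scores = [3, 4, 4, 5, 5, 6, 7, 8, 10, 15]

mean = statistics.mean(scores)
median = statistics.median(scores)
stdev = statistics.stdev(scores)

# Pearson's second skewness coefficient: near 0 for a symmetric, "Normal"-looking
# distribution; large absolute values signal a skewed shape, e.g., scores piling
# up at the low end when many items are too difficult.
skew = 3 * (mean - median) / stdev
print(f"mean={mean:.1f}, median={median}, skewness={skew:.2f}")
```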

  30. Practice: Interpreting Analysis Results Practice Set 2 Practice Set 3

  31. Discussion

  32. “Analysis takes time and time is in short supply”(Rossett, 1999; pp. x-xi) Benefits • Simple-to-use tool makes item analysis fast and easy • Tests with high face, discriminant, and ecological validity as well as reliability • Validation promotes sense of fairness in test process • Assessments create a sense of learner accountability • High-quality tests drive high-quality training • Concrete understanding of client needs • Iterative cycle enables test development to inform design decisions • Continuous improvement approach compatible with Six Sigma, LEAN, and Gemba Kaizen quality models

  33. How can valid assessments help you address these concerns regarding e-learning and the workplace? Is work performance relevant anymore? • 84% admit they could work much harder • 50% admit they only work as hard as they must to keep their jobs • Individuals contribute about 30% less when working in teams Is e-learning relevant to work performance? • 14% usage rates • 60% dropout rates Sources: ASTD, 2001; Clark, 2004

  34. Key Success Factors • Can you form a panel of exemplar workers? • Can you secure pilot participants? • Can you get over the fear of not being perfect the first time? • Are you willing to discard and revise items?

  35. Obstacles • Anti-test cultures • Lack of management support • Fear of making mistakes and learning from feedback • Tendency to do things once and forget about them

  36. Did we get there? After attending this session, are you able to use the rapid prototyping process and tools provided to: • Identify and prioritize needed skills? • Collaborate with learners to brainstorm assessment criteria for each skill? • Determine what test formats need to be used in order to keep PALs aligned? • Run a pilot and quickly conduct test and item analyses? • Use pilot data to decide what to remove, revise, or refine? • Position assessments as a means to drive training?

  37. Slides and tools available at: http://www.effectperformance.com/html/library.htm Contact effectPerformance Instructional design solutions for your learning and performance needs Gus Prestera, Ph.D., CPT President, effectPerformance, Inc. E-mail: gprestera@effectPerformance.com Voice 610.449.2060  Fax 610.449.2061 1513 Fairview Avenue, Havertown, PA 19083 www.effectPerformance.com

  38. References
ASTD. (2001). Benchmarking report on e-learning.
Bloom, B. S., Engelhart, M. D., Furst, E. J., Hill, W. H., & Krathwohl, D. R. (1956). Taxonomy of educational objectives: The classification of educational goals. Handbook 1: Cognitive domain. White Plains, NY: Longman.
Clark, R. E. (2004, March). The “10 Most Wanted” motivation killers. PerformanceXpress.
Clark, D. (2003, August). How effective is training? A new summary of the past 40 years of training field research and evaluation. PerformanceXpress.
Cronbach, L. J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16, 297-334.
Dick, W., & Carey, L. (1990). The systematic design of instruction. Glenview, IL: Scott, Foresman.
Kirkpatrick, D. (1998). Evaluating training programs: The four levels (2nd ed.). San Francisco, CA: Berrett-Koehler.
Krathwohl, D. R. (2002). A revision of Bloom’s taxonomy: An overview. Theory into Practice, 41(4), 212-218.
Kuder, G. F., & Richardson, M. W. (1937). The theory of the estimation of test reliability. Psychometrika, 2, 151-160.
Nitko, A. J. (1996). Educational assessment of students (2nd ed.). Englewood Cliffs, NJ: Prentice-Hall.
Prestera, G. E. (2004a). Are your e-learners learning? A rapid prototyping process and tool for test development. effectPerformance White Papers. Retrieved from http://www.effectPerformance.com/html/library.htm
Prestera, G. E. (2004b). Training needs assessment: Process and tools to help you identify and prioritize training needs quickly. effectPerformance White Papers. Retrieved from http://www.effectPerformance.com/html/library.htm
Prestera, G. E. (2004c). Understanding ADDIE: A foundation for designing instruction. effectPerformance White Papers. Retrieved from http://www.effectPerformance.com/html/library.htm
Rossett, A. (1999). First things fast: A handbook for performance analysis. San Francisco, CA: Jossey-Bass.
Sullivan, R. L., Wircenski, J. L., & Major, M. J. (1999). Analyzing knowledge-based tests. In D. L. Kirkpatrick (Ed.), Another look at evaluating training programs (pp. 113-118). Alexandria, VA: ASTD.
Thiagarajan, S. (1999). Rapid instructional design. Workshops by Thiagi, Inc. Retrieved November 18, 2003, from http://www.thiagi.com/article-rid.html
