
Are your OSCE examiners any good?


Presentation Transcript


  1. Are your OSCE examiners any good?

  2. OSCE: Background
  • First described in 1979
  • Involves the rotation of students through multiple stations at which they are required to perform specific observable tasks

  3. Validity & Reliability issues
  • Content & number of OSCE stations
  • “Interrater inconsistency”
  • “Interstation reliability” influenced by case specificity
  • Determining the cut score
  • Design & use of score sheets
  • Inappropriate examiner conduct
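The slides flag cut-score determination as a reliability issue without naming a standard-setting procedure. As a hedged illustration only (the presentation does not say which method was used), the borderline-group method sets the pass mark at the mean checklist score of candidates whom examiners globally rated “borderline”. The function name and all data below are hypothetical:

```python
from statistics import mean

def borderline_group_cut_score(checklist_scores, global_ratings,
                               borderline_label="borderline"):
    """Borderline-group method: the cut score is the mean checklist
    score of candidates whose examiner global rating was 'borderline'."""
    borderline = [s for s, r in zip(checklist_scores, global_ratings)
                  if r == borderline_label]
    return mean(borderline)

# Hypothetical station: checklist scores out of 20 with global ratings.
scores = [18, 15, 12, 11, 10, 8]
ratings = ["pass", "pass", "borderline", "borderline", "borderline", "fail"]
print(borderline_group_cut_score(scores, ratings))  # 11
```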

  4. Examiner influence
  • Rewriting the station
  • Redesigning assessment instruments
  • Errors of central tendency & contrast
  • Examinee familiarity
  • ‘Halo’ effect
  • ‘Hawk-dove’ effect
  • Prompting, teaching & feedback
  • DRIFT (differential rater function over time)
  • Interfering with the SP’s (simulated patient’s) role

  5. Simulated patient

  6. Remedies for examiner influence
  • Statistical techniques
  • Pair high- & low-stringency examiners
  • Use several examiners & average their scores
  • Give feedback
  • Eliminate extreme outliers
  • “Ownership” of the entire assessment
  • Prevent examiner boredom & fatigue
  • Examiner training
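Two of the statistical remedies above, averaging several examiners’ scores and eliminating extreme outliers, can be combined in a single moderation step. A minimal sketch, assuming scores on a common scale and a z-score outlier rule; the cutoff, function name, and data are hypothetical, not taken from the study:

```python
from statistics import mean, stdev

def moderated_score(examiner_scores, z_cutoff=2.0):
    """Average one candidate's scores from several examiners,
    first discarding any score more than z_cutoff standard
    deviations from the panel mean."""
    if len(examiner_scores) < 3:
        return mean(examiner_scores)  # too few raters to judge outliers
    m, s = mean(examiner_scores), stdev(examiner_scores)
    kept = [x for x in examiner_scores
            if s == 0 or abs(x - m) / s <= z_cutoff]
    return mean(kept)

# One hawkish outlier (2/20) is dropped before averaging.
print(moderated_score([14, 15, 13, 14, 14, 2]))  # 14
```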

  7. How did we go about it?
  • Rater reflection on performance

  8. Data collection strategies
  • Participants:
    • Questionnaires
    • Video recordings
    • Score sheets
    • Course evaluations
  • Students:
    • Focus group interviews

  9. Results
  • Improved examiner conduct & inter-rater reliability through:
    • Examiner briefing
    • Assessment instrument construction
    • Reflecting on assessment behaviour
  • “I think it’s a good idea, the whole OSCE thing, to standardise it. Like the current clinical evaluations is, well, very subjective. It matters which doctor you get, which patient you get… so, to sort of, make it fairer for everyone…” [translated]

  10. Before the intervention ...

  11. Increased inter-rater reliability
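The slides report increased inter-rater reliability without naming the statistic used. One common chance-corrected measure of agreement between two examiners’ categorical decisions (e.g. pass/fail) is Cohen’s kappa. A minimal sketch with hypothetical ratings, offered as an illustration rather than the study’s actual analysis:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for the
    agreement expected by chance from each rater's marginals."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

a = ["pass", "pass", "fail", "pass", "fail", "pass"]
b = ["pass", "pass", "fail", "fail", "fail", "pass"]
print(round(cohens_kappa(a, b), 2))  # 0.67
```

Kappa of 1.0 means perfect agreement; 0 means agreement no better than chance, so a rise after examiner training would indicate more consistent marking.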

  12. Limitations of the study
  • Small sample size
  • Staged OSCE
  • Observer effect

  13. Conclusion
  • OSCE examiner training reduces inappropriate examiner conduct and improves inter-rater reliability
  • The future…

  14. References
  • IC McManus, M Thompson & J Mollon. Assessment of examiner leniency and stringency (‘hawk-dove effect’) in the MRCP(UK) clinical examination (PACES) using multi-facet Rasch modelling. BMC Medical Education 2006, 6:42
  • Barry R. Nathan & Robert G. Lord. Cognitive categorization and dimensional schemata: a process approach to the study of halo in performance ratings. Journal of Applied Psychology 1983, Vol 68, No 1, 102–114
  • Reed G. Williams, Debra A. Klamen & William C. McGaghie. Cognitive, social and environmental sources of bias in clinical performance ratings. Teaching and Learning in Medicine 2003, 15(4), 270–292
  • Improving oral examinations: selecting, training, and monitoring examiners for the MRCGP (membership examination of the Royal College of General Practitioners). British Medical Journal 1995, 311(7010), 931–935
  • Peter H. Harasym, Wayne Woloschuk & Leslie Cunning. Undesired variance due to examiner stringency/leniency effect in communication skill scores assessed in OSCEs. Adv in Health Sci Educ 2008, 13:617–632
  • C. P. M. Van der Vleuten, S. J. Van Luyk, A. M. J. Van Ballegooijen & D. B. Swanson. Training and experience of examiners. Medical Education 1989, 23, 290–296
  • Kevin McLaughlin, Martha Ainslie, Sylvain Coderre, Bruce Wright & Claudio Violato. The effect of differential rater function over time (DRIFT) on objective structured clinical examination ratings. Medical Education 2009, 43, 989–992
  • Ann Jefferies, Brian Simmons & Glenn Regehr. The effect of candidate familiarity on examiner OSCE scores. Medical Education 2007, 41, 888–891
  • Tim J. Wilkinson, Christopher M. Frampton, Mark Thompson-Fawcett & Tony Egan. Objectivity in objective structured clinical examinations: checklists are no substitute for examiner commitment. Academic Medicine 2003, Vol 78, No 2
