
Measuring Post-Licensure Competence


Presentation Transcript


  1. Measuring Post-Licensure Competence: The Nursing Performance Profile

  2. Research Team • Janine Hinton RN, Ph.D • Mary Mays Ph.D • Debra Hagler RN, Ph.D • Pamela Randolph RN, MS • Beatrice Kastenbaum RN, MSN, CNE • Ruth Brooks RN, MS, BC • Nick DeFalco RN, MS • Kathy Miller RN, MS • Dan Weberg RN, MHI

  3. Support • Funded by NCSBN CRE Grant • Supported by: • Scottsdale Community College • Arizona State University • Arizona State Board of Nursing

  4. Statement of the Problem • A valid, reliable practice assessment is needed to support intervention on the public’s behalf when a pattern of nursing performance results in, or is likely to result in, patient harm

  5. Literature Review • Medical errors are a leading cause of death (IOM, 2000) • Written tests do not directly measure performance (Auewarakul, Downing, Jaturatamrong, & Praditsuwan, 2005) • Multiple observations of a nurse’s performance have provided evidence of competent practice (Williams, Klamen, & McGaghie, 2003)

  6. Literature Review • High-fidelity simulation technology allows the creation of reproducible scenarios to evaluate nursing performance (Boulet et al., 2011; Kardong-Edgren, Adamson, & Fitzgerald, 2010) • Nursing and health care leaders have called for performance assessments to evaluate competence and support remediation (Benner, Sutphen, Leonard, & Day, 2010; IOM, 2011)

  7. Purpose of study • To develop and evaluate a high-stakes simulation testing process to measure minimally safe nursing practice competence and identify remediation needs.

  8. Methodology • Needed a process that would support sophisticated measures of validity and reliability • Participants appeared in 3 simulation videos • 3 subject matter experts rated each video on 41 measures of competency • Raters were blind to participant ability, experience, and order of testing • Videos presented a range of safe and unsafe performance • Obtained ratio-level data suitable for parametric, inferential statistical analysis

  9. Filming Participant Demographics • Criteria: newly licensed RNs with less than 3 years of nursing experience (N=21) • Average age = 32 • 95% female • 58% White, 16% Black, 26% Hispanic • 79% AD; 21% BSN • Mean experience = 1.05 years • 74% had some prior experience with simulation

  10. Rater Demographics • Criteria: BSN, at least 3 years of experience, and a role that involves evaluating others (N=4) • Average experience = 12.5 years • Age 31-51 • White, female • Education: 3 BSN, 1 MS

  11. Instrument Development • Developed and established initial validity/reliability before funding • TERCAP served as the theoretical framework (Benner et al., 2006; Woods & Doan-Johnson, 2003) • Survey items from NCSBN’s Clinical Competency Assessment of Newly Licensed Nurses were adapted (NCSBN, 2007) • Mapped to QSEN competencies

  12. Categories of Items (TERCAP) • Professional Responsibility • Client Advocacy • Attentiveness • Clinical Reasoning, noticing • Clinical Reasoning, understanding • Communication • Prevention • Procedural Competency • Documentation

  13. Example of One Item Category and Its Competencies • Prevention: • Infection control • 2 client identifiers • Appropriate positioning • Safe environment

  14. Scoring: 4 Possibilities • Performance or action is consistent with standards of practice and free from actions that may place the client at risk for harm • Fails to perform or performs in a manner that exposes the client to risk for harm • No opportunity to observe in the scenario • Blank

  15. Scoring the Test • No weighted items • No pass/fail standard • Description of the nurse’s performance across 9 categories of competency • Final rating of each item based on inter-rater agreement: at least 2 of 3 raters must agree (see the sketch below)
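A minimal sketch of the 2-of-3 consensus rule described on this slide; the rating labels, names, and code below are illustrative assumptions, not the study's actual scoring materials.

```python
from collections import Counter

# Illustrative labels for the four scoring possibilities on each item.
SAFE = "consistent with standards / free from risk of harm"
UNSAFE = "fails to perform / exposes client to risk of harm"
NOT_OBSERVED = "no opportunity to observe"
BLANK = "blank"

def consensus_rating(ratings):
    """Final rating for one item from three raters: the value that at least
    two of the three raters agree on, or None if all three disagree."""
    value, count = Counter(ratings).most_common(1)[0]
    return value if count >= 2 else None

# Example: two raters marked the item safe, one marked it unsafe.
print(consensus_rating([SAFE, SAFE, UNSAFE]))
```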

  16. Scenarios • 3 sets of 3 scripted scenarios = 9 total • Adult acute care, common diagnoses • Each scenario had opportunities to observe all performance items • Each simulated patient had a hospital-like chart with labs, history, MAR, and orders

  17. Simulation Testing/Rating • 21 nurse performers × 3 scenarios = 63 videos • Scenario Set 1 = 5 participants • Scenario Set 2 = 8 participants • Scenario Set 3 = 8 participants • Each video evaluated by 3 raters • 189 rating instruments • 41 items rated on each instrument • 7,749 ratings
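The counts on this slide follow directly from the design; a quick arithmetic check (variable names are illustrative):

```python
# Sanity check of the testing and rating counts reported above.
participants = 5 + 8 + 8            # scenario sets 1, 2, and 3
videos = participants * 3           # each participant filmed in 3 scenarios -> 63
instruments = videos * 3            # each video rated by 3 raters -> 189
ratings = instruments * 41          # 41 items per rating instrument -> 7,749

print(participants, videos, instruments, ratings)   # 21 63 189 7749
```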

  18. Analysis Procedures • Predictive Analytics SoftWare (PASW, v18.0.3, SPSS Inc., Chicago, IL) • Frequency analysis to identify instrument properties: • Used as intended • Interrater reliability • Sensitive to common practice errors (construct validity) • Cronbach’s alpha (intercorrelation among items) was used to measure internal consistency
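The internal-consistency statistic named here, Cronbach's alpha, can be computed from a respondents-by-items score matrix. The sketch below assumes NumPy and a numeric 1/0 pass-fail coding; it is an illustration of the standard formula, not the study's actual analysis script.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of numeric scores:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)
    total_variance = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

# Toy example: 6 respondents scored on 4 dichotomous items (1 = pass, 0 = fail).
scores = np.array([
    [1, 1, 1, 1],
    [1, 1, 0, 1],
    [0, 0, 0, 1],
    [1, 1, 1, 0],
    [0, 1, 0, 0],
    [1, 0, 1, 1],
])
print(round(cronbach_alpha(scores), 2))
```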

  19. Analysis Procedures (cont.) • ANOVA was used to: • Assess the instrument’s ability to distinguish between experienced and inexperienced nurses • Assess potential bias created by administration methods

  20. Results • Less than 1% of items were left blank or not observed, indicating the scenarios were comprehensive • Interrater reliability: across all 41 items, at least 2 raters agreed 99.12% of the time • Internal consistency: Cronbach’s alpha = 0.84-0.91 for the 41 items, combined and by category

  21. Results • Construct validity: pass rates should mirror those in other studies • Infection control: 57% pass rate, mainly due to lack of hand hygiene • Documentation: 29% pass rate, an area of frequent concern in practice

  22. Results • Criterion validity • 2 groups by nursing experience: • <1 year or • 1-3 years • 2-way mixed ANOVA • Nurses with 1-3 years of experience made fewer errors than nurses with <1 year (p<0.001) • Significant in 6 of 9 categories: • Attentiveness • Clinical Reasoning (noticing) • Clinical Reasoning (understanding) • Communication • Procedural Competency • Documentation
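A sketch of the kind of 2-way mixed ANOVA described here, with competency category as the within-subjects factor and experience group as the between-subjects factor. The study ran this analysis in SPSS/PASW; the pingouin library, the column names, and the simulated error counts below are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(1)

# Hypothetical long-format data: one error count per nurse per competency
# category; experience group (<1 year vs. 1-3 years) varies between subjects.
categories = ["attentiveness", "communication", "documentation"]
rows = []
for nurse in range(21):
    group = "<1 year" if nurse < 10 else "1-3 years"
    for category in categories:
        errors = rng.poisson(3 if group == "<1 year" else 2)
        rows.append({"nurse": nurse, "group": group,
                     "category": category, "errors": errors})
data = pd.DataFrame(rows)

# Mixed ANOVA: category within subjects, experience group between subjects.
aov = pg.mixed_anova(data=data, dv="errors", within="category",
                     subject="nurse", between="group")
print(aov.round(3))
```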

  23. Comparison of Groups by Category

  24. NPP Results • [Example performance profile plots comparing an inexperienced nurse (0.5 year) with nurses having 1 and 2 years of experience]

  25. Results • Test Bias • Scenario was not significant • Category was significant: some competency categories were more difficult • Communication, prevention, procedural competency, and documentation were more difficult

  26. Results • Test Bias (continued) • Scenario set: significant only for documentation, which may be easier in Set 1 • Order of testing and practice effect were not significant • Location of testing was not significant

  27. Summary • Instrument has adequate validity and reliability • Raters used instrument as instructed and in a reproducible manner • Items were highly interrelated • Sensitive to common errors • Inexperienced nurses made more errors • Test not biased • Plots permit users to visualize performance

  28. Implications • Provides a valid, explicit measure of performance that regulatory boards could use, along with other data, to determine whether practice errors are a one-time occurrence or a pattern of high-risk behavior • Potential uses in education and practice to assess performance and the effect of educational interventions

  29. Limitations • Volunteer subjects: not random or representative • Sample size too small to support confirmatory factor analysis of the instrument’s construct validity • Tailored to a specific context and purpose • Limitations of simulation: non-verbal and skin-change cues are missing; participants must suspend disbelief

  30. Future Research • Funded by NCSBN for Phase II • Criterion validity by comparing RN self-ratings and supervisor ratings • Compare to education, certification • Broader cross-section of experienced nurses recruited

  31. References • Auewarakul, C., Downing, S. M., Jaturatamrong, U., & Praditsuwan, R. (2005). Sources of validity evidence for an internal medicine student evaluation system: An evaluative study of assessment methods. Medical Education, 39, 276-283. • Benner, P., Sutphen, M., Leonard, V., & Day, L. (2010). Educating Nurses: A Call for Radical Transformation. San Francisco, CA: Jossey-Bass. • Boulet, J. R., Jeffries, P. R., Hatala, R. A., Korndorffer, J. R., Feinstein, D. M., & Roche, J. P. (2011). Research regarding methods of assessing learning outcomes. Simulation in Healthcare, 6(7), supplement, 48-51. • Institute of Medicine (IOM). (2011). The Future of Nursing: Leading Change, Advancing Health. Washington, DC: National Academies Press.

  32. References • Institute of Medicine. (2000). To Err Is Human: Building a Safer Health System. Washington, DC: National Academies Press. • Kardong-Edgren, S., Adamson, K. A., & Fitzgerald, C. (2010). A review of currently published evaluation instruments for human patient simulation. Clinical Simulation in Nursing, 6(1). doi:10.1016/j.ecns.2009.08.004 • Williams, R. G., Klamen, D. A., & McGaghie, W. C. (2003). Cognitive, social and environmental sources of bias in clinical performance ratings. Teaching and Learning in Medicine, 15(4), 270-292.
