Past and Current Trends in the NCLEX-RN

1. Past and Current Trends in the NCLEX-RN® Mary J. Yoho, PhD, RN, CNE, Director of Faculty Development & Consultation, Elsevier Health Science, m.yoho@elsevier.com

2. Today's Students

3. Resources for Developing Critical Thinking Test Items and Alternate Format Questions
- www.ncsbn.org
- NCLEX Test Plans: 2007 & 2010 RN, 2008 PN
- Candidate info
- Alternate format items
- Innovative items
- Test taking

4. 2007 NCLEX-RN® Test Plan Additions
- Disaster planning
- Ergonomic principles
- Accident and error prevention
- Clients with non-substance-related dependencies (gambling, sexual addiction, pornography)
- Therapeutic environment
- Complementary & alternative therapies (music and relaxation therapy)

5. 2010 NCLEX-RN® Test Plan Distribution of Content
- Safe & Effective Care Environment
  - Management of Care
  - Safety & Infection Control
- Health Promotion & Maintenance
- Psychosocial Integrity
- Physiological Integrity
  - Basic Care & Comfort
  - Pharmacological & Parenteral Therapies
  - Reduction of Risk Potential
  - Physiological Adaptation

6. NCLEX-RN® Annual Pass Rates: US-Educated Graduates. NCLEX-RN annual pass rates since 1994. Note that when the passing standard was increased (April 1, 1998 and April 1, 2004), there was a significant drop in the annual pass rate.

7. Critical Thinking Item Writing: One of the best preparations for student success on the NCLEX® is to test like the NCLEX®.

8. i-clicker Question: Which objective of the Critical Thinking and Test Item Writing Workshop is most important for you?
- Evaluate exam items for their ability to test critical thinking.
- Measure an exam's reliability and validity.
- Develop an exam blueprint that reflects a valid exam.
- Interject alternate format questions into your curriculum and exams.

10. Additional NCLEX Resources

11. Morrison (chapter in Marilyn Oermann's book) described internal and external curriculum evaluation in terms of nursing curriculum. Internal curriculum evaluation: methods for improving test development and test analysis.

12. External curriculum evaluation: methods used to compare an aggregate group of students with students throughout the United States. In Dr. Oermann's book, Morrison's chapter describes how to implement HESI exams as a measure of external curriculum evaluation.

14. Today we are here to talk about internal curriculum evaluation and to learn how to improve this method of evaluation within your individual college or university. This is done by:
- Writing critical thinking test items (which will be done this afternoon using the four criteria for writing critical thinking test items identified by Morrison, et al.)
- Using item analysis software to analyze every exam administered.
- Storing the data obtained from such analysis in a database program (a test item banking program, like HESIware).

15. These are the 5 guidelines for developing effective critical thinking exams.
- Assemble the basics, which include: the software and hardware necessary to conduct an exam analysis, including an item analysis; the knowledge and ability to use the item analysis and test development software; and knowledge about curriculum design.
- Write CT test items. You will know how to do this by the end of the day, but it is not easy and it is time-consuming, which is why it is essential that you benefit from previous work by storing data.
- Pay attention to housekeeping duties; these are only about 10% of the process of writing good test items. Faculty often want to focus on them because they are concrete, and items can easily be evaluated in terms of these parameters.
- Develop a test blueprint. This can be done electronically; we will review it today.
- Scientifically analyze all exams. This is a MUST: you "need to turn on the light to cook breakfast."

16. HESI's definition of critical thinking.

17. How many of you are nurse practitioners who have never taught before? That is OK; today we will spend time talking about curriculum design. The most important thing you do as teachers is TEACH, not evaluate (Morrison, 2005, chapter in Oermann's book). The years you spent learning and practicing nursing bring a great deal to the academic setting, and this experience is an invaluable teaching tool. This diagram describes the essence of curriculum design: we as a faculty decide what outcomes we want to achieve. Do you have an outcome with regard to NCLEX pass rates? (A 100% pass rate is unreasonable; in the 90% range is, IMHO, a good outcome.) Write course objectives to meet these outcomes, for example, "the students will care for clients with gastrointestinal disorders." Decide what instruction will be provided to meet that objective: clinical practice with assignments for clients with GI disorders, classroom instruction, class presentation. Evaluate the instruction, and if the content is not understood, reverse the order and ask: was the objective/outcome reasonable? When Outcome, Objectives, Instruction, and Evaluation "talk together," the faculty has reached the educators' golden triangle.

18. This golden triangle is referred to as curriculum design. Now you have had, in essence, the content of Curriculum Design 101.

19. Breaking down the golden triangle longitudinally: begin with the school's philosophy, decide on the outcomes, write the objectives to meet those outcomes, develop instructional methods to meet the objectives, and then use evaluation tools to see whether the instructional methods have been effective in meeting the objectives, which will ultimately result in achievement of outcomes that are in keeping with the school's philosophy. Even though it looks like the evaluation component makes up the largest part of this golden triangle, it is only the LAST step in the circular process of designing and implementing a curriculum.

20. HESI's definition of critical thinking.

21. The conceptual framework used to develop HESI critical thinking test items is based on classical test theory; the concepts presented by Bloom regarding cognitive levels and the concepts presented by Paul regarding critical thinking are incorporated into this theoretical framework. Dr. Ainslie Nibert developed a model that incorporates these theoretical frameworks, which is published in the reliability and validity article that appears in CIN, July 2004: Morrison, S., Adamson, C., Nibert, A., & Hsia, S. (2004). HESI exams: An overview of reliability and validity. CIN: Computers, Informatics, Nursing, 22(4), 220-226.

22. Bloom's Taxonomy: Benjamin Bloom, 1956 (revised)

23. Bloom's Taxonomy Revised Terms

24. Paul's definition of critical thinking.

25. Post Analysis: RELIABILITY helps to determine the quality of a test. Now let's talk about evaluation. First, if this workshop followed a logical course of action, we would begin with how to write test items, and then this afternoon (after lunch) we would talk about how to evaluate those test items. However, statistics have a tendency to put some people to sleep, and after lunch is too much of a challenge. Therefore, we are going to review the statistical parameters of a good/bad test item and practice analyzing an exam. This afternoon we will practice writing critical thinking test items and present these test items to the group. So let's begin with the mainstays of exam development: reliability and validity. What is reliability?

26. Reliability really has to do with consistency of scores. Test-retest reliability means that if the same students took the same exam today that they took yesterday, they would make the same score, right? WRONG: they would have previous experience with the exam, and such experience would influence their scores on the second examination. This is why HESI does not give the same test items to students; we use a function called "compare" that enables us to identify which items a student has previously answered and to delete any such items from their retest.

27. Reliability Tools. Fortunately, we have statistical methods for determining consistency of scores. One is known as the Kuder-Richardson Formula 20, or KR20, and another is Cronbach's alpha. Both are fine to use as a measure of reliability: LXR provides a Cronbach's alpha, and ParSystem, A+ Test Manager, and HESIware provide a KR20. In Morrison's chapter (2004) in Linda Caputi's book, she describes these statistical methods and repeatedly states, "If you can count to ONE," you can understand these statistics and exam and test item analysis. (Of course, there are an infinite number of numbers between -1 and +1.) The KR20 ranges from -1 to +1, and the closer the KR20 is to ONE, the more internal consistency the exam possesses. In other words, the high-scoring students on the exam are consistently answering the test items correctly, and the low-scoring students are consistently answering the test items incorrectly. The discrimination index (PBCC) also ranges from -1 to +1 and measures an individual test item's ability to distinguish between those who know (and can apply) the content and those who do not. The closer the PBCC is to ONE, the better the test item's discrimination value.
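To make the "count to ONE" idea concrete, here is a minimal sketch in Python of how a KR20 and a Cronbach's alpha can be computed from a scored response matrix. This illustrates the standard textbook formulas, not the code that LXR, ParSystem, A+ Test Manager, or HESIware actually use, and the six-student exam data are hypothetical.

```python
# Minimal sketch of KR20 and Cronbach's alpha for a 0/1 scored exam.
# Rows are students, columns are test items (1 = correct, 0 = incorrect).
# Population variances (ddof=0) are used so the two formulas agree
# exactly on dichotomous data. All data below are hypothetical.
import numpy as np

def kr20(scores: np.ndarray) -> float:
    """Kuder-Richardson Formula 20 for dichotomously scored items."""
    k = scores.shape[1]                    # number of test items
    p = scores.mean(axis=0)                # proportion correct per item
    total_var = scores.sum(axis=1).var()   # variance of students' total scores
    return (k / (k - 1)) * (1 - (p * (1 - p)).sum() / total_var)

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha; identical to KR20 when every item is scored 0/1."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0).sum()   # sum of per-item variances
    total_var = scores.sum(axis=1).var()
    return (k / (k - 1)) * (1 - item_vars / total_var)

exam = np.array([[1, 1, 1, 1, 0],
                 [1, 1, 1, 0, 0],
                 [1, 1, 0, 0, 0],
                 [1, 0, 1, 0, 0],
                 [0, 1, 0, 0, 0],
                 [0, 0, 0, 0, 0]])
# The closer the value is to 1, the more internally consistent the exam.
print(f"KR20 = {kr20(exam):.2f}, alpha = {cronbach_alpha(exam):.2f}")
```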

28. Item difficulty (P-value) is only "black and white." If less than 30% of the group answers a test item correctly, action must be taken: give credit for more than one answer, nullify the test item (give credit for all choices), or delete the test item from the score. With a 4-choice multiple-choice test item, roughly a quarter of the group (25%) could answer the test item correctly BY CHANCE ALONE, which is why the 30% floor matters. When 90% or more of the group answers a test item correctly, do we nullify or delete the question? NO, we rejoice, telling ourselves what great teachers we are!!! However, the item may in fact be too easy. Mastery items: there is certain content that we may deem essential and may choose to include on an exam for the purpose of reinforcing this content (e.g., the signs and symptoms of shock). For these items, we do not care if ALL the students answer the item correctly. However, these items should be identified BEFORE administering the exam, and no more than 10% of the exam should consist of mastery test items. Testing is EXPENSIVE: (1) faculty development to know how to write test items; (2) faculty time to write the test items; (3) the cost of item analysis and test item banking software. If everyone answers all (or most of) the test items correctly, then a less expensive method of evaluation should be implemented. SEE QM, May 2005. PBCC: the standard is 0.20 for a good item, and a teacher-made exam should reach a KR20 of 0.70. Explain IDR: the difference between the high-scoring and low-scoring groups' P-values.
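As a companion sketch to the statistics just described, the P-value and the point-biserial discrimination index can be computed as below. The data and the corrected-for-overlap convention (correlating each item with the total of the other items) are my assumptions for illustration, not any vendor's implementation; the 30-90% flagging window comes from the slide above.

```python
# Minimal sketch of item difficulty (P-value) and the discrimination
# index (point-biserial correlation of each item with the rest of the exam).
import numpy as np

def item_difficulty(scores: np.ndarray) -> np.ndarray:
    """P-value per item: the fraction of students answering it correctly."""
    return scores.mean(axis=0)

def discrimination(scores: np.ndarray) -> np.ndarray:
    """Corrected point-biserial: correlate each 0/1 item with the total of the OTHER items."""
    out = []
    for j in range(scores.shape[1]):
        rest = np.delete(scores, j, axis=1).sum(axis=1)  # total score excluding item j
        out.append(np.corrcoef(scores[:, j], rest)[0, 1])
    return np.array(out)

exam = np.array([[1, 1, 1, 0],   # hypothetical responses: rows are students,
                 [1, 1, 0, 1],   # columns are items (1 = correct)
                 [1, 1, 1, 0],
                 [1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [1, 0, 0, 0]])
for j, (p, d) in enumerate(zip(item_difficulty(exam), discrimination(exam)), start=1):
    flag = "REVIEW" if not 0.30 <= p <= 0.90 else "ok"   # 30-90% window from the slide
    print(f"item {j}: P-value {p:.2f} ({flag}), PBCC {d:+.2f}")
```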

29. Review slide.

30. Validity is more difficult to measure than reliability. Describe validity studies for the E2: follow-up on whether or not students predicted to pass NCLEX actually did pass. In terms of schools of nursing, exams are valid if they measure students' ability to practice nursing. Though some believe NCLEX outcomes are such a measure, a better measure is graduate follow-up studies. The NCLEX measures a minimal level of competence. A graduate follow-up study, completed by the graduate as well as the employer, provides a better (IMHO) measure of the effectiveness of the program.

31. Test blueprints help to determine whether you are examining what the test is supposed to examine. Database programs allow you to sort according to content (and item analysis data) so that you can easily retrieve test items that measure the content being evaluated.

32. Each test item is categorized by as many categories as the faculty choose. The number of test items on an exam in each of these categories is tabulated to derive the test blueprint.

33. The test blueprint can be printed, and this report should be stored along with the exam. Test items can be categorized in numerous ways: for example, nursing process, client needs, specialty areas. The test blueprint calculates the percentages and means of these categories within the exam, based on item analysis data from previous administrations of these test items, which are stored in the database. As test items are added to and removed from an exam, these data change.
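To illustrate the tabulation just described, here is a minimal sketch of deriving blueprint percentages from categorized items. The item IDs, field names, and category values are hypothetical and do not reflect HESIware's actual data model.

```python
# Minimal sketch: tabulate a test blueprint from categorized exam items.
from collections import Counter

exam_items = [  # hypothetical items, each tagged by faculty-chosen categories
    {"id": "GI-014",  "nursing_process": "Assessment",     "client_needs": "Physiological Integrity"},
    {"id": "GI-022",  "nursing_process": "Implementation", "client_needs": "Physiological Integrity"},
    {"id": "SAF-003", "nursing_process": "Planning",       "client_needs": "Safe & Effective Care Environment"},
    {"id": "PSY-108", "nursing_process": "Evaluation",     "client_needs": "Psychosocial Integrity"},
]

def blueprint(items, category):
    """Count how many exam items fall into each value of one category."""
    counts = Counter(item[category] for item in items)
    for value, n in counts.most_common():
        print(f"{value}: {n} items ({100 * n / len(items):.0f}%)")

blueprint(exam_items, "client_needs")     # blueprint by client needs
blueprint(exam_items, "nursing_process")  # blueprint by nursing process
```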

34. Use of software for test analysis is absolutely essential. This is the HESIware IA report. NOTE: the identifying data are presented: the number of students and the number of test items; the highest, lowest, and median score; the SD; the mean score; and the KR20. Test items can be presented in a scrambled format, which means that the items and the answer choices are randomly scrambled. However, this report puts them back into the order in which the exam was written, and the answer choices are presented in the same way they appear in the item banks/database. The display name helps to identify the test item. The percent correct is the "P-value": how many students answered the item correctly (remember the parameters: 30-90%). Next come the % of the LOWER-scoring students on this exam who answered the test item correctly and the % of the UPPER-scoring students who did. The discrimination index (PBCC): we like to see these at 0.15 and above. The response frequencies: the highlighted choice is the answer; remember to note "non-discriminating choices," those that no one chose.
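For the upper/lower group percentages and the response frequencies on that report, a minimal sketch follows. The 27% upper/lower grouping is a common psychometric convention that I am assuming here, not necessarily what HESIware uses, and all data are hypothetical.

```python
# Minimal sketch of two item analysis report columns: % correct among the
# lower- and upper-scoring groups, and the response-frequency tally.
import numpy as np

def upper_lower_percent(correct: np.ndarray, totals: np.ndarray, frac: float = 0.27):
    """% correct on one item among the lowest- and highest-scoring students."""
    n = max(1, int(len(totals) * frac))
    order = np.argsort(totals)              # students sorted by total exam score
    lower, upper = order[:n], order[-n:]
    return correct[lower].mean() * 100, correct[upper].mean() * 100

def response_frequencies(responses, choices="ABCD"):
    """Tally each answer choice; choices no one picked are non-discriminating."""
    return {c: responses.count(c) for c in choices}

# Hypothetical data: each student's chosen letter on one item (key = "B")
# and each student's total score on the exam.
responses = list("BBABDBBABB")
totals = np.array([48, 52, 30, 55, 28, 50, 47, 33, 51, 49])
correct = np.array([r == "B" for r in responses], dtype=float)
print(upper_lower_percent(correct, totals))  # low upper-group % flags a weak item
print(response_frequencies(responses))       # note choice "C" was never chosen
```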

35. FOUR CRITERIA FOR A CRITICAL THINKING TEST ITEM. I usually ask for a drum roll (dramatic).

36. The NCLEX asks application- and analysis-level questions. Morrison: if you can find the answer to a test item on one page of a textbook, it is not a critical thinking test item; it is KNOWLEDGE-based. The questions should be clinically based and ask the student to solve a clinical problem. When you start administering exams that contain critical thinking test items, you can no longer teach by regurgitating the textbook in class notes, Xeroxing the notes and putting them into the learning resource lab for students to copy, and then administering exams that contain test items from those notes. Being the "sage on the stage" does not promote critical thinking, and certainly will not prepare the students to answer critical thinking test items. Pass out the sample cards ("What if, what then?" cards from Kathleen Walsh Free) as an example of a teaching method that promotes critical thinking.

37. Stems with questions such as these increase the likelihood of writing a CT test item. Also, if the choices are close together, that too makes a test item highly discriminating. Think of the TV show "Who Wants to Be a Millionaire?" The lower-value items (for $500 to $1,000) might ask: Which person was the 18th President of the U.S.?
- Walt Disney
- Adolf Hitler
- George W. Bush
- Abraham Lincoln
These choices make it easy to choose the answer, since two of the four are not even presidents of the U.S. and George W. Bush is the current president. A more discriminating question, based on the choices provided, might ask: Who was the fourth President of the U.S.?
- Thomas Jefferson
- James Monroe
- John Adams
- James Madison
All choices are U.S. Presidents, and all are early presidents, making it a more discriminating question.

38. These are format "rules of thumb." In terms of importance, they are only about 10% of writing CT test items, but they are easy to implement. The most important thing you can do to develop good test items is to conduct a scientific item analysis of every exam that is administered and store these data so they can be used when choosing test items for the next exam.

39. Names can set up a hidden bias. If you ask about "David" and someone has a child named David, it might set up a hidden bias when answering the test item. One school used faculty names for the clients; now that really can set up a hidden bias! Multiple-multiples (A and B; B and C; etc.): get rid of any of these; they are BAD test items. Remember, all nurses are not "she," all babies are not "he," and all physicians are not "he." Also, use "healthcare provider" when possible, rather than "physician." Parsimonious writing is easier to read and, most importantly today, easier to fit on one computer screen. Simply ask, "Which intervention should the nurse implement?" rather than, "Which of the following interventions should the nurse implement?" The answer to one test item should not be dependent on correctly answering another test item; that is "double jeopardy."

40. Questions make it clear what you are asking; completions are not as clear. Do not make up words or provide a distracter that you know no one will choose; that decreases the discrimination value of the test item. We saw this concept in the question with "approximately": clean up the items so that you are not repeating words in the responses that could be put into the stem.

41. Why rationales? I know, just one more thing to do! Morrison: "You are going to hate me for a year, and love me thereafter." They are hard to start writing, but:
- They are learning experiences for students, in that they should describe why a choice is correct and why the other choices are incorrect.
- They are faculty-friendly, because they will save you countless hours justifying test items to students.
- They are legally defensible; the student cannot say, "We do not get to see our exams." You should provide students with a printout of their scores and have them sign the printout, which should be returned after the test review and placed in their permanent record.
How to do a test review (from Morrison and Free, JNE, 2000): tape test items and rationales on desks and allow students to rotate through the classroom (NO paper or pencils permitted). Provide them with their test summary page and have them sign it when they leave the room; keep it in the permanent record file so you can show that they had the opportunity to review the exam, and if it is not signed, they did not show up for the test review. With HESIware, test review can be done at the computer. Faculty can choose whether the student sees rationales immediately after the exam or at a later time. Students should NEVER be allowed to argue with faculty about a test item; see the test item protest form in your packets. You need 3 references because it promotes CT, but also because, if the answer cannot be found on one page of one text, it is likely to take 3 resources to find the related content.

42. Morrison: "All of the above" and "None of the above" are faculty cheaters. These are used when you cannot think of enough distracters or when you want to provide a listing and not have to write any distracters. "All except," even when underlined, bolded, and italicized, can be missed and cause the student to answer incorrectly. "All except" is frequently used so that the faculty only has to write one distracter, and the other choices are some type of listing. If one response is 5-10 and the next is 7-8, then these two responses overlap. "Present in a logical order" and "vary the correct answer" are not important if you randomize the choices, which most test administration programs allow.

43. This is the MOST IMPORTANT housekeeping rule of all. Develop a writing policy for your school that outlines such things as:
- Begin all choices with a capital letter.
- End all choices with a period.
- Describe age as "6-year-old," with hyphens between the words to make the description a noun, e.g., "A 6-year-old can be expected to use which sentence structure?"
Edition 1 of the CT and Test Item Writing book, p. 39, describes several of the housekeeping policies for HESI item writers. This listing has been extensively rewritten, and Edition 2 will describe the most recently developed policy. MOST important: the faculty should examine this issue, write their policy, and give this policy to all current faculty and to each new faculty member when they are hired. (It is also a good idea to provide new faculty with the CT and Test Item Writing book.)

44. This question is NOT a CT test item: it does not have a rationale, is not written at the application cognitive level or above, does not require a high level of discrimination to answer, and does not require multilogical thinking to answer.

45. Here is the test item edited. It does have a rationale, on the next screen. It is written at the application level. It requires a high level of discrimination to choose the right answer from among plausible choices. It does require multilogical thinking to answer: the student must know that a shuffling gait (which puts the client at risk for falls) is a sign of Parkinson's disease, and must know what nursing care is most important for this client: promotion of safety. Throw rugs pose a safety hazard.

46. See how the FIRST sentence brings the non-CT test item into the rationale and then goes on to talk about how this FACT is used in planning care for this client. Morrison: "Drag those knowledge-based questions into the rationale and write a clinically oriented question based on these facts."

47. i-clicker Question: Which element of a Critical Thinking Test Item do you feel is the most challenging to develop?
- Stem
- Responses
- Rationale

48. Morrison: No matter how good a test item is, it can always be made better. This process is referred to as "tweaking."

49. Alternate Test Item Formats. The National Council of State Boards of Nursing (NCSBN) currently identifies five types of alternate item formats:
- Multiple choice-multiple answer
- Fill-in-the-blank test items
- Hot spots: identify an area on a picture or graphic
- Drag and drop (ranking)
- Chart exhibit
8% of NCSBN test items are alternate format items. Multiple choice-multiple answer (MCMA) items are NOT multiple/multiple items; those are the ones that ask "A and B," "B and C," "A, B, and C," "All of the above." Those types of items should NEVER be used; they are confusing. MCMA items should always have more than 4 choices; 6 are preferred, because a 4-choice question might be confused with a multiple-choice single-answer item and the student might choose only one answer. Fill-in-the-blank answers are always numbers; describe them with sentences such as: (Enter numerical value only.) (Enter numerical value only. If rounding is needed, round to two decimal places at the end of the calculation.) (Enter numerical value only. If rounding is needed, round to the closest whole number at the end of the calculation.)
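To illustrate why those parenthetical instructions insist on rounding only at the end of the calculation, here is a tiny sketch; the order and tablet strength are hypothetical numbers chosen purely to show the rounding rule.

```python
# Minimal sketch of the "round at the end" rule for fill-in-the-blank items.
# All values are hypothetical.
ordered_mg, on_hand_mg = 125, 75     # hypothetical order and tablet strength
tablets = ordered_mg / on_hand_mg    # 1.666... tablets
print(round(tablets, 2))             # 1.67 <- rounded once, at the end
print(round(round(tablets, 1), 2))   # 1.7  <- premature rounding drifts the answer
```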

50. Hot Spot

51. Ranking

52. Fill-in-the-blank

53. Calculator

54. Multiple Choice-Multiple Answer

55. Chart Exhibit

58. Home-Grown Alternate Format Items
- Standardized testing
- Computerized testing
- Skills lab practice
- Add to lecture
- Clinical simulation

59. When instilling a client's eye drops, which technique(s) should the nurse include?
1. Apply gentle pressure over the inner canthus.
2. Apply firm pressure over the outer canthus.
3. Hold the dropper six inches above the eye.
4. Dab the cornea with a sterile cotton swab to absorb excess moisture.
5. Ask the client to look up while placing the drop.
6. Carefully drop the medication on the cornea.
A. 1, 2, 3, and 6.
B. 4 and 5.
C. 3 and 6 only.
D. 1 and 5 only.
E. All of the above.

60. When instilling a client's eye drops, which technique(s) should the nurse include? (Select all that apply.)
1. Apply gentle pressure over the inner canthus.
2. Apply firm pressure over the outer canthus.
3. Hold the dropper six inches above the eye.
4. Dab the cornea with a sterile cotton swab to absorb excess moisture.
5. Ask the client to look up while placing the drop.
6. Carefully drop the medication on the cornea.

61. Changing Multiple-Choice Questions. Which action should the nurse complete first when implementing prescriptions for a client newly admitted with urosepsis?
A. Administer the initial dose of vancomycin.
B. Obtain a urine specimen for culture.
C. Notify lab personnel that peak and trough levels are needed.
D. Insert an indwelling urinary catheter.
Answer: D

62. To Drag-and-Drop (Ranking). In which sequence should the nurse implement the prescriptions for a client newly admitted with urosepsis?
A. Administer the initial dose of vancomycin.
B. Obtain a urine specimen for culture.
C. Notify lab personnel that peak and trough levels are needed.
D. Insert an indwelling urinary catheter.
Answer: D-B-A-C

63. Innovative Item Formats. Test items include:
- Identification on a picture, graph, or chart
- Graphics
- Interaction
- Audio
- Video
- Animation
- Decision task
- Item sets

66. Home-Grown Innovative Format Items
- Standardized testing
- Skills lab practice
- Safety lab
- Add to lecture
- Photos of safety lab
- Clinical simulation

67. Divide into groups and give this assignment. Hand out TWO transparencies to each group and ask why you are giving them TWO. Someone will always say, "In case we mess up the first one." NO: it is so that they have enough room to write their rationales, which should be presented along with their test item. REMEMBER the four criteria for writing a CT test item: HAVE a rationale!!!

68. i-clicker Question: Will you implement the "Faculty Activity" (write a critical thinking test item, present the test item to the group, and evaluate the test item)?
- No way, not in this lifetime.
- I'll discuss this with the faculty.
- Yes, at the next faculty meeting.
- I have already identified 5 questions for which I want immediate input on revision.

69. Thank You!! Mary J. Yoho, PhD, RN, CNE, m.yoho@elsevier.com. Evolve Reach: Powered by HESI, the future of testing™

