Chapter 12 Evaluating Instruction


Presentation Transcript


  1. Chapter 12 Evaluating Instruction Radford University EDUC 615 Group 7 Spring 2011

  2. ASSESSING INSTRUCTION Evaluation of instruction, evaluation of teacher performance, and evaluation of the curriculum are all interrelated. Instruction is evaluated through student achievement; evaluation of instruction is thus evaluation of the effectiveness of the instructor. Examples of instruction evaluation questions: • Do test items relate to objectives? • Are the instructional objectives clear? • Does the teacher present the material clearly?

  3. ASSESSING INSTRUCTION Evaluation of instruction is also evaluation of the curriculum: • Curriculum evaluation reveals how well students achieve in the areas that are assessed. • Curriculum evaluation indicates whether the content has been adequately covered.

  4. ASSESSING INSTRUCTION There are several curriculum dimensions or concerns that must be evaluated in addition to the assessment of student achievement. Sample questions: • Was the subject matter the right choice to begin with? • Was the content selected wisely? • Was the content relevant? • Does it meet student and social needs? • Are the profession and the public satisfied with it?

  5. AN ERA OF ASSESSMENT Evaluation, assessment, measurement, testing, and accountability are all words frequently used in both public and professional circles. Evaluation and assessment are used interchangeably. Measurement and testing are both ways of gathering assessment data: • Measurement - the means of determining the degree of achievement of a particular competency. • Testing - the use of instruments for measuring achievement. • Accountability - the state of being liable for students' academic progress.

  6. AN ERA OF ASSESSMENT The United States is now in an era of assessment. Edward L. Thorndike conceptualized the first standardized tests; the GRE, SAT, and ACT are examples of well-known standardized tests in America. The No Child Left Behind Act of 2001 allowed individual states to set academic standards. The state testing it requires is often called "high-stakes testing" because these examinations can result in negative consequences, such as grade retention and failure to graduate from high school.

  7. AN ERA OF ASSESSMENT Henry Giroux stated that "testing has become the new ideological weapon in developing standardized curricula, a weapon that ignores how schools can serve populations of students that differ vastly with respect to cultural diversity, academic and economic resources and classroom opportunities." The National Center for Fair and Open Testing advocated replacing standardized testing with multiple forms of "high quality classroom assessments that reflect the various ways children really learn." Numerous educators dislike testing, both standardized and non-standardized; they feel these tests impose a predetermined curriculum and are destructive to students' self-concepts.

  8. Stages of Planning for Evaluation (continued) • Three Phases of Evaluation • Preassessment - takes place before instruction; allows teachers to determine whether or not students have the prerequisite skills needed. • Formative Evaluation - takes place during instruction; consists of formal and informal techniques; enables teachers to monitor their instruction so that they may stay on course. • Summative Evaluation - takes place after instruction; its major purpose is to determine whether the students have mastered the instruction; an effective teacher uses the results to revise his or her methods and program.
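To make the decision flow concrete, here is a minimal sketch in Python (the cutoffs, scores, and function name are hypothetical illustrations, not from the text) of how the three phases might drive a teacher's decisions for one student:

PREREQ_CUTOFF = 0.6   # hypothetical minimum preassessment score to begin the unit
MASTERY_CUTOFF = 0.8  # hypothetical summative score treated as mastery

def plan_instruction(preassessment, formative_checks, summative):
    # Preassessment: before instruction, verify prerequisite skills.
    if preassessment < PREREQ_CUTOFF:
        return "reteach prerequisite skills before beginning the unit"
    # Formative evaluation: during instruction, monitor and adjust course.
    adjusted = any(score < MASTERY_CUTOFF for score in formative_checks)
    # Summative evaluation: after instruction, judge mastery and use the
    # result to revise methods and program.
    if summative >= MASTERY_CUTOFF:
        return "mastery achieved" + (" (after mid-unit adjustment)" if adjusted else "")
    return "mastery not achieved; revise methods and reteach"

print(plan_instruction(0.7, [0.55, 0.75], 0.85))
# prints: mastery achieved (after mid-unit adjustment)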

  9. NORM-REFERENCED MEASUREMENT AND CRITERION-REFERENCED MEASUREMENT • Norm-referenced and criterion-referenced measurements are two different types of testing measurements used by instructors. • Norm-Referenced Measurement: With norm-referenced measurement, a student's individual performance is compared with the performance of the other students who took the test. Standardized tests are a commonly used example of norm-referenced measurement. When classroom teachers use this type of measurement, students are graded in relation to the performance of the group on that test: grades are assigned relative to the middle grade, so a certain number of students will fall above and a certain number below it. • Criterion-Referenced Measurement: This type of measurement considers the individual performance of the student. Each student's success depends on his or her mastery of the objectives being tested. When this type of measurement is used, students are graded on their own achievement, regardless of how the rest of the class has scored.
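To make the contrast concrete, here is a minimal sketch in Python (the scores, grade bands, and cutoffs are hypothetical illustrations, not from the text) that grades the same set of scores both ways: norm-referenced relative to the group's distribution, and criterion-referenced against fixed mastery cutoffs.

# Hypothetical sketch: the same scores graded two ways.
from statistics import mean, stdev

scores = [55, 62, 70, 74, 78, 81, 85, 88, 93, 97]

def norm_referenced(score, group):
    # Grade by distance from the group mean in standard deviations,
    # so a student's grade depends on how everyone else performed.
    z = (score - mean(group)) / stdev(group)
    if z >= 1.0:
        return "A"
    if z >= 0.3:
        return "B"
    if z >= -0.3:
        return "C"
    if z >= -1.0:
        return "D"
    return "F"

def criterion_referenced(score):
    # Grade against fixed mastery cutoffs, regardless of how the
    # rest of the class scored.
    for cutoff, grade in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]:
        if score >= cutoff:
            return grade
    return "F"

for s in scores:
    print(s, norm_referenced(s, scores), criterion_referenced(s))

Note how the same work can earn different grades under the two schemes: with these illustrative numbers, a score of 70 falls well below this group's mean (a D on the curve) but clears the fixed 70-point cutoff (a C).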

  10. NORM-REFERENCED MEASUREMENT AND CRITERION-REFERENCED MEASUREMENT • Comparison of the Two Types of Measurement: W. James Popham identified the "most fundamental difference" between these two measurements as "the nature of the interpretation that is used to make sense out of students' test performance." The Instructional Model, which is discussed in the text, leans more toward criterion-referenced measurement. The text does say that norm-referenced measurement can be used in the classroom, though only in certain circumstances; it should not be used to ensure that some students receive lower grades so as to produce an even distribution of grades.

  11. EVALUATION IN THREE DOMAINS • Psychomotor Domain: "Objectives in this domain are best evaluated by actual performance of the skill being taught." (Oliva 380) To pass a test in this domain, students must meet specified performance criteria. http://www.assessment.uconn.edu/docs/LearningTaxonomy_Psychomotor.pdf • Cognitive Domain: Evaluation in this domain most often takes the form of written tests. The three main instructional levels of this domain are fact, understanding, and application. The model of instruction normally used in this domain is direct instruction (lecture), and the evaluation consists of subjective and objective test questions. "The teacher should seek to evaluate, when appropriate, student achievement in all six levels of the Bloom taxonomy of the cognitive domain, using both essay and objective test items." (Oliva 382) • Bloom's Taxonomy • Affective Domain: "Student achievement in the affective domain is difficult and sometimes impossible to assess. Attitudes, values, and feelings can be deliberately concealed; learners have the right to hide their personal feelings and beliefs, if they choose." (Oliva 384) • Krathwohl's Taxonomy

  12. PERFORMANCE-BASED ASSESSMENT Ideas to remember concerning assessment: • Students can demonstrate achievement both during and at the end of instruction through means other than typical examinations. • A skilled instructor can tell a good deal about pupils' success just by observing their classroom performance. • Alternative techniques of evaluation (other than examinations) include student logs, reports, essays, notebooks, simulations, demonstrations, construction activities, self-evaluation, and portfolios. (Oliva, 385-386)

  13. Alternative Assessments • Popular on the current scene of alternative assessments is the use of portfolios as a form of measurement. • Portfolios are systematic collections of student work over time. • These collections help students and teachers assess growth and progress over time. • It is essential that students develop a sense of ownership of their portfolios so they can measure their progress and see where they need to keep working to be successful. • Portfolios can exemplify achievement in all three domains of learning: cognitive, affective, and psychomotor. Examples of work may include individual classwork assignments, reports, poems, letters, reading logs, and audiotape recordings. (Oliva, 386)

  14. Virginia Alternative Assessments: • Below are links to information concerning the alternative assessments currently administered in Virginia. • Alternative and Alternate Assessment Administrator's Manual: • http://www.doe.virginia.gov/testing/alternative_assessments/administrators_manual_2010_11.pdf • Virginia Alternate Assessment Program (VAAP): • http://www.doe.virginia.gov/testing/alternative_assessments/vaap_va_alt_assessment_prog/index.shtml • Virginia Grade Level Alternative (VGLA): • http://www.doe.virginia.gov/testing/alternative_assessments/vgla_va_grade_level_alt/index.shtml • Virginia Substitute Evaluation Program (VSEP): • http://www.doe.virginia.gov/testing/alternative_assessments/vsep_va_substitute_eval_prog/index.shtml • Virginia Modified Achievement Standards Test (VMAST): • http://www.doe.virginia.gov/testing/alternative_assessments/vmast_va_mod_achievement_stds_test/index.shtml

  15. Alternative Assessments • Alternative assessment measures may begin to include practices that could reduce or eliminate homework and change grading practices. • Marzano took the position that "a single grade or a percentage score is not a good way to report achievement in any subject area, because it simply cannot present the level of detailed feedback necessary for effective learning." • Alternative assessments may change the more traditional forms of classroom assessment, but they are unlikely, in the foreseeable future, to replace the use of standardized and teacher-made tests of student achievement. (Oliva, 387)

  16. Four Rules of Assessment: Teachers seeking to improve their classroom assessment skills might follow these four rules of assessment by Popham: • Use only a modest number of major classroom tests, but make sure these tests measure learner outcomes of indisputable importance. • Use diverse types of classroom assessments to clarify the nature of any learning outcome you seek. • Make students' responses to classroom assessments central to your instructional decision making. • Regularly assess educationally significant student affect - but only to make inferences about groups of students, not individual students. (Oliva, 387-388)

  17. ASSESSMENT INITIATIVES FROM BEYOND THE CLASSROOM • Assessment on a scale broader than the classroom should also be considered when visiting the topic of instruction evaluation: we will now consider district, state, national, and international assessments. DISTRICT ASSESSMENTS • Beginning in the 1980s, many districts began district-wide assessment in response to demands for accountability and increased "criticism over both real and perceived deficiencies." • These assessments are often administered at the end of each marking period. (Oliva, 388)

  18. STATE ASSESSMENTS • In the past twenty years, states have received increased attention with regard to assessment. • State legislators and state departments of education were motivated to establish minimum standards for test performance, partly out of disappointment with national and international assessment results and partly as a means of holding teachers and administrators accountable. • The No Child Left Behind Act of 2001 prompted states to develop and administer state assessments. • An exception to this movement is the Nebraska School-based Teacher-led Assessment Reporting System (STARS), which allows the use of district-designed assessments in place of state tests. Nebraska's system consists of a portfolio of classroom tests, district tests, a state writing exam, and one nationally recognized test; it was approved for NCLB in 2006. (Oliva, 389)

  19. NATIONAL ASSESSMENTS • SAT: • Scores on the verbal portion of the SAT dropped from 1952 to 1982; scores on the math portion declined before 1991. • Since then (1982 for verbal and 1991 for math), scores on both portions have risen. • The rise in scores may be attributed to improvements in curriculum, instruction, and technology. • In early 2005, the SAT was lengthened: analogy questions were dropped and a writing test was added. • Forty-eight percent of high school graduates took the SAT in 2006; thirty-eight percent of those tested were minority students. • The National Center for Fair and Open Testing cites inaccuracy, bias, and susceptibility to coaching as flaws in the SAT. (Oliva, 390)

  20. ACT: • Forty percent of all seniors graduating in 2006 took the American College Testing Program test. • Thirty-six percent of these graduates chose to take the optional writing test. • Between 1970 and 1982, ACT scores declined. • Scores have generally increased since 1984. • National Assessment of Educational Progress (NAEP): • Operated with federal funds since it began. • Known as the "Nation's Report Card." • Policies are set by the National Assessment Governing Board (NAGB), composed of twenty-six members appointed by the U.S. Secretary of Education. • Testing areas include the arts, civics, geography, U.S. history, math, reading, science, writing, and knowledge and skills in using the computer. • Foreign language and world history tests are in development. (Oliva, 390-391)

  21. NAEP tests from 40,000 to 150,000 students in 900 to 2,500 schools (depending on the number of testing disciplines). • "Report cards" showing national and state results are issued to the public after each assessment. • National scores of students in grades four, eight, and twelve are reported by gender, race/ethnicity, region of the country, parents' education, type of school, type of location, and free/reduced-price school lunch program participation. • Scores on NAEP assessments have been inconsistent since NAEP began testing in 1969. • Gaps between the scores of whites and those of other racial/ethnic groups have narrowed over the past twenty years. • Data are reported only for groups (as opposed to individual schools) to prevent embarrassment to particular schools, although some districts choose to release this information. • Opponents of national assessments argue that testing will result in the development of a national curriculum. • There is also concern that NAEP testing will fulfill the role of "auditor" with regard to NCLB and state assessments. (Oliva, 391-392)

  22. International Assessments • Two major international assessment efforts: • International Assessment of Educational Progress (IAEP) • International Association for the Evaluation of Educational Achievement (IEA)

  23. International Assessment of Educational Progress: IAEP • The first IAEP took place in 1988. • Conducted by the Educational Testing Service. • Funded by the U.S. Department of Education and the National Science Foundation. • Five countries were involved: Ireland, Korea, Spain, the United Kingdom, and the United States; four Canadian provinces also participated. • Assessed proficiency in math and science among 13-year-olds. • Results: the United States' average was below the other countries' averages in math and science, showing that U.S. students' achievement is below a desirable level.

  24. Second IAEP Assessment • Took place in 1991 and again assessed science and math. • Included fourteen countries in assessing the achievement of 9-year-olds. • Included twenty countries in assessing the achievement of 13-year-olds. • Results: Korea and Taiwan rated at the top in both math and science; the United States came in third with 9-year-olds and ranked seventh with 13-year-olds.

  25. International Association for the Evaluation of Educational Achievement:  IEA • Funded by the U.S. Office of Education and the Ford Foundation • Studies have covered achievements in math, science, literature, reading comprehension, foreign languages (English and French), and civic education • Surveyed around 250,000 students and 50,000 teachers in 22 countries

  26. International Mathematics Studies • First International Mathematics Study • 1964 • Surveyed more than 130,000 students • Covered more than 5,000 schools • 12 countries • Second International Mathematics Study • Funded by the National Science Foundation and the U.S. Department of Education • 1981-1982 • Studied 12,000 eighth and twelfth graders enrolled in college-preparatory programs • 20 countries

  27. Third International Mathematics and Science Study: TIMSS • Conducted in 1995. • Coordinated by the TIMSS & PIRLS International Study Center at Boston College's Lynch School of Education. • Tested over 500,000 students in 41 countries. • Results: • U.S. 4th graders were above the international average in science; only Korea outscored them. • U.S. 4th graders scored above average in math. • U.S. 8th graders were above average in science but below average in mathematics. • U.S. 12th graders were below international averages in both science and math.

  28. Third International Mathematics and Science Study-Repeat: TIMSS-R • Conducted in 1999. • Tested eighth graders in 38 countries. • Found math and science scores were lower on the 1999 test than on the 1995 test. • Results: • U.S. 8th graders in math outperformed 17 nations and scored lower than 14 nations. • U.S. 8th graders in science outperformed 18 nations and scored lower than 14 nations.

  29. Third International Mathematics and Science Study: TIMSS 2003 • Tested 4th and 8th graders in math and science. • Over 40 countries were involved. • Results for mathematics: • U.S. students showed improvement from 1995 to 2003. • Singapore's students came out on top at both grade levels. • Hong Kong SAR, Japan, and Taipei followed Singapore among 4th graders. • Korea, Hong Kong SAR, and Taipei followed Singapore among 8th graders.

  30. Progress in International Reading Literacy Study: PIRLS • 1991 Study Results: • U.S. 9-year-olds ranked near the top in the International Association for the Evaluation of Educational Achievement study on reading literacy. • 32 countries were involved. • U.S. 14-year-olds ranked second, just behind France.

  31. PIRLS 2001 • Tested 4th graders in 34 countries. • Results: • U.S. fourth graders took 9th place. • They scored above the international average on the combined literacy scale. • They outperformed 23 of the 34 countries. • Sweden was 1st, the Netherlands 2nd, and England 3rd.

  32. PIRLS 2006 and 2007 • 2006: In the Intel Science and Engineering Fairs, administered by Science Service, American high school students won top Intel Foundation Young Scientist Awards. • 2007: A survey conducted by Roper Public Affairs for the National Geographic Society found that young Americans struggle with geographic literacy.

  33. Difficulties in Comparing Students Across the International Spectrum • Differences among nations that can affect scores: • Curricula • Instructional Strategies • Political and Social Conditions • Length of the School Year • Time Allocated to Studies in School and at Home • Number of Pupils per Teacher • Motivation of Students • Dedication of Parents to Education • Traditions

  34. REALITY STATEMENTS Group Seven has a strong understanding of how the school administrator affects the instructional program of his or her school. We understand that assessment is a continuous process: it not only comes at the end of instruction but can also happen before instruction begins. When evaluating instructional programs, we mostly think of standards-based tests. However, we understand that when evaluating an instructional program, many factors come into play. We must consider the community in which we teach, our students' behaviors, the various learning styles of our students, the available resources, and the required curriculum that we teach. We understand evaluation is a continuous process of learning for both students and instructors.

  35. In order to promote the success of each student we must: • Continue to use all forms of assessment (preassessment, formative evaluation, and summative evaluation) to guide our classroom instruction. • Continue to understand the three domains of evaluation (psychomotor, cognitive, and affective). • Continue to use a variety of assessment techniques in our classrooms. • Continue to develop and provide opportunities for our students to be self-directed, lifelong learners. • Continue to administer district assessments, such as benchmark testing, and state assessments (SOLs) as prescribed by our individual divisions and the Commonwealth of Virginia.

  36. In order to promote the success of each student we must: • Continue to have a clear understanding of the stated learning objectives. • Continue to provide new and innovative ways to teach the learning objectives. (Teaching 21st Century Students video) • Continue to assess student achievement and provide data-driven instruction to our students. • Continue to understand that data from written tests are not the only data we use to drive instruction. • Continue to understand that although we may not agree with all required testing, we must use the data to provide a better instructional environment for our students.

  37. GROUP PARTICIPATION • Assessing Instruction, An Era of Assessment, and Reality Statements - Melissa Ray • Stages of Planning for Evaluation and Reality Statements - Victoria Florey • Evaluation in Three Domains and Reality Statements - Trudy Cobler and Christian Miller • Norm-Referenced Measurement and Criterion-Referenced Measurement and Reality Statements - Sarah Mercer • Assessment Initiatives from Beyond the Classroom and Reality Statements - Kelly Russell and Jerad Ward • Alternative Assessment and Reality Statements - Melissa Roark
