Race to the Top Assessment Program
General & Technical Assessment Discussion

Jeffrey Nellhaus

Deputy Commissioner

January 20, 2010

Through-Course Summative Assessment System


Implications for Curriculum

A through-course summative assessment system includes multiple assessment components administered periodically over the course of the school year. Student performance on each component is aggregated to produce summative results.

  • This type of assessment system will require states/consortia to identify the standards assessed by each component, the component exam schedule, and whether LEAs must administer the components in a particular sequence
  • This will require significant changes in local curricula, because most, if not all, state assessment programs do not currently dictate a particular instructional sequence
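As an illustration of the aggregation step described above, the sketch below combines component scores into one summative result using a weighted average. The component names, weights, and score scale are invented for illustration; the actual aggregation method would be specified by each state or consortium.

```python
# Hypothetical sketch: aggregating through-course component scores into a
# single summative result. Component names, weights, and scores are invented;
# a real system would define these in its assessment design.

def summative_score(component_scores, weights):
    """Weighted average of component scores (each on a 0-100 scale)."""
    if set(component_scores) != set(weights):
        raise ValueError("every component needs a weight")
    total_weight = sum(weights.values())
    return sum(component_scores[c] * weights[c] for c in component_scores) / total_weight

# Example: three components administered across the school year.
scores = {"fall": 72.0, "winter": 80.0, "spring": 88.0}
weights = {"fall": 0.25, "winter": 0.25, "spring": 0.50}

print(summative_score(scores, weights))  # 82.0
```

How the weights are chosen (and whether later components count more) is exactly the kind of design decision the bullet above asks applicants to specify.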
Through-Course Assessments: Construct Validity


Applicants should be asked to describe

The Whole is Greater Than the Sum of the Parts

Proficiency means going beyond demonstrating a basic grasp of individual standards or groups of closely related standards; it includes applying multiple standards from any aspect of the content area to solve complex problems.

  • Their concept of proficiency and their approach to its measurement using the through-course summative assessment system, indicating:
    • The standards that will be assessed by each component exam
    • How each component will address standards assessed previously
    • How individual test items will address multiple, as well as single, standards
Through-Course Assessments: External Validity


Applicants should be asked to describe

The exams will need to measure and report “readiness”

An important measure of the external validity of the through-course assessment system will be the extent to which summative results for each content area accurately predict whether students are on track/ready for college or a career

  • How they plan to report the results of each component exam
  • How they plan to aggregate component results, including implications the plan will have for item development, and provide a rationale for the method selected
  • How they plan to determine the summative score on each exam that predicts readiness; and how those scores will be validated over time
Through-Course Assessments: Reliability & Comparability


Applicants should be asked to describe

The level of reliability needed will depend on reporting plans and intended uses of the results

High levels of reliability are required for accountability uses

Comparability requires high levels of reliability and standardization of all elements of the exams

  • How they will achieve a level of reliability that adequately supports their reporting plans and planned uses of results
  • The extent to which their plans require standardized test administration within and across schools and how it will be achieved
  • How they plan to establish high levels of reliability and accuracy in the scoring of constructed response questions within and across years, whether scored externally by contractors or locally by teachers
  • Preliminary plans for equating results across years
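To make the reliability discussion concrete, the sketch below computes Cronbach's alpha, one common internal-consistency index, from an items-by-students score matrix. The slide does not prescribe a particular reliability statistic, and the scores here are invented for illustration.

```python
# Illustrative only: Cronbach's alpha as one possible reliability index.
# alpha = (k / (k - 1)) * (1 - sum of item variances / variance of totals)
from statistics import pvariance

def cronbach_alpha(item_scores):
    """item_scores: one inner list per item, aligned by student.
    Returns alpha (at most 1; higher = more internally consistent)."""
    k = len(item_scores)
    item_vars = sum(pvariance(item) for item in item_scores)
    totals = [sum(student) for student in zip(*item_scores)]
    return (k / (k - 1)) * (1 - item_vars / pvariance(totals))

# Invented scores: 3 items scored for the same 5 students.
items = [
    [2, 3, 3, 4, 5],
    [1, 3, 4, 4, 5],
    [2, 2, 3, 5, 5],
]
print(round(cronbach_alpha(items), 3))  # 0.936
```

An accountability use would typically demand a substantially higher bar (and additional evidence, such as rater agreement on constructed-response scoring) than a low-stakes diagnostic use.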
Through-Course Assessments: Feasibility/Practicability


Applicants should be asked to provide

Validity and Reliability Are Not Enough

In addition to issues of validity and reliability, feasibility (long-term sustainability) is a major factor that needs to be considered in designing, developing and implementing large-scale assessment systems

  • An estimate of the average yearly cost (per student) of the assessment system, and a rationale for sustainability
  • An estimate of the testing time for each component, and a rationale indicating that the amount of testing time is sustainable
  • An estimate of LEA staffing time required to prepare for and administer exams, provide accommodations, and score items (where applicable)
  • The amount of initial and ongoing training and professional development that will be required to launch and maintain the system over time
End-of-Course High School Exams


Applicants will be invited to propose a “system” for developing and certifying the quality and rigor of a set of common end-of-course summative exams in multiple high school subjects.

If these exams are decentralized (developed, scored, administered, and reported by individual LEAs or high schools), then, to ensure consistency in quality and rigor across schools, applicants should be asked to describe

  • The criteria that will be used to certify the quality and rigor of each exam in the set
  • The criteria that will be used to certify that the quality and rigor of the set of exams are comparable
  • The criteria that will be used to certify that the results of individual exams, or collections of different exams, can be used to identify students who are ready for the next course in the content area or for post-secondary pursuits

Computer-based Test Administration


Applicants should be asked

Comparability with paper-and-pencil

Students with Disabilities

  • How exams (1) will be administered in schools where computer-to-student ratios are low and there is limited or no access to broadband, and (2) their approach to ensuring that students will have the opportunity to learn computer-based test-taking skills
  • Not how they will ensure that computer-based tests and any needed paper-and-pencil versions assess comparable levels of student knowledge and skill, if preserving the full power of the computer-based item types is required
  • Computer-based assessments provide more advantages than challenges for students with disabilities (SWD). Applicants should be asked how they plan to take advantage of computer-based assessments to improve the validity of results for this population
Innovation and Improvement


Applicants should be asked to:

Applicants must ensure that they have a structured process and/or approach that will lead to innovation and improvement over time.

  • Set aside a certain percentage of their budget for research and development
  • Develop a 4-year research and development agenda identifying specific questions applicants want to answer. Specifically, questions that, once answered, would help them innovate or improve
  • Identify university and other partners who would help move their research agenda forward and/or serve on advisory panels during the four years of the grant and beyond
  • Agree to share the findings of their research with other consortia/states at periodic conferences sponsored by the USDE, and through USDE supported research networks.
Issues for Focused Research



  • Use of value-added methodology for teacher and school accountability: this is important, but (1) it is a statistical modeling issue more than a measurement issue, and (2) many states will use their RTTT grant funds to conduct research in this area. It might be more productive to provide additional support to states with those grants
  • Comparability, generalizability, and growth modeling for assessments that include performance tasks: yes, research is needed here, assuming this is about the practical challenges of equating exams that include extended performance tasks
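As a minimal illustration of the value-added idea raised above, the sketch below regresses current-year scores on prior-year scores and treats each residual as a crude value-added signal. Operational value-added models are far more elaborate (covariates, multilevel structure, shrinkage), and all data here are invented.

```python
# Naive value-added sketch: a positive residual means a student scored
# above what prior achievement alone predicted. Real models add controls.

def ols_fit(x, y):
    """Slope and intercept of a simple least-squares line."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    return slope, my - slope * mx

prior = [50.0, 60.0, 70.0, 80.0, 90.0]    # invented prior-year scores
current = [55.0, 62.0, 74.0, 79.0, 95.0]  # invented current-year scores

slope, intercept = ols_fit(prior, current)
residuals = [yi - (slope * xi + intercept) for xi, yi in zip(prior, current)]
for p, r in zip(prior, residuals):
    print(p, round(r, 2))
```

As the first bullet above notes, the hard questions here (aggregating residuals to teachers or schools, quantifying their uncertainty) are statistical-modeling questions more than measurement questions.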