A Presentation to the USDOE
January 13, 2010
Mary Ann Snider
Chief of Educator Excellence and Instructional Effectiveness

Race to the Top: Assessment Programs

Project and Consortium Management

Lessons Learned from NECAP and the National Center for the Improvement of Educational Assessment

RI Participates in Three Consortia Models

Three consortia, three different models

  • Model I (NECAP): efficiency, capacity, and cost savings for a high-impact program (state testing program)
  • Model II (WIDA): expertise on a particular subgroup of students for a moderate-impact program (complicated test design for a specific population)
  • Model III (ACHIEVE): comparability/common curriculum, end-of-course model for a low-impact program (specific content test model for comparability of results)

Governance and Leadership
  • Model I- Members are operational partners
  • Model II- Members serve as a board of directors
  • Model III- Members serve on an advisory committee
Governance and Leadership

Depends on:

  • Size of consortia
  • Expertise and capacity of members
  • Purpose and products of assessments
  • Phase of program: initial design, maintaining and implementing, responding to changes
Consortium Member Characteristics
  • Must have Common Standards
  • Must have a common vision for the test blueprint (types of items, length of test, number of sessions)
  • Must have common operational agreements: spring versus fall administration, ability and willingness to integrate technology, release of test items, test security agreements
Consortium Member Characteristics
  • Should have common uses of the test (informing or determining promotion or graduation decisions, impact on educator evaluation)
  • Should have common reporting needs: scale scores, standards-based reporting, sub-scores, item analyses, historical student data
Consortium Member Characteristics
  • Could have common technical expectations and capacities: demographic files, score files, timing to "clean" files for accuracy in reporting, standard-setting agreements (representation and methodology), reconciling discrepancies, connection to a data warehouse
Governance and Leadership- NECAP
  • The goal is to reach consensus, but each state has one vote when consensus cannot be reached.
  • This model is carried through all tiers of responsibility: commissioners sign off on achievement cut scores, directors approve the overall design and procedures, content teams select items and anchor papers, and review teams approve items for inclusion.
Roles for Third Parties
  • Facilitate management meetings
  • Provide technical oversight of assessment design
  • Serve as “architect” between operational partners and contractors
  • Convene Technical Advisory Committees
  • Develop ancillary test support materials
  • Provide professional development
Features for Success
  • Set clear expectations and clarify the extent of control each member will have over decisions
  • Decide which decisions need consensus, which need unanimous agreement, and which can be handled by voting
  • Decide how contracts and funding will be shared
  • Develop strong protocols for communication (e.g. weekly calls, status reports, questions and concerns)
Features for Success
  • Identify strengths and potential needs among all members in the partnership (e.g. content teams, strong ELL staff, etc.)
  • Determine what must be done collectively and what can be done individually (accountability methodology, single cut score and set of achievement descriptors, common administration procedures, accommodations, reports)
What can (and probably will) go wrong?
  • Lead participants change- commissioners, testing directors, content members
  • State budgets and capacity change
  • Members hold vastly differing opinions when interpreting content standards for test items, anchor papers, etc.
What can (and probably will) go wrong?
  • Demands on the test change
  • A lack of strong commitment to working collaboratively makes a difficult decision harder

What Should the RTTT Consider?
  • Identify which features are critical and should be expected across the consortia (e.g., alignment to Common Standards, consistent accommodations, distribution of item types, involvement of teachers)
  • Acknowledge which assessments have been a struggle for states and encourage different types of consortia to develop them in partnership with experts (ELL, 2%, alternate assessments)
What Should the RTTT Consider?
  • Allow states to work together on NECAP-like assessment programs in core areas with NAEP-like items embedded
  • Identify areas for innovation and build national assessment models (end-of-course assessments, career and technical assessments)
  • Work with testing companies to ensure they are prepared to accommodate the operational, contractual, and technical issues necessary to successfully support a consortium assessment project