
1% + 2% = ______________: ADDING UP WHAT WE KNOW & DON’T KNOW ABOUT ALTERNATE ASSESSMENTS

This presentation explores key definitions, assessment challenges, and proficiency rates for alternate assessments. It discusses different approaches, such as portfolio assessment and performance assessment, and addresses the needs of students with disabilities who are unable to perform at or above the proficient level on general achievement tests. It also examines the tension between standardization and the validity evidence these assessments require, and the importance of eliminating construct-irrelevant variance for accurate measurement.


Presentation Transcript


  1. 1% + 2% = ______________: ADDING UP WHAT WE KNOW & DON’T KNOW ABOUT ALTERNATE ASSESSMENTS
  Stephen N. Elliott, PhD, Vanderbilt University
  Gerald Tindal, PhD, University of Oregon

  2. Alternate Assessments: Key Definitions
  • AA-AAS, or Alternate Assessments based on Alternate Achievement Standards (capped at 1% of the total student population)
  • AA-MAAS, or Alternate Assessments based on Modified Academic Achievement Standards (capped at 2% of the total student population)

  3. DIFFERENT ACHIEVEMENT STANDARDS / 1 ACCOUNTABILITY SYSTEM
  • General pathway: grade-level content standards → grade-level proficiency standards, measured by the large-scale achievement test with or without accommodations
  • Modified pathway (2% capped): grade-level content standards → modified performance indicators → modified proficiency standards, measured by the modified alternate assessment
  • Alternate pathway (1% capped): grade-level content standards → alternate performance indicators → alternate proficiency standards, measured by an alternate assessment using rating scales, a portfolio, or a performance assessment
  • All three pathways feed a single AYP report for all students

  4. Standard 4: Geometry & Measurement (content standards and sources of evidence)
  • Grade-level content standards (evidence: achievement test): 1. Identify and describe the basic properties of figures (e.g., 2- or 3-dimensionality, symmetry, number of faces, types of angles).
  • Modified grade-level content standards (evidence: modified achievement test): 1. Identify basic properties of 2-dimensional figures (e.g., number of sides, types of angles). 2. Identify basic properties of 3-dimensional figures.
  • Extended content standards (evidence: performance tasks, classroom work, and observations collected by teachers): 1. Count the number of sides of a square. 2. Count the number of sides of a triangle.

  5. Definitions of Approaches for 1% Alternate Assessments
  • Portfolio Assessment is an organized collection or documentation of student-generated or student-focused work, typically depicting the range of individual student skills. [30 states]
  • Performance Assessment is a task or series of tasks requiring a student to provide a response or create a product to show mastery of a specific skill or content standard. [21 states]
  • Comprehensive Rating Scales of Achievement are rating scales anchored by descriptive rubrics for quantifying teacher judgments of students’ knowledge and skills, based on repeated direct and indirect observations situated in a number of school settings. [13 states]
  • Multiple Choice/Constructed Response [6 states]
  Note. 2% AAs thus far have all used item formats like those of general achievement tests.

  6. The students? 1% + 2% = -----
  • The 1% assessments focus on “authentic” skills that are integrated across domains and have potential for use outside of school
  • The 2% students take a test that is sensitive to a significant lack of basic skills yet is on grade level
  • In both groups, there is a need to distinguish access skills from target skills (within students and over time)

  7. The students? 1% + 2% = -----
  • Students with disabilities who have been determined unable to perform at or above the proficient level on a grade-level general achievement test.
  • 1%-eligible students have a severe cognitive disability, require modified instruction and curriculum, and need extensive support for skill generalization. (50 states)
  • 2%-eligible students are those “whose disability has prevented them from achieving grade-level proficiency and who likely will not reach grade-level achievement in the same timeframe as other students.” (DOE Regs. 34 CFR Part 200, Title I & NCLB) (14 states)

  8. Assessment Challenges? 1% + 2% = -----
  • Standards and standardization
  • Alignment of items with grade-level standards
  • Tension between the mandates of standardization and the validity evidence that is needed
  • How to compare assessment approaches: what kind of evidence?
  • How to avoid conflating independent variables with dependent measures?

  9. Assessment Challenges? 1% + 2% = -----
  • Eliminating construct-irrelevant variance so that measurement targets knowledge and skills rather than access skills.
  • Teachers should ensure that, for both 1% and 2% assessments, students have had an opportunity to learn the assessed curriculum.
  • For many 1% assessments, students require significant support to respond to a question or item. That support must function as an acceptable accommodation or it is likely to undermine the validity of the test-score inference.
  • For many 2% assessments, students are confronted by items with substantial extraneous information, which creates cognitive load problems.

  10. Percent Proficient? 1% + 2% = -----
  • Fixing time and level of achievement while using status rather than growth
  • Moving from interval scores to nominal categories: is there lost meaning?
  • Articulating different test types that tap into the same construct (standard test, AA-MAS, and AA-AAS)

  11. Percent Proficient? 1% + 2% = -----
  • Nearly 75% of students taking 1% AAs across the country are deemed Proficient, while less than 30% of students with disabilities taking the general assessment are deemed Proficient. Clearly the meaning of Proficient and the relative rigor of the associated cut scores are different.
  • It is too early to report on comparable Proficient rates for the 2% AA.

  12. Consequences? 1% + 2% = -----
  • Advocacy and adequacy: balancing opportunity and appropriateness
  • Better measurement may be in the offing (e.g., accommodations and universal design)
  • Focus on post-secondary outcomes and the need for more vertical articulation of outcomes across grades

  13. Consequences? 1% + 2% = -----
  • Most 1% AAs require significant teacher involvement in the assessment. This takes time, but in many cases it seems to result in a more instructionally relevant assessment. Teachers also often receive valuable feedback about students long before the assessment results are reported for accountability purposes.
  • The development of items for 2% AAs has provided valuable information about ways to improve items for the general achievement test.
  • Inclusive assessments have many positive consequences for students and teachers.

  14. Technical Issues? 1% + 2% = -----
  • Measurement of learning with the same yardstick
  • Different rules of engagement?
  • Different types of evidence?
  • Conflation of process and outcome variables (how a test is taken with the outcome of the test)
  • Consideration of validity as a unitary construct

  15. Technical Issues? 1% + 2% = -----
  • Alignment of knowledge and skills tested on both 1% and 2% AAs with grade-level content standards.
  • Reliability of scoring of 1% AAs by independent raters.
  • Developing meaningful Achievement Level Descriptors and associated cut scores for Proficient determinations.
  • Difficulty measuring growth with 1% AAs.

  16. The Future? 1% + 2% = -----
  • Informing policy with evidence about improved programs as well as improved performance
  • Moving to more formative assessments that integrate various initiatives (e.g., Response to Intervention) into large-scale assessments
  • Disseminating fugitive literature from what we know

  17. The Future? 1% + 2% = -----
  • Revisions of both 1% and 2% AAs to ensure alignment with Common Content Standards.
  • Increased emphasis on characterizing students’ academic progress and growth.
  • Increased measurement of achievement using multiple measures (e.g., interim assessments, formative assessments).
  • A limited number of states implementing 2% AAs due to costs.

  18. Concluding Thoughts! 1% + 2% = -----
  • The same rules for collecting validity evidence are needed, irrespective of populations or assessment approaches
  • Growth should be the coin of the realm, with accountability models fostering, not restricting, this focus on measurement
  • Inclusive assessments such as 1% and 2% AAs have advanced accountability practices for students with disabilities.

  19. Remaining Sessions in the Alternate Assessment Strand
  • Proficient Performance: What Does it Mean & How is it Achieved (Dianna Carrizales, Brad Lenhardt, & Nancy Latini)
  • Good Scores are Hard to Get: Technically Speaking (Aran Felix, Kim Sherman, Gerald Tindal, & Naomi Zigmond)
  • Alternate Assessments’ Contributions to Better Classroom Instruction and Testing (Stephen Elliott & Ryan Kettler)

  20. Thank you very much for joining us!
  Stephen N. Elliott, PhD, Vanderbilt University (Steve.elliott@vanderbilt.edu)
  Gerald Tindal, PhD, University of Oregon (Geraldt@uoregon.edu)
