
Developmental Education Assessment, Placement, and Progression


Presentation Transcript


  1. Developmental Education: Assessment, Placement, and Progression Thomas Bailey Based on Research by Katherine Hughes, Shanna Jaggars, Judith Scott-Clayton

  2. National Context • For many (most?) entering CC students, assessment center is one of first places they will visit • For the majority of students sitting for these exams, the result is placement into developmental education • Yet research has not consistently found that this process actually improves student outcomes

  3. CCRC Literature Review(Hughes & Scott-Clayton) • Examined three questions: • Is there consensus regarding the proper purpose and role of assessment in CCs? • Are the most commonly used assessments valid for their intended purpose? • Are there alternative models of assessment that may improve outcomes for underprepared students? • CUNY study brings new data to bear on similar set of questions

  4. No Consensus on Meaning of College Ready • Many assessments • Many cutoff scores • Many policies with respect to • Mandatory Testing • Mandatory Placement

  5. Figure 3: Educational Outcome by Math CPT Score and Estimated Discontinuity

  6. Are Dev Ed Assessments Valid? • CUNY uses COMPASS math & reading tests (published by ACT, Inc.; one of two most common assessments) • There are lots of different ways to think about validity: • Construct validity: does the test measure what you think it does? • Predictive validity: does the test predict some measure of later success? • Argument-based approach to validity: “It is the interpretation of test scores required by proposed uses that are evaluated, not the test itself” (Standards for Educational and Psychological Testing) • Focus here is on predictive validity • This is a necessary, but not sufficient component of overall validity of the test • “[U]ltimately, it is the responsibility of the users of a test to evaluate this evidence to ensure the test is appropriate for the purpose(s) for which it is being used” (College Board, 2003, p. A-62) • Broadest analysis of validity eventually requires a program evaluation: when students are assigned to some treatment on the basis of a score, do better outcomes result?

  7. Predictive Validity Analysis • Research questions: • How well do placement test scores predict “success” in the relevant gatekeeper course? • How well do other measures (such as high school performance) predict success, either instead of or in addition to placement test scores? • How many students are “correctly placed” using current placement test cutoffs to divide students, versus assigning all students to the same level?

  8. What is “Gatekeeper Success”? • “Gatekeeper” course: first college-level course • We look at three measures: • Completed course with B or higher • Completed course with C or higher • Passed course (D- or higher) • These measures of success are all conditional upon actually enrolling in a gatekeeper course

  9. Research Method Overview • Focus on first-time 2004-2007 entrants at two-year colleges only, who have CAS and placement test data • First, estimate statistical relationships between placement test scores (and/or other predictors) and gatekeeper success • Restrict sample to students who took gatekeeper without taking developmental coursework (“estimation sample”) • Then, regress gatekeeper success on placement test scores (and/or other predictors) to estimate relationships • Examine two summary measures: R-squareds and correlation coefficients • Second, use logistic regression to predict which students are likely to be “correctly placed” using different placement criteria
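The estimation step above (regressing gatekeeper success on placement test scores and summarizing with correlations and R-squareds) can be sketched on synthetic data. Everything below is illustrative: the sample size, score range, and pass rule are invented, not drawn from the CUNY data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "estimation sample": students who took the gatekeeper
# course directly, without developmental coursework.
n = 2000
score = rng.uniform(20, 100, n)                  # placement test score
readiness = 0.03 * score + rng.normal(0, 1.2, n) # unobserved preparation
passed = (readiness > 1.8).astype(float)         # 1 = gatekeeper success

# The two summary measures used in the slides: the bivariate correlation
# between score and success, and the R-squared of the corresponding
# regression of success on the score.
r = np.corrcoef(score, passed)[0, 1]
print(f"correlation = {r:.2f}, R-squared = {r * r:.2f}")
```

In this single-predictor case the R-squared is just the squared correlation; adding high school measures as extra regressors would let the two summaries diverge.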

  10. Methodological Concerns • Restriction of range • R-squareds and correlations are measured only for those who were placed directly into the gatekeeper course • In general this tends to depress R-squareds and correlations • Extrapolation • For the placement accuracy analysis, we must use relationships estimated on about 25% of the data to predict likelihood of “success” for the other 75% • So we must hope that the other 75% aren’t that different (not totally implausible)
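The restriction-of-range concern can be made concrete with a small simulation: compute the score-outcome correlation in a full pool, then only among students above a cutoff. The parameters here are arbitrary, chosen only to show the direction of the bias.

```python
import numpy as np

rng = np.random.default_rng(1)

# Full applicant pool: score and later outcome are positively correlated.
n = 50_000
score = rng.normal(0, 1, n)
outcome = 0.5 * score + rng.normal(0, 1, n)

r_full = np.corrcoef(score, outcome)[0, 1]

# Restriction of range: outcomes are observed only for students who
# scored above the cutoff and went straight into the gatekeeper course.
cutoff = 0.5
above = score > cutoff
r_restricted = np.corrcoef(score[above], outcome[above])[0, 1]

print(f"full-sample r = {r_full:.2f}, restricted r = {r_restricted:.2f}")
```

Truncating the sample shrinks the variance of the score, so the observed correlation understates the relationship in the full pool, which is why the slide warns that the reported R-squareds are depressed.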

  11. Table 1 (R-squareds)

  12. Table 1 (R-squareds)

  13. Table 1 (Correlations)

  14. Placement Accuracy Rates • We know who will be placed in dev ed or gatekeeper based on test scores • We can estimate whether or not given individual is predicted to succeed based on test scores • Can then assign each person to one of four cells • Placement accuracy rate is sum of bottom left/upper right cells • Can also compare this to accuracy rates without using test at all
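The four-cell accounting above can be sketched as follows. The cohort, cutoff, and success model are all hypothetical; the point is only the bookkeeping: cross the test-based placement with the predicted outcome and sum the two "accurate" cells.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical cohort: each student gets a placement from the test cutoff
# and a predicted probability of gatekeeper success from a model that also
# uses other information (represented here by noise for illustration).
n = 1000
score = rng.uniform(0, 100, n)
cutoff = 60
placed_gk = score >= cutoff                     # placed into gatekeeper

other = rng.normal(0, 5, n)                     # e.g. high school signal
p_pass = 1 / (1 + np.exp(-(score + other - 55) / 10))
pred_success = p_pass >= 0.5                    # predicted to pass

# The four cells of the placement matrix
acc_gk    = np.sum(placed_gk & pred_success)    # accurately placed in GK
false_pos = np.sum(placed_gk & ~pred_success)   # in GK, predicted to fail
acc_dev   = np.sum(~placed_gk & ~pred_success)  # accurately placed in dev ed
false_neg = np.sum(~placed_gk & pred_success)   # held back, likely to pass

accuracy = (acc_gk + acc_dev) / n
print(f"placement accuracy rate = {accuracy:.2f}")
```

The no-test benchmark mentioned in the slide is computed the same way, except `placed_gk` is set to the same value for everyone.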

  15. Figure 1: placement matrix (accurately placed cells vs. false positives and false negatives)

  16. Figure 1

  17. Figure 2 (By Writing Score)

  18. Table 2

  19. Table 2

  20. Table 2

  21. Caveats • Maximizing placement accuracy rates may not be the goal • Our computation treats false positives and false negatives equally, but we may care more about one than the other • Values about which type of error is worse can be inferred from where the cutoff is placed • Ex: Pr(passing GK) for math at the cutoff is 67% • This means those just below the cutoff are wrongly placed (false negatives) 67% of the time • Could increase placement accuracy by lowering the cutoff • But if we think failing someone in GK is 2x worse than making someone take developmental coursework unnecessarily, then the cutoff is in the right spot
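The arithmetic behind the last bullet is worth spelling out. For a student exactly at the cutoff, the slide gives Pr(passing GK) = 0.67; the expected cost of each placement under equal and 2x error weights is then:

```python
# Student exactly at the math cutoff: Pr(passing gatekeeper) = 0.67
# (the figure quoted in the slide).
p_pass = 0.67

# Equal weighting: each kind of error costs 1.
cost_dev_ed     = p_pass        # false negative: would have passed GK
cost_gatekeeper = 1 - p_pass    # false positive: fails the GK course

# If failing a student in the gatekeeper course is judged 2x worse than
# unnecessary remediation, double the false-positive cost.
weighted_cost_gatekeeper = 2 * (1 - p_pass)

print(cost_dev_ed, cost_gatekeeper, weighted_cost_gatekeeper)
```

With equal weights, dev ed placement is clearly the worse bet at the margin (0.67 vs. 0.33), so lowering the cutoff raises raw accuracy; with the 2x weight the two expected costs are nearly equal (0.67 vs. 0.66), which is what it means for the cutoff to be "in the right spot" under that value judgment.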

  22. Figure 1

  23. Predictive Validity: Take-Away Messages • Placement tests are much better at predicting who is likely to do well in gatekeeper courses than at predicting who is likely to fail • Placement tests are more predictive of gatekeeper success in math than in English • High school academic measures are almost as predictive as math test scores, and more predictive than English test scores • Placement accuracy rates are only modestly higher in some cases, and substantially worse in others, than what would result if no tests were used • But weighting false positives and false negatives differently may change this conclusion • Analysis of effectiveness of remediation still to come

  24. For more information: Please visit us on the web at http://ccrc.tc.columbia.edu, where you can download presentations, reports, and CCRC Briefs, and sign up for news announcements. Community College Research Center, Institute on Education and the Economy, Teachers College, Columbia University, 525 West 120th Street, Box 174, New York, NY 10027. E-mail: ccrc@columbia.edu. Telephone: 212.678.3091. CCRC is funded in part by: Alfred P. Sloan Foundation, Bill & Melinda Gates Foundation, Lumina Foundation for Education, The Ford Foundation, National Science Foundation (NSF), Institute of Education Sciences of the U.S. Department of Education
