Presentation Transcript


  1. [Resampled Range of Witty Titles] Understanding and Using the NRC Assessment of Doctorate Programs Lydia Snover, Greg Harris & Scott Barge Office of the Provost, Institutional Research Massachusetts Institute of Technology • 2 Feb 2010

  2. Overview *NB: All figures/data in this presentation are used for illustrative purposes only and do not represent a known institution. • Background & Context • Approaches to Ranking • The NRC Model: A Modified Hybrid • Presenting & Using the Results

  3. Background & Context • Introduction • History of NRC Rankings • MIT Data Collection Process

  4. Participating MIT Programs Introduction

  5. Section 2: Approaches to Ranking

  6. How do we measure program quality? • Use indicators (“countable” information) to compute a rating • Number of publications • Funded research per faculty member • Etc. • Try to quantify more subjective measures through an overall perception-based rating • Reputation • “Creative blending of interdisciplinary perspectives” Approaches to Ranking

  7. Section 3: The NRC Approach

  8. So how does NRC blend the two? The NRC used a modified hybrid of the two basic approaches: • In total, a 4-step process, indicator-based, by field • Process results in 2 sets of indicator weights developed through faculty surveys: • “Bottom-up” – importance of indicators • “Top-down” – perception-based ratings of a sample of programs • Multiple iterations (re-sampling) to model “the variability in ratings by peer raters.”* The NRC Approach *For more information on the rationale for re-sampling, see pp. 14-15 of the NRC Methodology Report

  9. So how does NRC blend the two? STEP 1: Gather raw data from institutions, faculty & external sources on programs. Random University (RU) submitted data for its participating doctoral programs. The NRC Approach
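A minimal sketch of what a program-by-indicator submission might look like once it reaches the NRC, assuming a small set of hypothetical indicator names and made-up values (these are not actual NRC data). Indicators are standardized within a field so that weights can later be applied on a common scale:

```python
import pandas as pd

# Hypothetical program-by-indicator table (names and values invented
# for illustration; not actual NRC data).
program_data = pd.DataFrame(
    {
        "publications_per_faculty": [2.1, 3.4, 1.8],
        "citations_per_publication": [4.0, 6.2, 3.1],
        "pct_faculty_with_grants": [0.55, 0.71, 0.48],
        "median_time_to_degree": [5.8, 5.2, 6.4],
    },
    index=["Program A", "Program B", "Program C"],
)

# Standardize (z-score) each indicator within the field so that
# indicator weights can be applied on a common scale.
standardized = (program_data - program_data.mean()) / program_data.std()
print(standardized.round(2))
```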

  10. So how does NRC blend the two? STEP 2: Use faculty input to develop weights: • Method 1: Direct prioritization of indicators – “What characteristics (indicators) are important to program quality in your field?” The NRC Approach
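A small sketch of one way "direct" weights could be derived from the importance surveys: average each indicator's importance rating across respondents and normalize. The survey values and the normalization scheme are assumptions for illustration, not the NRC's exact scaling:

```python
import numpy as np

# Hypothetical survey responses: each row is one faculty member's
# importance rating (1-5) for each of four indicators (values invented).
importance = np.array([
    [5, 4, 3, 2],   # respondent 1
    [4, 5, 2, 3],   # respondent 2
    [5, 3, 4, 2],   # respondent 3
])

# Simple scheme: average across respondents, then normalize so the
# "direct" weights sum to 1. (The NRC's actual scaling is more involved.)
mean_importance = importance.mean(axis=0)
direct_weights = mean_importance / mean_importance.sum()
print(direct_weights.round(3))
```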

  11. So how does NRC blend the two? STEP 2: Use faculty input to develop weights: • Method 2: A sample of faculty each rate a sample of 15 programs from which indicator weights are derived. The NRC Approach Principal Components & Regression
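A hedged sketch in the spirit of the slide's "Principal Components & Regression" label: regress the perception-based ratings of a sample of programs on their standardized indicators via principal components, then map the coefficients back to indicator space to obtain regression-based weights. All data here are simulated, and the NRC's actual procedure differs in detail:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical inputs: 15 sampled programs x 4 standardized indicators,
# plus one overall perception-based rating per program from faculty raters.
X = rng.standard_normal((15, 4))
ratings = X @ np.array([0.5, 0.3, 0.15, 0.05]) + rng.normal(0, 0.1, 15)

# Principal-components regression: reduce the indicators to a few
# components, regress the ratings on those components, then map the
# coefficients back to the original indicator space.
pca = PCA(n_components=2)
components = pca.fit_transform(X)
reg = LinearRegression().fit(components, ratings)
regression_weights = pca.components_.T @ reg.coef_
print(regression_weights.round(3))
```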

  12. So how does NRC blend the two? STEP 3: Combine both sets of indicator weights and apply them to the raw data: Weights × Data = Rating. The NRC Approach
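Illustrating Step 3 with invented numbers: combine the two weight sets (a simple average here, which is an assumption; the NRC's combination is more elaborate) and apply them to standardized program data to produce a rating per program:

```python
import numpy as np

# Hypothetical weights from the two Step 2 methods, and standardized
# indicator values for three programs (all values invented).
direct_weights = np.array([0.35, 0.30, 0.20, 0.15])
regression_weights = np.array([0.45, 0.25, 0.20, 0.10])
standardized_indicators = np.array([
    [0.8, -0.2, 1.1, -0.5],   # Program A
    [-0.3, 0.9, -0.4, 0.6],   # Program B
    [1.2, 0.1, 0.5, -0.9],    # Program C
])

# Combine the weight sets (simple average as a stand-in) and apply them:
# weights x standardized data = rating.
combined_weights = (direct_weights + regression_weights) / 2
ratings = standardized_indicators @ combined_weights
print(ratings.round(3))
```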

  13. So how does NRC blend the two? STEP 4: Repeat steps 500 times for each field A) Randomly draw ½ of faculty “important characteristics” surveys B) Calculate “direct” weights C) Randomly draw ½ of faculty program rating surveys D) Compute “regression-based” weights E) Combine weights F) Repeat (A) – (E) 500 times to develop 500 sets of weights for each field G) Randomly perturb institutions’ program data 500 times* H) Use each pair of iterations (1 perturbation of data (G) + 1 set of weights (F)) to rate programs and prepare 500 ranked lists I) Toss out the lowest 125 and highest 125 rankings for each program and present the remaining range of rankings The NRC Approach *For more information on the perturbation of program data, see pp. 50-1 in the NRC Methodology Report
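A compact sketch of the Step 4 loop, with a placeholder function standing in for the weight-derivation steps (A)-(F); the point is the structure: 500 perturbation/weight pairs, 500 ranked lists, and a trimmed range per program (drop the 125 lowest and 125 highest of the 500 rankings, leaving roughly the 25th to 75th percentile range). All data are simulated:

```python
import numpy as np

rng = np.random.default_rng(42)

N_ITER = 500
n_programs, n_indicators = 20, 4

# Hypothetical standardized indicator data for one field (invented values).
program_data = rng.standard_normal((n_programs, n_indicators))

def weights_from_half_sample(rng, n_indicators):
    """Stand-in for steps (A)-(F): draw half of the surveys, compute the
    direct and regression-based weights, and combine them. Here we simply
    return a random normalized weight vector as a placeholder."""
    w = rng.random(n_indicators)
    return w / w.sum()

rank_lists = np.empty((N_ITER, n_programs), dtype=int)
for i in range(N_ITER):
    # (G) Perturb the program data slightly to model measurement variability.
    perturbed = program_data + rng.normal(0, 0.05, program_data.shape)
    # (A)-(F) Derive one combined set of indicator weights.
    weights = weights_from_half_sample(rng, n_indicators)
    # (H) Rate the programs and convert ratings to ranks (1 = best).
    ratings = perturbed @ weights
    rank_lists[i] = (-ratings).argsort().argsort() + 1

# (I) For each program, discard the 125 lowest and 125 highest of the 500
# rankings and report the remaining range.
sorted_ranks = np.sort(rank_lists, axis=0)
low, high = sorted_ranks[125], sorted_ranks[-126]
for p in range(5):
    print(f"Program {p}: rank range {low[p]}-{high[p]}")
```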

  14. Section 4: Presenting & Using the Results

  15. What are the indicators? Results

  16. What will the results look like? • TABLE 1: Program values for each indicator plus overall summary statistics for the field Results
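A quick sketch of Table-1-style field summary statistics computed over hypothetical program values (indicator names and distributions are invented for illustration):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Hypothetical indicator values for the 30 programs in one field.
field = pd.DataFrame(
    rng.normal(loc=[2.5, 5.0, 0.6], scale=[0.8, 1.5, 0.15], size=(30, 3)),
    columns=["publications_per_faculty", "citations_per_publication",
             "pct_faculty_with_grants"],
)

# Field-level summary statistics reported alongside each program's values.
summary = field.agg(["mean", "median", "std", "min", "max"]).round(2)
print(summary)
```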

  17. What will the results look like? • TABLE 2: Indicators and indicator weights – one standard deviation above and below the mean of the 500 weights produced for each indicator through the iterative process (and a locally calculated mean) Results *n.s. in a cell means the coefficient was not significantly different from 0 at the p=.05 level.
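A sketch of the Table-2-style weight summary: given the 500 combined weight vectors from the iterations (simulated here), report one standard deviation above and below the mean for each indicator:

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated stand-in for the 500 combined weight vectors (4 indicators),
# one per iteration; each vector sums to 1.
weights_500 = rng.dirichlet(alpha=[8, 6, 4, 2], size=500)

# Table-2-style summary: mean of the 500 weights per indicator, plus or
# minus one standard deviation.
mean = weights_500.mean(axis=0)
sd = weights_500.std(axis=0)
for j, (m, s) in enumerate(zip(mean, sd)):
    print(f"Indicator {j}: {m - s:.3f} to {m + s:.3f} (mean {m:.3f})")
```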

  18. What will the results look like? • TABLE 3: Range of rankings for RU’s Economics program alongside other programs, overall and dimensional rankings Results

  19. What will the results look like? • TABLE 4: Range of rankings for all RU’s programs Results

  20. Q&A

  21. For more information… • The full NRC Methodology Report http://www.nap.edu/catalog.php?record_id=12676 • Helpful NRC Frequently Asked Questions Page http://sites.nationalacademies.org/pga/Resdoc/PGA_051962 Resources
