[Resampled Range of Witty Titles]

Presentation Transcript



[Resampled Range of Witty Titles]

Understanding and Using the NRC Assessment of Doctorate Programs

Lydia Snover, Greg Harris & Scott Barge

Office of the Provost, Institutional Research

Massachusetts Institute of Technology • 2 Feb 2010


Overview

*NB: All figures/data in this presentation are used for illustrative purposes only and do not represent a known institution.

Background & Context

Approaches to Ranking

The NRC Model: A Modified Hybrid

Presenting & Using the Results



Background & Context

Introduction

History of NRC Rankings

MIT Data Collection Process


Participating MIT Programs


Section 2

Approaches to Ranking



How do we measure program quality?

  • Use indicators (“countable” information) to compute a rating

    • Number of publications

    • Funded research per faculty member

    • etc.

  • Try to quantify more subjective measures through an overall perception-based rating

    • Reputation

    • “Creative blending of interdisciplinary perspectives”



Section 3

The NRC Approach



So how does NRC blend the two?

The NRC used a modified hybrid of the two basic approaches:

  • In total, a 4-step, indicator-based process, carried out field by field

  • Process results in 2 sets of indicator weights developed through faculty surveys:

    • “Bottom-up” – importance of indicators

    • “Top-down” – perception-based ratings of a sample of programs

  • Multiple iterations (re-sampling) to model “the variability in ratings by peer raters.” *


*For more information on the rationale for re-sampling, see pp. 14-15 of the NRC Methodology Report



So how does NRC blend the two?

STEP 1: Gather raw data from institutions, faculty & external sources on programs. Random University (RU) submitted data for its participating doctoral programs.




So how does NRC blend the two?

STEP 2: Use faculty input to develop weights:

  • Method 1: Direct prioritization of indicators – “What characteristics (indicators) are important to program quality in your field?” (see the sketch below)

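The deck does not reproduce the NRC's exact aggregation formula, but a minimal sketch of how importance ratings could be turned into normalized "direct" weights might look like the following. The survey matrix and the 1-5 scale are assumptions, not NRC data:

```python
import numpy as np

# Hypothetical survey responses: one row per faculty respondent,
# one column per indicator, values are importance ratings on a 1-5 scale.
survey = np.array([
    [5, 3, 4, 2],
    [4, 4, 5, 1],
    [5, 2, 4, 3],
])

# Average the importance ratings per indicator, then normalize
# so the "direct" weights sum to 1.
mean_importance = survey.mean(axis=0)
direct_weights = mean_importance / mean_importance.sum()
print(direct_weights)  # four weights summing to 1
```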



So how does NRC blend the two?

STEP 2: Use faculty input to develop weights:

  • Method 2: A sample of faculty each rate a sample of 15 programs from which indicator weights are derived.


[Diagram: faculty ratings → principal components analysis & regression → indicator weights]



So how does NRC blend the two?

STEP 3: Combine both sets of indicator weights and apply them to the raw data:


[Diagram: combined indicator weights × program data = program rating]
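In effect, each program's rating is a weighted sum of its standardized indicator values. A minimal sketch with hypothetical numbers:

```python
import numpy as np

weights = np.array([0.4, 0.3, 0.2, 0.1])   # combined indicator weights
program = np.array([1.2, -0.3, 0.8, 0.5])  # one program's standardized data

rating = weights @ program  # weighted sum -> the program's rating
print(rating)               # 0.48 - 0.09 + 0.16 + 0.05 = 0.60
```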



So how does NRC blend the two?

STEP 4: Repeat the steps 500 times for each field:

A) Randomly draw ½ of the faculty “important characteristics” surveys

B) Calculate “direct” weights

C) Randomly draw ½ of the faculty program rating surveys

D) Compute “regression-based” weights

E) Combine weights

F) Repeat (A)–(E) 500 times to develop 500 sets of weights for each field

G) Randomly perturb institutions’ program data 500 times*

H) Use each pair of iterations (1 perturbation of data (G) + 1 set of weights (F)) to rate programs and prepare 500 ranked lists

I) Toss out the lowest 125 and highest 125 rankings for each program and present the remaining range of rankings

*For more information on the perturbation of program data, see pp. 50-51 of the NRC Methodology Report
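Putting steps (A)–(I) together, a minimal end-to-end sketch of the resampling loop. Every size, constant, and dataset below is hypothetical, and the regression-based weights are stubbed in as a fixed vector rather than re-estimated from rating surveys each iteration:

```python
import numpy as np

rng = np.random.default_rng(42)

N_PROGRAMS, N_INDICATORS, N_ITER = 30, 4, 500
programs = rng.standard_normal((N_PROGRAMS, N_INDICATORS))  # standardized data
importance_surveys = rng.random((40, N_INDICATORS))         # faculty surveys
NOISE_SD = 0.05                                             # perturbation scale

rankings = np.empty((N_ITER, N_PROGRAMS), dtype=int)
for i in range(N_ITER):
    # (A)/(B): half-sample the importance surveys, compute "direct" weights.
    half = rng.choice(len(importance_surveys), len(importance_surveys) // 2,
                      replace=False)
    w_direct = importance_surveys[half].mean(axis=0)
    w_direct /= w_direct.sum()

    # (C)-(E): the real study re-derives regression-based weights from a
    # half-sample of rating surveys each time; a fixed vector stands in here.
    w_regression = np.array([0.4, 0.3, 0.2, 0.1])
    weights = 0.5 * w_direct + 0.5 * w_regression

    # (G): randomly perturb the program data.
    perturbed = programs + NOISE_SD * rng.standard_normal(programs.shape)

    # (H): rate every program and convert scores to ranks (1 = best).
    scores = perturbed @ weights
    order = np.argsort(-scores)
    ranks = np.empty(N_PROGRAMS, dtype=int)
    ranks[order] = np.arange(1, N_PROGRAMS + 1)
    rankings[i] = ranks

# (I): per program, drop the 125 lowest and 125 highest of the 500 rankings
# and report the remaining range.
sorted_ranks = np.sort(rankings, axis=0)
low, high = sorted_ranks[125], sorted_ranks[374]
print(low[:5], high[:5])  # ranking ranges for the first five programs
```

Dropping 125 rankings from each tail leaves the middle 250 of the 500 iterations, i.e. the interquartile range of the resampled rankings – the “resampled range” of the title.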


Section 4

Presenting & Using the Results



What are the indicators?




What will the results look like?

  • TABLE 1: Program values for each indicator plus overall summary statistics for the field




What will the results look like?

  • TABLE 2: Indicators and indicator weights – one standard deviation above and below the mean of the 500 weights produced for each indicator through the iterative process (and a locally calculated mean)


*n.s. in a cell means the coefficient was not significantly different from 0 at the p=.05 level.
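As a minimal sketch of how such a band is computed (the 500 weights here are simulated stand-ins, not NRC values):

```python
import numpy as np

# Simulated stand-in for the 500 resampled weights of a single indicator.
rng = np.random.default_rng(7)
weights_500 = rng.normal(loc=0.25, scale=0.04, size=500)

mean = weights_500.mean()
sd = weights_500.std(ddof=1)
print(f"reported band: {mean - sd:.3f} to {mean + sd:.3f} (mean {mean:.3f})")
```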


What will the results look like2

What will the results look like?

  • TABLE 3: Range of rankings for RU’s Economics program alongside other programs, with overall and dimensional rankings




What will the results look like?

  • TABLE 4: Range of rankings for all of RU’s programs



Q&A



For more information…

  • The full NRC Methodology Report

    http://www.nap.edu/catalog.php?record_id=12676

  • Helpful NRC Frequently Asked Questions Page

    http://sites.nationalacademies.org/pga/Resdoc/PGA_051962
