Estimation of Ability Using Globally Optimal Scoring Weights

Shin-ichi Mayekawa

Graduate School of Decision Science and Technology

Tokyo Institute of Technology


Outline

  • Review of existing methods

  • Globally Optimal Weight: a set of weights that maximizes the Expected Test Information

  • Intrinsic Category Weights

  • Examples

  • Conclusions


Background

  • Estimation of IRT ability θ on the basis of a simple or weighted summed score X.

    • Conditional distribution of X given θ, as the distribution of the weighted sum of the Scored Multinomial Distribution.

    • Posterior Distribution of θ given X.

      h(θ|x) ∝ f(x|θ) h(θ)

      • Posterior Mean (EAP) of θ given X.

      • Posterior Standard Deviation (PSD) of θ given X (both written out below).
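
For reference, these two posterior summaries can be written out as follows (the standard EAP/PSD definitions, stated here for completeness rather than quoted from the slides):

    \hat\theta_{EAP}(x) = \int \theta \, h(\theta \mid x)\, d\theta,
    \qquad
    \mathrm{PSD}(x) = \left[ \int \bigl(\theta - \hat\theta_{EAP}(x)\bigr)^2 \, h(\theta \mid x)\, d\theta \right]^{1/2}.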


Item Score

We must choose w to calculate X.

(Figure: Item Response Functions, IRF)
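
In standard notation (a sketch, not necessarily the slide's own equation): for binary items the weighted summed score is

    X = \sum_{j=1}^{n} w_j\, u_j, \qquad u_j \in \{0, 1\},

so the only choice to make here is the vector of item weights w.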


Item Score

We must choose w and v to calculate X.

(Figure: Item Category Response Functions, ICRF)
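
Again in standard notation (a sketch, not necessarily the slide's own equation): with category weights v_{jk} and response indicators u_{jk} (u_{jk} = 1 if item j is answered in category k),

    X_j = \sum_{k} v_{jk}\, u_{jk}, \qquad X = \sum_{j} w_j X_j,

so both the item weights w and the category weights v must be chosen.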


Conditional Distribution of X given θ

  • Binary items

    • Conditional distribution of summed score X.

      • Simple sum: Walsh (1955), Lord (1969)

      • Weighted sum: Mayekawa (2003)

  • Polytomous items

    • Conditional distribution of summed score X.

      • Simple sum: Hanson (1994), Thissen et al. (1995)

      • With item weights and category weights: Mayekawa & Arai (2007); a computational sketch follows below.
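
The following is a small sketch, not the published algorithm verbatim, of the item-by-item convolution behind these references: the conditional distribution of a weighted summed score given θ is built up recursively, in the spirit of Hanson (1994) and Thissen et al. (1995) and extended to item and category weights as in Mayekawa & Arai (2007). All function names and the toy numbers below are illustrative.

```python
# Sketch: conditional distribution of X = sum_j w_j * v_{j,k(j)} given theta.
from collections import defaultdict

def score_distribution(probs, item_w, cat_w):
    """Return {score value: probability} for X given a fixed theta.

    probs  -- per-item ICRF values P_{jk}(theta) at one fixed theta
    item_w -- item weights w_j
    cat_w  -- category weights v_{jk}, one list per item
    """
    dist = {0.0: 1.0}                          # distribution over zero items
    for p_j, w_j, v_j in zip(probs, item_w, cat_w):
        nxt = defaultdict(float)
        for x, px in dist.items():             # convolve item j into the score
            for p_jk, v_jk in zip(p_j, v_j):
                nxt[round(x + w_j * v_jk, 10)] += px * p_jk
        dist = dict(nxt)
    return dist

# toy example: two 3-category items at some fixed theta (made-up probabilities)
f_x_given_theta = score_distribution(
    probs=[[0.2, 0.5, 0.3], [0.1, 0.4, 0.5]],
    item_w=[1.0, 2.0],
    cat_w=[[0, 1, 2], [0, 1, 2]],
)
print(sorted(f_x_given_theta.items()))
```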


Example

  • Eight Graded Response Model (GRM) items, with 3 categories per item.
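
For reference, the 3-category GRM ICRFs have the standard form (the specific a and b values used in the talk appear in the example table further below):

    P^{*}_{jk}(\theta) = \frac{1}{1 + \exp\{-a_j(\theta - b_{jk})\}},
    \qquad
    P_{j0}(\theta) = 1 - P^{*}_{j1}(\theta), \quad
    P_{j1}(\theta) = P^{*}_{j1}(\theta) - P^{*}_{j2}(\theta), \quad
    P_{j2}(\theta) = P^{*}_{j2}(\theta).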


Example (choosing weight)

  • Example: Mayekawa and Arai (2008)

  • Small posterior variance → good weight.

  • Large Test Information (TI) → good weight.


Test Information Function

  • The Test Information Function is proportional to the slope of the conditional expectation of X given θ (the Test Characteristic Curve, TCC), and inversely proportional to the squared width of the confidence interval (CI) of θ given X.

  • Width of the CI of θ

    • Proportional to the conditional standard deviation of X given θ, and inversely proportional to the slope of the TCC (written out below).
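
In standard notation (consistent with the two statements above, with T(\theta) = E(X \mid \theta) denoting the TCC):

    I_X(\theta) = \frac{[\,T'(\theta)\,]^2}{\mathrm{Var}(X \mid \theta)},
    \qquad
    \text{width of the CI for } \theta \;\propto\; \frac{\sqrt{\mathrm{Var}(X \mid \theta)}}{T'(\theta)} \;=\; \frac{1}{\sqrt{I_X(\theta)}}.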



Test Information Function for Polytomous Items

(Figure: ICRFs of the polytomous items)


Maximization of the Test Information when the Category Weights are Known

  • Category-weighted Item Score and the Item Response Function
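
A sketch in standard notation (not necessarily the slide's own equation): treating the category weights v_{jk} as known, the category-weighted item score and its item response function are

    X_j = \sum_k v_{jk}\, u_{jk},
    \qquad
    T_j(\theta) = E(X_j \mid \theta) = \sum_k v_{jk}\, P_{jk}(\theta).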


Maximization of the Test Information when the Category Weights are Known (continued)

  • Test Information
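
A sketch of the quantity being maximized, for the weighted sum X = \sum_j w_j X_j and assuming local independence (standard notation, stated here for readability):

    I(\theta; w) = \frac{\bigl[\sum_j w_j\, T_j'(\theta)\bigr]^2}{\sum_j w_j^2\, \sigma_j^2(\theta)},
    \qquad
    \sigma_j^2(\theta) = \mathrm{Var}(X_j \mid \theta).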


Maximization of the Test Information when the Category Weights are Known (continued)

  • First Derivative
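
Differentiating the ratio above with respect to an item weight w_i (a sketch of the calculation under the same notation) gives

    \frac{\partial I(\theta; w)}{\partial w_i}
    = \frac{2\, S(\theta)\,\bigl[\,T_i'(\theta)\, Q(\theta) - S(\theta)\, w_i\, \sigma_i^2(\theta)\,\bigr]}{Q(\theta)^2},
    \qquad
    S(\theta) = \sum_j w_j T_j'(\theta), \quad Q(\theta) = \sum_j w_j^2 \sigma_j^2(\theta),

and setting the derivative to zero yields locally optimal item weights of the form w_i \propto T_i'(\theta)/\sigma_i^2(\theta), which still depend on \theta.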


Globally Optimal Weight

  • A set of weights that maximizes the Expected Test Information under some reference distribution of θ.

    It does NOT depend on θ.
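
In symbols, this is

    w^{GO} = \arg\max_{w} \int I(\theta; w)\, h(\theta)\, d\theta,

where h(\theta) is the chosen reference distribution; because \theta is integrated out, the resulting weights are fixed constants.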


Example

Item    A      B1     B2      GO       GOINT    A        AINT
Q1      1.0    -2.0   -1.0     7.144    7        8.333    8
Q2      1.0    -1.0    0.0     7.102    7        8.333    8
Q3      1.0     0.0    1.0     7.166    7        8.333    8
Q4      1.0     1.0    2.0     7.316    7        8.333    8
Q5      2.0    -2.0   -1.0    17.720   18       16.667   17
Q6      2.0    -1.0    0.0    17.619   18       16.667   17
Q7      2.0     0.0    1.0    17.773   18       16.667   17
Q8      2.0     1.0    2.0    18.160   18       16.667   17

        LOx      LO       GO       GOINT    A        AINT     CONST
        7.4743   7.2993   7.2928   7.2905   7.2210   7.2564   5.9795
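
As a rough numerical illustration of how weights like those in the GO column might be computed, here is a minimal sketch that maximizes the expected test information over the item weights for these eight items. The item parameters (a, B1, B2) are taken from the table above; the N(0,1) reference distribution, the 0/1/2 category weights, the quadrature grid, and the sum-to-100 scaling are assumptions made only for this sketch, so the printed values need not reproduce the table exactly. This is illustrative code, not the author's program.

```python
import numpy as np
from scipy.optimize import minimize

a = np.array([1.0, 1.0, 1.0, 1.0, 2.0, 2.0, 2.0, 2.0])
b = np.array([[-2.0, -1.0], [-1.0, 0.0], [0.0, 1.0], [1.0, 2.0],
              [-2.0, -1.0], [-1.0, 0.0], [0.0, 1.0], [1.0, 2.0]])
v = np.array([0.0, 1.0, 2.0])                      # assumed category weights

theta = np.linspace(-4.0, 4.0, 161)                # quadrature grid (assumed)
h = np.exp(-0.5 * theta**2)
h /= h.sum()                                       # discretized N(0,1) reference

def item_mean_var(j):
    """Conditional mean T_j(theta) and variance of the weighted item score (GRM)."""
    pstar = 1.0 / (1.0 + np.exp(-a[j] * (theta[:, None] - b[j])))
    p = np.column_stack([1.0 - pstar[:, 0],
                         pstar[:, 0] - pstar[:, 1],
                         pstar[:, 1]])              # ICRFs P_{j0}, P_{j1}, P_{j2}
    m = p @ v
    return m, p @ v**2 - m**2

def expected_info(w):
    """E_h[ I(theta; w) ] with I = (sum_j w_j T_j')^2 / sum_j w_j^2 Var(X_j|theta)."""
    slope = np.zeros_like(theta)
    var = np.zeros_like(theta)
    for j in range(len(a)):
        m, s2 = item_mean_var(j)
        slope += w[j] * np.gradient(m, theta)       # numerical slope T_j'(theta)
        var += w[j] ** 2 * s2
    return float(np.sum(h * slope**2 / var))

# maximize by minimizing the negative; fix the scale so the weights sum to 100
res = minimize(lambda w: -expected_info(w),
               x0=np.full(len(a), 100.0 / len(a)),
               constraints={'type': 'eq', 'fun': lambda w: w.sum() - 100.0})
print(np.round(res.x, 3))     # item weights comparable (up to assumptions) to GO
```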


Maximization of the Test Information with respect to the Category Weights

  • Absorb the item weights into the category weights.
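
In other words (a standard reparameterization, written out for clarity): the products of item and category weights are collected into a single set of free category weights,

    \tilde v_{jk} = w_j\, v_{jk},
    \qquad
    X = \sum_j \sum_k \tilde v_{jk}\, u_{jk},

so from this point only category weights need to be optimized.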


Maximization of the Test Information with respect to the Category Weights (continued)

  • Test Information

  • Linear transformation of the category weights does NOT affect the information.
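
A one-line check of this invariance under the information formula sketched earlier: if v_{jk} \mapsto c\, v_{jk} + d_j for every category k of item j, then X_j \mapsto c\, X_j + d_j because \sum_k u_{jk} = 1; hence the slope of E(X \mid \theta) is multiplied by c, \mathrm{Var}(X \mid \theta) is multiplied by c^2, and the ratio defining the information is unchanged.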


Maximization of the Test Information with respect to the Category Weights (continued)

  • First Derivative


Maximization of the Test Information with respect to the Category Weights (continued)

  • Locally Optimal Weight
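
One way to write the result (a sketch consistent with the formulas above, not necessarily the slide's exact expression): stacking all category weights into a vector v, with c(\theta) the vector of ICRF slopes P_{jk}'(\theta) and D(\theta) the block-diagonal matrix with blocks \mathrm{diag}\{P_j(\theta)\} - P_j(\theta) P_j(\theta)', the information is a ratio of quadratic forms and its maximizer is

    I(\theta; v) = \frac{[\,c(\theta)' v\,]^2}{v'\, D(\theta)\, v},
    \qquad
    v^{LO}(\theta) \;\propto\; D(\theta)^{-}\, c(\theta),

where D^{-} is a generalized inverse (D is singular precisely because of the linear-transformation invariance noted above), and v^{LO} still depends on \theta.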


Globally Optimal Weight

  • Weights that maximize the Expected Test Information under some reference distribution of θ.


Intrinsic Category Weight

  • A set of weights which maximizes:

  • Since the category weights can be linearly transformed, we set v0 = 0, …, vmax = maximum item score.
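
Concretely, with 3-category items and a maximum item score of 2, this normalization fixes

    v_0 = 0, \qquad v_2 = 2,

so only the middle category weight v_1 is left to be determined by the maximization; this matches the v0 = 0, v1 = *, v2 = 2 pattern in the example slides that follow.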



Example of Intrinsic Weights

  • h(θ) = N(-0.5, 1): v0 = 0, v1 = *, v2 = 2


Example of Intrinsic Weights

  • h(θ) = N(0.5, 1): v0 = 0, v1 = *, v2 = 2


Example of Intrinsic Weights

  • h(θ) = N(1, 1): v0 = 0, v1 = *, v2 = 2


Summary of Intrinsic Weight

  • It does NOT depend on θ, but depends on the reference distribution of θ, h(θ), as follows.

  • For the 3-category GRM, we found that

    • For those items with a high discrimination parameter, the intrinsic weights tend to become equally spaced: v0 = 0, v1 = 1, v2 = 2.

    • The Globally Optimal Weight is not identical to the Intrinsic Weights.


Summary of Intrinsic Weight

  • For the 3-category GRM, we found that

    • The mid-category weight v1 increases according to the location of the peak of the ICRF. That is:

      the easier the category is, the higher the weight.

    • v1 is affected by the relative locations of the other two category ICRFs.


Summary of Intrinsic Weight

  • For the 3-category GRM, we found that

    • The mid-category weight v1 decreases according to the location of the reference distribution of θ, h(θ).

    • If the location of h(θ) is high, the most difficult category gets a relatively high weight, and vice versa.

    • When the peak of the 2nd category ICRF matches the mean of h(θ), we have equally spaced category weights:

      v0 = 0, v1 = 1, v2 = 2



Test Information

        LOx       LO        GO        GOINT     CONST
        30.5320   30.1109   30.0948   29.5385   24.8868





Bayesian Estimation of θ from X

(1/0.18)^2 = 30.864
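
One way to read this number (an interpretation, stated here with the usual caveats rather than taken from the slide): a posterior standard deviation of about 0.18 corresponds to a precision of (1/0.18)^2 ≈ 30.9, which is of the same order as the test information values in the table above, in line with the usual approximation

    \mathrm{PSD}(\theta \mid x) \approx 1 / \sqrt{I_X(\theta)}.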


Conclusions

  • Each polytomous item has an Intrinsic Weight.

  • By maximizing the Expected Test Information with respect to either item or category weights, we can calculate the Globally Optimal Weights, which do not depend on θ.

  • Using the Globally Optimal Weights when evaluating the EAP of θ given X reduces the posterior variance.



Thank you for your kind attention. Thank you.

