
Item Response Theory in a Multi-level Framework





  1. Item Response Theory in a Multi-level Framework Saralyn Miller Meg Oliphint EDU 7309

  2. Agenda • Item Response Theory • Conceptual Overview • Show Models (Rasch, 2PL, 3PL) • Example • In Class Practice • Differential Item Functioning • Conceptual Overview • Example • In Class Practice

  3. Defining Terms • IRT • Item Response Theory – provides a framework for evaluating how well assessments work, and how well the individual items on those assessments work • DIF • Differential Item Functioning – examinees from different groups with the same ability perform differently on certain items • CTT • Classical Test Theory – • Observed Score = True Score + Error

  4. IRT vs. CTT – Situating IRT • IRT allows for greater reliability • IRT can be used in computerized adaptive testing (CAT) • IRT places item difficulty and examinee ability on the same scale • CTT is simple to compute • IRT can be analyzed using multi-level modeling

  5. How does IRT work?

  6. Defining Item Parameters • ai – discrimination parameter – proportional to the slope of the item characteristic curve (ICC) at its steepest point • bi – difficulty parameter – the point on the ability scale (θ) where the probability of a correct response, P(θ), is 0.5 • ci – guessing parameter – the lower asymptote of the ICC; accounts for the chance of answering correctly by guessing (roughly 1 / number of response choices)

  7. IRT formula 3 PL Model (which includes 1PL and 2PL) Rasch Model
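The formulas on this slide were images in the original deck; written out, these are the standard 3PL and Rasch forms:

```latex
% 3PL: discrimination a_i, difficulty b_i, guessing c_i
P_i(\theta) = c_i + (1 - c_i)\,\frac{1}{1 + e^{-a_i(\theta - b_i)}}

% Setting c_i = 0 gives the 2PL; additionally fixing a_i = 1
% gives the 1PL / Rasch model:
P_i(\theta) = \frac{e^{\theta - b_i}}{1 + e^{\theta - b_i}}
```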

  8. Example: LSAT data
• 5 test items; 1000 examinees

library(ltm)
head(LSAT)
  Item 1 Item 2 Item 3 Item 4 Item 5
1      0      0      0      0      0
2      0      0      0      0      0
3      0      0      0      0      0
4      0      0      0      0      1
5      0      0      0      0      1
6      0      0      0      0      1

  9. Example: LSAT data

descript(LSAT)
Descriptive statistics for the 'LSAT' data-set

Sample: 5 items and 1000 sample units; 0 missing values

Proportions for each level of response:
           0     1  logit
Item 1 0.076 0.924 2.4980
Item 2 0.291 0.709 0.8905
Item 3 0.447 0.553 0.2128
Item 4 0.237 0.763 1.1692
Item 5 0.130 0.870 1.9010

Frequencies of total scores:
     0  1  2   3   4   5
Freq 3 20 85 237 357 298
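The logit column in the descript() output is just the log-odds of a correct response for each item, log(p1/p0). A quick check of that arithmetic (sketched in Python for illustration; the slides use R, and the proportions are copied from the output above):

```python
import math

# 'logit' column of descript() = log-odds of a correct response
# per item; proportions of correct (1) responses from the output.
p_correct = {"Item 1": 0.924, "Item 2": 0.709, "Item 3": 0.553,
             "Item 4": 0.763, "Item 5": 0.870}
logits = {item: math.log(p / (1 - p)) for item, p in p_correct.items()}
for item, lg in logits.items():
    print(f"{item}: {lg:.4f}")  # matches the logit column (2.4980, 0.8905, ...)
```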

  10. Example: LSAT data

Pairwise Associations:
   Item i Item j p.value
1       1      5   0.565
2       1      4   0.208
3       3      5   0.113
4       2      4   0.059
5       1      2   0.028
6       2      5   0.009
7       1      3   0.003
8       4      5   0.002
9       3      4   7e-04
10      2      3   4e-04

Point Biserial correlation with Total Score:
       Included Excluded
Item 1   0.3618   0.1128
Item 2   0.5665   0.1531
Item 3   0.6181   0.1727
Item 4   0.5342   0.1444
Item 5   0.4351   0.1215

Cronbach's alpha:
                  value
All Items        0.2950
Excluding Item 1 0.2754
Excluding Item 2 0.2376
Excluding Item 3 0.2168
Excluding Item 4 0.2459
Excluding Item 5 0.2663

  11. ##Fitting the Rasch model##
fitRasch1<-rasch(LSAT,constraint=cbind(length(LSAT)+1,1))
summary(fitRasch1)

Call:
rasch(data = LSAT, constraint = cbind(length(LSAT) + 1, 1))

Model Summary:
   log.Lik      AIC      BIC
 -2473.054 4956.108 4980.646

Coefficients:
                value std.err   z.vals
Dffclt.Item 1 -2.8720  0.1287 -22.3066
Dffclt.Item 2 -1.0630  0.0821 -12.9458
Dffclt.Item 3 -0.2576  0.0766  -3.3635
Dffclt.Item 4 -1.3881  0.0865 -16.0478
Dffclt.Item 5 -2.2188  0.1048 -21.1660
Dscrmn         1.0000      NA       NA

Integration:
method: Gauss-Hermite
quadrature points: 21

Optimization:
Convergence: 0
max(|grad|): 6.3e-05
quasi-Newton: BFGS

  12. coef(fitRasch1,prob=TRUE,order=TRUE)
           Dffclt Dscrmn P(x=1|z=0)
Item 1 -2.8719712      1  0.9464434
Item 5 -2.2187785      1  0.9019232
Item 4 -1.3880588      1  0.8002822
Item 2 -1.0630294      1  0.7432690
Item 3 -0.2576109      1  0.5640489

patterns<-rbind("all.zeros"=rep(0,5),"mix1"=rep(0:1,length=5),"mix2"=rep(1:0,length=5),"all.ones"=rep(1,5))
residuals(fitRasch1,resp.patterns=patterns,order=FALSE)
          Item 1 Item 2 Item 3 Item 4 Item 5 Obs        Exp      Resid
all.zeros      0      0      0      0      0   3   5.016847 -0.9004457
mix1           0      1      0      1      0   0   2.739417 -1.6551184
mix2           1      0      1      0      1  28  23.314087  0.9704765
all.ones       1      1      1      1      1 298 323.237052 -1.4037121
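The P(x=1|z=0) column follows directly from the Rasch model with discrimination fixed at 1: at ability z = 0, the probability of a correct response is 1/(1 + exp(b)) for an item of difficulty b. A small check of that (Python for illustration; the slides use R, and the difficulties are copied from the output above):

```python
import math

# Rasch ICC with discrimination fixed at 1: probability of a
# correct response at ability theta for an item of difficulty b.
def rasch_p(theta, b):
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# At theta = 0 this reproduces the P(x=1|z=0) column:
print(f"{rasch_p(0.0, -2.8719712):.4f}")  # Item 1: 0.9464
print(f"{rasch_p(0.0, -0.2576109):.4f}")  # Item 3: 0.5640
```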

  13. Item Characteristic Curve
plot(fitRasch1,legend=TRUE,pch=rep(1:20,each=5),xlab="LSAT",col=rep(1:5,2),lwd=2,cex=1.2,sub=paste("Call:",deparse(fitRasch1$call)))

  14. Item Information Curve
plot(fitRasch1, type = "IIC", legend = TRUE, pch = rep(1:2, each = 5), xlab = "Attitude",col = rep(1:5, 2), lwd = 2, cex = 1.2, sub = paste("Call: ", deparse(fitRasch1$call)))

  15. Test Information Curve
info1<-plot(fitRasch1,type="IIC",items=0,lwd=2,xlab="LSAT")

  16. Multi-level analysis of IRT • Hierarchical generalized linear models (HGLM) • Framework used for the nesting structure of item responses. • We are going to focus on the intercept model where items are dichotomous. • Items are nested in examinees. • Item Responses (1st level) • Examinees (2nd level) • The HGLM model is a fully crossed design since all examinees answer all test items. • We will use a type of Rasch modeling.

  17. [Diagram: fully nested design vs. fully crossed design. In the nested design, each item (Items 1–6) belongs to only one person's branch; in the crossed design, every item is connected to both Person 1 and Person 2, since every examinee answers every item.]

  18. HGLM Rasch Model • At level 1, all items are inserted into the model and usually the last item is used as the reference item (intercept). • At level 2, we have fixed and random effects where examinee ability is random, but item difficulty is fixed.

  19. Multi-level Formulas At level 1 we model the log-odds of the probability that person j obtains a correct score (one) on item i. At level 2 under this model, intercepts are random: an examinee's ability is allowed to vary. Slopes are not random: item difficulties are fixed. Substituting the level-2 equations back into the level-1 model gives the probability that person j answers item i correctly.
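The level-1 and level-2 formulas on this slide were images in the original deck; in the standard HGLM Rasch formulation (with dummy indicators X for each non-reference item), they can be written as:

```latex
% Level 1: log-odds that person j answers item i correctly
\eta_{ij} = \log\frac{p_{ij}}{1 - p_{ij}}
          = \beta_{0j} + \sum_{q=1}^{k-1} \beta_{qj} X_{qij}

% Level 2: random intercept (examinee ability), fixed slopes
\beta_{0j} = \gamma_{00} + u_{0j}, \qquad u_{0j} \sim N(0, \tau)
\beta_{qj} = \gamma_{q0}

% Substituting back: probability that person j answers item i correctly
p_{ij} = \frac{1}{1 + \exp\!\left[-\left(\gamma_{00}
        + \sum_{q=1}^{k-1}\gamma_{q0}\,X_{qij} + u_{0j}\right)\right]}
```

Under this parameterization the difficulty of item q works out to -(γ00 + γq0), which is exactly the calculation applied to the glmmPQL output on slide 23.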

  20. Kyle's data multi level
kyle<-read.table("mlm2.txt",header=T)
##All items must be factors to use nlme###
kyle$person<-as.factor(kyle$person)
kyle$resp<-as.factor(kyle$resp)
kyle$i1<-as.factor(kyle$i1)
kyle$i2<-as.factor(kyle$i2)
kyle$i3<-as.factor(kyle$i3)
kyle$i4<-as.factor(kyle$i4)
kyle$i5<-as.factor(kyle$i5)
kyle$i6<-as.factor(kyle$i6)
kyle$i7<-as.factor(kyle$i7)
kyle$i8<-as.factor(kyle$i8)
kyle$i9<-as.factor(kyle$i9)
kyle$i10<-as.factor(kyle$i10)

  21. Kyle's data multi level
library(MASS)  ##glmmPQL() lives in MASS (which loads nlme)##
glmm.fit.kyle<-glmmPQL(resp~i1+i2+i3+i4+i5+i6+i7+i8+i9,random=~1|person,family=binomial,data=kyle)
summary(glmm.fit.kyle)

Linear mixed-effects model fit by maximum likelihood
Data: kyle
  AIC BIC logLik
   NA  NA     NA

Random effects:
Formula: ~1 | person
        (Intercept)  Residual
StdDev:    1.134212 0.9028756

Variance function:
Structure: fixed weights
Formula: ~invwt

Fixed effects: resp ~ i1 + i2 + i3 + i4 + i5 + i6 + i7 + i8 + i9
                Value Std.Error  DF   t-value p-value
(Intercept) -2.538158 0.7737393 171 -3.280379  0.0013
i11          5.844407 1.2270676 171  4.762906  0.0000
i21          4.554857 0.9629141 171  4.730283  0.0000
i31          5.056955 1.0357069 171  4.882613  0.0000
i41          3.551793 0.8852165 171  4.012344  0.0001
i51          3.551793 0.8852165 171  4.012344  0.0001
i61          2.548906 0.8606717 171  2.961531  0.0035
i71          1.806033 0.8671231 171  2.082787  0.0388
i81          2.307883 0.8604645 171  2.682136  0.0080
i91          0.510344 0.9462893 171  0.539311  0.5904

  22. Kyle's data multi level
###Rest of output###

Correlation:
    (Intr) i11    i21    i31    i41    i51    i61    i71    i81
i11 -0.571
i21 -0.725  0.475
i31 -0.675  0.444  0.561
i41 -0.784  0.510  0.646  0.602
i51 -0.784  0.510  0.646  0.602  0.697
i61 -0.800  0.514  0.654  0.609  0.708  0.708
i71 -0.787  0.503  0.640  0.595  0.694  0.694  0.710
i81 -0.798  0.512  0.651  0.606  0.706  0.706  0.720  0.709
i91 -0.711  0.449  0.572  0.532  0.622  0.622  0.639  0.634  0.639

Standardized Within-Group Residuals:
       Min         Q1        Med         Q3        Max
-4.1891522 -0.5501785  0.1875289  0.4873082  4.6409578

Number of Observations: 200
Number of Groups: 20

  23. Calculate Difficulties
Fixed effects: resp ~ i1 + i2 + i3 + i4 + i5 + i6 + i7 + i8 + i9
                Value Std.Error
(Intercept) -2.538158 0.7737393
i11          5.844407 1.2270676
i21          4.554857 0.9629141
i31          5.056955 1.0357069
i41          3.551793 0.8852165
i51          3.551793 0.8852165
i61          2.548906 0.8606717
i71          1.806033 0.8671231
i81          2.307883 0.8604645
i91          0.510344 0.9462893

To calculate an item's difficulty, negate the sum of the intercept and that item's coefficient; the reference item (I10) is simply the negated intercept:
I1  [-5.84 - (-2.54)] = -3.3
I2  -2.01
I3  -2.52
I4  -1.01
I5  -1.01
I6  -0.01
I7   0.73
I8   0.23
I9   2.03
I10  2.54
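That arithmetic can be sketched as follows (Python for illustration; the slides use R, and the values are copied from the glmmPQL fixed-effects output above):

```python
# Item difficulty from HGLM fixed effects: d_q = -(intercept + beta_q).
# The reference item i10 is absorbed into the intercept, so d_10 = -intercept.
intercept = -2.538158
betas = {"i1": 5.844407, "i2": 4.554857, "i3": 5.056955,
         "i4": 3.551793, "i5": 3.551793, "i6": 2.548906,
         "i7": 1.806033, "i8": 2.307883, "i9": 0.510344}
difficulties = {item: -(intercept + b) for item, b in betas.items()}
difficulties["i10"] = -intercept
for item, d in sorted(difficulties.items(), key=lambda kv: kv[1]):
    print(f"{item}: {d:.2f}")  # i1 easiest (about -3.31), i10 hardest (2.54)
```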

  24. Kyle's Data (single level)
kyle<-read.table("mlm2.txt",header=T)
library(psych)
library(ltm)
head(kyle)
  resp person id i1 i2 i3 i4 i5 i6 i7 i8 i9 i10 cons bcons denom
1    1      1  1  1  0  0  0  0  0  0  0  0   0    1     1     1
2    1      1  2  0  1  0  0  0  0  0  0  0   0    1     1     1
3    1      1  3  0  0  1  0  0  0  0  0  0   0    1     1     1
4    1      1  4  0  0  0  1  0  0  0  0  0   0    1     1     1
5    1      1  5  0  0  0  0  1  0  0  0  0   0    1     1     1
6    1      1  6  0  0  0  0  0  1  0  0  0   0    1     1     1

  25. Kyle's Data (single level)
##data is already stacked for multi-level analysis so data needs to be unstacked###
kyle$item<-rep(1:10, 20)
kyle.new<-kyle[,c(1,2,17)]
kyle1<-reshape(kyle.new, timevar="item", idvar="person", direction="wide")
head(kyle1)
   person resp.1 resp.2 resp.3 resp.4 resp.5 resp.6 resp.7 resp.8 resp.9 resp.10
1       1      1      1      1      1      1      1      1      1      1       0
11      2      1      1      1      1      1      1      1      1      0       1
21      3      1      1      1      1      1      1      1      1      0       0
31      4      1      1      1      1      1      1      0      1      1       0
41      5      1      1      1      1      1      0      1      1      0       0
51      6      1      1      1      1      1      1      0      1      0       0

##create new subset without "person" variable##
kyle2<-subset(kyle1,select=c(resp.1,resp.2,resp.3,resp.4,resp.5,resp.6,resp.7,resp.8,resp.9,resp.10))

  26. Kyle's Data (single level)
##constraints where disc=1###
fitRasch1<-rasch(kyle2,constraint=cbind(length(kyle2)+1,1))
summary(fitRasch1)

Call:
rasch(data = kyle2, constraint = cbind(length(kyle2) + 1, 1))

Model Summary:
   log.Lik      AIC      BIC
 -93.08932 206.1786 216.1360

Coefficients:
                 value std.err  z.vals
Dffclt.resp.1  -3.3917  1.0854 -3.1249
Dffclt.resp.2  -2.0704  0.7076 -2.9260
Dffclt.resp.3  -2.5864  0.8188 -3.1586
Dffclt.resp.4  -1.0371  0.5806 -1.7862
Dffclt.resp.5  -1.0371  0.5806 -1.7862
Dffclt.resp.6  -0.0071  0.5445 -0.0129
Dffclt.resp.7   0.7520  0.5652  1.3306
Dffclt.resp.8   0.2395  0.5468  0.4380
Dffclt.resp.9   2.0760  0.7124  2.9144
Dffclt.resp.10  2.5993  0.8245  3.1527
Dscrmn          1.0000      NA      NA

Integration:
method: Gauss-Hermite
quadrature points: 21

Optimization:
Convergence: 0
max(|grad|): 0.00025
quasi-Newton: BFGS

  27. Kyle's Data (single level)
#items ordered by difficulty and probability of positive response by the average individual#
coef(fitRasch1,prob=TRUE,order=TRUE)
              Dffclt Dscrmn P(x=1|z=0)
resp.1  -3.391711408      1 0.96744449
resp.3  -2.586410763      1 0.92998186
resp.2  -2.070364708      1 0.88798924
resp.4  -1.037146071      1 0.73829896
resp.5  -1.037111039      1 0.73829219
resp.6  -0.007050493      1 0.50176262
resp.8   0.239493993      1 0.44041105
resp.7   0.751995163      1 0.32038672
resp.9   2.076049140      1 0.11144661
resp.10  2.599328664      1 0.06918164

  28. Compare Difficulties (kyle data)

  29. Example in Class – Multi-level of LSAT data in ltm package
##need to reshape the data##
LSAT1<-reshape(LSAT,varying=list(1:5),direction="long")
LSAT1<-LSAT1[order(LSAT1$id),]
colnames(LSAT1)<-c("item","score","id")
LSAT1$item1<-ifelse(LSAT1$item==1,1,0)
LSAT1$item2<-ifelse(LSAT1$item==2,1,0)
LSAT1$item3<-ifelse(LSAT1$item==3,1,0)
LSAT1$item4<-ifelse(LSAT1$item==4,1,0)
LSAT1$item5<-ifelse(LSAT1$item==5,1,0)
LSAT1[1:15,]
###MAKE VARIABLES FACTORS###
###RUN ANALYSIS###
###COMPUTE DIFFICULTIES###

  30. 1. MAKE VARIABLES FACTORS
      2. RUN ANALYSIS
      3. COMPUTE DIFFICULTIES

  31. Compare Difficulties

  32. Example in Class – Multi-level of LSAT data in ltm package
glmm.fit.LSAT<-glmmPQL(score~item1+item2+item3+item4,random=~1|id,family=binomial,data=LSAT1)
summary(glmm.fit.LSAT)

Linear mixed-effects model fit by maximum likelihood
Data: LSAT1
  AIC BIC logLik
   NA  NA     NA

Random effects:
Formula: ~1 | id
        (Intercept)  Residual
StdDev:   0.8172182 0.8986588

Variance function:
Structure: fixed weights
Formula: ~invwt

Fixed effects: score ~ item1 + item2 + item3 + item4
                Value  Std.Error   DF    t-value p-value
(Intercept)  1.997380 0.09013816 3996  22.159099       0
item11       0.614355 0.13849689 3996   4.435876       0
item21      -1.057513 0.10750549 3996  -9.836829       0
item31      -1.776414 0.10453076 3996 -16.994170       0
item41      -0.763684 0.11004239 3996  -6.939908       0

Correlation:
       (Intr) item11 item21 item31
item11 -0.592
item21 -0.767  0.497
item31 -0.791  0.510  0.661
item41 -0.748  0.485  0.626  0.645

Standardized Within-Group Residuals:
       Min         Q1        Med         Q3        Max
-4.2531208  0.2231402  0.3957765  0.5798209  1.9214245

Number of Observations: 5000
Number of Groups: 1000

  33. Differential Item Functioning • DIF is a way to detect biased (unfair) questions on a given test • An item is said to have DIF if: • People in different groups • With the same ability • Answer the question differently • Classic Example: a math question that requires a heavy reading load, or questions about calculating ERA (which assume baseball knowledge)

  34. Differential Item Functioning • Can be detected using logistic regression: • Testing whether the group term for an item is statistically significant • Can be detected in a multi-level modeling framework: • Looking at the interaction effect between the grouping variable and that item • If the DIF estimate is larger than twice its standard error, the item is flagged as biased • Keep in mind: DIF = bad!
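The twice-the-standard-error rule of thumb can be sketched as follows (Python for illustration; the slides use R, and the estimates and standard errors are copied from the glmmPQL interaction output on slides 36–39):

```python
# Flag an item for DIF when the group-by-item interaction estimate
# exceeds twice its standard error in absolute value.
interactions = {"item1:gender1": (24.417536, 19800.750),
                "gender1:item2": (-1.175118, 0.219),
                "gender1:item3": (-2.155689, 0.216),
                "gender1:item4": (-0.379341, 0.226)}
flags = {term: abs(est) > 2 * se for term, (est, se) in interactions.items()}
for term, has_dif in flags.items():
    print(f"{term}: {'DIF' if has_dif else 'no DIF'}")
```

Note how item 1's huge standard error (19800.75) swamps its estimate, so it is not flagged, matching the conclusions on the following slides.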

  35. DIF Example with LSAT Data
> LSAT1$gender<-as.factor(rep(0:1, each=500))
> head(LSAT1)
    item score id item1 item2 item3 item4 item5 gender
1.1    1     0  1     1     0     0     0     0      0
1.2    2     0  1     0     1     0     0     0      0
1.3    3     0  1     0     0     1     0     0      0
1.4    4     0  1     0     0     0     1     0      0
1.5    5     0  1     0     0     0     0     1      0
2.1    1     0  2     1     0     0     0     0      0

  36. Item 1
glmm.dif.LSAT<-glmmPQL(score~0+item1*gender+item2*gender+item3*gender+item4*gender,random=~1|id,family=binomial,data=LSAT1)
summary(glmm.dif.LSAT)

Fixed effects: score ~ 0 + item1 * gender + item2 * gender + item3 * gender + item4 * gender
                  Value Std.Error   DF   t-value p-value
item1          0.330415     0.144 3993  2.299060  0.0216
gender0        1.534279     0.107  998 14.296862  0.0000
gender1        2.954049     0.162  998 18.206689  0.0000
item2         -0.665216     0.128 3993 -5.203006  0.0000
item3         -0.940232     0.126 3993 -7.469119  0.0000
item4         -0.695781     0.128 3993 -5.453253  0.0000
item1:gender1 24.417536 19800.750 3993  0.001233  0.9990
gender1:item2 -1.175118     0.219 3993 -5.360656  0.0000
gender1:item3 -2.155689     0.216 3993 -9.990973  0.0000
gender1:item4 -0.379341     0.226 3993 -1.675854  0.0938

Correlation:
              item1  gendr0 gendr1 item2  item3  item4  itm1:1 gnd1:2 gnd1:3
gender0       -0.604
gender1        0.000  0.000
item2          0.508 -0.684  0.000
item3          0.515 -0.697  0.000  0.583
item4          0.509 -0.686  0.000  0.574  0.584
item1:gender1  0.000  0.000  0.000  0.000  0.000  0.000
gender1:item2 -0.296  0.399 -0.676 -0.583 -0.340 -0.335  0.000
gender1:item3 -0.301  0.406 -0.695 -0.340 -0.583 -0.341  0.000  0.708
gender1:item4 -0.287  0.387 -0.650 -0.324 -0.329 -0.564  0.000  0.669  0.681

Standardized Within-Group Residuals:
          Min            Q1           Med            Q3           Max
-5.236435e+00 -3.620071e-11  4.086712e-01  7.028004e-01  1.950463e+00

Number of Observations: 5000
Number of Groups: 1000

LOOK! This item does not have DIF! (|24.42| is far less than 2 × 19800.75.)

  37. Item 2
Does this item have DIF? The relevant row of the same output:
gender1:item2 -1.175118 0.219 3993 -5.360656 0.0000
|-1.18| > 2 × 0.219, so YES =(

  38. Item 3
Does this item have DIF? The relevant row of the same output:
gender1:item3 -2.155689 0.216 3993 -9.990973 0.0000
|-2.16| > 2 × 0.216, so YES =(

  39. Item 4
Does this item have DIF? The relevant row of the same output:
gender1:item4 -0.379341 0.226 3993 -1.675854 0.0938
|-0.38| < 2 × 0.226, so NO =)

  40. Thank you for a great two years!
