
Validity defined…



Presentation Transcript


  1. Validity defined… • In science and statistics, validity has no single agreed definition, but it generally refers to the extent to which a concept, conclusion, or measurement is well founded and corresponds accurately to the real world. The word "valid" is derived from the Latin validus, meaning strong. The validity of a measurement tool (e.g., a test in education) is the degree to which the tool measures what it claims to measure. (Wikipedia)

  2. Reliability and Validity • The Woodcock-Johnson III and the Cognitive Abilities Test (Form 6): A Concurrent Validity Study, by David F. Lohman, University of Iowa, March 2003. • For the WJ-III, we used standard scores in all analyses. Since these are normed to a mean of 100 and an SD of 15 at each age, they can be used for both within-grade and across-grade analyses.
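As a rough illustration of that score scale (this is not the WJ-III norming procedure itself; the raw scores and the to_standard_score helper below are hypothetical), a raw score can be placed on a mean-100, SD-15 metric through a z-score:

```python
import statistics

def to_standard_score(raw, group_mean, group_sd):
    """Map a raw score onto a mean-100, SD-15 scale via a z-score."""
    z = (raw - group_mean) / group_sd
    return 100 + 15 * z

# Hypothetical raw scores for one age group (illustrative values only).
raw_scores = [42, 55, 61, 48, 70]
mu = statistics.mean(raw_scores)
sigma = statistics.stdev(raw_scores)

for raw in raw_scores:
    print(raw, round(to_standard_score(raw, mu, sigma), 1))
```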

  3. Reliability • Reliability (Examine subtest percentages. Report subtests with scores lower than 80%.) • 1. Inter-rater (Did the author(s) or others evaluate inter-rater reliability? If they did, how, and what were the results?) • 2. Internal consistency (Did the author(s) or others evaluate internal consistency? If they did, how, and what were the results?) • 3. Test-retest (Did the author(s) or others evaluate test-retest reliability? If they did, how, and what were the results?)
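For reference, a minimal sketch of how two of these indices are typically computed, using invented data (the test_retest and cronbach_alpha helpers, the item scores, and the retest scores are hypothetical, not from the study): test-retest reliability as a Pearson correlation between two administrations, and internal consistency as Cronbach's alpha over an examinee-by-item score matrix. Inter-rater reliability would similarly be a correlation or agreement index between two raters' scores.

```python
import numpy as np

def test_retest(time1, time2):
    """Test-retest reliability: Pearson r between two administrations."""
    return np.corrcoef(time1, time2)[0, 1]

def cronbach_alpha(items):
    """Internal consistency: Cronbach's alpha for an (examinees x items) matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 5 examinees, two administrations, and a 4-item scale.
t1 = [95, 102, 110, 88, 120]
t2 = [97, 100, 113, 90, 118]
item_scores = [[3, 4, 3, 4],
               [2, 3, 2, 3],
               [4, 4, 5, 4],
               [1, 2, 2, 1],
               [5, 5, 4, 5]]

print("test-retest r:", round(test_retest(t1, t2), 2))
print("Cronbach's alpha:", round(cronbach_alpha(item_scores), 2))
```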

  4. 1. Did the author or others evaluate inter-rater reliability? • The best conclusion seemed to be that the CogAT primarily measured something shared by the various WJ-III test clusters, and only secondarily abilities unique to each.

  5. 2. Internal consistency (Did the author(s) or others evaluate internal consistency?) • Although the Cognitive Abilities Test (CogAT) and the WJ-III are both based on hierarchical models of human abilities, • the CogAT focuses on general reasoning abilities, whereas • the WJ-III attempts to measure a much broader collection of stratum II abilities in Cattell-Horn-Carroll (CHC) theory.

  6. 3. Test-retest (Did the author(s) or others evaluate test-retest reliability? If they did, how, and what were the results?) • Yes: three different types of inter-battery analyses are reported. First, we report correlations between the nine cluster scores on the WJ-III and the four CogAT scores. • Second, we report correlations between the broad group factors that were represented in our models of each battery. • Third, we report the results of a confirmatory inter-battery factor analysis in which we estimate the correlation between the general factors on the two batteries.
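The first of those analyses is essentially a cross-battery correlation table. A minimal sketch with invented scores (the arrays below are placeholders with three clusters per battery, not the nine WJ-III cluster scores and four CogAT scores from the study): the full correlation matrix over both batteries is computed once, and the off-diagonal block holding the between-battery correlations is sliced out.

```python
import numpy as np

# Hypothetical standard scores for 6 examinees on three clusters of each battery.
battery_a = np.array([[105,  98, 110],
                      [ 92,  95,  90],
                      [118, 112, 120],
                      [101, 104,  99],
                      [ 88,  91,  85],
                      [110, 108, 115]])
battery_b = np.array([[102, 100, 107],
                      [ 94,  96,  93],
                      [115, 118, 119],
                      [100, 103, 101],
                      [ 90,  89,  87],
                      [112, 109, 111]])

# Cross-battery block: rows = battery A clusters, columns = battery B scores.
n_a = battery_a.shape[1]
full = np.corrcoef(battery_a, battery_b, rowvar=False)
cross = full[:n_a, n_a:]
print(np.round(cross, 2))
```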

  7. Lohman • To summarize, Lohman tested whether the covariances among the 13 WJ-III tests computed for our samples of second and fifth graders differed from the covariances among these tests observed in the standardization sample for children of roughly the same age. • For second graders, we found congruence, but only after eliminating Gv from our model. This is consistent with the hypothesis that abilities may exhibit a less differentiated structure for younger children.
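As a loose illustration of comparing a sample covariance matrix with a reference one (this is not Lohman's actual procedure, which relied on confirmatory factor models), one quick similarity check correlates the unique elements of the two matrices; the cov_similarity helper, the reference matrix, and the simulated sample below are all invented.

```python
import numpy as np

def cov_similarity(sample_scores, reference_cov):
    """Rough check: correlate the unique elements of the sample covariance
    matrix with those of a reference covariance matrix."""
    sample_cov = np.cov(sample_scores, rowvar=False)
    iu = np.triu_indices_from(sample_cov)   # unique elements incl. diagonal
    return np.corrcoef(sample_cov[iu], reference_cov[iu])[0, 1]

rng = np.random.default_rng(0)

# Hypothetical reference covariance matrix for three tests (invented values).
reference = np.array([[225.0, 120.0,  90.0],
                      [120.0, 225.0, 100.0],
                      [ 90.0, 100.0, 225.0]])

# Hypothetical sample drawn from a population with that covariance structure.
sample = rng.multivariate_normal(mean=[100, 100, 100], cov=reference, size=200)

print("similarity:", round(cov_similarity(sample, reference), 3))
```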
