
Presentation Transcript


  1. Chapter 3: How Standardized Test…. Lecture by: Chris Ross

  2. How Standardized Tests Are Used with Infants & Young Children • Types of Standardized Tests: • Ability => current level of knowledge or skill in a particular area. • Psychological tests such as intelligence, achievement, and aptitude tests are used to measure ability as well. • Achievement => related to the extent to which a person has acquired certain information or mastered identified skills. • Peabody Individual Achievement Test-Revised (measures achievement in math, reading recognition and comprehension, spelling, and general information).

  3. How Standardized Tests Are Used with Infants & Young Children • Types of Standardized Tests: • Aptitude => the potential to learn or develop proficiency in some area, provided that certain conditions exist or training is available. • The Stanford-Binet Intelligence Scale • Personality tests => measure a person’s tendency to behave in a particular way.

  4. How Standardized Tests Are Used with Infants & Young Children • Types of Standardized Tests: • Interest inventories => used to determine a person’s interest in a certain area or vocation and are not used with very young children. • Attitude measure => determines how a person is predisposed to think about or behave toward an object, event, institution, type of behavior, or person (group of people).

  5. How Standardized Tests Are Used with Infants & Young Children • Tests for Infants • Apgar Scale => is administered one and five minutes after birth to assess the health of the newborn. • Brazelton Neonatal Behavioral Assessment Scale => measures temperamental differences, nervous system functions, and the capacity of the neonate to interact. • The Gesell Developmental Schedules => first scales to measure infant development. • Several measures are discussed on pages 54-55

  6. How Standardized Tests Are Used with Infants & Young Children • Tests for Preschool Children • Screening Tests • Denver II • Ages and Stages Questionnaire • Brigance Screens • First Step Screening Test for Evaluating Preschoolers • Devereux Early Childhood Assessment • Many more tests are discussed on pages 56-58; 61

  7. How Standardized Tests Are Used with Infants & Young Children • Diagnostic Tests (pgs 58-59; 61) • Vineland Adaptive Behavior Scale • Stanford-Binet Intelligence Scale • Battelle Developmental Inventory-II • Language Tests (59-60; 61) • Preschool Language Scale • Pre-LAS • Achievement Tests (60-62) • National Reporting System

  8. How Standardized Tests Are Used with Infants & Young Children • Tests for School-Age Children (pg. 61-66) • Bilingual Syntax Measure II • Test of Visual-Motor Integration • Child Observation Record

  9. Steps in Standardized Test Design

  10. Specifying the Purpose of the Test • Purpose should be clearly defined • The APA provides guidelines for including the test’s purpose in the test manual. The standards are: • The test manual should state explicitly the purpose and applications for which the test is recommended. • The test manual should describe clearly the psychological, educational, and other reasoning underlying the test and the nature of the characteristic it is intended to measure.

  11. Determining Test Format • Remember that not all young children can write, so tests must be verbal, or the child must have another way to complete the assessment fairly. • Older children may take written tests (if able). • Some tests are designed to be administered individually, others in a group setting.

  12. Developing Experimental Forms • The process often involves writing, editing, trying out, and rewriting/revising the test items. • A preliminary test is assembled and given to a sample of students. Experimental test forms resemble the final form.
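
Trying out the experimental items produces the data for the item analysis mentioned on the next slide. As a minimal sketch (the response matrix and all numbers below are invented for illustration, not from the lecture), two classic item statistics are difficulty, the proportion of students answering an item correctly, and discrimination, the correlation between an item and students’ total scores:

```python
import statistics

# One row per student from the tryout sample; 1 = correct, 0 = incorrect.
responses = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 1],
    [1, 1, 1, 0],
    [0, 0, 0, 1],
]

totals = [sum(row) for row in responses]  # each student's total score

for item in range(len(responses[0])):
    scores = [row[item] for row in responses]
    difficulty = sum(scores) / len(scores)                    # proportion correct
    discrimination = statistics.correlation(scores, totals)  # item-total r
    print(f"item {item + 1}: difficulty = {difficulty:.2f}, "
          f"discrimination = {discrimination:.2f}")
```

An item with a negative discrimination (stronger students missing it more often than weaker ones) is a candidate for revision or removal before the final form is assembled.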

  13. Assembling The Test • After the item analysis, the final form of the test is created. • Test questions (or required behaviors) to measure each objective are selected. • Test directions are finalized, with instructions for test takers and administrators.

  14. Standardizing the Test • The final version of the test is administered to a larger population to acquire normative data. • Norms => provide the tool whereby children’s test performance can be compared with the performance of a reference group.
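
A hedged illustration of how norms are used (all numbers invented): a child’s raw score is placed against the norming group by converting it to a z-score, then to an approximate percentile rank via the normal curve:

```python
import statistics
from math import erf, sqrt

norm_group_scores = [12, 15, 17, 18, 20, 21, 22, 24, 25, 28]  # invented
child_raw_score = 23

mean = statistics.mean(norm_group_scores)
sd = statistics.stdev(norm_group_scores)

z = (child_raw_score - mean) / sd
percentile = 50 * (1 + erf(z / sqrt(2)))  # standard normal CDF x 100

print(f"z = {z:.2f}, percentile rank = {percentile:.0f}")
```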

  15. Developing the Test Manual • The final step in test design • Test developers must now explain the standardization information: describe the method used to select the norming group; report the number of individuals included in the standardization, along with their geographic areas, communities, socioeconomic groups, and ethnic groups; and include evidence of the validity and reliability of the test.

  16. Validity & Reliability • Validity => degree to which the test serves the purpose for which it will be used. • Reliability => extent to which a test is stable or consistent. • Content validity => The extent to which the content of a test, such as an achievement test, represents the objectives of the instructional program it is designed to measure.

  17. Validity & Reliability • Criterion-related validity => To establish the validity of a test, scores are correlated with an external criterion, such as another established test of the same type. • Concurrent validity => The extent to which scores on two measures are correlated when they are given at about the same time. • Construct validity => The extent to which a test measures a psychological trait or construct. Tests of personality, verbal ability, and critical thinking are examples of tests with construct validity.
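
An illustrative sketch of criterion-related validity (scores invented): the validity coefficient is simply the correlation between children’s scores on the new test and their scores on the external criterion:

```python
import statistics

new_test = [48, 52, 55, 60, 61, 67, 70, 74]   # hypothetical new-test scores
criterion = [45, 50, 58, 59, 65, 66, 72, 75]  # same children on an established measure

validity = statistics.correlation(new_test, criterion)
print(f"criterion-related validity r = {validity:.2f}")
```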

  18. Validity & Reliability • Alternative-form reliability => the correlation between results on alternative forms of a test. Reliability is the extent to which the two forms are consistent in measuring the same attributes. • Split-half reliability => a measure of reliability whereby scores of equivalent sections of a single test are correlated for internal consistency.
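
A minimal sketch of split-half reliability, assuming a 0/1 item-response matrix (data invented): correlate odd-item and even-item half scores, then apply the Spearman-Brown correction, since each half is only half the test’s length:

```python
import statistics

responses = [            # one row per student; 1 = correct, 0 = incorrect
    [1, 1, 1, 1, 1, 1],
    [1, 1, 1, 1, 1, 0],
    [1, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0],
]

odd_half = [sum(row[0::2]) for row in responses]   # items 1, 3, 5
even_half = [sum(row[1::2]) for row in responses]  # items 2, 4, 6

r_half = statistics.correlation(odd_half, even_half)
r_full = (2 * r_half) / (1 + r_half)  # Spearman-Brown correction

print(f"half-test r = {r_half:.2f}, corrected full-test r = {r_full:.2f}")
```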

  19. Validity & Reliability • Internal consistency => the degree of relationship among items on a test. A type of reliability that indicates whether items on the test are positively correlated and measure the same trait or characteristic. • Test-retest reliability => a type of reliability obtained by administering the same test a second time after a short interval and then correlating the two sets of scores.
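
Internal consistency is most often reported as Cronbach’s alpha, a statistic the slide describes without naming: alpha = k/(k−1) × (1 − Σ item variances / variance of total scores). A sketch using the same invented response matrix as the split-half example, so the two reliability estimates can be compared:

```python
import statistics

responses = [            # one row per student; 1 = correct, 0 = incorrect
    [1, 1, 1, 1, 1, 1],
    [1, 1, 1, 1, 1, 0],
    [1, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0],
]

k = len(responses[0])                     # number of items
totals = [sum(row) for row in responses]  # each student's total score
item_vars = [statistics.variance([row[i] for row in responses])
             for i in range(k)]

alpha = (k / (k - 1)) * (1 - sum(item_vars) / statistics.variance(totals))
print(f"Cronbach's alpha = {alpha:.2f}")  # about 0.89 for this sample
```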

  20. Factors That Affect Validity & Reliability • Some common factors are: • Reading ability • Testing room conditions • Memory • Physical condition of test taker • Lack of adherence to time limits • Lack of consistency

  21. Standard Error of Measurement • Standard error of measurement => an estimate of the possible magnitude of error present in the test scores. • True score => a hypothetical score on a test that is free of error. Because no standardized test is free of measurement error, a true score can never be obtained.
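
A standard formula for the SEM (not shown on the slide) is SEM = SD × √(1 − reliability). A sketch with assumed numbers, showing the confidence band the SEM places around an observed score:

```python
from math import sqrt

sd = 15.0           # standard deviation of the norming sample (assumed)
reliability = 0.91  # reported reliability coefficient (assumed)
observed = 104      # a child's observed score (assumed)

sem = sd * sqrt(1 - reliability)            # here 4.5 points
low, high = observed - sem, observed + sem  # ~68% confidence band

print(f"SEM = {sem:.1f}; true score likely within [{low:.0f}, {high:.0f}]")
```

Interpreting a score as a band rather than a point is how the true score, though never obtainable, can still be bracketed.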

  22. Standard Error of Measurement • What are some factors that can affect test reliability? • Population sample: a larger sample will generally yield a more reliable test. • Length of the test: longer tests are usually more reliable than shorter ones; more items measuring each objective give a better estimate of the true score and enhance reliability. • Range of test scores from the norming group: the wider the spread of scores, the more reliably the test can distinguish among them. The spread of test scores can be related to the number of students taking the test.
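
The effect of test length can be made concrete with the Spearman-Brown prophecy formula, which projects what the reliability would become if a test were lengthened by a factor n (a starting reliability of 0.70 is assumed here):

```python
r_original = 0.70  # assumed reliability of the current test

for n in (1, 2, 3, 4):  # 1x, 2x, 3x, 4x the original number of items
    r_new = (n * r_original) / (1 + (n - 1) * r_original)
    print(f"{n}x length -> projected reliability {r_new:.2f}")
```

Doubling the items raises the projected reliability from 0.70 to about 0.82; the gains then diminish as the test grows longer.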

  23. Considerations in Choosing & Evaluating Tests

  24. Considerations…. • Brown (1983) identifies factors that test users must consider: • Purpose of the test • Characteristics to be measured • How test results are to be used • Qualifications of people who interpret scores and use results • Practical constraints

  25. Considerations…. • Consider the quality of a test/measure/assessment. A good manual should include the following information: • Purpose of the test • Test design • Establishment of validity and reliability • Test administration and scoring
