
Introduction to Indirect Student-Learning Assessment (Part II)

This article summarizes a presentation on using indirect assessments to evaluate student learning, focusing on criteria, validity, item types and rating scales, and writing and assembling effective survey items. It also reviews common issues and considerations in conducting indirect assessments.


Presentation Transcript


  1. Introduction to Indirect Student-Learning Assessment (Part II) • Dr. Wayne W. Wilkinson • September 28, 2016 • ITTC Faculty Center, Arkansas State University

  2. Refresher: Indirect Assessments • Require the inference of student learning: • No direct evidence or demonstration • Common topics of indirect assessments: • Perceptions of successfully meeting program outcomes • Satisfaction/attitudes/feelings toward program • Utility of program

  3. Criteria: The Survey Must Have a Purpose • Standards used to help make evaluative judgments • Program outcomes, program quality, program utility . . . • Common issues: • Choosing the proper criteria to use (e.g., program outcomes) • Disagreements over definitions (what does “communicate effectively” mean?)

  4. Conceptual vs. Actual Criteria • Conceptual criteria: • Theoretical construct • Ideal set of quality factors • Actual criteria: • Operational definitions • Measures of the conceptual criteria

  5. Conceptual and Actual Criterion Relations • Conceptual criteria are theoretical . . . actual and conceptual should overlap • Criterion deficiency • Criterion contamination • Criterion relevance

  6. Content and Face Validity • Content Validity: The degree to which an assessment includes relevant aspects of the criteria • Determined by Subject Matter Experts (SMEs) • Face Validity: Do the items seem legitimate as a measure of the criteria? • Reactions and attitudes of test takers
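
The presentation does not say how SME judgments are combined, but a common way to summarize them (an assumption here, not part of the slides) is to have each expert rate every item's relevance to the criteria and report the proportion of experts who judge the item relevant, an item-level content validity index. A minimal Python sketch with invented ratings:

```python
# Hypothetical SME relevance ratings for two draft survey items,
# on a 4-point scale (1 = not relevant ... 4 = highly relevant).
sme_ratings = {
    "Item 1": [4, 3, 4, 4, 2],
    "Item 2": [2, 1, 3, 2, 2],
}

def item_cvi(ratings, relevant_threshold=3):
    """Proportion of experts rating the item as relevant (>= threshold)."""
    relevant = sum(1 for r in ratings if r >= relevant_threshold)
    return relevant / len(ratings)

for item, ratings in sme_ratings.items():
    print(f"{item}: content validity index = {item_cvi(ratings):.2f}")
# Items with low expert agreement are candidates for revision or removal.
```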

  7. Types of Items • Open-ended • Restricted (closed-ended) • Partially open-ended • Rating scales

  8. Rating Scales • Likert-type scale: • Aggregated ratings of favorable or unfavorable statements • Semantic differential: • Bipolar adjectives • Measure several variables on a common scale

  9. Rating Scale Issues • Number of points on scale: • Odd or even (neutral point) • Labeling of scale: • Number of anchors • Numerical values • Item-rating scale congruence
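
To make slide 8's "aggregated ratings" concrete, the sketch below scores a short Likert-type scale: negatively worded (unfavorable) statements are reverse-coded so that higher numbers always mean a more favorable response, and the ratings are then averaged. The item wordings, the 5-point format, and the anchor labels are illustrative assumptions, not material from the presentation:

```python
# Illustrative fully labeled 5-point agreement scale (odd number of points,
# so it has a neutral midpoint -- one of the design choices on slide 9).
ANCHORS = {1: "Strongly disagree", 2: "Disagree", 3: "Neither agree nor disagree",
           4: "Agree", 5: "Strongly agree"}
SCALE_MAX = 5

# Hypothetical responses from one student; True marks negatively worded items.
responses = [
    ("The program prepared me to meet its learning outcomes.", 4, False),
    ("The program was a poor use of my time.",                 2, True),
    ("I would recommend the program to other students.",       5, False),
]

def scale_score(items, scale_max=SCALE_MAX):
    """Reverse-code negatively worded items, then average all ratings."""
    coded = [(scale_max + 1 - rating) if reverse else rating
             for _, rating, reverse in items]
    return sum(coded) / len(coded)

print("Response options:", ", ".join(f"{v} = {label}" for v, label in ANCHORS.items()))
print(f"Scale score: {scale_score(responses):.2f} "
      f"(1 = unfavorable, {SCALE_MAX} = favorable)")
```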

  10. Writing Good Items • Avoid very long items • Avoid negative statements (no, none, not, etc.) • Aim for a 5th–6th grade reading level • Avoid “check all that apply” items

  11. Writing Good Items • Avoid items that express more than one thought (double-barreled items), e.g., “The program aided my understanding of selecting and conducting the proper analysis for specific research questions.” • Avoid evaluative assumptions, e.g., “The program provided an excellent preparation for my career.”
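
One way to check the reading-level guideline from slide 10 is a readability formula such as the Flesch-Kincaid grade level. The sketch below uses a rough syllable-counting heuristic, so treat it as a screening aid under that assumption rather than a validated tool:

```python
import re

def count_syllables(word):
    """Rough heuristic: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text):
    """Flesch-Kincaid grade level:
    0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59"""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

# Draft item from slide 11, used here as a test string; flag anything
# well above a 6th-grade reading level.
item = ("The program aided my understanding of selecting and conducting "
        "the proper analysis for specific research questions.")
grade = fk_grade(item)
flag = " -- consider simplifying" if grade > 6 else ""
print(f"Estimated grade level: {grade:.1f}{flag}")
```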

  12. Assembling Your Survey • Survey space and justification • Question organization: • Keep related items together • Question order can have unintended effects • Place sensitive-topic items after less sensitive ones • Graphic navigation paths
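
The ordering advice on slide 12 can be built directly into the assembly step. In the sketch below, hypothetical items are grouped into topical blocks and the blocks are ordered from least to most sensitive; the block names, items, and sensitivity rankings are assumptions for illustration:

```python
# Hypothetical item blocks; each block keeps related items together.
blocks = [
    ("Program outcomes",     ["I can analyze texts critically.",
                              "I can communicate effectively in writing."], 0),
    ("Program satisfaction", ["Overall, I am satisfied with the program."], 1),
    ("Program concerns",     ["I considered leaving the program at some point."], 2),
]

# Order blocks from least to most sensitive so sensitive items appear last.
survey_lines = []
for name, items, sensitivity in sorted(blocks, key=lambda block: block[2]):
    survey_lines.append(f"--- {name} ---")
    survey_lines.extend(items)

print("\n".join(survey_lines))
```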

  13. Survey Evaluation Session • Survey goal: English BA Program Indirect Assessment • Justification for each item • Criteria • Content Validity • Contamination or deficiency • Item issues • Common indirect assessment topics: • Perceptions of successfully meeting program outcomes • Satisfaction/attitudes/feelings toward program • Utility of program
