Reliability and Validity of Data Collection


Presentation Transcript


  1. Reliability and Validity of Data Collection

  2. Reliability of Measurement • Measurement is reliable when it yields the same values across repeated measures of the same event • Relates to repeatability • Not the same as accuracy • Low reliability signals suspect data

  3. Threats to Reliability • 1. Human error • Misrecording a data point • Errors usually result from poorly designed measurement systems • Cumbersome or difficult to use • Too complex • Can be reduced by using technology – Cameras

  4. 2. Inadequate observer training • Training must be explicit and systematic • Careful selection of observers • Must clearly define the target behavior • Train to a competency standard • Have ongoing training to minimize observer drift • Have backup observers observe the primary observers

  5. 3. Unintended influences on observers • Causes all sorts of problems • Expectations of what the data should look like • Observer reactivity when she/he is aware that others are evaluating the data • Measurement bias • Feedback to observers about how their data relates to the goals of intervention

  6. Solutions to Reliability Issues • Design a good measurement system • Take your time on the front end • Train observers carefully • Evaluate extent to which data are accurate and reliable • Measure the measurement system

  7. Accuracy of Measurement • Observed values match the true values of an event • Issue: Do not want to base research conclusions or treatment decisions on faulty data

  8. Purposes of accuracy assessment: • Determine if data are good enough to make decisions • Discover and correct measurement errors • Reveal consistent patterns of measurement error • Assure consumers that data are accurate

  9. Observed values must match true values • Determined by calculating correspondence of each data point with its true value • Accuracy assessment should be reported in research
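The point-by-point correspondence described on this slide can be sketched as a simple percentage of matches. This is an illustrative sketch, not code from the presentation; the function name and list-based inputs are assumptions.

```python
# Illustrative sketch (not from the presentation): accuracy as the
# percentage of observed data points that match their true values.
def accuracy_percentage(observed, true_values):
    """Return the percentage of observed values that match the true values."""
    if len(observed) != len(true_values):
        raise ValueError("observed and true_values must be the same length")
    matches = sum(1 for o, t in zip(observed, true_values) if o == t)
    return matches / len(observed) * 100
```

For example, if an observer's four recorded counts match the true values on three of four data points, accuracy is 75%.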

  10. Inter-observer Agreement (IOA) or Reliability (IOR) • Is the degree to which two or more independent observers report the same values for the same events • Used to: • Determine competency of observers • Detect observer drift • Judge clarity of definitions and system • Increase validity of the data

  11. Requirements for IOA / IOR • Observers must: • Use the same observation code and measurement system • Observe and measure the same participants and events • Observe and record independently of one another

  12. Methods to Calculate IOA / IOR • (Smaller Freq. / Larger Freq.) × 100 = percentage • Can be done with intervals as well • (Agreements / (Agreements + Disagreements)) × 100 • Methods can compare: • Total count recorded by each observer • Mean count-per-interval • Exact count-per-interval • Trial-by-trial
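The two formulas on this slide can be sketched as follows. This is a minimal illustration, assuming frequency counts for the first formula and parallel lists of per-interval records for the second; the function names are invented for the example.

```python
# Sketch of the two IOA formulas from the slide (names are illustrative).

def total_count_ioa(count_a, count_b):
    """(Smaller frequency / larger frequency) x 100."""
    smaller, larger = sorted((count_a, count_b))
    if larger == 0:
        return 100.0  # both observers recorded zero occurrences
    return smaller / larger * 100

def interval_ioa(obs_a, obs_b):
    """(Agreements / (agreements + disagreements)) x 100, interval by interval."""
    agreements = sum(1 for a, b in zip(obs_a, obs_b) if a == b)
    return agreements / len(obs_a) * 100
```

For instance, if one observer records 9 responses and the other 10, total count IOA is 90%; if two observers agree on 3 of 4 intervals, interval IOA is 75%.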

  13. Timing recording methods: • Total duration IOA • Mean duration-per-occurrence IOA • Latency-per-response • Mean IRT-per-response
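The first two timing-based methods above can be sketched in the same smaller-over-larger style. This is an assumed illustration (function names and inputs are not from the presentation): total duration IOA compares the two observers' total durations, while mean duration-per-occurrence IOA averages the per-occurrence agreement percentages.

```python
# Illustrative sketches of two timing-based IOA methods (names assumed).

def total_duration_ioa(duration_a, duration_b):
    """Shorter total duration / longer total duration x 100."""
    shorter, longer = sorted((duration_a, duration_b))
    if longer == 0:
        return 100.0  # neither observer recorded any duration
    return shorter / longer * 100

def mean_duration_per_occurrence_ioa(durations_a, durations_b):
    """Average the duration IOA computed separately for each occurrence."""
    per_occurrence = [total_duration_ioa(a, b)
                      for a, b in zip(durations_a, durations_b)]
    return sum(per_occurrence) / len(per_occurrence)
```

Latency-per-response and mean IRT-per-response IOA follow the same per-occurrence pattern, applied to latencies and inter-response times instead of durations.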

  14. Interval recording and Time sampling: • Interval-by-interval IOA (Point by point) • Scored-interval IOA • Unscored-interval IOA
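The scored- and unscored-interval variants restrict the calculation to the intervals that matter for each case: scored-interval IOA uses only intervals in which at least one observer scored the behavior, and unscored-interval IOA uses only intervals in which at least one observer recorded a nonoccurrence. A minimal sketch, assuming each observer's record is a list of 1/0 interval marks (function names are illustrative):

```python
# Illustrative sketches (names assumed); obs_a and obs_b are parallel
# lists of 1 (behavior scored) / 0 (not scored) for each interval.

def scored_interval_ioa(obs_a, obs_b):
    """Use only intervals in which either observer scored the behavior."""
    relevant = [(a, b) for a, b in zip(obs_a, obs_b) if a or b]
    agreements = sum(1 for a, b in relevant if a and b)
    return agreements / len(relevant) * 100

def unscored_interval_ioa(obs_a, obs_b):
    """Use only intervals in which either observer recorded a nonoccurrence."""
    relevant = [(a, b) for a, b in zip(obs_a, obs_b) if not a or not b]
    agreements = sum(1 for a, b in relevant if not a and not b)
    return agreements / len(relevant) * 100
```

These variants are more conservative than interval-by-interval IOA for low-rate and high-rate behaviors respectively, since they ignore the intervals where agreement is almost guaranteed by chance.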

  15. Considerations in IOA • During each condition and phase of a study • Distributed across days of the week, time of day, settings, observers • Minimum of 20% of sessions, preferably 25-30% • More frequent with complex systems

  16. Considerations in IOA • Obtain and report IOA at the same levels at which researchers will report and discuss it within the results • For each behavior • For each participant • In each phase of intervention or baseline

  17. Other Considerations • More conservative methods should be used • Methods that will overestimate actual agreement should be avoided • If in doubt, report more than one calculation • 80% agreement usually the benchmark • Higher the better • Depends upon the complexity of the measurement system

  18. Reporting IOA • Can use • Narrative • Tables • Graphs • Report how, when, and how often IOA was assessed

  19. Validity • Many types • Are you measuring what you believe you are measuring? • Ensures the data are representative • In ABA, usually measure: • a socially significant behavior • a dimension of the behavior relevant to the question

  20. Threats to Validity • Measuring a behavior other than the behavior of interest • Measuring a dimension that is irrelevant or ill-suited to the reason for measuring behavior • Measurement artifacts • Must provide evidence that the behavior measured is directly related to the behavior of interest

  21. Examples • Discontinuous measurement • Poorly scheduled observations • Insensitive or limiting measurement scales

  22. Conclusions • Reliability and validity of data collection are important • Impacts the client • Impacts your reputation for good work
