Reliability and Validity of Data Collection

Reliability of Measurement

  • Measurement is reliable when it yields the same values across repeated measures of the same event

  • Relates to repeatability

  • Not the same as accuracy

  • Low reliability signals suspect data


Threats to Reliability

  • 1. Human error

    • Misrecording a data point

  • Errors usually result from poorly designed measurement systems

    • Cumbersome or difficult to use

    • Too complex

  • Can be reduced by using technology (e.g., cameras)


2. Inadequate observer training

  • Training must be explicit and systematic

  • Careful selection of observers

  • Must clearly define the target behavior

  • Train to a competency standard

  • Provide ongoing training to minimize observer drift

  • Have backup observers observe the primary observers


3. Unintended influences on observers

  • Can cause many kinds of measurement problems

    • Expectations of what the data should look like

    • Observer reactivity when the observer is aware that others are evaluating the data

    • Measurement bias

    • Feedback to observers about how their data relate to the goals of intervention


Solutions to Reliability Issues

  • Design a good measurement system

    • Take your time on the front end

  • Train observers carefully

  • Evaluate the extent to which data are accurate and reliable

  • Measure the measurement system


Accuracy of Measurement

  • Observed values match the true values of an event

  • Issue: we do not want to base research conclusions or treatment decisions on faulty data


Purposes of Accuracy Assessment

  • Determine if data are good enough to make decisions

  • Discover and correct measurement errors

  • Reveal consistent patterns of measurement error

  • Assure consumers that data are accurate


Observed Values Must Match True Values

  • Determined by calculating the correspondence of each data point with its true value (a small sketch follows this list)

  • Accuracy assessment should be reported in research
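
A minimal sketch of that correspondence check, assuming the true values come from some criterion record such as a video score; the function name and example data are illustrative, not from the presentation:

```python
# Sketch of an accuracy assessment; names and data are illustrative.

def accuracy(observed: list[int], true_values: list[int]) -> float:
    """Percentage of data points whose observed value matches the true value."""
    matches = sum(o == t for o, t in zip(observed, true_values))
    return matches / len(true_values) * 100


# Observer's session counts vs. true values taken from a video record.
observed = [4, 7, 5, 9, 6]
true_values = [4, 7, 6, 9, 6]
print(accuracy(observed, true_values))  # 80.0
```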


Interobserver Agreement (IOA) or Reliability (IOR)

  • The degree to which two or more independent observers report the same values for the same events

  • Used to:

    • Determine competency of observers

    • Detect observer drift

    • Judge the clarity of definitions and the measurement system

    • Increase validity of the data


Requirements for IOA / IOR

  • Observers must:

    • Use the same observation code and measurement system

    • Observe and measure the same participants and events

    • Observe and record independently of one another


Methods to Calculate IOA / IOR

  • Total count: (smaller frequency / larger frequency) × 100 = percentage of agreement (see the sketch after this list)

  • Can also be done with intervals

    • (Agreements / (Agreements + Disagreements)) × 100

  • Methods can compare:

    • Total count recorded by each observer

    • Mean count-per-interval

    • Exact count-per-interval

    • Trial-by-trial
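
As a concrete illustration of the two formulas above, here is a minimal Python sketch; the function names and example data are illustrative, not from the presentation:

```python
# Minimal sketch of the IOA formulas above; names and data are illustrative.

def total_count_ioa(count_a: int, count_b: int) -> float:
    """Total count IOA: (smaller count / larger count) * 100."""
    if count_a == 0 and count_b == 0:
        return 100.0  # both observers agree that nothing occurred
    return min(count_a, count_b) / max(count_a, count_b) * 100


def interval_agreement_ioa(obs_a: list[bool], obs_b: list[bool]) -> float:
    """Interval agreement: agreements / (agreements + disagreements) * 100."""
    agreements = sum(x == y for x, y in zip(obs_a, obs_b))
    return agreements / len(obs_a) * 100


# Observer A counts 18 responses; observer B counts 20.
print(total_count_ioa(18, 20))  # 90.0

# Ten intervals scored by each observer (True = behavior recorded).
a = [True, True, False, True, False, False, True, True, False, True]
b = [True, False, False, True, False, True, True, True, False, True]
print(interval_agreement_ioa(a, b))  # 80.0
```

Mean count-per-interval IOA applies the same smaller/larger ratio interval by interval and averages the results; exact count-per-interval IOA counts only the intervals in which both observers record exactly the same value.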


Timing Recording Methods

  • Total duration IOA

  • Mean duration-per-occurrence IOA (both are sketched after this list)

    • Mean latency-per-response IOA uses the same per-occurrence approach

    • Mean IRT-per-response IOA uses the same per-occurrence approach

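Under the same assumptions (illustrative names, durations in seconds, nonzero values), a sketch of the timing-based methods might look like this:

```python
# Sketch of the timing IOA methods above; names and example durations are illustrative.

def total_duration_ioa(total_a: float, total_b: float) -> float:
    """Total duration IOA: (shorter total duration / longer total duration) * 100."""
    return min(total_a, total_b) / max(total_a, total_b) * 100


def mean_duration_per_occurrence_ioa(durs_a: list[float], durs_b: list[float]) -> float:
    """Average the shorter/longer ratio across paired occurrences.

    The same calculation applies to latency-per-response and IRT-per-response
    data; simply pass the paired latencies or IRTs instead of the durations.
    """
    ratios = [min(a, b) / max(a, b) * 100 for a, b in zip(durs_a, durs_b)]
    return sum(ratios) / len(ratios)


# Two observers time three occurrences of the behavior (seconds).
a = [30.0, 12.0, 45.0]
b = [25.0, 12.0, 50.0]
print(total_duration_ioa(sum(a), sum(b)))      # 100.0 -- the totals happen to match
print(mean_duration_per_occurrence_ioa(a, b))  # ~91.1 -- a more conservative estimate
```

Note how the total duration figure can mask per-occurrence disagreement, which is why mean duration-per-occurrence IOA is generally the more conservative choice.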

Interval Recording and Time Sampling

  • Interval-by-interval IOA (point-by-point)

  • Scored-interval IOA

  • Unscored-interval IOA (all three are sketched after this list)

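A minimal sketch of these three interval-based variants, again with illustrative names and assuming each observation is a list of booleans (one entry per interval, True = behavior scored, with at least one relevant interval per variant):

```python
# Sketch of the interval-based IOA variants above; names are illustrative.

def interval_by_interval_ioa(a: list[bool], b: list[bool]) -> float:
    """Point-by-point agreement across all intervals."""
    agreements = sum(x == y for x, y in zip(a, b))
    return agreements / len(a) * 100


def scored_interval_ioa(a: list[bool], b: list[bool]) -> float:
    """Only intervals in which at least one observer scored the behavior."""
    relevant = [(x, y) for x, y in zip(a, b) if x or y]
    agreements = sum(x and y for x, y in relevant)
    return agreements / len(relevant) * 100


def unscored_interval_ioa(a: list[bool], b: list[bool]) -> float:
    """Only intervals in which at least one observer left the interval unscored."""
    relevant = [(x, y) for x, y in zip(a, b) if not (x and y)]
    agreements = sum(not x and not y for x, y in relevant)
    return agreements / len(relevant) * 100
```

Scored-interval IOA is the more conservative measure for low-rate behavior, and unscored-interval IOA for high-rate behavior, which ties into the "more conservative methods" point below.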

Considerations in IOA

  • Assess IOA during each condition and phase of a study

  • Distribute assessments across days of the week, times of day, settings, and observers

  • Assess a minimum of 20% of sessions, preferably 25-30%

  • Assess more frequently with complex measurement systems


Considerations in IOA (continued)

  • Obtain and report IOA at the same levels at which the results will be reported and discussed

    • For each behavior

    • For each participant

    • In each phase of intervention or baseline


Other Considerations

  • More conservative methods should be used

  • Methods that will overestimate actual agreement should be avoided

  • If in doubt, report more than one calculation

  • 80% agreement is usually the benchmark

    • The higher, the better

    • The acceptable level depends on the complexity of the measurement system


Reporting IOA

  • Can use

    • Narrative

    • Tables

    • Graphs

  • Report how, when, and how often IOA was assessed


Validity

  • Many types

  • Are you measuring what you believe you are measuring?

    • Ensures the data are representative

  • In ABA, we usually measure:

    • A socially significant behavior

    • A dimension of the behavior relevant to the question


Threats to Validity

  • Measuring a behavior other than the behavior of interest

  • Measuring a dimension that is irrelevant or ill-suited to the reason for measuring behavior

  • Measurement artifacts

  • Must provide evidence that the behavior measured is directly related to the behavior of interest


Examples

  • Discontinuous measurement

  • Poorly scheduled observations

  • Insensitive or limiting measurement scales


Conclusions

  • Reliability and validity of data collection are important

  • They impact the client

  • They impact your reputation for good work

