
Oral Health Training & Calibration Programme



Presentation Transcript


  1. Oral Health Training & Calibration Programme Epidemiology-Calibration WHO Collaborating Centre for Oral Health Services Research

  2. Oral Health Clinical Survey • Oral Health Clinical Examination Tool • Dentate Status • Prosthetic Status and Prosthetic Treatment Needs • Mucosal Status • Occlusal Status • Orthodontic Treatment Status • Fluorosis (Dean’s Index) • Gingival Index • Debris and Calculus Indices • Attachment Loss and Probing Score • Tooth Status Chart • Count of Tooth Surfaces with Amalgam • Trauma Index • Treatment and Urgent Needs

  3. Training and Calibration • Training for: Dentate Status, Prosthetic Status, Mucosal Status, Fluorosis, Orthodontic Status, Orthodontic Treatment Status, Periodontal Assessments, Tooth Status, Amalgam Count, Traumatic Injury, Treatment Needs • Calibration for: Fluorosis, Occlusal Status, Periodontal Assessments, Tooth Status, Amalgam Count • Magnification is not allowed for examinations

  4. Calibration Objectives • Define Epidemiology and Index • Discuss Validity and Reliability • Examiner Comparability Statistics • Inter- and Intra-Examiner Calibration

  5. Suggested 4 Day Calibration Training

  6. Suggested 4 Day Calibration Training cont.

  7. Epidemiology • The study of the distribution and determinants of health-related states or events in specified populations, and the application of this study to the control of health problems. • From the Greek ‘epi demos logos’: ‘science upon the people’

  8. Measurement of Oral Disease • We use indices: • as a numerical expression to give a group’s relative position on a graded scale with a defined upper and lower limit. • as a standardised method of measurement that allows comparisons to be drawn with others measured with the same index. • to define the stage of disease; not absolute presence or absence.

  9. Desirable characteristics of an index • Valid • Reliable • Acceptable • Easy to use • Amenable to statistical analysis

  10. Prevalence • The number of cases in a defined population at a particular point in time • Describes a group at a certain point in time, like a snapshot in time • Expressed as a rate, e.g. x per 1000 population
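As a hedged illustration of how a prevalence rate per 1,000 might be computed (the function and figures below are hypothetical and are not taken from the survey):

```python
def prevalence_per_1000(cases: int, population: int) -> float:
    """Prevalence expressed as a rate per 1,000 population."""
    return cases / population * 1000

# Hypothetical example: 240 children with caries in a sample of 1,600
print(prevalence_per_1000(240, 1600))  # 150.0 per 1,000
```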

  11. Descriptive study Simple description of the health status of a population or community. No effort to link exposures and effects. For example: % with caries % with periodontal disease

  12. Uses of a Prevalence Study • Planning • Targeting • Monitoring • Comparing: International; Regional

  13. Validity and Reliability [figure: diagrams contrasting combinations of valid/not valid and reliable/not reliable examinations, labelled as unbiased or biased]

  14. Validity • Success in measuring what you set out to measure • Being trained by a Gold Standard trainer ensures validity by: training on what is proposed to be measured; confirming that everyone is measuring the same thing (“singing from the same hymn book”)

  15. Reliability • The extent to which the clinical examination yields the same result on repeated inspection. • Inter-examiner reliability: reproducibility between examiners • Intra-examiner reliability: reproducibility within examiners

  16. Reliability • Calibration ensures inter- and intra-examiner reliability and allows: international comparisons; regional comparisons; temporal comparisons • Without calibration, are any differences real or due to examiner variability?

  17. Examiner Reliability Statistics • Used when: • Training and calibrating examiners in a new index against a Gold Standard Examiner • Re-calibrating examiners against a Gold Standard Examiner

  18. Examiner Reliability Statistics • Two measures are used: Percentage Agreement and the Kappa Statistic

  19. Percentage Agreement • Percentage agreement is one method of measuring examiner reliability. • It is the number of judgements on which the two examiners agreed, expressed as a percentage of the total number of judgements made.

  20. Example – Percentage Agreement Percentage Agreement is equal to the sum of the diagonal values divided by the overall total and multiplied by 100.

  21. Example – Percentage Agreement • Number of agreements = sum of diagonals • = 61 • Total number of cases = overall total • = 100 • Percentage agreement = 61%
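As a minimal sketch of this calculation, assuming two examiners' scores for the same cases are held in parallel lists (the scores below are hypothetical and do not come from the example table):

```python
def percentage_agreement(scores_a, scores_b):
    """Percentage of cases on which two examiners recorded the same score."""
    if len(scores_a) != len(scores_b):
        raise ValueError("Both examiners must score the same cases")
    agreements = sum(a == b for a, b in zip(scores_a, scores_b))
    return agreements / len(scores_a) * 100

# Hypothetical scores for five tooth surfaces (0 = sound, 1 = decayed)
examiner_1 = [0, 1, 1, 0, 1]
examiner_2 = [0, 1, 0, 0, 1]
print(percentage_agreement(examiner_1, examiner_2))  # 80.0
```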

  22. Kappa Statistic • The Kappa Statistic measures the agreement between the evaluations of two examiners when both are rating the same objects. • It describes agreement achieved beyond chance, as a proportion of that agreement which is possible beyond chance.

  23. Kappa Statistic • Interpreting Kappa • The value of the Kappa Statistic ranges from 0 - 1.00, with larger values indicating better reliability. A value of 1 indicates perfect agreement. A value of 0 indicates that agreement is no better than chance. • Generally, a Kappa > 0.60 is considered satisfactory.

  24. Interpreting Kappa • 0.00 Agreement is no better than chance • 0.01-0.20 Slight agreement • 0.21-0.40 Fair agreement • 0.41-0.60 Moderate agreement • 0.61-0.80 Substantial agreement • 0.81-0.99 Almost perfect agreement • 1.00 Perfect agreement
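The banding above can be codified directly; the helper below is a sketch only (band edges follow slide 24, with no claim about how boundary values should be rounded in practice):

```python
def interpret_kappa(kappa: float) -> str:
    """Map a Kappa value onto the descriptive bands listed on slide 24."""
    if kappa >= 1.00:
        return "Perfect agreement"
    if kappa > 0.80:
        return "Almost perfect agreement"
    if kappa > 0.60:
        return "Substantial agreement"
    if kappa > 0.40:
        return "Moderate agreement"
    if kappa > 0.20:
        return "Fair agreement"
    if kappa > 0.00:
        return "Slight agreement"
    return "Agreement is no better than chance"

print(interpret_kappa(0.65))  # Substantial agreement
```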

  25. Kappa Statistic • The formula for calculating the Kappa Statistic is: Kappa = (PO - PE) / (1 - PE), where PO is the observed proportion of agreement and PE is the proportion of agreement expected by chance.

  26. Example – Kappa Statistic PO is the sum of the diagonals divided by the overall total.

  27. Example - Kappa Statistic PE is the sum, over all categories, of each row total multiplied by the corresponding column total, divided by the square of the overall total.

  28. Example - Kappa Statistic • Number of agreements = sum of diagonals = 61 • Total number of cases = overall total = 100 • PO = 0.61

  29. Example - Kappa Statistic
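The transcript does not reproduce the table behind this final example, so the following is a hedged sketch that computes PO, PE, and Kappa from a hypothetical 2 x 2 agreement matrix whose diagonal sums to 61 of 100 cases, matching the PO = 0.61 quoted above; the off-diagonal counts are invented for illustration:

```python
def cohens_kappa(matrix):
    """Cohen's Kappa from a square agreement matrix.

    matrix[i][j] = number of cases scored i by examiner A and j by examiner B.
    """
    n = sum(sum(row) for row in matrix)
    k = len(matrix)
    row_totals = [sum(row) for row in matrix]
    col_totals = [sum(matrix[i][j] for i in range(k)) for j in range(k)]

    p_o = sum(matrix[i][i] for i in range(k)) / n                        # observed agreement
    p_e = sum(row_totals[i] * col_totals[i] for i in range(k)) / n ** 2  # agreement expected by chance

    return (p_o - p_e) / (1 - p_e)

# Hypothetical agreement matrix: diagonal sums to 61 of 100 cases (PO = 0.61)
matrix = [[40, 19],
          [20, 21]]
print(round(cohens_kappa(matrix), 2))  # 0.19
```

With these invented counts, PE works out at 0.518 and Kappa at roughly 0.19, which the bands above would class as only slight agreement; a calibration exercise would normally aim for values above 0.60.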

  30. References • Cohen J. A coefficient of agreement for nominal scales. Educational and Psychological Measurement 1960; 20: 37-46. • Cohen J. Weighted kappa: nominal scale agreement with provision for scaled disagreement or partial credit. Psychological Bulletin 1968; 70: 213-220.
