Intermediate methods in observational epidemiology 2008

Quality Assurance and Quality Control

Threats to Causal Inference in Epidemiologic Studies

Threat         Solution
Confounding    • Experimental Design
               • Adjustment/Control
Bias           • Quality Assurance
               • Quality Control


Definitions of Quality Assurance and Quality Control

QA: Activities to assure quality of data that take place prior to data collection (through protocol and manuals of operation)

QC: Efforts during the study to monitor the quality of data at identified points during the collection and processing of data

STEPS IN QUALITY ASSURANCE

(1) Specify the hypothesis(es)

(2) Specify the general design -- develop the protocol

(3) Select or prepare the data collection instruments, and develop procedures for data collection/processing -- develop operation manuals

(4) Train staff -- certify staff

(5) Using certified staff, pre-test and pilot-test the instruments and procedures. The pre-test is based on a "grab" (convenience) sample; the pilot study is based on a sample as similar as possible to the study population. In the pilot study, assess alternative strategies for data collection, e.g., telephone vs. in-person interviews.

(6) Modify (2) and (3), and retrain staff, on the basis of the results of (5)


QUALITY CONTROL PROCEDURES: TYPES

1. Observation monitoring

“Over the shoulder” observation of staff by experienced supervisor(s) to identify problems in the implementation of the protocol.

Example: taping of interviews


QUALITY CONTROL PROCEDURES: TYPES

1. Observation monitoring

2. Quantitative monitoring

  • Random repeat (phantom) measurements based on either internal or external pools (biologic samples) to examine:
    • Intra-observer variability
    • Inter-observer variability
  • Advantages:
    • Better overall quality of data
    • Measurement of reliability

Phantom sample based on an internal pool

An internal phantom sample is created from the study-base blood samples (e.g., of 7 participants) and split into aliquots:
  • Aliquot 1: measurement in the gold standard lab
  • Aliquot 2: measurement in the study lab


Phantom sample based on an external pool

A phantom sample obtained from the gold standard lab is inserted among the study-base blood samples (e.g., of 7 participants) and split into aliquots:
  • Aliquot 1: measurement in the gold standard lab
  • Aliquot 2: measurement in the study lab


QUALITY CONTROL PROCEDURES: TYPES

1. Observation monitoring

2. Quantitative monitoring

- Random repeat measurements

  • Monitoring of individual technicians for deviations from expected values. Example: monitoring of digit preference for blood pressure readings (expected: 10% for each terminal digit).
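To make the digit-preference check concrete, here is a minimal Python sketch; the readings and the flagging threshold are hypothetical, not from the presentation, and a formal check would typically use a chi-square test against the uniform 10% expectation.

```python
from collections import Counter

def terminal_digit_shares(readings):
    """Tally the terminal digit of each recorded blood-pressure value and
    return, for each digit 0-9, its count and its percentage of all readings.
    With no digit preference, each terminal digit is expected about 10% of the time."""
    digits = [int(r) % 10 for r in readings]
    n = len(digits)
    counts = Counter(digits)
    return {d: (counts.get(d, 0), 100 * counts.get(d, 0) / n) for d in range(10)}

# Hypothetical readings from one technician, showing a preference for terminal 0
bp_readings = [120, 130, 128, 140, 110, 120, 135, 120, 122, 130, 120, 118, 140, 120]
for digit, (count, pct) in terminal_digit_shares(bp_readings).items():
    flag = "  <-- deviates markedly from the expected 10%" if abs(pct - 10) > 15 else ""
    print(f"terminal digit {digit}: {count} readings ({pct:.1f}%){flag}")
```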
Quality Control Indices
  • Validity (Accuracy)
  • Precision (Repeatability, Reliability)

Validity (accuracy): usually estimated by calculating sensitivity and specificity. The study (observed) measurement (the "test") is compared with a more accurate method (the "gold standard").

When a clear-cut gold standard is not available: "inter-method reliability"

Problem: limited to 2 x 2 tables
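As a small illustration of how sensitivity and specificity come out of such a 2 x 2 table, here is a minimal Python sketch; the cell counts are hypothetical, not from the presentation.

```python
def sensitivity_specificity(tp, fp, fn, tn):
    """Sensitivity = true positives / all gold-standard positives (tp + fn);
    specificity = true negatives / all gold-standard negatives (tn + fp)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical 2 x 2 cell counts: study test result vs. gold standard
tp, fp, fn, tn = 45, 10, 5, 140
se, sp = sensitivity_specificity(tp, fp, fn, tn)
print(f"sensitivity = {se:.2f}, specificity = {sp:.2f}")  # 0.90 and 0.93
```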


...Thus, traditional reliability indices (e.g., kappa, correlation coefficient) can also be used to estimate the validity of continuous variables or variables with more than 2 categories, by cross-tabulating the study results against the gold standard results.

Reliability: Sources of Variability
  • Measurement Error
    • Instrument/Technique/Lab
    • Observer/Technician
      • Intra-observer
      • Inter-observer
  • Intra-individual (physiologic)
Design of a study to evaluate sources of variability
(Based on Chambless et al, Am J Epidemiol 1992;136:1069-1081)

Blood is collected from an individual (1st measurement), and a repeat blood collection is done in the same individual some time later (replicate measurement) to measure within-individual (physiologic) variability. The first blood sample is split into aliquots:

  • Aliquot 1.1: study lab determination
  • Aliquot 1.2: measurement done in a masked fashion by the same technician at the study lab, to examine within-technician variability
  • Aliquot 1.3: lab determination done by a different technician at the study lab, to examine between-technician variability
  • Aliquot 1.4: sent to a different lab, to examine between-lab variability

For other sources of variability, use phantom samples.

Indices of Reliability (also used for validity)
  • % differences between repeat measurements (expected if no bias: ½ positive and ½ negative)
  • % observed agreement
  • Kappa
  • Correlation coefficient
  • Coefficient of variation
  • Bland-Altman plot
Agreement Between First and Second Readings to Identify Atherosclerotic Plaque in the Left Carotid Bifurcation by B-Mode Ultrasound in the ARIC Study (Li et al, Ultrasound Med Biol 1996;22:791-9)

                   Second Reading
First Reading    Plaque    Normal    Total
Plaque              140        52      192
Normal               69       725      794
Total               209       777      986

Percent observed agreement: [140 + 725] ÷ 986 = 88%

Shortcomings:
  • Chance agreement is not taken into account
  • If most observations are in one of the concordance cells, the percent observed agreement overestimates agreement
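A minimal Python sketch reproducing the percent observed agreement; only the four cell counts from the slide above are used.

```python
# Cell counts from the ARIC plaque example above (first vs. second reading)
both_plaque, plaque_then_normal = 140, 52
normal_then_plaque, both_normal = 69, 725

total = both_plaque + plaque_then_normal + normal_then_plaque + both_normal  # 986
observed_agreement = (both_plaque + both_normal) / total
print(f"percent observed agreement = {100 * observed_agreement:.0f}%")  # 88%
```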
Indices of Reliability (also used for validity)
  • % differences between repeat measurements (expected if no bias: ½ positive and ½ negative)
  • % observed agreement
  • Kappa
  • Correlation coefficient
  • Coefficient of variation
  • Bland-Altman plot
The most popular measure of agreement: the Kappa statistic

For the first vs. second reading data shown above:

PO = observed agreement proportion
PE = expected (chance) agreement proportion

Kappa = [PO - PE] ÷ [1 - PE]
(agreement not due to chance, divided by the maximum possible agreement not due to chance)

PO = [140 + 725] ÷ 986 = 0.88

Expected agreement: (1) multiply the marginals converging on the concordance cells, (2) add the products, and (3) divide by the square of the total:

PE = [(209 x 192) + (777 x 794)] ÷ 986² = 0.68

Kappa = [0.88 - 0.68] ÷ [1 - 0.68] ≈ 0.62 (using the unrounded proportions, PO = 0.877 and PE = 0.676)

Shortcomings:
  • Kappa is a function of the prevalence of the condition; thus, kappa values obtained from different populations may not be comparable
  • Can be calculated only for categorical variables (2 or more categories)
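The same arithmetic can be scripted. Here is a minimal Python sketch, using only the 2 x 2 counts from the slide, that reproduces PO, PE, and kappa.

```python
# 2 x 2 agreement counts from the ARIC plaque example
a, b = 140, 52    # first reading plaque: second reading plaque / normal
c, d = 69, 725    # first reading normal: second reading plaque / normal
n = a + b + c + d                                        # 986

p_o = (a + d) / n                                        # observed agreement, ~0.88
p_e = ((a + c) * (a + b) + (b + d) * (c + d)) / n ** 2   # chance-expected agreement, ~0.68
kappa = (p_o - p_e) / (1 - p_e)                          # ~0.62

print(f"PO = {p_o:.2f}, PE = {p_e:.2f}, kappa = {kappa:.2f}")
```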
Interpretation of Kappa values (Altman & Bland, Statistician 1983;32:307-17)

  0.8 to 1.0                  Very good
  0.6 to 0.8                  Good
  0.4 to 0.6                  Moderate
  0.2 to 0.4                  Fair
  below 0.2 (down to -1.0)    Poor
Indices of Reliability (also used for validity)
  • % differences between repeat measurements (expected if no bias: ½ positive and ½ negative)
  • % observed agreement and % observed positive agreement
  • Kappa
  • Coefficient of variation
  • Bland-Altman plot

Coefficient of variation (CV)

  • General definition: the standard deviation (SD) expressed as a percentage of the mean value: CV (%) = [SD ÷ mean] x 100
Calculation of the Coefficient of Variation

For each pair of repeat measurements on the same lab sample, let Xi1 and Xi2 be the values of the repeat measurements and X̄i their mean. For each pair:

  SDi = √[(Xi1 - X̄i)² + (Xi2 - X̄i)²]   (sum of squared deviations divided by n - 1 = 1)
  CVi (%) = [SDi ÷ X̄i] x 100

The overall CV is the average of the pair-wise CVs over all pairs.

Example of Calculation of the Coefficient of Variation - I

Do the calculations for each pair of replicate (phantom) samples.

Pair (split samples) No. 1: measurement of total cholesterol
  Measurement No. 1 (X11) = 154 mg/dL
  Measurement No. 2 (X12) = 148 mg/dL
  Mean = [154 + 148] ÷ 2 = 151 mg/dL
  V1 = (154 - 151)² + (148 - 151)² = 18 (mg/dL)²
  SD1 = √18 ≈ 4.24 mg/dL
  CV1 = [4.24 ÷ 151] x 100 ≈ 2.8%

Repeat the calculation for all pairs of measurements and calculate the average to obtain the overall CV.
Reliability in the ARIC study (Am J Epidemiol 1992;136:1069): reliability statistics for the study measurements.

*Best: as high as possible
**Best: as low as possible

Indices of Reliability (also used for validity)
  • % differences between repeat measurements (expected if no bias: ½ positive and ½ negative)
  • % observed agreement and % observed positive agreement
  • Kappa
  • Coefficient of variation
  • Bland-Altman plot (see the sketch below)
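Since the Bland-Altman plot is listed but not worked through in this part of the presentation, here is a minimal Python sketch of the idea, assuming hypothetical paired measurements: the difference within each pair is plotted against the pair mean, with the mean difference and 1.96 SD limits of agreement drawn as reference lines.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical paired repeat measurements (e.g., total cholesterol, mg/dL)
x1 = np.array([154, 132, 201, 178, 150, 165, 190], dtype=float)  # first measurement
x2 = np.array([148, 135, 195, 180, 149, 170, 186], dtype=float)  # repeat measurement

means = (x1 + x2) / 2             # x-axis: mean of each pair
diffs = x1 - x2                   # y-axis: within-pair difference
bias = diffs.mean()               # mean difference (systematic bias)
loa = 1.96 * diffs.std(ddof=1)    # half-width of the 95% limits of agreement

plt.scatter(means, diffs)
plt.axhline(bias, label=f"mean difference = {bias:.1f}")
plt.axhline(bias + loa, linestyle="--", label="mean + 1.96 SD")
plt.axhline(bias - loa, linestyle="--", label="mean - 1.96 SD")
plt.xlabel("Mean of the two measurements (mg/dL)")
plt.ylabel("Difference between measurements (mg/dL)")
plt.title("Bland-Altman plot (hypothetical data)")
plt.legend()
plt.show()
```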