PHA 5: Surveillance

John Powles

2009

Objectives

Will concentrate on chronic disease

Communicable disease (CD) surveillance will be covered in health protection

Definition of surveillance

‘...the continued watchfulness over the distribution and trends of incidence through the systematic collection, consolidation and evaluation of morbidity and mortality reports and other relevant data’ together with timely and regular dissemination to those who ‘need to know’

Langmuir 1963

Objectives

To understand the behaviour of the disease in order to control it

To assess the public health importance of the disease (or exposure)

Surveillance

Has frequently been critical in galvanising responses to health threats

Eg HIV, SIDS, road traffic injuries and deaths, cancer, birth defects

And to monitoring progress towards their control

Examples of surveillance data sources for chronic diseases

Cancer registries

Diabetes registries

Standardised incidence studies (MONICA)

Behavioural (risk factor) surveillance

(smoking, alcohol use, adiposity etc)

National health examination surveys

NHANES I, II, III and IV in the USA

National Health Survey in the UK

Cancer registries

England now national

‘Notifiable disease’

Mostly from path labs

Death certificates as back up

Internationally collated by IARC

http://www-dep.iarc.fr/

Evaluation of service screening
  • Screening services’ in-house data can provide them with process measures
    • Detection rates, non-operative biopsy rates, etc.
  • There is also a need to estimate the effect of provision of screening on clinical outcomes
    • Incidence of invasive cervical carcinoma
    • Death from breast cancer
    • Late stage breast cancer?
  • For these endpoints, we turn to the registries
Cardiovascular disease
  • Stroke registries
  • CHD registries
  • Standardised incidence studies (eg MONICA)

Interrogating routine surveillance data
  • ? Secular trends
  • ? Geographic variation
  • ? ‘Outliers’ and ‘clusters’
Problems in statistical assessment of surveillance data
  • Hypotheses often ‘data dependent’
  • Multiple comparisons are made
  • Observations are not independent

Eg adjacent years, adjacent areas

Trends in death-certification rates for liver cirrhosis, 1950-2000

Source: Leon et al, Lancet, 2006, 367: 52-6

Group comparisons of disease incidence rates are a fundamental starting point in public health
  • What needs to be borne in mind in interpreting such comparisons?
Department of Error, The Lancet, Volume 367, Issue 9511, 25 February 2006 - 3 March 2006, Page 650
  • After publication of our paper (Jan 7, p 52),1 we were alerted to errors in it by Fabio Levi, of the University of Lausanne. An independent review of the programs and calculations used in the study has now been carried out. Although the key conclusions of the paper remain unchanged, an inadvertent error in the program used to analyse the data has regrettably necessitated non-trivial changes to many of the numerical values quoted in the paper. We regret any confusion caused. The corrected tables and the figure are available online. We describe below the changes required in the Results section. The other sections of the paper stand as originally published.
What needs to be borne in mind in interpreting such comparisons?
  • Information error and (empirical) uncertainty
    • Information error may or may not be consequential!
  • Comparability
    • Are the entities comparable? Can be a major difficulty in cross-national comparisons (more on this below)
  • Chance (stochastic) variation
Potential contribution of chance to variation in event rates

Assumptions:

Numerator follows Poisson distribution

Denominator is free of sampling error

Rates

Rates (λ) are like velocities with an instantaneous value (like the speedometer reading on a car)

They are estimated (λ̂) by assuming them to be constant over an interval (eg over a calendar year – like estimating the instantaneous speed of a car by measuring its average speed between 2 mile posts)
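As a concrete illustration of that estimation step, a minimal sketch with invented counts and person-time (not figures from the lecture):

```python
# Estimating a rate (lambda-hat) as events per person-time over an interval.
# The counts and person-years below are invented for illustration.
events = 12          # events observed during one calendar year
person_years = 8500  # person-time at risk over the same year

rate_hat = events / person_years
print(f"Estimated rate: {rate_hat:.5f} per person-year "
      f"({rate_hat * 1e5:.1f} per 100,000 person-years)")
```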

Confidence intervals for rates

For ‘small’ numbers of events (roughly N < 30–100)

Exact intervals should be calculated

Can use look-up tables

Poisson CIs – asymmetry at low n’s

Example: 4 observed events → exact 95% CI 1.09 to 10.24 (note the asymmetry around the observed count)

From widely available look-up tables or programs

Eg Altman, ‘Confidence interval analysis’ (program)
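The 4-event example can be reproduced with the usual chi-square relationship for exact Poisson limits; a minimal sketch, assuming scipy is available (not a tool mentioned in the lecture):

```python
# Exact ('Garwood') 95% confidence limits for a Poisson count via the chi-square distribution.
from scipy.stats import chi2

def poisson_exact_ci(n_events, alpha=0.05):
    lower = chi2.ppf(alpha / 2, 2 * n_events) / 2 if n_events > 0 else 0.0
    upper = chi2.ppf(1 - alpha / 2, 2 * (n_events + 1)) / 2
    return lower, upper

lo, hi = poisson_exact_ci(4)
print(f"4 observed events: exact 95% CI {lo:.2f} to {hi:.2f}")  # about 1.09 to 10.24
```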

Calculated confidence intervals for rates

Where N > 30. Various approaches (see texts)

One is based on the normal approximation λ̂ ± 1.96 √N / Y

Will give a symmetrical CI

Where N = number of observations and Y = person-time (assumed to be free of stochastic variation)
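A minimal sketch of that approximation with assumed example values (N and Y are invented):

```python
# Large-count CI for a rate: lambda-hat ± 1.96 * sqrt(N) / Y, with Y treated as fixed.
from math import sqrt

N = 120        # observed events (invented; > 30, so the approximation is reasonable)
Y = 45_000.0   # person-years at risk (invented)

rate = N / Y
se = sqrt(N) / Y
lower, upper = rate - 1.96 * se, rate + 1.96 * se
print(f"Rate {rate * 1e5:.1f} per 100,000; "
      f"95% CI {lower * 1e5:.1f} to {upper * 1e5:.1f} (symmetric)")
```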

Confidence intervals for age-standardised rates

Does age-standardisation have any implications for the calculation of confidence intervals?

Confidence intervals for age-standardised rates

Be aware that (direct) age-standardisation varies the weight attached to observations from each age-stratum

This also re-weights the contribution of age-strata to total variance.

So CIs for age-standardised rates are based on a more complex calculation (see texts or programs for details)
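A sketch of how that re-weighting enters the variance of a directly standardised rate, using the usual weighted-Poisson-variance formula; the strata, weights and counts are invented for illustration:

```python
# Directly standardised rate with a re-weighted variance:
#   DSR      = sum_i w_i * (d_i / y_i)
#   Var(DSR) = sum_i w_i**2 * d_i / y_i**2   (Poisson numerators, person-time fixed)
from math import sqrt

strata = [
    # (standard-population weight, deaths, person-years) - invented values
    (0.40,  5, 20_000),
    (0.35, 30, 15_000),
    (0.25, 80, 10_000),
]

dsr = sum(w * d / y for w, d, y in strata)
var = sum(w**2 * d / y**2 for w, d, y in strata)
se = sqrt(var)
print(f"DSR {dsr * 1e5:.1f} per 100,000; "
      f"95% CI {(dsr - 1.96 * se) * 1e5:.1f} to {(dsr + 1.96 * se) * 1e5:.1f}")
```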

Assumptions for the use of frequentist statistics (eg CI’s)

Observations are independent

  • Are adjacent years independent observations?
What methods are available for assessing the role of chance in data for successive years?

Formal: time series analysis

  • Much used in economics

Informal: ‘eyeballing’

  • What does variation due to small numbers look like?
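To get a feel for what variation due to small numbers looks like, one can simulate successive years with a constant underlying rate; a minimal sketch (the expected annual count is invented, numpy assumed available):

```python
# Simulate 15 successive years of counts from a constant true rate;
# any apparent 'trend' in the printout is pure chance variation.
import numpy as np

rng = np.random.default_rng(1)
expected_per_year = 8  # invented expected annual count

counts = rng.poisson(lam=expected_per_year, size=15)
for year, n in enumerate(counts, start=1):
    print(f"year {year:2d}: {'#' * n}  ({n})")
```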
Assumptions for the use of frequentist statistics (eg CI’s)

Comparisons are pre-specified

(‘prior hypothesis’)

Cf ‘Data dredging’
  • Scanning a large number of comparisons some of which will, on average, be ‘significant’
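A quick simulation of the point, assuming 20 independent comparisons with no real differences (numpy and scipy assumed available):

```python
# Scan 20 'null' comparisons repeatedly and count how often at least one comes out
# 'significant' at p < 0.05 purely by chance. All sample sizes are invented.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_sims, n_tests, any_hit = 1000, 20, 0
for _ in range(n_sims):
    pvals = [ttest_ind(rng.normal(size=30), rng.normal(size=30)).pvalue
             for _ in range(n_tests)]
    any_hit += any(p < 0.05 for p in pvals)
print(f"P(at least one 'significant' result in {n_tests} null tests) ≈ {any_hit / n_sims:.2f}")
# For independent tests this is about 1 - 0.95**20 ≈ 0.64.
```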
A ‘rule of thumb’

The usual approach to the problem of multiple statistical testing and non-independence is to require a much higher apparent level of statistical significance than 5 per cent. This can be done by taking into account the number of tests being performed. For example, if 20 such tests were carried out, a significance level of (0.05/20) = 0.0025 or 0.25 per cent could be required. The difficulty with this approach for this atlas is that we do not know how many non-independent tests there would be.

A simple alternative (which also avoids the need to perform hundreds of statistical tests) is to note whether the 95 per cent confidence intervals around the two rates overlap or not. If the two rates were in fact independent, then (assuming roughly equal variances) the non-overlapping of the 95 per cent confidence intervals is roughly equivalent to the rates being significantly different at a significance level of about 0.6 per cent (p=0.006).

From Cancer Atlas of the United Kingdom and Ireland
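The quoted 0.6 per cent can be checked directly under the Atlas's stated assumptions (independent estimates, roughly equal variances); a minimal sketch assuming scipy for the normal CDF:

```python
# Non-overlap of two 95% CIs implies |x1 - x2| > 2 * 1.96 * SE, while the SE of the
# difference of two independent, equal-variance estimates is SE * sqrt(2).
from math import sqrt
from scipy.stats import norm

z = 2 * 1.96 / sqrt(2)          # about 2.77
p = 2 * (1 - norm.cdf(z))       # two-sided p-value
print(f"z ≈ {z:.2f}, p ≈ {p:.4f}")  # about 0.006, i.e. roughly 0.6 per cent
```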

When a new service is established that is expected to change the trend for some outcome
  • But data requirements are high for time series analyses
    • Rule of thumb: 50 data points
Time series analysis can be thought of as a method of removing ‘noise’ for descriptive purposes (cf spatial comparisons)
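As an illustration of the kind of analysis involved, a minimal segmented (‘interrupted’) time-series regression on simulated data; this is one common approach rather than a method specified in the lecture, and all values are invented:

```python
# Fit level and trend before/after a service is introduced, on simulated monthly data.
import numpy as np

rng = np.random.default_rng(42)
t = np.arange(60, dtype=float)       # 60 monthly data points (> the 50-point rule of thumb)
after = (t >= 30).astype(float)      # service introduced at month 30
# Simulated outcome: baseline trend, then a level drop and a slope change after introduction.
y = 50 + 0.3 * t - 6 * after - 0.4 * after * (t - 30) + rng.normal(0, 2, t.size)

X = np.column_stack([np.ones_like(t), t, after, after * (t - 30)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("baseline level, pre-trend, level change, trend change:", np.round(coef, 2))
```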

Can observations for adjacent areas be regarded as independent?

If not, what are the implications for ‘significance testing’?

More on assessing spatial differences

Descriptive

Especially where there are grounds for suspecting that relevant determinants are spatially graded…seek advice on use of ‘empirical Bayes’ methods

Detection of ‘outliers’ and ‘clusters’

Specialist topic: more in the Environmental epidemiology module

Incomparability in cause of death data

This can be a major difficulty eg in comparing mortality rates from heart disease between countries.

Distribution of deaths attributed to vascular causes: Polish vs UK males aged 0-64, 2002

Attribution to the ‘Symptoms and ill defined conditions’ chapter (as a share of all deaths): UK 1%; Poland 9%
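One correction sometimes considered for this kind of incomparability (noted in the summary below as rarely done and controversial) is pro-rata redistribution of ill-defined deaths across the specified causes; a minimal sketch with invented counts:

```python
# Redistribute deaths coded to 'Symptoms and ill-defined conditions' onto specific causes
# in proportion to the deaths already assigned to them. All counts are invented.
deaths_by_cause = {"ischaemic heart disease": 4000, "stroke": 1500, "other specified": 4500}
ill_defined = 900

total_specified = sum(deaths_by_cause.values())
adjusted = {cause: n + ill_defined * n / total_specified
            for cause, n in deaths_by_cause.items()}
for cause, n in adjusted.items():
    print(f"{cause}: {deaths_by_cause[cause]} -> {n:.0f}")
```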

Surveillance: summary
  • Observed differences in disease rates between populations are often a starting point for public health awareness and (ultimately) action
  • Such differences may be
    • Temporal
    • Spatial
    • Spatio-temporal
  • Apparent differences may be due to
    • Error, empirical uncertainty and non-comparability
    • Sampling variation (chance)
Surveillance: summary
  • Assessing the role of chance in surveillance data is not straightforward because
    • Observations typically not independent
    • Comparisons typically not pre-specified

- so the appropriate approach needs careful thought (and advice)

  • Techniques are available for removing ‘noise’ from time trends and spatial comparisons
  • The identification of ‘outliers’ and ‘clusters’ needs special methods
  • Comparability: the higher the proportion of events allocated to ill-defined categories, the fewer are available for specific causes, undermining comparability - but correction for this is rarely done (and is seen as controversial)