
Assessing Individual Cancer Risk: The Impact of Variability and Underestimation

This article discusses the underestimation of individual cancer risks by the EPA and its potential impact on regulation. It highlights the importance of considering variability in susceptibility and exposure, and recommends more sophisticated models for risk assessment.


Presentation Transcript


  1. In the spirit of Winston Churchill (“Madam, we’ve already established that– now we are trying to establish the price”), I offer a syllogism: • Human beings differ from one another in their susceptibility to carcinogenesis (a.k.a. their individual risk at a particular exposure); • A single number (a cancer potency factor, an EDxx, a risk at an exposure below the POD, an MOE, etc., etc.) will correctly predict individual risk to someone within the spectrum of human susceptibility; • Therefore, this number will underpredict risk to everyone who is more susceptible than this person. • Only on Planet EPA does 1 + 2 ≠ 3

  2. “(We’ve already established that: now by how much…?)” How many of us have our cancer risks underestimated by EPA, and by how much, concerns me, because underestimation leads to under-regulation. Others may well be concerned with the converse (overestimation of individual risk). Everyone (even the economists) should be concerned with whether EPA’s estimates of population risk (“body counts”) are biased low: Population risk = (mean risk) × (size of population); Mean risk = Potency × (mean susceptibility) × (mean exposure); Mean susceptibility > (median susceptibility).
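
The inequality in the last line is the crux: if susceptibility is right-skewed, a risk estimate anchored to the median person understates the expected number of cases. The following is a minimal simulation sketch, not from the presentation; the potency, exposure, and spread values are hypothetical illustration values.

```python
# Sketch: population risk computed from mean susceptibility vs. the count
# implied by the median person's risk, for a right-skewed susceptibility
# distribution. All numerical inputs here are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

potency = 1e-3          # hypothetical slope factor (risk per unit dose)
mean_exposure = 1.0     # hypothetical mean dose
population = 1_000_000

# Right-skewed interindividual susceptibility, median = 1 by construction
susceptibility = rng.lognormal(mean=0.0, sigma=1.0, size=population)

risk_per_person = potency * mean_exposure * susceptibility

print("median susceptibility:", np.median(susceptibility))   # ~1.0
print("mean susceptibility:  ", susceptibility.mean())        # ~exp(0.5) ≈ 1.65
print("expected cases (mean-based):  ", risk_per_person.sum())
print("expected cases (median-based):", potency * mean_exposure
      * np.median(susceptibility) * population)
```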

  3. Reasonably homogeneous wealth distribution: typical citizen earns $100,000/yr; 2% in each “tail” differ by a factor of 10 (that is, some earn $10,000; others earn $1,000,000). Mean income = $194,000. Suppose we introduce another source of variability, such that the typical income remains unchanged, but the “tails” diverge from the typical by a factor of 1,000 (that is, some earn $100; others earn $100,000,000). Mean income now = $39,000,000. If we are uncertain whether the variability is small or large, we cannot know the mean to within a factor of 200.
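
The slide’s exact dollar figures depend on distributional assumptions the author does not spell out here. The sketch below, assuming a lognormal income distribution whose 98th percentile sits a factor of 10 (then 1,000) above the median, reproduces only the qualitative point: widening the tails moves the mean by orders of magnitude while the typical value stays put.

```python
# Hedged sketch of the slide's point: the mean of a skewed distribution is
# driven by the tails even when the median (typical income) is unchanged.
import math

median_income = 100_000
z_98 = 2.054  # standard-normal quantile for the 98th percentile (~2% tail)

for tail_factor in (10, 1_000):
    # lognormal whose 98th percentile is tail_factor times the median
    sigma = math.log(tail_factor) / z_98
    mean_income = median_income * math.exp(sigma**2 / 2)
    print(f"tail factor {tail_factor:>5}: mean ≈ ${mean_income:,.0f}")
```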

  4. Suggested reading: “Life is Lognormal” -- http://stat.ethz.ch/~stahel/lognormal/ For a lognormal distribution: mean = median × exp(σ²/2); 95th percentile = median × exp(1.645σ); therefore, max(95th percentile/mean) ≈ 3.9. [Figure: lognormal density with σ = 1, showing the mode, median, mean, and 95th percentile.]
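
A worked check (not on the slide itself) of the “Max(95th/mean) = 3.9” figure, using only the two lognormal identities quoted above:

```latex
% For a lognormal X with median m and log-scale standard deviation sigma:
%   mean = m * exp(sigma^2/2),   95th percentile = m * exp(1.645*sigma)
\[
\frac{x_{0.95}}{\mathrm{mean}}
  = \frac{m\,e^{1.645\sigma}}{m\,e^{\sigma^{2}/2}}
  = \exp\!\left(1.645\,\sigma - \tfrac{\sigma^{2}}{2}\right),
\]
\[
\frac{d}{d\sigma}\!\left(1.645\,\sigma - \tfrac{\sigma^{2}}{2}\right) = 0
  \;\Longrightarrow\; \sigma = 1.645,
\qquad
\max_{\sigma}\frac{x_{0.95}}{\mathrm{mean}}
  = \exp\!\left(\tfrac{1.645^{2}}{2}\right) \approx 3.87 \approx 3.9 .
\]
```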

  5. [Figure: the mean value and the probability “mass” lying above the 90th and 95th percentiles, as a function of σ(ln X) / σ(geometric).]

  6. (from Finkel, Risk Analysis, 10(2):291-301, 1990)

  7. Uncertainty in the Sample Mean Drawn from a Lognormal Population (from Finkel, Risk Analysis, 1990). [Figure: ratio of the 95th to the 5th percentile of the sample mean, plotted against σ(population) and σ(sample); in the illustrated case the ratio is roughly 720 for N=10, 27 for N=100, and 5.2 for N=1000.]
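
The precise ratios on the slide depend on the population σ Finkel assumed, which is not reproduced here. The simulation sketch below, with an assumed σ(population) of 2.0, shows how such a 95th-to-5th-percentile ratio of the sample mean can be computed, and how it shrinks as N grows.

```python
# Simulation sketch: uncertainty in the sample mean of a lognormal
# population, summarized as the ratio of the 95th to the 5th percentile of
# the sampling distribution of the mean. sigma_population is an assumed
# illustration value, not the one underlying Finkel's (1990) figure.
import numpy as np

rng = np.random.default_rng(0)
sigma_population = 2.0   # assumed ln-scale spread of the population
n_trials = 10_000        # number of simulated samples per sample size

for n in (10, 100, 1000):
    samples = rng.lognormal(mean=0.0, sigma=sigma_population,
                            size=(n_trials, n))
    sample_means = samples.mean(axis=1)
    lo, hi = np.percentile(sample_means, [5, 95])
    print(f"N = {n:>4}: 95th/5th percentile ratio of the mean ≈ {hi / lo:.1f}")
```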

  8. [Figure: “260 Million (Identical) Large, Spherical Rodents,” each 6 feet and 154 pounds.]

  9. Human Interindividual Variability in Steps along the Pathway to Carcinogenesis (Hattis and Barlow, Human and Ecological Risk Assessment, 1996). [Table: category, σ(ln X) with 90% confidence interval, and number of data sets.]

  10. (from A. Finkel, chapter in Low-Dose Extrapolation of Cancer Risks, 1995)

  11. From Haugen et al., Biostatistics, 10(3): 501-514, 2009

  12. (from Bois, Krowech, and Zeise, Risk Analysis, 15(2):205-13, 1995)

  13. From Science and Judgment in Risk Assessment (NRC 1994): “Recommendation: EPA should adopt a default assumption for susceptibility … EPA could choose to incorporate into its cancer risk estimates for individual risk a “default susceptibility factor” greater than the implicit factor of 1 that results from treating all humans as identical. EPA should explicitly choose a default factor greater than 1 if it interprets the statutory language [in the Clean Air Act Amendments of 1990: “the individual most exposed to emissions”] to apply to an individual with high exposure and above-average susceptibility.” “It is possible that ignoring variations in human susceptibility may cause significant underestimation of population [cancer] risk.”

  14. A Colossal Non Sequitur: “The EPA has considered [the NAS recommendation] but has decided not to adopt a quantitative default factor for human differences in susceptibility [to cancer] when a linear extrapolation is used. In general, the EPA believes that the linear extrapolation is sufficiently conservative to protect public health. Linear approaches from animal data are consistent with linear extrapolation on the same agents from human data (Goodman and Wilson, 1991; Hoel and Portier, 1994)” -- EPA Proposed Guidelines for Carcinogen Risk Assessment (1996)

  15. tobacco smoke, saccharin, nickel, cadmium, PCBs, asbestos, arsenic, estrogens, reserpine

  16. Arguments in Favor of Protecting for “Unidentifiable Variability” • provides impetus to advance the science • already being done for exposure variation • already being done for economic variation • Congressional intent • evidence of public perception • done, without challenge, in OSHA’s MeCl2 rule

  17. Recommendations in Light of Human Variability • Communicate more honestly that current estimates may be “plausible upper bounds,” but if so, only for average people. • Resist efforts to arbitrarily remove purported “conservatism” in estimates, and to require that “best estimates” be used. • Replace “default” models with more sophisticated ones only if sufficient human data exist to generalize the conclusions. • Develop better safeguards so that individual genetic information can be ascertained and acted upon (esp. when truly a “last resort”), rather than closing the door on the information and thereby over-exposing the minority (or the majority).

  18. Recommendations (cont.) 5. Also consider variability in “exposure” to economic harm, with the ultimate goal of replacing point estimates of economic harm with full probability distributions (PDFs).

  19. NAS, Science and Decisions, 2008: “An assumption that the distribution is lognormal is reasonable, as is an assumption of a difference of a factor of 10 to 50 between the median and upper 95th percentile people… It is clear that the difference is significantly greater than the factor of 1, the current implicit assumption in cancer risk assessment. In the absence of further research leading to more accurate distributional values or chemical-specific information, the committee recommends that EPA adopt a default distribution or fixed adjustment value for use in cancer risk assessment. A factor of 25 would be a reasonable default value to assume as a ratio between the median and upper 95th percentile persons’ cancer sensitivity for the low-dose linear case, as would be a default lognormal distribution. … For some chemicals, as in the 4-aminobiphenyl case study below, variability due to interindividual pharmacokinetic differences could be greater. The suggested default of 25 will have the effect of increasing the population risk (average risk) relative to the median person’s risk by a factor of 6.8: For a lognormal distribution, the mean to median ratio is equal to exp(σ²/2). When the 95th percentile to median ratio is 25, σ is 1.96 [= ln(25)/1.645], and the mean exceeds the median by a factor of 6.8. If the risk to the median human were estimated to be 10⁻⁶, and a population of one million persons were exposed, the expected number of cases of cancer would be 6.8 rather than 1.0.”
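
The committee’s arithmetic can be reproduced in a few lines; this check is mine, not part of the quoted text.

```python
# Check of the Science and Decisions (2008) arithmetic: if the 95th-percentile
# person is 25 times as sensitive as the median person and sensitivity is
# lognormal, the population-average risk exceeds the median person's risk by
# exp(sigma^2 / 2) ≈ 6.8.
import math

ratio_95_to_median = 25
sigma = math.log(ratio_95_to_median) / 1.645          # ≈ 1.96
mean_to_median = math.exp(sigma**2 / 2)               # ≈ 6.8

print(f"sigma ≈ {sigma:.2f}, mean/median ≈ {mean_to_median:.1f}")
# With a median risk of 1e-6 and one million people exposed,
# expected cases ≈ 1e-6 * 6.8 * 1_000_000 ≈ 6.8 rather than 1.0.
```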

  20. A Few Words on “Defaults”:

  21. TABLE 6-1 Examples of Explicit EPA Default Carcinogen Risk-Assessment Assumptions

  22. TABLE 6-3 Examples of “Missing Defaults” in EPA Dose-Response Assessments

  23. (from EPA Draft Final Guidelines for Carcinogen Risk Assessment, February 2003, pp. 1-5 and 1-6) NRC envisioned that principles for choosing and departing from default options would balance several conflicting objectives, including “protecting the public health, ensuring scientific validity, minimizing serious errors in estimating risks, maximizing incentives for research, creating an orderly and predictable process, and fostering openness and trustworthiness” [BUT…] “With a multitude of types of risk assessments and potential default options, it is neither possible nor desirable to specify step-by-step criteria for decisions to invoke a default option.”

  24. 0.51(dontworry) = behappy Generally, if a gap in basic understanding exists, or if agent-specific data are missing, the default is used without pause… If data support a plausible alternative to the default, but no more strongly than they support the default, both the default and its alternative are carried through the assessment and characterized for the risk manager. If data support an alternative to the default as the more reasonable judgment, the data are used. -EPA, Guidelines for Carcinogen Risk Assessment, 1996 draft (emphasis added)

  25. Wait– It Gets Worse: (from final EPA Cancer Guidelines) As an increasing understanding of carcinogenesis is becoming available, these cancer guidelines adopt a view of default options that is consistent with EPA's mission to protect human health while adhering to the tenets of sound science. Rather than viewing default options as the starting point from which departures may be justified by new scientific information, these cancer guidelines view a critical analysis of all of the available information that is relevant to assessing the carcinogenic risk as the starting point from which a default option may be invoked if needed to address uncertainty or the absence of critical information. EPA’s Human Health Research Program is strategically aimed at providing the methods, tools, and data needed to improve risk assessments to protect public health. The primary goal of the program is to reduce reliance on default assumptions and simplified approaches used in risk assessments in the absence of conclusive data.

  26. OSHA’s Evidentiary Criteria for Accepting a PBPK Alternative • The predominant and all relevant minor metabolic pathways must be well-described in several species, including humans. • The metabolism must be adequately modeled (only two pathways are responsible for the metabolism of MC, greatly simplifying the resulting PBPK model). • There must be strong empirical support for the putative mechanism of carcinogenesis (e.g., genotoxicity) and the proposed mechanism must be plausible. • The kinetics for the putative carcinogenic metabolic pathway must have been measured in test animals in vitro and in vivo and in corresponding human tissues at least in vitro.

  27. PBPK Criteria (cont.) • The putative carcinogenic metabolic pathway must contain metabolites which are plausible proximate carcinogens (e.g., reactive compounds such as formaldehyde or S-chloromethylglutathione). • The contribution to carcinogenesis via other pathways must be adequately modeled or ruled out as a factor. • The dose surrogate in target tissues used in PBPK modeling must correlate with tumor responses experienced by test animals. • The biochemical parameters specific to the compound, such as blood:air partition coefficients, must have been experimentally and reproducibly measured (especially those parameters to which the PBPK model is most sensitive).

  28. PBPK Criteria (cont.) • The model must adequately describe experimentally measured physiological and biochemical phenomena. • The PBPK models must have been validated with data (including human data) that were not used to construct the models. • There must be sufficient data, especially data from a broadly representative sample of humans, to assess uncertainty and variability in the PBPK modeling.

  29. Anonymous Footnote, Chapter 6 of Science and Decisions: One member of the Committee concluded that the new EPA policy is not unclear, but instead represents a troubling shift away from a decades-old system that appropriately valued sound scientific information and avoided the paralysis of having to re-examine generic information with every new risk assessment. During its deliberations, the Committee heard two things clearly from EPA that make the intent of its above language unambiguous: (1) that EPA regards “data” and inferences as two concepts that can be compared to each other, and that the former should trump the latter (we heard, for example, that the new policy is intended to repudiate the historical use of “risk assessment without data—just defaults”); and (2) that the goal of the policy shift is to “reduce reliance on defaults” (SAB 2004; EPA 2007).

  30. The problem with EPA’s new formulation is that a policy of “retreating to the default” if the chemical- or site-specific data are “not usable” ignores the vast quantities of data (interpretable via inferences with a sound theoretical basis) that already support most of the defaults EPA has chosen over the past 30 years. In order for a decision to not “invoke” a default to be made fairly, data supporting the inference that a rodent tumor response was irrelevant would have to be weighed against the data supporting the default inference that such responses are generally relevant (see, for example, Allen et al 1988), data supporting a possible nonlinearity in cancer dose-response would have to be weighed against the data supporting linearity as a general rule (see, for example, Crawford and Wilson 1996), data on pharmacokinetic parameters would have to be weighed against the data and theory supporting allometric interspecies scaling (see, for example, Clewell et al 2002), and so on. In other words, having no chemical-specific data other than bioassay data does not imply there is a “data gap,” as EPA now claims—it may well mean that vast amounts of data support a time-tested inference on how to interpret this bioassay, and that no data to the contrary exist because no plausible inference to the contrary exists in this case. In short, this Member of the Committee sees most of the common risk assessment defaults not as “inferences we retreat to because of the absence of information,” but rather as “inferences we generally endorse on account of the information.”

  31. Therefore, EPA’s stated goal of “reducing reliance on defaults” per se is problematic; it raises the question of why a scientific-regulatory agency would ever want to reduce its reliance on those inferences that are supported by the most substantial theory and evidence. Worse yet, it seems to prejudice the comparison between default and alternative models before it starts—if EPA accomplishes part of its mission by ruling against a default model, the “critical analysis of all available information” may be preordained by a distaste for the conclusion that the default is in fact proper. This member of the Committee certainly endorses the idea of reducing EPA’s reliance on those defaults that are found to be outmoded, erroneous, or correct in the general case but not in a specific case—but identifying those inferior assumptions is exactly what a system of departures from defaults, as recommended in the Red Book, in Science and Judgment, and in this report, is designed to do. EPA should modify its language to make clear that across-the-board skepticism about defaults is not scientifically appropriate. This member urges EPA to delineate what evidence will determine how it makes these judgments, and how that evidence will be interpreted and questioned—and EPA’s current policy (yet again) sidesteps these important tasks.

  32. Conclusions on Susceptibility and Defaults: • Distributions accounting for uncertainty and interindividual variability are preferable to point estimates. • EPA has stated for 25+ years that its point estimates of cancer risk are “plausible upper bounds, and could be as low as zero”: the first statement is false, and the second is misleading (a zero linear term in the LMS polynomial is a completely different concept from “zero potency”). • A plausible upper bound would account for the most basic characteristic of human beings (biological individuality); a zero lower bound would require a sensible attitude towards defaults and departures therefrom.

  33. “There are eight degrees of charity, each one higher than the other. The act of charity than which there is none higher is a gift or loan, or offer of partnership or employment, which enables the recipient to self-sustenance. Of lesser degree is charity to the poor wherein donor and recipient are unknown to each other. And lesser still, wherein the donor is unknown to the recipient. And lesser than these, wherein the recipient is unknown to the donor. Of yet lower degree is unsolicited alms put into the hands of the poor, and of lower degree still, alms which have been solicited. Inferior to these is charity which, though insufficient, is cheerfully given. The least charity of all is that which is grudgingly done.” - Rabbi Moses ben Maimon (“Maimonides”) (1135-1204). From Meditation 17: Nunc Lento Sonitu Dicunt, ‘Morieris’ - John Donne (1572-1631) [this bell, tolling for another, says “Thou must die”]: “…Any man’s death diminishes me, because I am involved in mankind; and therefore never send to know for whom the bell tolls; it tolls for thee.”

  34. (If time permits)… The new decision-making paradigm partially (very partially) adopted by the Science and Decisions committee should actually start not with “problem formulation,” but “solution formulation.”

  35. [The “old” (current) way:] Signal of harm (bioassay, epidemiology) → What is the risk from the substance? → What is the acceptable concentration of the substance? [A possible new way: “solution-focused risk assessment”:] Signal of harm (bioassay, epidemiology) → What product(s) or process(es) lead to exposures? → What alternative product(s) or process(es) exist? → Which alternative(s) best reduce overall risk, considering cost?

  36. Main Assertions: • We’ve gotten so far away from grounding risk assessment in a decision-making context that we increasingly refer to as “decisions” things that really are nothing more than pronouncements about risk. EPA “decides” that the NAAQS for ozone should be 75 ppb; OSHA “decides” that workplace air should contain less than 5 ug/m3 chromium(VI)—but these say merely that IF such levels were achieved, a certain acceptable amount of harm would persist. Worse yet, even if we assume perfect implementation and enforcement of controls [that may never have been defined in the “decision” process], at best these “decisions” will achieve a defined amount of exposure reduction, but not necessarily ANY risk reduction, because of risk-versus-risk effects! • If we’re going to decide rather than merely opine, the fundamental chicken-and-egg question is whether risk assessment questions should precede or should follow risk management questions. You are more likely to choose the relatively best decision if you think your way from solutions to problems, rather than dissecting the problem until you are ready [or told that you must be ready!] to “find a solution to the well-understood problem.” The earlier in the process you think about what can be done, the more likely you are to think of better ways to do it, solutions that cannot possibly occur to you after the problem has been defined in such a way as to exclude them.

  37. Other Ways To Recognize SFRA • It reverses the original Red Book paradigm (in which risk management doesn’t begin until risk assessment has “defined the problem”), to one in which a (preliminary) risk management step starts the process and harnesses risk assessment to evaluate solutions. • It shifts the balance more towards design standards and away from pure performance standards—but more than that, it attempts to capture the best features of both. • It combines risk assessment and more holistic decision frameworks such as life-cycle analysis, green design, and inherently safe production processes. It puts risk assessment to work comparing different ways of controlling hazards from the same “functional unit,” in LCA-speak.

  38. Other Guises of SFRA (cont.) • It shifts attention away from continued angst over the performance of risk assessment, and instead picks up on advice (first?) offered by Bernie Goldstein in 1993: “It is time for risk assessors to stop being defensive. Because risk management is broke is no reason to fix risk assessment.” • It changes risk-versus-risk assessment from a theoretical footnote (or a monkey wrench to justify turning our back on risks) to an integral feature of any analysis—from “what are some of the benefits from less exposure to one substance?” to “what are the total benefits of various actions designed primarily to reduce exposure to one substance?” • It restores risk assessment to a central place in environmental policy, just in time to avoid alternative paradigms that do away with it altogether in the name of replacing “paralysis by analysis” with “who needs analysis?”
