
Active Learning and Research
Environmental science and policy professor Dale Hattis studies the level of cancer risk associated with exposure to a wide variety of chemical substances. He and student Jennifer Ericson worked to compile a related database bringing together research on age-related differences in cancer susceptibility.

Can we move beyond the rigidity of single-point 'uncertainty' factors?

By Dale Hattis. This article originally appeared in Risk Policy Report, September 18, 2001. It is reprinted here with permission of the publisher, Inside Washington Publishers. Copyright 2002. All rights reserved.
The principal way that the U.S. Environmental Protection Agency now addresses non-cancer risks is to derive single-point "Reference Doses" (RfD's) or "Reference Concentrations" (RfC's) that are thought to be protective of public health. Exactly how protective they are supposed to be, however, is nowhere specified by the Agency, nor does the Agency have any way to assess the danger of exceeding the RfD by various amounts. Moreover, the Agency has at present no defensible way to evaluate the degree of health protection actually provided by its RfD's to diverse groups of exposed people, or to quantitatively estimate the expected health benefits of changes in exposures that have been the subjects of RfD determinations. Distributional approaches are now feasible that would:
  • open the system to provide a clear way to use improved relevant technical information, thus encouraging the generation of better data (for example, distributions of the rates at which different people absorb, metabolize, or excrete particular chemicals, and comparisons of these parameters with those in animals where toxicological tests were done);
  • provide people with a more honest representation of likely finite risks and our uncertainties about those risks; and
  • ultimately foster greater accountability to the public and other parties affected by regulatory decisions.
The current system uses one or several "uncertainty" factors to represent needs for additional protection (reduction in permitted dose) arising from particular kinds of deficiencies or other needed adjustments in the basic animal toxicologic or (rarely) human epidemiologic or clinical data. Each uncertainty factor is a single-point estimate (usually 10, but sometimes 3). The deficiencies addressed include the absence of a "No Observed Adverse Effect Level" (NOAEL), use of a subchronic rather than a chronic toxicity study for estimating the NOAEL, or the absence of a "complete" toxicological database including chronic toxicity, reproductive, and developmental studies. The most recent "uncertainty factor" to be added for some cases (though not for any current RfD's for noncancer effects) is an allowance of a new factor of up to 10-fold for possible effects on children related to their likely general sensitivity or specific developmental processes. With the exception of dosimetry adjustments now incorporated into RfC's, all of these factors and the rules for applying them have been articulated without reference to a clearly identified empirical basis or, as mentioned above, performance goals for the system as a whole.
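
For concreteness, the arithmetic of the current point-estimate system can be sketched in a few lines of code. The NOAEL and the particular factor choices below are hypothetical illustrations, not values from any actual RfD determination:

    # Point-estimate RfD derivation: divide the point of departure (here a
    # hypothetical NOAEL) by the product of the single-point uncertainty factors.
    def reference_dose(noael_mg_kg_day, uncertainty_factors):
        combined = 1.0
        for factor in uncertainty_factors:
            combined *= factor
        return noael_mg_kg_day / combined

    # E.g., animal-to-human (10), interindividual variability (10), and a
    # database-deficiency factor (3) combine to 300:
    print(reference_dose(5.0, [10, 10, 3]))  # 0.0166... mg/kg-day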

This system is distantly derived from the original proposal of a 100-fold safety factor by Lehman and Fitzhugh in 1954 for deriving an "Acceptable Daily Intake" from toxicological observations on a small group of animals.[1] This basic framework helped the technical experts in the 1950's Food & Drug Administration (FDA) solve the major risk assessment and management problems of their time reasonably well. The suggested procedure was simple and consistent, providing a 100-fold dose adjustment to allow both for possible differences between test animals and humans and for differences between typical humans and unusually sensitive humans. This approach guided the regulatory authorities to estimate "safe" levels of chemical residues in foods that, over the next several decades, have at least so far not been shown to have led to visible disasters in the form of adverse health responses frequent and distinctive enough to be readily detected and traced to particular permitted exposures.

So, what reasons are there for complaint about the system as it stands today, other than the lack of performance goals and clear empirically based derivations for the various factors of 10 or 3 incorporated into the procedure?

First, today we live in an age where the questions for analysis have broadened beyond the main issues confronting the FDA of 1954. In contexts as diverse as occupational safety and health, general community air pollution, drinking water contaminants, and community exposures from waste sites, decision makers and the public ask questions which might be rephrased as "Do exposures to substance X at Y fraction of a NOAEL from a small group of relatively uniform animals really pose enough of a risk of harm to a more diverse group of exposed humans to merit directing major resources to prevention?" and, on the other hand, "Wouldn't it be more prudent to build in extra safety factors to protect against effects on people who may be more sensitive than most because of young or old age, particular pathologies, or other causes of special vulnerability?" To address these questions one needs to make at least some quantitative estimates of the risks that result from current approaches, recognizing that there will be substantial uncertainties in such estimates.

Second, the common implication of the RfD system of population thresholds is likely to be misleading in many cases. Strictly speaking, population thresholds would be levels below which no one in a large and diverse population is affected. Even if some proponents of the current RfD system do not claim to be implying this strict interpretation of the idea of "population thresholds," it is likely that many people draw that inference of absolute safety anyway.

One basic concept has not changed from the time of Lehman and Fitzhugh. This is the idea that many toxic effects result from placing a chemically-induced stress on an organism that exceeds some homeostatic buffering capacity. Other types of mechanisms probably do exist, however, such as an irreversible accumulating damage model (e.g. for chronic neurological degenerative conditions) or a risk factor model (e.g., for cardiovascular diseases) whereby values of a continuous risk factor such as blood pressure or birth weight have strong quantitative relationships with the rates of occurrence of adverse cardiovascular events or infant mortality.[2]

However, where it is applicable, this model of overwhelmed homeostatic systems leads to an expectation that there should be individual thresholds for such effects: an individual person will show a particular response (or a response at a specific level of severity) only when their individual threshold exposure level for the chemical in question has been exceeded. Yet this expectation of individual thresholds for response does not mean that one can necessarily specify a level of exposure that poses zero risk for a diverse population. In any finite population there must, of course, be some lowest threshold for response, but absent individual testing of the population in question, there is always some finite chance that this lowest value falls below any specific exposure limit that might be chosen.
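
A toy calculation, assuming (purely for illustration) a lognormal distribution of individual thresholds, makes this point concrete: individual thresholds coexist with a finite fraction of responders at any dose.

    import math
    from statistics import NormalDist

    # Hypothetical variability model: log10(individual threshold) is normally
    # distributed, with a median threshold of 10 mg/kg-day and sigma of 0.5
    # (both values assumed for illustration only).
    log10_thresholds = NormalDist(mu=math.log10(10.0), sigma=0.5)

    for dose in (10.0, 1.0, 0.1):
        # Fraction of the population whose threshold lies below this dose.
        fraction = log10_thresholds.cdf(math.log10(dose))
        print(f"{dose:>5} mg/kg-day -> responding fraction {fraction:.2e}")
    # The fraction falls steeply with dose but never reaches exactly zero.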

In a large group of exposed people with differing homeostatic buffering capacities and different pre-existing pathologies, there may be people for whom a marginal perturbation of a key physiological process is sufficient to make the difference between barely adequate and inadequate function to avoid an adverse response, or even to sustain life. Recent persistent epidemiologic observations of appreciable mortality effects at ambient levels of fine airborne particles [3] help reinforce the case that health effects for a sensitive minority of people can sometimes be of major societal significance.

Therefore one benefit of adopting a quantitative approach for defining an RfD would be to help reduce the misimpression that toxicological mechanisms consistent with individual thresholds necessarily imply population thresholds (doses where there is no chance that any person will respond). Another benefit is that a quantitative approach would allow a harmonization of approaches to risk analysis between cancer and non-cancer outcomes, although in the direction of making the non-cancer assessments more like the quantitative projections done for carcinogenesis, rather than the reverse. Such an approach would also provide a basis to quantitatively assess risks for input to policy discussions. These can include both juxtapositions of costs and benefits of policies to control specific exposures, and judgments of the equity or "fairness" of the burden of health risk potentially imposed on vulnerable subgroups. In doing this, such a system could provide encouragement for the collection of better quantitative information on human variability, toxic mechanisms, and risks. A quantitative analytical framework could also allow comparable analyses of uncertainties in exposure and toxic potency, potentially leading to "value of information" analyses helpful in setting research priorities.

Finally, we now have the beginnings of an information and tools base that allows us to do better than the simple system of point-estimates of needed adjustments represented by the RfD system. Today we have the experience and the computational capabilities to craft distributional approaches that explicitly and separately recognize variability (real differences among people that cause differences in either exposure or susceptibility) and distinguish it from uncertainty (imperfection in our knowledge). This basic conceptual innovation to separate variability from uncertainty began to be recognized among practitioners of risk analysis in the 1980's,[4] and has since come to be widely developed by analysts inside and outside EPA involved with estimation of exposures and ecological risks.[5] To date, only cancer potency factors and non-cancer RfD's seem to be placed by policy beyond the reach of distributional analyses (although there is some receptivity within the technical staff of EPA to the idea of developing relevant distributional information for assessing health risks).
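
A minimal two-dimensional Monte Carlo sketch can make the separation concrete: the outer loop samples our uncertainty about the parameters of a threshold-variability model, and each sampled parameter set implies a population risk through the variability distribution itself. All distributions and numbers here are assumptions for illustration, not fitted models:

    import math
    import random
    from statistics import NormalDist

    random.seed(1)

    def sampled_population_risks(dose, n_draws=10000):
        """One population-risk estimate per sampled state of knowledge."""
        risks = []
        for _ in range(n_draws):
            # Uncertainty: the true median threshold and spread are unknown.
            log10_median = random.gauss(math.log10(10.0), 0.3)  # assumed
            sigma = random.uniform(0.3, 0.8)                    # assumed
            # Variability: fraction of people with thresholds below the dose.
            risks.append(NormalDist(log10_median, sigma).cdf(math.log10(dose)))
        return risks

    risks = sorted(sampled_population_risks(dose=0.1))
    print(f"median risk estimate: {risks[len(risks) // 2]:.1e}")
    print(f"95th-percentile risk: {risks[int(0.95 * len(risks))]:.1e}")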

To help build and test an alternative to the current system, we have recently completed work under a four-year grant from EPA in which we:
  • Compiled an extensive database of observations of human interindividual variability in pharmacokinetic and pharmacodynamic parameters[6] (largely from clinical studies of various drugs);
  • Applied these data to evaluate the protectiveness of the traditional 10-fold uncertainty factor for interindividual variability in general;[7] and
  • In concluding work,[8] combined the interindividual variability information with distributions representing other conventional uncertainty factors[9], [10], [11] to evaluate the risks posed by 18 of 20 randomly selected RfD's from the EPA's IRIS database. In this final paper we also made a "straw man" proposal for a candidate risk management standard for evaluating the overall protectiveness of uncertainty factor recipes, and compared the evaluated results from the IRIS RfD's with this "straw man" standard (a check of the kind sketched just after this list). Briefly, the "straw man" risk management standard called for achieving 95% confidence that the incidence of minimal adverse effects from an RfD exposure would be less than one in a hundred thousand. The results of comparing the 18 analyzed RfD's with this criterion are less than reassuring for the protectiveness of the current system, particularly in cases where RfD's have been derived for chemicals thought to have relatively good toxicological databases, leading to relatively small overall combined uncertainty factors of 100-fold or less.
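
As a sketch of how such a criterion could be checked against a distributional analysis (the function below is an illustration built on the kind of sampled risk distribution produced by the Monte Carlo sketch earlier, not our published procedure):

    def meets_straw_man(risks_at_rfd, confidence=0.95, incidence_limit=1e-5):
        """True if we are `confidence` sure that the population incidence at
        the RfD stays below `incidence_limit` (here one in a hundred thousand)."""
        ranked = sorted(risks_at_rfd)
        index = min(int(confidence * len(ranked)), len(ranked) - 1)
        return ranked[index] < incidence_limit

    # Hypothetical usage with the earlier sampler, at some candidate RfD dose:
    # meets_straw_man(sampled_population_risks(dose=0.05))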
The database we have compiled on human interindividual variability, our full analytical paper, and exemplary spreadsheets illustrating our analysis are provided on our website, www.clarku.edu/faculty/dhattis. Our hope is that this initial "straw man" demonstration will help stimulate the work and discussion needed to eventually displace the current point estimate/adjustment factor approach used by the U.S. EPA, FDA, and analogous agencies worldwide. With strenuous effort and good will, we can expect that the 100th anniversary of the 1954 Lehman and Fitzhugh paper will see the "safety/uncertainty" factor system long since consigned to arcane history.

Footnotes:

1 Lehman, A. J., and Fitzhugh, O. G. "100-fold Margin of Safety." Assoc. Food Drug Off. U.S. Q. Bull. 18: 33-35 (1954).

2 Hattis, D. "Strategies for assessing human variability in susceptibility, and using variability to infer human risks." In Human Variability in Response to Chemical Exposure: Measures, Modeling, and Risk Assessment, D. A. Neumann and C. A. Kimmel, eds.: 27-57. CRC Press, Boca Raton, FL (1998).

3 Krewski, D., Burnett, R. T., Goldberg, M. S., Hoover, K., Siemiatycki, J., Jerrett, M., Abrahamowicz, M., and White, W. H. Reanalysis of the Harvard Six Cities Study and the American Cancer Society Study of Particulate Air Pollution and Mortality. Special Report to the Health Effects Institute, Part II: Sensitivity Analyses. Health Effects Institute, Cambridge, MA, July 2000. Available on the web: http://www.healtheffects.org/pubs-special.htm.

4 Bogen, K. T., and Spear, R. C. "Integrating Uncertainty and Interindividual Variability in Environmental Risk Assessment." Risk Analysis 7: 427-436 (1987).

5 Eastern Research Group. Report of the Workshop on Selecting Input Distributions for Probabilistic Assessments, Held in New York, New York on April 21-22, 1998. National Technical Information Service No. PB99-155285INZ (1999).

6 Hattis, D., Banati, P., Goble, R., and Burmaster, D. "Human Interindividual Variability in Parameters Related to Health Risks." Risk Analysis 19: 705-720 (1999).

7 Hattis, D., Banati, P., and Goble, R. "Distributions of Individual Susceptibility Among Humans for Toxic Effects: For What Fraction of Which Kinds of Chemicals and Effects Does the Traditional 10-Fold Factor Provide How Much Protection?" Annals of the New York Academy of Sciences 895: 286-316 (1999).

8 Hattis, D., Baird, S., and Goble, R. "A Straw Man Proposal for a Quantitative Definition of the RfD." In Final Technical Report, U.S. Environmental Protection Agency STAR grant #R825360, "Human Variability in Parameters Potentially Related to Susceptibility for Noncancer Risks." Paper presented 4/24/01 at the U.S. EPA/DoD symposium on Issues and Applications in Toxicology and Risk Assessment, Fairborn, Ohio. Available on the web: http://www.clarku.edu/faculty/dhattis.

9 Baird, S. J. S., Cohen, J. T., Graham, J. D., Shlyakhter, A. I., and Evans, J. S. "Noncancer risk assessment: A probabilistic alternative to current practice." Human and Ecological Risk Assessment 2: 78-99 (1996).

10 Evans, J. S., and Baird, S. J. S. "Accounting for missing data in noncancer risk assessment." Human and Ecological Risk Assessment 4: 291-317 (1998).

11 Price, P. S., Swartout, J. C., Schmidt, C., and Keenan, R. E. "Characterizing inter-species uncertainty using data from studies of anti-neoplastic agents in animals and humans." Toxicological Sciences, submitted (2001).

About the Author: Dale Hattis is a Research Professor with the George Perkins Marsh Institute, Clark University.
