23
HUMAN HEALTH RISK ASSESSMENT


Leah D. Stuchal, Robert C. James, and Stephen M. Roberts


Risk assessment is an ever-evolving process whereby scientific information on the hazardous properties of chemicals and on the extent of exposure is combined into a statement of the probability that exposed populations will be harmed. The probability of harm can be expressed either qualitatively or quantitatively, depending on the nature of the scientific information available and the intent of the risk assessment. Risk assessment is not research per se, but rather a process of collecting and evaluating existing data. As such, risk assessment draws heavily on the disciplines of toxicology, epidemiology, pathology, molecular biology, biochemistry, mathematical modeling, industrial hygiene, analytical chemistry, and biostatistics. The certainty with which risks can be accurately assessed, therefore, depends on the conduct and publication of basic and applied research relevant to risk issues. While firmly based on scientific considerations, risk assessment is often an uncertain process requiring considerable judgment and assumptions on the part of the risk assessor. Ultimately, the results of risk assessments are integrated with information on the consequences of various regulatory options in order to make decisions about the need for, method of, and extent of risk reduction.


It is clear that society is willing to accept some risks in exchange for the benefits and conveniences afforded by chemical use. After all, we knowingly apply pesticides to increase food yield, drive pollutant-emitting automobiles, and generate radioactive wastes in the maintenance of our national defense. We legally discharge the by-products of manufacturing into the air we breathe, the water we drink, and the land on which our children play. In addition, we have a history of improper waste disposal, the legacy of which is thousands of uncontrolled hazardous waste sites. To ensure that the risks posed by such activities are not unacceptably large, it is necessary to determine safe exposure levels in the workplace and environment. Decisions must also be made on where to locate industrial complexes, on remediation options for hazardous waste sites, tolerance levels for pesticides in foods, safe drinking water standards, air pollution limits, and the use of one chemical in favor of another. Risk assessment provides the tools to make such determinations.


This chapter provides an overview of the risk assessment process and discusses:



  • The basic steps of risk assessment
  • How risk assessments are performed in a regulatory context
  • Differences between human health and ecological risk assessments
  • Differences in the estimation of cancer and noncancer risks
  • Differences between deterministic and probabilistic risk assessments
  • Issues associated with estimating risks from chemical mixtures
  • Comparisons of risks from chemical exposure with other health risks
  • Risk communication

23.1 RISK ASSESSMENT BASICS


A Basic Risk Assessment Paradigm


In 1983, the National Research Council described risk assessment as a four-step analytical process consisting of hazard identification, dose–response assessment, exposure assessment, and risk characterization. These fundamental steps have achieved a measure of universal acceptance and provide a logical framework for assembling information on the situation of potential concern and generating risk information to inform decision making (Figure 23.1). The process is rigid enough to provide some methodological consistency that promotes the reliability, utility, and credibility of risk assessment outcomes, while at the same time allowing for flexibility and judgment by the risk assessor to address an endless variety of risk scenarios. Each step in the four-step process is briefly discussed below.



  • Step 1: Hazard Identification. The process of determining whether exposure to a chemical agent, under any exposure condition, can cause an increase in the incidence or severity of an adverse health effect (cancer, birth defect, neurotoxicity, etc.). Although the matter of whether a chemical can, under any exposure condition, cause cancer or other adverse health effect is theoretically a yes/no question, there are few chemicals for which the human data are definitive. Therefore, laboratory animal studies, in vitro tests, and structural and mechanistic comparability to other known chemical hazards are considered in addition to the epidemiological data. This step is common to qualitative and quantitative risk assessment.
  • Step 2: Dose–Response Assessment. The process of characterizing the relationship between the dose of a chemical and the incidence or severity of an adverse health effect in the exposed population. A dose–response assessment factors in not only the magnitude, duration, and frequency of exposure but also other potential response-modifying variables such as age, sex, and certain lifestyle factors. A dose–response assessment frequently requires extrapolation from high to low doses and from animals to humans.
  • Step 3: Exposure Assessment. The process of specifying the exposed population, identifying potential exposure routes, and measuring or estimating the magnitude, duration, and frequency of exposure. Exposure can be assessed by direct measurement or estimated with a variety of exposure models. Exposure assessment can be quite complex because exposure frequently occurs to a mixture of chemicals from a variety of sources (air, water, soil, food, etc.).
  • Step 4: Risk Characterization. The integration of information from steps 1 to 3 to develop a qualitative or quantitative estimate of the likelihood that any of the hazards associated with the chemical(s) of concern will be realized. The characterization of risk must often encompass multiple populations having varying exposures and sensitivities. This step is particularly challenging as a variety of data must be assimilated and communicated in such a way as to be useful to everyone with an interest in the outcome of the risk assessment. This may include not only governmental and industry risk managers but also the public. This step includes a descriptive characterization of the nature, severity, and route dependency of any potential health effects, as well as variation within the population(s) of concern. Any uncertainties and limitations in the analysis are described in the risk characterization, so that the strengths, weaknesses, and overall confidence in the risk estimates can be understood.

Figure 23.1 Elements of risk assessment and risk management. Risk assessment provides a means to organize and interpret research data in order to inform decisions regarding human and environmental health. Through the risk assessment process, important data gaps and research needs are often identified, assisting in the prioritization of basic and applied toxicological research.



Source: Adapted from NRC (1983).


An exposure assessment will sometimes reveal that individuals have no opportunity to receive a dose of the chemical, in which case no risk can be inferred and a comprehensive risk assessment is unnecessary. In such instances, it may be more practical to communicate findings in a qualitative manner, that is, to state that it is highly unlikely chemical X will pose any significant health risk because there is no exposure to the chemical. At other times, quantitative expressions of risk might be more appropriate, as in the case of a population chronically exposed to a known human carcinogen in drinking water. An expression of such risk might be that the lifetime excess cancer risk from exposure is 3 in 1,000,000 (or 3 × 10⁻⁶). Often, such numerical expressions of risk convey an unwarranted sense of precision by failing to communicate the uncertainty inherent in their derivation. They may also prove difficult for nontechnical audiences to comprehend. On the other hand, qualitative risk estimates may appear more subjective and may not invoke the same degree of confidence in the risk assessment findings as a numerical expression of risk. Also, qualitative expressions of risk do not readily allow for comparative risk analyses, a useful exercise for putting added risk into context. Although addressed later in this chapter, it is worth mentioning here that effective risk communication plays a key role in utilizing risk assessment findings for the protection of public health.


Risk Assessment in a Regulatory Context: The Issue of Conservatism


Regulatory agencies charged with protecting public health and the environment are constantly faced with the challenge of setting permissible levels of chemicals in the home, workplace, and natural environment. For example, the Occupational Safety and Health Administration (OSHA) is responsible for setting limits on chemical exposure in the workplace, the Food and Drug Administration (FDA) sets permissible limits on chemicals such as pesticides in the food supply, and the Environmental Protection Agency (U.S. EPA) regulates chemical levels in air, water, and sometimes soil. Ideally, the level of chemical contamination or residues in many of these media (food, water, air, etc.) would be zero, but this simply is not feasible in a modern industrial society. Although it may not be possible to completely eliminate the presence of unwanted chemicals from the environment, there is almost universal agreement that we should limit exposures to these chemicals to levels that do not cause illness or environmental destruction. The process by which regulatory agencies set limits with this goal in mind is a combination of risk assessment and risk management.


The risks associated with chemical exposure are not easily measured. While studies of worker health have been extremely valuable in assessing risks and setting standards for occupational chemical exposure, determining risks from lower doses typically associated with environmental exposures has been difficult. Epidemiologic studies of environmental chemical exposure can provide some estimate of increased risk of specific diseases associated with a particular chemical exposure compared with a control population, but there are several problems in attempting to generalize the results of such studies. Exposure of a population is often difficult to quantify, and the extrapolation of observations from one situation to another (e.g., different populations, different manners of exposure, different exposure levels, different exposure durations) is challenging. For the most part, risk assessments for environmental chemical exposures must rely on modeling and assumptions to generate estimates of potential risks. Because these risk estimates usually cannot be verified, they represent hypothetical or theoretical risks. This is an important facet of risk assessment that is often misunderstood by those who erroneously assume that risk estimates for environmental chemical exposure have a strong empirical basis.


As discussed in subsequent sections, there are many sources of uncertainty in deriving risk estimates. Good data regarding chemical exposure and uptake are seldom available, forcing reliance on models and assumptions that may or may not be valid. Toxicity information often must be extrapolated from one species to another (e.g., use of data from laboratory mice or rats for human health risk assessment), from one route of exposure to another (e.g., use of toxicity data following ingestion to evaluate risks from dermal exposure), and from high doses to the lower doses more commonly encountered with environmental exposure. In view of all of these uncertainties, it is impossible to develop precise estimates of risks from chemical exposures. Choices made by the risk assessor, such as which exposure model to use or how to scale doses when extrapolating from rodents to humans, can have a profound impact on the risk estimate.


Regulatory agencies address uncertainty in risk assessments by using conservative approaches and assumptions; that is, in the face of scientific uncertainty, they will select models and assumptions that tend to overestimate, rather than underestimate, risk so as to be health protective. Since most risk assessments are by, or for, regulatory agencies, this conservatism is a dominant theme in risk assessments and a continuous source of controversy. Some view the conservatism employed by regulatory agencies as excessive, resulting in gross overestimation of risks and unwarranted regulations that waste billions of dollars. Others question whether regulatory agencies are conservative enough and suggest that the public (particularly more sensitive individuals such as children) may not be adequately protected by contemporary risk assessment approaches.


Defining Risk Assessment Problems


A coherent risk assessment requires a clear statement of the risk problem to be addressed. This should be developed very early in the risk assessment process and is shaped by the question(s) the risk assessment is expected to answer. Ideally, both the risk assessor(s) and the individuals or organizations that will ultimately use the risk assessment will have input. This helps ensure that the analysis will be technically sound and serve its intended purpose.


One of the first issues to address is which chemicals or agents should be included in the analysis. In some situations, this may be straightforward, such as a risk assessment focused specifically on occupational exposure to a particular chemical. In other circumstances, the chemicals of concern may not be obvious. An example of this would be risk assessment for a chemical disposal site where the chemicals present and their amounts are initially unknown. A related issue is which health effects the risk assessment should address. While it is tempting to answer “all of them,” it must be recognized that each chemical in a risk assessment is capable of producing a variety of adverse health effects, and the dose–response relationships for these effects can vary substantially. Developing estimates of risks for each of the possible adverse effects of each chemical of interest is usually impractical. A simpler approach is to estimate risks for the health effect to which individuals are most sensitive, specifically, the one that occurs at the lowest dose. If individuals can be protected from this effect, whatever it might be, they will logically be protected from all other effects. Of course, this approach presumes that the most sensitive effect has been identified and dose–response relationship information for this effect exists. Obviously, for this approach to be effective, the toxicology of each chemical of interest must be reasonably well characterized.


In defining the risk problem, populations potentially at risk must be identified. These populations would be groups of individuals with distinct differences in exposure, sensitivity to toxicity, or both. For example, a risk assessment for a contaminated site might include consideration of workers at the site, occasional trespassers or visitors to the site, or individuals who live at the site if the land is (or might become) used for residential purposes. If residential land use is contemplated, risks are often calculated separately for children and adults, since they may be exposed to different extents and therefore have different risks. Depending on the goals of the risk assessment, risks may be calculated for one or several populations of interest.


Many chemicals move readily in the environment from one medium to another. Thus, a chemical spilled on the ground can volatilize into the air, migrate to groundwater and contaminate a drinking water supply, or be carried with surface water runoff to a nearby stream or lake. Risk assessments have to be cognizant of environmental movement of chemicals and the fact that an individual can be exposed to chemicals by a variety of pathways. In formulating the risk problem, the risk assessor must determine which of many possible pathways are complete; that is, which pathways will result in movement of chemicals to a point where contact with an individual will occur. Each complete pathway provides the opportunity for the individual to receive a dose of the chemical and should be considered in some fashion in the risk assessment. Incomplete exposure pathways—those that do not result in an individual coming in contact with contaminated environmental media (e.g., air, water, soil)—can be ignored because they offer no possibility of receiving a dose of chemical and therefore pose no risk.


Risk assessments can vary considerably in the extent to which information on environmental fate of contaminants is included in the analysis. Some risk assessments, for example, have attempted to address risks posed by chemicals released to the air in incinerator emissions. These chemicals are subsequently deposited on the ground where they are taken up by forage crops that are consumed by dairy cattle. Consumption of meat or milk from these cattle is regarded as a complete exposure pathway from the incinerator to a human receptor. As the thoroughness of the risk assessment increases, so does the complexity. As a practical matter, complete exposure pathways that are thought to be minor contributors to total exposure and risk are often acknowledged but not included in the calculation of risk to make the analysis more manageable.


Often, exposure can lead to uptake of a chemical by more than one route. For example, contaminants in soil can enter the body through dermal absorption, accidental ingestion of small amounts of soil, or inhalation of contaminants volatilized from soil or adherent to small dust particles. Consequently, the manner of anticipated exposure is important to consider, as it will dictate the routes of exposure (i.e., inhalation, dermal contact, or ingestion) that need to be included in the risk assessment for each exposure scenario.


Human Health versus Ecological Risk Assessments: Fundamental Differences


Ecological risk assessments are defined as those that address species other than humans, namely, plant and wildlife populations. Problem formulation is more challenging when conducting ecological risk assessments. Instead of one species, there are several to consider. Also, the exposure pathway analysis is more complicated, at least in part because some of the species of interest consume other species of interest, thereby acquiring their body burden of chemical. Unlike human health risk assessments, where protection of individuals against any serious health impact is nearly always the objective, goals for ecological risk assessments are often at the population, or even ecosystem, level rather than focusing on individual plants and animals. Consequently, development of assessment and measurement endpoints consistent with the goals of the ecological risk assessment is essential in problem formulation for these kinds of analyses.


Historically, the risk assessment process has focused primarily on addressing potential adverse effects to exposed human populations, and the development of well-defined methods for human health risk assessment preceded those for ecological risk assessment. However, increasing concern for ecological impacts of chemical contamination has led to a “catching up” in risk assessment methodology. While detailed methods for both human health and ecological risk assessment are now in place, they are not identical. The conceptual basis may be similar, including some form of hazard identification, exposure assessment, dose–response assessment, and risk characterization. However, there are some important differences in approaches, reflecting the reality that there are some important differences in evaluating potential chemical effects in humans versus plants and wildlife.


The most obvious difference between human health and ecological risk assessments is that ecological risk assessments are inherently more complicated. Human health risk assessments, of course, deal with only one species. Ecological risk assessments can involve numerous species, many of which may be interdependent. Given the nearly endless array of species of plants and animals that might conceivably be affected by chemical exposure, there must be some process to focus on species that are of greatest interest to keep the analysis to a manageable size. A species may warrant inclusion in the analysis because it is threatened or endangered, because it is a species on which many others depend (e.g., as a food source), or because it is especially sensitive to toxic effects of the chemical and can therefore serve as a sentinel for effects on other species.


The increased complexity of analysis for ecological risk assessments extends to evaluation of exposure. In human health risk assessment, the potential pathways by which the chemical(s) of interest can reach individuals must be assessed and, if possible, the doses of chemicals received by these pathways estimated. In an ecological risk assessment, the same process must be undertaken, but for several species instead of just one. Also, an ecological risk assessment typically must evaluate food chain exposure. This is particularly important when chemicals of interest tend to bioaccumulate, resulting in very high body burdens in predator species at the top of the food chain. Not only must the potential for bioaccumulation be assessed, but also the escalating doses for species of interest must be estimated according to their position in the food chain. This type of analysis is only included in human health risk assessments when estimating dose from a potential food source (e.g., fish, meat, or milk).


A third distinction between human health and ecological risk assessment lies in the assessment objectives. Human health risk assessments characteristically focus on the most sensitive potential adverse health effect, specifically, that which occurs at the lowest dose. In this way, they are directed to evaluating the potential for any health effect to occur. For ecological risk assessments, the analyses generally address only endpoints that affect population sustainability (growth, survival, and reproduction). Thus, the goal of an ecological risk assessment might be to determine whether the presence of a chemical in the environment at a particular concentration would result in declining populations for a specific species (e.g., due to mortality or reproductive failure), disappearance of a species in a particular area, or loss of an entire ecosystem, depending on risk management objectives. It is entirely possible that chemical exposure could result in the deaths of many animals, but as long as the populations were stable, the risk would be considered acceptable. The exception to population protection is for species with special legal protection (endangered, threatened, or listed). These species should be protected on an individual level, and all adverse effects should be considered in determining a critical effect.


What constitutes an unacceptable impact is not clearly defined in ecological risk assessment. Regulatory agencies may decide to focus on higher trophic level species or may not focus on a species at all, protecting the habitat instead. Alternatively, biodiversity may be utilized as an ecological endpoint. These alternative endpoints allow for the loss of entire populations and for the establishment of nonnative and invasive species. This reflects philosophical and risk management differences in terms of what constitutes an unacceptable chemical impact on humans versus plants or wildlife.


Because of the greater potential complexity of an ecological risk assessment, more attention must be given to ensuring that an analysis of appropriate scope and manageable size is achieved. For this reason, ecological risk assessments are more iterative in nature than their human health counterparts. An ecological risk assessment begins with a screening-level assessment, which is a form of preliminary investigation to determine whether unacceptable risks to ecological receptors may exist. It includes a review of data regarding chemicals present and their concentrations, species present, and potential pathways of exposure. It is a rather simplified analysis that uses conservative or worst-case assumptions regarding exposure and toxicity. If the screening analysis finds no indication of significant risks using very conservative models and assumptions, the analysis is concluded. If the results of the screening analysis suggest possible ecological impacts, a more thorough analysis is conducted that might include additional samples of environmental media, taking samples of wildlife to test for body burdens of chemicals, carefully assessing the health status of populations exposed to the chemical, conducting toxicity tests, conducting more sophisticated fate and transport analysis of the chemicals of potential concern, and a more detailed and accurate exposure assessment.


23.2 HAZARD IDENTIFICATION


Hazard identification involves an assessment of the intrinsic toxicity of the chemical(s) of potential concern. This assessment attempts to identify health effects characteristically produced by the chemical(s) that may be relevant to the risk assessment. While this may appear to be a straightforward exercise, in reality, it requires a good deal of careful analysis and scientific judgment. The reason for this is that the risk assessor rarely has the luxury of information that adequately describes the toxicity of a chemical under the precise set of circumstances to be addressed in the risk assessment. Instead, the risk assessor typically must rely on incomplete data derived from species other than the one of interest under exposure circumstances very different from those being evaluated in the risk assessment. The existence in the scientific literature of poorly designed studies with misleading results and conclusions, as well as conflicting data from seemingly sound studies, further complicates the task.


This section of the chapter discusses some of the considerations when reviewing and evaluating the toxicological literature for assessment of intrinsic toxicity. Many of these considerations address suitability of data for extrapolation from one set of circumstances to another, while others pertain to the fundamental reliability of the information. Much of the discussion regarding extrapolation deals with assessing the value of animal data in predicting responses in humans, since human health risk assessments are forced to rely predominantly on animal studies for toxicity data. Keep in mind that most of the same extrapolation issues are equally relevant for ecological risk assessments, where often toxicity in wildlife species has to be inferred from data available only from laboratory animal species.


Information from Epidemiologic Studies and Case Reports


Observations of toxicity in humans can be extremely valuable in hazard identification. They offer the opportunity to test the applicability of observations made in animal studies to humans and may even provide an indication of the relative potency of the chemical in humans versus laboratory animal models. If the human studies are of sufficient size and quality, they may stand alone as the basis for hazard identification in human health risk assessment.


Despite the attractiveness of human studies, they often have significant limitations. A less-than-rigorous effort to properly match exposed and control populations makes it difficult or impossible to attribute observed differences in health effects to chemical exposure with any confidence. Even in well-designed epidemiologic studies, there is always the possibility that an unknown critical factor causally related to the health effect of interest has been missed. For this reason, a consistent association between chemical exposure and a particular effect in several studies is important in establishing whether the chemical produces that effect in humans.


Other criteria in evaluating epidemiologic studies include the following:



  • The positive association (correlation) between exposure and effect must be seen in individuals with definitive exposure.
  • The positive association cannot be explained by bias in recording, detection, or experimental design.
  • The positive association must be statistically significant.
  • The positive association should show both dose and exposure duration dependence.

Information from Animal Studies


Typically, data from studies using laboratory animals must be used for some or all of the intrinsic toxicity evaluation of a chemical in humans. There are several aspects that need to be considered when interpreting the animal data, as discussed below.


Breadth and Variety of Toxic Effects


The toxicological literature should be reviewed in terms of the types of effects observed in various test species. This is an important first step in chemical toxicity evaluation because:



  • It identifies potential effects that might be produced in humans. To some extent, the consistency with which an effect is observed among different species provides greater confidence that this effect will occur in humans as well. An effect that occurs in some species but not others, or one sex but not the other, signals that great care will be needed in extrapolating findings in animals to humans without some form of corroborating human data.
  • A comparison of effects within species (e.g., sedation vs. hepatotoxicity vs. lethality) helps establish a rank order of the toxic effects manifested as the dose increases. This aids in identifying the most sensitive effect. Often, this effect becomes the focus of a risk assessment, since protecting against the most sensitive effect will protect against all effects. Also, comparisons of dose–response relationships within species can provide an estimation of the likelihood that one toxic effect will be seen given the appearance of another.

Mechanism of Toxicity


Understanding the mechanism of action of a particular chemical helps establish the appropriate animal species to use in assessing risk and whether the toxicity is likely to occur in humans. For example, certain halogenated compounds are mutagenic and/or carcinogenic in some test species but not others. Differences in carcinogenicity appear to be related to differences in metabolism of these chemicals because metabolism is an integral part of their mechanism of carcinogenesis. For these chemicals, then, a key issue in selecting animal data for extrapolation to humans is the extent to which metabolism in the animal model resembles that in humans. A second example is renal carcinogenicity from certain chemicals and mixtures, including gasoline. Gasoline produces renal tumors in male rats, but not in female rats or in mice of either sex. The peculiar susceptibility of male rats to renal carcinogenicity of gasoline can be explained by its mechanism of carcinogenesis. Metabolites of gasoline constituents combine with a specific protein, α2u-globulin, to produce recurring injury in the proximal tubules of the kidney. This recurring injury leads to renal tumors. Female rats and mice do not accumulate this protein in the kidney, explaining why they do not develop renal tumors from gasoline exposure. Humans also do not accumulate the protein in the kidney, making the male rat a poor predictor of human carcinogenic response in this situation.


In a sense, choosing the best animal model for extrapolation is always a catch-22 situation. Selection of the best model requires knowledge of how the chemical behaves in both animals and humans, including its mechanism of toxicity. In the situations in which an animal model is most needed (when we have little data in humans), we are in the worst position to select a valid model. The choice of an appropriate animal model becomes much clearer when we have a very good understanding of the toxicity in humans and animals, but in this situation, there is, of course, much less need for an animal model.


In addition to helping identify the best species for extrapolation, knowledge of the mechanism of toxicity can assist in defining the conditions required to produce toxicity. This is an important aspect of understanding the hazard posed by a chemical. For example, acetaminophen, an analgesic drug used in many over-the-counter pain relief medications, can produce fatal liver injury in both animals and humans. By determining that the mechanism of toxicity involves the production of a toxic metabolite during the metabolism of high doses, it is possible to predict and establish its safe use in humans, determine the consequences of various doses, and develop and provide antidotal therapy.


Dosages Tested


Typically, animal studies utilize relatively high doses of chemicals so that unequivocal observations of effect can be obtained. These doses are usually much greater than those received by humans, except under unusual circumstances such as accidental or intentional poisonings. Thus, while animal studies might suggest the possibility of a particular effect in humans, that effect may be unlikely or impossible at lower dosages associated with actual human exposures. The qualitative information provided by animal studies must be viewed in the context of dose–response relationships. Simply indicating that an effect might occur is not enough; the animal data should indicate at what dosage the effect occurs and, equally importantly, at what dosage the effect does not occur.


Validity of Information in the Literature


Any assessment of the intrinsic toxicity of a chemical begins with a comprehensive search of the scientific literature for relevant studies. While all of the studies in the literature share the goal of providing new information, the reality of the situation is that all are not equally valuable. Studies may be limited by virtue of their size, experimental design, methods employed, or the interpretations of results by the authors. These limitations are sometimes not readily apparent, requiring that each study be evaluated carefully and critically. The following are some guidelines to consider when evaluating studies:



  • Has the test used an unusual, new, or unproven procedure?
  • Does the test measure toxicity directly, or is it a measure of a response purported to indicate an eventual change (a pretoxic manifestation)?
  • Have the experiments been performed in a scientifically valid manner?
  • Are the observed effects statistically significant against an appropriate control group?
  • Has the test been reproduced by other researchers?
  • Is the test considered more or less reliable than other types of tests that have yielded different results?
  • Is the species a relevant or reliable human surrogate, or does this test conflict with other test data in species phylogenetically closer to humans?
  • Are the conclusions drawn from the experiment justified by the data, and are they consistent with the current scientific understanding of the test or area of toxicology?
  • Is the outcome of the reported experiment dependent on the test conditions, or is it influenced by competing toxicities?
  • Does the study indicate causality or merely suggest a correlation that could be due to chance?

Other Considerations


Numerous confounders can affect the validity of information derived from animal studies and its application or relevance to human exposure to the same chemical. Issues regarding selection of the appropriate species for extrapolation are discussed in Section 23.2. Even if the selection of species is sound, certain other characteristics of the experimental animals can influence toxic responses and therefore the extrapolation of these responses to humans. Examples include the age of the animal (e.g., whether studies in adult animals are an appropriate basis for extrapolation to human children), the sex of the animal (obviously, studies limited to just male or female animals cannot address all of the potential toxicities for both sexes of humans), disease status (e.g., whether results obtained in healthy animals are relevant to humans with preexisting disease, and vice versa), nutritional status (e.g., whether studies in fasted animals accurately reflect what occurs in fed humans), and environmental conditions.


Other confounders go beyond the animal models themselves and pertain to the type of study conducted. For example, studies involving acute exposure to a chemical are usually of limited value in understanding the consequences of chronic exposure, and chronic studies generally offer little insight into consequences of acute exposure. This is because chronic toxicities are often produced by mechanisms different from those associated with acute toxicities. For this reason, good characterization of the intrinsic toxicity of a chemical requires information from treatments of varying duration, ranging from a single dose to exposure for a substantial portion of the animal’s lifetime.


Information from In Vitro and In Silico Studies


In vitro and in silico studies are useful for predicting whether toxicity might occur as a result of exposure. The high cost of animal toxicity testing and the large number of chemicals yet to be tested make these methods valuable tools for predicting hazard. In vitro studies include cells in culture, isolated tissues, tissue extracts or homogenates, subcellular fractions, and purified biochemical reagents (e.g., enzymes, other proteins, nucleic acids). Their use in hazard identification rests on determining the mechanism or mode of action and understanding how the chemical causes effects at the cellular, biochemical, and molecular levels. Due to the complexity of an intact biological system, in vitro results cannot be extrapolated to a toxic endpoint. However, toxic effects can be predicted from these studies and verified in animal models. In silico studies include structure–activity relationships (SAR). They utilize computer modeling to predict biological activity and potency from the chemical structure. The most frequent use of in silico modeling is to predict a common mode of action for an entire class of chemicals. The Ah receptor binding ability of dioxin-like compounds was predicted in silico based on SAR.
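To give a flavor of how a structure-based screen works in practice, the sketch below uses the open-source RDKit toolkit to compare candidate chemicals against a known dioxin-like reference compound by fingerprint similarity. It is a crude similarity screen under illustrative assumptions (the candidate list and the 0.3 similarity cutoff are hypothetical), not a validated QSAR or Ah receptor binding model:

```python
# A minimal sketch of a similarity-based structure-activity screen with RDKit.
# The candidate chemicals and the similarity cutoff are illustrative
# assumptions; real SAR/QSAR models are validated against measured activity.
from rdkit import Chem
from rdkit.Chem import AllChem
from rdkit import DataStructs

# Reference structure: 2,3,7,8-TCDD, a known Ah receptor agonist
tcdd = Chem.MolFromSmiles("Clc1cc2Oc3cc(Cl)c(Cl)cc3Oc2cc1Cl")

candidates = {
    "3,3',4,4'-tetrachlorobiphenyl": "Clc1ccc(-c2ccc(Cl)c(Cl)c2)cc1Cl",
    "toluene": "Cc1ccccc1",
}

ref_fp = AllChem.GetMorganFingerprintAsBitVect(tcdd, 2, nBits=2048)
for name, smiles in candidates.items():
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)
    similarity = DataStructs.TanimotoSimilarity(ref_fp, fp)
    # Hypothetical cutoff: structurally similar chemicals merit follow-up
    verdict = "candidate for follow-up" if similarity > 0.3 else "low priority"
    print(f"{name}: Tanimoto similarity = {similarity:.2f} ({verdict})")
```

A screen of this kind only prioritizes chemicals for further testing; it does not by itself establish a mode of action.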


23.3 DOSE–RESPONSE ASSESSMENT


In this portion of the risk assessment, the dose–response relationships for the toxicities of concern must be measured, modeled, or assumed, in order to predict responses to doses estimated in the exposure assessment. While dose–response relationships could theoretically be obtained for a variety of effects from each chemical of potential concern, in practice, attention is usually centered on the most sensitive effect of the chemical.


In risk assessment, two fundamentally different types of dose–response relationships are thought to exist. One is the threshold model, in which all doses below some threshold produce no effect, while doses above the threshold produce effects that increase in incidence or severity as a function of dose. The second model has no threshold—any finite, nonzero dose is thought to possess some potential for producing an adverse effect. The derivation of these two types of dose–response relationships and their use to provide estimates of risk are very different, as described in the following sections.


Threshold Models


For all toxicities other than cancer, there is some dose below which no observable or statistically measurable response exists. This dose, called the threshold dose, was graphically depicted in Chapter 1 (see also Figure 23.2). Conceptually, a threshold makes sense for most toxic effects. The body possesses a variety of detoxification and cell defense and repair mechanisms, and below some dose (i.e., the threshold dose), the magnitude of effect of the chemical is so small that these detoxification and defense/repair mechanisms render it undetectable.


Figure 23.2 Estimation of a safe human dose (SHD). The first step is identification of the target organ or effect most responsive to the chemical (in this case, target organ 1 in the upper panel). Dose–response data for this effect are used to identify no observable adverse effect level (NOAEL) and/or lowest observable adverse effect level (LOAEL) doses in order to approximate the threshold dose. Either the NOAEL or LOAEL is divided by a series of uncertainty factors to generate the SHD.


In the most common form of threshold dose–response modeling, the threshold dose becomes the basis for establishing a “safe human dose” (SHD). Because we rarely, if ever, are able to define the true threshold point on the dose–response curve, the threshold dose is usually approximated. There are two methods for estimating the threshold: the no observable adverse effect level (NOAEL)/lowest observable adverse effect level (LOAEL) approach and the benchmark dose (BMD) approach. The preferred method is the BMD approach, which derives a threshold dose from the shape of the dose–response curve. If data are not amenable to the BMD approach, then the NOAEL/LOAEL approach is utilized.


In the NOAEL/LOAEL approach, the more desirable option is to use the highest reported dose or exposure level at which no toxicity was observed. This dose, known as the “NOAEL,” is considered for practical purposes to represent the threshold dose. This prevents underestimating the toxicity of a chemical. Sometimes, the available data do not include a NOAEL; that is, all of the doses tested produced some measurable toxic effect. In this situation, the lowest dose producing an adverse effect, termed the “LOAEL,” is identified from the dose–response data. The threshold dose will lie below, and hopefully near, this dose. A threshold dose is then projected from the LOAEL, usually by dividing the LOAEL by a factor of 10 (see Figure 23.2). There are several limitations to the NOAEL/LOAEL methodology. One limitation is that the ability of the NOAEL to approximate the threshold dose is dependent on dose selection and spacing in available studies, and in many cases, these are not well suited to determining the threshold. If the doses are spaced far apart, the NOAEL may be much lower than the actual threshold dose and result in an overestimation of toxicity. A second limitation is that the approach fails to consider the shape or slope of the dose–response curve, focusing instead on results from one or two low doses exclusively. This is especially important at lower, environmentally relevant concentrations where small changes in dose can result in large changes in effect. Another limitation is that studies with small numbers of animals may result in higher thresholds, since such studies lack the statistical power to identify effects that occur with low frequencies. Thus, poor study design can be rewarded with a higher threshold dose.


The second method for estimating a threshold dose is the BMD approach. This method utilizes all of the dose–response data and is not dependent on any single data point. In this approach, dose–response data for the toxic effect of concern are fit to a mathematical model, and the model is used to determine the dose corresponding to a predetermined benchmark response. For most quantal data, the dose at which 10% of the population exhibits a response (the 10% effective dose, or ED10) is chosen as the benchmark response. Exceptions include reproductive data (5% level) and human data (1% level), for which lower benchmarks are utilized. For continuous data, the benchmark response is a 10% change in an endpoint that is considered to be biologically significant or a change of the treated mean equal to one standard deviation from the control mean. As an example, dose–response data might be used to determine the dose required to produce a 10% incidence of liver toxicity in mice treated with a chemical. This dose would be referred to as the ED10, or dose effective in producing a 10% incidence of effect. Often, for regulatory purposes, statistical treatment of the data is used to derive upper and lower confidence limit estimates of this dose. The more conservative of these is the lower confidence limit estimate of the dose, which in this case would be designated as the BMDL10 (see Figure 23.3). In order to develop an SHD from the ED10 or the BMDL10, a series of uncertainty factors would be applied, analogous to the NOAEL approach. In a sense, the BMD approach is like extrapolating an SHD from a NOAEL, except the BMD is much more rigorously defined.


Figure 23.3 Derivation of the benchmark dose (BMD). Dose–response data for the toxic effect of concern are fit to a mathematical model, depicted by the solid line. This model is used to determine the effective dose (ED) corresponding to a predetermined benchmark response (BMR) (e.g., 10% of the animals responding), which is termed the BMD. Statistical treatment of the data can be used to derive the lower confidence limit estimate of the BMD, termed the BMDL. Either the BMD or BMDL may be used to represent the point of departure, although the more conservative BMDL is typically used for regulatory purposes.



Source: Adapted from USEPA (2012).


The BMD approach works best if there are response data available for a variety of doses. In order to derive an accurate estimate of the threshold dose utilizing the BMD approach, a statistically or biologically significant dose-related trend is necessary. Without a trend, the software will not be able to accurately model the data. Additionally, if there is no NOAEL, the LOAEL must be near the true threshold. Otherwise, there is too much uncertainty in the models (the BMDL will be model dependent), and the software will not be able to reproduce the shape of the curve in the threshold region with any certainty. In this case, the BMD approach would not provide any additional information as to the location of the threshold dose. In these instances, the NOAEL/LOAEL approach should be used.
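To make the mechanics concrete, the sketch below fits a single log-logistic model to hypothetical quantal data by maximum likelihood and solves for the BMD10. This is a simplified illustration, not EPA's BMDS software (which fits and compares a suite of models); a BMDL would additionally require a lower confidence bound on the fitted BMD, for example from profile likelihood or bootstrap resampling:

```python
# A minimal sketch of benchmark dose estimation. The dose-response data are
# hypothetical, and only one model form is fit; regulatory BMD software fits
# and compares multiple models and reports confidence bounds (BMDLs).
import numpy as np
from scipy.optimize import minimize, brentq
from scipy.special import expit

# Hypothetical quantal data: dose (mg/kg-day), animals tested, responders
dose = np.array([0.0, 1.0, 3.0, 10.0, 30.0])
n = np.array([50, 50, 50, 50, 50])
k = np.array([1, 2, 5, 14, 33])

def prob(params, d):
    # Log-logistic response with background: P(d) = g + (1-g)*logistic(a + b*ln d)
    g, a, b = params
    d = np.maximum(d, 1e-12)              # avoid log(0) at the control dose
    return np.clip(g + (1 - g) * expit(a + b * np.log(d)), 1e-9, 1 - 1e-9)

def neg_log_lik(params):
    p = prob(params, dose)
    return -np.sum(k * np.log(p) + (n - k) * np.log(1 - p))

fit = minimize(neg_log_lik, x0=[0.02, -3.0, 1.0],
               bounds=[(1e-6, 0.5), (-20.0, 20.0), (0.1, 20.0)])
g_hat = fit.x[0]

# BMD10: the dose at which extra risk, (P(d) - P(0)) / (1 - P(0)), reaches 10%
def extra_risk_minus_bmr(d):
    return (prob(fit.x, np.array([d]))[0] - g_hat) / (1 - g_hat) - 0.10

bmd10 = brentq(extra_risk_minus_bmr, 1e-6, dose.max())
print(f"BMD10 is approximately {bmd10:.2f} mg/kg-day")
```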


From the estimates of the threshold dose, an SHD can be calculated. Different agencies have different terminologies that they apply to the SHD; the U.S. EPA refers to this dosage as a “reference dose” (RfD) or, if it is in the form of a concentration of chemical in air, as a “reference concentration” (RfC). Other agencies have adopted different terminologies; for example, the U.S. FDA uses the term “acceptable daily intake” (ADI). The basic concept is the same, and the approach to the development of an SHD is relatively simple, as illustrated in the flow diagram in Figure 23.2. Because a chemical may produce more than one toxic effect, the first step is to identify from the available data the adverse effect that occurs at the lowest dose. Second, the threshold dose or some surrogate measure of the threshold dose (e.g., the NOAEL, or the LOAEL reduced by some amount) is identified for the most sensitive toxic endpoint. The threshold dose (or its surrogate measure) is then divided by an uncertainty factor to derive the SHD, and this dose can then be converted into an acceptably safe exposure guideline for that chemical.


Calculating Safety for Threshold Toxicities: The SHD Approach


The calculation of an SHD essentially makes an extrapolation on the basis of the size differential between humans and the test species. This extrapolation is based on a dosimetric adjustment factor that accounts for toxicokinetic and some toxicodynamic differences between species. The calculation is similar to the following:


SHD = [NOAEL × (BWa/BWh)^(1/4)] / UF = [N × (BWa/BWh)^(1/4)] / UF (mg/kg · day)

where


NOAEL = threshold dose or some other no observable adverse effect level selected from the no-effect region of the dose–response curve;


SHD = safe human dose;


UF = the total uncertainty factor, which depends on the nature and reliability of the animal data used for the extrapolation;


N = number of milligrams consumed per kilogram per day;


BWa = body weight of the animal; and


BWh = human body weight.


Typically, the uncertainty factor used varies from 10 to 10,000 and is dependent on the confidence placed in the animal database as well as whether there are human data to substantiate the reliability of the animal no-effect levels that have been reported. Of course, the calculation should use chronic exposure data if chronic exposures are expected. This type of model calculates one value, the expected safe human dosage, that regulatory agencies have referred to as either the ADI or the RfD. Exposures that produce human doses at or below these safe human dosages (ADIs or RfDs) are considered safe.


Example Calculation


Pentachlorophenol (PCP), a general-purpose biocide, will be used as an example of how to derive a safe human dosage. A literature review of the noncarcinogenic effects of PCP has shown that the toxicological effect of greatest concern is its hepatotoxic effects in test animals. The PCP LOAEL for these effects has been reported to be 1.5 mg/kg daily. Using the formulas shown in the previous text and an uncertainty factor of 300, an SHD could be calculated as follows:


SHD = LOAEL / UF = (1.5 mg/kg · day) / 300

SHD = 0.005 mg/kg · day

Once the SHD has been estimated, it may be necessary to convert the dose into a concentration of the chemical in a specific environmental medium (air, water, food, soil, etc.) that corresponds to a safe exposure level for that particular route of exposure. That is, while some dose (in mg/kg · day) may be the total safe daily intake for a chemical, the allowable exposure level of that chemical will differ depending on the route of exposure and the environmental medium in which it is found.
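As a worked illustration of these two steps, the sketch below recomputes the PCP SHD and converts it to an equivalent drinking water concentration. The 70-kg body weight and 2-L/day water intake are conventional default assumptions added here for illustration; they are not values from the PCP example itself, and the assumption that the entire allowable intake comes from water is likewise a simplification:

```python
# A worked sketch of the PCP example. The individual uncertainty factors match
# the breakdown discussed in the next subsection; the body weight and water
# intake are conventional default assumptions used only for illustration.

loael = 1.5                                   # PCP LOAEL, mg/kg-day
uncertainty_factors = {"UFA": 10, "UFH": 10, "UFS": 1, "UFL": 3, "UFD": 1}

total_uf = 1
for factor in uncertainty_factors.values():   # factors are multiplicative
    total_uf *= factor

shd = loael / total_uf                        # safe human dose, mg/kg-day
print(f"Total UF = {total_uf}; SHD = {shd:.3f} mg/kg-day")      # 300; 0.005

# Convert the SHD to a drinking water concentration, assuming the entire
# allowable intake comes from water.
body_weight = 70.0                            # kg, default adult
water_intake = 2.0                            # L/day, default adult
water_conc = shd * body_weight / water_intake
print(f"Equivalent drinking water concentration = {water_conc:.3f} mg/L")  # 0.175
```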


Uncertainty Factor

The uncertainty factor is really a composite of several uncertainty factors intended to address weaknesses in the data or uncertainties in extrapolation from animals to humans. These uncertainties arise because of our inability to directly measure the actual human threshold dose. The weaker the data set available for evaluation (few studies, limited doses tested, etc.) and the more assumptions required, the greater the uncertainty as to whether the NOAEL or LOAEL from the literature actually represents the threshold dose in humans. The purpose of dividing the NOAEL, LOAEL, or BMD by uncertainty factors is to ensure that the SHD used in the risk assessment is below the actual human threshold dose for toxicity for all individuals in the exposed population, thereby avoiding any underestimation of risk. The greater the uncertainty associated with the data, the larger the uncertainty factor required to ensure protection.


The general rationale for selecting the size of the uncertainty factor for a particular area of uncertainty is as follows:



  • UFA—An uncertainty factor of up to 10 is applied in extrapolating toxicity data from one species to another. It is used to account for the possibility that humans are more sensitive to toxicity than the test species. A factor of 10 is utilized as the default value, and a factor of 3 is utilized if the study species is a nonhuman primate or if toxicodynamic and toxicokinetic data allow the calculation of a human equivalent dose (e.g., physiologically based pharmacokinetic (PBPK) modeling, species scaling).
  • UFH—An uncertainty factor of up to 10 is used to account for variability in sensitivity to toxicity among subjects. An uncertainty factor of 10 is applied to ensure that the final toxicity value is protective of sensitive individuals within a population. An uncertainty factor of 3 is utilized if the data are from a sensitive subpopulation known to be more susceptible to the adverse effect. An uncertainty factor of 1 is utilized if human data are available from a particularly vulnerable subpopulation.
  • UFS—An uncertainty factor of up to 10 might be applied if only subchronic data are available. It is possible under these circumstances that the threshold dose for longer exposures might be lower, and this uncertainty factor is intended to protect against this possibility. A factor of 3 is often utilized for an exposure duration that is greater than subchronic, but less than chronic (e.g., 1 year in rodents). An uncertainty factor of 1 is utilized when chronic data are available.
  • UFL—As discussed earlier, an uncertainty factor of up to 10 may be applied if the only value with which to estimate the threshold dose is a LOAEL value. Division by this uncertainty factor is meant to accomplish a reduction in the LOAEL to a level at or below the threshold dose. An uncertainty of 1 is applied when the BMD is utilized as the threshold dose.
  • UFD—An additional uncertainty factor of up to 10 is applied if the overall quality of the database is poor, the number of animal species tested is few, the number of toxic endpoints evaluated is small, or the available studies are found to be deficient in quality. An uncertainty factor of 3 is utilized if either a prenatal toxicity or two-generation reproduction study is absent from the database. If both are absent, an uncertainty factor of 10 should be applied.
  • MF—The development of some SHDs incorporates a modifying factor to account for deficiencies in the data set not covered by the other uncertainty factors. In 2002, the U.S. EPA discontinued the use of modifying factors, stating they are sufficiently incorporated in the general database uncertainty factors.

These uncertainty factors are multiplicative; that is, an uncertainty factor of 10 for sensitive individuals combined with an uncertainty factor of 10 for extrapolation of data from animals to humans results in a total uncertainty factor of 100 (10 × 10). Total uncertainty factors applied to develop an SHD commonly range between 300 and 1000, and values up to 10,000 or more are possible, although regulatory agencies may place a cap on the size of compounded uncertainty factors (e.g., a limit of 3000).


In the example calculation for PCP earlier, an uncertainty factor of 300 was utilized to calculate the SHD. Uncertainty factors were used to account for animal to human extrapolation, variability among humans in sensitivity, and the use of a LOAEL (UFA of 10 × UFH of 10 × UFS of 1 × UFL of 3 × UFD of 1 = 300).


Quantifying Noncancer Risk


Although the term “risk” often implies probability of an adverse event, the threshold approach to assessing chemical risk does not result in risk expression in probability terms. This approach is instead directed to deriving a safe limit for exposure and then determining whether the measured or anticipated exposure exceeds this limit. All doses or exposures below this “safe level” should carry the same chance that toxicity will occur—namely, zero. With this model, the acceptability of the exposure is basically judged in a “yes/no” manner. The most common quantitative means of expressing hazard for noncancer health effects is through a hazard quotient (HQ). Agencies such as the U.S. EPA calculate an HQ as the estimated dose from exposure divided by their form of the SHD, the RfD:


HQ = D / RfD

where


HQ = hazard quotient,


D = dosage (mg/kg · day) estimated to result from exposure via the relevant route, and


RfD = reference dose (mg/kg · day).


Interpretation of the HQ is relatively straightforward if the value is less than one. This means that the estimated exposure is less than the SHD and no adverse effects would be expected under these circumstances. Interpretation of HQ values greater than one is more complicated. A value greater than one indicates that the estimated exposure exceeds the SHD, but recall that the SHD includes a number of uncertainty factors that impart a substantial margin of safety. Therefore, exposures that exceed the SHD, but lie well within this margin of safety, may warrant further analysis but are unlikely to produce adverse health effects.


Dose–response relationships can vary from one route of exposure to another (e.g., a safe dose for inhalation of a chemical may be different from a safe dose for its ingestion). As a result, a given chemical may have different SHDs for different routes of exposure. Since individuals are often exposed to a chemical by more than one route, separate route-specific HQ values are calculated. For example, the estimated inhalation dose would be divided by the SHD for inhalation to calculate an HQ for inhalation, while the estimated dose received from dermal contact would be divided by a dermal SHD to derive the HQ for this route of exposure. Typically, the HQ values for each relevant route of exposure are summed to derive a hazard index (HI) for that chemical. Interpretation of the HI is analogous to the HQ—values less than one indicate that the safe dose has not been exceeded (in this case, by the aggregate from all exposure routes). A value greater than one suggests that effects are possible, although not necessarily likely. The HI is also a means by which effects of different chemicals with similar toxicities can be combined to provide an estimate of total risk to the individual.


Another means to convey the relationship between estimated and safe levels of exposure is through calculation of a margin of exposure. This is most often used in the context of the BMD approach. The margin of exposure is the BMD divided by the estimated dose. An acceptable margin of exposure is usually defined by the uncertainty factors applied to the BMD. If, for example, available data suggest that a total uncertainty factor of 1000 should be applied to the BMD for a specific chemical and effect, and the margin of exposure for that chemical is greater than 1000 (i.e., the estimated dose is less than the BMD divided by 1000), the exposure would be regarded as safe.
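
The margin-of-exposure comparison can be sketched the same way; the BMD, estimated dose, and total uncertainty factor below are hypothetical values chosen only to illustrate the decision rule.

```python
bmd = 5.0               # hypothetical benchmark dose (mg/kg-day)
estimated_dose = 0.002  # hypothetical exposure estimate (mg/kg-day)
total_uf = 1000         # total uncertainty factor judged appropriate here

moe = bmd / estimated_dose
acceptable = moe > total_uf  # exposure regarded as safe if MOE exceeds total UF
print(f"MOE = {moe:.0f}; acceptable: {acceptable}")
```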


A Special Case: Assessing Risk from Lead Exposure


The aforementioned methods are almost universally applied in assessing the potential for noncancer health effects. There is, however, one exception for which a radically different approach is used: the evaluation of noncancer effects from lead in children. In 2012, the Centers for Disease Control and Prevention (CDC) recommended that blood lead concentrations in children not exceed 5 μg/dl in order to avoid intellectual impairment (a decrease from the previous recommendation of 10 μg/dl). Thus, the main objective in lead risk assessment is to determine whether childhood lead exposure is sufficient to produce a blood lead level that causes adverse effects.


To predict blood lead levels from environmental exposure, the U.S. EPA has developed a PBPK model known as the “integrated exposure uptake biokinetic model for lead in children” (IEUBK). The IEUBK model has four basic components (i.e., exposure, uptake, biokinetics, and probability distribution) and uses complex mathematics to describe age-dependent anatomical and physiological functions that influence lead kinetics. The model predicts the blood lead concentration (the dose metric most closely related to the health effect of interest) resulting from whatever exposure scenario the risk assessor constructs (i.e., exposure to various concentrations of lead in soil, dust, water, food, and/or ambient air). The model also predicts the probability that children exposed to lead in environmental media will have a blood lead concentration exceeding a health-based level of concern (see Figure 23.4). It is important to note that the U.S. EPA has not yet lowered its health-based level of concern and currently uses 10 μg/dl, as specified in the 1994 Revised Interim Soil Lead Guidance for CERCLA Sites and RCRA Corrective Action Facilities; the IEUBK model, however, allows the user to choose the blood lead level of concern. The IEUBK approach is notable because it is among the few regulatory approaches that rely on an internal dose metric (i.e., blood lead level) and PBPK modeling for risk assessment purposes.


Figure 23.4 Example of output from the IEUBK model. The curve displays the cumulative probability of developing a blood lead concentration at varying media concentrations as a result of the specified exposure. In this example, there is a probability of virtually 100% that the modeled exposure will result in a blood lead concentration greater than 1 μg/dl, but only about a 9% probability that the blood lead concentration will exceed 10 μg/dl.
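
The IEUBK model itself is distributed by the U.S. EPA and is far more elaborate than can be shown here, but its final probability-distribution step can be illustrated. The sketch below assumes, as the model does, that blood lead concentrations are lognormally distributed around the predicted geometric mean; the geometric mean used here is hypothetical, and the geometric standard deviation of 1.6 is the model's commonly cited default.

```python
import math

def prob_exceeding(cutoff, geo_mean, gsd=1.6):
    """P(blood lead > cutoff) assuming a lognormal distribution of blood
    lead around the predicted geometric mean (default GSD of 1.6)."""
    z = math.log(cutoff / geo_mean) / math.log(gsd)
    return 0.5 * math.erfc(z / math.sqrt(2.0))  # upper-tail normal probability

geo_mean = 5.3  # hypothetical predicted geometric mean blood lead (ug/dl)
for cutoff in (1.0, 5.0, 10.0):
    p = prob_exceeding(cutoff, geo_mean)
    print(f"P(blood lead > {cutoff:g} ug/dl) = {p:.1%}")
```

With this hypothetical geometric mean of 5.3 μg/dl, the sketch reproduces the pattern in Figure 23.4: an essentially 100% probability of exceeding 1 μg/dl, but only about a 9% probability of exceeding 10 μg/dl.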


Nonthreshold Models for Assessing Cancer Risks


Conceptual Issues


The nonthreshold dose–response model is typically reserved for cancer risk assessment. The assumption by regulatory agencies that chemical carcinogenesis has no dose threshold began several decades ago. It was initially based largely on empirical evidence that radiation-induced cancer had no threshold and on the theory that some finite amount of DNA damage was induced by all doses of radiation. Smaller doses simply carried smaller risks, but all doses were assumed to carry some mathematical chance of inducing cancer. Following this lead, theories of chemically induced carcinogenesis evolved along the same lines, centering on the effects of highly reactive, DNA-damaging carcinogens. It was presumed that, like radiation, chemical carcinogens induced cancer via mutations or genetic damage and therefore had no thresholds; chemically induced carcinogenesis was thus assumed to carry some quantifiable risk of cancer at any dose.


If viewed somewhat simplistically, a biologic basis for the absence of a practical threshold for carcinogens can be hypothesized. If one ignores the DNA repair processes of cells, or assumes that these protective processes become saturated or overwhelmed by “background” mutational events, it can be postulated that some unrepaired genetic damage occurs with each and every exposure to a carcinogenic substance. Because this genetic damage is presumed to be permanent and to carry the potential to alter the phenotypic expression of the cell, any amount of damage, no matter how small, might carry with it some chance that the affected cell will ultimately evolve to become cancerous.


With this viewpoint, scientists and regulatory agencies initially proposed that the extrapolation of cancer hazards must be fundamentally different from the extrapolation of noncancer hazards and that cancer risk assessment models should be probability based. In contrast to the assessment of noncancer health effects, where the goal is to determine the dose at which no toxic effect will occur, cancer risk assessment assigns probabilities of cancer to different doses. Determining safety, or a safe dose, is then a matter of deciding what cancer risks are so small that they can be regarded as de minimis, or inconsequential.


Determining the relationship between carcinogen dose and cancer risk is very difficult for a number of reasons. One reason is that the concept of latency complicates the interpretation of dose–response relationships for carcinogens. Latency is the interval of time between the critical exposure and the ultimate development of disease. While noncancer effects tend to develop almost immediately or very soon after a toxic dose is received, cancer may not develop until an interval of 20 years or more has elapsed. For some carcinogens, increasing the dose shortens the latency period, causing tumors to develop more quickly. A positive carcinogenic response can then be thought of in two ways: as increased numbers of tumors or subjects with tumors or as a decrease in the time to appearance of tumors. The latter is important, because a dose capable of producing tumors has no consequence if the time required for the tumors to develop exceeds the remaining lifespan of a human or animal.


Another problem is that the critical portion of the dose–response curve for most risk assessments, the low-dose region applicable to most environmental and occupational exposures, is one for which empirical data are not available. Chronic cancer bioassays in animals are expensive and seldom test more than two or three doses. Cost also limits the number of animals tested to about 50 or fewer per dose group. With this group size, only tumor responses of about 10% or more can be detected with statistical significance. Detection of the kinds of cancer responses that might be of interest to the risk assessor, for example, a response of 0.1%, 0.001%, or 0.0001% (10⁻³, 10⁻⁵, or 10⁻⁶ expressed as a fraction, respectively), is therefore beyond the capabilities of these experiments. Consequently, the doses needed to produce these cancer responses are not determined. Expanding the number of animals routinely tested is not economically feasible, and even very large studies may not eliminate this problem. One attempt to test the utility of larger dose groups, the so-called “megamouse” experiment, was still unable to increase the sensitivity of measurement beyond about 1%, even though almost 25,000 animals were used. In short, animal cancer bioassays typically provide only one or two dose–response points, and these points are always several orders of magnitude above the range of small risks/doses in which we are ultimately interested.
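
The roughly 10% detection floor cited above can be checked with a quick power calculation. This sketch uses a one-sided Fisher exact test to find the smallest tumor incidence in a 50-animal dose group that reaches statistical significance against a 50-animal control group; a best-case control with zero background tumors is assumed.

```python
from scipy.stats import fisher_exact

n = 50  # animals per dose group, typical of chronic bioassays
# Find the smallest number of tumor-bearing animals in the dosed group
# that is significant (p < 0.05) against a control group with 0/50 tumors.
for k in range(n + 1):
    _, p = fisher_exact([[k, n - k], [0, n]], alternative="greater")
    if p < 0.05:
        print(f"smallest detectable response: {k}/{n} = {k/n:.0%} (p = {p:.3f})")
        break
```

Even under this idealized assumption, significance is first reached at 5/50 animals, that is, a 10% response, consistent with the detection limit described in the text.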


Because low-dose responses cannot be measured, they must be modeled. Three general categories of low-dose extrapolation models are used:



  1. The first category of models consists of the “mechanistic” models. These are dose–response models that attempt to base risk on a general theory of the biological steps that might be involved in the development of carcinogenesis. Examples of mechanistic models include the early “one-hit” and the subsequent “multihit” models for carcinogenesis. These models were based on assumptions concerning the number of “hits” or events of significant genetic damage that were necessary to induce cancer. A related model, the “linearized multistage” (LMS) model of carcinogenesis, is based on the theory that cancer cells develop through a series of different stages, evolving from normal cells to cancer cells that then multiply.
  2. The second category of cancer extrapolation models comprises the “tolerance distribution” models. Rather than attempting to mimic a particular theory of carcinogenesis, these models assume that individuals within an exposed population have different risk tolerances, and this variation in tolerance is described by a probability distribution of the risk per unit dose. Models in this category include the probit, the logit, and the Weibull.
  3. The third category is the “time-to-tumor” model. This type of model bases the risk or probability of developing cancer on the relationship between dose and latency. The risk of cancer is expressed in units of time, and a safe dose is selected as one for which the interval between exposure and cancer is so long that the risk of other diseases becomes of greater concern.

Each of these models can accommodate the assumption that any finite dose poses a risk of cancer, the essential tenet of a nonthreshold model. However, the shape of the dose–response curve in the low-dose region can vary substantially among models (see Figure 23.5). Because the shape of the curve in the low-dose region cannot be verified by measurement, there is no means to determine which shape is correct. A simple example of the impact of model choice is given in Table 23.1, which compares the results of dose–response modeling with three different models, each calibrated so that a relative dose of 1.0 produced a 50% cancer incidence. The results generated by all three models are essentially indistinguishable at high doses, where the animal cancer incidence might be observable, so one would conclude that they all “fit” the experimental data equally well. When modeling the risks associated with lower doses, however, the dose/risk range in which regulatory agencies and risk assessors are most frequently interested, the models diverge widely. In fact, at 1/10,000th of the dose causing a 50% cancer incidence in animals, the risks predicted by the three models vary by a factor of 70,000.
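
This divergence can be reproduced in miniature. The sketch below calibrates a one-hit model and a probit (tolerance distribution) model so that both predict a 50% incidence at a relative dose of 1.0, then evaluates them at low doses. The probit slope is an arbitrary illustrative choice, so the exact ratios will not match Table 23.1.

```python
import math

def one_hit(d, k=math.log(2)):
    # One-hit mechanistic model: P(d) = 1 - exp(-k*d); k chosen so P(1.0) = 0.5
    return 1.0 - math.exp(-k * d)

def probit(d, mu=0.0, sigma=0.5):
    # Probit tolerance-distribution model: P(d) = Phi((log10(d) - mu) / sigma);
    # mu = 0 gives P(1.0) = 0.5; sigma (the slope) is an illustrative choice
    z = (math.log10(d) - mu) / sigma
    return 0.5 * math.erfc(-z / math.sqrt(2.0))

for d in (1.0, 1e-2, 1e-4):
    print(f"dose {d:g}: one-hit = {one_hit(d):.2e}, probit = {probit(d):.2e}")
```

Both models agree exactly in the observable range (50% at a dose of 1.0) yet differ by many orders of magnitude at 1/10,000th of that dose, which is the qualitative point of Figure 23.5 and Table 23.1.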


Figure 23.5 Four different extrapolation models applied to the same experimental data. All fit the data equally well in the observable range, but each yields substantially different risk estimates in the low-dose region most applicable to occupational and environmental exposures.



Source: Adapted from NRC (1983).
