Chapter 3
Risk Assessment Paradigms


Risk assessment is defined as the qualitative or quantitative characterization and estimation of potential adverse health effects associated with exposure of individuals or populations to hazards (materials or situations; physical, chemical, and/or microbial agents). Risk assessment is not used in isolation but as one component of the broader process known as risk analysis.


Risk analysis is a process used in many fields, including business, finance, insurance, occupational health and safety, natural disasters, and environmental pollution and health. Overall risk analysis includes risk assessment, risk management, and risk communication (Table 3.1). The process involves defining and analyzing human-caused or natural hazards to individuals or communities, including government agencies, and then estimating the probability of an adverse event or outcome. While the analysis can be qualitative, a quantitative assessment attempts to determine probabilities numerically. The goal is safety; thus, management and communication are integral to the analysis process.


Table 3.1 Definitions Used in Risk Analysis

Source: Adapted from Ref. [1].

Risk assessment: The qualitative or quantitative characterization and estimation of potential adverse health effects associated with exposure of individuals or populations to hazards (materials or situations; physical, chemical, and/or microbial agents)
Risk management: The process for controlling risks, weighing alternatives, and selecting appropriate action, taking into account risk assessment, values, engineering, economics, and legal and political issues
Risk communication: The communication of risks to managers, stakeholders, public officials, and the public; includes public perception and the ability to exchange scientific information

Risk assessment is interdisciplinary in nature and includes the participation of many established disciplines (e.g., economics, engineering, mathematics, political and social sciences, as well as various natural sciences). The development of quantitative microbial risk assessment (QMRA) has its origins in the chemical risk framework associated with environmental pollution.


This chapter will address the evolution of the National Academy of Sciences chemical risk assessment approach, ecological risk assessment, and finally the QMRA framework. An overview will be presented of the various methodologies used in QMRA, as developed by the World Health Organization (WHO), the Environmental Protection Agency (EPA), and the food industry, to address water and food safety.


Chemical Risk Assessment: National Academy of Sciences Paradigm


Chemical risk assessment has had an interesting and controversial history, in which the development of the field was closely tied to governmental policies for controlling environmental chemical contaminants. For air and drinking water, this began in the early 1970s with the congressional mandates of the Clean Air Act (CAA), Clean Water Act (CWA), and Safe Drinking Water Act (SDWA). Amendments to these acts required that better estimates of potential hazards be made for risk management purposes. Thus began a series of studies and reports from the National Academy of Sciences' National Research Council (NRC) [2]. The principles, process, and methods were further refined, and in 1983, the NRC published the "Red Book" [3], which officially recognized the field of risk assessment. The report also recommended that risk assessment and risk management be kept distinct and that uniform guidelines for risk assessment be established for both cancer and noncancer effects. The framework published in the 1983 Red Book, Risk Assessment in the Federal Government: Managing the Process, is shown in Figure 3.1.

Figure 3.1 The risk assessment and management elements as described in the Red Book, 1983 (Adapted from Ref. [1]).


Four steps in the risk assessment process were defined (Table 3.2). While hazard identification (HAZ ID) drew on data from both human and animal studies, dose–response modeling was done primarily in animal models. Exposure assessment involved monitoring of the environment and the transport and fate of the chemicals through the various exposure pathways [3].


Table 3.2 Steps in Risk Assessment to Address Human Health Effects Associated with Chemicals

HAZ ID: Step used to describe the acute and chronic human health effects associated with any particular hazard, including toxicity, carcinogenicity, mutagenicity, developmental toxicity, reproductive toxicity, and neurotoxicity
Dose–response assessment: Step used to characterize the relationship between various doses administered to subjects and the incidence of the health effect. Generally based on animal models extrapolated to human impacts; in some cases, epidemiological studies inform the uncertainty
Exposure assessment: Step to determine the size and nature of the population exposed to some hazard; this includes the amount, route (e.g., inhalation and ingestion), and duration of the exposure
Risk characterization: Process that integrates the knowledge on the hazard and exposure with the dose–response to predict the probability of an adverse outcome. The magnitude, variability, and uncertainty are key outputs of this final step

Chemical risk assessments generally followed two approaches: one for cancer endpoints and one for noncancer endpoints. Chemicals that may be carcinogenic have been evaluated using a weight-of-evidence classification system, originally developed by the WHO's International Agency for Research on Cancer (IARC). This uses both human and animal data to identify the possible cancer risk. Another approach, used by the National Toxicology Program (NTP), is to evaluate each animal study individually without necessarily offering this as an explanation of a chemical's ability to cause cancer in humans (although it is generally presumed). Until recently, carcinogenic chemicals have been considered nonthreshold (i.e., any level of the chemical will pose some level of cancer risk). In examining animal dose–response studies, the slope of the line defines the cancer potency factor (CPF), which is the rate of increase in cancer risk as a function of increasing dose. Within the SDWA, for example, this approach was used to set maximum contaminant levels (MCLs) for chemicals with the potential to cause human cancer effects at exposures considered to have a de minimis health impact [often 1 in a million (10⁻⁶)].
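To make the linear, nonthreshold logic concrete, the sketch below (with entirely hypothetical parameter values; it is not a calculation from this chapter) shows how a CPF converts a chronic daily dose into an excess lifetime cancer risk, and how a target de minimis risk can be back-calculated to a drinking water concentration.

```python
# Hedged sketch of the linear, nonthreshold cancer risk approach.
# All values here are hypothetical placeholders, not data from this chapter.

def lifetime_cancer_risk(dose_mg_per_kg_day, cpf):
    """Excess lifetime cancer risk = chronic daily dose x cancer potency factor (CPF)."""
    return dose_mg_per_kg_day * cpf

def concentration_for_target_risk(target_risk, cpf, intake_l_per_day=2.0, body_weight_kg=70.0):
    """Back-calculate a drinking water concentration (mg/L) giving a target risk."""
    dose = target_risk / cpf                       # mg/kg/day corresponding to the target risk
    return dose * body_weight_kg / intake_l_per_day

cpf = 0.5                                          # hypothetical CPF in (mg/kg/day)^-1
print(lifetime_cancer_risk(2e-6, cpf))             # risk from a 2e-6 mg/kg/day chronic dose
print(concentration_for_target_risk(1e-6, cpf))    # mg/L at a 1-in-a-million (10^-6) risk
```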


Noncancer endpoints have been described for chemicals demonstrating a threshold, that is, a dose below which there is no response in the people exposed. In these cases, the levels of no effect, known as the no observable adverse effect level (NOAEL) and the lowest observable adverse effect level (LOAEL), were determined in animals with organ systems similar to humans or in animals that may be the most sensitive. Safe levels were derived from the NOAEL and LOAEL, including the acceptable daily intake (ADI) used by the Food and Drug Administration (FDA) and the reference dose (RfD) used by the EPA, which incorporates uncertainty factors.
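A minimal sketch of this threshold approach, assuming a hypothetical NOAEL and the commonly cited 10-fold uncertainty factors (the actual factors are chosen case by case by the agencies):

```python
# Hedged sketch of deriving a reference dose (RfD) or ADI from a NOAEL.
# The NOAEL and uncertainty factors below are hypothetical placeholders.

def reference_dose(noael_mg_per_kg_day, uncertainty_factors):
    """RfD (or ADI) = NOAEL (or LOAEL) / product of uncertainty factors."""
    product = 1.0
    for uf in uncertainty_factors:
        product *= uf
    return noael_mg_per_kg_day / product

# e.g., 10x for animal-to-human extrapolation and 10x for human variability
print(reference_dose(5.0, [10, 10]))   # 5 mg/kg/day NOAEL / 100 = 0.05 mg/kg/day
```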


Exposures in chemical risk assessment have been defined as an average lifetime daily dose (concentration times the contact rate times the exposure duration, divided by the body weight (bw) times the lifetime). Yet defining this average has involved different approaches depending on the exposure medium and the legislative mandate. Under Superfund, for example, it was suggested that 30 years of exposure with the reasonable maximum exposure be determined, while under the CAA, a maximally exposed individual over 70 years was to be used.
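The lifetime average daily dose described above can be written directly as a small function; the inputs below are illustrative assumptions only (a 30-year exposure averaged over a 70-year lifetime, 2 L/day ingestion, 70 kg adult).

```python
# Hedged sketch of the lifetime average daily dose (LADD) described in the text.
# Inputs are illustrative assumptions; statutes differ on the exposure duration.

def lifetime_average_daily_dose(conc_mg_per_l, contact_l_per_day,
                                exposure_days, body_weight_kg, lifetime_days):
    """LADD (mg/kg/day) = (C x CR x ED) / (BW x LT)."""
    return (conc_mg_per_l * contact_l_per_day * exposure_days) / (
        body_weight_kg * lifetime_days)

# 30 years of exposure averaged over a 70-year lifetime, 2 L/day, 70 kg adult
print(lifetime_average_daily_dose(0.01, 2.0, 30 * 365, 70.0, 70 * 365))
```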


Early risk assessment focusing on chemical contaminants included many other fields of study, such as toxicology, epidemiology, animal bioassays, environmental monitoring (with methods development), and statistical modeling. The controversies surrounding the risk assessment methods were focused on four major areas:



  1. Sensitivity and limitations of epidemiological studies
  2. Use of animal data to extrapolate health effects to humans
  3. Types of mathematical models to be used for extrapolation from high dose to low dose
  4. Approaches for handling uncertainty in the estimates

While the use of real-world exposures was often limited, worst-case scenarios, maximally exposed individuals, and vulnerable populations were assessed. As the various governmental agencies took different approaches to address exposure and dose–response, few uniform methodologies were developed (Table 3.3). The U.S. Department of Energy, Department of Defense, Department of Health and Human Services, and Department of Agriculture have also made extensive use of risk assessments.


Table 3.3 Some Agencies Involved in Risk Assessment (listed as agency: legislative programs)

Consumer Product Safety Commission: Consumer products
EPA (mandated to use better risk assessment in 1977 for the SDWA and in 1990 for the CAA): CAA; CWA; Beaches Environmental Assessment and Coastal Health Act (B.E.A.C.H.) of 2000; SDWA; Resource Conservation and Recovery Act (RCRA); Superfund; Toxic Substances Control Act (TOSCA); Pesticide Program
FDA: Food additive program
Occupational Safety and Health Administration (mandated in 1980 by the Supreme Court to undertake risk assessments for toxic chemicals): Worker exposure and permissible exposure levels
WHO: Global goals and approaches to achieve safe drinking water; tolerable risks associated with recreation and wastewater reuse

Although risk assessment and risk management were, in theory, separate processes, in practice, assumptions and other decisions made in the analysis phase were closely tied to management options. While the risk assessment methodologies may not be prescribed in legislative statutes, the laws may dictate very specific risk management directives or mandates (Table 3.4). These have influenced the methods used for risk assessment, and as other risk paradigms developed, it was clear that risk assessment and risk management as well as risk communication in some cases needed to be integrated.


Table 3.4 Statutory Mandates on Risk

Zero-risk or pure-risk standards: Associated with the Delaney Clause, mandated as part of the Federal Food, Drug, and Cosmetic Act, which prohibits any food additive that has been found to "induce cancer"; provisions in the CAA associated with the national ambient air quality standards call for protection of public health without regard to technology or cost. Within the SDWA, MCLs were set for chemicals based on risk levels of 10⁻⁵ to 10⁻⁶
Technology-based standards: Part of the SDWA and the CWA; the focus is on the cost and effectiveness of alternative control technologies to reduce risks. These include best practicable control technology, best conventional technology, best available technology economically achievable, and best demonstrated control technology
No unreasonable risk: Requires the balancing of risks against benefits in making risk management decisions. The Federal Insecticide, Fungicide, and Rodenticide Act and TOSCA require the registration of pesticides that will not cause "unreasonable adverse effects on the environment" and assessment of chemical substances that "present an unreasonable risk of injury to health or the environment," given the benefits of the chemical, the magnitude of the exposure, and the possibility of substitutes

The most recent advice reframes the risk assessment process and the interactions among the components [4] (Figure 3.2) so that problem formulation, stakeholder involvement (communications), and risk management are better integrated.

Figure 3.2 Interactive risk assessment and management framework (Adapted from NRC Ref. [4]).


Ecological Risk Assessment


The protection of the environment in addition to human health is clearly a goal shared by most. Increasingly, issues surrounding global climate change, loss of biological diversity (e.g., the rain forest), invasive species, and sustainability of resources (e.g., fisheries) have focused pollution control on the protection of environmental systems, ecosystems, and their functions. The framework for ecological risk assessment established by the EPA in 1992 [5] was developed conceptually from the National Academy's paradigm for chemicals and human health. However, it was recognized that (1) ecosystems are complex communities and their complex interactions are site specific, (2) ecosystems are made up of a variety of species which may have different sensitivities to the hazards of concern, (3) exposure pathways are difficult to ascertain, (4) nonchemical hazards (such as the effects of sediments on seagrasses) need to be considered, (5) quantitative risk estimates are difficult to develop, (6) feedback loops and adaptive processes are in play, and (7) management is couched in restoration terms. To highlight these differences, the terminology used and the risk assessment process were altered and have evolved from assessing the impact of human disasters like oil spills to large-scale assessment and restoration associated with cumulative stressors [6, 7].


For addressing ecosystems, functions, and restoration, it was clear that risk management, risk assessment, and even risk communication would need to be better integrated and that more stakeholder involvement was needed. Thus, the problem formulation phase became one of the most critical pieces added to the risk paradigm. This took into account an initial evaluation of the site-specific factors, the resources and values to be protected, the types of stressors, the scope of the evaluation, policy and regulatory issues, and scientific data needs. The term stressor was used more often in lieu of hazard. In addition, the adverse effects were multifaceted and included not only direct effects (increased mortality, reproductive changes, changes in maturation) but also indirect effects (a decrease in habitat might mean a decrease in spawning). Ecological effects were often descriptive, reflecting the complexities and the difficulties in modeling these interactions.


Spatial evaluation is essential for comprehensive assessments of the multiple stressors and impacts in the variety of ecosystems, which warrant protection and restoration. These can now be quantified and mapped [8, 9]; however, only recently has the interaction between the cumulative impact of stressors and ecosystems services been presented [6].


The linking of human health risk assessment and ecological risk assessment is still limited, and a clear framework has not developed, although some have proposed various approaches [10]. Monitoring is still seen as key to the assessment of risk. In some cases, the stressor can be monitored directly (heavy metals and nutrients), and the response (toxicity and phytoplankton production) can be ascertained in laboratory or field studies. Both types of scientific data are seen as necessary. While dose–response relationships can be developed, for example, between nitrogen inputs and chlorophyll a as a measure of the potential for development of eutrophic conditions, the relationship of this response to adverse effects, such as toxic algal blooms, is not so apparent [11]. Identification of the transport and fate of contaminants across media (air, water, and sediments) and of the most sensitive species is targeted as an essential scientific data need.


Human health risk assessment and ecological risk assessment were initiated in a similar era of scientific advancement, when anthropogenic contaminants, as well as their impacts on the environment (e.g., fish kills), were being readily observed and monitored. But the approaches were implemented as distinct fields of scientific study (the arenas of ecology and human health). Endpoints of acceptable contaminant loading to water, land, or air could differ greatly, and in some cases, standards for discharges, say, under the CWA, were more stringent, based on the protection of the ecosystem rather than human health. There is a growing awareness that human health and ecosystem health are related and that a more integrated approach is needed to address both endpoints.


Linking environmental conditions and disease has been partly advanced through the use of spatial modeling and dynamic and mechanistic models. Mapping and better long-term data sets help to elucidate concepts around adaptation and resilience in ecosystem risk assessment with a link to management strategies for protection of ecosystem services, which include drinking water and recreational waters. The case of the blue-green algae is one in which the interconnectedness between frameworks and risk analysis goals can be readily seen.


One of the considerable issues in ecosystem health is the modification of lakes, which has led to higher salinities and high nutrient inputs. Nutrients, phosphorus in particular, are seen as associated stressors for freshwater systems, and as a result, cyanobacterial (blue-green algal) blooms have become much more common occurrences throughout the world. These bacteria produce an array of toxins now known to cause illness and even death in people and animals [12].


The Environment and the Stressors: Freshwater systems provide drinking water and recreational opportunities (fishing and swimming) as key human services. It is well known that both point and nonpoint sources of pollution impact waterways via nutrient inputs. The exact relationship is highly dependent on the water body, with temperature and sunlight playing key roles in exacerbating blooms. With excess nutrients comes eutrophication and more phytoplankton, which may be seen as nuisance blooms or toxic blooms depending on the type of algae. Globally, algal blooms are becoming more prevalent, spreading spatially and temporally.


The Hazards: The human and other mammalian hazards are associated with the toxins produced by a variety of blue-green algae. A number of actual cases of illness and deaths have been reported in animals and on occasion in humans from exposure to high levels of toxins in water and food, thus primarily through ingestion. Types of toxins, their effects, and the genera associated with the toxins include:



  • Microcystins, nodularin, and cylindrospermopsins affecting the liver caused by Microcystis, Anabaena, Planktothrix (Oscillatoria), Nostoc, Hapalosiphon, Anabaenopsis, Nodularia, Cylindrospermopsis, Aphanizomenon, and Umezakia, respectively
  • Anatoxin-a, anatoxin-a(S), and saxitoxins affecting the nerve synapse and nerve axons associated with Anabaena, Planktothrix (Oscillatoria), Aphanizomenon, Lyngbya, and Cylindrospermopsis
  • Aplysiatoxins affecting the skin associated with Lyngbya, Schizothrix, and Planktothrix (Oscillatoria)
  • Lyngbyatoxin-a affecting the skin and gastrointestinal tract caused by Lyngbya, Anabaena, and Aphanizomenon

Dose–Response: In this case, the relationship followed the chemical paradigm, in which animal models were used to establish the tolerable daily intake (TDI). This used pure microcystin-LR and the derived LOAEL or NOAEL, divided by appropriate safety factors to account for the use of animal data to extrapolate to humans and for other uncertainties [13]. These data have led to a WHO-supported drinking water management criterion of 1 µg/l of microcystin. There are data that raise suspicion about the carcinogenic nature of the cyanobacterial toxins, yet to date no cancer endpoint has been established.
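As a rough illustration of how such a TDI-based guideline is derived, the following sketch uses inputs typical of the published WHO derivation for microcystin-LR (a NOAEL of 40 µg/kg bw/day, a combined uncertainty factor of 1000, a 60 kg adult drinking 2 l/day, and 80% of the TDI allocated to drinking water); these values are stated here only as assumptions and should be checked against Ref. [13].

```python
# Hedged sketch of a TDI-based drinking water guideline for microcystin-LR.
# Inputs mirror the published WHO derivation but are stated here only as
# assumptions; consult Ref. [13] for the authoritative values.

def tolerable_daily_intake(noael_ug_per_kg_day, uncertainty_factor):
    """TDI = NOAEL / combined uncertainty factor."""
    return noael_ug_per_kg_day / uncertainty_factor

def guideline_value(tdi_ug_per_kg_day, body_weight_kg, water_allocation, intake_l_per_day):
    """Guideline (ug/L) = TDI x body weight x fraction allocated to water / daily intake."""
    return tdi_ug_per_kg_day * body_weight_kg * water_allocation / intake_l_per_day

tdi = tolerable_daily_intake(40.0, 1000.0)     # 0.04 ug/kg bw/day
print(guideline_value(tdi, 60.0, 0.8, 2.0))    # ~0.96 ug/L, rounded to 1 ug/L
```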


Exposure Assessment: While spatial and temporal data on the variety of toxins and their respective algal species in freshwaters are sparse, monitoring approaches are available using enzyme-linked immunosorbent assay (ELISA) techniques and high-performance liquid chromatography. Drinking water has been the main focus, addressing source water and the removal of the algal cells as well as the toxin through filtration and disinfection. Recreational exposure through aerosols as well as ingestion is also of concern, as is exposure through the food chain. While chlorophyll a has been used as a proxy, no established predictive model for toxin concentrations using nutrients, environmental conditions (e.g., temperature, sunlight, wind), chlorophyll a, and algal species has been developed.


Risk Characterization: There is a need for a global characterization of the exposure of the population to toxins through drinking water, recreational waters, and food, although a study of aerosol exposures at New Zealand lakes experiencing blooms, based on toxin monitoring of the air, found no risks elevated above the TDI [16]. Most of the characterizations have been undertaken to establish safe standards, such as a risk assessment for cyanobacterial toxins that provided health guidelines for fish, prawns, and mussels (ranging from 18 to 39 µg/kg for cylindrospermopsin; 24–51 µg/kg for microcystin-LR or equivalent toxins, including nodularin; and 800 µg/kg for saxitoxins) [17].


Management: Phosphorus limitations are the focus for protection of freshwater systems, but the link to downstream costs associated with impacts on fisheries, beaches, or drinking water systems has not been well established.


Approaches for Assessing Microbial Risks


Background


Infectious diseases have existed in the human population since the beginning of humankind and were identified as plagues, pestilence, and epidemic fevers in early Egyptian scrolls. But it was not until the advent of microbiological and epidemiological methods that the pathogens were identified (as the hazards) and the environment (e.g., waterborne) was identified as the exposure route. Pathogens are continually evolving and emerging, along with antibiotic resistance. Newly identified human pathogens are constantly found causing disease in the human population (Cryptosporidium, Cyclospora, E. coli O157, SARS, cyclovirus). Yet infectious disease risks, particularly at the country level, are mostly described by numbers and statistics, which are for the most part determined by measured rates, that is, the number of people who have disease X per 1000 people per unit of time (usually a year). This of course depends on a disease surveillance program (drawing on clinical laboratories, physicians, and health departments) and a reporting system; the better the program and the reporting, the more accurate the rate. It is generally accepted that these public health disease surveillance systems greatly underestimate the level of disease in a community and that, while providing a picture of past risk, they may not accurately reflect future risk of infectious disease. Surveillance also becomes problematic for new pathogens for which there is no established procedure for testing of patients, and it rarely addresses the various exposures or transmission pathways. In addition, the outcome may be assessed by mortality in the extreme or simply by a case count without identification of consequence (severity of the illness, number of days sick, medical care, etc.). It is often presumed that infectious disease exposure occurs through person-to-person transmission. However, waterborne and in some cases foodborne diseases are clearly established risks associated with environmental transmission and the contamination of water, air, or the food chain.


Since cholera was first identified and associated with waterborne transmission in the famous Broad Street pump study in London, epidemiology has always been the major science used to study the transmission of infectious disease and the role of the environment. “Epidemiology may be defined as the study of the occurrence and distribution of disease and injury specified by person, place and time” [18]. This was traditionally used to study epidemics or excess cases of disease, and therefore, the focus for microorganisms as pathogens was on epidemics or waterborne/foodborne outbreaks of disease. With the shift from infectious agents to chronic diseases such as cancer, environmental epidemiology arose. “Environmental epidemiology is the study of environmental factors that influence the distribution and determinants of disease in human populations” [18]. Epidemiology has always been used and will remain an integral part of the risk assessment process for pathogenic microorganisms. As epidemiological studies focus on actual human health effects instead of hypothesized outcomes, the data carry with them a great deal of scientific validity.


However, the use of epidemiology alone, without other scientific fields integrated into the process, will not fulfill the needs for a complete understanding of the risks associated with disease transmission and the management of such risks. Epidemiology is limited by the sensitivity of the study design, which is generally capable of identifying risks greater than 1 in 10 for outbreaks and between 1 in 100 and 1 in 1000 for larger endemic disease studies; statistically, smaller risks cannot be distinguished from the background. The study of endemic disease risks therefore becomes much more difficult. In addition, exposure data are often lacking, incomplete, or imprecise, confounded by the nature of the human subjects. For example, the study by Payment et al. [19] found that 35% of diarrheal illness was associated with tap water consumption. However, the microbial agents, their concentrations, distributions, and sources, and the potential for other serious or chronic health effects were not elucidated. Similarly, risk management strategies were not developed.


Risk assessment methods following the National Academy paradigm were used only on a limited scale for judging waterborne pathogenic microorganisms between 1983 and 1991 [20–26]. These early estimates of the risk based on models were criticized as not carrying the validity of the measured disease rates. A number of conferences and authors attempted to deal with pathogen risks in a qualitative manner, but few quantitative attempts were made until the 1980s. These early attempts lacked an adequate database for dose–response and exposure as well as an understanding of the type of risks associated with microorganisms in water and a framework for analyzing such risks.


Haas [21] was the first to look quantitatively at microbial risks associated with drinking water based on dose–response modeling. He examined which mathematical models could best estimate the probability of infection from the existing databases of human exposure experiments. He found that for viruses a beta-Poisson model best described the probability of infection. This model was used to estimate the risk of infection, clinical disease, and mortality at hypothetical levels of viruses in drinking water, calculating annual and lifetime risks [20, 23]. Rose et al. [26] then used an exponential model to evaluate daily and annual risks of Giardia infections from exposure to contaminated water after various levels of reduction through treatment. This particular study used survey data for assessing the needed treatment for polluted and pristine waters based on Giardia cyst occurrence. The use of "probability of infection" models for development of standards for bacteria, viruses, and protozoa in water was suggested [25], and this approach was used in the development of the Surface Water Treatment Rule to address the performance-based standards in the control of Giardia [27, 28]. Hence, the field of QMRA was created.
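The two models named above are easily written out; the sketch below (with illustrative, assumed parameter values rather than fitted values from the studies cited) shows the exponential and approximate beta-Poisson forms and the conversion of a daily risk to an annual risk under the assumption of independent daily exposures.

```python
# Hedged sketch of the exponential and (approximate) beta-Poisson dose-response
# models and of converting a daily risk to an annual risk. Parameter values are
# illustrative assumptions, not fitted values from the studies cited above.

import math

def exponential_model(dose, r):
    """P(infection) = 1 - exp(-r * dose); r is the per-organism infection probability."""
    return 1.0 - math.exp(-r * dose)

def beta_poisson_model(dose, alpha, n50):
    """Approximate beta-Poisson: P = 1 - [1 + (dose/N50)(2^(1/alpha) - 1)]^(-alpha)."""
    return 1.0 - (1.0 + (dose / n50) * (2.0 ** (1.0 / alpha) - 1.0)) ** (-alpha)

def annual_risk(daily_risk, days=365):
    """P(at least one infection per year), assuming independent daily exposures."""
    return 1.0 - (1.0 - daily_risk) ** days

daily = exponential_model(0.005, r=0.02)              # e.g., 0.005 cysts ingested per day
print(daily, annual_risk(daily))
print(beta_poisson_model(10.0, alpha=0.26, n50=5.6))  # illustrative parameters only
```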


The government began using QMRA for a variety of purposes. In a study for the U.S. Army, Cooper et al. [29] attempted to quantify the risks of water-related infection and illness from exposures of Army units in the field. They reviewed the literature for information on infectious dose and clinical illness for potentially waterborne pathogens. Using this information, the probability of infection was assessed using logistic, beta, exponential, and lognormal models. A generalized model was then developed incorporating expected pathogen concentrations, consumption volume, and risk of infection for different military units. The study attempted to incorporate organism concentrations, effective treatment, and risk of infection; however, there was a limited existing database on microbial concentrations and infectious dose. Data used to estimate subclinical-to-clinical illness rates for enteroviruses did not incorporate clinical data or reviews of clinical research. In addition, the data on occurrence of the pathogens in source waters and the likely efficiency of removal by treatment processes were limited.


One of the major limitations of the early attempts to use a QMRA approach for microorganisms was the lack of data, and of analysis of those data, for all the steps in the risk assessment process: the health effects, dose–response, and exposure assessment. Sobsey et al., as part of the report Drinking Water and Health in the Year 2000 [30], published a conceptual framework of data needs for addressing microorganisms. Microbial risk assessment needed better identification of the specific microorganisms, assessment of human health effects, development of dose–response data, understanding of physiological host–microorganism interactions, and incorporation of epidemiological data. With regard to exposure, the authors recommended better data on occurrence, transport and fate, regrowth potential, and susceptibility to water treatment processes.


The field of QMRA is relatively young but is growing. The science has been advanced primarily in the water and food safety arenas. From 1998 to 2013, around 157 articles, proceedings papers, reviews, and book chapters were published that use QMRA as a key phrase. This has increased from 1 paper per year to well over 25 per year. The use of microbial risk assessment in general has exploded from 1 paper per year in 1994 to over 180 per year. The field, while small, is global. The most highly cited topics include gray water and wastewater reuse, irrigation, bacterial regrowth, key pathogens (E. coli O157:H7, Campylobacter, Legionella, and rotavirus), food safety, transfer of microbes in food preparation, recreational waters, drinking waters, aquifer recharge, and poultry. The roles of climate, storms, and runoff are also being mentioned, as well as assessment of new engineering control measures (e.g., UV disinfection). There is no doubt that the QMRA framework is being used to explore the emerging disease threats and issues of the day as more data are accumulated for each of the various steps in the process.


The QMRA Framework


The National Academy's four-step approach has now been used for the microbial risk assessment process over the last decade. However, because microorganisms are living entities, the risk assessment has evolved to develop terminology, modeling approaches, and data sets that are specific to microbial risks. A combination of scientific fields, which also have different languages, has formed the basis of microbial risk assessment; these include but are not limited to epidemiology, medicine, clinical and environmental microbiology, and engineering. These fields have all added to the growing knowledge associated with HAZ ID, dose–response, exposure assessment, risk characterization, and risk management.


Hazard Identification


The HAZ ID is the identification of both the microbial agent and the spectrum of human illnesses and diseases associated with the specific organism. The types of clinical outcomes range from asymptomatic infection to death (see Chapters 1 and 4). These data come from the clinical literature and from studies by clinical microbiologists and, in some cases, veterinary medicine. The pathogenicity and virulence of the microorganism itself are of great interest, as is the full spectrum of human disease that can result from specific microorganisms. The host response to the microorganisms with regard to immunity and multiple exposures, and the adequacy of animal models for studying human impacts, would also be addressed here. Endemic and epidemic disease investigations, case studies, hospitalization studies, and other epidemiological data are needed to complete this step in the risk assessment. The transmission of some diseases can be linked directly to the microbial hazard (e.g., vectorborne diseases such as malaria); therefore, in some cases, the transmission (and to some extent the exposure) is tied into HAZ ID for microbial risks. Other pathogens can have multiple routes of transmission (viruses in some cases can be spread by contact, inhalation, or ingestion). New methods, such as molecular epidemiology, that can track specific microorganisms from the patient back to the environment will be of great value for refining the role of the environment (air, food, soil, or water) in the transmission of various types of microorganisms and diseases.


There are new microbial hazards that have emerged that will require some thought and modification to the paradigm. These include antibiotic resistance, which involves gene exchange and then gene activation and expression. Proteins and fungi, for example, which act as allergens, may need special consideration as well.


Dose–Response Assessment


The dose–response assessment is aimed at the mathematical characterization of the relationship between the dose administered and the probability of infection, disease, or death in the exposed population. The dose–response models are based for the most part on experimental data (http://qmrawiki.msu.edu/index.php?title=Dose_Response). The microorganisms are measured in doses using the units routinely used to count the specific microbe in the laboratory, such as colony counts on agar media for bacteria, plaque counts in cell culture for viruses, and direct microscopic counts of cysts/oocysts for the protozoa. However, for the protozoa this results in essentially particle counts (nonviable organisms viewed microscopically could be counted in the dose), while with bacteria and viruses, the opposite problem exists: viable but nonculturable organisms are not counted (see Chapters 5 and 7). Despite these limitations in estimation of the dose, the methods used are similar to those used to detect these same microorganisms in environmental samples. Natural routes of exposure are used: direct ingestion, inhalation, or contact. Both disease and infection can be measured as endpoints in these studies. In most cases, less virulent strains of the microorganisms and healthy human adults were used. Multiple exposures should be evaluated, but in most past studies, they were not.


There remain key issues in dose–response to be addressed that include:



  • Multiple dosing over time
  • Use of quantitative polymerase chain reaction (qPCR) units, which measure the nucleic acid for the dose
  • Differences in potencies associated with age or immune status
  • Development of dose–response models from outbreak data

Issues Surrounding a Threshold in Microbial Risk Dose–Response Modeling


One of the more controversial areas surrounding microbial modeling is the potential for a single organism to initiate an infection. Current scientific data support the independent-action (or single-organism) hypothesis [31]. That means a single bacterium, virus, or protozoan can reproduce within the host, a known biological phenomenon that has been proven in laboratory studies. This concept has also been suggested as providing the explanation for sporadic cases of infectious disease. Although it is clear that the host defenses (immunity at the cellular and humoral levels) do play a critical role in determining which individuals may develop infection and, particularly, more severe disease, it has also been suggested that these do not provide a complete explanation of the processes involved when the interactions between the microbe and the host are modeled in more detail as the infection takes place [32]. In the early literature, it was suggested that many microorganisms needed to act cooperatively to overcome host defenses and initiate infection [33]. The independent-action theory, however, suggests that each microorganism alone is capable of initiating the infection, although in practice more than one is usually needed, as the probability that any single microorganism will successfully evade host defenses is small [31]. This is analogous to another biological phenomenon, that of spermatozoa and fertilization [31].


The evaluation of the dose–response data sets supports the independent-action hypothesis, as in almost every case the exponential or beta-Poisson models provided a statistically significant improvement in fit over the lognormal model, which could otherwise be used to predict a threshold [21]. Currently, there are no scientific data to support a threshold level for these microorganisms.
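For readers who want to see how such fits are made, the following is a minimal sketch (not reproduced from Ref. [21]) of fitting the exponential model to hypothetical feeding-trial data by maximum likelihood; the same binomial likelihood machinery extends to the beta-Poisson and lognormal alternatives so that fits can be compared.

```python
# Hedged sketch: fitting the exponential model to hypothetical human feeding-trial
# data by maximum likelihood. The dose groups and outcomes below are invented
# for illustration only.

import math
from scipy.optimize import minimize_scalar

doses = [10.0, 100.0, 1000.0, 10000.0]   # administered doses (hypothetical)
exposed = [10, 10, 10, 10]               # subjects exposed at each dose
infected = [1, 3, 7, 10]                 # subjects infected at each dose

def neg_log_likelihood(log_r):
    """Binomial negative log-likelihood of the exponential model, parameterized by log(r)."""
    r = math.exp(log_r)
    nll = 0.0
    for d, n, k in zip(doses, exposed, infected):
        p = 1.0 - math.exp(-r * d)
        p = min(max(p, 1e-12), 1.0 - 1e-12)   # guard against log(0)
        nll -= k * math.log(p) + (n - k) * math.log(1.0 - p)
    return nll

fit = minimize_scalar(neg_log_likelihood, bounds=(-20.0, 5.0), method="bounded")
print("best-fit r:", math.exp(fit.x), "minimized negative log-likelihood:", fit.fun)
```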


It should be kept in mind that healthy human volunteers were most often used in the dose experiments, all with normal-functioning immune systems. There was no attempt to screen out those who had antibodies and were previously exposed, except in the case of the Cryptosporidium study [34, 35]. Thus, the argument that the immune system would influence these models suggests that the models could actually be less conservative and underestimate the risks associated with the sensitive or vulnerable populations.


Although the human data sets are extensive, they are not exhaustive in terms of answering many of the questions regarding the host–microbe interaction. In the future, more human and animal studies will be needed to further address both hazard and dose–response, including virulence, strain variation and immunity, and multiple exposures.


The concept of thresholds for microbial dose–response has also been invoked by some, based on the idea that individual immunity or other innate biological processes exist such that low levels of pathogen exposure will not cause infection or harm. However, there is confusion between the potential for individual immunity against exposure to low numbers of pathogens and its manifestation in population dose–response curves, which are based on the response to an average dose experienced by a collection of hosts. Thus, individual and population thresholds should be defined (Haas, unpublished):



If individual thresholds exist, then there must be a portion of the population dose response curve with a steeper slope than exponential. Therefore the presence of individual thresholds must manifest itself in the observable range, but no microbial data sets have demonstrated steeper slopes than exponential, and thus true individual thresholds are inconsistent with the observed data.


Exposure Assessment


Exposure assessment is an attempt to determine the size and nature of the population exposed and the route, concentrations, and distribution of the microorganisms, as well as the duration of the exposure. The description of exposure includes not only occurrence based on concentrations but also the prevalence (how often the microorganisms are found) or distribution of microorganisms in space and over time. Exposure assessment depends on adequate methods for recovery, detection, quantification, sensitivity, specificity, virulence markers, and viability, as well as studies and models addressing transport and fate through the environment. For many microorganisms, the methods, studies, and models are not available (see Chapter 5). Often, the concentration in the medium associated with the direct exposure (e.g., drinking water, food) is not known but must be estimated from other databases. Therefore, knowledge of the ecology of these microorganisms and their sources in the environment, transport, and fate is needed, including inactivation rates and survival in the environment, the ability to regrow (as in the case of some bacteria), resistance to environmental factors (temperature, humidity, sunlight, etc.), and movement through soil, air, and water. Finally, because the current methods for monitoring microorganisms in environmental samples often do not afford the sensitivity needed to examine treated water or food to the levels desirable, a greater database is needed on the inactivation/removal of microorganisms through treatment processes. These data can be used to estimate levels in the final treated product.


One of the key areas of exposure assessment is persistence. The survival or inactivation of microorganisms through time, space, and treatment technologies is key to understanding the final concentrations and doses delivered to populations. This depends on the microbe, the environment (temperature, relative humidity, sunlight, attachment or clumping, presence of inhibitory substances or disinfectants), and time. Many more highly specified inactivation rates are needed for exposure assessment.
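A minimal sketch of how persistence and treatment are often folded into an exposure estimate, assuming first-order die-off and a log10 treatment removal credit (the rate constant, travel time, and log credit below are hypothetical):

```python
# Hedged sketch of folding persistence and treatment into an exposure estimate:
# first-order die-off in the environment plus a log10 removal credit for treatment.
# The rate constant, travel time, and log credit below are hypothetical.

import math

def decay(c0, k_per_day, days):
    """First-order inactivation: C(t) = C0 * exp(-k * t)."""
    return c0 * math.exp(-k_per_day * days)

def after_treatment(concentration, log10_removal):
    """Apply a treatment log-removal credit, e.g., 3-log = 99.9% removal."""
    return concentration * 10.0 ** (-log10_removal)

source = 100.0                                     # organisms/L in source water (assumed)
at_intake = decay(source, k_per_day=0.3, days=5)   # die-off during transport to the intake
finished = after_treatment(at_intake, 3.0)         # concentration after treatment
print(at_intake, finished)
```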


Risk Characterization


Risk characterization is an integration of the data on HAZ ID, dose–response, and exposure to estimate the magnitude of the public health problem and to understand the probability that it will occur, as well as the variability and uncertainty of the predicted outcomes. The mathematical output is a probability of infection, disease, or death (e.g., 1/10 or 1 in a million). In its simplest form, the approach is a static, linear one that develops a point estimate of risk, which generally represents the single best estimate of the average or median value associated with the data.


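The point estimate can be illustrated with a short sketch that chains single best-estimate values for exposure and dose–response into a probability of infection and an expected case count; all inputs below are assumptions for illustration, not values from this chapter.

```python
# Hedged sketch of a static point-estimate risk characterization: single
# best-estimate values for exposure and dose-response are chained into a
# probability of infection and an expected case count. All inputs are assumed.

import math

concentration = 0.002   # organisms per liter in finished drinking water (assumed)
intake = 2.0            # liters consumed per person per day (assumed)
r = 0.02                # per-organism probability of infection (assumed)
population = 100_000    # size of the exposed population (assumed)

dose = concentration * intake                     # expected daily dose
daily_risk = 1.0 - math.exp(-r * dose)            # exponential dose-response model
yearly_risk = 1.0 - (1.0 - daily_risk) ** 365     # assuming independent daily exposures
expected_cases = yearly_risk * population         # point estimate of annual infections

print(daily_risk, yearly_risk, expected_cases)
```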
