20 COMPUTATIONAL TOXICOLOGY


Richard S. Judson, David M. Reif, and Keith A. Houck


This chapter provides an overview of the emerging field of computational toxicology (CT) and discusses:



  • Goals of CT research and development
  • Types of data generated and used
  • Organization of CT data into databases and knowledge bases
  • Applications

CT is an emerging field that combines in vitro and computationally generated data on chemicals, information on biological targets (genes, proteins), pathways and processes, and informatics methods to model and understand the mechanistic basis of chemical toxicity. CT often looks at trends across large sets of chemicals, large sets of data on a single chemical, or a combination of the two. Much of the computational effort focuses on organizing these data and using statistical and modeling methods to interpret them. Two other areas of research that often fall under the CT heading, but which will not be covered here, are systems biology modeling and quantitative structure–activity relationship (QSAR) modeling. These purely computational approaches are complementary to the in vitro, data-centered approach described here but are sufficiently different to warrant their own in-depth discussion. This chapter will introduce the goals, tools, and approaches used by CT practitioners and will illustrate them through several examples.


20.1 DATA RELEVANT TO CT APPLICATIONS


Although the term CT may imply a purely theoretical approach, in reality, all CT methods are heavily reliant on data to drive hypotheses and to build and validate models. What distinguishes many of the experimental aspects of CT from those of traditional toxicology is the reliance on high-throughput, in vitro methods. These methods are seeing increases in both usage and reliability. In addition to producing new data, CT methods often require animal-based in vivo data to anchor predictions from in vitro-based models. To do this in an efficient manner, it is important to compile quantitative data from a large number of animal studies into “computable” databases. Here, computable means that numerical or categorical data are extracted from text reports and tabulated.
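
As a concrete illustration, the minimal sketch below (Python, using pandas) tabulates two hypothetical treatment-group observations so that they can be queried programmatically. The field names, dose values, and the query are illustrative assumptions only and do not reflect the schema or contents of any particular database.

```python
# Illustrative sketch only: the field names and values are hypothetical and
# do not correspond to any specific toxicology database schema. The point is
# that each treatment-group observation becomes a typed, queryable row rather
# than prose buried in a study report.
import pandas as pd

records = [
    {"chemical": "atrazine", "casrn": "1912-24-9", "species": "rat",
     "study_type": "chronic", "endpoint": "liver hypertrophy",
     "dose_mg_kg_day": 50.0, "effect_observed": True},
    {"chemical": "atrazine", "casrn": "1912-24-9", "species": "rat",
     "study_type": "chronic", "endpoint": "liver hypertrophy",
     "dose_mg_kg_day": 5.0, "effect_observed": False},
]
studies = pd.DataFrame(records)

# Once tabulated, a question such as "lowest dose at which an effect was
# observed" becomes a one-line query instead of a manual report review.
loel = studies.loc[studies["effect_observed"], "dose_mg_kg_day"].min()
print(f"Lowest observed effect level in this toy table: {loel} mg/kg/day")
```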


Chemical Identity, Structure, and Properties


Like most toxicology methods, CT focuses on the potential toxic effects of chemicals and attempts to build models linking chemicals and their structures and properties to in vitro activity and in vivo toxicity. To do this, chemicals under study must be accurately identified by names or other identifiers and by chemical structure. Although seemingly straightforward, arriving at unique and correct chemical identity requires significant care.


Many chemicals go by common names but can also be identified by systematic names [i.e., using IUPAC rules (IUPAC, 1993)] and a variety of trade names. For instance, the pesticide atrazine has over 100 synonyms listed in public databases, including more than 25 variant systematic names. Some of this ambiguity reflects product names from different manufacturers and different formulations with greater or lesser degrees of purity. The Chemical Abstracts Service Registry Number (CAS registry number or CASRN) is a widely used alternative to uniquely identify a chemical, but even here, there is ambiguity. For atrazine, the accepted CASRN is 1912-24-9, but a variety of sources list at least six others. Some of these are old (withdrawn) CASRN, some refer to mixtures, and some are simple mistakes that have crept into papers and online databases. The official database of CASRN is managed by Chemical Abstracts Service and requires fee-based access. The European Commission, through the Joint Research Centre, has developed an alternative identifier called the EC number, which for atrazine is 217-617-8. This is an openly available system, but unfortunately, few data in the toxicology literature are annotated with EC numbers. Despite these drawbacks, several publicly available databases have carefully curated CASRN, systematic names, and structures for many chemicals. These include DSSTox (Richard and Williams, 2002; Russom et al., 2008) and ChemSpider (2008).


Pure chemicals can also be uniquely identified by their structure. There are three common structure conventions, plus a plethora of others. These are the Simplified Molecular Input Line Entry System (SMILES) (Daylight, 2008), IUPAC International Chemical Identifier (InChI) (IUPAC, 2008), and Mol or structure–data (SD) file. SMILES and InChI are both codes that can be written on a single line and can be used to reconstruct the two-dimensional geometry of a chemical. For atrazine (Figure 20.1), the SMILES and InChI codes are “CCNc1nc(nc(n1)Cl)NC(C)C” and “InChI=1S/C8H14ClN5/c1-4-10-7-12-6(9)13-8(14-7)11-5(2)3/h5H,4H2,1-3H3,(H2,10,11,12,13,14),” respectively. Because the InChI codes can be very long, an alternative short version called the InChIKey has been developed. This is a “one-way” descriptor, meaning that one can generate the key from a structure, but cannot go backward. Therefore, it is useful as a unique identifier, but not as a shorthand for reconstructing structures in later analyses. For atrazine, the InChIKey is MXWJVTOOROXGIU-UHFFFAOYSA-N. SMILES are currently more widely used than InChIs, but in the future, this may change because the InChI code has the advantage of being unique; that is, the description of the coding algorithm enforces uniqueness. With SMILES, on the other hand, one can generate multiple valid strings to represent the same molecule. There is ongoing work in both the InChI and SMILES communities to develop more robust coding algorithms that guarantee (for instance) the correct handling of structures with multiple chiral centers. There have been proposals to replace the use of arbitrary identifiers such as CASRN with the InChIKeys, but this raises the question of how to deal with mixtures or formulations, which are common subjects of chemical and toxicological study.
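
A brief sketch of how these identifiers relate in practice is given below, using the open-source RDKit toolkit (an assumption; RDKit is not discussed in the text, and InChI support is required, which the standard distributions include). Starting from the atrazine SMILES given above, it generates the canonical SMILES, InChI, and InChIKey, and shows that two different-looking SMILES strings canonicalize to the same representation.

```python
# Minimal sketch using RDKit (assumed toolkit, not part of the chapter).
from rdkit import Chem

mol = Chem.MolFromSmiles("CCNc1nc(nc(n1)Cl)NC(C)C")  # atrazine, SMILES from the text

# Canonical SMILES: many valid SMILES can describe the same molecule, but a
# given toolkit's canonicalization maps them all to one representative string.
print(Chem.MolToSmiles(mol))

# InChI is algorithmically unique; the InChIKey is its fixed-length, one-way
# hash, useful as a database key but not reversible to a structure.
print(Chem.MolToInchi(mol))
print(Chem.MolToInchiKey(mol))

# A different-looking but equivalent SMILES canonicalizes identically.
alt = Chem.MolFromSmiles("Clc1nc(NCC)nc(NC(C)C)n1")
assert Chem.MolToSmiles(alt) == Chem.MolToSmiles(mol)
```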


Figure 20.1 Chemical structure of atrazine.


Given a chemical structure, one can calculate or look up many properties that are often important for toxicology studies. These include the octanol–water partition coefficient (logP or logKow), water solubility, melting and boiling point, etc. A commonly used, freely available tool is EPI Suite (U.S. EPA, 2010), which has the advantage of containing a large database of experimental values and will give the user an experimentally derived value when available rather than just returning a calculated estimate. EPI Suite is especially rich in data on environmental chemicals such as pesticides and industrial chemicals. It also provides estimates of several parameters used in ecotoxicology studies. A variety of commercial property-estimating packages are also widely used, including MOE (http://www.chemcomp.com/software.htm), QikProp (Schrodinger, 2010), Leadscope (2010), and OpenEye Babel (http://www.eyesopen.com/docs/babel/current/html/index.html).
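
The sketch below shows this kind of property estimation using RDKit as a freely available stand-in (an assumption; unlike EPI Suite, it returns only calculated values, and the particular descriptors chosen here are illustrative).

```python
# Minimal sketch of structure-based property estimation with RDKit (assumed
# toolkit); all outputs are calculated estimates, not experimental values.
from rdkit import Chem
from rdkit.Chem import Descriptors

mol = Chem.MolFromSmiles("CCNc1nc(nc(n1)Cl)NC(C)C")  # atrazine

mw = Descriptors.MolWt(mol)      # molecular weight (g/mol)
logp = Descriptors.MolLogP(mol)  # Crippen estimate of logP (logKow)
tpsa = Descriptors.TPSA(mol)     # topological polar surface area

print(f"MW = {mw:.1f} g/mol, estimated logP = {logp:.2f}, TPSA = {tpsa:.1f}")
```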


In Vitro Methods and Data


In vitro assays are a key source of data for many CT applications. These assays test chemicals using cell-based or cell-free assay systems. This approach allows one to understand the effects of chemicals at a molecular and cellular level, including direct interactions with biomolecules (DNA, proteins) and the molecular or pathway-based effects triggered by chemical exposures to cells. Here, we describe the most widely used in vitro techniques for CT: high-throughput screening (HTS), high-content screening (HCS), and whole-genome analyses. Other techniques not covered here are proteomics, in which one measures broad-based protein-level changes in response to a chemical, and metabolomics, in which the same type of experiment is performed but on natural metabolites or small molecules.


Cell Systems (Primary, Cell Lines, Tissues, Cocultures, and Cell-Free)


The choice of cell system to use in any cell-based assay is critical. This is because of the tissue-specific expression of genes and proteins, activity of signaling pathways, presence of necessary cofactors, etc., that determine whether the target of toxicity exists in the selected cells. Choices of cell systems include primary cells, cell lines, stem cells, cocultures (i.e., mixtures of different cell types), tissue slices or sections (usually ex vivo), and three-dimensional versions of some of these.


Primary cells are isolated directly from an animal and have the advantage of behaving in a physiologically normal way, at least for a brief period. Signaling pathways are intact, the chromosome number is normal, and xenobiotic metabolism can be active. There are, however, several drawbacks to using primary cells, the most obvious of which is the difficulty of obtaining these cells, especially for human tissues other than blood, combined with the often limited ability to passage these cells in culture. Additionally, normal physiological function typically declines within 24–48 h, so any experiments need to be completed in this short window of time. Further, there can be significant donor-to-donor variability, even when cells come from inbred animals. The donor-variability issue is even larger with human samples.


Immortalized cell lines address some of the issues with primary cells; chiefly, they can be much less variable because a large volume of nearly identical cells can be obtained and used across time and across labs. The major downside to using cell lines is that they are by definition abnormal, being most often derived from cancer cells and adapted to growth in vitro, with resulting major changes in important signaling pathways controlling growth and differentiation. They are usually significantly different from normal cells derived from the same tissue, as exemplified by the greatly reduced xenobiotic-metabolizing enzyme content and inducibility in the HepG2 human liver cell line relative to normal hepatocytes (Westerink and Schoonen, 2007).


A great advantage of cell lines is the ability to compare results lab to lab and year to year, but it is known that there is genetic (and probably epigenetic) drift in cell lines over passage number. While there are literally hundreds of cell lines derived from numerous tissues available, offering many choices for models, the lack of standardization often challenges the ability to compare results systematically. However, the diversity of cell lines available provides interesting opportunities, exemplified by the lines created for the human HapMap project (The International HapMap Consortium, 2005). These are immortalized lymphocytes from an ethnically diverse set of individuals, which have been genotyped for over 500K single nucleotide polymorphisms (SNPs). This resource allows researchers to study the effects of genetic variation on the activity of chemicals on cells (O’Shea et al., 2010).


Pluripotent and multipotent stem cells are becoming more widely used in toxicology because they have the advantages of being more “normal” than typical cell lines and of being able to differentiate into many other cell types. Today, embryonic stem cells are most often used (either mouse lines or one of the limited number of human lines approved for broad research use). At the forefront of toxicology applications is the derivation of cardiomyocytes for use in testing for cardiotoxic effects of chemicals (Dick et al., 2010). There is also high interest in developing liver cells from pluripotent stem cells, which would have wide applications in toxicology (Huang et al., 2011). The ability to create induced pluripotent stem cells from adult tissue promises to make more lines widely available for toxicity research in the near future.


A number of coculture systems, mixtures of different cell types in either 2D or 3D culture, are also in use. Such cocultures can reconstitute the complete signaling cascades characteristic of normal tissues composed of multiple cell types. These cocultures are sometimes referred to as organotypic. For example, culture systems have been designed to model the pathophysiology of inflammation and of cardiovascular and respiratory diseases (Berg et al., 2010). Primary tissue analogs of these organotypic cultures are ex vivo tissue slices, such as the hippocampal slices used to study brain damage (Noraberg et al., 2005) or liver slices for studying hepatotoxicity (Boess et al., 2003). We should also mention in passing that certain model organisms are being used in the CT field, mainly zebrafish embryos and C. elegans (Parng et al., 2002; Smith et al., 2009). These systems allow interrogation of higher-level, emergent properties of intact organisms that are amenable to treatment and study in a medium-throughput format in 96-well microtiter plates.


HTS Assays, Technologies, Trends


HTS refers to a set of techniques that measure interactions of chemicals with proteins or cells and does this for many chemicals or conditions simultaneously, typically in microtiter plates. Many of these assays were initially developed by or for the pharmaceutical industry for the testing of thousands to millions of compounds against molecular targets to find lead compounds against specific diseases (Bleicher et al., 2003; Mayr and Bojanic, 2009). Assays are typically run in standardized 96-, 384-, or 1536-well plates, often in concentration–response mode. With increased density of wells per plate, the quantity of cells and reagents and the cost per chemical all decrease, providing vast increases in efficiency. In order to manage these large numbers of samples, automation is often used, ranging from simple liquid handling stations to fully automated robotic systems that can fill an entire room. Each assay is characterized by whether it is run in cell-free, that is, biochemical, format or with cultured cells; by its target; and by its signal readout. Biochemical assays targeting specific protein interactions can measure chemicals binding to receptors, chemicals interfering with or promoting protein–protein interactions, or chemicals affecting enzymatic activity. Assay signal readouts are typically generated through the use of radiolabeled or fluorescently labeled compounds, although for HTS, radiolabeled tests are less often used due to the volume of radioactive waste generated (Sundberg, 2000). A wide variety of assay technologies are commercially available to measure these activities. A recent trend is toward the use of label-free technologies such as high-throughput mass spectrometry to avoid the inherent changes to proteins and substrates involved with fluorescent and other tags (Lunn, 2010). In addition to biochemical assays, the use of cell-based assays in HTS format has become routine. These cellular assays are even used in ultrahigh-throughput approaches as exemplified by the quantitative HTS (qHTS) approach developed by the NIH Chemical Genomics Center (NCGC). The NCGC is able to rapidly test libraries of hundreds of thousands of compounds at 12–15 concentrations, so that quantitative concentration–response curves are generated (Inglese et al., 2006, 2007). This is of particular importance to the toxicology field where understanding dose-related activity is critical (see Figure 20.2 for an example of the large robotic systems used in HTS testing).
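
The concentration–response analysis at the heart of qHTS can be illustrated with a short sketch: fit a Hill curve to a chemical's activity measurements across concentrations and report the estimated AC50. The data points, starting guesses, and parameterization below are synthetic and purely illustrative.

```python
# Minimal sketch of fitting a Hill (concentration-response) curve to
# synthetic qHTS-style data to estimate an AC50.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, top, ac50, slope):
    """Percent activity as a function of concentration; approaches zero at low doses."""
    return top / (1.0 + (ac50 / conc) ** slope)

conc = np.array([0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30, 100])  # micromolar
resp = np.array([1, 3, 4, 10, 28, 55, 82, 94, 98])          # percent of maximal activity

params, _ = curve_fit(hill, conc, resp, p0=[100.0, 1.0, 1.0], maxfev=5000)
top, ac50, slope = params
print(f"AC50 ~ {ac50:.2f} uM, top ~ {top:.0f}%, Hill slope ~ {slope:.2f}")
```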


Figure 20.2 A large-scale robotic screening system in use at the NIH Chemical Genomics Center. This is capable of screening up to 100,000 compounds at a time. Inside the enclosure is a large robot arm that moves plates between stations.


HTS methods are distinguished across several axes. First is what is being measured, most often falling into the following categories:



  • Transcript levels: Detect changing levels of mRNA transcripts in cells, for instance, in response to a chemical interacting with a nuclear receptor or elsewhere in a transcription factor pathway. The number of different transcripts measured depends on the technology and ranges from one or two in a reporter gene assay to tens, hundreds, or thousands with other methods. Reporter gene assays indirectly measure engineered mRNA transcripts encoding reporter genes such as luciferase or β-lactamase through the activity of the reporter gene product. The other methods measure endogenous mRNA transcripts directly or after conversion to cDNA and amplification. Microarray chip-based hybridization/fluorescence techniques can be used for a few genes or a whole genome but have somewhat limited dynamic range. An alternative is real-time polymerase chain reaction (RT-PCR), which is a more quantitative technique for measuring transcript levels and is typically run for up to 40 transcripts at a time. Chip or bead-based quantitative nuclease protection assays (QNPA) (Roberts et al., 2007; Rimsza et al., 2008) are another improvement over microarray chip-based methods because they do not require a PCR amplification step, which reduces cost and increases quantitative reproducibility. QNPA can be multiplexed with up to 200 transcripts measured in a single well.
  • Protein levels: The most often used technique is ELISA, or enzyme-linked immunosorbent assay. An antibody is developed for each protein to be detected. This is more difficult and expensive than equivalent RNA detection techniques because of the need to develop highly specific antibodies (Taipa, 2008).
  • Protein activity: These assays are usually run cell-free and measure the change in activity (usually for an enzyme target) when a test chemical is introduced. In the most common format, the production or degradation of a fluorescent substrate resulting from enzymatic activity of the target protein is monitored and inhibition of that enzymatic activity by the test chemical is determined.
  • Protein binding: These assays are typically run cell-free and measure the ability of a ligand to bind to a specific protein. Most often, a strong native ligand is initially bound to the protein and one measures how well the test ligand displaces the native ligand, which is usually fluorescently or radiolabeled. Effect of chemicals on protein–protein binding interactions can also be measured by techniques such as fluorescent resonance energy transfer (FRET), which requires fluorescent labeling of both target proteins.
  • Cytotoxicity: The killing of cells by chemical treatment can be monitored by a variety of techniques including measurement of mitochondrial reductase activity, loss of cell membrane integrity, and loss of ATP content. In any type of cell-based assay, it is important to monitor for cytotoxicity over the same concentration range of the test chemical as is used in the direct assay to control for possible interference with the measured endpoint. Ideally, one measures ligand activity and cytotoxicity in the same well.

The next axis is the type of system (cell-based or cell-free):



  • Biochemical (cell-free) or cellular: In a biochemical system, one is looking for interactions of a chemical with a protein (e.g., a receptor or enzyme). In cell-based systems, one can still measure direct chemical–protein interactions but can also monitor molecular or cellular changes of chemical exposure.
  • Cell type: The types of cells used can greatly affect the types of results one can measure.

Finally, one must consider the readout used in the assay. The effect of a chemical exposure can be monitored using levels of fluorescence, luminescence, light absorption, radiolabel methods, cell impedance, cell imaging (covered in Section “High-Content Methods”), antibody-based protein detection (ELISA), or sequence readouts. Each of these has advantages and disadvantages, annotated in Table 20.1.


Table 20.1 Typical Types of Readouts Used in HTS Assays

Radioligand
  Description or use: Typical use is in cell-free assays to measure binding or activity of small molecules against receptors or enzymes. The target protein is preincubated with a radiolabeled ligand. The test chemical is then added and the amount of displaced radioligand is measured as a function of test chemical concentration.
  Advantages: Sensitive; directly measures chemical product.
  Disadvantages: Requires synthesis, use, and disposal of radioligand.

Fluorescence
  Description or use: Uses include assays that measure enzymatic activity by effects of enzyme on fluorescently labeled substrate, cellular production of fluorescent protein, labeling of cDNA to detect levels of specific mRNAs, and measurement of protein complexes, which then fluoresce due to fluorescent resonant energy transfer between labeled molecules.
  Advantages: Highly sensitive detectors; some multiplexing (more than one color per well); inexpensive.
  Disadvantages: Interference by fluorescent test chemicals (fluorescent intensity and fluorescent quenching) generating potential false-positive and false-negative results; background cellular fluorescence.

Luminescence
  Description or use: Luciferase reporter gene product signal; enzyme substrates (become luciferase substrate after enzyme activity).
  Advantages: Very low background; high dynamic range.
  Disadvantages: Test chemicals that inhibit luciferase enzymatic activity.

Light absorption
  Description or use: Chloramphenicol acetyltransferase reporter gene product signal; enzyme substrates (become colored after enzyme activity).
  Advantages: Inexpensive and simple.
  Disadvantages: Often poor sensitivity and limited dynamic range; test chemical interference with absorption at the same wavelength.

Nucleic acid sequence
  Description or use: Detects changing levels of mRNA transcripts, for instance, in response to a chemical interacting with a nuclear receptor or elsewhere in a transcription factor pathway.
  Advantages: Can be highly multiplexed; sensitive.
  Disadvantages: Can be expensive; some methods have limited sensitivity and dynamic range.

Antibody protein
  Description or use: Detects the level of a target protein (ELISA).
  Advantages: Can be multiplexed; directly detects protein levels.
  Disadvantages: Requires production of a specific antibody; sensitivity often limiting; assay usually requires wash steps.

Cellular impedance
  Description or use: Change in cell number or shape affecting impedance in a microelectrode array incorporated into a microtiter plate.
  Advantages: Sensitive; no label required; real-time measurement.
  Disadvantages: Limited endpoints detected.

High-Content Methods


HCS methods make use of automated fluorescent microscopy techniques (Bullen, 2008; Giuliano et al., 2005, 2006) to obtain information on cell size, morphology, or subcellular localization of proteins or other biomolecules. One typically fluorescently stains cells using dyes with unique excitation/emission spectra that target subcellular organelles (nucleus, cytoplasm, cell membrane, etc.) and simultaneously labels specific target proteins using antibodies conjugated with fluorescent tags of distinct excitation/emission spectra. The stained cells are imaged on the fluorescent microscope and image algorithms are used to quantitate the amount of target proteins in different subcellular compartments. Measurements include endpoints such as the amount of a specific phosphorylated protein in the nucleus, translocation of transcription factors from cytoplasm to nucleus, the mitochondrial membrane potential, the shape of the nucleus, the length and number of neurites from neuronal cell cultures, the degree of microtubule dissociation, etc. Multiplexing is feasible (up to four separate channels) so that levels of several proteins and subcellular compartments can be measured simultaneously.
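
A highly simplified sketch of the image quantitation step is given below, using scikit-image (an assumption; commercial HCS platforms ship their own image-analysis algorithms). It segments nuclei from a synthetic nuclear-stain channel and then measures the intensity of a second, antibody-labeled channel within each nuclear mask; real pipelines add illumination correction, cell-level quality control, and many more endpoints.

```python
# Minimal sketch of HCS-style image quantitation with scikit-image on
# synthetic images: segment nuclei, then measure a second channel inside them.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

rng = np.random.default_rng(0)
dapi = rng.normal(100, 10, (256, 256))     # synthetic nuclear-stain channel
dapi[60:90, 60:90] += 300                  # fake nucleus 1
dapi[150:180, 170:200] += 300              # fake nucleus 2
target = rng.normal(50, 5, (256, 256))     # synthetic antibody channel
target[60:90, 60:90] += 120                # target protein enriched in nucleus 1

nuclei = label(dapi > threshold_otsu(dapi))  # threshold and label nuclear masks
for region in regionprops(nuclei, intensity_image=target):
    print(f"nucleus {region.label}: area = {region.area} px, "
          f"mean target intensity = {region.mean_intensity:.1f}")
```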


Genomics: Whole-Genome Methods


The three “omics” technologies (genomics, proteomics, and metabolomics) have all been used in toxicology research, although whole-genome microarray work is the most commonly used and will be the focus here. In this approach, total mRNA is extracted from cells, converted into complementary DNA (cDNA), and quantified gene by gene. This is done either by hybridizing fluorescently labeled cDNA to whole-genome chips and reading the fluorescence at each spot, or by using sequencing techniques (e.g., “next-generation sequencing”). The use of genomics in toxicology is typically called toxicogenomics (Blomme, Yang, and Waring, 2009; Fielden and Zacharewski, 2001; Hamadeh et al., 2002a, b; Nuwaysir et al., 1999; Zhou et al., 2009), and in this case, one usually compares gene expression profiles between treated and untreated samples. The absolute value of expression for any given gene is often not of interest; instead, one focuses on the change in expression that is driven by the treatment.


These methods are technically challenging, partly because the data are somewhat noisy (from both a biological and a technical standpoint). Often, it is necessary to run multiple technical and biological replicates and to do appropriate averaging. A second challenge arises from the need to analyze data on 20,000 or more genes. The first step of analysis is background subtraction and normalization. One then takes differences between treatment and control samples and looks for genes that are differentially expressed in a statistically significant way. Multiple analysis strategies have been published to identify the differentially expressed genes (DEGs) and then make sense of the patterns. A common next step is to map sets of DEGs to pathways or Gene Ontology (GO) processes, which have ties back to the biological literature. The Microarray Quality Control (MAQC) consortium has held workshops and published analyses on the reproducibility of results across platforms, labs, and time (MAQC-I) (Shi et al., 2006) and compared different strategies for analyzing case–control genomics data sets (MAQC-II) (Shi et al., 2010).
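
One of the many possible analysis strategies is sketched below on a synthetic, already-normalized log2 expression matrix: per-gene t-tests between treated and control replicates followed by a Benjamini–Hochberg false discovery rate correction. This is a generic illustration, not one of the specific pipelines evaluated by MAQC.

```python
# Minimal sketch of a differential-expression analysis on synthetic data:
# log2 expression for 20,000 genes, 5 control and 5 treated replicates.
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
n_genes = 20000
control = rng.normal(8.0, 0.5, size=(n_genes, 5))
treated = rng.normal(8.0, 0.5, size=(n_genes, 5))
treated[:50] += 3.0                      # spike in 50 "true" responders

log2_fc = treated.mean(axis=1) - control.mean(axis=1)
t_stat, p_val = stats.ttest_ind(treated, control, axis=1)
reject, q_val, _, _ = multipletests(p_val, alpha=0.05, method="fdr_bh")

# Call a gene differentially expressed if it passes FDR and a fold-change cut.
degs = np.where(reject & (np.abs(log2_fc) > 1.0))[0]
print(f"{degs.size} genes flagged as differentially expressed")
```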


The advantage of this approach is that it provides a global, hypothesis-free view of the effect of a chemical. The disadvantage is that these analyses are expensive to run, making it difficult to perform full concentration–response analysis over many chemicals, time points, and cell systems. Also, the volume of data is so large that there are significant interpretation challenges. Another fundamental limitation of all RNA analysis techniques is that RNA levels (or their changes) are not well correlated with the corresponding protein levels (Gygi et al., 1999). For these reasons, microarray data is more often used for targeted studies to test a specific hypothesis on a limited number of chemicals.


In Vitro Pharmacokinetics


The preceding methods are used to understand the pharmacodynamic (PD) effects of a chemical, namely, what effect it has on the cells it targets. Equally important is pharmacokinetics (PK), which determines how much of a parent molecule and/or its metabolites gets to the cell in the first place. Classical pharmacokinetic experiments are carried out in whole animals, in parallel with traditional whole-animal toxicity tests. But in the same way that in vitro tests are allowing rapid and inexpensive initial evaluation of chemical effects, corresponding high-throughput in vitro PK assays are becoming available.


Here, we describe two approaches that are being developed and are beginning to be applied in toxicology research. In the first case, the goal is to predict the oral dose of a chemical that would produce an effect at the level of a cell, a method sometimes called reverse toxicokinetics (RTK) or reverse dosimetry (Jamei et al., 2009a, b). In vitro HTS data can tell us what concentration at the cell is required to activate a pathway or inhibit an enzyme. The goal is then to predict what dose a human or animal would have to consume in order to reach an internal concentration equal to this activating value. If one makes some simplifying assumptions (including steady-state exposure, 100% oral bioavailability, 100% renal excretion, and that target-site concentration equals plasma concentration), there are two main experimental values one needs to measure. These are the intrinsic clearance rate (the rate at which the liver metabolizes the parent compound, which can be measured in primary hepatocytes for the species of interest, including humans) and plasma protein binding (which can again be measured in species-specific plasma). The rate of renal excretion is approximated as fraction unbound times the normal adult glomerular filtration rate (Rule et al., 2004). For each chemical of interest, one needs an analytical chemistry protocol to measure the concentration of the parent in the medium being studied using liquid chromatography/mass spectrometry (LC/MS) or gas chromatography/mass spectrometry (GC/MS). These measurements also need to be run at two or more concentrations to make sure that the system is not saturated. Finally, these experimental values can be used to parameterize a PK model, which in turn predicts a dose-to-concentration scaling factor, which is the concentration at steady state/dose rate (Css/DR). One then arrives at a prediction of the dose required to activate a pathway by dividing the IC50 (or similar quantity, in μM) by Css/DR. More complicated physiologically based pharmacokinetic (PBPK) models (Clewell, 1995; Krewski et al., 1994) could be used, but these typically require many more chemical-specific parameters, which are difficult to generate in a high-throughput manner. Without resorting to a full PBPK model, one can get some tissue-specific activating dose values by using QSAR models to predict certain partition coefficients, for instance, that for the blood–brain barrier (Schrodinger, 2010). The assumption of 100% oral bioavailability can also be relaxed and replaced with predictions for oral absorption from QSAR models (Schrodinger, 2010). Similar models can be built for other types of exposure (e.g., dermal or inhalation). Another possible extension is to use the PK model to predict peak exposure levels, for instance, to model the response to acute doses.
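
A minimal sketch of this dose-to-concentration calculation is given below, under the simplifying assumptions listed above plus a well-stirred liver model for hepatic clearance. Every physiological constant and chemical parameter in the example is an illustrative assumption, not a value from Rotroff et al. or any other study.

```python
# Minimal reverse-dosimetry (RTK) sketch: convert an in vitro AC50 into an
# oral equivalent dose via Css/DR. All constants below are illustrative.

BODY_WEIGHT_KG  = 70.0      # assumed adult body weight
GFR_L_PER_H     = 6.7       # assumed glomerular filtration rate
LIVER_FLOW_L_H  = 90.0      # assumed hepatic blood flow
LIVER_CELLS     = 1.8e11    # assumed hepatocytes per liver (cellularity x liver mass)

def css_per_dose_rate(fub, clint_ul_min_per_1e6_cells, mw_g_mol):
    """Steady-state plasma concentration (uM) per 1 mg/kg/day oral dose."""
    # Scale intrinsic clearance from the hepatocyte assay to whole liver (L/h).
    clint_l_h = clint_ul_min_per_1e6_cells * (LIVER_CELLS / 1e6) * 60.0 / 1e6
    # Well-stirred liver model for hepatic clearance.
    cl_hepatic = LIVER_FLOW_L_H * fub * clint_l_h / (LIVER_FLOW_L_H + fub * clint_l_h)
    cl_renal = GFR_L_PER_H * fub            # renal excretion of unbound parent
    cl_total = cl_hepatic + cl_renal        # L/h
    dose_umol_per_h = 1.0 * BODY_WEIGHT_KG / mw_g_mol * 1000.0 / 24.0
    return dose_umol_per_h / cl_total       # uM per (mg/kg/day)

# Hypothetical chemical: fraction unbound 0.05, intrinsic clearance
# 10 uL/min/10^6 cells, molecular weight 215.7 g/mol, in vitro AC50 of 2 uM.
css_dr = css_per_dose_rate(fub=0.05, clint_ul_min_per_1e6_cells=10.0, mw_g_mol=215.7)
oral_equivalent = 2.0 / css_dr   # mg/kg/day needed to reach the AC50 at steady state
print(f"Css/DR = {css_dr:.2f} uM per mg/kg/day; "
      f"oral equivalent dose = {oral_equivalent:.2f} mg/kg/day")
```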


An example of this approach has been published by Rotroff et al. (2010). These authors carried out RTK analysis on a set of 40 chemicals, most of which were pesticide active ingredients. For all of these, a large collection of in vitro assay measurements was available, quantified as AC50s (the concentration at which 50% of maximal activity was seen). For each chemical, the AC50 values were transformed into corresponding steady-state oral doses (“oral equivalent values”) required to activate the assay. The results of this experiment are summarized in Figure 20.3, which shows the distribution of oral equivalent values for each of the in vitro assays. The most notable point is that the combination of PD and PK spreads the dose range of effects over many orders of magnitude from one chemical to another. At the extremes, etoxazole is expected to have in vivo biological activity at doses as low as 0.001 mg/kg/day, while no activity is expected for dichlorvos at doses below about 100 mg/kg/day. Recall that most of the assay data and all of the RTK parameters are based on human cells. One final point from Figure 20.3 is that the results of the RTK analysis can be directly compared with expected chronic exposures, in this case mainly due to pesticide residues in food. In the figure, the chronic exposure values are indicated by the green squares. From this figure, one sees that for most of these chemicals, there is a significant safety margin between the lowest oral equivalent value and the expected exposure. For chemicals with no green square, no systematic estimates of exposure are currently available, so no such evaluation of the safety margin can be made.


Figure 20.3 Results of RTK analysis for 40 chemicals. For each chemical, the plot shows the range of oral equivalent values as a box and whisker plot, with values in mg/kg/day. The expected chronic exposure for each chemical, due to pesticide residues in food, is shown by a green square. These chronic exposure values are derived for the most sensitive subpopulation (see Rotroff et al. for details).



Source: Reproduced with permission from Rotroff et al. (2010). © 2010 Oxford University Press.
