4
OMICS TECHNOLOGIES IN TOXICOLOGY


Mary Jane Cunningham


Omics technologies are methods that allow global screening of all macromolecules within cells. These macromolecules include DNA, RNA, proteins, and cellular metabolites. The activities of these macromolecules affect not only the cell but also every level of the biological system, from cell to tissue to organ to organism to population. “Omics” is the shortened term used to describe all of the technology areas, such as genomics, proteomics, metabolomics, and pharmacogenomics; it is derived from the shared suffix of these terms. Each of these technology areas is described in this chapter, which explains:



  • The origin of omics technologies
  • Approaches and applications of genomics
  • Approaches and applications of proteomics
  • Approaches and applications of metabolomics
  • Approaches and applications of pharmacogenomics
  • Approaches and applications of systems biology

4.1 INTRODUCTION TO OMICS


Omics technologies have a wide range of applications. They provide scientific knowledge for better design of medical therapies by enabling substances to be screened for how well they work (efficacy) and for their potential toxic insult (adverse effects). The hope is that omics will allow scientists to examine the inner workings of the cell at a glance and use this knowledge to predict interactions.


Advances in science, particularly in medicinal and organic chemistry, have led in recent years to an abundance of new compounds for use in areas as diverse as disease diagnostics, medical therapies, chemical applications, and environmental remediation. These newly synthesized molecules undergo rigorous testing according to guidelines established and monitored by regulatory agencies; in the United States, the two most prominent are the Food and Drug Administration (FDA) and the Environmental Protection Agency (EPA). Each compound must pass a series of tests before it is approved for the marketplace. The bottleneck of producing thousands to millions of these compounds has been addressed with high-throughput synthesis methods. However, testing these compounds in relevant in vivo and in vitro systems remains a hurdle. Omics technologies were developed to address this hurdle.


Another term that has surfaced is toxicogenomics, defined as the use of omics technologies to investigate toxicity. Toxicogenomics is a specialized branch of toxicology that uses these global screening methods to investigate how compounds act in biological systems: determining their mechanism(s) of action, validating the interaction with the target molecule, detailing why they may cause adverse effects, and providing clues as to how a compound can be redesigned into a less toxic form. The information gained helps predict whether the compound will ultimately pass all safety requirements, especially before it enters clinical trials with human subjects.


How Omics Came to Be


A large driver for the establishment and development of omics technologies was the field of molecular biology. In the 1960s and 1970s, molecular biology underwent huge growth as a scientific discipline. Two avenues of technological advancement spearheaded this growth: (i) the development of advanced blotting techniques and (ii) the development of DNA sequencing methods. Southern, Northern, and Western blots were introduced as mainstream techniques during this period. Edward Southern developed the Southern blot to isolate and visualize individual DNA molecules. DNA fragments were separated in an agarose gel and visualized with intercalating dyes. The blot itself was produced by placing a filter in contact with the gel so that the fragments were drawn out onto the filter, where labeled probes could be hybridized to them, giving a more quantitative and permanent record. Shortly thereafter, blots were developed to visualize and quantify RNA (Northern blot) and protein (Western blot) molecules. While scientists were excited to finally visualize these macromolecules, these methods had limitations: they were time-consuming to perform, and only a small number of molecules could be isolated and visualized at a time.


DNA sequencing, originally developed as a chemical cleavage method by Maxam and Gilbert, allowed scientists to identify the exact sequence of DNA fragments. For each fragment, the particular order of DNA bases (i.e., guanine, adenine, cytosine, and thymine) could be determined. Significant advances were later introduced through the use of chain-terminating dyes and detection with nonradioactive labels, making DNA sequencing easier and more efficient without the hassle of radioactivity.


In 1990, a national and international effort, the Human Genome Project, was started with the objective of determining the entire sequence of human DNA. The first draft human DNA sequence was published in 2001, and the first phase of the Project was completed in 2003. The strategy involved sequencing short stretches of DNA derived from expressed genes, approximately 50–800 base pairs in length, known as expressed sequence tags (ESTs). ESTs were sequenced and compiled into a public database, GenBank. The sequences could be compared by mathematical methods, such as the Basic Local Alignment Search Tool (BLAST), which allowed them to be aligned, overlapped, and assembled into a complete DNA sequence matching each gene. Information on ESTs formed the knowledge base for the whole human genome.
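The overlap-and-assemble idea behind compiling ESTs into longer gene sequences can be conveyed with a short sketch. The following Python example is a minimal illustration only, not the BLAST algorithm itself; the sequences and the minimum-overlap threshold are invented for demonstration.

# Minimal sketch of EST overlap assembly; illustrative only, not BLAST.
# The sequences and the minimum-overlap threshold are hypothetical.

def longest_overlap(left: str, right: str, min_overlap: int = 10) -> int:
    """Length of the longest suffix of `left` that matches a prefix of `right`."""
    for length in range(min(len(left), len(right)), min_overlap - 1, -1):
        if left[-length:] == right[:length]:
            return length
    return 0

def merge_ests(left: str, right: str, min_overlap: int = 10) -> str:
    """Merge two EST reads into one contig if they overlap sufficiently."""
    overlap = longest_overlap(left, right, min_overlap)
    return left + right[overlap:] if overlap else ""

est_a = "ATGGCGTACGTTAGCCTAGGCTAACGT"   # hypothetical EST read
est_b = "CTAGGCTAACGTTTAGGACCATGGCA"    # overlaps the end of est_a
print(merge_ests(est_a, est_b, min_overlap=8) or "no sufficient overlap found")

In practice, tools such as BLAST score partial matches and tolerate mismatches; the exact-match merging above is meant only to convey how short, overlapping fragments can be built up into a longer contiguous sequence.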


4.2 GENOMICS


The first technology area to develop was genomics. Genomics is the study of gene expression by globally screening the activity of RNA molecules. This definition, in its broadest sense, encompasses the study of messenger RNA (mRNA) expression as well as microRNA (miRNA) expression. The term covers a variety of methods, such as polymerase chain reaction (PCR)–based assays, Northern blots, representational difference analysis (RDA), differential display (DD), rapid analysis of gene expression (RAGE), serial analysis of gene expression (SAGE), and macro- and microarrays. In its narrowest sense, it refers only to techniques that are multiplexed and screen globally. Arrays became the prototype and are widely used and accepted; they have yielded the most quantitative and qualitative information. As with the development of all new technology fields, semantics has played a huge role. In the early literature, another term, transcriptomics, was used to refer to mRNA expression profiling but is less favored today.


To find better ways to sequence DNA, and thereby study the DNA and RNA interactions of the entire cell, a technology trend developed: sequencing by hybridization (SBH). SBH gave rise to the development of global genomics screening techniques. The first SBH investigation attached molecules corresponding to ESTs with known sequences to a solid substrate. These molecules are referred to as oligonucleotides (or oligomers, or oligos for short). They are short sequences of bases corresponding to an EST; an oligo of 30 bases is usually referred to as a “30-mer.” The most commonly used substrate was a nitrocellulose filter. mRNA from cells or tissues was isolated and reverse-transcribed to complementary DNA (cDNA). These cDNA molecules were then labeled with detector molecules, such as 32P. The labeled cDNA hybridized to the oligos on the filter. Hybridization occurs when a macromolecular fragment aligns with and attaches to another macromolecular fragment of complementary sequence: the bases of the two molecules line up and pair through noncovalent bonds. Where this bonding occurs, the label is retained and the overall signal can be quantified. In this example, detection of a radioactive signal indicated an active gene. Comparing control spots (hybridized with samples from normal tissue) to treated spots (hybridized with samples from diseased tissue) gave a ratio of expression. If the signal for the treated spot was higher than that of the control spot, the activity was indicative of upregulation. If the signal for the control spot was higher than that of the treated spot, the activity was indicative of downregulation.
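The up- or downregulation call described above reduces to a simple ratio of spot signals. The short Python sketch below, which uses made-up intensity values and an arbitrary twofold cutoff, shows how such a call might be made for a single gene.

import math

# Hypothetical background-corrected spot intensities for one gene (arbitrary units).
control_signal = 350.0   # signal from the normal (control) sample
treated_signal = 1400.0  # signal from the diseased (treated) sample

def classify_spot(control: float, treated: float, fold_cutoff: float = 2.0) -> str:
    """Classify a gene as up- or downregulated from the treated/control signal ratio."""
    ratio = treated / control
    if ratio >= fold_cutoff:
        return "upregulated"
    if ratio <= 1.0 / fold_cutoff:
        return "downregulated"
    return "unchanged"

ratio = treated_signal / control_signal
print(f"ratio = {ratio:.2f}, log2 ratio = {math.log2(ratio):.2f}, "
      f"call = {classify_spot(control_signal, treated_signal)}")

The twofold cutoff is purely illustrative; published studies choose thresholds and statistical tests appropriate to their experimental design.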


Augenlicht and coworkers were the first to publish findings using these “macroarrays.” Augenlicht’s laboratory isolated mRNA from a human colon carcinoma cell line, HT-29. The RNA was used to create a cDNA library, which was replicated onto a nitrocellulose filter in a grid format. Radiolabeled cDNA molecules were made from biopsies of patients at varying degrees of risk for colon cancer, and these molecules were hybridized to the filters. The extent of radioactivity was scanned and analyzed, enabling the activity of a wide range of genes to be detected and quantified. Two percent of the genes were either upregulated or downregulated in patients with familial adenomatous polyposis (FAP) compared to patients with low risk of colon cancer. The surprising result, however, was that 20% of the genes were upregulated in patients with FAP whose cells had not yet accumulated into adenomas. This result suggested that gene expression changes correlated with very early stages of the cancer. A diagram depicting a typical macroarray is shown in Figure 4.1.


Figure 4.1 A diagram of a macroarray similar to that used by Augenlicht et al. The nitrocellulose filter is gridded for placement of the bacterial plasmids. Dark spots depict hybridized spotted plasmids that have increased radioactive signal.


Macroarrays had limitations, however. They relied on a radioactive output. Using radioactivity meant that scientists had to adhere to strict guidelines for use and disposal, monitor all work stations, samples, and personnel, keep accurate records, and, in the experimental process, wait for the X-ray films to develop. Macroarrays also used nitrocellulose membranes, which deteriorate with age and have a short half-life. In addition, they required the use of X-ray films, which needed to be read, stored, and later disposed of safely.


To address these issues, miniaturized versions of macroarrays, “microarrays,” were developed. These arrays used either silicon wafers or glass microscope slides as the solid substrate and nonradioactive labels. The microarray process involved three very basic steps: (i) printing of the microarrays, (ii) preparation of labeled molecules from cellular samples, and (iii) hybridization of the labeled molecules to the printed molecules followed by the detection and analysis of the resulting signals. These steps are depicted in Figure 4.2. The printing of the arrays and the preparation of the labeled molecules can be done concurrently. The third step of hybridization, detection, and data analysis brings the first two steps together.


Figure 4.2 Microarray processing steps. Preparation of target molecules and the printing of the microarrays can occur concurrently. These steps come together at the hybridization step.


Change in Semantics for Microarrays


Before going any further, it is important to note that a change in semantics occurred as the history of microarrays developed. The most widely used array formats were cDNA arrays and oligonucleotide arrays. Early SBH papers (including the citation from Augenlicht’s laboratory and most of the early papers detailing work with cDNA arrays) refer to “targets” as those molecules printed onto the solid surface and “probes” as those molecules that were reverse-transcribed from RNA of the tissue samples. In the case of oligonucleotide arrays, these terms were reversed in meaning, and it is these definitions that have been adopted overall and are in use today. For clarification, the molecules printed on the solid array surface are “probes” and the molecules derived from the tissue samples, reverse-transcribed, labeled, and hybridized to the arrays are “targets.”


Printing of Microarrays


As mentioned earlier, the microarrays are printed onto a solid substrate. This array printing must be done with very tight quality control measures in place and is normally performed in a clean room environment. The presence of dust or lint will interfere with the printing process. Manufacturers of arrays must make several decisions regarding what to use for (i) the solid substrate, (ii) the modification of the substrate, (iii) the type of oligo to print, (iv) modification of the oligo, and (v) the type of printing method.


Several varieties of solid substrates have been used over the years. However, the most popular are glass slides and silicon wafers. The use of silicon wafers became popular by way of the semiconductor industry. The solid surfaces must be cleaned prior to printing and this is usually done with ultrasonication or washing with acid or alkali.


The solid surface needs to be modified to ensure that the oligos attach well. One simple method is to coat the surface with polylysine derivatives. Another method involves a multistep procedure with polyethylenimine. Probably the most common protocol uses silanization. In recent years, microporous polymers have been coated onto the surface so that the oligos are fixed within the gel-like substance in an orientation that optimizes their hybridization.


The next choice is what type of molecule to print. As in the case study discussed, Augenlicht’s laboratory used whole bacterial plasmids containing the cDNA insert. Oligos used today vary in length: some manufacturers use long oligos (40-mers or longer) and others use short ones (25-mers or less). The overall sequence similarity and the guanine and cytosine (GC) content are important factors to consider when making this decision. It has been reported that long oligos may be optimal in sensitivity and specificity. The oligos must be checked for quality prior to printing, and one way to check is agarose gel electrophoresis. Any product that shows more than one band, or shows no product at all, is not printed and can be substituted with a “cleaner” alternative product corresponding to the same gene.
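Because GC content is one of the factors weighed when choosing probes, a quick calculation is often all that is needed. The Python sketch below computes GC content for a hypothetical 25-mer and adds a rough Wallace-rule melting-temperature estimate (2 °C per A or T plus 4 °C per G or C), which is only a crude guide for short oligos; real probe design also considers sequence similarity to other genes and secondary structure.

# Quick probe checks: GC content and a rough Wallace-rule Tm estimate.
# The example 25-mer is hypothetical.

def gc_content(oligo: str) -> float:
    """Fraction of guanine and cytosine bases in the oligo."""
    oligo = oligo.upper()
    return (oligo.count("G") + oligo.count("C")) / len(oligo)

def wallace_tm(oligo: str) -> float:
    """Approximate melting temperature (deg C): 2*(A+T) + 4*(G+C)."""
    oligo = oligo.upper()
    at = oligo.count("A") + oligo.count("T")
    gc = oligo.count("G") + oligo.count("C")
    return 2 * at + 4 * gc

probe = "ATGCGTACCGTTAGCATGGCTTACG"  # hypothetical 25-mer probe
print(f"length = {len(probe)}, GC = {gc_content(probe):.0%}, "
      f"approximate Tm = {wallace_tm(probe):.0f} C")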


Another decision is whether to modify the printed molecule. Some investigators feel that further modification may help with array stability. Modifications used have included the addition of an amino group at either the 3′- or 5′-end, or the use of thiol, disulfide, or benzaldehyde groups.


Finally, it must be decided how to perform the printing of the arrays. Printing methods are divided into contact and noncontact methods. Contact methods use microspotting devices to deliver the oligo solution to the printing surface. These devices deliver the solution by capillary action, and the tips are either disposed of or washed thoroughly before the next printing. Initial devices were hand-built using an XYZ-axis gantry robot, with the macromolecular solutions being delivered through stainless-steel printing pins. Various pin heads are now available in materials that are easier to clean, such as ceramic and tungsten. Noncontact methods disperse the oligo solutions onto the array surface without the dispensing tool touching the surface. Many of these devices use piezoelectric technology, in which an electrical current controls droplet formation to precise volumes. Once the oligos are printed, the surface is treated to ensure a tight covalent bond and to lessen nonspecific binding during hybridization. Common procedures crosslink the oligomers to the surface with ultraviolet (UV) light, bake the slides in a vacuum oven, or treat them with succinic anhydride or a mixture of succinic anhydride and acetic anhydride.


Once the oligos are printed on the arrays, it is necessary to obtain an accurate quantification and to assess their quality. A simple tool is microscopic examination of the slides with food coloring. Another is to pull arrays by random sampling and hybridize them with a labeled PCR primer (usually one used in making the oligos); keep in mind that this hybridization will destroy the active sites on the probes. Nondestructive means have successfully employed dyes such as Cy3-dCTP or SYBR Green.


Preparing Target Molecules


Target molecules represent the molecules in the cell or tissue under study. The first step is to isolate RNA from the cells or tissues using a buffer containing guanidine isothiocyanate or a derivative. For tissues, this step is usually performed with homogenization. As a note of caution, tissues can vary widely in their characteristics, and modifications to the standard protocol may be necessary if the tissue is, for example, fibrous or fatty. The cell–buffer solution is then extracted with phenol/chloroform, and total RNA is purified using specialized centrifuge columns, which allow binding of the RNA. The RNA is then eluted and quantified. At this point, a sample may be run on an agarose gel or added to a microchip and analyzed with a device that gives a quantity and quality readout. One such device is the Bioanalyzer manufactured by Agilent Technologies.
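Where a simple spectrophotometer is used instead of (or alongside) a microfluidic analyzer, the quantity and purity checks reduce to arithmetic: an A260 of 1.0 corresponds to roughly 40 µg/mL of RNA, and an A260/A280 ratio near 2.0 suggests little protein contamination. The Python sketch below uses hypothetical absorbance readings.

# Hypothetical spectrophotometer readings for an RNA preparation.
a260 = 0.45            # absorbance at 260 nm
a280 = 0.23            # absorbance at 280 nm
dilution_factor = 50   # the aliquot was diluted 1:50 before measurement

# An A260 of 1.0 corresponds to roughly 40 ug/mL of single-stranded RNA.
concentration_ug_per_ml = a260 * 40.0 * dilution_factor
purity_ratio = a260 / a280  # values near 2.0 indicate little protein contamination

print(f"RNA concentration ~ {concentration_ug_per_ml:.0f} ug/mL")
print(f"A260/A280 = {purity_ratio:.2f}")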


The total RNA is then used as a template to synthesize a labeled cDNA molecule. The first reaction (reverse transcription) synthesizes a strand of DNA from the RNA template. The second reaction synthesizes the other strand of DNA. The finished product is a double-stranded cDNA, which is subsequently labeled with a detector molecule, such as biotin.


The preparation of the target molecules may differ depending on which type of microarray is used. Some manufacturers’ protocols instead generate labeled complementary RNA (cRNA) molecules transcribed from the cDNA, and others require the labeling to occur at a different point in the process.


Hybridization of Microarrays


An aliquot of the labeled cDNA is added to the microarray, and the hybridization process is allowed to proceed, most commonly for 16–20 h. The arrays are then washed with buffers of varying stringency to ensure that only the hybridized pairs remain on the slide and all unattached molecules are washed off. The arrays are then scanned at the wavelength of the detector molecule, and the signal is quantified. Scanning the arrays, detecting the signal, and quantifying it are collectively referred to as “image analysis.” The analysis that takes place after these steps is referred to as “data analysis.” In the literature, these two terms are often mistakenly used interchangeably.
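To make the boundary between the two concrete: image analysis ends once each spot has a foreground and a local background intensity, while data analysis begins with steps such as background subtraction and a simple global normalization before expression ratios are computed. The Python sketch below is a minimal illustration using invented spot values, not any particular vendor’s pipeline; real analyses add quality flags, more sophisticated normalization, and statistics.

import statistics

# (foreground, local background) intensities from image analysis, one tuple per spot.
# All values are invented for illustration.
control_spots = [(520, 80), (940, 75), (310, 90), (1500, 85)]
treated_spots = [(760, 70), (2100, 95), (290, 80), (3100, 90)]

def corrected(spots):
    """Subtract local background from each spot, flooring at a small positive value."""
    return [max(fg - bg, 1.0) for fg, bg in spots]

def median_scale(values):
    """Scale intensities so the array's median is 1.0 (a simple global normalization)."""
    med = statistics.median(values)
    return [v / med for v in values]

control = median_scale(corrected(control_spots))
treated = median_scale(corrected(treated_spots))

for i, (c, t) in enumerate(zip(control, treated), start=1):
    print(f"spot {i}: treated/control ratio = {t / c:.2f}")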


Data Analysis for Gene Expression Microarrays
