

20 The genetic revolution




Case example


Twenty-three-year-old Ms. Alice Kemper seeks treatment in the ED of University Hospital for abdominal pain and bloating that has gotten progressively worse over the past week. Ms. Kemper reports that she had abnormal liver function tests during an uncomplicated pregnancy one year previously and that several members of her family have had cirrhosis or hepatitis. This information, along with her physical examination and the results of initial blood tests and urinalysis, suggests severe liver disease, and she is admitted to the hospital for further diagnostic testing. Based on a liver biopsy, eye examination, and genetic testing, Ms. Kemper is diagnosed with an advanced case of Wilson disease, a rare inherited disease in which excess copper accumulates in the liver, brain, eyes, and other organs. Because her liver damage is severe, Ms. Kemper is evaluated for a liver transplant. She meets the criteria for transplantation and is placed on the hospital transplant program’s waiting list for a liver transplant.


While Ms. Kemper is waiting for a transplant liver to become available, she is referred to Mr. Quinn, a genetic counselor, to give her more information about her condition. Mr. Quinn explains to her that Wilson disease is caused by multiple mutations in a gene that enables production of a protein that transports copper within the body. Without this protein, excess copper accumulates in and damages multiple organs. Because Wilson disease is an autosomal recessive disorder, patients with this condition have inherited gene mutations from both of their parents. If both parents are carriers of this gene, each of their children has a one-in-four chance of inheriting the mutations from both parents and developing the disease. Mr. Quinn explains that Ms. Kemper’s siblings are also at risk for this life-threatening genetic disease. Ms. Kemper informs him that she has two younger siblings, a 16-year-old brother and a 14-year-old sister, but that she does not want to inform them about her condition or communicate with them in any way. She explains that she was abandoned by her parents and her family years ago, and she refuses to have anything to do with them. She does, however, ask whether her 1-year-old daughter can be tested for the genetic markers for Wilson disease, so that she can be on the lookout for signs of the disease. Mr. Quinn replies that early signs of the disease do not generally appear until at least 5 years of age, and that it is very unlikely that her daughter has the condition, since transmission of the disease requires that both parents are carriers of the mutated genes. Ms. Kemper responds that her child’s father has moved away from the area and they are no longer in touch. She adds that, even though she understands that the chance that her child has the condition is small, she wants that information. How should Mr. Quinn and the medical team caring for Ms. Kemper proceed?
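The one-in-four recurrence risk Mr. Quinn describes follows directly from Mendelian probability for an autosomal recessive condition: each carrier parent passes one of two alleles at random, and a child is affected only when both transmitted alleles are mutated. As an illustrative sketch only (the function name and allele labels are hypothetical, not part of the case), the Punnett-square arithmetic can be enumerated as follows, with 'A' standing for a normal allele and 'a' for a mutated allele of the copper-transport gene (ATP7B in Wilson disease):

```python
from itertools import product

def offspring_genotype_probs(parent1, parent2):
    """Enumerate the equally likely allele combinations (a Punnett
    square) for two parents, each contributing one of two alleles."""
    counts = {}
    for a1, a2 in product(parent1, parent2):
        genotype = "".join(sorted(a1 + a2))  # 'aA' and 'Aa' are the same genotype
        counts[genotype] = counts.get(genotype, 0) + 1
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items()}

# Two carrier (Aa) parents, as in the autosomal recessive pattern
# Mr. Quinn explains:
probs = offspring_genotype_probs("Aa", "Aa")
# 'aa' (affected) has probability 1/4, 'Aa' (carrier) 1/2,
# and 'AA' (unaffected non-carrier) 1/4.
```

The same enumeration also shows why Ms. Kemper's daughter is unlikely to be affected: unless the child's father happens to be a carrier, no combination yields two mutated alleles, although every child of an affected parent will be at least a carrier.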



Promise and peril: the checkered history of human genetics


In 2003, scientists announced the successful completion of the Human Genome Project, the high-profile, fourteen-year-long, multi-billion-dollar international research initiative to produce a comprehensive map and chemical sequencing of the entire human genome.1 Proponents hailed this event as a monumental accomplishment in biomedical research, the achievement of what one researcher called “the Holy Grail” of human biology.2 The mapping and sequencing data of the Human Genome Project enabled a rapid acceleration in the pace of discovery of genes responsible for human diseases, from fewer than 200 disease-associated genes discovered in 1990 to more than 1800 such genes discovered in 2005.3 Francis Collins, who directed the US National Human Genome Research Institute for fifteen years, described the expected medical benefits of contemporary genomic science in these glowing terms:



We are on the leading edge of a true revolution in medicine, one that promises to transform the traditional “one-size-fits-all” approach into a much more powerful strategy that considers each individual as unique and as having special characteristics that should guide an approach to staying healthy.4


‘Personalized genomic medicine’ (PGM) and ‘precision medicine’ are terms widely used to refer to this “revolutionary” and “much more powerful” strategy for health care. PGM advocates envision the feasibility and routine use of detailed individual genomic profiles for large patient populations. This information, they maintain, will provide significant benefits in four major areas: prediction, prevention, personalization, and participation. First, PGM will enable prediction of future health risks for individual patients by identifying genes that are associated with various health conditions. Second, PGM’s early identification of these risks will enable professionals and patients to employ targeted preventive measures to minimize those specific health risks. Third, PGM will enable professionals to personalize their treatment of each individual patient. Physicians can, for example, focus their attention on the patient’s most significant health risks, and they can tailor drug treatments for the patient’s existing conditions based on “pharmacogenomic” information about specific gene variants that increase or decrease the effectiveness of specific medications. Finally, by giving patients important information about their own individual genetic health risks, PGM will enable patients to understand their individual health conditions and risks more fully, to participate more actively in health care decisions, and to take a more active role in health-promoting behaviors.5


PGM thus offers the prospect of significant benefits to patients, but it is surely not an unmixed blessing. Even very strong proponents of PGM like Collins acknowledge that genetic information can pose complex moral problems and can cause significant harm as well as benefit.6 Recognition of the potential risks of new genetic information prompted the architects of the Human Genome Project, from the inception of the Project, to commit a portion of its annual budget (initially 3 percent, and later 5 percent) to study of its ethical, legal, and social implications.7 This concern about the broader implications of genetic science was doubtlessly inspired in part by the history of grave abuses committed in the name of genetics.


The modern science of genetics emerged at the turn of the twentieth century with the rediscovery of the plant hybridization experiments of Gregor Mendel. Mendel’s theory of inheritance of genetic traits soon found application to human beings in British physician Archibald Garrod’s explanation of patterns of human inheritance of “inborn errors of metabolism” in 1908.8 The early history of human genetic science was closely linked, in both America and Europe, with the eugenics movement, a social campaign dedicated to improvement of the human race by education and policies designed to guide and control human reproduction.9 In addition to the new science of genetics, the eugenics movement of this era was heavily influenced by Darwin’s theory of evolution by natural selection and by racist anthropologies that emphasized racial differences and posited a hierarchical order in the “fitness” of the several races of human beings. Based in part on genetic theories explaining the inheritance of inborn traits from one’s parents, eugenicists sought to encourage the reproduction of those considered to have desirable traits (that is, “superior” races and upper social classes) and to discourage or prohibit the reproduction of the “unfit” (“inferior” races and the poor).


Major policy initiatives of the eugenics movement in the United States during the first half of the twentieth century included laws restricting immigration by southern and eastern Europeans and by Asians, laws prohibiting interracial marriage, and laws creating state programs for the involuntary sterilization of persons with a variety of conditions thought to be inherited, including “feeble-mindedness,” mental illness, and epilepsy.10 Proclaiming that their policies were “nothing but applied biology,” the leaders of the Nazi regime in Germany, with the active assistance of German physicians and medical scientists, enacted a systematic program of “racial hygiene,” including prohibition of marriage between Jews and “Aryans,” involuntary sterilization and eventual extermination of institutionalized patients, and persecution and later genocide of Jews, gypsies, and other “inferior races.”11


Post-war revelation of the grim details and the massive scope of these Nazi programs evoked worldwide moral revulsion and condemnation of the practice of eugenics (though state-authorized involuntary sterilization programs continued in a number of US states into the 1970s).12 The grave harm inflicted by these programs on millions of victims cast strong and enduring suspicion on any perceived use of genetic information for eugenic purposes. The US state practices of involuntary sterilization of patients with conditions believed to be hereditary were eventually prohibited, and new laws were enacted to protect reproductive freedom.


Despite the clear potential for abuse of genetic information, other post-World War II events focused attention on the potential value of human genetics. The use of nuclear weapons on Japan, the post-war acceleration of the nuclear arms race, and the rise of the atomic energy industry all called increasing attention to the need for protection from the genetic hazards of exposure to radiation.13 In 1953, James Watson and Francis Crick announced their discovery of the double helix structure of deoxyribonucleic acid (DNA), paving the way for understanding of the biochemical mechanisms of the genetic code.14 Subsequent discovery of the genes responsible for specific diseases enabled the development of tests for conditions like phenylketonuria (PKU) and Down syndrome. These developments had mixed consequences, however. Diagnosis enabled dietary treatment for PKU, but strongly negative attitudes about people with Down syndrome complicated the lives of those people and their families.15 The decision to allow “Baby Doe,” a newborn infant with Down syndrome, to die by forgoing life-saving surgery provoked a major ethics and public policy debate in the US in the 1980s and resulted in regulations designed to protect infants from denial of life-prolonging treatment.16


As this brief summary indicates, the history of modern human genetics is marked by both great promise and great peril. It is, therefore, no surprise that the Human Genome Project, a massive international research initiative with apparent potential to revolutionize health care, devoted sustained attention to the ethical, legal, and social implications of these revolutionary changes. This chapter will identify and examine major ethical questions prompted by the increasing role of genetics in health care.



Multiple issues, multiple players


The above outline of the history of human genetics indicates that genetic information poses a variety of ethical and social questions in several different domains, including biomedical research, health and social policy, and clinical medical care. This section will examine major issues in each of these three domains.



Biomedical research


Clinical genetics investigators assume all of the basic professional responsibilities for protection of human research subjects described in Chapter 19, “Research on human subjects,” including informing prospective subjects of the purpose and any expected benefits and risks of harm of genetic studies, obtaining their voluntary consent to participation in a research study, and protecting them from research-related harms. In addition to these basic responsibilities, genetic researchers also confront ethical issues that are distinctive to this type of research. Two issues, in particular, have been the subject of sustained attention and debate: the disclosure to subjects of “incidental findings” of genetics research, and the use of social categories in population-based genetics research.


Disclosing incidental findings. In both clinical and research contexts, professional caregivers and investigators may acquire specific information about the medical condition of a patient or subject that is not related to the health care sought by the patient or the research question under investigation. The likelihood of identifying potentially significant incidental information is especially high in genetic research that involves sequencing part or all of the genome of study subjects, and so multiple commentators, including the US Presidential Commission for the Study of Bioethical Issues, have addressed this topic and offered recommendations.17 A range of responses to this situation are possible. Investigators, for example, may argue that their purpose is to produce generalizable knowledge, not to provide personal health care, and so they have no responsibility to inform subjects about incidental findings. Other commentators maintain that investigators’ responsibilities should extend to disclosing incidental findings to their subjects if those findings are “clinically useful” or “medically actionable,” that is, if that information would enable subjects to take specific measures (adopt health behaviors or receive treatments) to prevent or treat a significant health condition. 
These commentators propose several different criteria for what should count as medically actionable information that investigators must disclose.18 Some patients and patient advocates, commonly called “citizen scientists,” assert that they should receive all information about themselves, whether or not it has obvious clinical significance at the present time; investigators respond that this would impose too heavy a burden on them and would inhibit valuable research.19 In light of the increasing ability to sequence whole genomes accurately and inexpensively, scholars have begun to consider whether genetic testing in both research and clinical contexts should look for all known genetic variants that are medically actionable and report any variants discovered.20 If this requirement were implemented, finding medically significant individual genetic variants would no longer be an “incidental” occurrence, but rather a routine part of any genetic workup. This approach would presumably appeal to patients who want complete information about their genetic risks, but would not be welcomed by patients who prefer not to have that information. Later in this chapter, I will suggest that the value of whole genome sequencing in the clinical setting is still very limited.


Population-based comparative research. With the completion of the Human Genome Project’s goals of mapping and sequencing the complete human genome, the attention of genetic investigators turned to comparing human genomes to understand their similarities and differences. These comparisons are essential for identifying the thousands of medically significant gene variants that will enable the genomic medicine of the future to personalize health care based on individual genetic profiles. This comparative research involves collecting tissue samples from members of different human populations, genotyping those samples, linking specific gene variants with medical conditions affecting those populations, and comparing the results. Obvious questions for this research are, “Which human groups should be chosen for comparison?” and “How will members of these groups be identified?” Groups and individuals chosen for comparative genetic studies might benefit from identification of significant genetic variants they have in common, enabling earlier and more accurate diagnosis, prevention, and treatment of gene-associated diseases. Research comparing African-American and European-American patients, for example, has identified gene variants that are associated with significantly different rates of kidney disease and prostate cancer in these populations.21


Genetic research comparing racial, ethnic, national, or tribal groups also poses significant risks of harm to those groups, however. Genetic research indicates that human beings are a relatively young and genetically homogeneous species, and so the DNA sequences of any two humans are about 99.5 percent identical, regardless of racial, ethnic, or other differences.22 Research that focuses on genetic differences between groups, therefore, may reinforce and exaggerate the significance of groupings that are biologically and socially questionable. Identification of differences in disease susceptibility might encourage racial or ethnic stereotyping and discrimination, and might overshadow the major environmental and social factors, including poverty and racism, that contribute to health disparities. Eric Juengst poses the ethical challenge for comparative genomic research and practice in these terms: “How can we preserve our commitment to human moral equality in the face of our growing understanding of human biological diversity?”23



Health policy


As noted in the historical outline above, the United States, Germany, and many other nations implemented a variety of public eugenic policies and programs during the first half of the twentieth century, based in part on the genetic science of that era, with morally repugnant consequences. Despite that sordid history, and sometimes in reaction to it, legislators and government officials have also played central roles in promoting and guiding the “reformed” human genetics of the post-World War II era. In the United States, for example, National Institutes of Health officials sought, and Congress appropriated, several billion dollars to fund the major US contribution to the Human Genome Project.24 With the discovery of specific genes responsible for multiple diseases and the development of screening tests for those diseases, state public health officials have recommended and obtained public funding to implement screening programs, including mandatory newborn screening for multiple genetic conditions.25 In 2008, Congress enacted the Genetic Information Nondiscrimination Act (GINA), a federal statute designed to protect individuals from discrimination based on their genetic information in health insurance and in employment.26 I will examine in more detail just one of the complex policy questions posed by genetics, namely, the proper scope of routine newborn genetic screening.


Newborn genetic screening. For more than half a century, routine genetic screening of newborn infants has been a standard practice in the United States. This practice began in the 1960s, following the development of a simple and inexpensive test using dried blood spots for PKU. PKU is caused by an inherited gene defect that prevents the body from metabolizing the amino acid phenylalanine. If PKU is not detected in infancy, excess phenylalanine accumulation causes severe intellectual disability, seizures, and other disorders. Early detection enables treatment with a special diet of phenylalanine-free foods that can prevent or greatly reduce the harmful consequences of PKU. Although PKU is a rare condition (it affects about one in 10,000 infants), the great benefit to affected infants of early detection and effective dietary treatment was widely viewed as sufficient justification for state-based programs of mandatory screening of all newborns, and so PKU screening programs became an early “success story” of medical genetics.


In subsequent years, states expanded their newborn screening programs, as new conditions were identified and new tests were developed. State public health officers based decisions to add screening tests in part on advocacy campaigns by parents of children with particular disorders.27 Screening for each additional disorder added considerable costs, not only for the screening test, but also for confirmatory testing, counseling of parents with affected infants, and treatment for the condition. In order to justify these costs, and the testing of newborns without the informed consent of their parents, public health scholars argued that screening tests must provide substantial benefits to infants by identifying serious conditions for which prompt treatment is both essential and effective. By the mid 1990s, some states screened for more than thirty genetic disorders, and others for fewer than five.28 During that decade, a new screening technology, tandem mass spectrometry, became widely available; this technology enabled “multiplex” testing for many genetic conditions at the same time.29 A working group of the American College of Medical Genetics (ACMG) received federal funding in 2002 to evaluate candidate genetic conditions and recommend a uniform panel of conditions for adoption by all state newborn screening programs. Relying heavily on the capabilities of mass spectrometry, the ACMG working group recommended that states screen newborns for twenty-nine “primary” disorders and twenty-five “secondary” disorders; the secondary disorders would be detected incidentally while screening for the primary disorders.30 To justify expansion of state mandatory newborn screening programs to include all of these disorders, the ACMG report offered a new and broader interpretation of the benefits of newborn screening. 
In addition to the traditional criterion of direct and significant benefit to children with these conditions, the report included consideration of the benefits of screening to families and to society. Families could benefit, for example, by receiving information that would be significant for future reproductive decisions and that would enable them to avoid a “diagnostic odyssey,” a long process of evaluation and of therapeutic trials for a sick child before successful diagnosis of the child’s rare genetic condition. Society could benefit from early identification of infants with rare and poorly understood conditions for which no treatments exist. If a sufficient number of infants with these rare conditions were identified and enrolled in research studies, investigators could understand the conditions better and develop effective treatments for them.


The ACMG-recommended uniform panel of newborn screening disorders was endorsed by multiple organizations, including an advisory committee of the US Department of Health and Human Services, the American Academy of Pediatrics, and the March of Dimes.31 Ross et al. reported in 2013 that all fifty states had adopted the ACMG panel.32 This result might suggest that there was little or no opposition to expansion of state newborn screening programs as recommended by the ACMG. In fact, however, several other prominent organizations, including the United States Preventive Services Task Force and the President’s Council on Bioethics, and a number of individual commentators, voiced strong criticism of the ACMG newborn screening working group’s study process, evaluation criteria, and screening recommendations.33 These critics argued that the ACMG group placed too much emphasis on the technological capabilities of tandem mass spectrometry and on the opinions of disease and screening specialists and of lay screening advocates, and that it did not consult experts in the systematic review of evidence and in public health.34 As a result, they argued, the evidence cited for the benefit of screening for many of the recommended conditions is weak, and the significant potential for harms of screening to children and families, including unnecessary worry, misdiagnosis, labeling, and discrimination, was neglected. The additional costs of expanded screening, including follow-up testing, education, family counseling, and treatment, critics claimed, would divert scarce resources from other worthy public health programs.35 The President’s Council on Bioethics concluded that mandatory newborn screening should be limited to a small number of conditions that meet the traditional criteria of providing substantial medical benefit to affected children.
For conditions that do not yet meet this strict criterion, the Council recommended that screening be provided via research studies that obtain the informed consent of parents, until the clear benefit to children of screening for those conditions has been demonstrated.36


The development and decreasing cost of new technologies for whole genome sequencing may soon enable an exponential increase in the number of genetic conditions identified in newborns, if state screening programs choose to adopt and employ those technologies.37 This prospect will likely intensify the ongoing debate about the proper scope of screening programs and about whether the screening of newborn infants for a variety of different genetic conditions should be mandatory, voluntary, or even prohibited.
