Fig. 12.1
Vaticanus graecus 277, 10v-11r: table of contents in a fourteenth-century Hippocratic Corpus manuscript. Marcus Fabius Calvus owned this manuscript, transcribed it in his own hand, and used it in the preparation of his 1525 Latin translation (Source: Online Vatican exhibit [30])
In 1025 AD, the Persian physician Avicenna (Abū ‘Alī al-Ḥusayn ibn ‘Abd Allāh ibn Sīnā) wrote the widely used medical treatise “The Canon of Medicine,” in which he laid down rules for the experimental use and testing of drugs. He provided a precise guide for practical experimentation in the process of discovering and proving the effectiveness of medical drugs and substances [6]. Some of these experimental recommendations were that the time of action must be observed, so that effect and accident are not confused; that the effect of the drug must be seen to occur constantly or in many cases, and if this did not happen, it must be regarded as an accidental effect; and that the experimentation must be done on the human body, for testing a drug on a lion or a horse might not prove anything about its effect on man. “The Canon of Medicine” was a very popular treatise and was used extensively in medical schools across Europe as late as 1650 [6].
12.3 Clinical Trials Begin
The earliest and best-described prospective clinical trial was conducted by James Lind and reported in 1753 in his “A Treatise of the Scurvy.” His methods and findings are described in great detail:
On the 20th May, 1747, I took twelve patients in the scurvy on board the Salisbury at sea. Their cases were as similar as I could have them. They all in general had putrid gums, the spots and lassitude, with weakness of their knees. They lay together in one place, being a proper apartment for the sick in the fore-hold; and had one diet in common to all, viz., water gruel sweetened with sugar in the morning; fresh mutton broth often times for dinner; at other times puddings, boiled biscuit with sugar etc.; and for supper barley, raisins, rice and currants, sago and wine, or the like. Two of these were ordered each a quart of cyder a day. Two others took twenty five gutts of elixir vitriol three times a day upon an empty stomach, using a gargle strongly acidulated with it for their mouths. … Two others had each two oranges and one lemon given them every day. These they eat with greediness at different times upon an empty stomach. They continued but six days under this course, having consumed the quantity that could be spared.…
The consequence was that the most sudden and visible good effects were perceived from the use of the oranges and lemons; one of those who had taken them being at the end of six days fit for duty. The spots were not indeed at that time quite off his body, nor his gums sound; but without any other medicine than a gargarism of elixir of vitriol he became quite healthy before we came into Plymouth, which was on the 16th June. The other was the best recovered of any in his condition, and being now deemed pretty well, was appointed nurse to the rest of the sick. [7]
One of the significant contributors to medicine and clinical trials was Edward Jenner (1749–1823), a British physician who was the first to prove that inoculation with cowpox could prevent deadly smallpox. He observed that milkmaids, who often contracted cowpox and developed sores on their hands, did not contract smallpox. On 14 May 1796, Jenner tested his hypothesis by inoculating an 8-year-old boy, James Phipps, the son of his gardener, with purulent material scraped from cowpox blisters on the hands of a milkmaid (Fig. 12.2). Phipps developed a fever but no major illness. Jenner later inoculated Phipps with smallpox material several times, but the boy remained well. He repeated this experiment on 23 additional subjects and proved that immunization with cowpox could prevent smallpox. Based on these experiments, the British government banned “variolation” in 1840 and supported widespread vaccination with cowpox, free of charge [8].
Fig. 12.2
Edward Jenner vaccinating James Phipps, May 14, 1796: lithograph by French artist Gaston Melingue (1840–1914) (From Horne [31])
The concepts of randomization, blinding, and placebo controls were introduced by Amberson in 1931. He used a coin flip to determine which of two comparable groups of patients with pulmonary tuberculosis received sanocrysin and which received distilled water [9]. To reduce observer bias, the researchers ensured that the group assignment of the patients was known only to two of the authors of the report and the nurse in charge of the ward. Their attempt to conceal from patients the groups to which they had been allocated by using injections of distilled water was unlikely to have been successful, however, because all of the patients receiving sanocrysin suffered adverse systemic effects of the drug, including one death from liver necrosis. Amberson et al. were able to follow 19 of their 24 patients for up to 3 years after the last dose of sanocrysin, and they found no evidence of beneficial effects.
Following the report by Amberson et al., and in the same issue of the American Review of Tuberculosis, there was a less detailed report in which Brock, of the Waverly Hills Sanatorium in Kentucky, arrived at very different conclusions about the effects of sanocrysin [10]. Brock concluded that the drug had “an outstanding clinical effect on exudative tuberculosis in white patients,” although “very little effect in limiting the progression of the disease in black patients.” These conclusions were based on his observations of 46 patients – all treated with varying doses of sanocrysin. Although the patients in Brock’s study suffered some of the same toxic effects of the drug, the patients and the drug regimens with which they were treated differed from those in the study by Amberson and his colleagues. Brock’s patients were not followed after the end of treatment. The stage of disease at which Brock’s white and black patients started treatment also differed, as did the care of black and white patients overall, since they were treated in a segregated 1920s Kentucky sanatorium. Prior to 1954, the 17 southern states and the District of Columbia enforced racial segregation in every area of public activity, including hospital services. Clinicians across the country recognized that Amberson’s study was the better study overall, and they believed his findings. This led to the rightful demise of sanocrysin treatment for tuberculosis in the United States.
12.4 The Beginning of Large-Scale Trials
The first multicenter trials involved treatment of pulmonary tuberculosis with streptomycin and were published in the United Kingdom in 1948 and the United States in 1952. The British study encompassed 107 patients from 7 centers. The patients were carefully selected and divided into two groups: one treated with streptomycin and bed rest, the other with bed rest alone. Patients were followed for 6 months, and the investigators concluded that streptomycin-treated patients fared much better than the control group [11]. In the United States, the Veterans Administration, together with the US Armed Services, continued multicenter trials for tuberculosis over the next two decades, with good success.
Large-scale clinical trials came to be viewed as the gold standard for proving the effectiveness of treatment. The Salk poliomyelitis vaccine trials, sponsored by the National Foundation for Infantile Paralysis (March of Dimes), started in 1954 and involved nearly 1.8 million children [12]. The trials began at the Franklin Sherman Elementary School in McLean, Virginia. Children in the United States, Canada, and Finland participated in these trials, which used for the first time the now-standard double-blind method. On April 12, 1955, researchers announced the vaccine was safe and effective, and it quickly became a standard part of childhood immunizations in America. Nonetheless, the statistical design used in this enormous experiment prompted criticism. Eighty-four test areas in 11 states used a randomized, blinded design in which all participating children aged 6–9 years received injections of either vaccine or placebo and were then observed for evidence of the disease. Other test areas in 33 states used an “observed control” design, in which participating children aged 7–8 years received injections of vaccine, no placebo was given to the control group, and children were then observed for the duration of the polio season. The use of the dual protocol illustrates both the power and the limitations of the randomized clinical trial even in the face of legitimate therapeutic claims. The placebo-controlled trials were necessary to define the Salk vaccine as the product of scientific medicine, even though it had been supported and pushed forward by a lay activist organization (March of Dimes). However, the observed control trials were essential in maintaining public support for the vaccine as “the product of lay faith and investment in science,” since placebo-controlled trials often elicited negative responses from the public [12].
12.5 Ethics in Clinical Trials
The issue of ethics with respect to medical experimentation has been an ongoing concern. One of the most blatant breaches of ethics occurred in Nazi Germany during the 1930s and 1940s. Uncovering these atrocities at the Nuremberg trials helped introduce the concept of international responsibility for medical ethics. The Nuremberg Code for human experimentation was issued in 1947 to address ethical issues surrounding the protection of human subjects. It was the first document to set out ethical regulations for human experimentation based on informed consent [13]. It contained ten principles related to a physician’s ethical duties and made informed consent “absolutely essential.”
Its principles were revisited in a declaration adopted in June 1964 in Helsinki, Finland (the Helsinki Declaration). This was a non-legally binding instrument in international law that set out ethical principles for human experimentation and was developed by the World Medical Association [14]. A notable change from the Nuremberg Code was the relaxation of the conditions of consent, asking doctors to obtain consent “if at all possible” and introducing the concept of proxy consent, such as consent by a legal guardian. The Declaration has undergone numerous revisions over the years. The first revision, in 1975, introduced the concept of oversight of research by an independent committee, which became the system of institutional review boards (IRBs). In the United States, IRB regulation became official in 1981 [14].
12.6 A Test of Medical Ethics in the United States
One of the most infamous clinical studies, in which ethical principles were clearly lacking, was the “Tuskegee Study of Untreated Syphilis in the Negro Male,” conducted from 1932 to 1972 by the US Public Health Service with the cooperation of the Tuskegee Institute [15]. Its aim was to record the natural history of untreated syphilis. It involved 600 black men from Macon County, Alabama, including nearly 400 men with late-stage syphilis and 200 healthy controls. In return for participation, the subjects were promised free physical examinations, a meal on the day of examination, and burial stipends. The subjects were not informed whether they had syphilis, and many were told they were being treated for “bad blood,” a common local lay term. Some officials at the Centers for Disease Control later believed that “bad blood” was a local synonym for syphilis; however, numerous patients interviewed over the years confirmed that they had not been aware that they might have syphilis. Prior to 1946, the standard treatment for syphilis consisted of injections of arsenic and mercury. Penicillin was found to be an effective cure for syphilis in 1946; however, the subjects enrolled in the study were not offered this known effective treatment and were not told of their syphilis diagnosis.