Chapter 1 The development of the pharmaceutical industry




Antecedents and origins


Our task in this book is to give an account of the principles underlying drug discovery as it happens today, and to provide pointers to the future. The present situation, of course, represents merely the current frame of a long-running movie. To understand the significance of the different elements that appear in the frame, and to predict what is likely to change in the next few frames, we need to know something about what has gone before. In this chapter we give a brief and selective account of some of the events and trends that have shaped the pharmaceutical industry. Most of the action in our metaphorical movie happened in the last century, despite the film having started at the birth of civilization, some 10 000 years ago. The next decade or two will certainly see at least as much change as the past century.


Many excellent and extensive histories of medicine and the pharmaceutical industry have been published, to which readers seeking more detailed information are referred (Mann, 1984; Sneader, 1985; Weatherall, 1990; Porter, 1997; see also Drews, 2000, 2003).


Disease has been recognized as an enemy of humankind since civilization began, and plagues of infectious diseases arrived as soon as humans began to congregate in settlements about 5000 years ago. Early writings on papyrus and clay tablets describe many kinds of disease, and list a wide variety of herbal and other remedies used to treat them. The earliest such document, the famous Ebers papyrus, dating from around 1550 BC, describes more than 800 such remedies. Disease was in those times regarded as an affliction sent by the gods; consequently, the remedies were aimed partly at neutralizing or purging the affliction, and partly at appeasing the deities. Despite its essentially theistic basis, early medicine nevertheless discovered, through empiricism and common sense, many plant extracts whose pharmacological properties we recognize and still use today; their active principles include opium alkaloids, ephedrine, emetine, cannabis, senna and many others.


In contrast to the ancient Egyptians, who would, one feels, have been completely unsympathetic to medical science had they been time-warped into the 21st century, the ancient Greeks might have felt much more at home in the present era. They sought to understand nature, work out its rules and apply them to alleviate disease, just as we aim to do today. The Hippocratic tradition had little time for theistic explanations. However, the Greeks were not experimenters, and so the basis of Greek medicine remained essentially theoretical. Their theories were philosophical constructs, whose perceived validity rested on their elegance and logical consistency; the idea of testing theory by experiment came much later, and this aspect of present-day science would have found no resonance in ancient Greece. The basic concept of four humours – black bile, yellow bile, blood and phlegm – proved, with the help of Greek reasoning, to be an extremely versatile framework for explaining health and disease. Given the right starting point – cells, molecules and tissues instead of humours – they would quickly have come to terms with modern medicine. From a therapeutic perspective, Greek medicine placed rather little emphasis on herbal remedies; the Greeks incorporated earlier teachings on the subject, but made few advances of their own. The Greek traditions formed the basis of the prolific writings of Galen in the 2nd century AD, whose influence dominated the practice of medicine in Europe well into the Renaissance. Other civilizations, notably Indian, Arabic and Chinese, similarly developed their own medical traditions, which – unlike those of the Greeks – still flourish independently of the Western ones.


Despite the emphasis on herbal remedies in these early medical concepts, and growing scientific interest in their use as medicines from the 18th century onwards, it was only in the mid-19th century that chemistry and biology advanced sufficiently to give a scientific basis to drug therapy, and it was not until the beginning of the 20th century that this knowledge actually began to be applied to the discovery of new drugs. In the long interim, the apothecary’s trade flourished; closely controlled by guilds and apprenticeship schemes, it formed the supply route for the exotic preparations that were used in treatment. The early development of therapeutics – based, as we have seen, mainly on superstition and on theories that have since been swept away by scientific advances – represents prehistory as far as the development of the pharmaceutical industry is concerned, and few, if any, traces of it remain.



Therapeutics in the 19th century


Although preventive medicine had made some spectacular advances – for example the control of scurvy (Lind, 1763) and, in the area of infectious diseases, vaccination (Jenner, 1798), the curtailment of the London cholera epidemic of 1854 by the removal of the Broad Street pump handle (Snow), and the control of childbed fever and surgical infections by antiseptic techniques (Semmelweis, 1861; Lister, 1867) – therapeutic medicine was virtually non-existent until the end of the 19th century.


Oliver Wendell Holmes – a pillar of the medical establishment – wrote in 1860: ‘I firmly believe that if the whole materia medica, as now used, could be sunk to the bottom of the sea, it would be all the better for mankind – and the worse for the fishes’ (see Porter, 1997). This may have been a somewhat ungenerous appraisal, for some contemporary medicines – notably digitalis, famously described by Withering in 1785, extract of willow bark (salicylic acid), and Cinchona extract (quinine) – had beneficial effects that were well documented. But on balance, Holmes was right – medicines did more harm than good.


We can obtain an idea of the state of therapeutics at the time from the first edition of the British Pharmacopoeia, published in 1864, which lists 311 preparations. Of these, 187 were plant-derived materials, only nine of which were purified substances. Most of the plant products – lemon juice, rose hips, yeast, etc. – lacked any components we would now regard as therapeutically relevant, but some – digitalis, castor oil, ergot, colchicum – were pharmacologically active. A further 103 were ‘chemicals’, mainly inorganic – iodine, ferrous sulfate, sodium bicarbonate, and many toxic salts of bismuth, arsenic, lead and mercury – but also a few synthetic chemicals, such as diethyl ether and chloroform. The remaining 21 comprised miscellaneous materials and a few animal products, such as lard, cantharidin and cochineal.



An industry begins to emerge


For the pharmaceutical industry, the transition from prehistory to actual history occurred late in the 19th century (3Q19C, as managers of today might like to call it), when three essential strands came together. These were: the evolving science of biomedicine (and especially pharmacology); the emergence of synthetic organic chemistry; and the development of a chemical industry in Europe, coupled with a medical supplies trade – the result of buoyant entrepreneurship, mainly in America.



Developments in biomedicine


Science began to be applied wholeheartedly to medicine – as to almost every other aspect of life – in the 19th century. Among the most important milestones from the point of view of drug discovery was the elaboration in 1858 of cell theory by the German pathologist Rudolf Virchow. Virchow was a remarkable man: pre-eminent as a pathologist, he also designed the Berlin sewage system and instituted hygiene inspections in schools, and later became an active member of the Reichstag. The tremendous reductionist leap of the cell theory gave biology – and the pharmaceutical industry – the scientific foundation it needed. It is only by thinking of living systems in terms of the function of their cells that one can begin to understand how molecules affect them.


A second milestone was the birth of pharmacology as a scientific discipline when the world’s first Pharmacological Institute was set up in 1847 at Dorpat by Rudolf Buchheim – literally by Buchheim himself, as the Institute was in his own house and funded by him personally. It gained such recognition that the university built him a new one 13 years later. Buchheim foresaw that pharmacology as a science was needed to exploit the knowledge of physiology, which was being advanced by pioneers such as Magendie and Claude Bernard, and link it to therapeutics. When one remembers that this was at a time when organic chemistry and physiology were both in their cradles, and therapeutics was ineffectual, Buchheim’s vision seems bold, if not slightly crazy. Nevertheless, his Institute was a spectacular success. Although he made no truly seminal discoveries, Buchheim imposed on himself and his staff extremely high standards of experimentation and argument, which eclipsed the empiricism of the old therapeutic principles and attracted some exceptionally gifted students. Among these was the legendary Oswald Schmiedeberg, who later moved to Strasbourg, where he set up an Institute of Pharmacology of unrivalled size and grandeur, which soon became the Mecca for would-be pharmacologists all over the world.


A third milestone came with Louis Pasteur’s germ theory of disease, proposed in Paris in 1878. A chemist by training, Pasteur was initially interested in the process of fermentation of wine and beer, and in the souring of milk. He showed, famously, that airborne infection was the underlying cause, and concluded that the air was actually alive with microorganisms. Particular types, he argued, were pathogenic to humans, and accounted for many forms of disease, including anthrax, cholera and rabies. Pasteur successfully introduced several specific immunization procedures to give protection against infectious diseases. Robert Koch, Pasteur’s rival and near-contemporary, clinched the infection theory by observing anthrax and other bacilli in the blood of infected animals.


The founder of chemotherapy – some would say the founder of molecular pharmacology – was Paul Ehrlich (see Drews, 2004 for a brief biography). Born in 1854 and trained in pathology, Ehrlich became interested in histological stains and tested a wide range of the synthetic chemical dyes that were being produced at that time. He invented ‘vital staining’ – staining by dyes injected into living animals – and described how the chemical properties of the dyes, particularly their acidity and lipid solubility, influenced the distribution of dye to particular tissues and cellular structures. Thence came the idea of specific binding of molecules to particular cellular components, which directed not only Ehrlich’s study of chemotherapeutic agents, but much of pharmacological thinking ever since. ‘Receptor’ and ‘magic bullet’ are Ehrlich’s terms, though he envisaged receptors as targets for toxins, rather than physiological mediators. Working in Koch’s Institute, Ehrlich developed diphtheria antitoxin for clinical use, and put forward a theory of antibody action based on specific chemical recognition of microbial macromolecules, work for which he shared the 1908 Nobel Prize. Ehrlich later became director of his own Institute in Frankfurt, close to a large dye works, and returned to his idea of using the specific binding properties of synthetic dyes to develop selective antimicrobial drugs.


At this point, we interrupt the biological theme at the end of the 19th century, with Ehrlich in full flood, on the verge of introducing the first designer drugs, and turn to the chemical and commercial developments that were going on simultaneously.



Developments in chemistry


The first synthetic chemicals to be used for medical purposes were, ironically, not therapeutic agents at all, but anaesthetics. Diethyl ether (‘sweet oil of vitriol’) was first made and described in 1540. Early in the 19th century, it and nitrous oxide (prepared by Humphry Davy in 1799 and found – by experiments on himself – to have stupefying properties) were used to liven up parties and sideshows; their usefulness as surgical anaesthetics was demonstrated, amid much controversy, only in the 1840s, by which time chloroform had also made its appearance. Synthetic chemistry at the time could deal only with very simple molecules, made by recipe rather than reason, as our understanding of molecular structure was still in its infancy. The first therapeutic drug to come from synthetic chemistry was amyl nitrite, prepared in 1859 by Guthrie and introduced, on the basis of its vasodilator activity, for treating angina by Brunton in 1867 – the first example of a drug born in a recognizably ‘modern’ way, through the application of synthetic chemistry, physiology and clinical medicine. This was a landmark indeed, for it was nearly 40 years before synthetic chemistry made any further significant contribution to therapeutics, and not until well into the 20th century that physiological and pharmacological knowledge began to be applied to the invention of new drugs.


It was during the latter half of the 19th century that the foundations of synthetic organic chemistry were laid, the impetus coming from work on aniline, a copious byproduct of the coal-tar industry. The key step was taken by an English chemist, Perkin, who in 1856 succeeded in preparing from aniline a vivid purple compound, mauveine. This was actually a chemical accident, as Perkin’s aim had been to synthesize quinine. Nevertheless, the discovery gave birth to the synthetic dyestuffs industry, which played a major part in establishing the commercial potential of synthetic organic chemistry – a technology which later became a lynchpin of the evolving pharmaceutical industry. A systematic approach to organic synthesis went hand in hand with improved understanding of chemical structure. Crucial steps were the establishment of the rules of chemical equivalence (valency), and the elucidation of the structure of benzene by Kekulé in 1865. The first representation of a structural formula depicting the bonds between atoms in two dimensions, based on valency rules, also appeared in 1865.


The reason why Perkin had sought to synthesize quinine was that the drug, prepared from Cinchona bark, was much in demand for the treatment of malaria, one of whose effects is to cause high fever. So quinine was (wrongly, as it turned out) designated as an antipyretic drug, and used to treat fevers of all kinds. Because quinine itself could not be synthesized, fragments of the molecule were made instead; these included antipyrine, phenacetin and various others, which were introduced with great success in the 1880s and 1890s. These were the first drugs to be ‘designed’ on chemical principles.



The apothecaries’ trade


Despite the lack of efficacy of the pharmaceutical preparations that were available in the 19th century, the apothecary’s trade flourished; then, as now, physicians felt themselves obliged to issue prescriptions to satisfy the expectations of their patients for some token of remedial intent. Early in the 19th century, when many small apothecary businesses existed to satisfy the demand on a local basis, a few enterprising chemists undertook the task of isolating the active substances from these plant extracts. This was a bold and inspired leap, and one that attracted a good deal of ridicule. Although the old idea of ‘signatures’, which held that plants owed their medicinal properties to their biological characteristics, was falling into disrepute, few were willing to accept that individual chemical substances could be responsible for the effects these plants produced, such as emesis, narcosis, purgation or fever. The trend began with Friedrich Sertürner, a junior apothecary in Westphalia, who in 1805 isolated and purified morphine, barely surviving a test of its potency on himself. This was the first ‘alkaloid’, so named because of its ability to neutralize acids and form salts. This discovery led to the isolation of several more plant alkaloids, including emetine, strychnine, caffeine and quinine, mainly by two remarkably prolific chemists, Caventou and Pelletier, working in Paris in the period 1810–1825. The recognition that medicinal plants owed their properties to their individual chemical constituents, rather than to some intangible property associated with their living nature, marks a critical point in the history of the pharmaceutical industry. It can be seen as the point of origin of two of the three strands from which the industry grew – namely the beginnings of the ‘industrialization’ of the apothecary’s trade, and the emergence of the science of pharmacology. And by revealing the chemical nature of medicinal preparations, it hinted at the future possibility of making medicines artificially. Even though, at that time, synthetic organic chemistry was barely out of its cradle, these discoveries provided the impetus that later caused the chemical industry to turn, at a very early stage in its history, to making drugs.


The first local apothecary business to move into large-scale production and marketing of pharmaceuticals was the old-established Darmstadt firm Merck, founded in 1668. This development, in 1827, was stimulated by the advances in purification of natural products. Merck was closely followed in this astute business move by other German- and Swiss-based apothecary businesses, several of which – Schering and Boehringer among them – later also became giant pharmaceutical companies. The American pharmaceutical industry emerged in the middle of the 19th century. Squibb began in 1858, with ether as its main product. Soon after came Parke Davis (1866) and Eli Lilly (1876); both had a broader franchise as manufacturing chemists. In the 1890s Parke Davis became the world’s largest pharmaceutical company, one of whose early successes was to purify crystalline adrenaline from adrenal glands and sell it in ampoules for injection. The US scientific community contested the adoption of the word ‘adrenaline’ as a trade name, but industry won the day and the scientists were forced to call the hormone ‘epinephrine’.


The move into pharmaceuticals was also made by several chemical companies, such as Bayer, Hoechst, Agfa, Sandoz and Geigy, which began, not as apothecaries, but as dyestuffs manufacturers. The dyestuffs industry at that time was also based largely on plant products, which had to be refined, and were sold in relatively small quantities, so the commercial parallels with the pharmaceutical industry were plain. Dye factories, for obvious reasons, were usually located close to large rivers, a fact that accounts for the present-day location of many large pharmaceutical companies in Europe. As we shall see, the link with the dyestuffs industry later came to have much more profound implications for drug discovery.


From about 1870 onwards – following the crucial discovery by Kekulé of the structure of benzene – the dyestuffs industry turned increasingly to synthetic chemistry as a source of new compounds, starting with aniline-based dyes. A glance through any modern pharmacopoeia will show the overwhelming preponderance of synthetic aromatic compounds, based on the benzene ring structure, among the list of useful drugs. Understanding the nature of aromaticity was critical. Though we might be able to dispense with the benzene ring in some fields of applied chemistry, such as fuels, lubricants, plastics or detergents, its exclusion would leave the pharmacopoeia bankrupt. Many of these dyestuffs companies saw the potential of the medicines business from 1880 onwards, and moved into the area hitherto occupied by the apothecaries. The result was the first wave of companies ready to apply chemical technology to the production of medicines. Many of these founder companies remained in business for years. It was only recently, when their cannibalistic urges took over in the race to become large, that mergers and takeovers caused many names to disappear.


Thus the beginnings of a recognizable pharmaceutical industry date from about 1860–1880, its origins being in the apothecaries’ and medical supplies trades on the one hand, and the dyestuffs industry on the other. In those early days, however, the industry had rather few products to sell; these were mainly inorganic compounds of varying degrees of toxicity, and others best described as concoctions. Holmes (see above) dismissed the pharmacopoeia in 1860 as worse than useless.


To turn this ambitious new industry into a source of human benefit, rather than just corporate profit, required two things. First, it had to embrace the principles of biomedicine, and in particular pharmacology, which provided a basis for understanding how disease and drugs, respectively, affect the function of living organisms. Second, it had to embrace the principles of chemistry, going beyond the descriptors of colour, crystallinity, taste, volatility, etc., towards an understanding of the structure and properties of molecules, and how to make them in the laboratory. As we have seen, both of these fields had made tremendous progress towards the end of the 19th century, so at the start of the 20th century the time was right for the industry to seize its chance. Nevertheless, several decades passed before the inventions coming from the industry began to make a major impact on the treatment of disease.



The industry enters the 20th century


By the end of the 19th century various synthetic drugs had been made and tested, including the ‘antipyretics’ (see above) and also various central nervous system depressants. Chemical developments based on chloroform had produced chloral hydrate, the first non-volatile CNS depressant, which was in clinical use for many years as a hypnotic drug. Independently, various compounds based on urea were found to act similarly, and von Mering followed this lead to produce the first barbiturate, barbitone (since renamed barbital), which was introduced in 1903 by Bayer and gained widespread clinical use as a hypnotic, tranquillizer and antiepileptic drug – the first blockbuster. Almost simultaneously, Einhorn in Munich synthesized procaine, the first synthetic local anaesthetic drug, following the lead of the naturally occurring alkaloid cocaine. The local anaesthetic action of cocaine on the eye was discovered by Sigmund Freud and his ophthalmologist colleague Koller in the late 19th century, and was heralded as a major advance for ophthalmic surgery. After several chemists had tried, with limited success, to make synthetic compounds with the same actions, procaine was finally produced and introduced commercially in 1905 by Hoechst. Barbitone and procaine were triumphs for chemical ingenuity, but owed little or nothing to physiology or, indeed, to pharmacology. The physiological site or sites of action of barbiturates remain unclear to this day, and their mechanism of action at the molecular level was unknown until the 1980s.


From this stage, where chemistry began to make an impact on drug discovery, up to the last quarter of the 20th century, when molecular biology began to emerge as a dominant technology, we can discern three main routes by which new drugs were discovered, namely chemistry-driven approaches, target-directed approaches and accidental clinical discoveries. In many of the most successful case histories, graphically described by Weatherall (1990), the three were closely interwoven. The remarkable family of diverse and important drugs that came from the original sulfonamide lead, described below, exemplifies this pattern very well.



Chemistry-driven drug discovery



Synthetic chemistry


The pattern of drug discovery driven by synthetic chemistry – with biology often struggling to keep up – became the established model in the early part of the 20th century, and prevailed for at least 50 years. The balance of research in the pharmaceutical industry up to the 1970s placed chemistry clearly as the key discipline in drug discovery, the task of biologists being mainly to devise and perform assays capable of revealing possible useful therapeutic activity among the many anonymous white powders that arrived for testing. Research management in the industry was largely in the hands of chemists. This strategy produced many successes, including benzodiazepine tranquillizers, several antiepileptic drugs, antihypertensive drugs, antidepressants and antipsychotic drugs. The surviving practice of classifying many drugs on the basis of their chemical structure (e.g. phenothiazines, benzodiazepines, thiazides, etc.), rather than on the more logical basis of their site or mode of action, stems from this era.

The development of antiepileptic drugs exemplifies this approach well. Following the success of barbital (see above), several related compounds were made, including the phenyl derivative phenobarbital, first made in 1911. This proved to be an effective hypnotic (i.e. sleep-inducing) drug, helpful in allowing peaceful nights in a ward full of restive patients. By chance, a German doctor found that it also reduced the frequency of seizures when tested in epileptic patients – an example of clinical serendipity (see below) – and it became widely used for this purpose, being much more effective in this regard than barbital itself. About 20 years later, Putnam, working in Boston, developed an animal model whereby epilepsy-like seizures could be induced in mice by electrical stimulation of the brain via extracranial electrodes. This simple model allowed hundreds of compounds to be tested for potential antiepileptic activity. Phenytoin was an early success of this programme, and several more compounds followed as chemists from several companies embarked on synthetic programmes. None of this relied at all on an understanding of the mechanism of action of these compounds – which is still controversial; all that was needed were teams of green-fingered chemists and a robust assay that predicted efficacy in the clinic.



Natural product chemistry


We have mentioned the early days of pharmacology, with its focus on plant-derived materials, such as atropine, tubocurarine, strychnine, digitalis and ergot alkaloids, which were almost the only drugs that existed until well into the 20th century. Despite the rise of synthetic chemistry, natural products remain a significant source of new drugs, particularly in the field of chemotherapy, but also in other applications. Following the discovery of penicillin by Fleming in 1928 – described by Mann (1984) as ‘the most important medical discovery of all time’ – and its development as an antibiotic for clinical use by Chain and Florey from 1938, an intense search was undertaken for antibacterial compounds produced by fungi and other microorganisms, which yielded many useful antibiotics, including chloramphenicol (1947), tetracyclines (1948), streptomycin (1949) and others. The same group of microorganisms (the actinomycetes) that yielded streptomycin also produced actinomycin D, used in cancer chemotherapy. Higher plants have continued to yield useful drugs, including vincristine and vinblastine (1958), and paclitaxel (Taxol, 1971).


Outside the field of chemotherapy, successful drugs derived from natural products include ciclosporin (1972) and tacrolimus (1993), both of which come from microorganisms and are used to prevent transplant rejection. Soon after ciclosporin came mevastatin (1976), a fungal metabolite, which was the first of the ‘statin’ series of cholesterol-lowering drugs that act by inhibiting the enzyme HMG-CoA reductase.


Overall, the pharmaceutical industry continues to have something of a love–hate relationship with natural products. They often have weird and wonderful structures that cause hardened chemists to turn pale; they are often near-impossible to synthesize, troublesome to produce from natural sources, and ‘optimizing’ such molecules to make them suitable for therapeutic use is akin to remodelling Westminster Abbey to improve its acoustics. But the fact remains that Nature unexpectedly provides some of our most useful drugs, and most of its potential remains untapped.
