1: Medical harm: a brief history

The cure can be worse than the disease


Medicine has always been an inherently risky enterprise, the hopes of benefit and cure always linked to the possibility of harm. The Greek word ‘pharmakon’ means both remedy and poison; the words ‘kill’ and ‘cure’ were apparently closely linked in ancient Greece (Porter, 1999). Throughout medical history there are instances of cures that proved worse than the disease, of terrible suffering inflicted on hapless patients in the name of medicine, and of well-intentioned though deeply misguided interventions that did more harm than good. Think, for example, of the application of mercury and arsenic as medicines, the heroic bleeding cures of Benjamin Rush, the widespread use of lobotomy in the 1940s and the thalidomide disasters of the 1960s (Sharpe and Faden, 1998). A history of medicine as harm, rather than benefit, could easily be written; a one-sided, incomplete history to be sure, but a feasible proposition nonetheless.


Looking back with all the smugness and wisdom of hindsight, many of these so-called cures now seem absurd, even cruel. In all probability, though, the doctors who inflicted these cures on their patients were intelligent, altruistic, committed physicians whose intention was to relieve suffering. The possibility of harm is inherent to the practice of medicine, especially at the frontiers of knowledge and experience. We might think that the advances of modern medicine mean that medical harm is now of only historical interest. However, for all its genuine and wonderful achievements, modern medicine too has the potential for considerable harm, perhaps even greater harm than in the past. As Chantler (1999) has observed, medicine used to be simple, ineffective and relatively safe; now it is complex, effective and potentially dangerous. New innovations bring new risks, greater power brings greater probability of harm, and new technology offers new possibilities for unforeseen outcomes and lethal hazards. The hazards associated with the delivery of simple, well-understood healthcare of course remain; consider, for example, the routine use of non-sterile injections in many developing countries. Before turning to the hazards of modern medicine, however, we will briefly review some important antecedents of our current concern with the safety of healthcare.


Heroic medicine and natural healing


The phrase ‘First do no harm’, a later twist on the original Hippocratic wording, can be traced to an 1849 treatise ‘Physician and patient’ by Worthington Hooker, who in turn attributed it to an earlier source (Sharpe and Faden, 1998). The background to this injunction, and its use at that point in the development of Western medicine, lay in a reaction to the ‘heroic medicine’ of the early 19th century.


Heroic medicine was, in essence, the willingness to intervene at all costs and to put the saving of life above the immediate suffering of the patient. As Sharpe and Faden (1998) have pointed out in reviewing the history of iatrogenic harm in American medicine, it is this period that stands out for the violence of its remedies. Heroism was certainly required of the patient in the mid-19th century. For instance, in treating cases of ‘morbid excitement’ such as yellow fever, Benjamin Rush, a leading exponent of heroic medicine, might drain over half the total blood volume from his patient. Yet Rush was heroic in his turn, staying in fever-ridden Philadelphia to care for his sick patients. Rush explicitly condemned the Hippocratic belief in the healing power of nature, stating that the first duty of a doctor was ‘heroic action, to fight disease’.


Physicians more trusting of natural healing, on the other hand, saw heroic medicine as dangerous, even murderous. Sharpe and Faden quote the assessment of J. Marion Sims, a famous gynaecological surgeon, writing in 1835 at the time of his graduation from medical school:


I knew nothing about medicine, but I had sense enough to see that doctors were killing their patients, that medicine was not an exact science, that it was wholly empirical and that it would be better to trust entirely to Nature than to the hazardous skill of the doctors.
(SHARPE AND FADEN, 1998: P. 8)


These extreme positions, of heroic intervention and natural healing, eventually gave way to a more conservative position, espoused by such leading physicians as Oliver Wendell Holmes, who attempted to objectively assess the balance of risk and benefit of any particular intervention. This recognizably modern approach puts patient outcome as the determining factor and explicitly broadens the physician’s responsibility to the avoidance of pain and suffering, however induced – whether from the disease or the treatment.


Judgements about what constitutes harm are not straightforward and are inextricably bound up with the personal philosophies of both physician and patient. To the sincere, if misguided, heroic practitioners, loss of life was the one overriding harm to be avoided and any action was justified in that pursuit. The more conservative position moderated this view, striking a balance between intervening to achieve benefit and avoiding unnecessary suffering. Such dilemmas are of course common today when, for instance, a surgeon must consider whether an operation to remove a cancer in a terminally ill patient, which might prolong life, is worth the additional pain, suffering and risk associated with the operation. The final decision nowadays may rest with the patient and family, but they will be strongly influenced by medical advice. The patient too must decide whether to ‘first do no harm’ or whether to risk harm in the pursuit of other benefits. From this we can already see that there is no absolute state of safety to which we can aspire; safety must always be seen in the context of other objectives. Safety can, however, be given greater prominence and made an explicit goal; for much of medical history, by contrast, it was an aspiration not backed by analysis and systematic action.


Hospitalism and hospital acquired infection


Dangerous treatments were one form of harm. However, hospitals could also be secondary sources of harm, in which patients acquired new diseases simply from being in hospital. By the mid-19th century, anaesthesia had made surgery less traumatic and allowed surgeons time to operate in a careful and deliberate manner. However, infection was rife. Sepsis was so common, and gangrene so epidemic, that those entering hospital for surgery were ‘exposed to more chance of death than the English soldier on the field of Waterloo’ (Porter, 1999: p. 369). The term ‘hospitalism’ was coined to describe the disease-promoting qualities of hospitals, and some doctors believed that hospitals needed to be periodically burnt down. As late as 1863, Florence Nightingale introduced her Notes on Hospitals as follows:


It may seem a strange principle to enunciate as the very first requirement in a Hospital that it should do the sick no harm. It is quite necessary, nevertheless, to lay down such a principle, because the actual mortality in hospitals, especially in those of large crowded cities, is very much higher than any calculation founded on the mortality of the same class of diseases amongst patients treated out of hospital would lead us to expect.
(SHARPE AND FADEN, 1998: P. 157)


Puerperal fever, striking mothers after childbirth, was particularly lethal and widely known to be more common in hospital deliveries than in home deliveries. A small number of doctors in both England and America suspected that it was caused by the transfer of ‘germs’ and argued that doctors should wash their hands between performing an autopsy and attending a birth. These claims about the contagious nature of puerperal fever, and the apparently absurd possibility of its being transferred by doctors, were strongly rebutted by many, including the obstetrician Charles Meigs, who concluded his defence with the marvellous assertion that ‘a gentleman’s hands are clean’ (Sharpe and Faden, 1998: p. 154). Bacteria were apparently confined to the lower classes.


Dramatic evidence of the role of hygiene was provided by Ignaz Semmelweiss in Vienna in his study of two obstetric wards. In Ward One, mortality from infection hit a peak of 29%, with 600–800 women dying every year, whereas in Ward Two mortality was 3%. Semmelweiss noted that the only difference between the wards was that patients on Ward One were attended by medical students and those on Ward Two by midwifery students. When the students changed places, mortality rates reversed. Following the rapid death of a colleague who cut his finger during an autopsy, Semmelweiss concluded that his colleague had died of the same disease as the women and that puerperal fever was caused by conveying putrid particles to the pregnant woman during examinations. He instituted a policy of hand disinfection with chlorinated lime, and mortality plummeted. Semmelweiss finally published his findings in 1857, after similar results had been reported in other hospitals, but he found it difficult to persuade his fellow clinicians and his beliefs were still largely ignored when he died in 1865 (Jarvis, 1994).


Lister faced similar battles to gain acceptance of antiseptic techniques in surgery, partly because of scepticism about the existence of microorganisms capable of transmitting infection. However, by the end of the 19th century, with experimental support from the work of Pasteur and Koch, the principles of infection control and the new techniques of sterilization of instruments were fairly well established. Surgical gowns and masks, sterilization and rubber gloves were all in use and, most importantly, surgeons believed that safe surgery was both a possibility and a duty. However, one hundred years later, with the transmission of infection well understood and taught in every nursing and medical school, we face an epidemic of hospital acquired infection. The causes of these infections are complex, with antibiotic-resistant organisms, hospital overcrowding, shortage of time and lack of easily available washing facilities all playing a part. However, as in Semmelweiss’s time, a major factor is the difficulty of ensuring that staff, in the midst of all their other duties, do not forget to wash their hands between patients.


Surgical errors and surgical outcome


Ernest Codman, a Boston surgeon of the early 20th century, was a pioneer in the scientific assessment of surgical outcome and in making patient outcome the guiding principle and justification of surgical intervention. Codman was so disgusted with the lack of evaluation at Massachusetts General Hospital that he resigned to set up his own ‘End-Result’ hospital. This was based on what was, for Codman, the commonsense notion that ‘every hospital should follow every patient it treats, long enough to determine whether or not the treatment has been successful, and then to enquire “if not, why not” with a view to preventing similar failures in the future’ (Sharpe and Faden, 1998: p. 29). Crucially, Codman was prepared to consider, and more remarkably to make public, the occurrence of errors in treatment and to analyse their causes (Box 1.1).



BOX 1.1 Codman’s categories for the assessment of unsuccessful treatment


Errors due to lack of technical knowledge or skill;


Errors due to lack of surgical judgement;


Errors due to lack of care or equipment;


Errors due to lack of diagnostic skill;


The patient’s unconquerable disease;


The patient’s refusal of treatment;


The calamities of surgery or those accidents and complications over which we have no known control.


(FROM SHARPE AND FADEN, 1998)
