4: The nature and scale of error and harm

Studying errors and adverse events


There are a number of methods of studying errors and adverse events, each of which has evolved over time and been adapted to different contexts. Each has particular strengths, and also weaknesses and limitations. What, you might reasonably ask, is the best method? The answer, as so often in research, is that it depends on what you are trying to do and what questions you are trying to answer. Some methods are useful for identifying how often adverse events occur, others are stronger on why they happen; some are warning systems rather than methods of counting, and so on. Failing to understand that different methods have different purposes has led to considerable confusion and much fruitless debate over the years. For instance, the major retrospective record reviews have sometimes been criticized for not providing data on human factors and other issues not identified in medical records. In fact, such studies are not intended to provide that information: their primary purpose is to assess the nature and scale of harm, although recent review techniques suggest that valuable information on cause and prevention can also be extracted. In all cases, the methodology of a study will depend on the questions being addressed, the resources available and the context of the study.


Methods of study


Thomas and Petersen (2003) classified methods of studying errors and adverse events into eight broad groups and reviewed the respective advantages and disadvantages of each method. In their paper, they use the term error to include terms such as mistakes, close calls, near misses and factors that contribute to error. In a later chapter we will discuss the difficulties of defining and classifying errors, but in this section the term error is used as a catch-all for any incident that does not involve patient harm. The terms ‘near miss’ and ‘close call’ are seldom clearly defined but broadly speaking refer to incidents in which harm was only narrowly avoided; this includes both incidents which never developed to the point of actually harming a patient and those in which prompt action averted disaster.


Tables 4.1 and 4.2 summarize the main types of studies of errors and adverse events, and their respective advantages and limitations. Thomas and Petersen’s original table has been separated into two tables here and the content adjusted; in particular, a section on case analysis has been added. Case analyses, usually referred to as root cause analysis or systems analysis, share some of the features of morbidity and mortality meetings, but are generally more focused and follow a particular method of analysis (Vincent, 2003) (Chapter 8).


Methods differ in several respects. Some methods are orientated towards detecting the incidence (how many) of errors and adverse events (Table 4.1), while others address their causes and contributory factors (why things go wrong) (Table 4.2). The various methods rely on different sources of data: medical records, observations, claims data, voluntary reports and so on. Some focus on single cases or small numbers of cases with particular characteristics, such as claims, while others attempt to randomly sample a defined population. Thomas and Petersen suggest that the methods can be placed along a continuum, with active clinical surveillance of specific types of adverse event (e.g. surgical complications) being the ideal method for assessing incidence, and methods such as case analysis and morbidity and mortality meetings being more orientated towards causes. There is no perfect way of estimating the incidence of adverse events or of errors. For various reasons, all of them give a partial picture. Record review is comprehensive and systematic, but by definition is restricted to matters noted in the medical record. Reporting systems are strongly dependent on the willingness of staff to report and are a very imperfect reflection of the underlying rate of errors or adverse events (though they have other uses).


Hindsight bias


Hindsight bias is mentioned several times in the tables. What is hindsight bias? The term derives from the psychological literature, and in particular from experimental studies showing that people exaggerate in retrospect what they knew before an incident occurred: the ‘knew-it-all-along’ effect. After a disaster, with the benefit of hindsight, it all looks so simple, and the ‘expert’ reviewing the case wonders why the clinician involved could not see the obvious connections. Looking back, we inevitably and grossly simplify the situation actually faced by the clinician. We cannot capture the multiple pathways open to the clinician at the time or the unfolding story of a clinical encounter. Still less can we capture the pressures and distractions that may have been affecting clinical judgement, such as fatigue, hunger and having to deal with several other patients with complex conditions.



Table 4.1 Methods of measuring errors and adverse events


Adapted from Thomas and Petersen, 2003


Hindsight bias has another facet, perhaps better termed outcome bias, which is particularly relevant in healthcare. When an outcome is bad, those looking back are much more likely to be critical of the care that was given and more likely to detect errors. For instance, Caplan, Posner and Cheney (1991) asked two groups of physicians to review sets of notes. The sets of notes were identical, except that for one group the outcome was described as satisfactory and for the other group as poor for the patient. The reviewers who believed the outcomes were poor were much more critical of the care, even though the care described was exactly the same. So, we simplify things in retrospect and tend to be more critical when the outcome is bad.



Table 4.2 Methods of understanding errors and adverse events


Adapted from Thomas and Petersen, 2003


Studying adverse events using case record review


Retrospective reviews of medical records aim to assess the nature, incidence and economic impact of adverse events and to provide some information on their causes. An adverse event is defined as an unintended injury, caused by medical management rather than the disease process, which results in some definite harm or, at the very least, additional days spent in hospital (Box 4.1). Definitions are critical in patient safety and one has to be constantly aware of differences in terminology. For instance, a study by Andrews et al. (1997) in the United States showed a 17.7% rate of serious adverse events in a surgical unit, much higher than in most other studies. However, their definition of adverse event was different from that usually employed, and they used observation rather than the record review used in most other studies. These are not flaws; the study is a good one. The point is that one has to be careful about definitions when interpreting findings and comparing studies.



BOX 4.1 Defining adverse events


An adverse event is an unintended injury caused by medical management rather than by the disease process, and which is sufficiently serious to lead to prolongation of hospitalization, to temporary or permanent impairment or disability of the patient at the time of discharge, or both:



  • Medical management includes the actions of individual members of staff as well as those of the overall healthcare system.
  • Medical management includes acts of omission (e.g. failure to diagnose or treat) and commission (e.g. incorrect treatment).
  • Causation of an adverse event by medical management is judged on a 6-point scale, where 1 indicates ‘virtually no evidence for causation’ and 6 indicates ‘virtually certain evidence for causation’. Only adverse events with a score of 4 or higher, requiring evidence that causation is more likely than not, are reported in the results.
  • Adverse events may or may not be preventable, a separate judgement from that of causation. Preventability was also judged on a 6-point scale, with only those adverse events scoring 4 or higher being considered preventable (these thresholds are illustrated in the sketch that follows this box).
  • Injury may result from intervention or from failure to intervene. Injuries that come about from failure to arrest the disease process are also included, provided that standard care would clearly have prevented the injury.
  • The injury has to be unintended, since injury can occur deliberately and with good reason (e.g. amputation).
  • Adverse events include recognized complications, which will be judged as leading to harm but being of low preventability.

(FROM BRENNAN ET AL., 1991)
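The causation and preventability thresholds above lend themselves to a compact illustration. The sketch below is purely illustrative and not part of the published instrument: the function and field names are hypothetical, and only the 6-point scales and the ‘score of 4 or higher’ rule come from the definition in Box 4.1.

```python
def classify_review_judgement(causation_score: int, preventability_score: int) -> dict:
    """Apply the Box 4.1 thresholds to a reviewer's scores (illustrative only)."""
    for score in (causation_score, preventability_score):
        if not 1 <= score <= 6:
            raise ValueError("scores must lie on the 6-point scale (1-6)")

    # Counted as an adverse event only if causation is 'more likely than not'.
    adverse_event = causation_score >= 4
    # Preventability is a separate judgement, applied with the same threshold.
    preventable = adverse_event and preventability_score >= 4
    return {"adverse_event": adverse_event, "preventable": preventable}


# Example: strong evidence of causation (5) but little evidence of preventability (2)
# is counted as an adverse event, though not a preventable one.
print(classify_review_judgement(5, 2))
```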


The basic record review process is as follows. In phase I, nurses or experienced record clerks are trained to identify case records that satisfy one or more of 18 well-defined screening criteria – such as death, transfer to a special care unit or re-admission to hospital within 12 months. These have been shown to be associated with an increased likelihood of an adverse event (Neale and Woloshynowych, 2003). In phase II, trained doctors analyse positively screened records in detail, using a standard set of questions, to determine whether or not they contain evidence of an adverse event. The basic method has been followed in all the major national studies, though modifications of the review form and data capture have been developed (Woloshynowych, Neale and Vincent, 2003). In France, Philippe Michel used prospective review, in the sense that record review is carried out close to the time of discharge on a previously defined set of patients and, in some cases, combined with interviews with staff (Michel et al., 2004).
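As a rough illustration of this two-phase process, the sketch below mimics the flow of records through screening and detailed review. It is a simplification under stated assumptions: the criterion names, record fields and review outputs are placeholders, not the published 18-criterion instrument or review form.

```python
# Illustrative sketch of two-phase retrospective record review.
# Criterion names and record fields are placeholders, not the published instrument.

SCREENING_CRITERIA = [
    "death_during_admission",
    "transfer_to_special_care_unit",
    "readmission_within_12_months",
    # ... the full instrument uses 18 such criteria
]


def phase_one_screen(record: dict) -> bool:
    """Phase I: nurses or record clerks flag records meeting any screening criterion."""
    return any(record.get(criterion, False) for criterion in SCREENING_CRITERIA)


def phase_two_review(record: dict) -> dict:
    """Phase II: trained doctors examine positively screened records in detail.

    In the studies this judgement comes from a structured review form; here it is
    represented by fields assumed to be attached to the record.
    """
    return {
        "record_id": record.get("record_id"),
        "adverse_event": record.get("reviewer_found_adverse_event", False),
        "causation_score": record.get("causation_score"),             # 1-6 scale
        "preventability_score": record.get("preventability_score"),   # 1-6 scale
    }


def review_case_records(records: list[dict]) -> list[dict]:
    """Screen every record, then review only the positive screens."""
    return [phase_two_review(r) for r in records if phase_one_screen(r)]
```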


The classic, pioneering study in this area is the Harvard Medical Practice Study, still hugely influential and much debated 20 years after it was carried out (Box 4.2). Similar studies have been conducted in Australia (Wilson et al., 1995), Utah and Colorado (Gawande et al., 1999), the United Kingdom (Vincent, Neale and Woloshynowych, 2001), Denmark (Schioler et al., 2001), New Zealand (Davis et al., 2002), Canada (Baker et al., 2004), France (Michel et al., 2007) and other countries. The results of these studies are summarized in Table 4.3 and constitute, as Peter Davis expresses it, a new public health risk:



Table 4.3 Adverse events in acute hospitals in ten countries


Of the top 20 risk factors that account for nearly three-quarters of all deaths annually, adverse in-hospital events come in at number 11 above air pollution, alcohol and drugs, violence and road traffic injury.
(DAVIS, 2004)



BOX 4.2 The Harvard Medical Practice Study


The Harvard Medical Practice Study reviewed patient records of 30,121 randomly chosen hospitalizations from 51 randomly chosen acute care, non-psychiatric hospitals in New York State in 1984. The goal was to better understand the epidemiology of patient injury and to inform efforts to reform systems of patient compensation. The focus was therefore on injuries that might eventually lead to legal action. Minor errors and those causing only minor discomfort or inconvenience were not addressed.


Adverse events occurred in 3.7% of hospitalizations, and 27.6% of these were due to negligence (defined as care that fell below the standard expected of physicians in that community, and which might therefore lead to legal action). Almost half (47.7%) of adverse events were associated with an operation. The most common non-operative adverse events were adverse drug events, followed by diagnostic mishaps, therapeutic mishaps, procedure-related events and others. Permanent disability resulted from 6.5% of adverse events and 13.6% involved the death of a patient. Extrapolations from these data suggested that approximately 100,000 deaths each year were associated with adverse events (a rough illustration of this kind of extrapolation follows the box). Later analyses indicated that 69.6% of adverse events were potentially preventable.


(FROM BRENNAN ET AL., 1991; LEAPE ET AL., 1991)
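To see how proportions of this kind scale up to national figures, the sketch below works through the arithmetic using the percentages quoted in the box. It is a rough illustration only: the annual admission count is an assumed round number, and the published estimates rested on more careful adjustments for hospital and patient mix.

```python
# Rough illustration of scaling adverse event proportions to national figures.
# The admission count is an assumed round number, not a figure from the studies.

adverse_event_rate = 0.037   # 3.7% of hospitalizations (Harvard MPS)
fatal_fraction = 0.136       # 13.6% of adverse events involved the death of a patient

per_1000_admissions = 1000 * adverse_event_rate * fatal_fraction
print(f"~{per_1000_admissions:.1f} deaths associated with adverse events per 1000 admissions")

assumed_annual_admissions = 20_000_000   # illustrative assumption only
estimated_deaths = assumed_annual_admissions * adverse_event_rate * fatal_fraction
print(f"~{estimated_deaths:,.0f} deaths per year under these assumptions")
```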


Rates of adverse events in most recent studies lie between 8 and 12%, a range now accepted as being typical of advanced healthcare systems (de Vries et al., 2008). The rate per patient is always slightly higher, as some patients suffer more than one event, and about half of adverse events are generally judged to be preventable. US rates are much lower, and Australian rates seemingly much higher.


The lower US rates might reflect better quality care, but most probably reflect the narrower focus on negligent injury rather than the broader quality improvement focus of most other studies (Thomas et al., 2000a). Eric Thomas and colleagues also found, in a careful comparison of specific types of adverse events, that Australian reviewers reported many more minor expected or anticipated complications, such as wound infection, skin injury and urinary tract infection. These are adverse events by the strict definition of the term, but were not included by the American reviewers, who were focusing on more serious injuries (Thomas et al., 2000a).


Examples of adverse events from the first British study are shown in Box 4.3. Some, such as the reaction to anaesthetic, are not serious for the patient but are classed as adverse events because there was an increased stay in hospital of one day; the reaction was probably not preventable, in that it would have been hard to predict such an idiosyncratic response. Many adverse events, about 70% in most studies, do not have serious consequences for the patient; the effects of minor events may be more economic, in the sense of wasted time and resources, than clinical. Others, however, as the remaining examples show, cause considerable unnecessary suffering and extended time in hospital.



BOX 4.3 Examples of adverse events of varying severity


An 18-year-old girl was admitted as a day surgery case for examination of ears under anaesthetic. During recovery the patient suffered three fits related to the anaesthetic and required intravenous medication to control fits and an extended stay for overnight observation.


A 65-year-old lady was admitted to hospital for repair of a strangulated incisional hernia. Post-operatively the wound site failed to heal. The patient was sent home with a discharging and offensive wound. She returned three days later with a gaping and infected wound, which required cleansing and re-suturing under a general anaesthetic, antibiotics and an extended hospital stay of 15 days.


A 24-year-old woman with spina bifida presented to the emergency department feeling unwell. Her ankles were swollen and she was noted to have recently had a urinary tract infection. She was treated with antibiotics and discharged home. A week later she was admitted to hospital with very swollen lower limbs, high blood pressure and raised central venous pressure. A diagnosis of hypertensive congestive cardiac failure was made, delayed a week because of an incomplete initial assessment in the emergency department.


A 53-year-old man with a history of stroke, MRSA infection, leg ulcers and heart failure was admitted for treatment of venous ulceration and cellulitis of both legs. Post-operatively the patient had a urinary catheter in place; incorrect management of the catheter resulted in necrosis of the tip of the penis. He underwent a supra-pubic catheterization and developed an infection. The patient’s hospital stay was extended by 26 days.


(FROM VINCENT, NEALE AND WOLOSHYNOWYCH, 2001; NEALE, WOLOSHYNOWYCH AND VINCENT, 2001)


The impact and cost of adverse events


As the examples show, many patients suffer increased pain and disability from serious adverse events. They often also suffer psychological trauma and may experience failures in their treatment as a terrible betrayal of trust. Staff may experience shame, guilt and depression after making a mistake, with litigation and complaints imposing an additional burden (Vincent, 1997). These profoundly important aspects of patient safety, generally given far too little attention, are considered in Chapters 8 and 9.


The financial costs of adverse events, in terms of additional treatment and extra days in hospital, are considerable and vastly greater than the costs of litigation. One of the most consistent findings from the record reviews is that, on average, a patient suffering an adverse event stays an extra six to eight days in hospital. An extra few days in hospital is, clinically speaking, an unremarkable event and not necessarily particularly traumatic or unpleasant for the patient. However, when the sums are done and the findings extrapolated nationally, the costs are staggering. In Britain, the cost of preventable adverse events is £1 billion per annum in lost bed days alone (Vincent, Neale and Woloshynowych, 2001). The wider costs of lost working time, disability benefits and the broader economic consequences would be greater still. The Institute of Medicine report (1999) estimated that in the United States total annual national costs (lost income, lost household production, disability, healthcare costs) were between $17 billion and $29 billion for preventable adverse events and about double that for all adverse events; healthcare costs accounted for over one half of the total costs incurred. Even using the lower estimates, the total national costs associated with adverse events and preventable adverse events represent approximately 4% and 2%, respectively, of national health expenditure (Kohn, Corrigan and Donaldson, 1999).
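The scale of these estimates follows from simple arithmetic of the kind sketched below. Every input is an illustrative round number chosen to show how a few extra bed days per preventable adverse event accumulate into totals of this order; they are not the actual figures used in the studies cited.

```python
# Illustrative bed-day cost calculation; every input is an assumed round number,
# not a value taken from the studies cited in the text.

annual_admissions = 8_500_000        # assumed hospital admissions per year
adverse_event_rate = 0.10            # ~10%, within the 8-12% range quoted above
preventable_fraction = 0.5           # about half judged preventable
extra_bed_days_per_event = 7         # within the six-to-eight day range
cost_per_bed_day = 300.0             # assumed cost in pounds

preventable_events = annual_admissions * adverse_event_rate * preventable_fraction
extra_bed_days = preventable_events * extra_bed_days_per_event
bed_day_cost = extra_bed_days * cost_per_bed_day

print(f"{preventable_events:,.0f} preventable adverse events per year")
print(f"{extra_bed_days:,.0f} extra bed days")
print(f"£{bed_day_cost / 1e9:.1f} billion in lost bed days alone")
```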


Costs of direct hospital care, essentially additional time in hospital, have recently been estimated from the Dutch adverse events study, which found that about 3% of all bed days and 1% of the total health budget could be attributed to preventable adverse events. The real overall costs are probably a good deal higher, as this estimate does not include additional treatments and investigations or any of the associated societal costs discussed above. Remember also that these estimates are confined to the hospital sector; we have no idea of the additional costs of adverse events in primary care or mental health.


Complications and adverse events in surgery


A significant percentage of adverse events are associated with a surgical procedure. For instance, in the Utah Colorado Medical Practice Study, the annual incidence rate of adverse events amongst hospitalized patients who received an operation was 3%, of which half were preventable. Some operations, such as extremity bypass graft, abdominal aortic aneurysm repair and colon resection, carried a particularly high risk of preventable adverse events (Thomas et al., 2000b; Thomas and Brennan, 2001).


In the United Kingdom, complication rates for some of the major operations are 20–25%, with an acceptable mortality of 5–10% (Vincent et al., 2004). However, at least 30–50% of major complications occurring in patients undergoing general surgical procedures are thought to be avoidable. In Canada, Wanzel et al. (2002) prospectively monitored the presence and documentation of complications for all 192 patients admitted over a two-month period to a general surgical ward. Seventy-five (39%) of the patients suffered a total of 144 complications, 2 of which were fatal, 10 life threatening and 90 of moderate severity. Almost all the complications were documented in the patients’ notes, but only 20% were reviewed at the weekly morbidity and mortality rounds; about one-fifth of complications were due, in part, to error. Many adverse events classified as operative are, on closer examination, found to be due to problems in ward management rather than intra-operative care. For instance, Neale, Woloshynowych and Vincent (2001) identified preventable pressure sores, chest infections, falls and poor care of urethral catheters in their study of adverse events, together with a variety of problems with the administration of drugs and intravenous fluids.


Deaths from adverse events: can we believe the findings of retrospective record review?


Retrospective review of medical records, like any other research method, has its limitations and the findings of the studies have to be interpreted with due regard to those limitations. Adverse events that are not recorded in the notes, or at least cannot be discerned from the notes, will not be detected, so record review probably provides a lower estimate of the actual scale of harm. The process of record review also necessarily relies on implicit clinical judgement, and agreement between reviewers, particularly on judgements of preventability, has often been only moderate (Neale and Woloshynowych, 2003). Great efforts have been made to strengthen the accuracy and reproducibility of these judgements by training, by the use of structured data collection, by duplicate review with re-review and by resolution of disagreements; however, even with training the reliability of such judgements is only moderate. Nevertheless, following a series of careful studies, Kieran Walshe concluded that the recognition of adverse events by record review had moderate to good face, content and construct validity with respect to quality of care in a hospital setting (Walshe, 2000).
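Agreement between reviewers is commonly summarized with chance-corrected statistics such as Cohen’s kappa; the studies cited here do not all name the statistic they used, so treat this as an illustrative choice. The sketch below computes kappa for two reviewers making yes/no preventability judgements on a hypothetical set of 100 adverse events; values around 0.4–0.6 are conventionally described as moderate agreement.

```python
# Cohen's kappa for two reviewers making yes/no preventability judgements.
# The 2x2 counts below are invented purely for illustration.

def cohens_kappa(both_yes: int, both_no: int, only_a_yes: int, only_b_yes: int) -> float:
    """Chance-corrected agreement between two raters on a binary judgement."""
    n = both_yes + both_no + only_a_yes + only_b_yes
    observed = (both_yes + both_no) / n                      # raw agreement
    a_yes = (both_yes + only_a_yes) / n                      # rater A's 'yes' rate
    b_yes = (both_yes + only_b_yes) / n                      # rater B's 'yes' rate
    expected = a_yes * b_yes + (1 - a_yes) * (1 - b_yes)     # agreement expected by chance
    return (observed - expected) / (1 - expected)


# Hypothetical judgements on 100 adverse events: 75% raw agreement,
# but kappa is only about 0.43 once chance agreement is removed.
print(f"kappa = {cohens_kappa(both_yes=20, both_no=55, only_a_yes=15, only_b_yes=10):.2f}")
```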


These and other methodological issues have come to the fore in debates about the number of deaths due to adverse events, particularly after the headline-capturing claims that up to 98,000 Americans were dying each year from adverse events in hospital. The methodological arguments are too complex to summarize in their entirety here, but it is important to note that the figures have been challenged, and to give a flavour of the arguments. For instance, one research team argued, drawing on estimates of hospital death rates at the time of the study, that the patients who reportedly died from adverse events in the Harvard study were already severely ill and likely to die anyway (McDonald, Weiner and Hui, 2000). In a further challenge to the figures, Hayward and Hofer (2001) compared the findings with their own review of the standard of care of patients who died in hospital while having active, as opposed to palliative, care. They found that only 0.5% of patients would have lived longer than three months even if they had all had optimal care. So, yes, there were some deaths which were perhaps preventable, but the great majority of these people were already very ill and would have died anyway.


In a reply to McDonald and colleagues, Lucian Leape (2000) noted that some people seemed to have the impression that many of the deaths attributed to adverse events were minor incidents in the care of people who were severely ill and likely to die anyway. He pointed out that terminally ill patients had been excluded from the study, but agreed that there was a small group of patients (14% of deaths attributed to adverse events) who had been severely ill; for these patients the adverse event had tipped the balance. However, for the remaining 86%, the deficiencies in the care they received were a major factor leading to death:


Examples include a cerebrovascular accident in a patient with atrial flutter who was not treated with anticoagulants, overwhelming sepsis… in a patient with signs of intestinal obstruction that was untreated for 24 hours, and brain damage from hypotension due to blood loss from unrecognized rupture of the spleen.
(LEAPE, 2000)


The issue of the incidence of adverse events in patients who died, and their preventability, has been addressed more recently in a major study in The Netherlands (Zegers et al., 2009). The records of 7926 patients were reviewed across 21 hospitals: 3943 admissions of discharged patients and 3983 admissions of hospital patients who died in 2004. A large sub-sample of deceased hospital patients was included in order to determine the incidence of potentially preventable deaths more precisely than in previous international studies. Of these patients, 663 experienced a total of 744 adverse events, with 10% of patients suffering two or more adverse events. For deceased patients, the incidence of adverse events was 10.7% and the rate of preventable adverse events 5.2%; the incidence of adverse events was therefore significantly higher than for patients who survived to discharge. About half of the patients with preventable adverse events had a life expectancy of more than one year; the exact contribution of the adverse event to the death is not clear, but the implication is that life was shortened by some months for these people. The authors estimate that in 2004 around 1735 deaths (95% CI 1482–2032) in Dutch hospitals were potentially preventable. We should note that the terminology is slightly confusing here, in that the adverse events described are not the deaths themselves, but serious problems in patients’ care that led to harm which in turn hastened death. We should also remember that an adverse event near the end of life should not be assessed only by the extent to which death was hastened; contracting Clostridium difficile or sustaining a major adverse drug reaction in one’s final days may turn a potentially peaceful passing into a nightmare of pain and suffering.
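The uncertainty attached to an estimate such as the 1735 potentially preventable deaths can be illustrated with a simple proportion-and-confidence-interval calculation, sketched below. This is not the weighted, stratified estimation used by Zegers et al.: the case count is back-calculated from the 5.2% quoted above, and the national death total is an assumed figure included only to show how the scaling works.

```python
import math

# Crude illustration of a proportion, its 95% confidence interval, and scaling.
# Not the weighted, stratified estimation used in the Dutch study itself.

deceased_sample = 3983                    # admissions of patients who died (from the text)
preventable_rate = 0.052                  # 5.2% preventable adverse events among deceased patients
preventable_cases = round(deceased_sample * preventable_rate)   # back-calculated, ~207

p_hat = preventable_cases / deceased_sample
se = math.sqrt(p_hat * (1 - p_hat) / deceased_sample)
ci_low, ci_high = p_hat - 1.96 * se, p_hat + 1.96 * se
print(f"preventable-AE rate among deceased patients: {p_hat:.1%} (95% CI {ci_low:.1%}-{ci_high:.1%})")

assumed_national_hospital_deaths = 33_000   # assumed figure, purely illustrative
print(f"scaled estimate: ~{p_hat * assumed_national_hospital_deaths:,.0f} potentially preventable deaths "
      f"(95% CI {ci_low * assumed_national_hospital_deaths:,.0f}-"
      f"{ci_high * assumed_national_hospital_deaths:,.0f})")
```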


Hospital acquired infection


The power of the major adverse event studies is that they reveal the overall scale of harm to patients and also, to some extent, the nature and causes of harm. In the following sections we will examine two of the major types of harm, hospital acquired (nosocomial) infection and adverse drug events. We will then address the further important question of who is most vulnerable to harm.


Nosocomial infection, or healthcare associated infection (HCAI), is the commonest complication affecting hospitalized patients. In the Harvard Medical Practice Study, a single type of hospital acquired infection, surgical wound infection, was the second largest category of adverse events (Burke, 2003). Currently 5–10% of patients admitted to hospital in Britain and the United States acquire one or more infections; millions of people each year are affected. In a massive survey of over 75,000 patients in 2006, Smyth et al. found a prevalence rate of 7.59% in the United Kingdom (Smyth et al., 2008). In the United States, 90,000 deaths a year are attributed to these infections, which add an estimated $5 billion to the costs of care. Intensive care units sustain even higher rates, with approximately 30% of patients affected, and an impact on both morbidity and mortality (Vincent, 2003).


Four types account for about 80% of nosocomial infections: urinary tract infections, often associated with catheter use; bloodstream infections, often due to intravascular devices; surgical site infection; and pneumonia. Each of these four types may arise in more than one way and may be due to one or more different bacterial species. Intravenous lines are a particularly potent source of infection, and the chance of infection increases the longer the line remains in place. This is particularly disturbing, as lines inserted into patients are often not being used. In one study, a third of patients in a general hospital setting had intravenous lines or catheters inserted; one-third of the lines were not in active use; 20% of the cannulas inserted were never used at all; and overall 5% of the lines in use led to an unpleasant complication (Baker, Tweedale and Ellis, 2002). Not all infections are preventable by any means, with overcrowding and understaffing being important contributory factors (Clements et al., 2008). However, there is a consensus that many could be avoided by interventions such as the proper use of prophylactic antibiotics before surgery and hand hygiene campaigns amongst staff. Despite many studies and massive campaigns, there is still widespread failure to adhere to basic standards of hand hygiene, and it is hugely difficult to bring about change.


Infection control has for decades been seen as a public health problem and tackled by specialist doctors and infection control nurses, rather than linked with general quality improvement work. The emergence of the patient safety movement has energized and supported infection control, leading those involved to widen their remit to monitor antibiotic use as well as infection and to align themselves with the broader drive to make healthcare safer (Burke, 2003). Patient safety in turn may be able to learn much from the techniques of infection control, particularly its methods of surveillance, rapid response to problems and epidemiological analysis. Infection control requires, amongst other things, careful specification of the types of infection, coupled with both a rapid response to outbreaks and systematic, routine surveillance and monitoring.


Injection safety in developing countries


Patient safety, in the form described in this book, has primarily developed in advanced, relatively well resourced healthcare systems. However, the safety of healthcare is of huge concern in poorer countries, where infections are the leading cause of mortality. Deaths and morbidity from disease dominate, but the risks of infection from healthcare itself are terrifying. To get a sense of the scale of problems facing developing healthcare systems, we will look briefly at the question of injection safety, drawing on a comprehensive review by Yvan Hutin and colleagues (Hutin, Hauri and Armstrong, 2003). This review is one of a number of safety related programmes established by the World Health Organization, concerning such matters as the safety of blood products, chemical safety, vaccine and immunization safety, drug safety and medical device safety.


During the 20th century injection use increased tremendously, and injections are now probably the commonest healthcare procedure. Many injections given to provide treatment in developing countries are in fact unnecessary, as oral drug treatment would be equally or more effective. The belief in the power of injections, as opposed to pills, is one reason for the continuation of this practice. The dangers come from the reuse of syringes without sterilization, with syringes often just being rinsed in water between injections. This should not be seen as simply due to poor training or low standards; in a poor country everything is reused, simply because there is no alternative. Although lack of knowledge and poor standards play a part, the danger is hugely compounded by the basic lack of resources and the need to reuse any item of equipment if at all possible.


A huge proportion of injections are given unsafely and the numbers of people affected are staggering (Figure 4.1). In some countries in Southeast Asia, as many as 75% of injections are unsafe, carrying a massive risk of transmitting hepatitis, HIV and other blood-borne pathogens. Hutin and colleagues call for the risks of unsafe injections to be highlighted in all HIV programmes, for better management of sharps waste and for increased use of single-use syringes, which become unusable after the first injection has been given. They suggest that donors funding programmes of drug delivery should ensure that they include the cost of these syringes, or they may do more harm than good. WHO programmes, particularly in Burkina Faso, have demonstrated that major change can be achieved.



Figure 4.1 Injections given with sterile and reused equipment worldwide (Reproduced from British Medical Journal, Yvan J F Hutin, Anja M Hauri, Gregory L Armstrong, “Use of injections in healthcare settings worldwide, 2000: literature review and regional estimates”, 327, no. 7423, [1075], 2003, with permission from BMJ Publishing Group Ltd.).

