Surgical Errors



Some Basic Concepts and Terms





More than 20 million people undergo surgery every year in the United States alone. In the past, surgery could be extremely dangerous, in part because of the risks of the surgery itself (bleeding, infection), and in part because of the high risks of anesthesia. Because of major safety improvements in both of these areas, surgeries today are extremely safe, and anesthesia-related deaths are rare.1 Advances in surgery, anesthesia, and postoperative care have led to major declines in mortality in disorders generally treated by surgery, such as diseases of the gallbladder and appendix.2






Nevertheless, a number of troubling surgical safety issues persist. This chapter will deal with some of the more problematic issues directly related to surgery: anesthesia-related safety complications, wrong-site and wrong-patient surgery, retained foreign bodies, and surgical fires. I will end the chapter with a brief discussion of nonsurgical procedural safety.






Of course, surgery is not immune to medication errors (Chapter 4), diagnostic errors (Chapter 6), teamwork and communication errors (Chapter 9), and nosocomial infections, including surgical site infections (Chapter 10). These issues will be covered in their respective chapters, although some elements that are more specific to surgery—such as the use of the surgical checklist—will be touched on here. Early enthusiasm for the use of perioperative beta-blockers has waned after several studies in the past decade found surprisingly high rates of harm.3,4 Interested readers are referred to the latest American College of Cardiology/American Heart Association guidelines.5 For our purposes, suffice it to say that the general principles surrounding treating targeted patients with proper medications are likely to comport with similar discussions elsewhere in the book (e.g., venous thromboembolism prophylaxis, Chapter 11).






As with medication errors, in which problems from the intervention are grouped under a broad term (“adverse drug events”) that includes both errors and side effects (Chapter 4), some surgical complications occur despite impeccable care, while others are caused by errors. Surgeries account for a relatively high percentage of both adverse events and preventable adverse events. For example, one of the major chart review studies of adverse events (the Utah–Colorado study) found that 45% of all adverse events were in surgical patients; of these, 17% resulted from negligence and 17% led to permanent disability. Looked at another way, 3% of patients who underwent an operation suffered an adverse event, and half of these were preventable.6






The field of surgery has always taken safety extremely seriously. The first efforts to measure complications of care and approach them scientifically were developed by Boston surgeon Ernest Codman in the early twentieth century. Codman's "End-Result Hospital"—following every patient for evidence of errors in treatment and disseminating the results of this inquiry—was both revolutionary and highly controversial7–9 (Appendix III). Nevertheless, the American College of Surgeons soon began inspecting hospitals (in 1918), an effort that served as the forerunner of the Joint Commission (Chapter 20). More recently, the person arguably most responsible for putting safety on the radar screen of modern medicine was another surgeon, Dr. Lucian Leape.10,11 And the field of surgery has pioneered the use of comparative data (most prominently in the form of the National Surgical Quality Improvement Program12) and regional collaboratives (such as in the Northern New England Cardiovascular Disease Study Group13) to promote performance improvement.






Despite these remarkable contributions, surgery, like the rest of medicine, has traditionally approached safety as a matter of individual excellence: a complication was deemed to represent a failing by the surgeon. Our new focus on systems improvement is allowing surgery to make major advances in safety.






Volume–Outcome Relationships





Beginning with a 1979 study by Luft et al. that demonstrated a relationship between higher volumes and better outcomes for certain surgeries, a substantial literature has generally supported the commonsensical notion that “practice makes perfect” when it comes to procedures.14 The precise mechanism of this relationship has not been elucidated, but it seems to hold for both the volume of individual operators (e.g., the surgeon, the interventional cardiologist) and the institution (e.g., the hospital or surgicenter).15






Although much of the volume–outcome relationship probably owes to the fact that well-functioning teams take time to gel—learning to anticipate each other's reactions and preferences—there also seems to be a learning curve for procedural competence. One of the best-studied examples is that of laparoscopic cholecystectomy, a technique that essentially replaced the more dangerous and costly open cholecystectomy in the early 1990s. As "lap choley" emerged as the preferred procedure for gallbladder removal, tens of thousands of practicing surgeons needed to learn the procedure well after the completion of their formal training, providing an organic test of the volume–outcome curve.






The findings were sobering. One early study of lap choleys showed that injuries to the common bile duct dropped almost 20-fold once surgeons had at least a dozen cases under their belts.16 After that, the learning curve flattened, but not completely: the rate of common bile duct injury on the 30th case was still 10 times higher than the rate after 50 cases.






One can assume that most graduates of today’s surgical residencies are well trained in the techniques of laparoscopic surgery. But in the early days of a new procedure, patients have no such reassurance. A 1991 survey found that only 45% of 165 practicing surgeons who had participated in a 2-day practical course on laparoscopic cholecystectomy felt the workshop had left them adequately prepared to start performing the procedure. Yet three-quarters of these surgeons reported that they implemented the new procedure immediately after returning to their practices.17






Obviously, part of the solution to the volume–outcome and learning curve conundrums will lie in new training models, including the use of realistic simulation (Chapter 17). In fact, one study found that surgical residents who underwent simulation training until they reached a predefined level of proficiency were three times less likely to commit technical errors during laparoscopic cholecystectomy than those who received traditional training.18 In addition, some surgical and procedural specialties are now requiring minimum volumes for privileging and board certification, and a major coalition of payers (the Leapfrog Group) promotes high-volume centers as one of its safety standards under the banner of “evidence-based hospital referral” (Table 5-1). Certain states or insurers are insisting on minimum volumes or channeling patients to higher volume providers; institutions that achieve good outcomes and have high volumes are sometimes dubbed “Centers of Excellence.”







Table 5-1 The Leapfrog Group’s Volume Standards* 






Although such policies appear attractive at first, they are not without their own risks. First, patients may not be eager to travel long distances to receive care from high-volume providers or institutions. Second, volumes that are too high may actually compromise quality by overtaxing institutions or physicians. Finally, many of the procedures being discussed, such as cardiac or transplant surgery, are relatively lucrative. Losing them could threaten the economic viability of low-volume institutions, which often cross-subsidize nonprofitable services (care of the uninsured, trauma care) with profits from the well-reimbursed surgeries.






This is not to say that channeling patients to high-volume (or better yet, demonstrably safer or higher quality19) doctors and practices is a mistake, but rather that it is a complex maneuver that requires thoughtful consideration of both expected and unforeseen consequences.






Patient Safety in Anesthesia





Although it is often stated that the modern patient safety movement began in late 1999 with the publication of To Err Is Human,20 the field of anesthesia is a noteworthy exception. Anesthesia began focusing on safety a generation earlier, and its success story holds lessons for the rest of the patient safety field.






In 1972, a young engineer named Jeff Cooper began work at the Anesthesia Bioengineering Unit at Massachusetts General Hospital (MGH). Like Codman 60 years earlier, what he saw at MGH bothered him: mistakes were common, cover-ups were the norm, and systems to prevent errors were glaringly absent. Even worse, many procedures and environments appeared to be all but designed to promote errors. For example, he noticed that turning the dial clockwise increased the dose of anesthetic in some anesthesia machines, and decreased it in others. After delivering a lecture entitled "The Anesthesia Machine: An Accident Waiting to Happen," Cooper and his colleagues began looking at procedures and equipment through a human factors lens (Chapter 7), using the technique of "critical incident analysis" to explore all causative factors for mistakes.21–23






Around the same time, anesthesiology was in the midst of a malpractice crisis characterized by terrific acrimony and skyrocketing premiums. Other researchers, recognizing the possibility that there might be error patterns, began a careful review of settled malpractice cases for themes and lessons (“closed-case analysis”).24 And they found such patterns, in the form of poor machine design, lack of standardization, lax policies and procedures, poor education, and more.






The research and insights from Cooper’s work and the closed-case analyses were important, but they needed to be brought into the mainstream, particularly among physicians. Luckily, as so often happens, the right person emerged at the right time. Ellison “Jeep” Pierce assumed the presidency of the American Society of Anesthesiologists in 1983. Energized by the experience of a friend’s daughter who died under anesthesia during a routine dental procedure, Pierce conceived of a foundation to help support work to make care safer—in fact, he probably coined the term “patient safety” in founding the Anesthesia Patient Safety Foundation (APSF).25 APSF, working closely with other professional, healthcare, and industry groups, helped push the field forward, beginning by convincing caregivers that there was a real problem and that it was soluble with the right approach.26,27






What lessons from anesthesia are relevant to our broader efforts to improve overall patient safety?1,28,29 First, safety requires strong leadership, with a commitment to openness and a willingness to embrace change. Second, learning from past mistakes is an essential part of patient safety. In the case of anesthesia, the closed-case reviews led to key insights. Third, although technology is not the complete answer to safety, it must be a part of the answer. In anesthesia, the thoughtful application of oximetry, capnography, and automated blood pressure monitoring has been vital. Fourth, where applicable, the use of human factors engineering and forcing functions can markedly enhance safety (Chapter 7). For example, changing the anesthesia tubing so that the incorrect gases could not be hooked up was crucial; this was a far more effective maneuver than trying to educate or remind anesthesiologists about the possibility of mix-ups. Finally, anesthesia was in the throes of a malpractice crisis and had a number of highly visible errors reported in the media. Sparks like these are often necessary to disrupt the inertia and denial that can undermine so many safety efforts. In the 1980s, anesthesiologists paid exorbitant rates for malpractice insurance—among the highest in the medical profession. Now that errors causing patient harm are so unusual, today's rates fall in the midrange of all specialties, a good example of the "business case for safety."






Wrong-Site/Wrong-Patient Surgery





In 1995, Willie King, a 51-year-old diabetic man with severe peripheral vascular disease, checked into a hospital in Tampa, Florida, for amputation of a gangrenous right leg. The admitting clerk mistakenly entered into the computer system that Mr. King was there for a left below-the-knee amputation. An alert floor nurse caught the error after seeing a printout of the day’s operating room (OR) schedule; she called the OR to correct the mistake. A scrub nurse made a handwritten correction to the printed schedule, but the computer’s schedule was not changed. Since this computer schedule was the source of subsequent printed copies, copies of the incorrect schedule were distributed around the OR and hospital. King’s surgeon entered the OR, read the wrong procedure off one of the printed schedules, prepped the wrong leg, and then began to amputate it. The error was discovered partway through the surgery, too late to save the left leg. Of course, the gangrenous right leg still needed to be removed, and a few weeks later it was, leaving King a double amputee.






Events like these are so egregious that they have been dubbed “Never Events”—meaning that they should never occur under any circumstances (Appendix VI). And who could possibly disagree? While one major study estimated the incidence of wrong-site or wrong-patient surgery to be approximately 1 in every 100,000 operations,30 other studies—particularly those that also considered outpatient surgery and nonsurgical procedures—have given higher estimates.31,32 And in one survey of approximately 1000 hand surgeons, 20% admitted to having operated on the wrong site at least once in their career, and an additional 16% had prepared to operate on the wrong site but caught themselves just in time.33






When one hears of wrong-site or wrong-patient procedures, it is difficult to resist the instinct to assign blame, usually to the operating surgeon. Yet we know there must be something more at play than simply a careless surgeon or nurse. The answer, as usual, is Swiss cheese (Chapter 2) and bad systems. Appreciating this makes clear the need for a multidimensional approach aimed at preventing the inevitable human slips from causing terrible harm.






The Joint Commission has promoted the use of the Universal Protocol to prevent wrong-site and wrong-patient surgery and procedures (Table 5-2). In essence, the Protocol acknowledges that single solutions to this problem are destined to fail, and that robust fixes depend on multiple overlapping layers of protection. Several elements of the Universal Protocol merit further comment.







Table 5-2 The Joint Commission’s “Universal Protocol for Preventing Wrong-Site, Wrong-Procedure, and Wrong-Person Surgery”