Overview
In the late 1990s, as patients, reporters, and legislators began to appreciate the scope of the medical errors problem, the response was nearly Pavlovian: we need more reporting! This commonsensical appeal had roots in several places, including the knowledge that transparency often drives change, the positive experiences with reporting in the commercial aviation industry,1 the desire by many interested parties (patients, legislators, the media, healthcare leaders) to understand the dimensions of the safety problem, and the need of individual healthcare organizations to know which problems to work on.
I’ll begin this chapter by focusing on reporting within the walls of a healthcare delivery organization such as a hospital or clinic. I’ll then widen the lens to consider extra-institutional reporting systems. Further discussion on these issues, from somewhat different perspectives, can be found in Chapters 3, 20, and 22.
The systems to capture local reports are generally known as incident reporting (IR) systems. Incident reports come from frontline personnel (e.g., the nurse, pharmacist, or physician caring for a patient when a medication error occurred) rather than, say, from supervisors. From the perspective of those receiving the data, IR systems are passive forms of surveillance, relying on involved parties to choose to report. More active methods of surveillance, such as retrospective chart review, direct observation, and trigger tools, have already been addressed in Chapter 1, though I’ll have more to say about them later in this chapter.
Although IR systems capture only a fraction of incidents, they have the advantages of relatively low cost and the involvement of caregivers in the process of identifying important problems for the organization. Yet the experience with them has been disappointing—while well-organized IR systems can yield important insights, they can also waste substantial resources, drain provider goodwill, and divert attention from more important problems.2
I believe that the following realities should frame discussions of the role of reporting in patient safety:
- Errors occur one at a time, to individual patients—often already quite ill—scattered around hospitals, nursing homes, and doctors’ offices. This generates tremendous opportunity to cover up errors, and requires that providers be engaged in efforts to promote transparency.
- Because reporting errors takes time and can lead to shame and (particularly in the United States) legal liability, providers who choose to report need to be protected from unfair blame, public embarrassment, and legal risk.
- Reporting systems need to be easy to access and use, and reporting must yield palpable improvements. Busy caregivers are not likely to report if systems are burdensome to use or reports seem to disappear into the dark corners of a bureaucracy.
- Many different stakeholders need to hear about and learn from errors. However, doctors, nurses, hospital administrators, educators, researchers, regulators, legislators, the media, and patients all have different levels of understanding and may need to see very different types of reports. This diversity makes error reporting particularly challenging.
- Although most errors do reflect systems problems, some can be attributed to bad providers. The public—and those charged with defending the public’s interest, such as licensing and credentialing boards—has a legitimate need to learn of these cases and to take appropriate (including disciplinary) action (Chapter 19).
- Similarly, even when the problem is systemic and not one of “bad apples,” the public has a right to know about systems that are sufficiently unsafe that a reasonable person would hesitate before receiving care from them.
- Medical errors are so breathtakingly common that the admonition to “report everything” is silly. In the care of the average intensive care unit patient, 1.7 errors are made each day, and the average hospitalized patient experiences one medication error per day.3 A system that captured every error and near miss would quickly accumulate unmanageable mountains of data, require boatloads of analysts, and result in caregivers spending much of their time reporting and digesting the results instead of caring for patients.
Taken together, these “facts on the ground” mean that error reporting, while conceptually attractive, must be approached thoughtfully. IR systems need to be easy to use, nonpunitive, and staffed by people skilled at analyzing the data and putting them to use. The energy and resources invested in designing action plans that respond to IRs should match those invested in the IR system itself. After all, the goal of an IR system is not data collection but meaningful improvements in safety and quality.
There is yet another cautionary note to be sounded regarding IR systems: voluntary reporting systems cannot be used to derive rates—of errors, or harm, or anything, really.4 I know this because I periodically visit hospitals and look at their incident reporting trends. If the reports have gone up by, let’s say, 20% in the past year, they invariably tell me, “Look at these numbers. You see, we’ve established a culture of reporting, and people are confident that reports lead to action. We’re getting safer!”
That sounds great until I visit a comparable institution, but one whose IR volume has fallen recently. “This is great,” they always say. “We’ve had fewer errors!” The problem is obvious: I have absolutely no way of knowing which explanation is correct, and neither do you.
Given the limitations of IR systems, healthcare organizations need to employ other techniques to capture errors and identify risky situations.5,6 This chapter will cover a few of them, including failure mode and effects analysis and trigger tools. Errors and adverse events that are reported must be put to good use, such as by turning them into stories that are shared within organizations, in forums such as Morbidity and Mortality (M&M) conferences. Finally, errors that are of particular concern—often called sentinel events—must be analyzed in a way that rapidly generates the maximum amount of institutional learning and catalyzes the appropriate changes. The method of doing this is known as root cause analysis (RCA). The following sections discuss each of these issues and techniques.
General Characteristics of Reporting Systems
Error reports, whether filed on paper or through the Web, and whether routed to the hospital’s safety officer or to a federal regulator, can be divided into three main categories: anonymous, confidential, and open. In an anonymous report, no identifying information is asked of the reporter. Although anonymity encourages reporting, it also prevents follow-up questions from being asked and answered. In a confidential reporting system, the identity of the reporter is known but shielded from authorities such as regulators and representatives of the legal system (except in cases of clear professional misconduct or criminal acts). Such systems tend to capture more useful data than anonymous ones, because follow-up questions can be asked. The key to these systems, of course, is that reporters must trust that they are truly confidential. Finally, in open reporting systems, all people and places are publicly identified. These systems have a relatively poor track record in healthcare, because the potential for unwanted publicity and blame is strong and it is often easy for individuals to cover up errors (even with “mandatory” reporting).
Another distinguishing feature of reporting systems is the organizational entity that receives the reports. With that in mind, let’s first consider the local hospital system—the IR system—before expanding the discussion to systems that move reports to other entities beyond the clinical organization’s walls.
Hospital Incident Reporting Systems
Hospitals have long had IR systems, but prior to the patient safety movement these systems received little attention. Traditional IR systems relied on providers—nearly always nurses (most studies show that nurse reports outnumber physician reports by at least five to one7)—to fill out paper reports. The reports generally went to the hospital’s risk manager, whose main concern was often to limit his or her institution’s potential legal liability (Chapter 18). There was little emphasis on systems improvement, and dissemination of incidents to others in the system (other managers, caregivers, educators) was unusual. Most clinicians felt that reporting was a waste of time, and so few did it.
IR systems have become far more important in the past decade, but until recently they had been insufficiently studied. Based on a 2008 survey of the error reporting systems of 1600 U.S. hospitals, Farley et al. described four key components of effective systems (Table 14-1). Unfortunately, they found that only a minority of hospitals met these criteria.8 Specifically, only a small fraction of hospitals showed evidence of a safety culture that encouraged reporting, properly analyzed error reports, and effectively disseminated the results of these analyses.
[Table 14-1. Key components of effective hospital incident reporting systems (Farley et al.)]
These failures are not for lack of effort or, in many hospitals, lack of investment. In fact, many hospitals have invested heavily in their IR systems, mostly in the form of new computerized infrastructure. However, the resources and focus needed to change the culture around incident reporting and management, to properly analyze reports, and to create useful and durable action plans have been less forthcoming.2
That said, the computerized systems do have their advantages. For example, most systems now allow any provider to submit an incident report and to categorize it by error type (e.g., medication error, patient fall; Table 14-2) and level of harm (e.g., no harm, minimal harm, serious harm, death). In confidential systems, the reporter can be contacted electronically to provide additional detail if needed. Computerized systems also make it easy to create aggregate statistics about reports, although it is important to reemphasize that voluntary systems are incapable of providing accurate error rates.3
[Table 14-2. Incident report categories (error types)]
Probably most importantly, incident reports can be routed to the managers best positioned to take action or spot trends (unfortunately, not all computerized IR systems have this functionality). For example, when an error pertaining to the medical service is reported through the hospital’s computerized IR system at UCSF—my hospital—the system automatically sends an e-mail to me as chief of the service, as well as to the service’s head nurse, the hospital’s risk manager, the “category manager” (an appropriate individual is assigned to each item in Table 14-2), and the hospital’s director of patient safety. Each of us can review the error, discuss it (orally or within the IR system’s computerized environment), and document the action we took in response.
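To make this kind of routing concrete, here is a minimal sketch, in Python, of how a computerized IR system might map report categories to category managers and a standard notification list. It is illustrative only: the categories, recipient addresses, field names, and the route_report function are assumptions made for this sketch, not features of UCSF’s system or of any commercial IR product.

```python
# A minimal illustrative sketch of incident report routing, assuming a simple
# category-to-manager table; none of these names or addresses reflect an
# actual IR product or institution.
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical routing table: each report category (cf. Table 14-2) has an
# assigned "category manager"; some recipients see every report.
CATEGORY_MANAGERS = {
    "medication error": "pharmacy.manager@example.org",
    "patient fall": "falls.manager@example.org",
}
ALWAYS_NOTIFY = [
    "service.chief@example.org",            # e.g., chief of the clinical service
    "head.nurse@example.org",
    "risk.manager@example.org",
    "patient.safety.director@example.org",
]

@dataclass
class IncidentReport:
    category: str                            # error type, e.g., "medication error"
    harm_level: str                          # "no harm", "minimal", "serious", "death"
    description: str
    reporter_contact: Optional[str] = None   # retained only in confidential systems
    actions_taken: list = field(default_factory=list)

def route_report(report: IncidentReport) -> list:
    """Return the recipients who should be notified about this report."""
    recipients = list(ALWAYS_NOTIFY)
    manager = CATEGORY_MANAGERS.get(report.category)
    if manager:
        recipients.append(manager)
    return recipients

# Example: a no-harm medication error goes to the standard list plus the
# medication-error category manager.
report = IncidentReport("medication error", "no harm",
                        "Wrong dose dispensed; caught before administration.")
print(route_report(report))
```

The point of such a design is simply that each recipient can document the action taken against the same record, which is what turns a report into an auditable trail rather than a message that disappears.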
Although this represents great progress, it is important to appreciate the amount of time, skill, and energy all of these functions require. Many hospitals have built or purchased fancy IR systems, exhorted their staff to “report everything—errors, near misses, everything,” and then found themselves overwhelmed by thousands of reports. I’d much rather see a system that receives fewer reports but acts on them promptly and effectively than one whose far more numerous reports end up in the black hole of a hospital’s hard drive. As you can see, developing thoughtful ways to manage data overload—whether the goal is to prevent alert fatigue in a CPOE system (Chapter 13) or to handle thousands of error reports—is an overarching theme in the field of patient safety.
In an increasingly computerized environment and with the advent of trigger tools and other new methods of identifying cases of error and harm, some patient safety experts are starting to question the role of voluntary error reporting by frontline personnel. For example, some wonder whether every medication error and every fall really needs to be reported through an IR system. My answer is no. Perhaps we should instead sample common error categories in depth for short periods of time (e.g., a month at a time). I can envision a system in which January is Falls Month, February is Pressure Ulcer Month, and so on. I am confident that a hospital employing this strategy would learn everything it needed to know about its latent errors (Chapter 1) in these areas after such a sampling.9
On the other hand, organizations do need ways to disseminate the lessons they have learned from serious errors and to understand their error patterns in order to plan strategically (Chapter 22), to meet regulatory reporting requirements (Chapter 20), and to pursue their risk management and error disclosure responsibilities (Chapter 18). Given these realities, IR systems must be improved, not abolished. This is clearly an area sorely in need of fresh thinking.
The Aviation Safety Reporting System
We turn now to reports that “leave the building.” The pressure to build statewide or federal reporting systems grew in part (like so much of the patient safety field) from the experience of commercial aviation.1,10 As we consider these extra-institutional reporting systems, it is worth reflecting on whether this particular aviation analogy is apt.
On December 1, 1974, a Trans World Airlines (TWA) flight crashed into the side of a small mountain in Virginia, killing all 92 passengers and crew. As tragic as the crash was, the subsequent investigation added insult to injury, because the problems leading to the crash (poorly defined minimum altitudes on the Dulles Airport approach) were well known to many pilots but not widely disseminated. A year later, the Federal Aviation Administration (FAA) launched the Aviation Safety Reporting System (ASRS). Importantly, recognizing that airline personnel might be hesitant to report errors and near misses to their primary regulator, the FAA contracted with a third party (NASA) to run the system and broadcast its lessons to the industry.
There is general agreement that the ASRS is one of the main reasons for aviation’s remarkable safety record (a 10-fold decrease in fatalities over the past generation; Figure 9-1). Five attributes of the ASRS have helped create these successes: ease of reporting, confidentiality, third-party administration, timely analysis and feedback, and the possibility of regulatory action. The ASRS rules are straightforward: if anyone witnesses a near miss (note an important difference from healthcare: non-near misses in aviation—that is, crashes—don’t need a reporting system, because they appear on the news within minutes), that person must report it to ASRS within 10 days. The reporter is initially identified so that he or she can be contacted, if needed, by ASRS personnel; the identifying information is subsequently destroyed. In over 30 years of operation, there have been no reported confidentiality breaches of the system. Trained ASRS personnel analyze the reports for patterns, and they have several pathways to disseminate key information or trigger actions (including grounding airplanes or shutting down airports, if necessary).
Even as healthcare tries to emulate these successes, it is worth highlighting some key differences between its problems and those of commercial aviation. The biggest is the scale of the two enterprises and their errors.10 Notwithstanding its extraordinary effort to encourage reporting, the ASRS receives about 35,000 reports per year, across the entire U.S. commercial aviation system.11,12 If all errors and near misses were being reported in American healthcare, this would almost certainly result in more than 35,000 reports per day—over 10 million reports per year! My own 600-bed hospital generates 15,000 yearly reports, and (a) we don’t report everything, (b) we are one of 6000 hospitals in the country, and (c) you would need to add in the errors and adverse events from nonhospital facilities and ambulatory practices to come up with an overall estimate. There are simply many more things that go wrong in the healthcare system than in the aviation system. This makes prioritizing and managing error reports a much knottier problem for us than it is in aviation.
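The scale gap is easy to check with a rough back-of-the-envelope calculation using the figures cited above; the extrapolation that assumes every U.S. hospital reports at the rate of a single 600-bed hospital is purely illustrative.

```python
# Back-of-the-envelope check of the scale comparison, using figures cited in
# the text; the per-hospital extrapolation is an illustrative assumption.

ASRS_REPORTS_PER_YEAR = 35_000          # entire U.S. commercial aviation system

# "More than 35,000 reports per day" in healthcare would mean:
healthcare_reports_per_year = 35_000 * 365
print(f"{healthcare_reports_per_year:,}")   # 12,775,000, i.e., "over 10 million" per year

# Hospital reports alone, if all ~6000 U.S. hospitals reported at the rate of
# one 600-bed hospital (15,000 reports/year), and that still omits nonhospital
# facilities and ambulatory practices:
hospital_reports_per_year = 15_000 * 6_000
print(f"{hospital_reports_per_year:,}")     # 90,000,000

# How many times the ASRS's entire annual volume that would be:
print(hospital_reports_per_year / ASRS_REPORTS_PER_YEAR)   # ~2571, thousands of times larger
```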