Appendix I. Key Books, Reports, Series, and Web Sites on Patient Safety
The Institute of Medicine (IOM) Reports on Medical Errors and Healthcare Quality (From its “Quality Chasm” Series)
Appendix II. The AHRQ Patient Safety Network (AHRQ PSNET) Glossary of Selected Terms in Patient Safety
Active error (or active failure)—The terms active and latent as applied to errors were coined by James Reason. Active errors occur at the point of contact between a human and some aspect of a larger system (e.g., a human–machine interface). They are generally readily apparent (e.g., pushing an incorrect button, ignoring a warning light) and almost always involve someone at the frontline. Active failures are sometimes referred to as errors at the sharp end, figuratively referring to a scalpel. In other words, errors at the sharp end are noticed first because they are committed by the person closest to the patient. This person may literally be holding a scalpel (e.g., an orthopedist operating on the wrong leg) or figuratively be administering any kind of therapy (e.g., a nurse programming an intravenous pump) or performing any aspect of care. Latent errors (or latent conditions), in contrast, refer to less apparent failures of organization or design that contributed to the occurrence of errors or allowed them to cause harm to patients. To complete the metaphor, latent errors are those at the other end of the scalpel—the blunt end—referring to the many layers of the healthcare system that affect the person “holding” the scalpel.
Adverse drug event (ADE)—An adverse event (i.e., injury resulting from medical care) involving medication use. Examples include:
- Anaphylaxis to penicillin
- Major hemorrhage from heparin
- Aminoglycoside-induced renal failure
- Agranulocytosis from chloramphenicol
As with the more general term adverse event, the occurrence of an ADE does not necessarily indicate an error or poor quality of care. ADEs that involve an element of error (of either omission or commission) are often referred to as preventable ADEs. Medication errors that reached the patient but by good fortune did not cause any harm are often called potential ADEs. For instance, a serious allergic reaction to penicillin in a patient with no prior such history is an ADE, but so is the same reaction in a patient who has a known allergy history but receives penicillin due to a prescribing oversight. The former occurrence would count as an adverse drug reaction or nonpreventable ADE, while the latter would represent a preventable ADE. If a patient with a documented serious penicillin allergy received a penicillin-like antibiotic but happened not to react to it, this event would be characterized as a potential ADE.
An ameliorable ADE is one in which the patient experienced harm from a medication that, while not completely preventable, could have been mitigated. For instance, a patient taking a cholesterol-lowering agent (statin) may develop muscle pains and eventually progress to a more serious condition called rhabdomyolysis. Failure to periodically check a blood test that assesses muscle damage or failure to recognize this possible diagnosis in a patient taking statins who subsequently develops rhabdomyolysis would make this event an ameliorable ADE: harm from medical care that could have been lessened with earlier, appropriate management. Again, the initial development of some problem was not preventable, but the eventual harm that occurred need not have been so severe, hence the term ameliorable ADE.
Adverse event—Any injury resulting from medical care. Examples include:
- Pneumothorax from central venous catheter placement
- Anaphylaxis to penicillin
- Postoperative wound infection
- Hospital-acquired delirium (or “sundowning”) in elderly patients
Identifying something as an adverse event does not imply “error,” “negligence,” or poor quality care. It simply indicates that an undesirable clinical outcome resulted from some aspect of diagnosis or therapy, not an underlying disease process. Thus, pneumothorax from central venous catheter placement counts as an adverse event regardless of insertion technique. Similarly, a postoperative wound infection counts as an adverse event even if the operation proceeded with optimal adherence to sterile procedures, the patient received appropriate antibiotic prophylaxis in the perioperative setting, and so on. (See also “iatrogenic”).
Anchoring error (or bias)—Refers to the common cognitive trap of allowing first impressions to exert undue influence on the diagnostic process. Clinicians often latch on to features of a patient’s presentation that suggest a specific diagnosis. Often, this initial diagnostic impression will prove correct, hence the use of the phrase anchoring heuristic in some contexts, as it can be a useful rule of thumb to “always trust your first impressions.” However, in some cases, subsequent developments in the patient’s course will prove inconsistent with the first impression. Anchoring bias refers to the tendency to hold on to the initial diagnosis, even in the face of disconfirming evidence.
Authority gradient—Refers to the balance of decision-making power or the steepness of command hierarchy in a given situation. Members of a crew or organization with a domineering, overbearing, or dictatorial team leader experience a steep authority gradient. Expressing concerns, questioning, or even simply clarifying instructions would require considerable determination on the part of team members who perceive their input as devalued or frankly unwelcome. Most teams require some degree of authority gradient; otherwise roles are blurred and decisions cannot be made in a timely fashion. However, effective team leaders consciously establish a command hierarchy appropriate to the training and experience of team members.
Authority gradients may occur even when the notion of a team is less well defined. For instance, a pharmacist calling a physician to clarify an order may encounter a steep authority gradient, based on the tone of the physician’s voice or a lack of openness to input from the pharmacist. A confident, experienced pharmacist may nonetheless continue to raise legitimate concerns about an order, but other pharmacists might not.
Availability bias (or heuristic)—Refers to the tendency to assume, when judging probabilities or predicting outcomes, that the first possibility that comes to mind (i.e., the most cognitively “available” possibility) is also the most likely possibility. For instance, suppose a patient presents with intermittent episodes of very high blood pressure. Because episodic hypertension resembles textbook descriptions of pheochromocytoma, a memorable but uncommon endocrinologic tumor, this diagnosis may immediately come to mind. A clinician who infers from this immediate association that pheochromocytoma is the most likely diagnosis would be exhibiting availability bias. In addition to resemblance to classic descriptions of disease, personal experience can also trigger availability bias, as when the diagnosis underlying a recent patient’s presentation immediately comes to mind when any subsequent patient presents with similar symptoms. Particularly memorable cases may similarly exert undue influence in shaping diagnostic impressions.
Bayesian approach—Probabilistic reasoning in which test results (not just laboratory investigations but also history, physical exam, or any aspect of the diagnostic process) are combined with prior beliefs about the probability of a particular disease. One way of recognizing the need for a Bayesian approach is to recognize the difference between the performance of a test in a population and that in an individual. At the population level, we can say that a test has a sensitivity and specificity of, say, 90%—that is, 90% of patients with the condition of interest have a positive result and 90% of patients without the condition have a negative result. In practice, however, a clinician needs to attempt to predict whether an individual patient with a positive or negative result does or does not have the condition of interest. This prediction requires combining the observed test result not just with the known sensitivity and specificity but also with the chance the patient could have had the disease in the first place (based on demographic factors, findings on exam, or general clinical gestalt).
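To make the arithmetic concrete, the sketch below applies Bayes' theorem to the 90%-sensitive, 90%-specific test described above; the pretest probabilities of 5% and 50% are assumed values chosen only for illustration.

```python
def post_test_probability(prior, sensitivity, specificity, positive_result):
    """Combine a pretest probability with a test result via Bayes' theorem."""
    if positive_result:
        true_pos = prior * sensitivity                # diseased, test positive
        false_pos = (1 - prior) * (1 - specificity)   # healthy, test positive
        return true_pos / (true_pos + false_pos)
    false_neg = prior * (1 - sensitivity)             # diseased, test negative
    true_neg = (1 - prior) * specificity              # healthy, test negative
    return false_neg / (false_neg + true_neg)

# The same positive result on a 90%/90% test, at two different pretest probabilities:
print(round(post_test_probability(0.05, 0.90, 0.90, True), 2))  # 0.32
print(round(post_test_probability(0.50, 0.90, 0.90, True), 2))  # 0.90
```

A positive result thus leaves the low-risk patient with only about a one-in-three chance of actually having the condition, which is why the observed result must be combined with the pretest probability rather than read in isolation.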
Benchmark—A benchmark in healthcare refers to an attribute or achievement that serves as a standard for other providers or institutions to emulate. Benchmarks differ from other standard-of-care goals in that they derive from empiric data—specifically, performance or outcomes data. For example, a statewide survey might produce risk-adjusted 30-day rates for death or other major adverse outcomes. After adjusting for relevant clinical factors, the top 10% of hospitals can be identified in terms of particular outcome measures. These institutions would then provide benchmark data on these outcomes. For instance, one might benchmark "door-to-balloon" time at 90 minutes, based on the observation that the top-performing hospitals all had door-to-balloon times in this range. In regard to infection control, benchmarks would typically be derived from national or regional data on the rates of relevant nosocomial infections. The lowest 10% of these rates might be regarded as benchmarks for other institutions to emulate.
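As a rough sketch of how such a benchmark might be derived, one could take the best decile of a distribution of risk-adjusted rates; the rates below are invented for illustration only.

```python
import statistics

# Hypothetical risk-adjusted nosocomial infection rates (per 1,000 device-days),
# one value per hospital; invented numbers for illustration only.
rates = [1.2, 3.4, 0.8, 2.1, 5.0, 1.9, 0.9, 2.8, 4.2, 1.5]

# Lower infection rates are better, so the benchmark corresponds to the
# 10th percentile of the distribution (the best-performing decile).
benchmark = statistics.quantiles(rates, n=10)[0]
print(f"Benchmark: {benchmark:.2f} infections per 1,000 device-days")
```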
Blunt end—The blunt end refers to the many layers of the healthcare system not in direct contact with patients, but which influence the personnel and equipment at the sharp end who do contact patients. The blunt end thus consists of those who set policy, manage healthcare institutions, and design medical devices, and other people and forces, which, though removed in time and space from direct patient care, nonetheless affect how care is delivered. Thus, an error programming an intravenous pump would represent a problem at the sharp end, while the institution’s decision to use multiple different types of infusion pumps, making programming errors more likely, would represent a problem at the blunt end. The terminology of “sharp” and “blunt” ends corresponds roughly to active failures and latent conditions.
Checklist—Algorithmic listing of actions to be performed in a given clinical setting (e.g., advanced cardiac life support [ACLS] protocols for treating cardiac arrest) to ensure that, no matter how often performed by a given practitioner, no step will be forgotten. An analogy is often made to flight preparation in aviation, as pilots and air traffic controllers follow pretakeoff checklists regardless of how many times they have carried out the tasks involved.
Clinical decision support system (CDSS)—Any system designed to improve clinical decision making related to diagnostic or therapeutic processes of care. Typically a decision support system responds to “triggers” or “flags”—specific diagnoses, laboratory results, medication choices, or complex combinations of such parameters—and provides information or recommendations directly relevant to a specific patient encounter.
CDSSs address activities ranging from the selection of drugs (e.g., the optimal antibiotic choice given specific microbiologic data) or diagnostic tests to detailed support for optimal drug dosing and support for resolving diagnostic dilemmas. Structured antibiotic order forms represent a common example of paper-based CDSSs. Although such systems are still commonly encountered, many people equate CDSSs with computerized systems in which software algorithms generate patient-specific recommendations by matching characteristics, such as age, renal function, or allergy history, with rules in a computerized knowledge base.
The distinction between decision support and simple reminders can be unclear, but usually reminder systems are included as decision support if they involve patient-specific information. For instance, a generic reminder (e.g., “Did you obtain an allergy history?”) would not be considered decision support, but a warning (e.g., “This patient is allergic to codeine.”) that appears at the time of entering an order for codeine would be.
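This distinction can be expressed as a toy rule, sketched below with invented names and a minimal patient record: a generic reminder fires unconditionally, whereas decision support consults this particular patient's data.

```python
GENERIC_REMINDER = "Did you obtain an allergy history?"  # not patient specific

def allergy_warning(patient_allergies, ordered_drug):
    """Patient-specific warning at order entry; counts as decision support."""
    if ordered_drug.lower() in {a.lower() for a in patient_allergies}:
        return f"This patient is allergic to {ordered_drug}."
    return None

print(allergy_warning({"codeine"}, "codeine"))  # This patient is allergic to codeine.
```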
Close call—An event or situation that did not produce patient injury, but only because of chance. This good fortune might reflect robustness of the patient (e.g., a patient with penicillin allergy receives penicillin, but has no reaction) or a fortuitous, timely intervention (e.g., a nurse happens to realize that a physician wrote an order in the wrong chart). Such events have also been termed near miss incidents.
Competency—Having the necessary knowledge or technical skill to perform a given procedure within the bounds of success and failure rates deemed compatible with acceptable care. The medical education literature often refers to core competencies, which include not just technical skills with respect to procedures or medical knowledge but also competencies with respect to communicating with patients, collaborating with other members of the healthcare team, and acting as a manager or agent for change in the health system.
Complexity science (or complexity theory)—Provides an approach to understanding the behavior of systems that exhibit nonlinear dynamics, or the ways in which some adaptive systems produce novel behavior not expected from the properties of their individual components. Such behaviors emerge as a result of interactions between agents at a local level in the complex system and between the system and its environment.
Complexity theory differs importantly from systems thinking in its emphasis on the interaction between local systems and their environment (such as the larger system in which a given hospital or clinic operates). It is often tempting to ignore the larger environment as unchangeable and therefore outside the scope of quality improvement or patient safety activities. According to complexity theory, however, behavior within a hospital or clinic (e.g., noncompliance with a national practice guideline) can often be understood only by identifying interactions between local attributes and environmental factors.
Computerized provider order entry or computerized physician order entry (CPOE)—Refers to a computer-based system of ordering medications and often other tests. Physicians (or other providers) directly enter orders into a computer system that can have varying levels of sophistication. Basic CPOE ensures standardized, legible, complete orders, and thus primarily reduces errors caused by poor handwriting and ambiguous abbreviations.
Almost all CPOE systems offer some additional capabilities, which fall under the general rubric of CDSS. Typical CDSS features involve suggested default values for drug doses, routes of administration, or frequency. More sophisticated CDSSs can perform drug allergy checks (e.g., the user orders ceftriaxone and a warning flashes that the patient has a documented penicillin allergy), drug-laboratory value checks (e.g., initiating an order for gentamicin prompts the system to alert the user to the patient's most recent creatinine), drug–drug interaction checks, and so on. At the highest level of sophistication, CDSS prevents not only errors of commission (e.g., ordering a drug in excessive doses or in the setting of a serious allergy) but also errors of omission. For example, an alert may appear such as, "You have ordered heparin; would you like to order a partial thromboplastin time (PTT) in 6 hours?" Or, even more sophisticated: "The admitting diagnosis is hip fracture; would you like to order heparin for deep vein thrombosis (DVT) prophylaxis?" See also "Clinical decision support system."
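A minimal sketch of how such an omission check might be wired into order entry appears below; the rule table and its suggestions are invented for illustration and are not drawn from any actual CPOE product.

```python
# Invented "corollary order" rules, in the spirit of the heparin/PTT example above.
COROLLARY_ORDERS = {
    "heparin": "Would you like to order a partial thromboplastin time (PTT) in 6 hours?",
    "gentamicin": "Would you like to check the most recent serum creatinine?",
}

def suggest_corollary(ordered_drug):
    """Return a follow-up suggestion targeting errors of omission, if any."""
    return COROLLARY_ORDERS.get(ordered_drug.lower())

print(suggest_corollary("Heparin"))
```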
Confirmation bias—Refers to the tendency to focus on evidence that supports a working hypothesis, such as a diagnosis in clinical medicine, rather than to look for evidence that refutes it or provides greater support to an alternative diagnosis. Suppose that a 65-year-old man with a past history of angina presents to the emergency department with acute onset of shortness of breath. The physician immediately considers the possibility of cardiac ischemia, so asks the patient if he has experienced any chest pain. The patient replies affirmatively. Because the physician perceives this answer as confirming his working diagnosis, he does not ask if the chest pain was pleuritic in nature, which would decrease the likelihood of an acute coronary syndrome and increase the likelihood of pulmonary embolism (a reasonable alternative diagnosis for acute shortness of breath accompanied by chest pain). The physician then orders an ECG and cardiac troponin. The ECG shows nonspecific ST changes and the troponin returns slightly elevated.
Of course, ordering an ECG and testing cardiac enzymes is appropriate in the work-up of acute shortness of breath, especially when it is accompanied by chest pain and in a patient with known angina. The problem is that these tests may be misleading, since positive results are consistent not only with acute coronary syndrome but also with pulmonary embolism. To avoid confirmation bias in this case, the physician might have obtained an arterial blood gas or a D-dimer level. Abnormal results for either of these tests would be relatively unlikely to occur in a patient with an acute coronary syndrome (unless complicated by pulmonary edema), but likely to occur with pulmonary embolism. These results could be followed up by more direct testing for pulmonary embolism (e.g., with a helical CT scan of the chest), while normal results would allow the clinician to proceed with greater confidence down the road of investigating and managing cardiac ischemia.
This vignette was presented as if information were sought in sequence. In many cases, especially in acute care medicine, clinicians have the results of numerous tests in hand when they first meet a patient. The results of these tests often do not all suggest the same diagnosis. The appeal of accentuating confirmatory test results and ignoring nonconfirmatory ones is that it minimizes cognitive dissonance.
A related cognitive trap that may accompany confirmation bias and compound the possibility of error is “anchoring bias”—the tendency to stick with one’s first impressions, even in the face of significant disconfirming evidence.
Crew resource management (CRM)—Also called crisis resource management in some contexts (e.g., anesthesia), encompasses a range of approaches to training groups to function as teams, rather than as collections of individuals. Originally developed in aviation, CRM emphasizes the role of human factors—the effects of fatigue, expected or predictable perceptual errors (such as misreading monitors or mishearing instructions), as well as the impact of different management styles and organizational cultures in high-stress, high-risk environments. CRM training develops communication skills, fosters a more cohesive environment among team members, and creates an atmosphere in which junior personnel will feel free to speak up when they think that something is amiss. Some CRM programs emphasize education on the settings in which errors occur and the aspects of team decision making conducive to “trapping” errors before they cause harm. Other programs may provide more hands-on training involving simulated crisis scenarios followed by debriefing sessions in which participants assess their own and others’ behavior.
Critical incidents—A term made famous by a classic human factors study by Jeffrey Cooper of “anesthetic mishaps,” though the term had first been coined in the 1950s. Cooper and colleagues brought the technique of critical incident analysis to a wide audience in healthcare but followed the definition of the originator of the technique. They defined critical incidents as occurrences that are “significant or pivotal, in either a desirable or an undesirable way,” though Cooper and colleagues (and most others since) chose to focus on incidents that had potentially undesirable consequences. This concept is best understood in the context of the type of investigation that follows, which is very much in the style of root cause analysis. Thus, significant or pivotal means that there was significant potential for harm (or actual harm), but also that the event has the potential to reveal important hazards in the organization. In many ways, it embodies the expression in quality improvement circles that “every defect is a treasure.” In other words, these incidents, whether near misses or disasters in which significant harm occurred, provide valuable opportunities to learn about individual and organizational factors that can be remedied to prevent similar incidents in the future.
Decision support—Refers to any system for advising or providing guidance about a particular clinical decision at the point of care. For example, a copy of an algorithm for antibiotic selection in patients with community-acquired pneumonia would count as clinical decision support if made available at the point of care. Increasingly, decision support occurs via a computerized clinical information or order entry system. Computerized decision support includes any software employing a knowledge base designed to assist clinicians in decision making at the point of care.
Typically a decision support system responds to “triggers” or “flags”—specific diagnoses, laboratory results, medication choices, or complex combinations of such parameters—and provides information or recommendations directly relevant to a specific patient encounter. For instance, ordering an aminoglycoside for a patient with creatinine above a certain value might trigger a message suggesting a dose adjustment based on the patient’s decreased renal function.
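The aminoglycoside trigger just described might reduce to a rule like the sketch below, where the 2.0 mg/dL creatinine threshold is an assumed value for illustration.

```python
AMINOGLYCOSIDES = {"gentamicin", "tobramycin", "amikacin"}

def renal_dosing_flag(drug, creatinine_mg_dl):
    """Fire a patient-specific message when a trigger condition is met.
    The 2.0 mg/dL threshold is assumed for illustration."""
    if drug.lower() in AMINOGLYCOSIDES and creatinine_mg_dl > 2.0:
        return (f"Creatinine is {creatinine_mg_dl} mg/dL; consider adjusting "
                f"the {drug} dose for decreased renal function.")
    return None

print(renal_dosing_flag("gentamicin", 2.8))
```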
Error—An act of commission (doing something wrong) or omission (failing to do the right thing) that leads to an undesirable outcome or significant potential for such an outcome. For instance, ordering a medication for a patient with a documented allergy to that medication would be an act of commission. Failing to prescribe a proven medication with major benefits for an eligible patient (e.g., low-dose unfractionated heparin as venous thromboembolism prophylaxis for a patient after hip replacement surgery) would represent an error of omission.