(1)
Australian Patient Safety Foundation, Adelaide, SA, Australia
(2)
Department of Anaesthesia, Royal Adelaide Hospital, University of Adelaide, Adelaide, SA, Australia
(3)
Discipline of Surgery, University of Adelaide, Royal Adelaide Hospital, L5 Eleanor Harrald Building, North Terrace, 5000 Adelaide, SA, Australia
Abstract
Safety is not only a reasonable expectation of any member of the public entering a health facility or receiving healthcare; it is also usually economically sensible, because it reduces overall cost. The savings achieved by delivering a safe and reliable health service – by avoiding complications and mishaps – typically eclipse the outlay on such measures. In other words, making the health system safer is worthwhile not only for the staff and consumers involved, but also for the provider of that system or service, which is often the taxpayer or government.
Introduction
For many years there has been an assumption, not unreasonably, that when a patient is admitted to a hospital or undergoes treatment, the overall outcome will, wherever possible, be an improvement in their clinical condition. The notion that a patient might suffer more illness, or a worsening of their clinical condition, through entry to and intervention by the healthcare system was seldom considered, but it has received more attention recently, especially as standards and expectations have progressively risen. Until quite recently, when a patient was seriously harmed by a healthcare process rather than an underlying disease or injury, it was considered “deeply unfortunate and best forgotten” (Health Policy and Economic Research Unit and British Medical Association 2002). In many countries, the tort system existed to compensate the patient and “cover” the healthcare professionals involved. However, it has now become quite clear that large numbers of patients are harmed by healthcare (Kohn et al. 2000; Department of Health and An Organisation with a Memory 2000; Runciman and Moller 2001), that very few are compensated, and that the tort system is inefficient, costly, and “user unfriendly” for both plaintiff and defendant (Runciman et al. 2003). The term “safety and quality” is now widely used in relation to healthcare. But what is the relationship between safety and quality? It is clear that safety cannot be considered in isolation: an amount spent on safety cannot necessarily also be spent on quality or on productivity, and vice versa, yet all are inextricably linked. Indeed, safety and quality often profoundly improve productivity, although this is often not immediately visible.
Safety and Quality
Few would dispute that safety is one important dimension of the quality of healthcare (Fig. 8.1) (Minister of Health 2003). However, there are also safety implications for each of the other dimensions. Absent or restricted access to healthcare reduces safety; some effective treatments have risky side effects or unintended consequences; efficiency dictates that constraints be placed on staff numbers; and a lack of timeliness may mean a procedure is performed long after it was first needed – all of these may greatly increase the risk of adverse outcomes. Furthermore, patients who forego conventional treatment (such as blood transfusion) because it is not acceptable to them – for religious reasons, for example – face increased risk, and patients who have inappropriate treatment (over- or undertreatment) are placed in double jeopardy by the potential for harm arising from not having the right treatment as well as the risk of that treatment not being carried out properly. Safety, therefore, the “flip side” of risk, is an elusive, multifaceted property of the complex socio-technical system that constitutes healthcare and is difficult to measure. Indeed, only recently has agreement been reached on the definitions of a set of concepts and terms relevant to the study of safety (see Appendix I).
Fig. 8.1
The dimensions of quality and organizational layers of healthcare. (Reprinted from Runciman et al. (2007f). Copyright © 2007)
The Harm Caused by Healthcare
Estimates of at least some of the harm that arises from healthcare have been made by reviewing randomly selected medical records. Studies in several countries have suggested that around 10 % of admissions are associated with an adverse event (an event in which a patient is harmed by healthcare). Half of this harm is the cause of the admission and half occurs during the admission (Wilson et al. 1995). At least half of this harm is associated with surgery – problems with or failure of the procedure (18 %); hospital-acquired infection (16 %); perioperative problems such as pain, nausea, vomiting, ileus, or fever (6 %); hemorrhage or hematoma (5 %); problems with thromboembolism prophylaxis (3 %); or problems with a prosthesis or catheter (2 %). Other problems, which may affect surgical as well as other patients, make up a further one quarter of all adverse events (Runciman et al. 2002). These include wrong, delayed, or missed diagnosis or treatment (14 %), decompensation of a body system (11 %), hospital-acquired injury (e.g., burn, fall, pressure ulcer) (8 %), and medication or drug errors (7 %). Anesthesia accounts for 2 % of the problems (Wilson et al. 1995). There is now evidence that some problems manifest after discharge (Forster et al. 2004) and that 1 % of primary care consultations are for iatrogenic problems (Britt et al. 2002). These latter proportions might be expected to increase naturally with shorter hospital stays due to early discharge policies and day surgery.
The pattern of problems resulting in litigation is different. Firstly, in half of all cases that are settled, experts do not find that an adverse event has occurred at all – the problems usually arise from unrealistic expectations, nearly always associated with poor communication and rapport between doctor and patient. Litigation is often associated with “tightly coupled” events (i.e., events in which the link between cause and effect is obvious); litigation is likely for a fracture from a fall in hospital or for surgical damage to an internal organ or major vessel, but unlikely for deep vein thrombosis or failure to carry out an indicated procedure (Runciman and Moller 2001) (see Appendix I).
Some of the consequences of iatrogenic harm are severe. Nearly all the adverse event studies have yielded similar figures in this respect, with 1 in 50 admissions being associated with major or permanent disability or death (1.7 % with disability and 0.3 % with death) (Runciman et al. 2007a). This means that the overall number of deaths exceeds the road toll in most countries, although it has been pointed out that those who die in association with iatrogenic harm are nearly all elderly, frail, and infirm (Hayward and Hofer 2001).
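The “1 in 50” figure follows directly from the proportions quoted (a simple check using the text’s own numbers):

\[
1.7\%\ (\text{major or permanent disability}) + 0.3\%\ (\text{death}) = 2\% = \tfrac{1}{50}\ \text{of admissions}.
\]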
The costs of iatrogenic harm are high. Direct costs of managing the consequences amount to at least 5 % of the total amount spent on healthcare and indirect costs are at least the same again. Costs of litigation amount to a further 1–2 %, in spite of the fact that fewer than 1 in 100 patients harmed are compensated. The costs to the friends, relatives, carers, and employers in both human and financial terms are also very substantial. Overall, therefore, an amount approaching 1 in every 10 dollars spent on healthcare is consumed by the consequences of adverse events. As 9 % of gross domestic product (GDP) is spent on healthcare in most western countries (15 % in the USA), it may be estimated that as much as 1 % of GDP is consumed by iatrogenic problems (Runciman and Moller 2001).
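Putting these figures together gives a rough order-of-magnitude sketch (using the text’s own estimates rather than an independent costing):

\[
\underbrace{\sim 5\%}_{\text{direct costs}} + \underbrace{\sim 5\%}_{\text{indirect costs}} + \underbrace{1\text{–}2\%}_{\text{litigation}} \approx 10\%\ \text{of health expenditure},
\qquad
0.10 \times 9\%\ \text{of GDP} \approx 1\%\ \text{of GDP}.
\]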
Errors
Healthcare is a complex socio-technical system. Despite a recent emphasis on “systems thinking,” systems, devices, equipment, and machines cannot be considered in isolation from the humans who use them. Clinical medicine continues to consist of a series of interactions between patients, clinicians, and their environment, and this is where things can be “got right” or can go wrong. Healthcare will always remain a quintessentially human endeavor. Errors are unintentional, are an inevitable feature of the human condition, and are nearly always made by people acting in good faith. The challenge is not to prevent error, but to error-proof the complex system that makes up healthcare (Reason 1990).
In order to prevent, intercept, or ameliorate things that go wrong, it is necessary to understand how and why they go wrong. How to elucidate this will be dealt with later in this chapter. Figure 8.2 shows the stages of acquiring and understanding information, making a plan, carrying it out, and comparing the result with what was intended (Runciman et al. 2007b).
Fig. 8.2
Schematic representation of the types of error that can be made in making and carrying out a plan (1–10, see accompanying text) (Reprinted from Runciman et al. (2007f). Copyright © 2007)
Information in the world is acquired, filtered, and processed in order to generate a “schema” representing the situation, by comparing the incoming information with a version of the schema stored in the mind (knowledge). If there is a match, a pre-stored rule can be applied, if one is available, and an action generated. This can happen automatically and subconsciously in familiar situations; at the other end of the spectrum, it may be necessary to think from first principles. Either way, an action may be generated; this may or may not be the action intended, and the execution of the action may be flawed. Chance may degrade perception and cognitive function and may influence the outcome. Finally, the outcome is detected and compared with the intention. A mismatch indicates that an error has occurred and that correction or appropriate crisis management may be needed. An error may have contributions from each of the stages in Fig. 8.2 and may be defined as the making of a flawed plan and/or the failure to carry out a plan as intended.
Errors in Information
Faulty, patchy, or absent information “in the world” underlies many errors. Information imparted verbally, via written notes, or electronically is rarely comprehensive and may not be explicit. Handovers, referrals, and the results of investigations may omit important facts and may even be misleading. A major problem in clinical medicine lies in the difficulty of knowing what one does not know.
Errors in Acquisition of Knowledge
These also underlie many adverse events and may be due to a failure to take a proper history, a failure of patient recall, or a failure to examine the patient, to read the notes, or to review the patient. There have been many cases of abnormal results not being flagged, or being filed without being seen, sometimes with tragic consequences. Short of a comprehensive electronic system that records who acknowledges receipt of an abnormal report, there is no foolproof way of preventing this problem.
Errors in Perception
Information may be misheard or misread, especially with sound-alike and look-alike names, labels, and objects such as drug ampoules. If a wrong name or dose is perceived and “locked in” to one’s mind, it is highly likely that the erroneous sequence will proceed. Acronyms may be confusing and may not help.
Errors in Matching
Information must not only be “sensed” but “made sense of” on an ongoing basis. “Confirmation bias” or “fixation error” occurs when new information is (inappropriately) interpreted as fitting a preformed concept or schema. Such interpretations can be strongly influenced by both past and recent experience of similar situations. Once a situation has been identified or a diagnosis made, there is a tendency to fit new information to this schema, even when, in retrospect, that information suggested the schema was wrong. A diagnosis of psychosis rather than meningitis, or of musculoskeletal back pain rather than a ruptured aortic aneurysm, may well “stick” for quite a while in spite of evolving signs to the contrary.
Errors in Stored Schemata
Errors may be due to knowledge that was never acquired, has been forgotten, or is incorrect. A practitioner may, for example, prescribe a drug to which a patient is allergic or fail to link a trade name with its generic name.
Errors in Knowledge Stored as Rules
Clinical acumen comprises the development of a store of schemata (see section “Errors in Stored Schemata”) and of responses to them. These rules may be forgotten, may have been flawed in the first place, or may become flawed with the advent of new information. Much inappropriate treatment is due to practitioners relying on recall of well-used routines and algorithms when there is evidence in the literature that a new course of action would be more appropriate. The ready availability at the point of care of up-to-date care pathways, checklists, and recommendations endorsed by a relevant professional body is clearly an ideal that should be pursued. There is evidence that only 55 % of patients receive recommended, appropriate care, as judged by conformance with basic indicators for common acute or chronic conditions (McGlynn et al. 2003). Conformance with recommended care is less than 80 % for breast cancer and less than 60 % for a range of orthopedic conditions, colorectal cancer, and benign prostatic hypertrophy. Conformance for peptic ulcer disease is less than 40 % and for hip fracture less than 30 % (McGlynn et al. 2003). Rules may also be applied “blindly” in situations that require individualization and departure from the rule for that particular patient.
Slips and Lapses
A lapse occurs when an intended action is simply not carried out, often because of an interruption; a slip occurs when an inappropriate action sequence is enacted instead of an appropriate one, because the conscious monitoring of what one is doing is interrupted or degraded by attention to other tasks. Illness, distraction, fatigue, time pressure, and multitasking predispose to these sorts of errors, which typically occur when skilled people have to perform a number of complex tasks in close juxtaposition to one another and suffer an attentional “blink.”
Errors in Choice of Rule
A perfectly good rule may be applied in the wrong context. Use of a cephalosporin antibiotic as prophylaxis against surgical wound infection, for example, may be inappropriate for a patient who has a chronic leg ulcer colonized by an organism resistant to that antibiotic, or for a patient who is allergic to it.
Technical Errors
These involve the imperfect execution of intended actions. They are influenced by the skill, experience, and training of practitioners, the choice of technique and equipment, and the difficulty of the particular task – especially when practitioners have to operate at the limits of their capacity in relation to the requirements of the task, such as in an emergency. Technical errors are inevitable, just as some technically brilliant opening batsmen will inevitably be “out” first ball.
Deliberative Errors
If a situation is unfamiliar and a new solution is required, conscious thought or deliberation is necessary. Human beings generally try to avoid this; when they do deliberate, they can only proceed at a slow pace and are prone to error. An example would be having to convert a drug concentration from mg/l into mmol/ml and then calculate the requisite number of ml to be given based on body weight.
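A hypothetical worked example (the concentration, molar mass, dose, and body weight below are illustrative assumptions, not figures from the text) shows how many separate steps such a deliberation can involve:

\[
\frac{500\ \text{mg/l}}{250\ \text{g/mol}} = 2\ \text{mmol/l} = 0.002\ \text{mmol/ml};
\qquad
0.001\ \text{mmol/kg} \times 70\ \text{kg} = 0.07\ \text{mmol};
\qquad
\frac{0.07\ \text{mmol}}{0.002\ \text{mmol/ml}} = 35\ \text{ml}.
\]

Each step is trivial on its own, but chaining them under time pressure creates several opportunities for an error of a factor of ten, which is why independent double-checking of such calculations is commonly advocated.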
Chance may exert an effect at any of these levels (e.g., a sudden distraction) and may make the difference between a near miss and a fatal error. Certain types of error are more likely in certain situations. For example, slips and lapses may occur in routine situations when arousal is low, whereas technical errors or fixation errors are likely in rapidly evolving crises when arousal is high but there is cognitive overload. In crises, when time is strictly limited, there are advantages in reverting to precompiled responses, as there is little time to work things out from first principles. The emergency management of severe trauma (EMST) protocol is an example of an algorithm-based approach that may be useful in circumstances that are hectic and in which there is a chance of something being missed (Trauma Committee and Royal Australasian College of Surgeons 1992).
Violations
It is important to distinguish between errors and violations. Errors are unintended. Violations involve a deliberate departure from expected practice, usually with some sort of trade-off. They may be distinguished from errors in that one can deliberately change one’s practice or behavior to avoid a violation (Reason 1990). Corporate violations occur when organizations knowingly increase risk for staff and patients by, for example, rostering staff for excessive hours. Working for more than 16 h without a break produces a decrement in performance equivalent to that seen at a blood alcohol concentration of 0.05 %.
How Things Go Wrong (Runciman et al. 2007c): The System and the “Swiss-Cheese” Model (Reason 1990)
Overall, healthcare has not been designed as a system and does not behave like one – it has evolved haphazardly. Some bits work well, but coordination between the bits can be very disjointed. Specialization has led to compartmentalization and the development of “silos” with the potential for poor coordination of care between silos and poor continuity of care over time and phases of management (e.g., around discharge between the hospital specialist and primary caregiver).
Management and service silos often have their own agendas and communicate poorly with clinicians and patients. There may be conflicts between training and service delivery, and a failure to maintain standards of infrastructure and equipment, as a result of competing agendas and a clash of cultures. Silos compete for money, equipment, and staff and may compete for patients in “turf disputes.”
Hierarchies in healthcare lead to power gradients between consultants and junior staff and between doctors and nurses and may impede teamwork and coordination; those low in the “pecking order” can have powerful influences exerted upon them to comply with poor practices and not to complain. Poor supervision and inadequate teaching and training have been implicated in several scandals, even though these problems were evident to many of the staff at the time.
The tension between long shifts, which allow continuity of care but cause fatigue, and short shifts, which necessitate multiple handovers between teams who may not know complex patients very well, has received considerable attention recently. Poor communication can aggravate these problems, and patchy information transmission at handovers between shifts has been well documented.
When remuneration comes from more than one source, such as a salary from a hospital plus income from private practice, conflicts of allegiance develop, and supervision, teaching, and training may suffer. Fee for service provides incentives for overservicing, whereas capitation may provide incentives for less attention to be paid to complex issues requiring long-term strategies. Although there is often a good one-to-one relationship in the private sector, responsibility and accountability have become diffuse in the public sector, and it is often not clear who actually carries responsibility for ensuring that plans for complex patients are carried out and the results acted upon. In public hospitals, trainee staff scheduled to work in a new area may undergo little or no orientation with respect to the tasks and protocols they are expected to comply with and may have a strong sense of disempowerment.
There are many factors – environmental (i.e., beyond the control of the organization), organizational (such as scheduling and matching tasks to skills), personal (staff and patient knowledge, training, attitudes, and beliefs), and technical and situational (equipment, devices, drugs, information technology) – which can be substandard in this difficult environment and contribute to problems.
In a particular incident, problems in some of these areas may be found to have interacted in a particular chronological sequence, conspiring to turn a number of mundane everyday problems into a disastrous outcome for a patient. These breaches of the defenses can be envisaged as holes in slices of Swiss cheese which, if they line up on a particular occasion, allow an accident trajectory to pass through a number of layers of defense and harm a patient (Fig. 8.3) (Reason 2000).
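A simplified illustration of the logic of layered defenses (an idealized sketch, not a formula from the source) treats each of \(N\) defensive layers as being breached independently with probability \(p_i\), so that harm requires every layer to fail:

\[
P(\text{harm}) = \prod_{i=1}^{N} p_i .
\]

In reality the layers are not independent – latent organizational conditions can open holes in several layers at once, which is a central point of the model – but the multiplicative form shows why adding or strengthening even a single layer of defense can markedly reduce the likelihood of an accident trajectory reaching the patient.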
What to Do When Things Go Wrong
The complexity of healthcare means that it is inevitable that things will go wrong quite frequently. The vulnerabilities of patients dictate that some of these failures will result in serious harm. Major disruptions to working patterns, delays, and inconvenience are also common. It is important, when things have gone wrong, not to simply shrug them off, but to take steps to look after the patients affected – even if all they have suffered is inconvenience – and to try to ensure the same thing does not happen again. Adequate support for the patient and their family is clearly most important when serious harm or death has resulted.
Looking After the People (Runciman et al. 2007d)
It is vital to take active steps in this respect, both for the well-being of the patient (the first victim) and his or her friends and relatives, and for that of any staff members involved (the second and third victims). Iatrogenic harm is analogous to a head injury: the primary harm is a fait accompli, but the secondary harm is in the hands of the medical team, who must swing into action immediately to minimize it. Failure to inform anyone who has been harmed properly, promptly, and empathetically, to say sorry, and to promise to try to prevent a recurrence is a prime driving force behind litigation.
The immediate responses must be to ensure that the patient is being cared for (if alive) and that no one else is likely to be harmed, to record what happened and what has to be done, and to plan the initial contact with the patient and/or family and friends. Plans should be made immediately to support the first, second, and subsequent victims. Records must be as close to contemporaneous as possible; if made later, the actual date and time of writing should be recorded. No record should be deleted or altered, and no amendments should be made without full dating and signing. Notes should be confined to facts, and opinions should be avoided, particularly if they could be construed as being critical of anyone.

Breaking bad news should ideally not be done on your own. If the nature of your practice means that this is a rare event, it is helpful to have someone more experienced in this area present, who may do most of the talking. This meeting, and subsequent ones, should be held in an appropriate, private, nonthreatening environment, and whoever the patient and/or family would like to be present should be invited. Ensure that detailed protocols for the planning and conduct of the initial and subsequent meetings are available; it may be useful to scan these as a checklist before proceeding. With respect to the conduct of the meetings, good communication, trust, empathy, respect, and listening to and soliciting questions are all important, as are cultural sensitivity and recognizing and dealing with emotional responses. The problem that has occurred should be described and the decisions leading up to it briefly outlined. An expression of regret (saying sorry, without admitting liability) is highly desirable. The rationale for the treatment or procedure should be gone over again, with facts about what went wrong, what immediate steps were taken, and what is planned.

It is most important that support be provided to the patient and/or family, and it is very helpful to provide them with your own contact number or, if that is not possible, with that of a senior person whom they can contact if they have any further questions. Writing questions down to ask later can be helpful. For problems with ongoing sequelae, it is ideal to meet at regular intervals, possibly even daily or more often, until the situation has stabilized.