7: Human error and systems thinking

The lessons of major accidents


Our understanding of how the events preceding a disaster unfold has been greatly expanded in the last 20 years by the careful examination of a number of high profile accidents (Boxes 7.1 and 7.2). The brief summaries of major accidents, and the account of the Columbia Space Shuttle accident, allow us to reflect on the many ways in which failure can occur and the complexity of the story that may unfold during a serious investigation. Human beings have the opportunity to contribute to an accident at many different points in the process of production and operation. Problems and failures may occur in the design, testing and implementation of a new system, and in its maintenance and operation. Technical failures, important though they can be, often play a relatively minor part. Looking at other industries, although they are often very different from healthcare, helps us understand the conceptual landscape and some of the practicalities of accident investigation.



BOX 7.1 Major disasters involving human error


Chernobyl (April 1986)


Chernobyl’s 1000 MW Reactor No. 4 exploded, releasing radioactivity over much of Europe. Although the causes have been much debated since, a Soviet investigation team admitted ‘deliberate, systematic and numerous violations’ of safety procedures.


Piper Alpha (July 1988)


A major explosion on an oil rig resulted in a fire and the deaths of 167 people. The Cullen enquiry (1990) found a host of technical and organizational causes rooted in the culture, structure and procedures of Occidental Petroleum. The maintenance error that led to the initial leak was the result of inexperience, poor maintenance procedures, and deficient learning mechanisms.


Space Shuttle Challenger (January 1986)


An explosion shortly after lift-off killed all the astronauts on board. An O-ring seal on one of the solid rocket boosters split after lift-off, releasing a jet of ignited fuel. The causes of the defective O-ring involved a rigid organizational mindset, conflicts between safety and keeping on schedule, and the effects of fatigue on decision making.


Herald of Free Enterprise (March 1987)


The roll-on-roll-off ferry sank in shallow water off Zeebrugge, Belgium, killing 189 passengers and crew. The enquiry highlighted the commercial pressures in the ferry business and the friction between ship and shore management that led to safety lessons not being heeded. The company was found to be ‘infected with the disease of sloppiness’.


Paddington Rail Accident (October 1999)


31 people died when a train went through a red light onto the main up-line from Paddington, where it collided head-on with an express approaching the station. The enquiry identified failures in the training of drivers, a serious and persistent failure to examine reported poor signal visibility, a safety culture that was slack and less than adequate, and significant failures of communication in the various organizations.


(THIS ARTICLE WAS PUBLISHED IN HUMAN FACTORS IN SAFETY CRITICAL SYSTEMS, LUCAS D, “CAUSES OF HUMAN ERROR”. 38–39, COPYRIGHT ELSEVIER. 1997)



BOX 7.2 The loss of Space Shuttle Columbia


The Columbia Accident Investigation Board’s independent investigation into the loss on 1 February 2003 of the Space Shuttle Columbia and its seven-member crew lasted nearly seven months, involving a staff of more than 120, along with some 400 NASA engineers supporting the Board’s 13 members. Investigators examined more than 30 000 documents, conducted more than 200 formal interviews, heard testimony from dozens of expert witnesses, and reviewed more than 3000 inputs from the general public. In addition, more than 25 000 searchers combed vast stretches of the Western United States to retrieve the spacecraft’s debris. The Board recognized early on that the accident was probably not an anomalous, random event, but likely to be rooted to some degree in NASA’s history and the human space flight programme’s culture. The Board’s conviction regarding the importance of these factors strengthened as the investigation progressed, with the result that this report placed as much weight on these causal factors as on the more easily understood and corrected physical cause of the accident.


The physical cause of the loss of Columbia and its crew was a breach in the Thermal Protection System on the leading edge of the left wing, caused by a piece of insulating foam which separated from the External Tank at 81.7 seconds after launch and struck the wing. During re-entry this breach in the Thermal Protection System allowed superheated air to penetrate through the leading edge insulation and progressively melt the aluminium structure of the left wing, resulting in a weakening of the structure until increasing aerodynamic forces caused loss of control, failure of the wing, and break-up of the Orbiter.


The organizational causes of this accident are rooted in the Space Shuttle Programme’s history and culture, including the original compromises that were required to gain approval for the Shuttle, subsequent years of resource constraints, fluctuating priorities, schedule pressures, mischaracterization of the Shuttle as operational rather than developmental, and lack of an agreed national vision for human space flight. Cultural traits and organizational practices detrimental to safety were allowed to develop, including: reliance on past success as a substitute for sound engineering practices; organizational barriers that prevented effective communication of critical safety information and stifled professional differences of opinion; lack of integrated management across programme elements; and the evolution of an informal chain of command and decision-making processes that operated outside the organization’s rules.


(ADAPTED FROM US NATIONAL AERONAUTICS AND SPACE ADMINISTRATION, 2003, www.nasa.gov)


The most obvious errors and failures are usually those that are the immediate causes of an accident, such as a train driver going through a red light or a doctor picking up the wrong syringe and injecting a fatal drug. These failures are mostly unintentional, though occasionally they are deliberate, if misguided, attempts to retrieve a dangerous situation. Some of the ‘violations of procedure’ at Chernobyl were in fact attempts to use unorthodox methods to prevent disaster. Attempts to control an escalating crisis can make matters worse, as when police officers believed they needed to contain rioting football fans who were in fact trying to escape from a fire. Problems may also occur in the management of escape and emergency procedures, as when train passengers were unable to escape from carriages after the Paddington crash.


The immediate causes described above are the result of actions, or omissions, by people at the scene. However, other factors further back in the causal chain can also play a part in the genesis of an accident. These ‘latent conditions’, as they are often termed, lay the foundations for accidents in the sense that they create the conditions in which errors and failures can occur (Reason, 1997). This places the operators at the sharp end in an invidious position, as James Reason eloquently explains:


Rather than being the instigators of an accident, operators tend to be the inheritors of system defects … their part is usually that of adding the final garnish to a lethal brew whose ingredients have already been long in the cooking.
(REASON, 1990)


The accidents described (Boxes 7.1 and 7.2) allude to poor training, problems with scheduling, conflicts between safety and profit, communication failures, failure to address known safety problems and general sloppiness of management and procedures. Some of these failures may have been known at the time, in that communication failures between management and supervisors may have been a longstanding and obvious problem. However, latent conditions may also be created by decisions which may have been perfectly reasonable at the time, but in retrospect are seen to have contributed to an accident. For instance, the training budget for maintenance workers may have been cut to avoid staff redundancies. In any organization there are always pressures to reduce training, eliminate waste, act quickly to keep on schedule and so on. Safety margins are eroded bit by bit, sometimes without anyone noticing, eventually leading to an accident.


Another feature of these explanations for accidents, especially the more recent ones, is the reference to safety culture and organizational culture. The safety culture of a train company, for instance, is described as ‘slack and less than adequate’. The Columbia investigation refers to a number of ‘cultural traits and organizational practices detrimental to safety’, such as reliance on past success rather than formal testing, barriers to passing on safety information, stifling of dissenting voices and informal decisions that bypassed organizational rules and procedures. These are all broadly speaking cultural, in that they refer to or are embedded in the norms, attitudes and values of the organizations concerned.


Safety culture is hard to define precisely but may become more tangible when one reflects on one’s own experience of organizations. In some hospital wards, for instance, the atmosphere may be friendly and cheerful, but it is clear that there is little tolerance for poor practice and the staff are uniformly conscientious and careful. In contrast, others develop a kind of sub-culture in which sloppy practices are tolerated, risks are run and potentially dangerous practices allowed to develop. These cultural patterns develop slowly but erode safety and morale. Sometimes these features of the ward or organization are ascribed to the personalities of the people working there, who are viewed as slapdash, careless and unprofessional. The use of the term culture, however, points to the powerful influence of social forces in moulding behaviour; we are all more malleable than we like to think and to some extent develop good or bad habits according to the prevailing ethos around us.


We should also note that major accidents in high hazard industries are often the stimulus for wide ranging safety improvements. For instance, the enquiry into the Piper Alpha oil disaster led to a host of recommendations and the implementation of a number of risk reduction strategies, which covered the whole industry and addressed a wide range of issues. These included the setting up of a single regulatory body for offshore safety, the relocation of pipeline emergency shutdown valves, the provision of temporary safe refuges for oil workers, new evacuation procedures and requirements for emergency safety training (Reason, 1990; Vincent, Adams and Stanhope, 1998).


Finally, consider the resources that have been put into understanding these accidents. In the case of Columbia, hundreds of people were involved in an intense investigation of all aspects of NASA’s functioning. Certainly these accidents were all tragedies; many people died unnecessarily, there was a great deal at stake for the organizations concerned, and there were enormous political and media pressures to contend with. Unnecessary deaths in healthcare, in comparison, receive relatively little attention and are only occasionally the subject of major enquiries. Large sums of money are spent on safety measures on the roads and railways; relatively little, again, is spent in healthcare. Patient safety is, thankfully for staff and patients alike, now firmly on the healthcare agenda in many countries. But the resources for keeping patients safe are still pretty minimal.


Is healthcare like other industries?


Aviation, nuclear power, chemical and petroleum industries are, like healthcare, hazardous activities carried out in large, complex organizations by, for the most part, dedicated and highly trained people. Commercial, political, social and humanitarian pressures have compelled these industries to raise their game and make sustained efforts to improve and maintain safety. Healthcare, in contrast, has relied on the intrinsic motivation and professionalism of clinical and managerial staff which, while vital, is not sufficient to ensure safety. Hearing other people working in dangerous environments talk about how they treat safety as something to be discussed, analysed, managed and resourced tells us that safety is not just a by-product of people doing their best, but a far more complex and elusive phenomenon.


We should, however, be cautious about drawing parallels between healthcare and other industries. The high technology monitoring and vigilance of anaesthetists and the work of pilots in commercial aviation are similar in some respects, but the work of surgeons and pilots is very different. Emergency medicine may find better models and parallels in the military or in fire fighting than in aviation, and so on. The easy equation of the work of doctors and pilots has certainly been overstated, even though many useful ideas and practices have transferred from aviation to medicine. For instance, simulation and team training in anaesthesia and other specialties was strongly influenced by crew resource management in aviation. However, surgical team training has to be grounded in the particular tasks and challenges faced by surgical teams. We cannot just import aviation team training wholesale. Aviation acts as a motivator and source of ideas, but the actual training has to be developed and tested within the healthcare setting.


Differences between healthcare and other industries


What differences can be identified between healthcare and other industries? First, healthcare encompasses an extraordinarily diverse set of activities: the mostly routine, but sometimes highly unpredictable and hazardous world of surgery; primary care, where patients may have relationships with their doctors over many years; the treatment of acute psychosis, requiring rapid response and considerable tolerance of bizarre behaviour; some highly organized and ultrasafe processes, such as radiotherapy or the management of blood products; and the inherently unpredictable, constantly changing environment of emergency medicine. To this list we can add hospital medicine, care in the community, patients who monitor and treat their own condition and, by far the most important in poorer countries, care given in people’s homes. Even with the most cursory glance at the diversity of healthcare, the easy parallels with the comparatively predictable high-hazard industries, with their relatively limited set of activities, begin to break down.


Work in many hazardous industries, such as nuclear power, is, ideally, routine and predictable. Emergencies and departures from usual practice are unusual and to be avoided. Many aspects of healthcare are also largely routine and would, for the most part, be much better organized on a production line basis. Much of the care of chronic conditions, such as asthma and diabetes, is also routine and predictable, which is not to say that the people suffering from these conditions should be treated in a routine standardized manner. However, in some areas, healthcare staff face very high levels of uncertainty. In hospital medicine, for example, the patient’s disease may be masked, difficult to diagnose, the results of investigations not clear cut, the treatment complicated by multiple comorbidities and so on. Here, a tolerance for uncertainty on the part of the staff, and indeed the patient, is vital. The nature of the work is very different from most industrial settings.


A related issue is that pilots and nuclear power plant operators spend most of their time performing routine control and monitoring activities, rather than actually doing things. For the most part the plane or the plant runs itself, and the pilot or operator is simply checking and watching. Pilots do, of course, take over manual control and need to be highly skilled, but actual ‘hands on’ work is a relatively small part of their job (Reason, 1997). In contrast, much of healthcare work is very ‘hands on’ and, in consequence, much more liable to error. The most routine tasks, such as putting up drips or inserting lines to deliver drugs, require skill and carry an element of risk. Finally, and most obviously, passengers in trains and planes are generally in reasonable health. Many patients are very young, very old, very sick or very disturbed, and in different ways vulnerable to even small problems in their care.


The organization of safety in healthcare and other industries


As well as comparing specific work activities, we can also consider more general organizational similarities and differences. David Gaba (2000) has identified a number of ways in which the approach to safety in healthcare differs from that in other safety-critical industries. First, most high-risk industries are very centralized, with a clear control structure; healthcare, even in national systems such as England’s, is fragmented and decentralized in comparison. This makes it very difficult to regulate and standardize equipment and basic procedures; standardizing the design of infusion pumps, for instance, is highly desirable but very difficult to achieve in practice. Second, other industries put much more emphasis on standardizing both the training and the work process. Rene Amalberti (2001) has pointed out that it is a mark of the success and safety of commercial aviation that we do not worry about who the pilot is on a particular flight; we assume that they are, in his phrase, ‘equivalent actors’, who are interchangeable. This is not an insult, but a compliment to their training and professionalism. In healthcare the autonomy of the individual physician, while absolutely necessary at a clinical level, can also be a threat to safety (Gaba, 2000; Amalberti et al., 2005). If nurses, for instance, are constantly responding to the different practices of senior physicians in intensive care, unnecessary variability and potential for error are introduced.


Third, safe organizations devote a great deal of attention and resources to ensuring that workers have the necessary preparation and skills for the job; medical school is a long and intensive training, but a young doctor will still arrive on a new ward and be expected to pick up local procedures informally – sometimes with catastrophic consequences, as we will see in the next chapter. Finally, Gaba points out that healthcare is far less regulated than other industries. In many countries there is a host of regulatory bodies, each with responsibility for some aspect of education, training or clinical practice. However, regulation still has very little day-to-day effect on clinical practice. All of these issues are complex and we will return to many of them later in the book. For now, however, it is sufficient to note that there are many differences, as well as some similarities, between healthcare and other industries, in both activity and organization.


What is error?


I kept my tea in the right hand side of a tea caddy for some months and when that was finished kept it in the left, but I always for a week took off the cover of the right hand side, though my hand would sometimes vibrate. Seeing no tea brought back memory.
(CHARLES DARWIN NOTEBOOKS C217 QUOTED IN BROWNE, 2003)


Patient safety is beset by difficulties with terminology, and the most intractable problems occur when the term error is used. For instance, you might think that it would be relatively easy to define the term ‘prescribing error’. Surely, either a drug is prescribed correctly or it is not? Yet achieving a consensus on this term required a full study and several iterations of definitions amongst a group of clinicians, and still left room for disagreement (Dean, Barber and Schachter, 2000). Such definitional and classification problems are longstanding and certainly not confined to healthcare. Regrettably, we are not going to resolve the problems here. However, we can at least draw some distinctions and show the different ways that error is defined and discussed. Hopefully this will clear some of the fog that envelops the term and allow us to discern the various uses and misuses in the patient safety literature.


Defining error


In everyday life, recognizing error seems quite straightforward, though admitting it may be harder. My own daily life is accompanied by a plethora of slips, lapses of memory and other ‘senior moments’, in the charming American phrase, that are often the subject of critical comment from those around me. (How can you have forgotten already?). Immediate slips, such as Darwin’s example shown above, are quickly recognized. Other errors may only be recognized long after they occur. You may only realize you took a wrong turning some time later when it becomes clear that you are irretrievably lost. Some errors, such as marrying the wrong person, may only become apparent years later. An important common theme running through all these examples is that an action is only recognized as an error after the event. Human error is a judgement made in hindsight (Woods and Cook, 2002). There is no special class of things we do or don’t do that we can designate as errors; it is just that some of the things we do turn out to have undesirable or unwanted consequences. This does not mean that we cannot study error or examine how our otherwise efficient brains lead us astray in some circumstances, but it does suggest that there will not be specific cognitive mechanisms to explain error that are different from those that explain other human thinking and behaviour.


Eric Hollnagel (1998) points out that the term error has historically been used in three different senses: as a cause of something (plane crash due to human error), as the action or event itself (giving the wrong drug) or as the outcome of an action (the death of a patient). The distinctions are not absolute in that many uses of the term involve both cause and consequence to different degrees, but they do have a very different emphasis. For instance, the UK National Patient Safety Agency has found that patients equate ‘medical error’ with a preventable adverse outcome for the patient. Terms like ‘adverse event’, although technically much clearer, just seem like an evasion or a way of masking the fact that someone was responsible.


The most precise definition of error, and most in accord with everyday usage, is one that ties it to observable behaviours and actions. As a working definition, Senders and Moray (1991) proposed that an error means that something has been done which:



  • was not desired by a set of rules or an external observer;
  • led the task or system outside acceptable limits;
  • was not intended by the actor.

This definition of error, and other similar ones (Hollnagel, 1998), imply a set of criteria for defining an error. First, there must be a set of rules or standards, either explicitly defined or at least implied and accepted in that environment; second, there must be some kind of failure or ‘performance shortfall’; third, the person involved did not intend this and must, at least potentially, have been able to act in a different way. All three of these criteria can be challenged, or at least prove difficult to pin down in practice. Much clinical medicine, for instance, is inherently uncertain and there are frequently no guidelines or protocols to guide treatment. In addition, the failure is not necessarily easy to identify; it is certainly not always clear, at least at the time, when a diagnosis is wrong or at what point blood levels of a drug become dangerously high. Finally, the notion of intention, and in theory at least being able to act differently, is challenged by the fact that people’s behaviour is often influenced by factors, such as fatigue or peer pressure, which they may not be aware of and have little control over. So, while the working definition is reasonable, we should be aware of its limitations and the difficulties of applying it in practice.


Classifying errors


Classifications of error can be approached from several different perspectives. An error can be described in terms of the behaviour involved, the underlying psychological processes, and the factors that contributed to it. Giving the wrong drug, for instance, can be classified in terms of the behaviour (the act of giving the drug), in psychological terms as a slip (discussed below) and be due, at least in part, to fatigue. To have any hope of a coherent classification system, these distinctions have to be kept firmly in mind, as some schemes developed in healthcare mix these perspectives together indiscriminately.


Human factors experts working in high-risk industries often have to estimate the likelihood of accidents occurring when preparing a ‘safety case’ to persuade the regulator that all reasonable safety precautions have been taken. The preparation of a safety case usually involves considering what errors might occur, how often and in what combinations. To facilitate this, a number of classification schemes have been proposed. One of the most detailed, incorporating useful features of many previous schemes, is the one used in the Predictive Human Error Analysis (PHEA) technique (Embrey, 1992; Hollnagel, 1998) (Table 7.1).


Table 7.1 PHEA classification of errors

Planning errors
  • Incorrect plan executed
  • Correct, but inappropriate plan executed
  • Correct plan, but too soon or too late
  • Correct plan, but in the wrong order

Operation errors
  • Operation too long/too short
  • Operation incorrectly timed
  • Operation in wrong direction
  • Operation too little/too much
  • Right operation, wrong object
  • Wrong operation, right object
  • Operation omitted
  • Operation incomplete

Checking errors
  • Check omitted
  • Check incomplete
  • Right check on wrong object
  • Wrong check on right object
  • Check incorrectly timed

Retrieval errors
  • Information not obtained
  • Wrong information obtained
  • Information retrieval incomplete

Communication errors
  • Information not communicated
  • Wrong information communicated
  • Information communication incomplete

Selection errors
  • Selection omitted
  • Wrong selection made

From Hollnagel, 1998


PHEA has been developed for industries where the actions of a particular person controlling operations can be fairly closely specified (operations here meaning the operation of the system, not the surgical type). The scheme is deliberately generic: a high-level classification which can be applied in many different environments. It covers errors of omission (failure to carry out an operation), errors of commission (doing the wrong thing) and extraneous error (doing something unnecessary). Generally there is quite high agreement when independent judges are asked to classify errors with schemes of this kind, which at least gives a starting point in describing the phenomena of interest. Looking at such schemes gives one new respect for human beings; the wonder is not how many errors occur but, given the numerous opportunities for messing things up, how often things go well.


Conceptual clarity about error is not just an obsession of academics; it has real practical consequences. Classifications of medical errors often leave a lot to be desired, frequently grouping and muddling very different types of concept. Reporting systems, for instance, may ask the person reporting to define the error made, or select the type of error from a list. In one system I reviewed, the listed causes of an error included ‘wrong drug given’, ‘a mistake’ and ‘fatigue’, and the clinician was meant to choose between them. In reality, any or all of these might be applicable. If the clinician is not presented with a sensible set of choices, there is no hope of learning anything useful from the incident.
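To make the point concrete, the sketch below shows one way a reporting form’s data model could keep these dimensions apart rather than offering them as competing ‘causes’. It is a minimal illustration in Python; the ReportedIncident class, its field names and the category labels are hypothetical, not drawn from any actual reporting system, and the psychological categories anticipate the discussion later in this chapter.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List


class PsychologicalType(Enum):
    """Broad psychological categories (see 'The psychology of error' below)."""
    SLIP = "slip"            # action not carried out as intended (attentional failure)
    LAPSE = "lapse"          # failure of memory
    MISTAKE = "mistake"      # wrong plan or judgement
    VIOLATION = "violation"  # deliberate deviation from procedure
    UNKNOWN = "unknown"      # often cannot be judged from the report alone


@dataclass
class ReportedIncident:
    # What happened, described in behavioural terms (e.g. 'wrong drug given').
    behaviour: str
    # A separate judgement about the psychological type, which may be unknown.
    psychological_type: PsychologicalType = PsychologicalType.UNKNOWN
    # Contributing factors (fatigue, poor labelling, interruptions) form a list,
    # not an alternative to the behaviour or the psychological type.
    contributing_factors: List[str] = field(default_factory=list)


# A single incident can then carry all three perspectives at once,
# instead of forcing the reporter to pick one of them as 'the cause'.
incident = ReportedIncident(
    behaviour="wrong drug given",
    psychological_type=PsychologicalType.SLIP,
    contributing_factors=["fatigue", "similar packaging"],
)
```

The design choice being illustrated is simply that behaviour, psychological mechanism and contributing factors are recorded as separate fields, so that an analysis can later group incidents along any of the three dimensions without them having been muddled at the point of reporting.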


Describing and classifying error in medicine


Generic error classification schemes may seem very remote from healthcare, too abstract, too conceptual and only of interest to researchers. However, PHEA maps quite easily onto many standard clinical practices. Consider the checking of anaesthetic equipment before an operation; there are several different types of check to be made, but all the ways of failing to check probably fall into one of the five types listed in PHEA. In operating the anaesthetic equipment, anaesthetic drugs can be given for too long, at the wrong time, the dials can be turned in the wrong direction, the wrong dial can be turned and so on. Communication between the surgeon and anaesthetist about, say, blood loss might not occur, might be incomplete or be misleading. Realizing the importance of clarity and classification, some researchers have sought to clarify the definitions in use and build classification schemes that everyone can agree on. We will briefly examine work on prescribing error and diagnostic error, which present contrasting challenges of both classification and understanding.


Prescribing error


Studies suggest that prescribing errors occur in 0.4–1.9% of all medication orders written and cause harm in about 1% of inpatients. A major problem with interpreting and comparing these studies is that many of the definitions of prescribing error used are either ambiguous or not given at all. To bring some rigour and clarity to the area, Bryony Dean and colleagues (Dean, Barber and Schachter, 2000) carried out a study to determine a practitioner based definition of prescribing error, using successive iterations of definitions until broad agreement was obtained. The final agreed list is shown in Table 7.2 and we can see that this definition of prescribing error covers a wide range of specific failures. A strength of working ‘from the ground up’ and basing such decisions on the views of pharmacists, doctors and nurses is that the final definition is clinically meaningful and the descriptions of acts and omissions that result are also clearly defined.


Table 7.2 Varieties of prescribing error

Prescriptions inappropriate for the patient
  • Drug that is contraindicated
  • Patient has allergy to drug
  • Ignoring a potentially significant drug interaction
  • Inadequate dose
  • Drug dose will give serum levels above/below therapeutic range
  • Not altering drug in response to serum levels outside therapeutic range
  • Continuing drug in presence of adverse reaction
  • Prescribing two drugs where one will do
  • Prescribing a drug for which there is no indication

Pharmaceutical issues
  • Intravenous infusion with wrong dilution
  • Excessive concentration of drug to be given by peripheral line

Failure to communicate essential information
  • Prescribing a drug, dose or route that is not that intended
  • Writing illegibly
  • Writing a drug’s name using abbreviations
  • Writing an ambiguous medication order
  • Prescribing ‘one tablet’ of a drug that is available in more than one strength
  • Omission of route of administration for a drug that can be given by more than one route
  • Prescribing an intermittent infusion without specifying duration
  • Omission of signature

Transcription errors
  • Not prescribing in hospital a drug that the patient was taking prior to admission
  • Continuing a GP’s prescribing error when the patient is admitted to hospital
  • Transcribing incorrectly when rewriting the patient’s chart
  • Writing ‘milligrams’ when ‘micrograms’ was intended
  • Writing a discharge prescription that unintentionally deviates from the in-hospital prescription
  • On admission to hospital, writing a prescription that unintentionally deviates from the pre-admission prescription

Reprinted from The Lancet, 359, no. 9315, Bryony Dean, Mike Schachter, Charles Vincent and Nick Barber. “Causes of prescribing errors in hospital inpatients: a prospective study.” 232–237, © 2002, with permission from Elsevier.


The descriptions are, as the table shows, sensibly couched in terms of behaviour as far as possible, though concepts such as ‘intention’ also need to be included. Many of the specific types of prescribing error do fall into the general categories of the PHEA scheme. There are, for instance, failures of planning (not prescribing what was intended), failures of operation (writing illegibly, using abbreviations), failures of communication of various kinds (transcription errors) and so on. There may not be a complete mapping of one scheme onto the other, but the comparison does show the relationship between generic and specific classifications, and that the same error can, even in behavioural terms, be classified in more than one way.


Diagnostic errors


Prescribing errors are a relatively clearly defined type of error in that they do at least refer to a particular act – that is, writing or otherwise recording a drug, a dose and a route of administration. Diagnosis, in contrast, is not so much an act as a thought process; whereas prescribing happens at a particular time and place, diagnosis is often more of an unfolding story. Diagnostic errors are much harder to specify, and the category ‘diagnostic error’ is wider and less well defined. The list of examples of diagnostic error in Table 7.3 shows how the label ‘diagnostic error’ may indicate either a relatively discrete event (missing a fracture when looking at an X-ray) or something that happens over months or even years (a missed lung cancer because of failures in the co-ordination of outpatient care). These examples show that the term error can be an oversimplification of very complex phenomena, sometimes a long story of undiagnosed illness.


Table 7.3 Examples of diagnostic errors (each example is followed by a comment on the error)

Errors of uncertainty (no-fault errors)
  • Missed diagnosis of appendicitis in an elderly patient with no abdominal pain: unusual presentation of disease
  • Missed diagnosis of Lyme disease in an era when this was unknown: limitations of medical knowledge
  • Wrong diagnosis of common cold in a patient found to have mononucleosis: diagnosis reasonable but incorrect

Errors precipitated by system factors (system errors)
  • Missed colon cancer because flexible sigmoidoscopy performed instead of colonoscopy: lack of appropriate equipment or results
  • Fracture missed by emergency department: radiologist not available to check initial assessment
  • Delay in diagnosis due to ward team not informed of patient’s admission: failure to co-ordinate care

Errors of thinking and reasoning (cognitive errors)
  • Wrong diagnosis of ventricular tachycardia on ECG with electrical artefact simulating dysrhythmia: inadequate knowledge
  • Missed diagnosis of breast cancer because of failure to perform breast examination: faulty history taking and inadequate assessment
  • Wrong diagnosis of degenerative arthritis (no further test ordered) in a patient with septic arthritis: premature decision made before other possibilities considered

Adapted from Graber, Gordon and Franklin, 2002


Diagnostic errors have not yet received the attention they deserve, considering their probable importance in leading to harm or sub-standard treatment for patients; the emphasis on systems has led us away from examining core clinical skills such as diagnosis and decision making. Diagnostic errors are also very difficult to study, being hard to define, hard to fix at a particular point in time and not directly observable; they have recently been described as the ‘next frontier’ for patient safety (Newman-Toker and Pronovost, 2009). Graber, Gordon and Franklin (2002), amongst others, have argued for a sustained attack on diagnostic errors, dividing them into three broad types which require different kinds of intervention to reduce them (Table 7.3). They distinguish ‘no-fault errors’, which arise because of the difficulty of diagnosing the particular condition, ‘system errors’ primarily due to organizational and technical problems and ‘cognitive errors’ due to faulty thinking and reasoning.


We should be cautious about accepting a sharp division between no-fault, system and cognitive errors, as this distinction, while broadly useful, is potentially misleading. First, separating out some errors as ‘cognitive’ is slightly curious; in a sense all error is ‘cognitive’ in that all our thinking and action involves cognition. The implication of the term cognitive error is really to locate the cause of the diagnostic error in failures of judgement and decision making. Second, the term ‘system error’, although widely used, is to my mind a rather ghastly and nonsensical use of language. Systems may fail, break down or fail to function, but only people make errors. System error as a term is usually a rather unsatisfactory shorthand for factors that contributed to the failure to make an accurate diagnosis, such as a radiologist not being available or poor coordination of care. In reality, diagnosis is always an interaction between the patient and the doctor or other professional, who are both influenced by the system in which they work.


The psychology of error


In the preceding two sections error has mainly been examined in terms of behaviour and outcome. However, errors can also be examined from a psychological perspective. The psychological analyses to be described are mainly concerned with failures at a particular time and probe the underlying mechanisms of error. There is therefore not necessarily a simple correspondence with medical errors which, as discussed, may refer to events happening over a period of time. James Reason (1990) divides errors into two broad types: slips and lapses, which are errors of action, and mistakes, which are, broadly speaking, errors of knowledge or planning. Reason also discusses violations, which, as distinct from errors, are intentional acts that, for one reason or another, deviate from the usual or expected course of action.


Slips and lapses


Slips and lapses occur when a person knows what they want to do, but the action does not turn out as they intended. Slips relate to observable actions and are associated with attentional failures, whereas lapses are internal events and associated with failures of memory. Slips and lapses occur during the largely automatic performance of some routine task, usually in familiar surroundings. They are almost invariably associated with some form of distraction, either from the person’s surroundings or from their own preoccupations. When Charles Darwin went to the wrong tea caddy, he had a lapse of memory. If, on the other hand, he had remembered where the tea was but had been momentarily distracted and knocked the caddy over rather than opening it, he would have made a slip.


Mistakes


Slips and lapses are errors of action; you intend to do something, but it does not go according to plan. With mistakes, the actions may go entirely as planned but the plan itself deviates from some adequate path towards its intended goal. Here the failure lies at a higher level: with the mental processes involved in planning, formulating intentions, judging and problem solving (Reason, 1990). If a doctor treats someone with chest pain as if they have a myocardial infarction, when in fact they do not, then this is a mistake. The intention is clear, the action corresponds with the intention, but the plan was wrong.


Rule based mistakes occur when the person already knows some rule or procedure, acquired as the result of training or experience. Rule based mistakes may occur through applying the wrong rule, such as treating someone for asthma when you should follow the guidelines for pneumonia. Alternatively, the mistake may occur because the procedure itself is faulty; deficient clinical guidelines for instance.


Knowledge based mistakes occur in novel situations, where the solution to a problem has to be worked out on the spot. For instance, a doctor may simply be unfamiliar with the clinical presentation of a particular disease, or there may be multiple diagnostic possibilities and no clear way at the time of choosing between them; a surgeon may have to guess at the source of the bleeding and make an understandable mistake in their assessment in the face of considerable stress and uncertainty. In none of these cases does the clinician have a good ‘mental model’ of what is happening to base their decisions on, still less a specific rule or procedure to follow.


Violations


Errors are, by definition, unintended in the sense that we do not want to make errors. Violations, in contrast, are deliberate deviations from safe operating practices, procedures, standards or rules. This is not to say that people intend a bad outcome, as someone does when deliberately sabotaging a piece of equipment; usually people hope that violating the procedure will not matter on this occasion or will actually help get the job done. Violations differ from errors in several important ways. Whereas errors are primarily due to our human limitations in thinking and remembering, violations are more closely linked with our attitudes, motivation or the work environment. The social context of violations is very important, and understanding them, and if necessary curbing them, requires attention to the culture of the wider organization as well as to the attitudes of the people concerned.


Reason (1990) distinguishes three types of violations:



  • A routine violation is basically cutting corners for one reason or another, perhaps to save time or simply to get on to another more urgent task.
  • A necessary violation occurs when a person flouts a rule because it seems the only way to get the job done. For example, a nurse may give a drug which should be double checked by another nurse, but there is no one else available. The nurse will probably give the drug, knowingly violating procedure, but in the patient’s interest. This can, of course, have disastrous consequences, as we will see in the next chapter.
  • An optimizing violation is one carried out for personal gain, sometimes just to get off work early or, more sinisterly, to alleviate boredom, ‘for kicks’. Think of a young surgeon carrying out a difficult operation in the middle of the night, without supervision, when the case could easily wait until morning. The motivation is partly to gain experience and to test oneself out, but there may be a strong element of the excitement of sailing close to the wind in defiance of the senior surgeon’s instructions.

The psychological perspective on error has been very influential in medicine, forming a central plank of one of the most important papers in the patient safety literature (Leape, 1994). Errors and violations are also a component of the organizational accident model, discussed in the next chapter. However, attempts to use these concepts in practice in healthcare, in reporting systems for instance, have often foundered. Why is this? One important reason is that in practice the distinction between slips, mistakes and violations is not always clear, either to an observer or to the person concerned. The relationship between the observed behaviour, which can be easily described, and the underlying psychological mechanism is often hard to discern. Giving the wrong drug might be a slip (attention wandered and picked up the wrong syringe), a mistake (misunderstanding about the drug to be given) or even a violation (deliberate over-sedation of a difficult patient). The concepts are not easy to put into practice, except in circumstances where the action, context and personal characteristics of those involved can be quite carefully explored.


Perspectives on error and error reduction


As must now be clear, error has many different facets, and the subject of error, and how to reduce it, can be approached in different ways. While there are a multitude of different taxonomies and error reduction systems, we can discern some broad general perspectives, or ‘error paradigms’ as they are sometimes called. Following Deborah Lucas (1997) and James Reason (1997), four perspectives can be distinguished: the engineering, psychological, individual and organizational perspectives. The psychological perspective has already been discussed and we will not consider it further here. The various perspectives are seldom explicitly discussed in medicine but, once you have read about them, you will certainly have seen them in action in discussions of safety in healthcare. Each perspective leads to different kinds of solutions to the problem of error. Some people just blame doctors for errors and think discipline and retraining are the answer; some want to automate everything; others put everything down to ‘the system’. Each perspective has useful features, but unthinking adherence to any particular one is unlikely to be productive.


Engineering perspective


The central characteristic of the engineering perspective is that human beings are viewed as potentially unreliable components of the system. In its extreme form, this perspective implies that humans should be engineered out of a system by increasing automation, so avoiding the problem of human error. In its less extreme form, the engineering approach regards human beings as important parts of complex systems, but places a great deal of emphasis on the ways people and technology interact. For instance, the design of anaesthetic monitors needs to be carefully considered if the wealth of information displayed is not to lead to misinterpretation and errors at times of crisis.


In the manufacture of computers and cars on assembly lines, less human involvement in repetitive tasks has undoubtedly led to higher reliability. However, automation does not always lead to improvements in safety and may actually introduce new problems – the ‘ironies of automation’, as Lisanne Bainbridge expressed it (Bainbridge, 1987). In particular, the operators of equipment become much less ‘hands on’ and spend more time monitoring and checking. This is well expressed in the apocryphal story of the pilot of a commercial airliner who turned to his co-pilot and said, of the onboard computer controlling the plane, ‘I wonder what it’s doing now?’ There have, however, been real incidents in which automation led human beings astray, with tragic consequences (Box 7.3).



BOX 7.3 The Vincennes incident


In 1988, the USS Vincennes erroneously shot down a civilian Airbus carrying 290 passengers. The Vincennes had been fitted with a very sophisticated Tactical Information Co-ordinator (TIC), which warned of a hostile aircraft close to the ship. The captain also received a warning that the aircraft might be commercial but, under great time pressure and considering the safety of ship and crew, he accepted the TIC warning and shot down the airliner. On another US warship with, paradoxically, a less sophisticated warning system, the crew relied less on the automated system and decided that the aircraft was civilian.


(THIS ARTICLE WAS PUBLISHED IN HUMAN FACTORS IN SAFETY CRITICAL SYSTEMS, LUCAS D, “CAUSES OF HUMAN ERROR”. 38–39, COPYRIGHT ELSEVIER. 1997)
