11: Clinical interventions and process improvement


In healthcare, we are coming to understand how difficult the safety problem is, in cultural, technical, clinical and psychological terms, not to mention its massive scale and heterogeneity. The second half of this book, beginning with this overview of clinical interventions and process improvement, covers the principal avenues of improvement and in later chapters addresses the complex task of integrating the human and technological changes that are needed. We have seen, in the analysis of individual incidents, just how many factors can contribute to the occurrence of an error or bad outcome. Yet still, at safety conferences, you will hear people saying 'it's the culture', 'the key is strong leadership', 'team building is the answer', 'if we just had good professional standards all would be well', 'we know we've got a problem, let's just get on and fix it' and so on. Of course all these things are important, and there are some things which can and should be 'just fixed', but one of the greatest obstacles to progress on patient safety is, paradoxically, the attraction of neat solutions, whether political, organizational or clinical. First, we must understand what a complex problem this is; only then will we be able to tackle all aspects of it effectively.


Healthcare is an extremely diverse enterprise and the causes of harm, and the associated solutions, will differ according to the process under consideration. Some factors, such as leadership, culture and attitudes to safety, are generic and important in all environments. However, the kinds of specific solutions required to ensure high reliability in, for instance, blood transfusion services, will obviously differ from those aimed at reducing inpatient suicides. Improving safety requires some generic, cross-organizational action, coupled with some speciality and process specific activities.


At the clinical level, safety can be elusive, for all the specialist knowledge and experience available. There are multiple possibilities and lines of attack. Should we rely on team building, vigilance and awareness of hazards? Should we attack the numerous process problems, inefficiencies and frustrations that beset clinical staff, sapping their morale and precipitating error and patient harm? Perhaps, as in so many other industries, technology is the answer, getting the human being out of the loop? Or perhaps patient harm is best prevented by clinical innovations, for instance the development of new drugs and procedures to counteract the hazards of hospital acquired infection? All of these approaches are important but it is not easy to assess how much weight to give to any one of them in any specific circumstance. In this chapter and the next two we will examine technical solutions of various kinds with the aim of showing their essential features, advantages and limitations. First it is useful to sketch out the territory and consider some of the implicit, often unspoken, assumptions underlying approaches to improving safety.


Two visions of safety


A wealth of different techniques and approaches is available in the quest for safer healthcare, variously supported by theory, evidence and common sense, and it can be very difficult to discern underlying themes and directions. Underlying this plethora, however, we can distinguish two broad approaches. These two visions of safety are seldom explicitly articulated, but are ever present themes in debates and discussions about patient safety.


The phrase 'Design, Technology and Standardization' encapsulates one vision of safety, which is closely linked to the engineering safety paradigm discussed in Chapter 5. In this view, human fallibility is to the fore and the aim is to simplify, standardize and improve basic processes and reduce reliance on people by automating, or at least offering as much support as possible in, those tasks for which people are necessary. Process improvement approaches are discussed in this chapter and the roles of design and technology in the next two. 'People create safety' encapsulates the second broad approach, discussed in later chapters. Woods and Cook (2002), following Rasmussen (1990) and others, have argued for an alternative to the rigid, proceduralized, technology driven view of safety, one that more truly reflects the realities of clinical work. Underlying these two visions are two contrasting views of human ability and experience, the one stressing error and fallibility, the other stressing adaptability, foresight and resilience (Table 11.1). Adopting one or other of these positions, whether acknowledged or not, will determine the kind of practical steps taken to improve safety and so have important practical consequences. In practice, elements of both approaches may be needed to resolve particular problems, but distinguishing them is important as many discussions and debates about safety revolve around these two positions.


Table 11.1 Two visions of safety

Design, technology and standardization | People create safety
--------------------------------------- | ---------------------------------------
Replace or support human beings | Practitioners create safety
Emphasizes fallibility and irrationality | Emphasizes expertise and skill
Hindsight bias and memory failure | Flexibility and adaptability
Extreme over-confidence | Experience and wisdom
Vulnerable to environmental influences | Anticipation of hazards
Lack of control over thought and action | Recovery from error
Technical and procedural interventions | New and enhanced skills
Design and standardization | Culture of high reliability organizations
Protocols and guidelines | Mindfulness and hazard awareness
Information technology | Training in anticipation and recovery
Technical solutions | Teamwork and leadership

Design, technology and standardization


Many approaches to quality improvement in healthcare are rooted in a basic industrial model, in which the solutions to errors and defects rest in increasing standardization, usually coupled with a reliance on technology. Ideally, the human contribution to the process of care is reduced to a minimum, as in industrial production or commercial aviation. Careful design of the basic processes of care and appropriate use of technology overcome human fallibility and vulnerability to fatigue and environmental influences. Examples of safety measures within this broad framework would include: simplification and standardization of clinical processes; more fundamental re-design of equipment and processes; computerized medication systems; electronic medical records; and memory and decision support, whether computerized or in the form of protocols, guidelines, checklists and aide memoires. Note that even systems which explicitly acknowledge human fallibility, such as decision support systems, still require human ingenuity and expertise to use them. For instance, while support systems assist clinicians by reminding them of actions to be taken and recommending courses of action, they can only be useful if the clinician has the expertise to extract relevant information from the patient, use the system appropriately and so on. You need expertise in order to use decision support effectively.


We also need to distinguish two broad types of standardization and proceduralization. The first relates to systems which attempt to improve on existing systems of communication, such as the electronic medical record. There is no doubt that an electronic record could have immense advantages in terms of access to information, reliability of coding, standardization of information recorded and linkage to other systems. However, from the clinician's viewpoint, such systems may introduce difficulties of their own: for instance, loss of access when hardware fails, slowness of response and other unanticipated problems. Nevertheless, most clinicians would agree that it is desirable to bring hospital information systems up to the standard of, for instance, the average supermarket chain.


A more important and contentious issue relates to the standardization of clinical practice itself, in the form of guidelines, protocols, decision support and structuring of tasks and procedures. Clinicians are sometimes suspicious of these initiatives, suspecting that standardization is being imposed not to improve healthcare but to regulate, cut costs and otherwise constrain clinicians in their work. However, properly understood and implemented, such approaches are potentially a support to clinical staff. Standardization and simplification of core processes should reduce the cognitive load on clinical staff, freeing them for the more important clinical tasks that require human empathy and expertise.


People create safety


Proponents of the 'people create safety' view are, rightly, extremely impressed by how often outcomes are good in the face of extreme complexity, conflicting demands, hazards and uncertainty. On this view, making healthcare safer depends not on minimizing the human contribution but on understanding technical work and how people overcome hazards. Cook, Render and Woods (2000) remind us how reliant safety is on clinicians and others looking ahead, bridging gaps, managing conflicts and, in effect, creating safety. A good illustration of this approach is their recommendation that researchers study 'gaps': discontinuities in the process of care, which may be losses of information, losses of momentum or interruptions in the delivery of care. They suggest that safety will be increased by understanding and reinforcing practitioners' normal ability to bridge gaps.


While clinicians' ability to anticipate, react and accommodate to changing circumstances is crucial to effective and safe healthcare, we should not assume that safer care will be achieved solely by reliance on these human qualities. To begin with, this reliance on human expertise places an additional burden on those at the sharp end, returning us, oddly, to a reliance on training that systems thinking sought to free us from. True, it is training of a different kind (anticipation, flexibility), but training nonetheless. More importantly, it seems an odd response to gaps. Why should we not try to reduce the number of gaps in the first place, with more efficient systems and better design? This depends, of course, on the nature of the gaps and other problems that practitioners need to anticipate and address. Sudden changes in the patient's condition or an acute emergency require all the qualities that Cook and Woods rightly highlight. Anticipation is also used, however, to resolve organizational deficiencies, as when a surgeon has to improvise because notes are not available at the start of an operation, or telephones ahead to double check that equipment is available. Notes and equipment that reliably turned up would reduce, if not obviate, the need for such anticipation. The real problem is to find a way to marry the two approaches: standardizing and proceduralizing where this is feasible and desirable, while knowing that this can never be a complete solution, and simultaneously promoting human resilience and the 'creation of safety'. Before developing this theme, however, we need to discuss the role of evidence based medicine in creating a safer healthcare system.


Clinical practices to improve safety


The first Institute of Medicine report on patient safety, 'To err is human' (Kohn, Corrigan and Donaldson, 1999), called on all parties in healthcare to make patient safety a priority. To this end they recommended that the Agency for Healthcare Research and Quality (AHRQ) determine which patient safety practices were effective and produce a report to disseminate to all clinicians. The resulting report, produced by Kaveh Shojania and colleagues at the Evidence Based Practice Center in San Francisco with the assistance of numerous US experts, is a massive, wide ranging compendium of patient safety practices and an invaluable resource of clinical practices that reduce the complications of healthcare (Shojania, Duncan and McDonald, 2001). The review followed, wherever possible, a standard approach to reviewing the literature on a specific topic, making a formal assessment of the strength of evidence available. For each safety practice, the authors of the relevant section were asked to examine:



  • Prevalence of the problem targeted by the practice;
  • Severity of the problem targeted by the practice;
  • The current use of the practice;
  • Evidence of efficacy and/or effectiveness of the practice;
  • The practice’s potential for harm;
  • Data on cost, if available;
  • Implementation issues.

Shojania and colleagues acknowledged that this approach, more usually applied to specific clinical interventions, was difficult to apply to generic safety interventions, such as information technology or human factors work. Many of these practices were drawn from areas outside medicine and often little researched in healthcare. Some generic practices, such as clinical decision support, were separated out and described as techniques for promoting and implementing safety practices. The final list of 79 selected practices was roughly grouped according to the strength of evidence for each one and promising areas were highlighted for future research. Eleven practices (Box 11.1) were singled out as having very strong evidence of efficacy. A further 14 had good evidence for efficacy; these included such practices as using hip protectors to prevent injury after falls, localizing surgery to high volume centres, use of computer monitoring to prevent adverse drug reactions, improving information transfer at time of discharge, and multicomponent programmes to tackle pain management and hospital acquired delirium. In the summary, the authors emphasize that their report was a first attempt to organize and evaluate the relevant literature, which they hope will act as a catalyst for future work and not be seen as the final word on the subject.
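The review protocol above lends itself to a simple structured representation. The sketch below (Python, purely illustrative; the field names and example values are ours, not the report's) shows one way to capture the assessment criteria for a single practice and the rough evidence tiers used to group the 79 practices:

```python
from dataclasses import dataclass, field
from enum import Enum

class Evidence(Enum):
    """Rough strength-of-evidence tiers used to group the practices."""
    VERY_STRONG = 1   # the 11 practices singled out in Box 11.1
    GOOD = 2          # e.g. hip protectors, computer monitoring
    PROMISING = 3     # highlighted for future research

@dataclass
class SafetyPractice:
    """One practice, assessed on the criteria listed above."""
    name: str
    prevalence_of_problem: str
    severity_of_problem: str
    current_use: str
    evidence: Evidence
    potential_for_harm: str
    cost_data: str = "not available"   # cost data were often missing
    implementation_issues: list = field(default_factory=list)

# Hypothetical entry, loosely based on the VTE example discussed below.
vte = SafetyPractice(
    name="Appropriate VTE prophylaxis in patients at risk",
    prevalence_of_problem="common in hospital patients",
    severity_of_problem="painful, dangerous, sometimes fatal",
    current_use="often underused or used inappropriately",
    evidence=Evidence.VERY_STRONG,
    potential_for_harm="bleeding with pharmacological prophylaxis",
    implementation_issues=["guideline adherence", "decision support"],
)
```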



BOX 11.1 Most highly rated patient safety practices from the AHRQ Report



  • Appropriate use of prophylaxis to prevent venous thromboembolism (VTE) in patients at risk;
  • Use of peri-operative beta-blockers in appropriate patients to prevent peri-operative morbidity and mortality;
  • Use of maximum sterile barriers while placing central intravenous catheters to prevent infections;
  • Appropriate use of antibiotic prophylaxis in surgical patients to prevent peri-operative infections;
  • Asking that patients recall and restate what they have been told during the informed consent process;
  • Continuous aspiration of subglottic secretions to prevent ventilator-associated pneumonia;
  • Use of pressure relieving bedding materials to prevent pressure ulcers;
  • Use of real-time ultrasound guidance during central line insertion to prevent complications;
  • Patient self management for warfarin to achieve appropriate outpatient anticoagulation and prevent complications;
  • Appropriate provision of nutrition, with a particular emphasis on early enteral nutrition in critically ill patients;
  • Use of antibiotic impregnated central venous catheters to prevent catheter related infections.

(ADAPTED FROM: SHOJANIA KG, DUNCAN BW, McDONALD KM, ET AL., EDS. MAKING HEALTH CARE SAFER: A CRITICAL ANALYSIS OF PATIENT SAFETY PRACTICES. EVIDENCE REPORT/TECHNOLOGY ASSESSMENT NO. 43 (PREPARED BY THE UNIVERSITY OF CALIFORNIA AT SAN FRANCISCO–STANFORD EVIDENCE-BASED PRACTICE CENTER UNDER CONTRACT NO. 290-97-0013), AHRQ PUBLICATION NO. 01-E058, ROCKVILLE, MD: AGENCY FOR HEALTHCARE RESEARCH AND QUALITY. JULY 2001. AVAILABLE AT http://www.ahrq.gov/clinic/ptsafety/)


Preventing venous thromboembolism (VTE)


As an example of a safety practice with good evidence, we will consider the important topic of preventing thromboembolism. VTE refers to occlusion within the venous system and includes deep vein thrombosis (DVT). VTE occurs frequently in hospital patients, the risk depending on multiple factors including age, medical condition, type of surgery and duration of immobilization. Without prophylaxis, DVT occurs after approximately 20% of all major surgical procedures and over 50% of orthopaedic procedures. Measures to prevent VTE can be pharmacological (heparin, warfarin, aspirin) or mechanical (elastic stockings, pneumatic compression). The authors of this section of the AHRQ report present extensive evidence for the efficacy, safety and cost-effectiveness of prophylaxis in a wide range of conditions and procedures. For instance, pooled results of 46 randomized trials have established that low dose unfractionated heparin (LDUH) reduces the risk of DVT after general surgery from 25% to 8%.
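The size of this effect can be put in concrete terms with the standard effect measures. As a back-of-envelope illustration (our arithmetic, using only the 25% and 8% rates quoted above), the short sketch below derives the absolute risk reduction, relative risk reduction and number needed to treat:

```python
# Back-of-envelope effect sizes from the rates quoted above:
# DVT after general surgery ~25% without prophylaxis, ~8% with LDUH.
baseline_risk = 0.25
treated_risk = 0.08

arr = baseline_risk - treated_risk  # absolute risk reduction
rrr = arr / baseline_risk           # relative risk reduction
nnt = 1 / arr                       # patients treated to prevent one DVT

print(f"Absolute risk reduction: {arr:.0%}")   # 17%
print(f"Relative risk reduction: {rrr:.0%}")   # 68%
print(f"Number needed to treat:  {nnt:.1f}")   # ~5.9
```

On these figures, roughly one DVT is prevented for every six general surgical patients given prophylaxis, which underlines why the underuse discussed next is so striking.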


VTE is frequent, painful, dangerous, wastes time and resources and is sometimes fatal; it is, in many cases, preventable. In spite of this, prophylaxis is often underused or used inappropriately. Surveys of both general and orthopaedic surgeons in the United States, for instance, have found that over 10% never use VTE prophylaxis, with rates of prophylaxis varying widely for different procedures. The use of appropriate prophylactic measures is undoubtedly a valuable clinical practice. The mystery is why, when the evidence is so strong, it is so often not used or used inappropriately. Educational programmes promoting guidelines and computerized decision support have improved the use of prophylaxis, and there are now major campaigns in several countries, but adherence to these basic practices remains incomplete.


Evidence based medicine, then, provides the foundation of good practice but does not directly address the safety question: why is care known to be effective not delivered to the patient? From our point of view, the most important point is that an evaluation of a clinical practice has led to questions of a psychological nature and towards core patient safety issues of error and human behaviour. These themes emerge more strongly in the next section, which addresses some criticisms of the report's approach to patient safety.


Evidence based medicine meets patient safety


Following the publication of the AHRQ report, Lucian Leape, Berwick and Bates (2002) wrote a powerful critique, in which they argued that the report had in various respects missed the point of patient safety. We will review their arguments, not to dismiss the undoubtedly useful report, but to highlight important issues about the nature of patient safety and the directions it should take in improving the safety of care.


In the first place, Leape and colleagues recalled that in the original Harvard study only about one-third of adverse events were not preventable with current practice. The remainder were due to error or more general problems in the process of care. The AHRQ report, they suggested, was targeting new therapies and techniques, and to some extent side-stepping the thorny issues of error and poor quality care. The primary reason for this, they suggested, was not that the AHRQ authors were reluctant to tackle these issues but simply that they followed the evidence and concentrated on areas where there was a substantial body of research. The upshot of this was that the report was heavily weighted towards individual safety practices and therapies and gave insufficient weight to the factors that determine what care patients actually receive. Leape and colleagues agreed that it was first necessary to identify practices with proven benefit, such as anticoagulation for VTE. However, the practical issues for patient safety practitioners were:


First how to ensure that every patient who needs anticoagulation receives it and second how to ensure that the medication is delivered flawlessly – on time, in the right dose, every time, without fail. Such systems are at the heart of patient safety but not addressed by the report.
(LEAPE ET AL., 2002)


Leape and colleagues went on to argue that many established safety practices (e.g. sponge counts after an operation) had been omitted simply because they are well established and, more importantly, that many promising avenues, such as systems for reducing medication errors, had not been given sufficient attention. They further questioned whether the standard evidence based approach was necessary where practices had obvious face validity or where sufficient evidence had accumulated in other environments (e.g. the impact of fatigue on performance and judgement).


Why were Leape and colleagues so concerned about the direction taken by this report? Essentially, it seems, because it might set a direction for patient safety that they regarded as misconceived. Even though the report does give some attention to human factors and systems issues, the weight given to specific clinical practices might suggest that the problems of patient safety could be effectively addressed with new therapies and careful evaluation. In fact, most patient safety practitioners are much more concerned about the fragmented, chaotic state of most healthcare systems and the frankly abysmal safety record in many areas. Resolving this requires a tenacious attempt to improve the basic processes and systems of healthcare as well as engaging all who work in healthcare in the endeavour. The remainder of this book addresses the various ways in which this colossal task is being attacked, beginning with the key issue of simplification and standardization.


Quality management and process improvement


Manufacturing industries have made huge gains in safety, efficiency and cost-effectiveness by close attention to the design, maintenance and performance of the processes used in factories. Rather than inspect products afterwards to identify defects, those concerned with quality control and management sought to build quality into the process. Much of the impetus for these improvements stemmed from the publication of W. Edwards Deming's 'System of Profound Knowledge', a title more suggestive of esoteric spiritual practices than the science of quality improvement. The intention of the book, however, and the approach it describes, is resolutely practical. Deming, Joseph Juran, Kaoru Ishikawa and others have described and documented the successful application of these approaches since the 1950s in Japanese and American industries (Langley et al., 1996).


Doctors, nurses and others often find it hard to understand that approaches developed in manufacturing can have any relevance to healthcare. We deal with patients as individuals; how can we learn anything from companies that make cars? In fact, of course, cars and computers can now be completely customized and matched to individual needs and preferences. Healthcare, too, is full of processes of varying degrees of complexity and incoherence which are akin to manufacturing processes: pharmacy, the ordering of tests and reporting of results, the blood service and so on. But the message of Deming and others is much more than that. Paul Batalden attended a series of lectures given by Deming in 1981. He recalls talking to the great man during the single hour that Deming allowed himself for dinner:


As we talked, he shared his views about the way the health system worked, what he observed. I realised he was used to 'seeing things' with different lenses. I went back to the lectures… I saw that he was not really talking about manufacturing; it was a theory of work which conceptualised the continual improvement of quality as intrinsic to the work itself. He didn't see a doctor then a nurse then a patient – he saw them as interdependent elements of a system and he looked for how that system could work better.
(BATALDEN, QUOTED IN KENNEY, 2008)


In 1983, the scope of quality control was expanded into systems that sought to extend the basic ideas to all the operations of a company, so that every function was oriented towards improving quality (Feigenbaum, 1983). Total quality management, driven particularly by Japanese industry, took this further still, emphasizing that the entire workforce needed to be involved in improving the quality of the organization and, through these efforts, the quality of the final product. In healthcare this has become an aspiration, but not yet a reality. The report on the British NHS by Lord Darzi, for instance, puts quality at the centre of everything the NHS does and makes it clear that everyone should play their part in promoting and driving higher quality care for patients (Darzi, 2009).


The methods of quality management are well described in many books (e.g. Langley et al., 1996; Nelson, Batalden and Godfrey, 2007). Quality methods are sometimes presented simply as a set of tools and techniques but, properly conceived, the various systems aim at substantial and enduring organizational change based on principles and values that each organization must define for itself. We cannot possibly review all the various approaches, but it is necessary to understand the importance of these approaches in promoting both safety and quality and the fact that improving some aspects of quality, for instance standardizing and simplifying processes, will also make care safer. Quality improvement approaches have also underpinned large-scale attempts to improve safety, such as the Safer Patients Initiative discussed in Chapter 19.


Simplifying and standardizing the processes of healthcare


Compared with manufacturing industry, healthcare has little standardization, comparatively little monitoring of processes and outcomes, and few safeguards against error and other quality problems (Bates, 2000). Most healthcare processes were not designed, but just evolved and adapted to circumstances. A particular problem is that many healthcare processes are both long and complicated. Simply mapping the process that currently exists can be a major task, and performing a failure modes and effects analysis on that process can be immensely time consuming, as we have seen. As Don Berwick points out, complex systems break down more often than simple ones, because there is more opportunity for failure:


The statistics are quite simple. Imagine a system with, say, 25 elements, each of which functions properly – no errors – 99% of the time. If the errors in each element occur independently of each other, then the probability that the entire system of 25 elements will function correctly is (0.99)^25, or about 0.78. With 50 elements, it is 0.61; with 100 elements, it is 0.37. Make the reliability of each element higher, say 0.999, and the overall success rates are 0.98 for 25 elements, 0.95 for 50 elements and 0.90 for 100 elements. We can, indeed, improve the reliability of a system by perfecting its parts and handoffs, but reducing complexity is even more powerful.


(BERWICK DM. “TAKING ACTION TO IMPROVE SAFETY: HOW TO IMPROVE THE CHANCES OF SUCCESS.” PRESENTATION AT THE ANNENBERG CENTER FOR HEALTH SCIENCES CONFERENCE, ENHANCING PATIENT SAFETY AND REDUCING ERRORS IN HEALTH CARE, IN RANCHO MIRAGE, CALIFORNIA. NOVEMBER 8–10, 1998. REPRODUCED WITH PERMISSION FROM INSTITUTE FOR HEALTHCARE IMPROVEMENT)
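Berwick's figures are straightforward to reproduce. The short Python sketch below (an illustration added here, not part of Berwick's presentation) computes the probability that a chain of independent steps all succeed, and shows why removing steps can do more for overall reliability than perfecting each one:

```python
def chain_reliability(step_reliability: float, n_steps: int) -> float:
    """Probability that a series of independent steps all succeed."""
    return step_reliability ** n_steps

for r in (0.99, 0.999):
    for n in (25, 50, 100):
        print(f"step reliability {r}, {n} steps: "
              f"P(system works) = {chain_reliability(r, n):.2f}")

# Output matches Berwick's figures:
#   0.99^25  ~ 0.78    0.999^25  ~ 0.98
#   0.99^50  ~ 0.61    0.999^50  ~ 0.95
#   0.99^100 ~ 0.37    0.999^100 ~ 0.90
```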


The process of prescribing, ordering and giving drugs is a good example of complexity and lack of standardization. David Bates gives an example of the problems that he observed in his own hospital before a sustained attack on medication error and adverse drug reactions:


Take for example the allergy detection process used in our hospital several years ago, which was similar to that used in most hospitals at the time. Physicians, medical students and nurses all asked patients what their allergies were. This information was recorded at several sites in the medical record, though there was no one central location. The information was also required to be written at the top of every order sheet, although in practice this was rarely done. The pharmacy recorded the information in its computerised database, but it found out about allergies only if the information was entered into the orders, and often it was not. Checking by physicians, pharmacy and nursing staff was all manual. This information was not retained between the inpatient and outpatient settings, or from admission to admission. Not surprisingly, about one in three orders for drugs to which a patient had a known allergy slipped through.


(REPRODUCED FROM BRITISH MEDICAL JOURNAL, DAVID W BATES. ‘‘USING INFORMATION TECHNOLOGY TO REDUCE RATES OF MEDICATION ERRORS IN HOSPITALS’’. 320, NO. 7237, [788–791], 2000, WITH PERMISSION FROM BMJ PUBLISHING GROUP LTD.)


Reading this description, it is hard to understand why, even before technological advances, this system had been allowed to continue for so many years: multiple sites of information; numerous, possibly conflicting sources of information; excessive reliance on human vigilance and memory; excessive complexity and potential for error at every stage. If you had been trying to design a system to produce errors you could hardly have done better. When you work in such a system, and we all do in one way or another, it is hard to step back and see the whole process and understand its flaws. Furthermore, in healthcare, very often no one person has responsibility or oversight of the whole system, which makes both monitoring and improvement very difficult.


The system Bates describes has now been replaced by one in which all allergies are noted in one place in the information system, drugs are mapped to ‘drug families’ (e.g. penicillin) so that they can be checked more easily, information is retained over time and checking for allergies is routinely performed by computers, rather than tired and fallible human beings. Many healthcare systems however, have not benefited from such an overhaul. Ordering and reading of X-rays, communication of risk information about suicidal or homicidal patients, informing patients and their family doctors about abnormal test results, booking patients in for emergency operations, effective discharge planning; all these and many more are vital for safe healthcare, yet day-to-day experience tells patients and staff that they are far from error free.
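The core logic of such a redesigned check can be conveyed with a toy sketch. Everything in it, the drug-family table, the patient record and the function name, is hypothetical and invented for illustration; it is not Bates's actual hospital system, but it shows how a single central allergy record plus drug-family mapping lets a computer, rather than a tired human, perform the check:

```python
# Toy illustration of automated allergy checking against drug families.
# The family table and patient record are invented examples, not the
# real system described by Bates.
DRUG_FAMILIES = {
    "amoxicillin": "penicillins",
    "ampicillin": "penicillins",
    "cefalexin": "cephalosporins",
    "ibuprofen": "nsaids",
}

# One central allergy record per patient, retained across admissions.
PATIENT_ALLERGIES = {"mrn-12345": {"penicillins"}}

def check_order(mrn: str, drug: str) -> bool:
    """Return True if the order is safe; flag family-level conflicts."""
    family = DRUG_FAMILIES.get(drug)
    if family in PATIENT_ALLERGIES.get(mrn, set()):
        print(f"ALERT: {drug} belongs to {family}, "
              f"to which patient {mrn} is allergic")
        return False
    return True

check_order("mrn-12345", "amoxicillin")  # blocked: penicillin family
check_order("mrn-12345", "ibuprofen")    # allowed
```

Mapping each drug to a family is what allows the system to catch an order for amoxicillin when the recorded allergy is to penicillin, the kind of inference the old manual process relied on busy clinicians to make.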


Waste, delay and rework


Successful businesses work constantly to reduce waste and delay and so constrain costs. Waste and delay in healthcare are obviously problems of quality and cost-effectiveness, but they also indirectly affect safety and patient experience; at its simplest, staff time spent on inefficient processes is staff time taken away from direct patient care. Every organization wastes time and resources to a varying degree, whether it is a home wasting food or a hospital wasting time and resources on complex, laborious and overly bureaucratic processes. Hospitals are repositories of the most unbelievable inefficiencies, often sitting alongside feats of extraordinary ingenuity and efficiency. Many people work daily with a degree of disorganization in a drug cupboard that they would never tolerate in their own home – another bizarre example of how dangerous practices, which are right in front of us day after day, become invisible because 'that's how it's always been'.


The elimination of waste and inefficiency is emphasized particularly by the Toyota Production System, a complete philosophy of work and organization that has evolved over decades and is deeply embedded in the very fabric of the organization (Liker, 2004). 'Lean thinking' evolved from Toyota but developed independently in a variety of ways in different companies and industries, always aiming to provide what the customer wants quickly, efficiently and with little waste. Obvious applications in healthcare would be minimizing or eliminating delays, repeated encounters, errors and inappropriate procedures, and indeed any unnecessary work that takes staff away from work that contributes directly to patient care, whether at the bedside or elsewhere. Waste occurs in healthcare at every level (as in every other industry) but many delays and problems can be resolved by front line staff once they are given the freedom and encouragement to do so (Box 11.2).



BOX 11.2 Eliminating waste and delay in healthcare


In one hospital, nurses made an average of 23 searches per shift for keys to the narcotics cabinet; this wasted 49 minutes per shift and delayed analgesia to patients. Administrators tested assigning numbered keys at the start of each shift, with safeguards to prevent loss or misuse. This procedure nearly eliminated searches for keys and saved 2895 nurse-hours yearly in a 350-bed hospital. Another hospital pharmacy used deviations from procedure as prompts to reflect on its processes. Rather than accept workarounds, staff changed the systems. Without any technology investments, searches for missing medication decreased by 60% and stockouts fell by 85%.
(ADAPTED FROM THOMPSON, WOLF AND SPEAR, 2003; SPEAR AND SCHMIDHOFER, 2005)


A general internal medicine practice knew that the diagnostic testing process and reporting of test results to patients needed to be improved, because of long delays and frequent follow-up telephone calls from patients. Every member of the practice, doctors, nurses and administrators, completed an initial assessment of the process. After flowcharting the process, which revealed rework, waste, delay and long cycle times, the group brainstormed and then rank ordered the solutions. They then tested the solution of holding a short meeting at the beginning of the day to deal with all diagnostic test results at one time and decide on actions needed. Within two weeks patient phone calls for laboratory results had decreased, reflecting the fact that staff were now calling patients in a timely manner about their results.
(QUALITY BY DESIGN. A CLINICAL MICROSYSTEMS APPROACH. NELSON E. C, BATALDEN, P., & GODFREY, M. M. 2007, JOSSEY BASS, SAN FRANCISCO. REPRINTED WITH PERMISSION OF JOHN WILEY & SONS, INC.).
