Clinical microsystems
A clinical microsystem is a group of clinicians and staff working together with a shared clinical purpose to provide care for a defined population of patients (Mohr, Batalden and Barach, 2004). The system includes the clinicians, support staff, the equipment and technology, the care processes and everything that needs to be done to provide care to the patients – essentially the structure and process in Donabedian’s terms, but on the smaller scale of an extended clinical team. Examples of clinical microsystems include a cardiovascular surgical care team, a community-based mental health centre and a neonatal intensive care unit. These examples share core elements: a focused type of care; clinicians and staff with the skills and training needed to engage in the required care processes; a defined patient population; and a certain level of information and technology to support the work. Each hospital or healthcare economy will have a large number of these microsystems, each delivering a different type of care but interacting with the others and with the larger organization.
Mohr, Batalden and others have argued that the clinical microsystem is a particularly critical target for safety and quality improvement. The microsystem concept stems from earlier studies of service organizations outside healthcare which achieved particularly high quality; each of these organizations created small units, replicable across the organization, that were organized and engineered to provide a high quality service to the customer. Clinical microsystems tend to be larger and more complicated, but the essential idea is similar. They are recognizable sub-systems within the larger organization, with their own purpose, organization and delivery processes; they are small enough to permit defined improvement programmes to take hold relatively quickly, but large enough to influence the care of considerable numbers of patients. In a series of studies in the 1990s and beyond, Mohr and Donaldson (Nelson, Batalden and Godfrey, 2007) defined the qualities of high performing microsystems:
- Constancy of purpose over time;
- Investment in improvement in terms of both time and resource;
- Alignment of role and training for efficiency and staff needs;
- Interdependence of the care team in meeting patient needs;
- Integration of information and technology into work flows;
- Ongoing measurement of outcomes;
- Support from the wider organization;
- Connection to the community to enhance care and extend influence.
Improvement then requires a simultaneous focus on processes, organization, supervision, training and teamwork, underpinned by leadership, constancy of purpose and support from senior leaders. In this chapter we will see how these ideas have been put into practice, but first we need to briefly consider the difficult topic of the evaluation of safety and quality improvement.
The evaluation of complex interventions
A clinical trial is modelled on the comparatively tidy, isolated laboratory experiment, but while the two are conceptually similar they are very different in practice. The concept is simple – comparing the response in two or more groups – but carrying it out is challenging, both methodologically and practically. A drug trial, for instance, has to separate the action of the drug from everything else that affects the course of a disease, and must also encompass the variability in patients’ lives, disease course and other illnesses. Now consider a surgical trial comparing, for instance, open and laparoscopic procedures. This is more complicated still because the intervention varies as well as the patient: surgeons, facilities and post-operative care may all differ, making it even more difficult to identify any advantages of the new surgical technique; the noise can drown out the signal. Now consider a still more complex problem, that of evaluating a safety and quality intervention within an organization; a number of methodological issues particularly stand out.
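To make the ‘signal and noise’ point concrete, the following is a purely illustrative simulation, not drawn from any study mentioned here: the treatment effect stays exactly the same size, yet it becomes progressively harder to detect as the variability contributed by surgeons, facilities and aftercare grows. All numbers are invented.

```python
import random
import statistics

def simulate_trial(effect=0.5, noise_sd=1.0, n=100, seed=1):
    """Simulate one two-arm trial and ask: is the effect detected?"""
    rng = random.Random(seed)
    control = [rng.gauss(0.0, noise_sd) for _ in range(n)]
    treated = [rng.gauss(effect, noise_sd) for _ in range(n)]
    diff = statistics.mean(treated) - statistics.mean(control)
    se = (statistics.variance(control) / n + statistics.variance(treated) / n) ** 0.5
    return abs(diff / se) > 1.96  # crude two-sided test at roughly the 5% level

def detection_rate(noise_sd, trials=500):
    """Fraction of simulated trials in which the fixed effect is detected."""
    return sum(simulate_trial(noise_sd=noise_sd, seed=s) for s in range(trials)) / trials

# The 'treatment effect' never changes; only the surrounding variability does.
for sd in (1.0, 2.0, 4.0):
    print(f"noise sd {sd}: effect detected in {detection_rate(sd):.0%} of trials")
```

The effect itself is constant, yet the detection rate collapses as the variability grows – exactly the predicament of the surgical trial, and still more of the organizational evaluation.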
A hospital is a complex adaptive system
The structure of most healthcare organizations is a fairly rigid hierarchy, complicated by the inclusion of a number of distinct professional hierarchies. It can be very frustrating, and puzzling, to plan and set out a clear programme of organizational change and then find that in practice it doesn’t proceed as expected. Organizations, in fact, change all the time, in response to both internal and external influences, and their dynamic quality is just one of the features that makes organizational change complex; one is constantly working with a moving target. Clinical microsystems, and indeed healthcare organizations, are complex adaptive systems (CAS), constantly evolving and changing. CASs are networks of many agents, which may be people, cells, species, companies or countries, in which all the agents constantly act and react to each other. The control of these systems tends to be highly dispersed and decentralized; in fact there may be no clear locus of control. Coherent and predictable behaviour in such systems arises from both competition and cooperation amongst the agents. The overall behaviour is the result of a huge number of decisions made every moment by many individual agents (Waldrop, 1992).
This definition of a CAS was coined by John Holland, Murray Gell-Mann and others at the Santa Fe Institute. Examples of CASs include the stock market, social insect colonies, the ecosystem, the brain and the immune system, the cell and the developing embryo, manufacturing businesses and any human social system, such as political parties or communities. The language of the definition is rather alarming, and the extent to which analogies can really be drawn between the behaviour of the immune system and the functioning of a hospital is obviously not a simple question. Fortunately though, we do not have to worry about that at the moment. The essential point to grasp for our purposes is that hospitals are not just complicated, which they are, but complex in the sense that the coherence of the system can only arise from many individual people acting and responding to one another, rather than from central direction (Plsek and Greenhalgh, 2001). This is in a sense obvious, but it is not necessarily how the organizations are run in practice; the crisp diagrams of organizational structure and clear hierarchies suggest that orders proceed from the top and actions follow. The complex view is that a hospital is more like a biological system with multiple interacting components. To clinicians who constantly struggle with the complexity and unpredictability of biological systems, the image is familiar, though few probably apply biological thinking to running their units. If one thinks in terms of trying to change a biological system, then the difficulties and the fluid, dynamic nature of change become less puzzling.
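As a loose illustration of this idea – offered as an invented toy sketch, not as anything from the safety literature – the following model shows coherent system-level behaviour emerging from many agents, each following simple local rules with no central controller:

```python
import random

# Toy sketch of a complex adaptive system: many agents (say, ward teams), each
# adjusting its own 'practice' in reaction to a few others it encounters.
# There is no central controller; all rules and numbers are invented.
random.seed(42)

agents = [random.random() for _ in range(50)]  # each agent's current practice

def step(state):
    """Each agent moves a little toward a few randomly met peers, plus noise."""
    new = []
    for x in state:
        peers = random.sample(state, 3)
        pull = sum(peers) / 3 - x
        new.append(x + 0.3 * pull + random.gauss(0, 0.02))
    return new

def spread(state):
    return max(state) - min(state)

print(f"initial spread of practice: {spread(agents):.2f}")
for _ in range(100):
    agents = step(agents)
print(f"spread after 100 rounds of local interaction: {spread(agents):.2f}")
```

The order in the final state is not imposed from the top; it emerges from many local actions and reactions, which is one reason redrawing the organizational chart rarely changes the behaviour of the system by itself.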
The intervention evolves over time
Assessing the impact of safety and quality interventions is difficult, partly because hospitals are complex, but also because the intervention itself may be complex, ranging over many different clinical areas and across different levels of the organization; front line staff, middle management and the executives are all involved. In a randomized trial, one endeavours to specify the treatment as closely as possible; this is relatively straightforward for drugs and somewhat more difficult for surgery, but both can be specified with reasonable clarity. For an organizational intervention lasting, say, two years, there is no way of knowing in advance how events will unfold or how the intervention will be taken up. This is not necessarily a failure of specification: it may actually be counterproductive to try to specify and standardize every aspect of the intervention, leaving no room for local innovation. Plsek and Greenhalgh (2001) point out that the CAS perspective suggests that, when planning interventions, one should specifically plan and allow for an evolving intervention by:
- Using biological metaphors to guide thinking;
- Creating conditions in which the system can naturally evolve over time;
- Providing simple rules and minimum specifications;
- Setting forth a good enough vision and creating a wide space for natural creativity to emerge from local actions within the system.
Each organization reacts to and embraces (or rejects) the intervention and so, in a very real sense, each organization actually gets a different intervention – even if all organizations attended the same learning sessions and had the same materials and information. To make matters worse for those attempting to understand the impact of any intervention, the effects of such programmes evolve over a long period and continue after the formal programme has ended. Finally, there is the additional complication that organizations do not all start in the same place and are not equally ready for such interventions; this issue is discussed further below.
We might add to this that it is also not clear to what extent there is a ‘right’ way to do things. In practice, organizations approach improvement in very different ways, even when they are allegedly following the same programme (see below). We might see this as simply reflecting our ignorance. For instance, before antibiotics were discovered, there were dozens of different remedies and tests of treatments for tuberculosis, which all vanished like smoke once a true treatment emerged. However, although we may be able to isolate some fundamentals, it is unlikely that a ‘one size fits all’ model will ever emerge. Organizations always have to start from where they are, which varies considerably, and adapt their efforts to local conditions and the changing external environment.
We must return again to measurement. With some notable exceptions, not nearly enough attention has been paid to measurement in most quality and safety improvement programmes. Usually some indices are monitored, but not nearly enough to reflect the breadth of the actual change programme; the need to ‘get on and change things’ often seems to take precedence over the need to find out later whether all one’s efforts were worthwhile. It is also tricky to predict all the potential variables of interest. Suppose, for instance, one introduces an intervention to improve the reliability of antibiotic use on surgical wards; the surgical wards fail to make use of it and no improvement is seen, but two of the medical wards hear about it, successfully adapt it and show major change. Is this a failure of the intervention or an unexpected success? Both, really.
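To give a concrete sense of the kind of routine measurement being argued for, the sketch below tracks a single process measure – the weekly proportion of patients receiving antibiotics on time – against simple three-sigma control limits. The data, and the choice of a p-chart, are purely illustrative and not taken from any programme described in this chapter.

```python
# Illustrative only: a simple p-chart for one process measure - the weekly
# proportion of patients receiving antibiotics on time. All data are invented.
weekly_timely = [38, 41, 36, 40, 44, 47, 46, 48]  # patients treated on time
weekly_total = [50, 52, 49, 50, 51, 50, 49, 50]   # eligible patients each week

p_bar = sum(weekly_timely) / sum(weekly_total)    # overall proportion
for week, (x, n) in enumerate(zip(weekly_timely, weekly_total), start=1):
    p = x / n
    sigma = (p_bar * (1 - p_bar) / n) ** 0.5      # binomial standard error
    lcl, ucl = p_bar - 3 * sigma, p_bar + 3 * sigma
    flag = "  <-- outside control limits" if not (lcl <= p <= ucl) else ""
    print(f"week {week}: {p:.2f} (limits {lcl:.2f}-{ucl:.2f}){flag}")
```

Even a chart this simple distinguishes ordinary week-to-week variation from genuine change, which is precisely the judgement an improvement programme needs to make.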
Reviews and comparisons of different quality and safety improvement plans have generally found evidence for modest improvements (Schouten, Hulscher and Everdingen, 2008) but none of the various, and numerous, broad strategies and approaches really stands out as ‘the way forward’. The situation was well summarized by Richard Grol and colleagues in 2002, and still applies today:
From what we know, no quality improvement programme is superior and real sustainable improvement might require implementation of some aspects of several approaches – perhaps together, perhaps consecutively. We just do not know which to use, when to use them, or what to expect.
(Reproduced from Quality & Safety in Health Care, R. Grol, R. Baker, F. Moss, ‘Quality improvement research: understanding the science of change in health care’, 11(2), 110–111, 2002, with permission from BMJ Publishing Group Ltd.)
In reflecting on the programmes discussed in the remainder of this chapter, we need to be cognisant of both the methodological and the practical difficulties. It is certainly right to require evaluation of such programmes, but we should also recognize the difficulty of defining the intervention, or of predicting the course it might take as it evolves within the organization. Much more attention needs to be given to measurement and evaluation, but we need to understand the difficulties and challenges involved.
Improving the safety of intensive care
Our first example of long-term, sustained change is the work of Peter Pronovost, Albert Wu and their colleagues in critical care medicine at Johns Hopkins Hospital. This work is remarkable on several counts. First, the leaders all have a serious and longstanding commitment to patient safety. Second, the work has been sustained over a decade now, with continuous evolution and refinement. Third, they have combined a desire for improvement with an equal passion for science and measurement; and fourth, they have documented both the journey and the outcomes (Box 19.1). Many other leaders have a similarly longstanding commitment, but the evaluation and publication of the Johns Hopkins team have made their work particularly influential. We cannot do justice to the entire programme, but we will review some of the salient features.
BOX 19.1 Safety, quality and science
When we started this work, the safety and delivery of care were often not viewed as science. Science seemed to include understanding disease biology and identifying effective interventions but not ensuring patients received those interventions; this work was viewed as the art of medicine. What I tried to do was to highlight to our hospital and medical school leadership that there is science in the delivery and that we often do this science poorly. As a result, patients suffer harm. By exposing that, we revealed the dissonance between our pride and our belief that we are a great institution and the reality that some people were being hurt by adverse events, or were not having the best outcomes or receiving evidence-based therapies. Through what admittedly was a bit of a risky strategy – discussing sentinel events with our CEO, department chairs, and board – and making the dissonance real to them – the institution was galvanized into realizing the need to apply science to the delivery of care, just as we apply it to everything else. The delivery of care is really a learning lab for safety and quality. We continually try to evaluate, in a rigorous way, how we are doing things and how we can do them better.
(Peter Pronovost in conversation with Bob Wachter; accessed at www.ahrq.org)
The Hopkins team assumed from the outset that safety interventions could only take root if front line staff were aware of the hazards patients faced and of the need for change. A positive safety culture was regarded as essential: by no means sufficient to produce change, but a necessary foundation. Safety critical attitudes, beliefs and behaviours need to be embedded at all levels of the organization, so that, as far as possible, everyone begins with a shared set of assumptions.
They administered a safety climate survey to 395 nurses, doctors and managers, including all 8 of the ICU physicians. A safety leadership survey was carried out in parallel, which assessed both the priority that leaders at various levels of the organization believed they gave to safety and the priority staff perceived them to give; leaders might believe they were prioritizing safety, but those on the ground might see things very differently (Pronovost, Weast and Holzmueller, 2003). A small sketch of this kind of gap analysis follows the list below. Taken together, these surveys suggested that:
- A proactive strategic planning process was needed.
- Senior leaders needed to become much more visible to front line staff in their efforts to improve patient safety.
- Greater efforts were needed to involve and educate physicians about patient safety.
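As promised above, here is a minimal, invented sketch of the ‘perception gap’ idea behind the parallel surveys: comparing the priority leaders say they give to safety with the priority front line staff perceive them to give. The groups, the 1–5 scale and the scores are all hypothetical, chosen only to illustrate the computation.

```python
# Invented sketch of a 'perception gap' analysis: the priority leaders say they
# give to safety versus the priority front line staff perceive. Scores are on
# a hypothetical 1-5 scale; group names and values are made up.
self_rated = {"executives": 4.6, "department heads": 4.2, "charge nurses": 4.0}
staff_perceived = {"executives": 3.1, "department heads": 3.4, "charge nurses": 3.9}

for level in self_rated:
    gap = self_rated[level] - staff_perceived[level]
    print(f"{level}: self-rated {self_rated[level]:.1f}, "
          f"perceived {staff_perceived[level]:.1f}, gap {gap:+.1f}")
```

The pattern to look for is a gap that widens with distance from the front line: the further a leader sits from the ward, the larger the difference between intention and perception is likely to be.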
The ICU reporting system discussed earlier also provided an important window onto the systemic issues the team faced. The reports were used to surface safety problems and to make an initial assessment of their causes and contributory factors, such as the need for training and inadequate staffing. Note that the reporting system is simply analytical and diagnostic, not viewed as a solution or as a safety intervention in itself.
Translating evidence into practice
The safety interventions are grounded in clinical practice and evidence-based medicine. The goal is to deliver evidence-based practice reliably and without harming patients. That is, of course, every clinician’s goal; as we have seen, however, the tricky part is making it happen. The approach has five key components (Pronovost, Berenholtz and Needham, 2008):
- A focus on systems (how work is organized) rather than care of individual patients;
- Engagement of local interdisciplinary teams to assume ownership of the improvement project;
- Creation of centralized support for the technical work;
- Encouraging local adaptation of the intervention;
- Creating a collaborative culture within the local unit and larger system.
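One concrete example of the technical measurement work that such centralized support must sustain is the central line infection rate referred to below: the standard formulation is infections per 1,000 catheter-days. The counts in this brief sketch are invented for illustration.

```python
# The standard central line infection measure: infections per 1,000
# catheter-days. Counts below are invented for illustration.
infections = 3          # infections observed in the period
catheter_days = 1870    # total days patients had a central line in place

rate = infections / catheter_days * 1000
print(f"{rate:.2f} infections per 1,000 catheter-days")  # 1.60
```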
This approach has evolved and matured into the Johns Hopkins Quality and Safety Research Group translating evidence into practice model (Figure 19.1). Considerable attention is paid to establishing clear objectives and the associated measures, such as infection rates from central lines. The evidence and the objectives form the core, but it is also necessary to understand the realities of the work process and its context, and the barriers to doing the job well. Some of this understanding comes from formal analytic methods, such as incident analysis, but much comes from simply watching and talking: