Chapter 28 Algorithms, clinical pathways and clinical guidelines
GUIDELINE HISTORY
In the medical profession, the development of clinical guidelines was first documented in the mid-1970s. Interest in guidelines that specifically address allied health practice took another 10–15 years to emerge. Guidelines were initially produced as statements of best practice regarding the structure and organization of healthcare facilities, and were usually associated with hospital accreditation programmes (Donabedian 1992). These guidelines dealt with issues such as staff registration and training, hygiene, safety and business management. They also highlighted adverse events (events that should not occur in a safe healthcare environment), such as avoidable deaths, unplanned readmission to hospital for the same condition within a specified period, infections, falls and other avoidable injuries. In the Western world, public and private hospitals are expected to be accredited with a national body to demonstrate that they comply with specified standards of best practice relating to quality care. Compliance with accreditation guidelines is usually measured by performance indicators and benchmarking (within or between organizations), adjusted for size, staffing complement and location.
In the mid-1980s, the Australian Physiotherapy Association led the physiotherapy world, developing a private physiotherapy practice accreditation programme that specified quality frameworks for allied health service delivery (Grimmer et al 1998). This programme has gone from strength to strength in Australia and has formed the basis of accreditation programmes for private and public physiotherapy services in many other countries. Operating on the understanding that quality structural elements of care will produce good health outcomes, the accreditation programme applies consensus standards to each element involved in delivering a high-quality physiotherapy service, including staff training, record keeping, length of appointments, regulating the performance of equipment, safety and cleanliness of premises, access, accounting systems and other business practices.
In recent years, academic and clinical interest has turned to developing recommendations to assist in the delivery of high-quality health care as a mechanism to improve health outcomes, reduce adverse events and variations in clinical practice, and contain costs (Anderson & Mooney 1991, Antman et al 1992, Sackett et al 2000, Wilson & Harrison 1997). These recommendations have variously been called clinical guidelines, clinical (or care) pathways, care decision-making processes, algorithms or flowcharts. Whatever the nomenclature, these recommendations usually address aspects of clinical decision making such as what care should be provided, how it should be delivered, by whom, where, what equipment is required, and how much care should be provided (Hill 1998).
In this chapter we explore clinical guidelines within a clinical decision-making framework. The usefulness of clinical guidelines should be monitored by process indicators that answer questions such as, ‘Was the guideline, or guideline elements, applied using appropriate clinical reasoning approaches?’, or ‘Did the application of the guideline achieve desired health and cost outcomes?’ (Burgers et al 2003). Thus the application of any clinical guideline should include monitoring processes that allow reflection on both the processes and the outcomes of care (Wilson & Harrison 1997), the quality of care provision (Donabedian 1992) and stakeholder satisfaction with the care encounter (Cleary & McNeil 1988).
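To make the idea of process and outcome monitoring more concrete, the sketch below shows one simple way such indicators could be computed from an audit of care episodes. It is a minimal illustration only; the field names, scores and the choice of a mean-improvement measure are assumptions for the example, not part of any published guideline or monitoring framework.

```python
# Minimal illustrative sketch: a process indicator (guideline adherence) and an
# outcome indicator (mean improvement) computed from a hypothetical audit.
# All field names and figures are invented for illustration.
from dataclasses import dataclass
from typing import List

@dataclass
class Episode:
    followed_guideline: bool   # was the recommended care process applied?
    baseline_score: float      # outcome measure before care (e.g. pain, 0-10)
    discharge_score: float     # outcome measure after care

def process_indicator(episodes: List[Episode]) -> float:
    """Proportion of episodes in which the guideline recommendation was followed."""
    return sum(e.followed_guideline for e in episodes) / len(episodes)

def outcome_indicator(episodes: List[Episode]) -> float:
    """Mean improvement in the outcome score across episodes."""
    return sum(e.baseline_score - e.discharge_score for e in episodes) / len(episodes)

if __name__ == "__main__":
    audit = [
        Episode(True, 7.0, 3.0),
        Episode(True, 6.0, 4.0),
        Episode(False, 8.0, 7.0),
    ]
    print(f"Guideline adherence: {process_indicator(audit):.0%}")
    print(f"Mean improvement:    {outcome_indicator(audit):.1f} points")
```

In practice, of course, such indicators would be interpreted alongside reflection on the care process, case mix and patient satisfaction rather than read as bare numbers.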
There has been a recent explosion of allied health clinical guidelines around the world, either as discipline-specific recommendations or as multidisciplinary approaches. Underpinning the development of quality allied health guidelines is the recently published ‘Framework for clinical guideline development in physiotherapy’ (Van der Wees & Mead 2004), which specifies the processes of guideline development relevant to any allied health discipline.
THE LINK WITH CLINICAL REASONING AND PATIENT OUTCOMES
Evidence-based medicine was famously described by Sackett et al (2000) as the judicious use of current best evidence in making decisions about individual patient care. Clinical guidelines are a synthesis of current best evidence: ‘systematically developed statements to assist practitioner and patient decisions about appropriate health care for specific clinical circumstances’ (Field & Lohr 1990, p. 38). Clinicians’ resistance to implementing clinical guidelines is believed to reflect a fear that guideline use will undermine the autonomy of their clinical decision making (Grol & Grimshaw 1999). Social factors appear to have a strong influence on compliance: guidelines are more likely to be accepted if they are produced locally, and if they reinforce local consensus rather than requiring a change in routine (Fairhurst & Huby 1998). Unfortunately, this is somewhat counterproductive. Guidelines are intended to promote better clinical reasoning and better care on a community-wide basis, and merely reinforcing local practice may not lead to practice according to the best available evidence.
Other barriers to implementing guidelines have been outlined by Entwistle & Shiffman (2005, p. 1).
Guideline-based care is meant to provide a framework for clinicians’ decision making, not to replace clinician-patient interactions and decision-making processes.
The most useful clinical practice guidelines should be comprehensive, based on valid sources of high-quality research evidence, regularly updated, and widely disseminated for comment prior to implementation (Feder et al 1999, Grol et al 1998). Clinical guidelines provide recommendations for the management of specific conditions, based on current best evidence. Best evidence may reflect the highest available level of research evidence or, in the absence of such evidence for a specific clinical question, may draw on expert or consensus opinion. Best evidence may also change from day to day, as new research findings emerge or as debate continues about interventions with equivocal evidence. The Australian National Health and Medical Research Council (NH&MRC) provides a comprehensive guide for guideline developers, identifying the levels and strength of evidence that should be considered in guidelines (NH&MRC 1998), how recommendations should be framed, and the process of obtaining stakeholder (clinician, patient, referrer) feedback on the guidelines prior to implementation. Guidelines need to reflect best practice, applied appropriately within the local environment and in a way that addresses patient choice and values.
Much continues to be written about the need for guideline development processes that are rigorous and transparent, as well as for ongoing research to test whether guidelines actually influence clinical practice decisions, improve patient outcomes or contain costs (Grol 2000, Haycox et al 1999, Margolis 1999). There is continuing debate about whether or not guidelines really do improve patient outcomes and decrease costs. These concerns are particularly relevant when guideline recommendations contradict current clinical practice, or when significant change in practice behaviours is required in order to implement guidelines in a sustainable manner (Feder et al 1999, Grimmer et al 2003, Shekelle et al 1999).
CONSTRUCTING GUIDELINES
Guideline developers should provide users with sufficient information to demonstrate the validity and clinical utility of the recommendations (NH&MRC 1998), and to allow users to access the cited evidence sources and judge the validity of the recommendations for themselves. Guideline information should be presented simply and should fully document both the process of developing the guideline and the sources of supporting evidence. Guidelines should be based on current best practice and on recognition of therapist and patient autonomy in decision making (Sackett et al 2000), thereby presenting clinicians with fewer barriers to implementation (Feder et al 1999, Haycox et al 1999, NH&MRC 1998).
GRADING THE EVIDENCE
Most guidelines deal with the effectiveness of specific approaches to treatment, and hence seek to provide users with information from secondary research (systematic reviews or meta-analyses of experimental studies) or from primary research in the form of individual experimental studies, quasi-experimental studies or case studies. However, as guidelines become more sophisticated, they increasingly contain recommendations on diagnosis and risk assessment, drawn from epidemiological and diagnostic classification studies, as well as recommendations on the cost-effectiveness of interventions, drawn from studies that have undertaken some form of cost-benefit analysis.
A recommended process in developing guidelines is to evaluate the relevant literature using several evidence dimensions (NH&MRC 2000, Sackett et al 2000). This approach provides a framework for establishing the strength of a body of evidence. The NH&MRC guideline development recommendations identify two primary evidence dimensions: first, the hierarchy (or level) of evidence, which ranks study designs according to their potential for error in interpreting findings; and second, the methodological quality of the study. Methodological quality may be evaluated using one of the published critical appraisal tools (Katrak et al 2004) or by establishing key quality criteria relevant to the topic (Higgins & Green 2005).
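As a rough illustration of how these two dimensions might be combined, the sketch below assigns each study an evidence level based on its design and checks it against a methodological quality threshold before treating it as support for a strong recommendation. The level labels loosely echo an NH&MRC-style hierarchy, but the design-to-level mapping, the quality scoring and the 0.7 threshold are simplified assumptions made for this example, not the published criteria.

```python
# Illustrative sketch: ranking studies on two evidence dimensions
# (design hierarchy and methodological quality). The mapping and the
# quality threshold below are simplified assumptions, not published criteria.
from dataclasses import dataclass

# Assumed mapping from study design to evidence level (I = highest).
DESIGN_LEVELS = {
    "systematic review of RCTs": "I",
    "randomised controlled trial": "II",
    "controlled study without randomisation": "III",
    "case series / case study": "IV",
    "expert opinion": "V",
}

@dataclass
class Study:
    citation: str
    design: str
    quality_score: float  # proportion of appraisal criteria met (0-1)

def evidence_level(study: Study) -> str:
    """Return the hierarchy level implied by the study design."""
    return DESIGN_LEVELS.get(study.design, "unclassified")

def supports_strong_recommendation(study: Study, quality_threshold: float = 0.7) -> bool:
    """A study counts towards a strong recommendation only if it sits high in the
    hierarchy AND meets the (assumed) methodological quality threshold."""
    return evidence_level(study) in ("I", "II") and study.quality_score >= quality_threshold

if __name__ == "__main__":
    studies = [
        Study("Smith 2003", "randomised controlled trial", 0.85),
        Study("Jones 2001", "case series / case study", 0.90),
    ]
    for s in studies:
        print(s.citation, evidence_level(s), supports_strong_recommendation(s))
```

The point of the two-dimensional check is that neither dimension is sufficient on its own: a poorly conducted trial and a well-conducted case series both fail the test, for different reasons.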
HIERARCHY OF EVIDENCE
The lack of international consensus about the appropriate ranking of systematic reviews, experimental studies, epidemiological studies, case studies and expert opinion means that two guidelines derived from the same literature could produce completely different sets of recommendations, underpinned by the same evidence but ranked differently with respect to hierarchy and importance. This raises a deeper philosophical point about the ways in which different kinds of evidence are privileged. Evidence-based medicine tends to privilege randomized controlled trials over case studies, in the belief that such trials bring us accurate knowledge. However, there is increasing support for the case that clinical reasoning is largely based on narrative knowing (Greenhalgh 1998). Its proponents argue that experts provide better care because they are familiar with a greater number of cases and the subtle differences between them, and that novices learn their practical clinical reasoning by building up their own case knowledge; on this view, case studies should enjoy a higher status than they currently do within evidence-based medicine. Greenhalgh (1999) argued that clinical reasoning is a combination of narrative and the best evidence.