Implementation Science and Quality Improvement


Each entry below lists the model, its dissemination (D) and/or implementation (I) classification, and a description:

Diffusion of Innovations (D-only): Explains the factors and processes contributing to how, why, and how quickly new advances are spread and adopted in societies over time.

RE-AIM (Reach, Efficacy, Adoption, Implementation, Maintenance) (D = I): Assists with the planning, evaluation, reporting, and review involved in translating research into practice.

Precede-Proceed Model (Predisposing, Reinforcing, and Enabling Constructs in Educational/Environmental Diagnosis and Evaluation; Policy, Regulatory, and Organizational Constructs in Educational and Environmental Development) (D = I): Diagnoses factors contributing to the health of the target population to assist in planning health programs (PRECEDE) and provides measures to evaluate the implementation process, impact, and outcome (PROCEED).

PRISM (Practical, Robust Implementation and Sustainability Model) (I > D): Evaluates how the inter-relationships between the external environment, the implementation and sustainability infrastructure, the intervention, and the recipients influence implementation.

PARIHS (Promoting Action on Research Implementation in Health Services) (I-only): Describes successful implementation as a function of the evidence, the context in which the evidence is being introduced, and the facilitation strategies utilized.

CFIR (Consolidated Framework for Implementation Research) (I-only): Provides a taxonomy consisting of five domains (characteristics of individuals, intervention characteristics, inner setting, outer setting, and process) and multiple constructs that influence implementation; unifies key constructs from multiple published theories in order to build implementation knowledge across settings and studies.

Examples and classification as dissemination (D) and/or implementation (I) models taken from Tabak et al. [4]. D-only indicates a dissemination-only focus; D = I, an equal focus on both; I > D, a predominantly implementation focus; I-only, an implementation-only focus.





8.2 QI and Implementation Interventions (or Strategies)


Changing practice through QI efforts or implementation of clinical interventions (e.g., peri-operative antibiotic or venous thromboembolism prophylaxis) can be challenging. There are multiple QI and implementation interventions or strategies that have been utilized either alone or in combination to facilitate or compel change. These interventions may target multiple levels, including patients, providers, organizations, and/or communities. Examples of QI or implementation interventions include patient or provider incentives, financial penalties, audit and feedback, educational initiatives, computerized reminders, collaborative involvement, and community-based QI teams. Systematic reviews suggest that most of these interventions have resulted in at least a modest improvement in performance, although the quality of the evidence is poor. Although these interventions are often complex and target multiple levels, there is no clear evidence to support the superiority of multi-faceted versus single interventions. Ultimately, there is no single or multi-faceted intervention that is effective across all settings, nor is a single intervention necessarily the only effective method for facilitating change within a specific setting. Multiple approaches may be equally successful in producing the same outcome, a concept known as equifinality. Further research is necessary to determine which QI or implementation interventions are most effective in which settings.

Continuous quality improvement (CQI): A widely used QI strategy is CQI, also known as Total Quality Management (TQM). CQI is derived from a strategy used by the manufacturing industry which evaluates a process over time to determine whether variation is present that causes the process to be unstable and unpredictable. If a process is out of control, iterative cycles of small changes are made to address the problem. These small-scale changes are referred to as Plan-Do-Study-Act (PDSA) or Plan-Do-Check-Act (PDCA) cycles (Fig. 8.1a).
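The PDSA logic can be sketched as a simple loop. This is a minimal illustration only; the change names, observed rates, and function name are hypothetical, not from the chapter:

```python
# Illustrative sketch of iterative PDSA cycles: each small-scale change is
# studied against current performance and adopted only if it improves it.

def run_pdsa_cycles(baseline_rate, cycles, target=0.95):
    """Run Plan-Do-Study-Act cycles until a target compliance rate is met.

    cycles: list of (change_name, observed_rate) pairs, where observed_rate
    is the compliance measured after the small-scale test (the Do/Study steps).
    """
    rate = baseline_rate
    adopted = []
    for name, observed in cycles:
        # Act: adopt the change only if the studied result improved performance;
        # otherwise abandon it and try the next change.
        if observed > rate:
            rate = observed
            adopted.append(name)
        if rate >= target:  # stop once the process reaches the target
            break
    return rate, adopted

# Hypothetical example: raising VTE-prophylaxis compliance from a 70 % baseline.
final_rate, adopted = run_pdsa_cycles(
    0.70,
    [("pre-printed order set", 0.80),
     ("computerized reminder", 0.88),
     ("audit and feedback", 0.96)],
)
```

In practice the Do and Study steps involve collecting and analyzing real process data on a small scale, and the adopt-or-abandon decision in Act is rarely this mechanical; the sketch only shows how each cycle is informed by the data from the previous one.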



Fig. 8.1
Tools of continuous quality improvement (CQI). (a) Plan-Do-Study-Act (PDSA) cycles test a series of small-scale changes whereby each new change is informed by data from previous cycles of change. (b) Statistical process control (SPC) charts evaluate for special-cause variation, identified by outliers beyond three standard deviations from the mean, represented by the upper and lower control limits. (c) Pareto charts show the relative frequency of factors contributing to the problem in descending order (represented by the bars) as well as the cumulative percentage contributed by the sum of the factors (represented by the line). (d) Fishbone diagrams (also called cause-and-effect or Ishikawa diagrams) demonstrate the causes and sub-causes of the problem. (e) Flowcharts demonstrate the steps in a process (e.g., preparing a patient for surgery and ensuring compliance with peri-operative antibiotic and venous thromboembolism (VTE) prophylaxis measures)

Several tools assist in the performance of CQI. Variation is evaluated using statistical process control (SPC) charts (Fig. 8.1b). Special-cause variation exists (i.e., a process is out of control) if the process falls outside the upper and lower control limits, more than three standard deviations from the process mean in either direction. SPC charts can be used to both monitor and manage change. If special-cause variation or a problem is detected, various tools can be used to diagnose the contributing factors. Pareto charts (Fig. 8.1c) depict the frequency of each factor in descending order as well as the cumulative frequency. They are based on the principle that 80 % of the problem results from 20 % of the contributing factors. Identification of the major factors contributing to the problem can guide initial QI efforts and maximize their impact. Fishbone diagrams (also called cause-and-effect or Ishikawa diagrams, Fig. 8.1d) are also used to systematically identify and categorize all of the factors contributing to a problem. The "spine" represents the problem, while the major branches or "bones" depict the major causes of the problem. Minor branches represent sub-causes. Flowcharts (Fig. 8.1e) are used to depict the steps of a process to identify where changes are necessary. Other tools such as check sheets, scatter diagrams, and histograms are also used. More details about CQI and related tools can be found in textbooks and courses (see Resources).
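The SPC and Pareto calculations can be sketched in a few lines. This is an illustrative sketch with hypothetical function names and data; in practice, control limits are estimated from a baseline, in-control period and then applied to new observations:

```python
import statistics

def control_limits(baseline):
    """Return (mean, LCL, UCL), with limits three standard deviations from the mean."""
    mean = statistics.mean(baseline)
    sd = statistics.pstdev(baseline)
    return mean, mean - 3 * sd, mean + 3 * sd

def special_cause_points(points, baseline):
    """Indices of points outside the control limits derived from the baseline period."""
    _, lcl, ucl = control_limits(baseline)
    return [i for i, x in enumerate(points) if not lcl <= x <= ucl]

def pareto_table(factor_counts):
    """Factors in descending frequency, each with its cumulative percentage."""
    total = sum(factor_counts.values())
    rows, cumulative = [], 0.0
    for factor, n in sorted(factor_counts.items(), key=lambda kv: -kv[1]):
        cumulative += 100.0 * n / total
        rows.append((factor, n, round(cumulative, 1)))
    return rows
```

For example, with monthly event counts whose baseline mean is about 10.5 and standard deviation about 1.0, a month with 14 events falls above the upper control limit and would trigger a search for special causes; the Pareto table then shows which contributing factors to target first.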


8.3 Research Designs


There are several questions to consider in designing QI or implementation research: What is the quality of the evidence and strength of recommendation for the clinical intervention? Has the intervention been tested in real world conditions? Has the intervention been tested in the particular setting or population of interest? What are the optimal strategies for facilitating uptake or adoption of the clinical intervention in that particular setting or population?

Strong versus weak recommendations for an intervention: There are multiple tools and resources for evaluating the level of evidence and strength of recommendation for an intervention; one system that is frequently used in translating evidence into guidelines is GRADE (Grading of Recommendations Assessment, Development and Evaluation). The quality of the evidence is determined by the study design; sources of bias due to methodological limitations; and the magnitude, consistency, and precision of the estimate of treatment effect. The strength of the recommendation accounts for the overall benefits versus risks of an intervention, burdens, costs, and patient and provider values. The quality of evidence and strength of recommendation for an intervention or guideline affect the implementation process and evaluation. For example, guidelines based on high-quality evidence may result in greater stakeholder acceptance and easier implementation; measuring provider adoption of the guidelines may be adequate to ensure success. On the other hand, guidelines based on only moderate-quality evidence may be harder to implement and require more rigorous assessment of their effect on patient outcomes.

Efficacy versus effectiveness: Efficacy or explanatory trials test an intervention under tightly controlled or "ideal" circumstances in order to isolate its effect in a small, homogeneous, highly compliant patient population. Effectiveness or pragmatic trials test an intervention in the "real world" in a large, heterogeneous patient population. Efficacy trials focus on internal validity or minimization of bias, while effectiveness trials focus on external validity or generalizability. The PRECIS (Pragmatic-Explanatory Continuum Indicator Summary) framework is a tool researchers can use to place a trial along this continuum. The ten dimensions are depicted as spokes on a wheel, with the explanatory pole at the hub and the pragmatic pole at the rim (Fig. 8.2). Trials may fall anywhere along the continuum between efficacy and effectiveness, but trials towards the effectiveness end of the spectrum tend to be more amenable to translation into practice.
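As a toy illustration of how a continuum rating might be summarized: the sketch below assumes each dimension is rated from 1 (explanatory, at the hub) to 5 (pragmatic, at the rim), a convention modeled loosely on the later PRECIS-2 revision of the tool, and is not the published instrument:

```python
def continuum_position(dimension_scores):
    """Mean of per-dimension ratings; higher values indicate a more pragmatic trial.

    dimension_scores: one rating per PRECIS dimension, each from
    1 (explanatory, hub) to 5 (pragmatic, rim). Illustrative only.
    """
    if not all(1 <= s <= 5 for s in dimension_scores):
        raise ValueError("each dimension is rated 1 (explanatory) to 5 (pragmatic)")
    mean = sum(dimension_scores) / len(dimension_scores)
    label = "pragmatic" if mean >= 3 else "explanatory"
    return mean, label

# A tightly controlled efficacy trial scores near the hub on every spoke:
position = continuum_position([1, 2, 1, 1, 2, 1, 1, 2, 1, 1])
```

The published tool is scored graphically rather than as a single average; collapsing the wheel to one number loses information about which dimensions pull a trial toward either pole.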



Fig. 8.2
The pragmatic-explanatory continuum indicator summary (PRECIS) tool. (a) The indicators are at the periphery of the PRECIS wheel, which represents the pragmatic end of the continuum. (b) The indicators are at the center of the PRECIS wheel, which represents the explanatory end of the continuum

Effectiveness versus implementation: Effectiveness and implementation trials differ in their interventions, units of randomization and analysis, and outcomes; they also differ in the research methodologies used to assess these outcomes. Effectiveness trials evaluate clinical interventions (e.g., drugs or procedures), whereas implementation trials evaluate whether the intervention works when applied to a new patient population or in a different setting. Implementation trials may utilize one or more strategies aimed at promoting uptake of a clinical intervention as described above (e.g., audit and feedback). Effectiveness trials focus on the patient and individual health outcomes, whereas implementation trials focus on providers, units, or systems and proximal outcomes such as degree of adoption of a process measure. Rather than performing effectiveness and implementation trials sequentially, hybrid effectiveness-implementation designs have been proposed in order to minimize delays in translating evidence into practice. There are three types of hybrid designs, ranging from a primary focus on effectiveness (Type I) to a primary focus on implementation (Type III). In general, these designs are intended to test interventions with minimal safety concerns and preliminary evidence for effectiveness, including at least indirect evidence of applicability to the new population or setting.

To randomize or not to randomize? Randomized controlled trials (RCTs) are considered the gold standard for traditional clinical interventional studies because they minimize imbalances in baseline characteristics between treatment arms. However, RCTs are infrequently performed in QI research for a variety of reasons: costs, resources, and time; desire for rapid change; perceived favorable risk-benefit ratio of the intervention; and lack of trial expertise. Nonetheless, even QI or implementation interventions may not be effective in the real world or may have unintended consequences. As with other therapies, QI interventions tested in RCTs have been demonstrated to be ineffective or even harmful, such as a QI intervention to increase use of total mesorectal excision in rectal cancer [1], collaboratives to improve antibiotic prophylaxis compliance [2], and a bundle of evidence-based practices to prevent surgical site infections [3].

Aug 19, 2017 | Posted in GENERAL SURGERY