Using Data for Local Quality Improvement



Fig. 7.1
States' adjusted bypass surgery 30-day mortality rates vs. adjusted average yearly decline in bypass surgery mortality risk (1987–1992), controlling for age, gender, race, admission for acute myocardial infarction, and comorbidity. NNE refers to the Northern New England region states (Maine, New Hampshire, and Vermont). LoVol refers to ten states performing 500 or fewer bypass operations per year (Alaska, Wyoming, Delaware, Idaho, New Mexico, Hawaii, Rhode Island, Montana, North Dakota, and South Dakota). (Reprinted from Peterson et al. [9], with permission from Elsevier/American College of Cardiology.)



Data reports that rank hospitals or benchmark low-performing hospitals are commonly used; rankings are based on risk-adjusted and/or reliability-adjusted outcomes or on observed:expected (O:E) ratios. Because the focus is usually on the outliers, programs with mediocre performance may see more limited quality improvement unless they are motivated to move beyond a certain threshold. Further, fear of public reporting may lead to case selection bias and decisions not to treat higher-risk patients, and such fears may be magnified in extremely competitive markets, hindering meaningful quality improvement efforts and the ability to work together. The strategy of “pay for performance,” or P4P, leverages direct financial incentives (or penalties) to motivate quality improvement, with rewards usually tied to meeting specific benchmarks, such as process-of-care measures or clinical outcomes. One of the biggest limitations of P4P is the inadvertent creation of a highly competitive environment, which can likewise undermine collaboration.
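The O:E ratio mentioned above can be sketched in a few lines. This is an illustrative example only, not a method from the chapter: it assumes each patient record carries a 0/1 observed outcome (e.g., 30-day mortality) and a predicted risk from some risk-adjustment model; the function name and sample data are hypothetical.

```python
def oe_ratio(outcomes, predicted_risks):
    """Return a hospital's observed:expected (O:E) ratio.

    outcomes        -- list of 0/1 observed outcomes (e.g., 30-day mortality)
    predicted_risks -- list of model-predicted event probabilities per patient
    """
    observed = sum(outcomes)
    expected = sum(predicted_risks)  # expected event count under the model
    if expected == 0:
        raise ValueError("expected count is zero; cannot form an O:E ratio")
    return observed / expected

# Hypothetical hospital: 4 observed deaths against 5.0 expected deaths.
# An O:E below 1.0 means fewer events were observed than the model predicted.
outcomes = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]
risks    = [0.6, 0.4, 0.5, 0.7, 0.3, 0.6, 0.5, 0.4, 0.6, 0.4]
print(round(oe_ratio(outcomes, risks), 2))  # -> 0.8
```

In practice the predicted risks would come from a multivariable model fit on registry data (adjusting for age, comorbidity, and so on), and reliability adjustment would further shrink ratios for low-volume hospitals.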

Alternatively, the “pay for participation” model has been used to incentivize collaborative efforts similar to those of the NNE Cardiovascular Disease Study Group [5]. Compensation for data collection and participation in quality improvement activities, regardless of performance rankings, is the underlying tenet of the statewide programs supported by Blue Cross Blue Shield of Michigan. The success of these multiple quality improvement programs in the state of Michigan, funded by a single large private payer, is measured by hospital-level improvements and reduced costs in several important areas [6]. For example, in general and vascular surgery, postoperative complication rates dropped by more than 2.5% among participating Michigan hospitals, resulting in estimated annual savings of approximately $20 million, far exceeding the cost of administering the program. The business case for quality improvement seems clear, but payers and/or policymakers need the resources for such upfront investments in these programs.

Most collaborative programs share common elements, though approaches differ depending on the clinical scenario. Many interventions are evidence based, with dissemination built on accepted best practices. Others are based on site visits that help examine and better understand organizational factors and culture, which can lead to better performance feedback, collaborative learning, and targeted interventions.



7.2 Translational Science (Implementation and Dissemination)


In another, unrelated Michigan statewide collaborative effort, the Keystone ICU project successfully leveraged an intervention project to decrease the rate of catheter-related bloodstream infection using a unit-based safety program [7]. This large-scale study showed that a significant reduction in the incidence of catheter-related bloodstream infection is feasible through education on best practices. The study intervention targeted clinicians’ use of five evidence-based procedures: hand washing, using full-barrier precautions during the insertion of central venous catheters, cleaning the skin with chlorhexidine, avoiding the femoral site if possible, and removing unnecessary catheters. The implementation strategy highlighted was prioritizing the recommended practices identified as having the greatest effect on the outcome of interest and the lowest barriers to implementation.

Implementing evidence into healthcare practice is the next step after generating new knowledge and is essential to maintaining fidelity to evidence-based medicine. Implementation science is the investigation of methods and interventions used in the adoption of evidence by individuals and organizations to improve clinical and operational decision making. In the implementation process, both facilitators and barriers must be identified in order to promote and sustain the use of effective practices.

There are some major difficulties with targeting quality improvement efforts at processes of care. First, many processes are straightforward to measure but may account for only a small proportion of the observed variation in outcomes. Further, many of these processes show a ceiling effect: nearly all hospitals and providers perform so well on the associated quality measures that little room remains to improve. Some organizations have therefore retired such measures so that efforts can focus on processes with much greater performance variation.
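The ceiling effect described above is easy to see numerically: when nearly every hospital complies with a process measure, between-hospital spread collapses and the measure stops discriminating. The compliance figures below are invented for illustration, not drawn from the chapter.

```python
from statistics import pstdev

# Hypothetical compliance rates across five hospitals.
mature_measure = [0.97, 0.98, 0.99, 0.98, 0.99]  # near-universal compliance
newer_measure  = [0.55, 0.70, 0.85, 0.60, 0.90]  # wide performance variation

# The mature measure's spread is a small fraction of the newer measure's,
# so it offers far less signal for distinguishing hospitals.
print(round(pstdev(mature_measure), 3))
print(round(pstdev(newer_measure), 3))
```

This is one rationale for retiring "topped-out" measures: ranking hospitals on the first list mostly ranks noise, while the second list still reflects real performance differences.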
