Surveillance of Healthcare-Associated Infections


Katherine Allen-Bridson

Gloria C. Morrell

Teresa C. Horan



Surveillance is the ongoing, systematic collection, analysis, and interpretation of health data essential to the planning, implementation, and evaluation of public health practice, closely integrated with the timely dissemination of these data to those who need to know. Surveillance of nosocomial or, as they are now known, healthcare-associated infections (HAIs) is a significant component of efforts to reduce and eventually eliminate HAIs in healthcare settings, including hospitals, long-term care facilities, and ambulatory surgical care centers (1). In the 1970s, the Study on the Efficacy of Nosocomial Infection Control found that if hospitals adopted intensive surveillance and multifaceted prevention and control programs, nearly one-third of HAIs could be prevented (2). Later, the Centers for Disease Control and Prevention (CDC) recommended infection surveillance as a way to evaluate the success of control measures (3,4). In addition, surveillance of HAIs may provide data that are useful for recognizing emerging trends and contributing factors, such as procedures and new technologies that reduce HAIs. Increasingly, HAI surveillance data are being used by regulatory agencies, such as the Centers for Medicare and Medicaid Services (CMS) and The Joint Commission (JC), to assess the quality of care in healthcare settings, and by state health department HAI programs to support and disseminate infection prevention activities.


NATIONAL HEALTHCARE SAFETY NETWORK

Throughout this chapter, CDC's National Healthcare Safety Network (NHSN) system is used to illustrate essential components of HAI surveillance. The NHSN is a secure, Internet-based surveillance system that integrates patient and healthcare personnel safety (HPS) surveillance systems. Three former CDC surveillance systems, the National Nosocomial Infections Surveillance System (NNIS), the National Surveillance System for Healthcare Workers, and the Dialysis Surveillance Network, were combined to form the NHSN.

The NHSN enables healthcare facilities to collect and use data about HAIs, adherence to clinical practices known to prevent HAIs, the incidence or prevalence of multidrug-resistant microorganisms (MDROs) within their organizations, trends and coverage of HPS and vaccination, and adverse events related to the transfusion of blood and blood products.

Since the inception of the NNIS system in 1970, CDC’s primary goals for HAI surveillance have been to describe the epidemiology of HAI, provide national-level HAI comparative rates for hospitals and other healthcare systems, and promote methodologically sound surveillance in healthcare systems (5,6). The NHSN includes four components, each concerned with an aspect of HAI control and prevention: patient safety, HPS, biovigilance, and e-surveillance.

The patient safety component of NHSN includes surveillance methods to identify and track device-associated infections, procedure-associated infections, antimicrobial use, MDROs, Clostridium difficile incidence and prevalence, and influenza vaccination of inpatient populations during the influenza season. Most of the modules require that a trained infection preventionist (IP) conduct active, patient-based, prospective surveillance of events and their corresponding denominator data.


The HPS component of NHSN includes methods to track and manage blood and body fluid exposures and influenza vaccinations of healthcare workers. The biovigilance component includes the collection of adverse event data to improve outcomes in the use of blood products, organs, tissues, and cellular therapies. The hemovigilance module is designed for monitoring adverse reactions and quality control incidents related to blood transfusion.

The e-surveillance component of NHSN is a work in progress and aims to make more extensive use of electronic data stored in healthcare application databases for the surveillance of HAIs and antimicrobial use and resistance (AUR). These efforts focus on standards-based solutions for conveying healthcare data and validation processes to confirm that the data received at CDC accurately reflect the data transmitted by healthcare facilities. Access to electronic information is of critical importance to healthcare as well as to other sectors such as science, business, and industry. Innovative methods of HAI surveillance require using new electronic tools to obtain healthcare information. The National Health Information Infrastructure (NHII) was created by executive order of President George W. Bush in April 2004 to develop a comprehensive network of interoperable systems to promote access to healthcare information and decision support (7). Although the NHII supported ongoing research, adapting electronic data and new communication methods to acquire surveillance data on HAIs has been a slow and complex effort (7). Increasing the use of electronic health record (EHR) systems and other tools will help automate data collection tasks previously performed manually (8). Financial incentives up to $24 billion are available as a result of the U.S. Health Information Technology for Economic and Clinical Health Act (HITECH), a component of the American Recovery and Reinvestment Act of 2009. HITECH funding is expected to accelerate progress in EHR deployment (9,10). Although new data-mining methods expedite surveillance efforts, until they are validated for sensitivity and specificity, they do not replace traditional practices of infection surveillance, which must continue to be conducted (11). However the data are collected, surveillance measures must be accurate, comparable, and reflective of the particular area of healthcare being monitored (12,13). To ensure that the data collected will support decision making, the healthcare facility should focus on its most critical and large-scale problems and use surveillance methodology that adheres to sound epidemiologic principles (11). A brief synopsis of the NHSN patient safety component modules follows (14). For complete and up-to-date information about the NHSN surveillance system components and criteria, access www.cdc.gov/nhsn/.


Procedure-Associated Module

Protocols in this module offer guidance relating to surgical site infection (SSI) and postprocedure pneumonia (PPP) monitoring. PPP events are monitored only for inpatient operative procedures and only during the patient’s stay (i.e., postdischarge surveillance methods are not used for PPP).


Device-Associated Module

The use of medical instruments increases the risk of developing an HAI, and most patients admitted for care are exposed to a medical device in the course of their treatment. These devices include, but are not limited to, vascular and urinary catheters and respiratory ventilators. NHSN enables facilities to monitor infectious complications associated with the use of these devices and also related processes that might increase infection risk, such as central-line insertion practices (CLIPs).


Antimicrobial Use and Resistance Module

The AUR module helps healthcare facilities, as part of their antimicrobial stewardship efforts, electronically capture data on antimicrobial use and on microorganism resistance to antimicrobials and then analyze and report those data.


Multidrug-Resistant Microorganism and Clostridium difficile Infection Module

The MDRO and C. difficile infection (CDI) module helps facilities meet criteria and metrics outlined in several organizational guidelines to control and measure the spread of MDROs and CDIs within their healthcare system. The module includes required and optional surveillance activities that can be tailored to the needs of the facility. In addition, process measures, such as adherence to Contact Precautions when caring for patients known to be infected or colonized with an MDRO or C. difficile, as well as active surveillance testing for MDROs, can be monitored. Finally, facilities may also measure the incidence and prevalence of positive cultures of these microorganisms in their patients.


Vaccination Module

Inpatient hospitalizations provide opportunities for routine influenza and infectious disease vaccinations in accordance with published recommendations. The vaccination module provides a means to monitor the success of efforts to capitalize on these opportunities.


PURPOSES OF SURVEILLANCE

A healthcare facility should have clear goals before implementing a program, and these goals must be reviewed and updated frequently using a tool such as an infection control annual risk assessment. This assessment should identify new infection risks resulting from evolving patient populations and facility priorities (15). Examples include the introduction of new high-risk medical interventions, an increasingly immunocompromised patient population, and changing pathogens or antibiotic resistance. It is vital to identify and state goals or purposes of surveillance before designing a system and starting surveillance (11,16,17).


Establishing Endemic Rates to Inform Prevention Strategy

Most HAIs are endemic, that is, not part of recognized outbreaks (18). A basic purpose of surveillance is to quantify endemic baseline HAI rates; 91% of hospitals reported using surveillance data for this purpose (19). Baseline infection rates provide facilities with objective knowledge of the ongoing infection risks in their patients, and calculating these metrics is a first step toward infection prevention (19) (see Data Analysis). Determining endemic rates helps advance activities to improve quality of care. Failure to use surveillance data or evidence-based results to guide prevention efforts is misguided and costly, compromising patient care and unduly burdening today's vulnerable healthcare system.


Identifying Outbreaks

Once endemic rates are established, focusing on deviations from the baseline may lead to identification of infectious outbreaks. The benefits of maintaining routine surveillance must be weighed against its heavy resource burdens. Outbreaks of HAIs are often identified more quickly by astute clinicians or laboratory personnel than by IP analysis of surveillance data. This lack of timeliness often limits infection prevention personnel's use of routine surveillance for identifying outbreaks in a hospital. Automatic computerized tracking mechanisms found in infection prevention and laboratory-based software, along with new, innovative surveillance techniques, have the potential to quickly identify outbreaks and unusual or rare laboratory findings requiring immediate follow-up (8,13,20,21).
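
To make the idea of a deviation from baseline concrete, the sketch below flags a surveillance period whose infection count is improbably high under the facility's endemic rate, using a simple Poisson model. This is a hypothetical illustration only, not an NHSN or vendor algorithm; the baseline rate, patient-day counts, 0.05 threshold, and function names are all assumptions.

```python
# Hypothetical sketch of a simple statistical screen for outbreak detection.
# Assumes the endemic (baseline) rate is stable and that period case counts
# follow a Poisson distribution; real outbreak detection is more involved.
from math import exp, factorial

def poisson_tail(observed: int, expected: float) -> float:
    """P(X >= observed) for a Poisson count with the given expected value."""
    cumulative = sum(expected**k * exp(-expected) / factorial(k) for k in range(observed))
    return 1.0 - cumulative

def flag_possible_outbreak(observed: int, baseline_rate_per_1000: float,
                           patient_days: int, alpha: float = 0.05) -> bool:
    """Flag the period if the observed count is improbably high under the baseline."""
    expected = baseline_rate_per_1000 * patient_days / 1000
    return poisson_tail(observed, expected) < alpha

# Hypothetical month: 10 infections observed, baseline of 2.0 per 1,000 patient-days,
# 2,500 patient-days of exposure (expected = 5 infections) -> flagged for review.
print(flag_possible_outbreak(10, 2.0, 2500))  # True
```

A screen of this kind only prompts review; confirming an outbreak still requires epidemiologic investigation by the IP and laboratory.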


Evaluating Control Measures

After a problem has been identified through surveillance and control measures have been initiated, monitoring is needed to ensure that the problem has been controlled or eliminated. Alternatively, monitoring may show that some control measures are actually ineffective or unnecessary. For example, daily changing of respiratory ventilator breathing circuits was instituted and believed to help prevent ventilator-associated pneumonia (VAP). However, surveillance data have proven this intervention to be a costly and ineffective method of lowering VAP rates (22,23). After the initial success of instituting control measures, it is also necessary to counteract complacency and not revert to preintervention routines. Monitoring efforts require vigilance and constancy in the collection and evaluation of surveillance data and the dissemination of findings to participants (24,25).


Collaborating with New and Existing Partners

During the past decade, individuals, consumer groups, legislative and regulatory agencies, and payors have heightened awareness of the problem of HAIs. Many state legislatures have instituted requirements for public disclosure of HAI rates, and many of these mandates require the use of the NHSN for facility reporting and state acquisition of HAI data. As a result, new state HAI programs have been developed and new relationships forged between these agencies, healthcare facilities, consumer groups, and federal HAI surveillance and prevention groups. States then have the information needed to inform the development of statewide initiatives to tackle HAI issues. Measures are also underway to link pay for performance with prevention of HAIs in acute care settings (26), both of which require collection of HAI data. Because of this increased focus, the Healthcare Infection Control Practices Advisory Committee developed a guidance document on public reporting of HAIs (27).

Many regulatory and accreditation organizations have an interest in the measurement and prevention of HAIs. That interest has influenced how infection prevention programs develop policy and carry out their surveillance and infection reporting duties. Regulatory agencies within the U.S. Department of Health and Human Services, such as CMS, and the leading private-sector accrediting body, the Joint Commission (28,29), are tasked with ensuring quality healthcare delivery to Medicare and Medicaid recipients. Since 1992, hospitals accredited by the Joint Commission have been required to use surveillance to bring about change in the risk of infection to patients (30). There is renewed and heightened focus on patient safety related to prevention of infection and on transparency of reporting. CMS changed the rules for the hospital Inpatient Prospective Payment System for fiscal year 2011, authorizing a higher annual payment for hospitals reporting central line-associated bloodstream infection (CLABSI) rates along with the other measures required under the Inpatient Prospective Payment System (31). SSI rate reporting will be required beginning in 2012. This process gives hospitals a financial incentive to report the quality of their services and allows CMS access to data that help consumers make more informed decisions about their healthcare.

The CDC's Division of Healthcare Quality Promotion (DHQP) has been charged with collaborating with the healthcare, computer, business, and government sectors to create the expertise, information, and tools necessary to implement processes and prevention strategies to reduce and prevent HAIs. One of DHQP's partners, the National Quality Forum, is responsible for endorsing measures of healthcare quality, including HAI measures, that can be used for public reporting and quality improvement (29). State agencies are also partnering with DHQP, and many agencies have instituted regulatory controls by enacting laws that mandate public reporting of HAIs (32,33). In addition to participating in other infection prevention initiatives and research studies, healthcare facilities are also engaged with DHQP to implement strategies to reduce and prevent HAIs. The infrastructure needed by healthcare facilities to participate in and report on these prevention measures, however, is sometimes not available (24,34,35 and 36).


Comparing Infection Rates among Healthcare Facilities and External Groups

Establishing the priorities of an infection control program is a difficult and ever-changing task. Surveillance allows a healthcare facility to compare its HAI rates with the rates of other facilities. Interfacility rate comparison identifies the outcomes most in need of improvement and the places where the finite resources of an infection control program should be directed. A facility infection rate that is high compared with those of other facilities may signal a potential problem warranting investigation. In recent years, there has been a greater focus on external comparisons of facilities and groups with other facilities and groups, as well as comparisons with aggregates such as the NHSN. Mandatory reporting systems that will be used for interfacility comparisons should be based on established public health science (3,27). This type of comparison requires accurate data collection and appropriate risk adjustment of HAI outcomes. To adequately adjust infection data, patients' intrinsic and extrinsic risks for infection must be examined (see Comparing Risk-Adjusted HAI Data). Progress has been achieved in suitable risk adjustment, but more data on specific risk factors are still needed (12). A facility's overall HAI rate is not a valid measure of the efficacy of the infection prevention program (37,38,39,40,41 and 42), does not take underlying risk differences into account, and should not be used for interhospital comparison. The standardized infection ratio (SIR) (see Standardized Infection Ratio), incorporating methodologically sound risk adjustment of HAI outcomes, is the preferred summary statistic for interhospital comparison.
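
For orientation, the SIR divides the number of HAIs observed by the number predicted from risk-adjusted referent rates, so a value above 1.0 indicates more infections than predicted. The sketch below is a simplified rendering of that arithmetic, not the NHSN calculation; the strata, referent rates, and function name are hypothetical.

```python
# Simplified illustration of a standardized infection ratio (SIR).
# Each risk stratum contributes predicted infections = referent rate x local exposure;
# the SIR is total observed infections divided by total predicted infections.

def standardized_infection_ratio(strata: list) -> float:
    """strata: one dict per risk stratum with the observed count, the referent rate
    (infections per 1,000 device-days), and the local device-days."""
    observed = sum(s["observed"] for s in strata)
    predicted = sum(s["referent_rate_per_1000"] * s["device_days"] / 1000 for s in strata)
    return observed / predicted

# Hypothetical data for two risk strata in one facility
strata = [
    {"observed": 4, "referent_rate_per_1000": 2.1, "device_days": 1200},
    {"observed": 1, "referent_rate_per_1000": 1.3, "device_days": 800},
]
print(round(standardized_infection_ratio(strata), 2))  # 5 observed / 3.56 predicted = 1.4
```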

Ensuring data accuracy is another challenge for healthcare facilities and the organizations responsible for aggregating their data. The independent determination of data accuracy, or validation, is an essential activity for organizations aggregating data from multiple collectors (13). Aggregating organizations should examine a facility's data and screen for unusual patterns or other indications of inaccuracies. This should include reporting data back to the hospital to confirm that the data received match the data sent. Determining the accuracy of the data includes confirming HAI case-finding methodology using three measures: sensitivity, positive predictive value (PPV), and specificity. Sensitivity is the percentage of all true infections that are reported. PPV is the percentage of reported infections that are true infections. Specificity is the reported number of patients without HAI divided by the true number of patients without HAI (43). Using an independently trained observer to ascertain the sensitivity, PPV, and specificity of HAI case-finding strengthens the credibility of the surveillance system and helps identify how rates should be adjusted for facilities whose case-finding accuracy varies.
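
These three measures can be computed from a simple comparison of the facility's reported HAIs against an independent validator's determinations. The sketch below shows that arithmetic; the tallies and function name are hypothetical, and the validator's findings are treated as the reference standard.

```python
# Hypothetical validation tally (validator determinations treated as truth).
# reported_true  = HAIs reported by the facility and confirmed by the validator
# reported_false = reported HAIs the validator judged not to be true infections
# missed         = true HAIs the facility failed to report
# neither        = patients without HAI who were correctly not reported

def case_finding_measures(reported_true: int, reported_false: int,
                          missed: int, neither: int) -> dict:
    sensitivity = reported_true / (reported_true + missed)      # share of true HAIs reported
    ppv = reported_true / (reported_true + reported_false)      # share of reported HAIs that are true
    specificity = neither / (neither + reported_false)          # share of non-HAI patients not reported
    return {"sensitivity": sensitivity, "ppv": ppv, "specificity": specificity}

print(case_finding_measures(reported_true=17, reported_false=3, missed=5, neither=475))
# sensitivity ~0.77, PPV 0.85, specificity ~0.99
```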

Although determining the sensitivity, PPV, and specificity of all facilities validates the credibility of the multifacility surveillance system or aggregate group, determining the variation in sensitivity and specificity among facilities in a multifacility system may be an even more critical measure of credibility. Surveillance rarely achieves 100% accuracy. However, if one hospital's surveillance detects only 30% of the HAIs among its patients while a second hospital's detects 90%, the disparity in their reported infection rates may be caused entirely by differences in case-finding sensitivity.

Determining sensitivity and specificity is difficult and resource-intensive. Fortunately, the NNIS evaluation study suggested that IPs generally report HAI data accurately. Low sensitivity (underreporting of infections), which ranged from 59% to 85% for the four major sites of HAI—bloodstream, pneumonia, urinary tract, and surgical—was a more serious problem compared with reporting of other measures. PPV ranged from 72% to 92% for these sites, and specificity ranged from 97.7% to 98.7% (44). Because of the increase in state-based mandatory reporting of HAIs, there has been a corresponding increase in assessing data for inaccuracies and conducting formal validation studies (45,46,47,48 and 49). Additionally, federal funds have been allocated for this purpose (50).


ATTRIBUTES OF A SURVEILLANCE PROGRAM

A successful surveillance program applies sound epidemiologic principles in its planning. The data collected must be useful and complete. The attributes of a successful surveillance program are as follows:



  • Accuracy


  • Timeliness


  • Usefulness


  • Consistency


  • Practicality


Accuracy

Effective surveillance must produce accurate data. Inaccurate data can result in wasted effort, resources, and personnel time, as well as the initiation of inappropriate, potentially harmful interventions. Using case definitions advances the accurate collection of HAI surveillance data. A case definition is a "set of standard criteria for deciding whether or not a person has a particular disease or health-related event" (51). The use of case definitions helps validate that patients identified as having an HAI do, in fact, have an HAI. The NHSN provides and requires the use of criteria (case definitions) for all types of HAIs.

Likewise, for surveillance data to be accurate, the denominator data, that is, the measure of the patient population at risk for infection that is used to calculate infection rates, must also be accurate. For example, IPs must invest time to ensure that the identified number of central-line catheter days is correct to achieve an accurate CLABSI rate. Data collection can be time-consuming, and sampling methods offer a less time-consuming alternative (52). Sampling methods, however, including electronic capture, must be validated by a proven method before implementation.
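
As a concrete example of how numerator and denominator combine, a device-associated rate such as the CLABSI rate is conventionally expressed as infections per 1,000 device-days. The short sketch below shows that calculation; the counts and the function name are illustrative and are not part of NHSN software.

```python
# Minimal sketch of an incidence density for a device-associated infection,
# expressed per 1,000 device-days, as described in the text above.

def device_associated_rate(infections: int, device_days: int) -> float:
    """Return infections per 1,000 device-days."""
    if device_days <= 0:
        raise ValueError("device_days must be positive")
    return infections / device_days * 1000

# Hypothetical quarter: 3 CLABSIs identified during 1,250 central-line days
print(device_associated_rate(3, 1250))  # 2.4 CLABSIs per 1,000 central-line days
```

An undercount of central-line days in the denominator would inflate this rate, which is why the accuracy of denominator data matters as much as that of the numerator.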

Gathering accurate, highly sensitive numerator data is possible only if patients are monitored for the entire period specified in the case definition. For instance, NHSN case definitions for SSI specify a period of 30 days to 1 year postsurgery; the extended period allows data to be gathered on procedures that involve surgical implants. Limiting surveillance to 1 week would result in inaccurately low SSI rates caused by a lack of sensitivity, because it would exclude patients who developed an SSI in weeks 2 to 4 (in a surgical procedure without an implant).

Decreasing length of stay (LOS) within hospitals could directly affect the SSI rate. From 1970 to 2005, the average LOS in U.S. hospitals decreased from 7.8 to 4.8 days (53). This shortened LOS requires that surveillance be adapted to identify those patients whose infection develops in the postdischarge period (i.e., postdischarge surveillance). This adaptation is especially important for operative patients. Studies have shown that the percentage of SSIs that would be missed without postdischarge surveillance ranges from a low of 7% in trauma patients to a high of 85% to 95% in cesarean section patients (54,55,56,57,58 and 59).

Postdischarge surveillance could use one of several techniques, including contacting the patient or physician by telephone or mail; having the surgeon, nurse, or IP observe the patient in the clinic; and detecting SSIs on readmission (54,55,56,57,58 and 59). Significant methodological problems can exist, however, with postdischarge surveillance, such as physicians not responding to the IP in a timely manner, patients inaccurately diagnosing infection (60), and uncertainty about how to account for patients lost to follow-up. These problems are not easily addressed, and studies have mixed findings on proposed solutions. For example, education on the signs and symptoms of SSI would seem to improve a patient's ability to diagnose their own SSI. In one study, however, such education actually corresponded to reduced sensitivity and a significant reduction in the specificity of SSI identification (65.2%) compared with that in noneducated patients (83.3%) (59).

Some studies had contradictory findings. One study successfully used antimicrobial administration in discharged surgical patients as a case-finding method. The same study also found that using other methods, such as mailed questionnaires to surgeons or patients, resulted in poor sensitivity data (61). Yet in another study, the significant amounts of antibiotics dispensed following surgery, especially in breast surgeries (14%), led the authors to suspect that preoperative prophylaxis extended into the postdischarge period may threaten the predictive value of postdischarge antimicrobial data as a case-finding tool (62). Clearly, postdischarge surveillance methods need refinement. Research may reveal that a variety of procedure-specific data sources and methods are needed to identify the majority of postdischarge SSIs. Until a standard method is developed and validated, the Surgical Wound Infection Task Force recommended in 1992 that facilities use a method that accommodates their resources and data needs (63).

Finally, accurate data also requires precise mathematical calculations. Many HAI software programs are currently available to assist with this, including the NHSN, which calculates risk-stratified rates, frequency tables, run charts, and SIRs.


Timeliness

A sound surveillance program produces useful and timely HAI data. There are two temporal methods of surveillance: prospective and retrospective. Prospective surveillance is monitoring patients during admission for symptoms and case-definition criteria so that the infection is identified as it develops. It involves reviewing patient records and visiting patient-care units during the patients’ stay. Retrospective surveillance involves looking back to identify infections after they have occurred. An example of retrospective surveillance is to identify infections using only the review of a patient’s chart following discharge.

Prospective surveillance can more quickly identify clusters of infection and, therefore, facilitate prompt investigation, analysis, and response and may prevent the development of more cases. It can also provide increased visibility of IPs on the wards, encourage staff reporting of suspected infections, and produce timely feedback of data for quality improvement purposes. One disadvantage is that it requires greater resources than retrospective surveillance, because multiple data sources must be accessed rather than all data being gathered from a single completed patient chart. Retrospective surveillance "allows for a comprehensive review of sequential events in the closed record and avoids the often time-consuming efforts of locating and reviewing charts in busy patient care areas." Retrospective surveillance is best suited for issues that "have little opportunity or need for intervention" because the identification of HAI issues may be delayed (11). NHSN participation requires prospective surveillance.


Usefulness

Because infection prevention efforts have competing priorities, limited surveillance resources are best spent on actionable issues, including those with validated methods of improvement (e.g., bolstering standards of practice, instituting prevention bundles, or using new or enhanced technology). Monitoring issues with no opportunity for improvement wastes effort, causes frustration, and does not support the principles of quality improvement.


Consistency

Surveillance data must be collected in a consistent manner to be useful. First, individuals and facilities must be consistent in their collection and interpretation of data. Surveillance personnel must uniformly apply case definitions (e.g., all data collectors should identify a case of VAP the same way). Consistency is achieved with uniform case definitions, surveillance methods, and data sources, as well as with targeted education of IPs. Within facilities, new case-finding staff should be mentored in correct methods and the determination of cases validated by experienced IPs. Consistency of case determinations within an infection control department should be validated routinely by cross-checking. Sharing case studies among facilities in which subject matter experts have made HAI determinations (64) produces greater consistency. A stable infection prevention and control department that has low rates of staff turnover encourages data consistency.

The targets of surveillance should also maintain consistency over time. Longitudinal data must be available to successfully analyze the value of prevention efforts. This does not mean that newly identified issues must be set aside. Considering that a facility’s high-risk procedures and patient populations will probably experience only incremental changes over time, ongoing monitoring and collecting longitudinal data can occur simultaneously with the monitoring of newly identified issues.


Practicality

Finally, the best surveillance plan is only as good as its execution. Although plans must be based on the needs of the facility, they must also reflect the actual resources available. According to a recent survey, 44% of an IP's time is spent on surveillance activities, and a facility of 500 beds has, on average, just over 0.5 full-time equivalents in an IP role (65). Facility-wide, active, prospective surveillance in such a setting is limited and cannot be completed accurately, comprehensively, or in a timely manner.


DEVELOPING A FACILITY SURVEILLANCE PLAN

Every facility should develop a formal surveillance plan, methodically identifying the goals, types, approaches, and methods of surveillance to be undertaken. The essential steps involved in this process are listed in Table 89-1 and are explored further in this section.


Assessing the Population

The first step in developing a surveillance plan is to perform a facility-specific risk assessment, which identifies the facility's patient populations at greatest risk of acquiring HAIs and the procedures posing the greatest risk of infectious complications (11). From this information, surveillance efforts can be prioritized and valuable resources used efficiently. Both the assessment and the plan should be reviewed routinely to determine changing facility needs. The following variables should be identified in the assessment and used to determine HAI risks, surveillance capabilities, and how efforts should be prioritized:








TABLE 89-1 Essential Elements of Surveillance

  • Assess the population



  • Select the outcome (event) or process to survey



  • Choose the surveillance method(s) keeping in mind the need for risk adjustment of data



  • Monitor for the event or process



  • Apply surveillance definitions during monitoring



  • Analyze surveillance data



  • Report and use surveillance information


(Adapted with permission from Lee TB, Baker OG, Lee JT, et al. Recommended practices for surveillance. Am J Infect Control 2007;35(7):427-440.)
