Fig. 2.1
Timeline representing the evolution of comparative effectiveness research in each of its forms within the U.S.
While there have been many public sector CER activities, including those funded by AHRQ, the NIH, the Veterans Health Administration (VHA), and the Department of Defense (DoD), until recently it has not been possible to estimate the total number of CER studies funded, owing to the lack of a standard, systematic means of reporting CER across the funding agencies. Additionally, although a number of public and private sector organizations have been engaged in CER, much of this work has been fragmented and not aligned with a common definition or set of priorities for CER, resulting in numerous gaps in the research being conducted.
Thus the Patient Protection and Affordable Care Act of 2010, commonly known as the Affordable Care Act (ACA), established the Patient-Centered Outcomes Research Institute (PCORI) as the primary agency to oversee and support the conduct of CER. The ACA was enacted with provisions for up to $470 million per year in funding for patient-centered outcomes research (PCOR), which emphasizes greater involvement of patients, providers, and other stakeholders in CER. PCORI is governed by a 21-member board, which has supplanted the FCCCER. The research that PCORI supports should improve quality, increase transparency, and improve access to better health care [2]. However, the creation and use of cost-effectiveness thresholds or calculations of quality-adjusted life years are explicitly prohibited. There is also specific language in the act stating that reports and research findings may not be construed as practice guidelines or policy recommendations, and that the Secretary of HHS may not use the findings to deny coverage, reflecting political fears that the research findings could lead to health care rationing.
CER has also been a national priority in many other countries, including the UK (National Institute for Health and Clinical Excellence), Canada (Canadian Agency for Drugs and Technologies in Health), Germany (Institute for Quality and Efficiency in Health Care), and Australia (Pharmaceutical Benefits Advisory Committee), to name just a few.
2.5 CER and Stakeholder Engagement
A major criticism of prior work in clinical and health services research, and a potential explanation for the gap in practical knowledge and limited translation to real-world practice, is that studies failed to maintain sustained and meaningful engagement of key decision-makers in both the design and implementation of the studies. Stakeholder engagement is considered a critical element for researchers to understand which clinical outcomes matter most to patients, caregivers, and clinicians, in order to design “relevant” study endpoints that improve knowledge translation into usual care. Stakeholders may include patients, caregivers, providers, researchers, and policy-makers.
The goal of this emphasis on stakeholder engagement is to improve the dissemination and translation of CER. By involving stakeholders in identifying the key priorities for research, the most relevant questions are identified; in fact, this was a major activity of the IOM when it established the initial CER priorities [5]. PCORI now engages stakeholders in a similar fashion to identify new questions aligned with its five National Priorities for Research: (1) assessing options for prevention, diagnosis, and treatment; (2) improving health care systems; (3) addressing disparities; (4) communicating and disseminating research; and (5) improving patient-centered outcomes research methods and infrastructure [6]. Engagement of stakeholders can also help identify the best ways to disseminate and translate knowledge into practice.
2.6 Types of CER Studies
Comparative effectiveness research requires the development, expansion, and use of a variety of data sources and methods to conduct timely and relevant research and to disseminate the results in a form that is quickly usable by clinicians, patients, policymakers, health plans, and other payers. The principal methodologies employed in CER include randomized trials (experimental studies), observational research, data synthesis, and decision analysis [7]. These methods can be used to generate new evidence, to evaluate the existing evidence about the benefits and harms of each choice for different patient groups, or to synthesize existing data to generate new evidence to inform choices. CER investigations may be based on data from clinical trials, clinical studies, or other research. As more detailed coverage of specific research methods is provided in subsequent chapters of this text, we focus the discussion here on aspects particularly relevant to the conduct of CER.
2.6.1 Randomized Trials
Randomized comparative studies represent perhaps the earliest form of comparative effectiveness research for evidence generation in medicine. Randomized trials generally provide the highest level of evidence to establish the efficacy of the intervention in question and thus can be considered the gold standard of efficacy research. Randomized trials also provide investigators with an opportunity to study patient-reported outcomes and quality of life associated with the intervention, and to measure potential harms of treatment. However, traditional randomized trials have very strict inclusion and exclusion criteria, typically select the healthiest patients, and involve detailed and rigorous patient monitoring and management that is not routinely performed in day-to-day patient care. Thus, while the traditional randomized controlled trial may assess the efficacy of an intervention, the real-world effectiveness of the intervention when performed in community practice may be quite different.
One of the main limits to generalizability in traditional randomized controlled trials is the strict patient selection criteria designed to isolate the effect of the intervention from confounding. Moreover, treatment in randomized controlled trials often occurs under ideal clinical conditions that are not readily replicated during real-world patient care. Some of this difference stems from the fact that traditional randomized trials are often used for novel therapy development and for drug registration or label extension. In contrast, the goals of CER trials include comparing the effectiveness of various existing treatment options, identifying the patient and tumor subsets most likely to benefit from interventions, studying screening and prevention strategies, and focusing on survivorship and quality of life. The results of CER trials should be generalizable to the broader community and easily disseminated for broad application without the stringent criteria inherent in traditional randomized trials.
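To make the efficacy-effectiveness distinction concrete, the following is a minimal simulation sketch (in Python, with entirely hypothetical numbers) in which treatment benefit declines with comorbidity burden and a traditional trial enrolls only the healthiest patients; the average benefit observed in the trial-eligible sample then overstates the benefit achievable in the full population.

```python
import numpy as np

# Illustrative sketch (hypothetical numbers): why trial efficacy can
# overstate real-world effectiveness when eligibility criteria select
# the healthiest patients.
rng = np.random.default_rng(42)

n = 100_000
comorbidity = rng.poisson(2.0, size=n)        # comorbidity count in the full population
# Assume the absolute benefit of treatment shrinks with comorbidity burden.
benefit = np.clip(0.15 - 0.04 * comorbidity, 0.0, None)

# A traditional trial might enroll only patients with <= 1 comorbidity.
eligible = comorbidity <= 1

print(f"Share of population trial-eligible:     {eligible.mean():.1%}")
print(f"Mean benefit in trial sample (efficacy): {benefit[eligible].mean():.3f}")
print(f"Mean benefit in full population:         {benefit.mean():.3f}")
```

Under these assumed parameters, roughly 40 % of patients are trial-eligible, and the benefit estimated in that eligible subset is substantially larger than the population-wide average, mirroring the efficacy-effectiveness gap described above.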
A number of alternative, non-traditional trial designs may be considered for CER that overcome some of the limitations outlined above.
In Cluster Randomized Trials, the randomization is by group rather than by individual patient. Implementation of a single intervention at each site improves external validity, as patients are treated as in the real world and there is less risk of contamination across the arms. However, statistical methods such as hierarchical models must be used to adjust for cluster effects, which effectively reduces statistical power compared with individually randomized studies; a sample-size sketch illustrating this design effect is given below.
Pragmatic Trials are highly aligned with the goals of CER, as they are performed in typical practice settings and in typical patients, with eligibility criteria designed to be inclusive. The study patients have the comorbid diseases and characteristics typical of patients in usual practice. In keeping with the practical nature and intent of pragmatic trials, the outcomes measured are limited to those that are most pertinent and most easily assessed or adjudicated. While these trials have both good internal validity (individual randomization) and good external validity, the lack of complete data collection precludes meaningful subsequent subgroup analysis for evaluation of treatment heterogeneity.
Adaptive Trials change in response to accumulating data, often utilizing the Bayesian framework to formally account for prior knowledge. Key design parameters change during execution based upon predefined rules and the accumulating data from the trial. Adaptive designs can improve the efficiency of a study and allow for more rapid completion. There are limitations to adaptive designs, however, particularly their effect on the potential for type I error (see the second sketch below). Sample size estimation can thus be complex, requiring careful planning and adjustment of the statistical analyses.
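As a concrete illustration of the cluster design effect mentioned above, the following sketch (assumed, illustrative parameters throughout) inflates a standard two-proportion sample size by the design effect DE = 1 + (m - 1) * ICC, where m is the average cluster size and ICC is the intracluster correlation coefficient.

```python
from math import ceil
from statistics import NormalDist

# Sketch with assumed, illustrative parameters: sample-size inflation
# in a cluster randomized trial via the design effect
# DE = 1 + (m - 1) * ICC.
alpha, power = 0.05, 0.80
p1, p2 = 0.30, 0.20   # assumed event rates, control vs. intervention
m, icc = 50, 0.02     # assumed cluster size and intracluster correlation

z_a = NormalDist().inv_cdf(1 - alpha / 2)
z_b = NormalDist().inv_cdf(power)
p_bar = (p1 + p2) / 2

# Per-arm sample size for an individually randomized trial
# (normal approximation, two-proportion comparison).
n_ind = (z_a + z_b) ** 2 * 2 * p_bar * (1 - p_bar) / (p1 - p2) ** 2

deff = 1 + (m - 1) * icc   # design effect
n_clu = n_ind * deff       # inflated per-arm size under cluster randomization

print(f"Per-arm n, individual randomization: {ceil(n_ind)}")
print(f"Design effect:                       {deff:.2f}")
print(f"Per-arm n, cluster randomization:    {ceil(n_clu)} "
      f"(~{ceil(n_clu / m)} clusters of {m} patients)")
```

With these assumed values (50 patients per cluster, ICC of 0.02), the design effect nearly doubles the required sample size, which is the power penalty referred to in the cluster-trial discussion above.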
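Similarly, the type I error concern in adaptive designs can be illustrated by simulation. The sketch below (a deliberately naive design, not any specific trial's method) applies an unadjusted fixed-sample criterion at five interim looks under a true null hypothesis; the overall false-positive rate climbs well above the nominal 5 %, which is why adaptive designs require formal alpha-spending or calibrated decision rules.

```python
import numpy as np

# Simulation sketch (hypothetical design): how unadjusted interim
# looks inflate type I error, motivating the careful planning that
# adaptive designs require.
rng = np.random.default_rng(0)

n_trials, n_per_arm = 20_000, 500
looks = (100, 200, 300, 400, 500)   # interim analyses at these per-arm sizes
z_crit = 1.96                       # naive fixed-sample criterion at each look

rejections = 0
for _ in range(n_trials):
    # Both arms drawn from the same distribution: the null is true.
    a = rng.normal(size=n_per_arm)
    b = rng.normal(size=n_per_arm)
    for n in looks:
        diff = a[:n].mean() - b[:n].mean()
        z = diff / np.sqrt(2 / n)   # z-statistic with known unit variance
        if abs(z) > z_crit:         # naive "stop for efficacy" rule
            rejections += 1
            break

print(f"Type I error with 5 unadjusted looks: {rejections / n_trials:.3f}")
# Expect a rate well above the nominal 0.05 (roughly 0.14 for 5 looks).
```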