Healthcare decision makers in search of reliable information comparing health interventions increasingly turn to systematic reviews for the best summary of the evidence. Systematic reviews identify, select, assess, and synthesize the findings of similar but separate studies and can help clarify what is known and not known about the potential benefits and harms of drugs, devices, and other healthcare services. Systematic reviews can be helpful for clinicians who want to integrate research findings into their daily practices, for patients to make well-informed choices about their own care, and for professional medical societies and other organizations that develop clinical practice guidelines.
In the Medicare Improvements for Patients and Providers Act of 2008, Congress directed the Institute of Medicine (IOM) to develop standards for conducting systematic reviews and for developing clinical practice guidelines, which are evidence-based recommendations for clinicians to use when treating patients. The IOM formed two distinct committees to respond to this charge, and each committee assessed the relevant evidence and considered expert guidance to develop the standards. This report, *Finding What Works in Health Care: Standards for Systematic Reviews*, recommends standards for systematic reviews of the comparative effectiveness of medical or surgical interventions (see a list of the standards).
The quality of systematic reviews is variable. Too often, the scientific rigor of the collected literature is not scrutinized or there are errors in data extraction and meta-analysis. Reporting biases present the greatest obstacle to collecting all relevant information on the effectiveness of an intervention. Research is important to individual decision making, whether it reveals benefits, harms, or lack of effectiveness of a health intervention. Thus, the systematic review should identify all of the studies—and all of the relevant data from the studies—that may pertain to the research question.
The task of identifying relevant research data is challenging. Although hundreds of thousands of research articles are indexed in bibliographic databases each year, a substantial proportion of effectiveness data are never published or are not easy to access. Moreover, it is well documented that published data may not represent all of the findings on an intervention’s effectiveness. Positive findings are more likely to be published than null or negative results.
In many cases, users cannot determine the quality of a systematic review because the details of the review are so poorly documented. Additionally, many systematic reviews do not focus on questions that matter for real-world healthcare decisions, such as whether the benefits of taking a specific medication outweigh its risks.
Standards can improve the quality of systematic reviews, which will minimize the likelihood of clinicians drawing the wrong conclusions and ultimately making the wrong treatment recommendation. The standards presented in this report—developed by the authoring committee—are meant to ensure objective, transparent, and scientifically valid systematic reviews. The need for establishing standards for systematic reviews was underscored in the health reform legislation, the Patient Protection and Affordable Care Act of 2010, which created the nation's first nonprofit Patient-Centered Outcomes Research Institute (PCORI).
Developing Standards for Systematic Reviews
The committee defines a “standard” as “a process, action, or procedure for performing systematic reviews that is deemed essential to producing scientifically valid, transparent, and reproducible results.”
Systematic reviews of comparative effectiveness research—a type of research that compares different treatment options for the same disease—can be narrow in scope and consist of simple comparisons, such as the effectiveness of one drug versus another. They also can address more complex questions, such as the comparative effectiveness of drugs versus surgery for a specific condition. The committee’s standards apply principally to publicly funded systematic reviews of comparative effectiveness research that focus specifically on treatments.
The evidence base for how best to conduct systematic reviews is limited, and no set of standards is generally accepted or consistently applied. For example, there is little research on how to manage bias for individuals providing input into the systematic review, or on who should screen and select studies for the review. In developing its standards, the committee relied on the current methodological evidence and guidance from respected organizations that produce systematic reviews. The committee’s standards address the entire systematic review process, from locating, screening, and selecting studies for the review, to synthesizing the findings (including meta-analysis) and assessing the overall quality of the body of evidence, to producing the final review report.
The standards are current “best practices”; they are not the last word. All of the recommended standards must be considered provisional, pending better empirical evidence about their scientific validity, feasibility, efficiency, and ultimate usefulness in healthcare decision making. The standards will be especially valuable for systematic reviews of high-stakes clinical questions with broad population impact, where the use of public funds to get the right answer justifies careful attention to the rigor of the systematic review. Individuals involved in systematic reviews should be thoughtful about all of the recommended standards and elements, using their best judgment if resources are inadequate to implement all of them, or if some seem inappropriate for the particular task or question at hand. Transparent reporting of the methods actually used and the reasoning behind those choices is among the most important of the standards recommended by the committee.
Source: Institute of Medicine
- Evidence-Based Medicine
- SORT (Strength Of Recommendation Taxonomy)
- The PRISMA Statement for Reporting Systematic Reviews and Meta-Analyses
The 7 Principles of EBM
- The Seven Principles of EBM (1, 2, 3) 10/16/11
- An Essay on Logical Thinking (4) 11/05/11
- Scientific Evidence and Clinical Judgment (7) 04/17/12
Critical Analysis of Therapeutic Relevance
- Critical Analysis of Therapeutic Relevance (2*)
- The Magic of the NNT (3)
- Going Deeper into the Analysis of Therapeutic Relevance
- Applicability of Evidence on Therapy: The Complacency Principle (6)
Critical Analysis of Diagnostic Methods
- What Is Accuracy 05/28/11
- and How to Critically Appraise an Article on Accuracy 06/01/1
- The Usefulness of Diagnostic Methods 07/03/11
- The Paradigm of the Benefit of Diagnostic Methods 11/03/11
- The Less Is More Paradigm
- Why Physicians Reject Scientific Evidence
- A Simple Health Care Fix Fizzles Out
Evidence-based medicine recognizes that expert opinion and pathophysiological reasoning make important contributions to clinical decision making, but that the greatest certainty about medical benefit derives from studies that measure clinical outcomes directly and