
Systematic Review

A systematic review (SR) is a rigorous, methodical, and transparent synthesis of research evidence on a specific question or topic. Unlike traditional narrative reviews, systematic reviews follow a predefined, replicable protocol to identify, appraise, and summarize all relevant studies, with the goal of minimizing bias and providing the most accurate and reliable answer possible.

Key Characteristics:

  • Comprehensive and systematic search strategy
  • Explicit inclusion and exclusion criteria
  • Data extraction and, where possible, quantitative synthesis (meta-analysis)
  • Transparent reporting of methods and results

Historical Context:
The systematic review framework emerged prominently in the 1970s and 1980s as researchers sought more objective, evidence-based approaches to summarizing medical literature. Organizations such as the Cochrane Collaboration helped formalize systematic review standards, which are now widely used in health sciences and other fields.

Applied Example:
A systematic review might address whether progressive resistance training improves strength in older adults, including a systematic search of randomized controlled trials, risk-of-bias assessments, and, if feasible, a meta-analysis to pool effect sizes.
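For readers unfamiliar with pooling, the sketch below shows one common meta-analytic approach, fixed-effect inverse-variance weighting, in which more precise studies contribute more to the pooled estimate. All study labels and numbers are hypothetical, for illustration only.

```python
# Minimal sketch: fixed-effect, inverse-variance pooling of effect sizes.
# Study labels and values are hypothetical, for illustration only.

# Each tuple: (study label, effect size, variance of the effect size)
studies = [
    ("Trial A", 0.42, 0.04),
    ("Trial B", 0.31, 0.09),
    ("Trial C", 0.55, 0.02),
]

# Inverse-variance weights: more precise studies count for more.
weights = [1.0 / variance for _, _, variance in studies]
pooled = sum(w * es for (_, es, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1.0 / sum(weights)) ** 0.5  # standard error of the pooled estimate

print(f"Pooled effect size: {pooled:.2f} (SE {pooled_se:.2f})")
```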

Frequently Asked Questions (FAQs)

How is a systematic review different from a narrative review?

  • A narrative review is more descriptive and often subjective, whereas a systematic review follows a structured, replicable methodology to minimize bias.

What is the difference between a systematic review and a meta-analysis (MA)?

  • A systematic review is the broader process of collecting and synthesizing evidence; a meta-analysis is a statistical technique that may be included within a systematic review to combine numerical results.

Why are systematic reviews important?

  • They help clinicians, researchers, and policymakers make evidence-informed decisions by summarizing high-quality, relevant studies in a transparent and unbiased way.

What makes it a systematic review?

  • A systematic review is defined by its transparent, replicable, and comprehensive methodology for gathering, analyzing, and synthesizing research.

Guiding Principles:

Include all available peer-reviewed and published original research.

  • This principle may be considered the “anti–cherry-picking” guideline. Cherry-picking refers to the biased selection of research to support a predetermined assertion. However, any less-than-comprehensive approach, whether intentional or inadvertent, risks introducing selection bias. To minimize this bias and avoid the exclusion of conflicting data (confirmation bias), systematic reviews should include all relevant peer-reviewed and published original research, without restriction based on arbitrary quality ratings or oversimplified evidence hierarchies.

Review topics, not narrowly defined research questions.

  • Rather than starting with a predefined hypothesis, reviews should begin with a broad topic and allow conclusions to emerge from the full body of available evidence. This approach reduces hypothesis generation errors and confirmation bias by preventing early commitment to a specific claim that the researcher might subconsciously seek to validate.

Prioritize comparative research whenever available.

  • Intervention effectiveness is inherently relative. Comparative research is required to determine whether one intervention is more effective or reliable than another, and is essential for establishing best-practice recommendations. Additionally, comparative outcomes provide the data necessary to refine probabilistic models used in optimizing the selection of interventions. (See: ...Single Best Approach.)

Recognize that study design alone does not determine methodological rigor.

  • Most traditional “levels of evidence” hierarchies are flawed by the assumption that study design is the primary determinant of research quality. In reality, randomized controlled trials (RCTs), observational studies, and cohort studies may each be well designed or poorly executed. Further, study design should align with the research question: RCTs are ideal for comparing acute effects between previously studied interventions, prospective cohort studies are useful for modeling risk over time, and retrospective observational studies are appropriate for assessing prevalence or historical patterns. A more defensible hierarchy considers the number and type of controls used—such as peer review, replication, blinding, and statistical analysis—rather than assuming the intrinsic superiority of one study design over another. (See: Levels of Evidence.)
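As an illustration of this control-based view of rigor, the sketch below scores studies by the controls they report rather than by design label. The control list and the equal weighting are simplifying assumptions for demonstration, not a validated instrument.

```python
# Illustrative sketch: score methodological rigor by the controls a study
# reports, not by study design alone. The control list and equal weighting
# are simplifying assumptions, not a validated scoring instrument.

CONTROLS = ("peer_review", "replication", "blinding", "statistical_analysis")

def rigor_score(study: dict) -> int:
    """Count how many recognized controls a study reports."""
    return sum(1 for control in CONTROLS if study.get(control, False))

# A well-controlled cohort study can outscore a poorly executed RCT.
weak_rct = {"design": "RCT", "peer_review": True}
strong_cohort = {"design": "prospective cohort", "peer_review": True,
                 "replication": True, "statistical_analysis": True}

print(rigor_score(weak_rct), rigor_score(strong_cohort))  # -> 1 3
```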

Apply a structured vote-counting method.

  • Vote counting synthesizes directional trends across studies and is more resistant to the distortions that may occur in meta-analysis. It avoids the compounding of unknown confounding variables and reduces the interpretive errors that arise from combining heterogeneous datasets into a single aggregate statistic. Importantly, vote counting must follow a clearly defined rubric, such as the one used by the Brookbush Institute:
    • A is better than B in all studies → Choose A
    • A is better than B in most studies, and additional studies show similar results between A and B → Choose A
    • A is better than B in some studies, and most studies show similar results between A and B → Choose A (with reservations)
    • Some studies show A is better, some show similar results, and some show B is better → Results are likely similar (unless there is a clear moderator variable such as age, sex, or injury status that explains the divergence)
    • A and B show similar results in the gross majority of studies → Results are likely similar.
    • Some studies favor A, others favor B → Unless the number of studies overwhelmingly supports one side, results are likely similar.
  • This method avoids reliance on null-hypothesis significance testing across pooled data (i.e., meta-analysis) and instead identifies the most probable trend in the literature. A sketch of this rubric in code follows below.
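Below is one possible encoding of the rubric in Python. The numeric thresholds for “most” and “overwhelmingly” are illustrative assumptions; the rubric above states them qualitatively.

```python
# Sketch of the vote-counting rubric above. Thresholds for "most" and
# "overwhelmingly" are illustrative assumptions, not part of the rubric.

def vote_count(a_better: int, similar: int, b_better: int) -> str:
    """Classify the directional trend across studies comparing A and B."""
    total = a_better + similar + b_better
    if total == 0:
        return "No studies to synthesize"
    # The rubric is phrased for intervention A; swap labels if B leads.
    lead = "A" if a_better >= b_better else "B"
    wins, losses = max(a_better, b_better), min(a_better, b_better)
    if wins == total:
        return f"Choose {lead} (better in all studies)"
    if losses == 0 and wins > similar:
        return f"Choose {lead} (better in most studies; remainder similar)"
    if losses == 0:
        return f"Choose {lead}, with reservations (most studies similar)"
    if wins >= 0.8 * total:  # illustrative "overwhelming majority" threshold
        return f"Choose {lead} (overwhelming majority)"
    return ("Results are likely similar (check for moderator variables "
            "such as age, sex, or injury status)")

print(vote_count(a_better=6, similar=2, b_better=0))  # Choose A (most studies)
print(vote_count(a_better=2, similar=3, b_better=2))  # likely similar
```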

Be cautious with meta-analyses (MA).

  • While meta-analyses can provide useful aggregate statistics, they should not be interpreted as inherently superior to trend-based synthesis (the systematic review methods described above). Potential problems include:
    • Aggregation of effect sizes (averages of averages), which may obscure clinically meaningful patterns (see the toy example below).
    • Combining heterogeneous studies, which increases the potential for confounding variables.
    • Failure to reject the null hypothesis, which may reflect regression to the mean or methodological flaws rather than a true lack of difference.
  • Meta-analyses should never be elevated above consistent trends demonstrated by direct, well-controlled, comparative research.
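A toy example of the “averages of averages” problem: pooling two hypothetical subgroups with opposite responses produces a near-zero average effect, even though the intervention matters greatly to both groups. All numbers are invented for illustration.

```python
# Toy illustration (hypothetical numbers): pooling heterogeneous studies
# can hide a clinically meaningful pattern behind a near-zero average.
from statistics import mean

younger = [0.6, 0.5, 0.7]    # effect sizes in studies of younger adults
older = [-0.5, -0.6, -0.4]   # effect sizes in studies of older adults

print(f"Pooled effect:  {mean(younger + older):+.2f}")  # +0.05, looks null
print(f"Younger adults: {mean(younger):+.2f}")          # +0.60, clear benefit
print(f"Older adults:   {mean(older):+.2f}")            # -0.50, clear harm
```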

References:

  • Higgins JPT, Thomas J, Chandler J, et al., eds. Cochrane Handbook for Systematic Reviews of Interventions. Version 6.3. Cochrane, 2022.
  • Moher D, Liberati A, Tetzlaff J, Altman DG. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med. 2009;6(7):e1000097.
