Type of research literature review carried out by means of well-defined protocols
Systematic reviews are a type of literature review that uses systematic methods to collect secondary data, critically appraise research studies, and synthesize findings qualitatively or quantitatively. Systematic reviews formulate research questions that are broad or narrow in scope, and identify and synthesize studies that directly relate to the systematic review question. They are designed to provide a complete, exhaustive summary of current evidence relevant to a research question. For example, systematic reviews of randomized controlled trials are key to the practice of evidence-based medicine, and a review of existing studies is often quicker and cheaper than embarking on a new study.
While systematic reviews are often applied in the biomedical or healthcare context, they can be used in other areas where an assessment of a precisely defined subject would be helpful. Systematic reviews may examine clinical tests, public health interventions, environmental interventions, social interventions, adverse effects, and economic evaluations.
An understanding of systematic reviews and how to implement them in practice is highly recommended for professionals involved in the delivery of health care, public health, and public policy.
A systematic review aims to provide a complete, exhaustive summary of current literature relevant to a research question. The first step in conducting a systematic review is to create a structured question to guide the review. The second step is a thorough search of the literature for relevant papers. The Methodology section of a systematic review lists all of the databases and citation indexes that were searched, such as Web of Science, Embase, and PubMed, as well as any individual journals that were searched. Searching multiple databases is a hallmark of systematic reviews. More recently, the importance of using different search technologies has been increasingly recognised, with artificial intelligence-based tools gaining acceptance. The titles and abstracts of identified articles are then checked against pre-determined criteria for eligibility and relevance to form an inclusion set, which relates back to the research problem. Each included study may be given an objective assessment of methodological quality, preferably using methods conforming to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement (the current guideline) or the high-quality standards of Cochrane.
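The title-and-abstract screening step described above can be sketched as a simple filter over candidate records against pre-determined criteria. A minimal illustration only: the record fields, study titles, and criteria below are hypothetical, not a real screening protocol.

```python
# Hypothetical candidate records retrieved from a database search
records = [
    {"title": "Aspirin for MI prevention", "year": 2018, "design": "RCT"},
    {"title": "Aspirin case report", "year": 2020, "design": "case report"},
    {"title": "Old aspirin trial", "year": 1995, "design": "RCT"},
]

# Eligibility criteria agreed in the protocol before screening begins
def is_eligible(record):
    return (record["design"] == "RCT"   # study-design criterion
            and record["year"] >= 2000) # recency criterion

# The inclusion set: records passing every pre-determined criterion
included = [r for r in records if is_eligible(r)]
print([r["title"] for r in included])  # → ['Aspirin for MI prevention']
```

In practice screening is done in duplicate by independent raters, but the principle is the same: every record is judged against the same criteria fixed in advance.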
Systematic reviews often, but not always, use statistical techniques (meta-analysis) to combine the results of eligible studies, or at least score the levels of evidence according to the methodology used. An additional rater may be consulted to resolve any scoring differences between raters. Although systematic reviews are most often applied in the biomedical or healthcare context, they can be applied in any field of research. Groups like the Campbell Collaboration promote the use of systematic reviews in policy-making beyond healthcare.
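Agreement between raters is often quantified before discrepancies are referred to an additional rater. A minimal sketch, assuming two hypothetical raters have each scored the same studies as 'high' or 'low' quality, using Cohen's kappa (one common chance-corrected agreement statistic; the ratings below are illustrative):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters' category labels."""
    n = len(rater_a)
    # Observed proportion of studies where the raters agree
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's marginal frequencies
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical quality ratings from two independent raters
a = ["high", "high", "low", "high", "low", "low", "high", "low"]
b = ["high", "low", "low", "high", "low", "high", "high", "low"]
print(round(cohens_kappa(a, b), 3))  # → 0.5
```

A kappa well below 1 signals that the pre-specified scoring criteria may need clarification before the third rater adjudicates.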
A systematic review uses an objective and transparent approach for research synthesis, with the aim of minimizing bias. While many systematic reviews are based on an explicit quantitative meta-analysis of available data, there are also qualitative reviews which adhere to standards for gathering, analyzing and reporting evidence. The EPPI-Centre has been influential in developing methods for combining both qualitative and quantitative research in systematic reviews. The PRISMA statement suggests a standardized way to ensure a transparent and complete reporting of systematic reviews, and is now required for this kind of research by more than 170 medical journals worldwide.
Developments in systematic reviews during the 21st century included realist reviews and the meta-narrative approach, both of which addressed problems of methods and heterogeneity existing on some subjects.
Scoping reviews are distinct from systematic reviews in a number of important ways. A scoping review searches for concepts, mapping the language that surrounds them and adjusting the search method iteratively. It may serve as a preliminary stage before a systematic review, 'scoping' out an area of inquiry and mapping its language and key concepts. Because a scoping review should be conducted systematically (the method is repeatable), some academic publishers categorize it as a kind of 'systematic review', which may cause confusion. Scoping reviews are helpful when a systematic synthesis of research findings is not possible, for example when there are no published clinical trials in the area of inquiry. They are also helpful when an area of inquiry is very broad, for example exploring how the public are involved in all stages of systematic reviews. The exact method of scoping review is still not clearly defined, as it is an iterative process and relatively new; there have been a number of attempts to improve the standardisation of the method. PROSPERO does not permit the submission of protocols of scoping reviews, although some journals will publish protocols for scoping reviews.
The main stages of a systematic review are:
Defining a question and agreeing an objective method. It is considered best practice to publish the protocol of the systematic review before initiating it in order to help avoid unplanned duplication and to enable comparison of reported review methods with what was planned in the protocol.
A search for relevant data from research that matches certain criteria. For example, only selecting research that is good quality and answers the defined question. Contacting a trained information professional or librarian can improve the quality of the systematic review.
'Extraction' of relevant data. This can include how the research was done (often called the method or 'intervention'), who participated in the research (including how many people), how it was paid for (for example funding sources) and what happened (the outcomes).
Assess the quality of the data by judging it against criteria identified at the first stage. This can include assessing the quality (or certainty) of evidence, using criteria such as GRADE.
Analyse and combine the data (using statistical methods) to give an overall result across all of the data. This combination of data can be visualised using a blobbogram (also called a forest plot), in which the diamond represents the combined result of all the included data. Because this combined result draws on more than one data set, it is considered stronger evidence: the more data there is, the more confident we can be in the conclusions.
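The combining step above can be sketched with the simplest pooling method, a fixed-effect inverse-variance meta-analysis. This is a minimal illustration of the arithmetic behind the diamond in a forest plot, with made-up study numbers; real reviews often use random-effects models and dedicated software instead.

```python
import math

def fixed_effect_pool(estimates, std_errors):
    """Inverse-variance fixed-effect pooling of study effect estimates."""
    # Each study is weighted by the inverse of its variance,
    # so more precise studies contribute more to the pooled result.
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)  # 95% CI
    return pooled, ci

# Hypothetical log odds ratios and standard errors from three trials
pooled, (lo, hi) = fixed_effect_pool([-0.4, -0.2, -0.5], [0.20, 0.15, 0.25])
print(f"pooled effect {pooled:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```

Note that the pooled confidence interval is narrower than any single study's, which is the statistical sense in which the combined result is "more reliable" than its parts.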
Once these stages are complete, the review may be published, disseminated and translated into practice after being adopted as evidence.
Living systematic reviews
Living systematic reviews are a relatively new kind of semi-automated, high-quality, up-to-date online summary of research, updated as new research becomes available. The essential difference between a living systematic review and a conventional systematic review is the publication format: living systematic reviews are 'dynamic, persistent, online-only evidence summaries, which are updated rapidly and frequently'.
Medicine and biology
In 1753, James Lind published A Treatise on the Scurvy, a study which he described in the subtitle as A Critical and Chronological View of What has been Published on the Subject. This is arguably the first systematic review. Lind comments, "It became requisite to exhibit a full and impartial view of what had hitherto been published on the scurvy," and, "before the subject could be set in a clear and proper light, it was necessary to remove a great deal of rubbish." The first proper analysis combining different sources was a report by Karl Pearson in 1904. Pearson analysed the effectiveness of typhoid vaccine across five studies of immunity and six studies of mortality. What struck him was the irregularity of the data. He wrote:
Assuming that the inoculation is not more than a temporary inconvenience, it would seem to be possible to call for volunteers... [and] only to inoculate every second volunteer... with a view to ascertaining whether any inoculation is likely to prove useful... In other words, the 'experiment' might demonstrate that this first step to a reasonably effective prevention was not a false one.
This is considered the first analysis of clinical trials using what later became known as meta-analysis.
In 1925, Ronald Fisher introduced statistical methods for such analysis in his book Statistical Methods for Research Workers. William Gemmell Cochran, a colleague of Fisher, developed an improved method in 1937. With Frank Yates, Cochran applied the new method in a 1938 analysis of agricultural data titled "The analysis of groups of experiments".
In 1974, Archie Cochrane and colleagues published an analysis of the use of aspirin in the prevention of heart attack (myocardial infarction). This became one of the most influential systematic analyses of randomized controlled trials. In 1979, Archie Cochrane wrote, 'It is surely a great criticism of our profession that we have not organised a critical summary, by specialty or subspecialty, adapted periodically, of all relevant randomised controlled trials'. Critical appraisal and synthesis of research findings in a systematic way emerged in the 1970s, after Gene V. Glass introduced the term 'meta-analysis' in 1976. According to Glass, meta-analysis is "the statistical analysis of a large collection of analysis results from individual studies for the purpose of integrating the findings". Early syntheses were conducted in broad areas of public policy and social interventions, with systematic research synthesis then applied to medicine and health. The Cochrane Collaboration was founded in 1993, building on the work of Iain Chalmers and colleagues in the area of pregnancy and childbirth.
Named after Archie Cochrane, Cochrane is a group of over 37,000 specialists in healthcare who systematically review randomised trials of the effects of prevention, treatments and rehabilitation as well as health systems interventions. When appropriate, they also include the results of other types of research. Cochrane Reviews are published in The Cochrane Database of Systematic Reviews section of the Cochrane Library. The 2015 impact factor for The Cochrane Database of Systematic Reviews was 6.103, and it was ranked 12th in the “Medicine, General & Internal” category. There are six types of Cochrane Review:
Intervention reviews assess the benefits and harms of interventions used in healthcare and health policy.
Diagnostic test accuracy reviews assess how well a diagnostic test performs in diagnosing and detecting a particular disease.
Methodology reviews address issues relevant to how systematic reviews and clinical trials are conducted and reported.
Qualitative reviews synthesize qualitative and quantitative evidence to address questions on aspects other than effectiveness.
Prognosis reviews address the probable course or future outcome(s) of people with a health problem.
Overviews of Systematic Reviews (OoRs) are a new type of study that compiles evidence from multiple systematic reviews into a single accessible document, serving as a friendly front end to the Cochrane Collaboration's evidence for healthcare decision-making.
The Cochrane Collaboration provides a handbook for systematic reviewers of interventions which "provides guidance to authors for the preparation of Cochrane Intervention reviews." The Cochrane Handbook outlines eight general steps for preparing a systematic review:
Defining the review question(s) and developing criteria for including studies
Presenting results and "summary of findings" tables
Interpreting results and drawing conclusions
The Cochrane Handbook forms the basis of two sets of standards for the conduct and reporting of Cochrane Intervention Reviews (MECIR - Methodological Expectations of Cochrane Intervention Reviews)
The Cochrane Library is a collection of databases in medicine and other healthcare specialties provided by Cochrane and other organizations. At its core is the collection of Cochrane Reviews, a database of systematic reviews and meta-analyses which summarize and interpret the results of medical research. It was originally published by Update Software and is now published by the shareholder-owned publisher John Wiley & Sons, Ltd. as part of the Wiley Online Library.
For a review to be fully open access, its authors must pay an additional fee. Cochrane has an annual income of US$10 million.
The quasi-standard for systematic review in the social sciences is based on the procedures proposed by the Campbell Collaboration, which is one of a number of groups promoting evidence-based policy in the social sciences. The Campbell Collaboration "helps people make well-informed decisions by preparing, maintaining and disseminating systematic reviews in education, crime and justice, social welfare and international development." It is a sister initiative of Cochrane. The Campbell Collaboration was created in 2000, and the inaugural meeting in Philadelphia, USA, attracted 85 participants from 13 countries.
Business and economics
Because research fields outside the natural sciences differ in nature, the methodological steps described above cannot easily be applied in business research. Early attempts to transfer the procedures from medicine to business research were made by Tranfield et al. (2003). A step-by-step approach was developed by Durach et al.: drawing on experience in their own discipline, these authors adapted the methodological steps and developed a standard procedure for conducting systematic literature reviews in business and economics.
Strengths and weaknesses
While systematic reviews are regarded as the strongest form of medical evidence, a review of 300 studies found that not all systematic reviews were equally reliable, and that their reporting can be improved by a universally agreed upon set of standards and guidelines. A further study by the same group found that of 100 systematic reviews monitored, 7% needed updating at the time of publication, another 4% within a year, and another 11% within 2 years; this figure was higher in rapidly changing fields of medicine, especially cardiovascular medicine. A 2003 study suggested that extending searches beyond major databases, perhaps into grey literature, would increase the effectiveness of reviews.
Roberts and colleagues have highlighted problems with systematic reviews, particularly those conducted by Cochrane, noting that published reviews are often biased, out of date and excessively long. They criticized Cochrane reviews as not being sufficiently critical in the selection of trials and including too many of low quality. They proposed several solutions, including limiting studies in meta-analyses and reviews to registered clinical trials, requiring that original data be made available for statistical checking, paying greater attention to sample size estimates, and eliminating dependence on only published data.
Some of these difficulties were noted early on, as described by Altman: "much poor research arises because researchers feel compelled for career reasons to carry out research that they are ill equipped to perform, and nobody stops them." Methodological limitations of meta-analysis have also been noted: there is as yet no standardized method for researchers to follow, and the optimal methodological steps remain to be agreed. Another concern is that the methods used to conduct a systematic review are sometimes changed once researchers see the available trials they are going to include. Bloggers have described retractions of systematic reviews and published reports of studies included in published systematic reviews.
Systematic reviews are increasingly prevalent in other fields, such as international development research. A number of donors – most notably the UK Department for International Development (DFID) and AusAID – are consequently focusing more attention and resources on testing the appropriateness of systematic reviews in assessing the impacts of development and humanitarian interventions.
Limited reporting of clinical trials
The 'All Trials' campaign highlights that around half of clinical trials have never reported results, and works to improve reporting. This lack of reporting has extremely serious implications for research, including systematic reviews, as it is only possible to synthesize data from published trials. In addition, positive trials have been found to be twice as likely to be published as those with negative results. At present, it is legal for for-profit companies to conduct clinical trials and not publish the results; for example, in the past 10 years 8.7 million patients have taken part in trials that have not published results. These factors make significant publication bias likely, with only 'positive' or perceived favorable results being published. A recent systematic review of industry sponsorship and research outcomes concluded that 'sponsorship of drug and device studies by the manufacturing company leads to more favorable efficacy results and conclusions than sponsorship by other sources', and that there exists an industry bias that cannot be explained by standard 'Risk of bias' assessments.
Systematic reviews of such a bias may amplify the effect, although the flaw is in the reporting of research, not in the systematic review process.
A 2019 publication identified 15 systematic review tools and ranked them by the number of 'critical features' for performing a systematic review that each supports. The top-ranked tools included:
DistillerSR: a paid web application
Swift Active Screener: a paid web application
Covidence: a paid web application and Cochrane technology platform.
^ Eden J, Levit L, Berg A, Morton S, et al. (Institute of Medicine (US) Committee on Standards for Systematic Reviews of Comparative Effectiveness Research) (2011). Finding What Works in Health Care: Standards for Systematic Reviews. doi:10.17226/13059. ISBN 978-0-309-16425-2. PMID 24983062.
^ Pawson R, Greenhalgh T, Harvey G, Walshe K (July 2005). "Realist review – a new method of systematic review designed for complex policy interventions". Journal of Health Services Research & Policy. 10 Suppl 1: 21–34. doi:10.1258/1355819054308530. PMID 16053581.
^ Cochran WG (1937). "Problems Arising in the Analysis of a Series of Similar Experiments". Supplement to the Journal of the Royal Statistical Society. 4 (1): 102–118. doi:10.2307/2984123. JSTOR 2984123.
^ Yates F, Cochran WG (1938). "The analysis of groups of experiments". The Journal of Agricultural Science. 28 (4): 556–580. doi:10.1017/S0021859600050978.
^ Silva V, Grande AJ, Martimbianco AL, Riera R, Carvalho AP (2012). "Overview of systematic reviews - a new type of study: part I: why and for whom?". Sao Paulo Medical Journal. 130 (6): 398–404. doi:10.1590/S1516-31802012000600007. PMID 23338737.
^ Tranfield D, Denyer D, Smart P (2003). "Towards a methodology for developing evidence-informed management knowledge by means of systematic review". British Journal of Management. 14 (3): 207–222. CiteSeerX 10.1.1.622.895. doi:10.1111/1467-8551.00375.
^ Durach CF, Kembro J, Wieland A (2017). "A New Paradigm for Systematic Literature Reviews in Supply Chain Management". Journal of Supply Chain Management. 53 (4): 67–85. doi:10.1111/jscm.12145.
^ Savoie I, Helmer D, Green CJ, Kazanjian A (2003). "Beyond Medline: reducing bias through extended systematic review search". International Journal of Technology Assessment in Health Care. 19 (1): 168–78. doi:10.1017/S0266462303000163. PMID 12701949.
^ Page MJ, McKenzie JE, Kirkham J, Dwan K, Kramer S, Green S, Forbes A (October 2014). "Bias due to selective inclusion and reporting of outcomes and analyses in systematic reviews of randomised trials of healthcare interventions". The Cochrane Database of Systematic Reviews (10): MR000035. doi:10.1002/14651858.MR000035.pub2. PMID 25271098.
^ Song F, Parekh S, Hooper L, Loke YK, Ryder J, Sutton AJ, Hing C, Kwok CS, Pang C (February 2010). "Dissemination and publication of research findings: an updated review of related biases". Health Technology Assessment. 14 (8): iii, ix–xi, 1–193. doi:10.3310/hta14080. PMID 20181324.