A colleague asked for advice about writing a systematic review, for a team that had never written one before. Below is a refined version of my advice.
First, note that these days there is a systematic review or meta-analysis available on practically every subject. Therefore, make sure that it is worth conducting another one. If such a work has already been published, then why do another one? Possible reasons include the following:
- The previous review(s) are out of date and there is much new study evidence to integrate.
- The previous review(s) had dubious methodological quality (keep reading for tips on how to know if a review has poor quality).
- The previous review(s) revealed heterogeneity in study effects and plausibly omitted important moderators that may explain the heterogeneity.
Of course, all three may be true. Of these factors, I would say the second and third are the most important, as simply updating a literature is not that exciting.
Assuming it is worth doing a new systematic review or meta-analysis:
- My stock advice for guidance is a 2014 chapter I co-wrote (link here); this one assumes you are conducting a meta-analysis, not just a systematic review without the quantitative piece. (The chapter borrows from advances appearing across several disciplines but specifically targets those doing meta-analyses in social-personality psychology.)
- Of course you could go with the Cochrane Handbook for guidance in conducting a systematic review, but note it has not been updated since 2011 and an update is not expected until late 2018 at the earliest. Following Cochrane procedures is sure to be perceived as highly conventional, as Cochrane is quite popular. Still, note that doing so is also at least somewhat conservative. For example, they assume that best-evidence synthesis is the best way to review, such that reviews would routinely omit studies presumed to have low methodological quality. The problem is that not much is known about how risk of bias (and other methodological artifacts) actually affects results (see this article for more on this issue; and this link on this website for still more; of course, we do know how some artifacts affect study effects!). In some literatures, so-called weak studies (such as uncontrolled interventions) are sometimes the best evidence on a subject (e.g., HIV prevention interventions for injection drug users). Moreover, going all-in with Cochrane will mean registering with them and committing to updates of the same literature over time.
- Browsing and understanding the PRISMA reporting standards would be wise; these ensure that you report enough detail for the quality of your review to be judged. Thus PRISMA implies quality standards, but simply reporting that you "followed PRISMA guidelines" is a bit misleading unless you actually did a high-quality review. There are many standards available, so it is important to know the one(s) most relevant to your field; here is one for meta-analyses in psychology, for example. (Note that the current site has a group for reporting standards--it could use updating--would you like to volunteer?)
- The AMSTAR systematic review methodological quality scale should be required reading (citation 1; citation 2). Following these standards will help to ensure that your systematic review meets the contemporary standard, though note that many have modified the AMSTAR for their own use or to incorporate additional quality dimensions; my team has done so in recent work, and other teams have raised important issues (e.g., citation 3).
- Consult a librarian in the specialty area. Consulting a librarian during the search process is an example of a dimension that is rapidly becoming a quality check for meta-analyses and systematic reviews; some journals now require it, and some even go so far as to require that the librarian co-author. (It is not mentioned in the AMSTAR, though it does show up in other standards.) These days, scholars grow up doing internet searches and therefore assume they know how to conduct thorough literature searches; this growing standard is evidence that people routinely over-rate their search skills.
- Meta-analysis software. These days, many meta-analytic routines are readily available and easily executed in standard statistics packages like SAS, Stata, SPSS, and R; there is also easy-to-use specialty software (e.g., Comprehensive Meta-Analysis). The existence of these packages might suggest to a scholar that it is "easy" to do a meta-analysis. The sources cited in this blog attest otherwise: It takes an understanding of the underlying assumptions to use and interpret meta-analytic output, and the situation is only getting more complex over time, which leads to my final piece of advice.
- Finally, I strongly recommend that you conduct your meta-analysis during a semester-long course on the subject (which of course would include systematic reviewing): Conducting a meta-analysis under the close supervision of an expert will help you avoid errors and maximize the efficiency of your work. Doing so will also provide a strong background in contemporary standards of meta-analysis. Note that so-called publication bias or small-study statistics continue to advance quickly. One realization that is becoming conventional is that such statistics are difficult to interpret in the face of significant heterogeneity. Of the sources cited above, only the chapter mentioned at the outset makes that point. The problem is that publication bias can exist side-by-side with real moderator effects, so publication bias tests conducted without considering moderators are unlikely to prove very useful. In the absence of a meta-analytic seminar, consulting a meta-analysis expert would be wise, as this person can answer questions tailored to your specific systematic review or meta-analysis.
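To make concrete what the software and publication-bias points above gloss over, here is a minimal sketch (in Python, for illustration) of the core computations most packages automate: an inverse-variance pooled estimate, the Q and I² heterogeneity statistics, a DerSimonian-Laird random-effects estimate, and an Egger-style regression intercept for small-study effects. The effect sizes and standard errors below are made up; in practice you would use a vetted package (e.g., metafor in R) rather than hand-rolled code.

```python
import math

# Hypothetical per-study effect sizes (e.g., standardized mean differences)
# and their standard errors -- purely illustrative numbers.
effects = [0.30, 0.45, 0.12, 0.60, 0.25, 0.51]
ses     = [0.10, 0.15, 0.08, 0.20, 0.12, 0.18]
k = len(effects)

# Fixed-effect (inverse-variance) weights and pooled estimate.
w = [1 / se**2 for se in ses]
fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)

# Cochran's Q and I^2: how much between-study heterogeneity is present?
Q = sum(wi * (yi - fixed)**2 for wi, yi in zip(w, effects))
I2 = max(0.0, (Q - (k - 1)) / Q) * 100 if Q > 0 else 0.0

# DerSimonian-Laird estimate of the between-study variance tau^2.
c = sum(w) - sum(wi**2 for wi in w) / sum(w)
tau2 = max(0.0, (Q - (k - 1)) / c)

# Random-effects weights and pooled estimate.
w_re = [1 / (se**2 + tau2) for se in ses]
random_eff = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
se_re = math.sqrt(1 / sum(w_re))

print(f"Q = {Q:.2f} (df = {k - 1}), I^2 = {I2:.1f}%")
print(f"tau^2 = {tau2:.4f}")
print(f"Random-effects estimate = {random_eff:.3f} (SE = {se_re:.3f})")

# Egger-style test: regress the standardized effect (y/se) on precision
# (1/se); an intercept far from zero suggests funnel-plot asymmetry.
# As the text notes, apparent asymmetry can also reflect real moderator
# effects, so this is hard to interpret under heterogeneity.
z = [yi / se for yi, se in zip(effects, ses)]
prec = [1 / se for se in ses]
mz, mp = sum(z) / k, sum(prec) / k
slope = (sum((p - mp) * (zi - mz) for p, zi in zip(prec, z))
         / sum((p - mp)**2 for p in prec))
intercept = mz - slope * mp
print(f"Egger intercept = {intercept:.3f}")
```

Even this toy example shows why expert guidance matters: every line embodies an assumption (independent effects, known within-study variances, a particular tau² estimator) that the software will not check for you.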
Consulting the sources above will help identify the steps you need to take to conduct a high-quality review that is maximally influential.