From: YourHealthNet in Australia

Systematic reviews 101: Systematic reviews vs. Narrative reviews

Last week I went to a workshop on writing systematic reviews run by SYRCLE. The main focus of this workshop, and indeed the main focus within most of the literature on systematic reviews, is on clinical and preclinical research. However, I think that other disciplines can really benefit from some of the principles of systematic reviewing, so I thought I'd write a quick series on how to improve the scientific rigor of writing reviews within the field of language evolution.

So first things first: what is a systematic review? A systematic review is defined (by the Centre for Reviews and Dissemination at the University of York) as “a review of the evidence on a clearly formulated question that uses systematic and explicit methods to identify, select and critically appraise relevant primary research, and to extract and analyse data from the studies that are included in the review.”

This is in contrast to narrative or literature reviews, more traditionally seen in non-medical disciplines. Reviews within language evolution are usually authored by the main players in the field and generally cover a very broad topic. They use informal, unsystematic and subjective methods to search for, collect and interpret information, which is often summarised with a specific hypothesis in mind, without critical appraisal, and with an accompanying convenient narrative. Though these narrative reviews are often conducted by people with expert knowledge of their field, that very expertise and experience may bias the authors. Narrative reviews are, by definition, arguably not objective in assessing the literature and evidence, and therefore not good science. Some are obviously more guilty than others, and I'll let you come up with some good examples in the comments.

So how does one go about starting a systematic review, either as a stand-alone paper or as part of a wider thesis?

Systematic reviews require the following steps:



1. Phrase the research question

2. Define inclusion and exclusion criteria (for the literature search)

3. Search systematically for all original papers

4. Select relevant papers

5. Assess study quality and validity

6. Extract data

7. Analyse data (with a meta-analysis if possible)

8. Interpret and present data
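To make the idea of "explicit methods" concrete, here is a minimal sketch (in Python) of what steps 2–4 look like in practice: criteria are written down before screening, and every include/exclude decision is the output of those criteria rather than of gut feeling. All the criteria, paper records and keyword values here are hypothetical placeholders, not a real protocol.

```python
# Sketch of steps 2-4: screening candidate papers against explicit,
# pre-defined inclusion/exclusion criteria. Everything below (criteria,
# keyword, example papers) is illustrative only.

from dataclasses import dataclass

@dataclass
class Paper:
    title: str
    year: int
    is_primary_research: bool  # criterion: only original studies

# Explicit criteria, fixed BEFORE the search is run (hypothetical values)
MIN_YEAR = 1990
REQUIRED_KEYWORD = "language evolution"

def include(paper: Paper) -> bool:
    """Apply each criterion in turn, so every decision is auditable."""
    if not paper.is_primary_research:
        return False
    if paper.year < MIN_YEAR:
        return False
    if REQUIRED_KEYWORD not in paper.title.lower():
        return False
    return True

# Step 3 would produce a list like this from a database search
candidates = [
    Paper("Language evolution and iterated learning", 2008, True),
    Paper("A narrative history of linguistics", 1985, False),
    Paper("Signals in language evolution experiments", 2012, True),
]

# Step 4: selection is just the criteria applied uniformly
selected = [p for p in candidates if include(p)]
for p in selected:
    print(p.title)
```

The point of the sketch is not the code itself but the discipline it encodes: the criteria exist independently of any particular paper, so a reader can check, and if necessary reproduce, every selection decision.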

In the coming weeks I will write posts on how to phrase the research question of your review, tips on searching systematically for relevant studies, and how to assess the quality and validity of the papers and studies you wish to cover. I may then write a post on meta-analysis, though this will be difficult with reference to language evolution because of its multidisciplinary nature and the diversity of the relevant evidence; I'll have a good think about it.

 

References

Undertaking Systematic Reviews of Research on Effectiveness: CRD's Guidance for those Carrying Out or Commissioning Reviews. CRD Report Number 4 (2nd Edition). NHS Centre for Reviews and Dissemination, University of York. March 2001.

 

  • Thom

    "Narrative reviews are, by definition, arguably not objective in assessing the literature and evidence, and therefore not good science"

That's too strong, surely. The first part of this sentence, and indeed the whole paragraph up to here, is correct, but it's a non sequitur that a subjective approach makes narrative reviews bad science. I would argue back, but frankly I don't really see what the argument is for making that leap in the first place.

  • hvitskygge

I agree with Thom. Sometimes "narrative" reviews can be really useful, as long as one is able to look behind the "story". It depends on what you need it for. Personally, I always like to start with these kinds of reviews if I want to get into a new topic.

  • Hannah Little

    But if you're reading them to get into a topic, you presumably don't have enough prior knowledge to critically assess the claims of the authors. I admit they are easier to read, and therefore more user-friendly to the novice, but I guess my point was that making evidence fit a predefined story (i.e. not objectively looking at all of the evidence before even thinking about conclusions) is not scientific.

  • Thom

    So are all TiCS review papers bad science?

  • Hannah Little

Some of them; I did say some were worse than others. Is it really controversial to say that cherry-picking evidence to fit a hypothesis is dodgy science?

  • Thom

    No, it's not controversial to say that cherry picking is bad science. But that's not the claim you originally made: "Narrative reviews are, by definition... not good science".

  • Hannah Little

After the qualifying statement that they are "arguably not objective in assessing the literature and evidence", which is just a watered-down way of saying cherry-picking, whether the author is aware of it or not.

It was perhaps a bit strong, and I see that narrative reviews are important. I think the literature on reviews is a little to blame here, because it presents narrative and systematic reviews as a dichotomy, when most reviews (those in TiCS included) aren't purely systematic or purely narrative, but lie somewhere along a continuum. I'd still say that making reviews more systematic than narrative will improve their scientific robustness, though.

  • Pingback: Systematic reviews 101: Internal and External Validity | Replicated Typo
