Raise your hand if you have said, or heard, this sentence.... Or, the close cousin of this sentence: "You should write a review paper and submit it to ___journal_name____", sometimes spoken by Editors seeking to game the citation and impact factor system. (Publishing more reviews is an easy way for journals to boost their IF).
We (Dan Bolnick and Andrew Hendry) recently collaborated on a review paper. We are honestly excited by the topic, and feel like we had something useful to say that would help shape people's view of the topic (especially newcomers to the field). But the process of wrangling 20 people's opinions - authors and reviewers and editors - into a coherent text was like the proverbial herding of cats, but with really big cats. Big grumpy cats. That's not the fault of any one (or group) of co-authors; it's just what happens when people with many different views and time constraints and agendas try to work together on something that is nobody's number-one priority.
[Image: RIP GrumpyCat, Patron Saint of Writing Review Papers]
It's easy to hate on review papers, and complain that they are just soap-box pontification. Just people gaming the citation-driven academic system. But then, we all can also think of great review papers we've read. Papers that helped us learn the layout of a subfield or topic outside our usual expertise. Papers that made us think differently about a familiar topic. Papers we found valuable to assign as reading for a graduate or undergraduate class to introduce newcomers to a topic. They are also great for senior scientists making a lateral move between fields. For instance, Dan has had a shift in the past 10 years into more evolutionary immunology, and found review papers immensely helpful in learning this complex field in a semi-efficient way.
So, what makes the difference between a hot-mess herd-of-cats review, and an effective one that people want to read? When is it appropriate to write a review, and when should you cease and desist? And, how do you steer clear of the common pitfalls that make the difference between an influential citation classic, versus a waste of your time (as a writer, or reader)?
Note, there are some other blogs on the topic of review papers. For instance, Andrew did an old EcoEvoEvoEco blog about citation rates to his reviews versus other papers:
And Steve Heard wrote one too. We didn't read Steve's blog in advance of this one, to not bias our own opinions *.
(* or perhaps because we were lazy)
1) Should you write a review paper? (a handy flow chart)
If you have to ask this question, then the answer is probably no. Some people would say the answer is ALMOST ALWAYS no:
Other people seem to think that everyone should write a review paper, and require their graduate students, or even students in graduate courses, to write and submit review papers:
Our view is that there are times for reviews, and times not to review. For every thing, turn, turn, turn, there is a synthesis, turn, turn turn. The question is, what's a good reason, what's a bad reason? Well, to help you in your deliberations, here's a handy and mostly tongue-in-cheek guide:
2) Pros and Cons of writing a review paper
- A good review can bring together existing knowledge in ways that generate a new insight.
- You get to make methodological recommendations for how people should move ahead to study a topic, setting a research agenda by laying out new questions or identifying more (and less) effective experimental approaches. If you can successfully define an agenda that other people actually follow, you can guide the future of your discipline. And that's rewarding.
- You can define the standards of evidence. For example, Schluter and McPhail 1992 (American Naturalist, of course) defined a set of observations they deemed necessary and sufficient to prove an example of ecological character displacement. In hindsight, these were very stringent, and few papers have measured up even almost 30 years later. Andrew Hendry weighs in here: "I am generally negative about 'standards of evidence' papers as they are always unrealistically stringent – and people think that you don't have evidence for a phenomenon even if you are nearly there. Kind of like needing to reject a null hypothesis. Such papers would be better pitched as weighing evidence for or against a given phenomenon. Kind of like levels of support for alternative models. Robinson and Wilson wrote a 'Schluter was too strict' paper, I think. Others have done the same for other such papers."
- It can be really enjoyable to attend a working group designed to brainstorm and write a review. The process can be challenging in a good way, as everyone hashes out a common set of definitions and views. Ten or twenty (diverse!) people in a room for a few days arguing over details of semantics and interpretation of data or models is a great way to reach a consensus on a subject, which you then want to convey to everyone who wasn't in the room, hopefully to their benefit (but perhaps to their annoyance).
- Review papers also help the writer organize their thoughts on a topic – often stimulating their own empirical/theoretical research. This is why many professors encourage their PhD students to make one PhD dissertation chapter be a review of a topic. Note, however, that while this might be a good motivation to write a paper for your own edification, it isn't necessarily a good reason to publish it for other people to read.
- Self-interested reason last: Review papers can become your most-cited work. That's certainly the case for both of us. [Dan: four of my five most-cited papers are straight-up reviews, the other is a meta-analysis. These five account for about 40% of my lifetime citations, though they are only 4% of my publications. For a more in-depth analysis, see the figure below. Overall, 19% of my lifetime papers are reviews. 65% are empirical. 16% are theory. In contrast 31% of my citations are to those 19% of my papers that are reviews, 32% to my empirical papers, 25% to meta-analyses, 7% to theory papers, 4% to methods. From this point forward in this blog, I'm going to consider meta-analyses as belonging more in the empirical study side, than as a review, because they entail both a great deal more work, and more de novo data analysis and interpretation.]
- Note, however, that Andrew and Dan may be in the minority in this regard. A Twitter poll found that a majority of unscientifically sampled respondents had empirical papers as their most cited.
- A bad review can really, really flop. Perhaps nobody wants to read it. Even worse, what if lots of people read it and disagree with the premise or conclusions? It can come across as narcissistic, or as wasting people's time, which makes them grumpy (refer back to GrumpyCat, above).
- Saturation: some topics (I'm looking at you, eco-evolutionary feedbacks) have a high ratio of reviews to actual data. More on this later.
- Takes your time away from 'real' science, generating and publishing data or models that really advance our collective knowledge. For that matter, it chews up reviewer and editor time too, so hopefully it is worth everyone's time, but it might not be.
- Citation "theft". There's a strong argument to be made that when we write about a scientific idea, we should cite whoever first proposed that idea (and/or whoever provided the first or the strongest-available evidence for the idea). Citations are the currency by which authors get rewarded for their work, and we want to reward people who generate new insights. By citing them. But, review papers tend to attract citations. It is easier to cite a review saying that "X happens" than to locate the first published example of X. And, the review lends a greater air of generality. You could cite 10 experimental articles showing X, or just one review. Especially when writing for journals like Nature or Science, where citation space is limited, one naturally gravitates towards citing reviews. Yet, this seems undesirable from the standpoint of rewarding innovation. The win-win good news is that most people preferentially cite a mixture of both review papers and original sources to make a point (though perhaps less so in leading journals with artificially short citation sections):
- Some people get really annoyed by an excess of review papers. They can be seen as "fluff", as a form of gaming the system or parasitism. Michael Turelli used to tell his students and postdocs that reviews counted as negative papers. He was only half joking. Well, less than half joking. So, by his rules, we propose a modified citation and H index. Turelli's Penalized Citations is the total number of citations to non-review papers, minus the total number of citations to review papers. By that measure (including meta-analyses as data papers), Dan loses 2/3 of his total citations. If meta-analyses were also included among reviews, he'd be in negative territory (negative 1800). Turelli's Penalized H Index is the H index just among non-review papers, minus the H index just of review papers. Dan's TPHI is 21. This must be why Turelli secretly harbors thoughts of disowning Dan. We assume. Andrew Hendry adds here: from Web of Science, my Turelli's Penalized Citations are 1865 if meta-analysis is empirical and -213 if meta-analysis is review. My Turelli's Penalized H Index is 18 if meta-analysis is empirical and 4 if meta-analysis is review. In short, we've clearly both benefitted from reviews.
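For anyone who wants to compute their own damage, here is a minimal sketch of the (tongue-in-cheek) Turelli metrics as defined above. The paper records and citation counts are made-up illustrative numbers, not anyone's real publication record:

```python
# Tongue-in-cheek Turelli metrics: penalize citations to review papers.
# Each paper is a (citation_count, is_review) pair; numbers are hypothetical.

def h_index(citations):
    """Standard h-index: largest h such that h papers have >= h citations each."""
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

def turelli_penalized_citations(papers):
    """Citations to non-review papers minus citations to review papers."""
    non_review = sum(c for c, is_review in papers if not is_review)
    review = sum(c for c, is_review in papers if is_review)
    return non_review - review

def turelli_penalized_h(papers):
    """h-index over non-review papers minus h-index over review papers."""
    non_review = [c for c, is_review in papers if not is_review]
    review = [c for c, is_review in papers if is_review]
    return h_index(non_review) - h_index(review)

# Hypothetical record: two well-cited reviews swamp four data papers.
papers = [(300, False), (150, False), (90, False), (40, False),
          (500, True), (120, True)]

print(turelli_penalized_citations(papers))  # 580 - 620 = -40
print(turelli_penalized_h(papers))          # 4 - 2 = 2
```

With these invented numbers, the hypothetical author is in negative citation territory despite a respectable record, which is exactly Turelli's point.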
3) Do's and Don'ts of writing a review paper
Do:
- Clarify terminology in ways that are consistent with past usage.
- Summarize existing knowledge, but this should be only a modest part of the review.
- Derive a new conclusion that follows from, but is not presently stated in, the existing literature. As you will see from the copied tweets below from a recent thread, the overwhelming consensus is that reviews must provide a serious new insight, some value-added.
- Include easy-to-read, non-obvious diagrams conveying key ideas.
- Identify gaps in our knowledge.
- Describe specific experimental or other method innovations that allow people to advance beyond the existing knowledge.
- Write well.
- Think about your audience. Are you writing to experts in your field who have also read the same papers, but maybe haven't put the pieces together in your particular way? Are you writing to professionals from other disciplines to introduce them to your field? Are you writing to undergraduates or graduate students? Of course a great review paper might be accessible to everyone, but often these different audiences require different approaches. Most fundamentally, are you writing to fellow specialists, or to non-specialists?
- Provide specific examples to illustrate points, without overloading the reader with all the examples.
- Put the topic into historical context, including bringing to light older but very relevant papers. Many excellent old papers fall off the map, but deserve credit for their pioneering insights.
- Clearly state why a review is needed / appropriate at this juncture.
- Provide tables of relevant examples of phenomena you describe, with some annotation. These can go in supplements, but are useful for people entering into the subject.
- When there's enough empirical work available, make it a meta-analysis to derive quantitative conclusions.
- Think about the diversity of authors whose work you are highlighting. Do not just mention work by your friends, and do not just mention work by older white males.
- Co-author with a diverse group of colleagues to ensure your review's viewpoint represents a consensus opinion, not just your own. Both Dan and Andrew have looked over their own review papers and, in retrospect, find themselves wanting in this regard and are trying to do better going forwards (http://ecoevoevoeco.blogspot.com/2016/04/subtle-sexism-self-evaluation.html).
- An exception to the "Say Something New" rule is that review papers can do a great service if they bring an old idea to the attention of a new audience. Put another way, we can say that an idea is new to some group of people. For instance, the eco-evolutionary dynamics field saw a proliferation of review papers, some might say faster than new empirical papers appeared for a time. Partly this was because the time was right and multiple groups were converging independently on the theme. And partly, they were talking to different audiences, some to ecologists, some to evolutionary biologists, or to conservationists. So, bringing something to a new audience is another option.
- Write well, and aim for a widely-read journal. Sometimes a topic has been reviewed before, but that review didn't land its punches and people aren't really paying attention. A follow-up review in another more visible location, that is better written or better-argued, may stick where previous reviews didn't. Even just getting the same paper (writ large) in a fancy journal (Science/Nature) can have a huge positive effect on the spread of the idea – and, of course, attention to the earlier review. Without this, rapid evolution would not be so prominent in the UN Global Assessment and many other such places.
Don't:
- Redefine terms
- Introduce lots of new terms unnecessarily
- List lots of examples.
- Write a review just because you feel like you should
- Write a review just because there isn't one on the topic.
- Just summarize the state of the field
- Make recommendations for open research directions that aren't practical to pursue. Some poor grad student might go charging off to work on that and wind up in a dead end. Of course, they might also crack the barrier too and do something great, but that's a challenging hurdle.
- Write something really really long. You want to do a book? Write a book.
- Ignore relevant mathematical models. For most topics you consider, there's theory. Use it. Conversely when reviewing theory, keep a close eye on including some relevant data.
- Cite yourself disproportionately often. If anything, try to cite yourself disproportionately infrequently.
- Controversial opinion: Don't have a long list of co-authors just because they were in the room for conversations. They should be able to point to specific contributions that each of them made to the text.
- Here's one where Dan and Andrew disagree. Dan: Don't rebrand an existing idea. It pisses people off without adding new insights. For instance, many people see the 'extended evolutionary synthesis' as both making some sloppy claims, but also as claiming to be radical when its core tenets aren't really at odds with previous views. It has generated some serious ill-will and push-back. Andrew: Rebranding (redefining) can serve a very important role in bringing an idea to a new community, reinvigorating an old idea (old wine in new bottles), and generating new enthusiasm. Eco-evolutionary dynamics has been argued by some to just be evolutionary ecology and community genetics. But, if we hadn't rebranded it, the idea would not have spread nearly so far. Dan counters: but a lot of the writing about eco-evo makes it sound like this is a new insight emerging from recent work. In fact, the very idea of ecological character displacement (old wine indeed at this point) is an eco-evolutionary dynamic where evolution of resource-use traits is driven by competition, and ameliorates that competition to allow ecological coexistence. Maybe the problem isn't the rebranding per se, but giving the impression that one is whitewashing relevant older work to make the rebrand seem more innovative and new.
- Make a habit of writing too many reviews, too often. It comes across as pontificating, trying to shape the field without doing the hard work of writing real data or theory papers that advance our knowledge. Both of us have violated that rule (indeed, we are both in violation of that right now, and both regretting it a bit). It does seem that the tendency to over-review is a sign of a maturing career (see figure below from a series of twitter polls), but that might just be senility.
4) Life cycle of a research topic and key moments for reviews
When to review, when not to review? That very much depends on the state of the research topic you want to write about. A quick guide:
- Stage 1: A new paper is published describing a novel phenomenon not previously known. Too early for a review.
- Stage 2: A few theory paper(s) are written making some predictions about the new phenomenon. Too early for a review. Note that the order of Stage 1 and 2 can be reversed if the theory came first and made a testable prediction.
- Stage 3: A few more empirical and/or theory papers. Maybe 10 citations. Still too early by most people's count (see figure from a Twitter poll).
- Stage 4: There's a critical mass of information, awareness grows. Now the gold rush begins. Whoever does the first review gets some early credit. But, they risk being premature and writing a review without sufficient meat to it, that nobody reads.
- Stage 5. There were just a whole bunch of new review papers. All citing the same 15 papers. Not time for a review (unless you have something genuinely new and profound to say, that the others missed).
- Stage 6. The bandwagon. Lots of people are studying the topic now, empirical and theoretical papers appearing all the time. But, that early burst of reviews is still fresh. Hold off.
- Stage 7. Round two of review papers now have enough material to work with to actually do meta-analysis and be more quantitative.
- Stage 8. Everyone thinks "that's a crowded field, I need to do something different to set myself apart". The bandwagon disperses; a steady trickle of work on the topic continues, but nobody reads it much because that's yesterday's research fad and people have moved on to something else. Nobody's paying much attention anymore.
- Stage 9. A decade later, that steady trickle of work has built up to a large reservoir of material. Time to re-up the subject. People didn't really lose interest, it turns out, they just shifted away because of the competition. Now that you remind them of the subject and update the consensus view based on new information from stage 8, the bandwagon renews (return to Stage 6). Maybe by summarizing this large body of literature, you identify some really new insight, theory, result. That returns us to Stage 1.