Wednesday, May 22, 2019

When and How (Not) to Write a Review Paper

"Hey, we should write a review paper!"

Raise your hand if you have said, or heard, this sentence... Or its close cousin: "You should write a review paper and submit it to ___journal_name____", sometimes spoken by Editors seeking to game the citation and impact factor system. (Publishing more reviews is an easy way for journals to boost their IF.)

We (Dan Bolnick and Andrew Hendry) recently collaborated on a review paper. We are honestly excited by the topic, and feel like we had something useful to say that would help shape people's view of the topic (especially newcomers to the field). But the process of wrangling 20 people's opinions - authors and reviewers and editors - into a coherent text was like the proverbial herding of cats, but with really big cats. Big grumpy cats. That's not the fault of any one co-author (or group of co-authors); it's just what happens when people with many different views and time constraints and agendas try to work together on something that is nobody's number-one priority.

RIP GrumpyCat, Patron Saint of Writing Review Papers
And, at the same time, we are both involved in multiple other review papers simultaneously. Some of these are papers we really feel convey a new idea in a compelling way. Some of them were aggressively solicited by journals. We started emailing back and forth to commiserate about the joys and headaches of review papers. We both feel like we are probably overdoing the review paper thing. Which led us to wonder, when is it good, versus bad, to dive into writing a review? What are the pros and cons? How do you tell which review papers are worth your time, and which are not?

It's easy to hate on review papers, and complain that they are just soap-box pontification. Just people gaming the citation-driven academic system. But then, we all can also think of great review papers we've read. Papers that helped us learn the layout of a subfield or topic outside our usual expertise.  Papers that made us think differently about a familiar topic. Papers we found valuable to assign as reading for a graduate or undergraduate class to introduce newcomers to a topic. They are also great for senior scientists making a lateral move between fields. For instance, Dan has had a shift in the past 10 years into more evolutionary immunology, and found review papers immensely helpful in learning this complex field in a semi-efficient way.

So, what makes the difference between a hot-mess herd-of-cats review and an effective one that people want to read? When is it appropriate to write a review, and when should you cease and desist? And how do you steer clear of the common pitfalls that make the difference between an influential citation classic and a waste of your time (as a writer, or reader)?

Note, there are some other blog posts on the topic of review papers. For instance, Andrew wrote an old EcoEvoEvoEco blog post about citation rates to his reviews versus his other papers.

And Steve Heard wrote one too. We didn't read Steve's blog in advance of this one, so as not to bias our own opinions*.

(* or perhaps because we were lazy)


1) Should you write a review paper? (a handy flow chart)

If you have to ask this question, then the answer is probably no. Some people would say the answer is ALMOST ALWAYS no.

Other people seem to think that everyone should write a review paper, and require their graduate students, or even students in graduate courses, to write and submit review papers.

And pretty much every working group that has ever convened at NCEAS, NESCENT, NIMBioS, iDiv, or other such institutes or centers, feels an obligation to write a review.

Our view is that there are times for reviews, and times not to review. For every thing, turn, turn, turn, there is a synthesis, turn, turn, turn. The question is: what's a good reason, and what's a bad reason? Well, to help you in your deliberations, here's a handy and mostly tongue-in-cheek guide:


2) Pros and Cons of writing a review paper


Pros:

  • A good review can bring together existing knowledge in ways that generate new insight.
  • You get to make methodological recommendations for how people should move ahead to study a topic, setting a research agenda by laying out new questions or identifying more (and less) effective experimental approaches. If you can successfully define an agenda that other people actually follow, you can guide the future of your discipline. And that's rewarding.
  • You can define the standards of evidence. For example, Schluter and McPhail 1992 (American Naturalist, of course) defined a set of observations they deemed necessary and sufficient to prove an example of ecological character displacement. In hindsight, these were very stringent, and few papers have measured up even almost 30 years later. Andrew Hendry weighs in here: "I am generally negative about “standards of evidence” papers as they are always unrealistically stringent – and people think that you don’t have evidence for a phenomenon even if you are nearly there. Kind of like needing to reject a null hypothesis. Such papers would be better pitched as weighing evidence for or against a given phenomenon. Kind of like levels of support for alternative models. Robinson and Wilson wrote a “Schluter was too strict” paper, I think. Others have done the same for other such papers."
  • It can be really enjoyable to attend a working group designed to brainstorm and write a review. The process can be challenging in a good way, as everyone hashes out a common set of definitions and views. Ten or twenty (diverse!) people in a room for a few days arguing over details of semantics and interpretation of data or models is a great way to reach a consensus on a subject, which you then want to convey to everyone who wasn't in the room, hopefully to their benefit (but perhaps to their annoyance).
  • Review papers also help the writer organize their thoughts on a topic – often stimulating their own empirical/theoretical research. This is why many professors encourage their PhD students to make one PhD dissertation chapter be a review of a topic. Note, however, that while this might be a good motivation to write a paper for your own edification, it isn't necessarily a good reason to publish it for other people to read.
  • Self-interested reason last: Review papers can become your most-cited work. That's certainly the case for both of us. [Dan: four of my five most-cited papers are straight-up reviews, the other is a meta-analysis. These five account for about 40% of my lifetime citations, though they are only 4% of my publications. For a more in-depth analysis, see the figure below. Overall, 19% of my lifetime papers are reviews, 65% are empirical, and 16% are theory. In contrast, 31% of my citations are to those 19% of my papers that are reviews, 32% to my empirical papers, 25% to meta-analyses, 7% to theory papers, and 4% to methods. From this point forward in this blog, I'm going to consider meta-analyses as belonging more on the empirical side than among reviews, because they entail both a great deal more work and more de novo data analysis and interpretation.]

      Note, however, that Andrew and Dan may be in the minority in this regard: in an unscientific Twitter poll, most respondents had an empirical paper as their most-cited.


Cons:

  • A bad review can really, really flop. Perhaps nobody wants to read it. Even worse, what if lots of people read it and disagree with the premise or conclusions? It can come across as narcissistic, or as wasting people's time, which makes them grumpy (refer back to GrumpyCat, above).
  • Saturation: some topics (I'm looking at you, eco-evolutionary feedbacks) have a high ratio of reviews to actual data. More on this later.
  • It takes your time away from 'real' science – generating and publishing data or models that really advance our collective knowledge. For that matter, it chews up reviewer and editor time too, so hopefully it is worth everyone's while, but it might not be.
  • Citation "theft". There's a strong argument to be made that when we write about a scientific idea, we should cite whoever first proposed that idea (and/or whoever provided the first or the strongest-available evidence for the idea). Citations are the currency by which authors get rewarded for their work, and we want to reward people who generate new insights. By citing them. But, review papers tend to attract citations. It is easier to cite a review saying that "X happens" than to locate the first published example of X. And, the review lends a greater air of generality. You could cite 10 experimental articles showing X, or just one review. Especially when writing for journals like Nature or Science, where citation space is limited, one naturally gravitates towards citing reviews. Yet, this seems undesirable from the standpoint of rewarding innovation. The win-win good news is that most people preferentially cite a mixture of both review papers and original sources to make a point (though perhaps less so in leading journals with artificially short citation sections).
  • Some people get really annoyed by an excess of review papers. They can be seen as "fluff", as a form of gaming the system or parasitism. Michael Turelli used to tell his students and postdocs that reviews counted as negative papers. He was only half joking. Well, less than half joking. So, by his rules, we propose a modified citation count and H index. Turelli's Penalized Citations is the total number of citations to non-review papers, minus the total number of citations to review papers. By that measure (counting meta-analyses as data papers), Dan loses 2/3 of his total citations. If meta-analyses were instead counted among the reviews, he'd be in negative territory (negative 1800). Turelli's Penalized H Index is the H index computed over non-review papers only, minus the H index computed over review papers only. Dan's TPHI is 21. This must be why Turelli secretly harbors thoughts of disowning Dan. We assume. Andrew Hendry adds here: from Web of Science, my Turelli's Penalized Citations are 1865 if meta-analysis is empirical and -213 if meta-analysis is review. My Turelli's Penalized H Index is 18 if meta-analysis is empirical and 4 if meta-analysis is review. In short, we've clearly both benefitted from reviews.
You know who = Michael Turelli. Ironically, Michael covered the page charges for what was to become Dan's most-cited (review) paper, co-written with a group of fellow graduate students at UC Davis.
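The penalized metrics above are simple enough to compute from anyone's publication list. Here is a minimal sketch in Python; the function names and the example record are our own hypothetical illustration, not either author's real citation data:

```python
def h_index(citations):
    """Standard h-index: the largest h such that h papers
    each have at least h citations."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
    return h

def turelli_penalized(papers):
    """papers: list of (citations, is_review) tuples.
    Returns (penalized citation count, penalized H index):
    the non-review total/H-index minus the review total/H-index."""
    non_review = [c for c, is_review in papers if not is_review]
    review = [c for c, is_review in papers if is_review]
    tpc = sum(non_review) - sum(review)
    tphi = h_index(non_review) - h_index(review)
    return tpc, tphi

# Hypothetical record: three empirical papers, two well-cited reviews.
papers = [(120, False), (80, False), (40, False), (300, True), (150, True)]
print(turelli_penalized(papers))  # -> (-210, 1)
```

As in Dan's case above, a couple of highly cited reviews are enough to drag the penalized citation count deep into negative territory even when the H-index penalty stays small.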


3) Do's and Don'ts of writing a review paper


Do:

  • Clarify terminology in ways that are consistent with past usage.
  • Summarize existing knowledge, but this should be only a modest part of the review.
  • Derive a new conclusion that follows from, but is not presently stated in, the existing literature. As you will see from the copied tweets below from a recent thread, the overwhelming consensus is that reviews must provide a serious new insight, some value-added.
  • Include easy-to-read, non-obvious diagrams conveying key ideas.
  • Identify gaps in our knowledge.
  • Describe specific experimental or other method innovations that allow people to advance beyond the existing knowledge.
  • Write well.
  • Think about your audience. Are you writing to experts in your field who have also read the same papers, but maybe haven't put the pieces together in your particular way? Are you writing to professionals from other disciplines to introduce them to your field? Are you writing to undergraduates or graduate students? Of course a great review paper might be accessible to everyone, but often these different audiences require different approaches. Most fundamentally, are you writing to fellow specialists, or to non-specialists?
  • Provide specific examples to illustrate points, without overloading the reader with all the examples.
  • Put the topic into historical context, including bringing to light older but very relevant papers. Many excellent old papers fall off the map, but deserve credit for their pioneering insights.
  • Clearly state why a review is needed / appropriate at this juncture.
  • Provide tables of relevant examples of phenomena you describe, with some annotation. These can go in supplements, but are useful for people entering into the subject.
  • When there's enough empirical work available, make it a meta-analysis to derive quantitative conclusions.
  • Think about the diversity of authors whose work you are highlighting. Do not just mention work by your friends, and do not just mention work by older white males. 
  • Co-author with a diverse group of colleagues to ensure your review's viewpoint represents a consensus opinion, not just your own. Both Dan and Andrew have looked over their own review papers and, in retrospect, find themselves wanting in this regard and are trying to do better going forwards.
  • An exception to the "Say Something New" rule is that review papers can do a great service if they bring an old idea to the attention of a new audience. Put another way, an idea can be new to some group of people. For instance, the eco-evolutionary dynamics field saw a proliferation of review papers, some might say faster than the proliferation of new empirical papers, for a time. Partly this was because the time was right and multiple groups were converging independently on the theme. And partly, they were talking to different audiences: some to ecologists, some to evolutionary biologists, some to conservationists. So, bringing something to a new audience is another option.
  • Write well, and aim for a widely-read journal. Sometimes a topic has been reviewed before, but that review didn't land its punches and people aren't really paying attention. A follow-up review in another, more visible location, one that is better written or better argued, may stick where previous reviews didn't. Even just getting the same paper (writ large) into a fancy journal (Science/Nature) can have a huge positive effect on the spread of the idea – and, of course, on attention to the earlier review. Without this, rapid evolution would not be so prominent in the UN Global Assessment and many other such places.


Don't:

  • Redefine terms.
  • Introduce lots of new terms unnecessarily
  • List lots of examples.
  • Write a review just because you feel like you should
  • Write a review just because there isn't one on the topic.
  • Just summarize the state of the field
  • Make recommendations for open research directions that aren't practical to pursue. Some poor grad student might go charging off to work on that and wind up in a dead end. Of course, they might also crack the barrier and do something great, but that's a challenging hurdle.
  • Write something really really long. You want to do a book? Write a book.
  • Ignore relevant mathematical models. For most topics you consider, there's theory. Use it. Conversely when reviewing theory, keep a close eye on including some relevant data.
  • Cite yourself disproportionately often. If anything, try to cite yourself disproportionately infrequently.
  • Controversial opinion: Don't have a long list of co-authors just because they were in the room for conversations. They should be able to point to specific contributions that each of them made to the text.
  • Here's one where Dan and Andrew disagree. Dan: Don't rebrand an existing idea. It pisses people off without adding new insights. For instance, many people see the 'extended evolutionary synthesis' both as making some sloppy claims and as claiming to be radical when its core tenets aren't really at odds with previous views. It has generated some serious ill-will and push-back. Andrew: Rebranding (redefining) can serve a very important role in bringing an idea to a new community, reinvigorating an old idea (old wine in new bottles), and generating new enthusiasm. Eco-evolutionary dynamics has been argued by some to be just evolutionary ecology and community genetics. But if we hadn't rebranded it, the idea would not have spread nearly so far. Dan counters: but a lot of the writing about eco-evo makes it sound like this is a new insight emerging from recent work. In fact, the very idea of ecological character displacement (old wine indeed at this point) is an eco-evolutionary dynamic in which evolution of resource-use traits is driven by competition, and ameliorates that competition to allow ecological coexistence. Maybe the problem isn't the rebranding per se, but giving the impression that one is whitewashing relevant older work to make the rebrand seem more innovative and new.
  • Make a habit of writing too many reviews, too often. It comes across as pontificating, trying to shape the field without doing the hard work of writing real data or theory papers that advance our knowledge. Both of us have violated that rule (indeed, we are both in violation of that right now, and both regretting it a bit). It does seem that the tendency to over-review is a sign of a maturing career (see figure below from a series of twitter polls), but that might just be senility.


4) Life cycle of a research topic and key moments for reviews

When to review, when not to review? That very much depends on the state of the research topic you want to write about. A quick guide:

  • Stage 1: A new paper is published describing a novel phenomenon not previously known. Too early for a review.
  • Stage 2: A few theory paper(s) are written making some predictions about the new phenomenon. Too early for a review. Note that the order of Stage 1 and 2 can be reversed if the theory came first and made a testable prediction.
  • Stage 3: A few more empirical and/or theory papers. Maybe 10 citations. Still too early by most people's count (see figure from a Twitter poll).
  • Stage 4: There's a critical mass of information, and awareness grows. Now the gold rush begins. Whoever does the first review gets some early credit. But they risk being premature, writing a review without sufficient meat to it, one that nobody reads.
  • Stage 5. A whole bunch of new review papers just appeared, all citing the same 15 papers. Not time for a review (unless you have something genuinely new and profound to say that the others missed).
  • Stage 6. The bandwagon. Lots of people are studying the topic now, with empirical and theoretical papers appearing all the time. But that early burst of reviews is still fresh. Hold off.
  • Stage 7. Round two of review papers now has enough material to work with to actually do meta-analysis and be more quantitative.
  • Stage 8. Everyone thinks "that's a crowded field, I need to do something different to set myself apart". The bandwagon disperses; a steady trickle of work on the topic continues, but nobody reads it much because that's yesterday's research fad and people have moved on to something else. Nobody's paying much attention anymore.
  • Stage 9. A decade later, that steady trickle of work has built up into a large reservoir of material. Time to re-up the subject. People didn't really lose interest, it turns out; they just shifted away because of the competition. Now that you remind them of the subject and update the consensus view based on new information from Stage 8, the bandwagon renews (return to Stage 6). Maybe, by summarizing this large body of literature, you identify some really new insight, theory, or result. That returns us to Stage 1.

Monday, May 6, 2019

My Coincidental Journey to Relevance

1 pm Paris time today saw the official release of the long-awaited Global Assessment by the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES). The product of intensive work by hundreds of people from more than 50 countries over more than three years, the document summarizes the state of biodiversity on Earth and discusses what we can do to improve that state in the future. The assessment is hundreds of pages long (with many, many more pages of appendices), and the full document is therefore likely to be read by only a small subset of people interested in the topic. For massive documents like this, what is much more likely to be read by many more people is the Summary for Policy Makers (SPM), in this case a 39-page document by itself. Even this SPM is much too long to be read by Important People (Presidents, Prime Ministers, Environmental Ministers) and – of course – by reporters. Hence, everything in the SPM is also distilled down to eight “Key Messages” spanning two pages at the very start of the SPM. These key messages are sure to be read by nearly everyone.

I here wish to draw your attention to Key Message #8, which reads in its entirety:

Human-induced changes are creating conditions for fast biological evolution - so rapid that its effects can be seen in only a few years or even more quickly. The consequences can be positive or negative for biodiversity and ecosystems, but can create uncertainty about the sustainability of species, ecosystem functions and the delivery of nature’s contributions to people. Understanding and monitoring these biological evolutionary changes are as important for informed policy decisions as in cases of ecological change. Sustainable management strategies then can be designed to influence evolutionary trajectories so as to protect vulnerable species and reduce the impact of unwanted species (such as weeds, pests or pathogens). The widespread declines in geographic distribution and population sizes of many species make clear that, although evolutionary adaptation to human-caused drivers can be rapid, it has often not been sufficient to mitigate them fully.

I would like to pause at this point to reflect on an astounding fact: rapid evolution is one of the eight Key Messages of the IPBES Global Assessment. This fact isn’t astounding because rapid evolution doesn’t belong as a key message, but rather because, only 20 years ago, very few people – few scientists, even – would have acknowledged the practical relevance of rapid evolution.

It is with some pride that I can report that, in fact, I wrote much of Key Message #8 – with modifications resulting from many reviewers and also with final tweaks during the Plenary Discussion in Paris. I don’t profess to be responsible for, or to favor, each and every word and phrase in the key message; but I do claim to have had a key role in getting this statement into the Key Messages, as well as in the background material provided later in the SPM and – of course – the numerous resonances of this “theme” across the rest of the Global Assessment.
I seem to have been relevant. How the Hell did that happen?


Many prospective graduate students start discussions with me by expressing their desire to be relevant – usually to conservation. As a memorable example, one student started our conversation by saying “I am interested in evolutionary biology or conservation biology.” My response, of course, was “Well, in my lab, we do evolutionary biology and not conservation biology.” To such students wishing to be relevant in conservation biology, my suggestion has always been to NOT do Conservation Biology (note the capitals this second time). I then go on to argue that Conservation Biology is typically imagined to be helping species x or location y or – most commonly – helping species x in location y. Such work is important, I acknowledge, but we don’t often do it in my lab. The reason is that work to aid species x in location y often has no influence on anything other than species x in location y – and, often, no influence even on species x in location y. I go on to argue that simple basic science designed to understand “how the world works” is by far the best way to make an impact and be “relevant” on the largest possible scale – that is, far beyond only species x in location y. To make this case to students, I give examples. One is the important work done on the effects of inbreeding on fitness, which started with general evolutionary theory and testing on organisms that were not species x in location y. Yet that general work went on to heavily influence policy for many species in many locations.
I then try another example from my personal experience. I argue that much of my research is based entirely on a fundamental interest in understanding how rapid evolution shaped the world – yet it turned out that the insights gained from this research have, in fact, become very broadly relevant. That is, purely basic science at the time later became applied in ways that influenced policy and, in fact, many species x in many locations y. As of today, I can point specifically to Key Message #8 from the Global Assessment as a concrete example of the contribution of pure basic evolutionary biology to global policy relevance.

This blog post might seem to smack, at least to some, of arrogance – that I am somehow touting my own awesomeness and importance in science and beyond. The key point, however, is not that I am somehow more intelligent or hard-working or dedicated or whatever than other researchers. Rather, the key point is that focused interest in a basic research question led to publications in academic journals that precipitated a few chance events that eventually snowballed into Key Message #8 in the Global Assessment. I tried to keep my description of this series of events short but had trouble doing so. I did consider what I might delete to make it shorter but then realized that, in fact, each step in this long chain of coincidental (or not) events was necessary to Key Message #8, or at least to my contribution to it.


Part 1. Contemporary evolution …

In 1992-1993, Mike Kinnison and I both started studying rapid evolution in salmon at the University of Washington: Mike worked on New Zealand chinook salmon and I worked on Lake Washington sockeye salmon. Our choice of the overall topic (rapid evolution) and our specific study systems had nothing to do with our own insights or ideas – they were instead the suggestion of our MSc (and later PhD) supervisor Tom Quinn. At the time, we were both focused on salmon, not evolution.

Tom Quinn in 1995.
In 1995, my mother bought me a book for Christmas by Jonathan Weiner called The Beak of the Finch. This book about Darwin’s finches kindled my interest in evolution per se, as opposed to salmon evolution.

In 1998, Mike and I read a “News and Comment” article in Trends in Ecology and Evolution (TREE), written by Erik Svensson, about two 1997 studies. One study was by Jonathan Losos in Nature and the other was by David Reznick in Science, both reporting rapid evolution – the first in Anolis lizards introduced to small islands and the second in Trinidadian guppies introduced to predator-free environments. The key innovation of these new papers was that they calculated evolutionary rates for their studies and compared those rates to evolutionary rates estimated from the fossil record. This comparison revealed that evolution in lizards and guppies was several orders of magnitude faster (RAPID!) than rates of change observed in the fossil record.

Later in 1998, Mike and I wrote a letter to TREE, titled “Taking Time with Microevolution”, in which we criticized the current methods for estimating evolutionary rates. Exploring this question while writing the letter made us realize that much more needed to be said than we could effectively summarize in that short letter.

Also that year, I was invited by Eddie Beall to participate in salmon research at an INRA station (Saint-Pée-sur-Nivelle) in France. Without my friends and girlfriend – and before the internet was really that useful – staying in a dorm at a small research station in a very small town in the Basque countryside, I had plenty of time. My goal to write a longer paper about evolutionary rates had been nagging at me, and one day – walking from my dorm to the small town – I simply said to myself: “Damnit, time to start writing.” A week later I had a first draft sent to Mike.

In early 1999, Mike and I submitted the paper to Evolution – a real stretch for two salmon-focused students who had never published any of our previous work in an evolutionary journal. Remarkably, Evolution published it as a “Perspective” with the start of the title being “The Pace of Modern Life.”

The paper quickly received considerable interest from the evolutionary community, as it was the first review of rapid evolution (which we argued was better called “contemporary evolution”). This interest included the editors of Genetica contacting me to ask if Mike and I wanted to edit a special issue on rapid evolution. This invitation came before the days of predatory publishers who are constantly asking you to edit special issues, and so we were shocked and agreed instantly. We then contacted all of the leaders in the field and, remarkably, nearly all of them agreed to contribute papers.

I edited this special issue during my postdoc at UBC, where “ecological speciation” was all the rage. All of the discussion I was hearing on this topic inspired me to re-examine my Lake Washington salmon studies for evidence of whether rapid evolution was leading to reduced gene flow between populations: i.e., the rapid evolution of reproductive isolation. Recruiting my friend John Wenburg and his supervisor Paul Bentzen (then both at the University of Washington) to conduct genetic analyses, I submitted the findings to Science and – remarkably – the paper was accepted. (I have since had dozens of submissions rejected from Science & Nature – more about that here.)

Based on the above work on rapid evolution – probably especially the Science paper – I received the American Society of Naturalists Young Investigator Prize in 2001. Winners of this prize all give a talk in a symposium at the Evolution meeting. I did so and was afterward approached by the editor of TREE (Catriona MacCallum) who asked if I wanted to write a paper for them. I agreed and she asked me to send her some possible topics that I thought might be appropriate.

Part 2. … might be relevant for conservation biology …

One of the other people studying rapid evolution in the late 1990s was Craig Stockwell – and his work focused on endangered desert fishes. I had discussed this work with him several times and, on a whim, suggested to TREE that we could write about the relevance of contemporary evolution for conservation biology. This was the least favorite of my suggestions at the time (you know nothing Andrew Hendry!) and yet it was the one that TREE asked for.

Not knowing much about conservation biology, Mike and I invited Craig to lead the paper for TREE – and we are very thankful to have done so, as Craig was able to help position our shared basic knowledge of contemporary evolution into a solid conservation framework. The result, published in 2003, was the first review paper talking about the importance of contemporary evolution for conservation biology. Just last week it passed 1000 citations.

In 2004, I was invited to interview for a job at Yale University – I was then an Assistant Professor at McGill University, where I had started in 2002. One person I met on the interview was Michael Donoghue. Surprisingly, he didn’t talk about my research specifically but rather invited me to bring my contemporary evolution perspective to a group called bioGENESIS, which he outlined was a “core project” of a biodiversity-focused NGO called DIVERSITAS. At that point, I had never heard of DIVERSITAS – and had no knowledge about, or interest in, NGOs in general. I just wanted to study rapid evolution as a basic question. However, I agreed to join bioGENESIS, perhaps because I thought it might help me get the job (it didn’t) and perhaps because I was flattered to be asked and have a hard time saying no to direct requests for such help. After all, how much time could it take?

Dinner after my first bioGENESIS meeting.

The first bioGENESIS meeting I can remember attending was held in Paris in 2007. Sitting around the table with a bunch of evolution-focused Professors, I listened to endless discussions of the importance of injecting evolutionary thinking into conservation policy at the national and international levels. Countless NGO acronyms were used and I really had no idea what was going on; yet I could see that, perhaps, if I could eventually figure out what was going on, I might be able to contribute something new: everyone else around the table focused on past evolution, not contemporary evolution. I also remember spending an inordinate amount of time debating the specific logo that would be used for bioGENESIS – and it is a nice logo!

I attended many subsequent bioGENESIS meetings – Brazil (twice), New York, Montreal, Paris again, and many others. My role in these meetings was generally to help inject contemporary evolution into various discussions and documents, such as the bioGENESIS Science Plan and a paper for Evolution titled “Evolutionary biology in biodiversity science, conservation, and policy: a call to action.”

In 2006, I was invited by Louis Bernatchez and Michelle Tseng to join the inaugural editorial board of the new journal Evolutionary Applications. I remain an Associate Editor at the journal, which has been extremely successful, and I have also published a number of my own papers there.

I eventually became Chair of bioGENESIS and started to attend the broader DIVERSITAS meetings, where I rubbed elbows with many movers-and-shakers in the international science-policy interface, such as DIVERSITAS Chairs Georgina Mace of University College London and Hal Mooney of Stanford University. Through these contacts, I was also invited to give talks at various general events, such as Darwin’s 200th birthday celebration at the National Academy of Sciences in Washington, DC – events at which many of these movers-and-shakers were again present.

Many global change programs, including DIVERSITAS, had long been funded by governments to provide advice and guidance to the Convention on Biological Diversity (CBD) and other governmental and intergovernmental programs. Around 2012, however, governments – especially the US – decided this piece-meal arrangement was too chaotic, expensive, and time-consuming, and so they asked that all of these programs unite under a common banner, which came to be called FutureEarth. I continued to work with bioGENESIS under the new aegis of FutureEarth.

Part 3. … and IPBES.

For several years in bioGENESIS, I had been hearing about IPBES, the new IPCC-like organization that would be focused on biodiversity and ecosystem services. Some members of bioGENESIS were involved in IPBES as advisors or “observers” but I had not been.

Then, in 2016, I was contacted by Sandra Diaz with a request to participate in the upcoming Global Assessment to be conducted by IPBES. Although Sandra had herself done a lot of work on contemporary evolution, she was very busy as one of the co-Chairs of the assessment and wished to invite the help of another expert on the topic. I presume my name came up through a combination of my previous papers and probably also my visibility to the movers-and-shakers I had encountered during interactions at DIVERSITAS, FutureEarth, and so on. Indeed, Hal Mooney and Georgina Mace were both involved in the Global Assessment as advisors/reviewers, and Anne Larigauderie – whom I knew as Executive Director of DIVERSITAS – was now Executive Secretary of IPBES.

I missed the first authors’ meeting for the IPBES Global Assessment owing to a previously planned family trip and the second meeting owing to a broken leg. However, I was able to attend the third (Cape Town) and the fourth (Frankfurt) authors’ meetings, at which I worked, especially with Andy Purvis, on Chapter 2 – Nature, again charged with bringing a contemporary evolution perspective to the document. To aid this effort, I arranged new meta-analyses of data led by my students Sarah Sanderson and David Hunt, which will be coming soon to a journal near you.

I also agreed, while at the Cape Town meeting, to write the appendix and other information for NCP 18 (Maintenance of Options) in the chapter on Nature’s Contributions to People (NCP). The core of that appendix was then written at a meeting in Montreal with the help of current members of bioGENESIS, with additional help from Rees Kassen from the University of Ottawa and Vicki Friesen from Queen’s University. Vicki’s postdoc Deborah Leigh also became involved and conducted a meta-analysis on rates of change in genetic diversity that will be published soon in Molecular Ecology: “Six percent loss of genetic variation in wild populations since the industrial revolution.”

Toward the end of the Global Assessment process, Sandra Diaz asked for my help in making her case for contemporary evolution to be a Key Message in the Summary for Policy Makers (SPM). I helped Sandra write a draft that went off for review and was returned with the argument that, although interesting, rapid evolution wasn’t “policy relevant” and therefore didn’t belong in the Summary for POLICY Makers.

The way I sought to deal with this was to entirely re-write the Key Message and Background Material for the SPM specifically around the policy relevance of contemporary evolution. That is, knowledge of evolutionary principles can be (and is) used to directly inform specific management actions that then have material effects on biodiversity and humans.

This change in emphasis seemed to make the point effectively and then the discussion became more about the details – the last of which I worked on while defending my house from the great Ottawa/Montreal flood of 2019. However, it still required lots of back and forth between myself and Sandra and Andy and others to make sure that the Key Message was clear – and would be approved by governments as a Key Message.


So ends my as-short-as-possible summary of my 27-year accidental – or coincidental – road to relevance. It started with a suggestion from my supervisor Tom Quinn and then passed through arguments with my office mate Mike Kinnison, to a book from my Mom, to a News & Comment by Erik Svensson, to a side trip to uneventful France, to lucky submissions to Evolution and Science, to an award from ASN, to an invitation from an editor who saw my talk, to a failed job interview, and then to a series of snowballing contacts with movers-and-shakers in the world of international science policy. Throughout this process, I have maintained my conviction that basic science is the way to have the biggest impact and the greatest relevance. Problem-focused applied science is fine but – if you wish to be relevant – basic science is also a viable road, as I hope my own journey illustrates.

Mike and I trying out some new facial hair, not that long after we both became professors.
We had both interviewed for the same job, which Mike got!

I don’t wish to – in any way – criticize people who do applied science or conservation biology. However, funding agencies, the media, and now many students are so focused on “making a difference” that they steer away from basic curiosity-driven research. I am here to tell you that these two things – curiosity-driven research and applied relevance – are not mutually exclusive. Sometimes doing the best possible basic research can be the best possible route to “making a difference.” We need more funding for basic research. We need more people doing basic research. Hopefully my experience will remind a few people of that fact.

Does my experience with bioGENESIS and IPBES motivate me to now parlay my relevance into a more focused emphasis on applied issues? No. Not at all. I remain passionate about basic science – now, most directly, the influence of contemporary evolution on ecological processes. I even wrote an esoteric, very academic book about it, called Eco-Evolutionary Dynamics. I have also started several massive experimental studies on eco-evolutionary dynamics in nature that have no immediate practical relevance whatsoever. They won’t save species x in location y – for any specific species in any specific location, for that matter. Rather, they will continue to bolster our general understanding of how evolution shapes the world around us. If policy makers find that insight useful, I am happy to provide my input and advice should I be requested to do so.
