Wednesday, May 22, 2019

When and How (Not) to Write a Review Paper

"Hey, we should write a review paper!"

Raise your hand if you have said, or heard, this sentence... Or its close cousin: "You should write a review paper and submit it to ___journal_name____", sometimes spoken by Editors seeking to game the citation and impact factor system. (Publishing more reviews is an easy way for journals to boost their IF.)

We (Dan Bolnick and Andrew Hendry) recently collaborated on a review paper. We are honestly excited by the topic, and feel like we had something useful to say that would help shape people's view of the topic (especially for newcomers to the field). But the process of wrangling 20 people's opinions - authors and reviewers and editors - into a coherent text was like the proverbial herding of cats, but with really big cats. Big grumpy cats. That's not the fault of any one (or group) of co-authors; it's just what happens when people with many different views and time constraints and agendas try to work together on something that is nobody's number-one priority.

RIP GrumpyCat, Patron Saint of Writing Review Papers
And, at the same time, we are both involved in multiple other review papers simultaneously. Some of these are papers we really feel convey a new idea in a compelling way. Some of them were aggressively solicited by journals. We started emailing back and forth to commiserate about the joys and headaches of review papers. We both feel like we are probably overdoing the review paper thing. Which led us to wonder, when is it good, versus bad, to dive into writing a review? What are the pros and cons? How do you tell which review papers are worth your time, and which are not?

It's easy to hate on review papers and complain that they are just soap-box pontification - just people gaming the citation-driven academic system. But then, we can all think of great review papers we've read. Papers that helped us learn the layout of a subfield or topic outside our usual expertise. Papers that made us think differently about a familiar topic. Papers we found valuable to assign as reading for a graduate or undergraduate class to introduce newcomers to a topic. They are also great for senior scientists making a lateral move between fields. For instance, Dan has shifted in the past 10 years into more evolutionary immunology, and found review papers immensely helpful in learning this complex field in a semi-efficient way.

So, what makes the difference between a hot-mess herd-of-cats review, and an effective one that people want to read? When is it appropriate to write a review, and when should you cease and desist? And, how do you steer clear of the common pitfalls that make the difference between an influential citation classic, versus a waste of your time (as a writer, or reader)?

Note, there are some other blogs on the topic of review papers. For instance, Andrew wrote an old EcoEvoEvoEco post about citation rates to his reviews versus his other papers.

And Steve Heard wrote one too. We didn't read Steve's post in advance of writing this one, so as not to bias our own opinions*.

(* or perhaps because we were lazy)


1) Should you write a review paper (a handy flow chart)

If you have to ask this question, then the answer is probably no. Some people would say the answer is ALMOST ALWAYS no:

Other people seem to think that everyone should write a review paper, and require their graduate students, or even students in graduate courses, to write and submit review papers:

And pretty much every working group that has ever convened at NCEAS, NESCENT, NIMBioS, iDiv, or other such institutes or centers, feels an obligation to write a review.

Our view is that there are times for reviews, and times not to review. For every thing, turn, turn, turn, there is a synthesis, turn, turn turn.  The question is, what's a good reason, what's a bad reason? Well, to help you in your deliberations, here's a handy and mostly tongue-in-cheek guide:


2) Pros and Cons of writing a review paper


Pros:

  • A good review can bring together existing knowledge in ways that generate a new insight.
  • You get to make methodological recommendations for how people should move ahead to study a topic, setting a research agenda by laying out new questions or identifying more (and less) effective experimental approaches. If you can successfully define an agenda that other people actually follow, you can guide the future of your discipline. And that's rewarding.
  • You can define the standards of evidence. For example, Schluter and McPhail 1992 (American Naturalist, of course) defined a set of observations they deemed necessary and sufficient to prove an example of ecological character displacement. In hindsight, these were very stringent, and few papers have measured up even almost 30 years later. Andrew Hendry weighs in here: "I am generally negative about 'standards of evidence' papers as they are always unrealistically stringent - and people think that you don't have evidence for a phenomenon even if you are nearly there. Kind of like needing to reject a null hypothesis. Such papers would be better pitched as weighing evidence for or against a given phenomenon. Kind of like levels of support for alternative models. Robinson and Wilson wrote a 'Schluter was too strict' paper, I think. Others have done the same for other such papers."
  • It can be really enjoyable to attend a working group designed to brainstorm and write a review. The process can be challenging in a good way, as everyone hashes out a common set of definitions and views. Ten or twenty (diverse!) people in a room for a few days arguing over details of semantics and interpretation of data or models is a great way to reach a consensus on a subject, which you then want to convey to everyone who wasn't in the room, hopefully to their benefit (but perhaps to their annoyance).
  • Review papers also help the writer organize their thoughts on a topic – often stimulating their own empirical/theoretical research. This is why many professors encourage their PhD students to make one PhD dissertation chapter be a review of a topic. Note, however, that while this might be a good motivation to write a paper for your own edification, it isn't necessarily a good reason to publish it for other people to read.
  • Self-interested reason last: Review papers can become your most-cited work. That's certainly the case for both of us. [Dan: four of my five most-cited papers are straight-up reviews, and the other is a meta-analysis. These five account for about 40% of my lifetime citations, though they are only 4% of my publications. For a more in-depth analysis, see the figure below. Overall, 19% of my lifetime papers are reviews, 65% are empirical, and 16% are theory. In contrast, 31% of my citations are to those 19% of my papers that are reviews, 32% to my empirical papers, 25% to meta-analyses, 7% to theory papers, and 4% to methods. From this point forward in this blog, I'm going to treat meta-analyses as belonging on the empirical side rather than with the reviews, because they entail both a great deal more work and more de novo data analysis and interpretation.]

      Note, however, that Andrew and Dan may be in the minority in this regard. A Twitter poll found that a majority of an unscientifically sampled group of respondents had empirical papers as their most cited.


Cons:

  • A bad review can really, really flop. Perhaps nobody wants to read it. Even worse, what if lots of people read it and disagree with the premise or conclusions? It can come across as narcissistic, or as wasting people's time, which makes them grumpy (refer back to GrumpyCat, above).
  • Saturation: some topics (I'm looking at you, eco-evolutionary feedbacks) have a high ratio of reviews to actual data. More on this later.
  • It takes your time away from 'real' science: generating and publishing data or models that really advance our collective knowledge. For that matter, it chews up reviewer and editor time too, so hopefully it is worth everyone's while - but it might not be.
  • Citation "theft". There's a strong argument to be made that when we write about a scientific idea, we should cite whoever first proposed that idea (and/or whoever provided the first or the strongest-available evidence for the idea). Citations are the currency by which authors get rewarded for their work, and we want to reward people who generate new insights. By citing them. But, review papers tend to attract citations. It is easier to cite a review saying that "X happens" than to locate the first published example of X. And, the review lends a greater air of generality. You could cite 10 experimental articles showing X, or just one review. Especially when writing for journals like Nature or Science, where citation space is limited, one naturally gravitates towards citing reviews. Yet, this seems undesirable from the standpoint of rewarding innovation.  The win-win good news is that most people preferentially cite a mixture of both review papers and original sources to make a point (though perhaps less so in leading journals with artificially short citation sections):
  • Some people get really annoyed by an excess of review papers. They can be seen as "fluff", as a form of gaming the system, or as parasitism. Michael Turelli used to tell his students and postdocs that reviews counted as negative papers. He was only half joking. Well, less than half joking. So, by his rules, we propose a modified citation count and H index. Turelli's Penalized Citations is the total number of citations to non-review papers, minus the total number of citations to review papers. By that measure (counting meta-analyses as data papers), Dan loses 2/3 of his total citations. If meta-analyses were also counted among the reviews, he'd be in negative territory (negative 1800). Turelli's Penalized H Index is the H index computed over non-review papers alone, minus the H index computed over review papers alone. Dan's TPHI is 21. This must be why Turelli secretly harbors thoughts of disowning Dan. We assume. Andrew Hendry adds here: from Web of Science, my Turelli's Penalized Citations are 1865 if meta-analysis counts as empirical and -213 if meta-analysis counts as review. My Turelli's Penalized H Index is 18 if meta-analysis counts as empirical and 4 if it counts as review. In short, we've clearly both benefitted from reviews.
You know who = Michael Turelli. Ironically, Michael covered the page charges for what was to become Dan's most-cited (review) paper, co-written with a group of fellow graduate students at UC Davis.
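For concreteness, here is a minimal Python sketch of these tongue-in-cheek metrics (the example publication record and the helper names are invented purely for illustration):

```python
# Toy implementation of the (tongue-in-cheek) Turelli-penalized metrics.
# The paper records below are hypothetical, not anyone's real record.

def h_index(citation_counts):
    """Standard h-index: the largest h such that h papers have >= h citations each."""
    ranked = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def turelli_metrics(papers):
    """papers: list of (citations, is_review) tuples.
    Returns (penalized_citations, penalized_h_index)."""
    non_review = [c for c, is_review in papers if not is_review]
    review = [c for c, is_review in papers if is_review]
    penalized_citations = sum(non_review) - sum(review)
    penalized_h = h_index(non_review) - h_index(review)
    return penalized_citations, penalized_h

# A small hypothetical record: (citations, is_review)
papers = [(100, True), (50, False), (30, False), (10, True), (5, False)]
print(turelli_metrics(papers))  # (-25, 1)
```

By this accounting, one heavily cited review (100 citations in the toy record) is enough to drag an otherwise respectable record into negative territory, which is exactly the joke.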


3) Do's and Don'ts of writing a review paper


Do:

  • Clarify terminology in ways that are consistent with past usage.
  • Summarize existing knowledge, but this should be only a modest part of the review.
  • Derive a new conclusion that follows from, but is not presently stated in, the existing literature. As you will see from the copied tweets below from a recent thread, the overwhelming consensus is that reviews must provide a serious new insight, some value-added.
  • Include easy-to-read, non-obvious diagrams conveying key ideas.
  • Identify gaps in our knowledge.
  • Describe specific experimental or other method innovations that allow people to advance beyond the existing knowledge.
  • Write well.
  • Think about your audience. Are you writing to experts in your field who have read the same papers, but maybe haven't put the pieces together in your particular way? Are you writing to professionals from other disciplines to introduce them to your field? Are you writing to undergraduates or graduate students? Of course a great review paper might be accessible to everyone, but often these different audiences require different approaches. Most fundamentally, are you writing to fellow specialists, or to non-specialists?
  • Provide specific examples to illustrate points, without overloading the reader with all the examples.
  • Put the topic into historical context, including bringing to light older but very relevant papers. Many excellent old papers fall off the map, but deserve credit for their pioneering insights.
  • Clearly state why a review is needed / appropriate at this juncture.
  • Provide tables of relevant examples of phenomena you describe, with some annotation. These can go in supplements, but are useful for people entering into the subject.
  • When there's enough empirical work available, make it a meta-analysis to derive quantitative conclusions.
  • Think about the diversity of authors whose work you are highlighting. Do not just mention work by your friends, and do not just mention work by older white males. 
  • Co-author with a diverse group of colleagues to ensure your review's viewpoint represents a consensus opinion, not just your own. Both Dan and Andrew have looked over their own review papers and, in retrospect, find themselves wanting in this regard, and are trying to do better going forwards.
  • An exception to the "Say Something New" rule is that review papers can do a great service if they bring an old idea to the attention of a new audience. Put another way, an idea can be new to some group of people. For instance, the eco-evolutionary dynamics field saw a proliferation of review papers, some might say faster than the number of new empirical papers for a time. Partly this was because the time was right and multiple groups were converging independently on the theme. And partly, they were talking to different audiences: some to ecologists, some to evolutionary biologists, some to conservationists. So, bringing something to a new audience is another option.
  • Write well, and aim for a widely-read journal. Sometimes a topic has been reviewed before, but that review didn't land its punches and people aren't really paying attention. A follow-up review in a more visible location, better written or better argued, may stick where previous reviews didn't. Even just getting the same paper (writ large) into a fancy journal (Science/Nature) can have a huge positive effect on the spread of the idea - and, of course, on attention to the earlier review. Without this, rapid evolution would not be so prominent in the UN Global Assessment and many other such places.


Don't:

  • Redefine terms
  • Introduce lots of new terms unnecessarily
  • List lots of examples.
  • Write a review just because you feel like you should
  • Write a review just because there isn't one on the topic.
  • Just summarize the state of the field
  • Make recommendations for open research directions that aren't practical to pursue. Some poor grad student might go charging off to work on that and wind up in a dead end. Of course, they might also crack the barrier and do something great, but that's a challenging hurdle.
  • Write something really really long. You want to do a book? Write a book.
  • Ignore relevant mathematical models. For most topics you consider, there's theory. Use it. Conversely when reviewing theory, keep a close eye on including some relevant data.
  • Cite yourself disproportionately often. If anything, try to cite yourself disproportionately infrequently.
  • Controversial opinion: Don't have a long list of co-authors just because they were in the room for conversations. They should be able to point to specific contributions that each of them made to the text.
  • Here's one where Dan and Andrew disagree. Dan: Don't rebrand an existing idea. It pisses people off without adding new insights. For instance, many people see the 'extended evolutionary synthesis' as making some sloppy claims, but also as claiming to be radical when its core tenets aren't really at odds with previous views. It has generated some serious ill-will and push-back. Andrew: Rebranding (redefining) can serve a very important role in bringing an idea to a new community, reinvigorating an old idea (old wine in new bottles), and generating new enthusiasm. Eco-evolutionary dynamics has been argued by some to be just evolutionary ecology and community genetics. But if we hadn't rebranded it, the idea would not have spread nearly so far. Dan counters: but a lot of the writing about eco-evo makes it sound like this is a new insight emerging from recent work. In fact, the very idea of ecological character displacement (old wine indeed at this point) is an eco-evolutionary dynamic, in which evolution of resource-use traits is driven by competition and ameliorates that competition to allow ecological coexistence. Maybe the problem isn't the rebranding per se, but giving the impression that one is whitewashing relevant older work to make the rebrand seem more innovative and new.
  • Make a habit of writing too many reviews, too often. It comes across as pontificating, trying to shape the field without doing the hard work of writing real data or theory papers that advance our knowledge. Both of us have violated that rule (indeed, we are both in violation of that right now, and both regretting it a bit). It does seem that the tendency to over-review is a sign of a maturing career (see figure below from a series of twitter polls), but that might just be senility.


4) Life cycle of a research topic, and key moments for reviews

When to review, when not to review? That very much depends on the state of the research topic you want to write about. A quick guide:

  • Stage 1: A new paper is published describing a novel phenomenon not previously known. Too early for a review.
  • Stage 2: A few theory paper(s) are written making some predictions about the new phenomenon. Too early for a review. Note that the order of Stage 1 and 2 can be reversed if the theory came first and made a testable prediction.
  • Stage 3: A few more empirical and/or theory papers. Maybe 10 citations. Still too early by most people's count (see figure from a Twitter poll).
  • Stage 4: There's a critical mass of information, and awareness grows. Now the gold rush begins. Whoever does the first review gets some early credit. But they risk being premature, writing a review without sufficient meat to it that nobody reads.
  • Stage 5: A whole bunch of new review papers appear, all citing the same 15 papers. Not the time for a review (unless you have something genuinely new and profound to say that the others missed).
  • Stage 6: The bandwagon. Lots of people are studying the topic now, with empirical and theoretical papers appearing all the time. But that early burst of reviews is still fresh. Hold off.
  • Stage 7: Round two of review papers now has enough material to work with to actually do meta-analysis and be more quantitative.
  • Stage 8: Everyone thinks "that's a crowded field; I need to do something different to set myself apart." The bandwagon disperses; a steady trickle of work on the topic continues, but nobody reads it much because that's yesterday's research fad and people have moved on to something else. Nobody's paying much attention anymore.
  • Stage 9: A decade later, that steady trickle of work has built up a large reservoir of material. Time to re-up the subject. People didn't really lose interest, it turns out; they just shifted away because of the competition. Now that you remind them of the subject and update the consensus view based on the new information from Stage 8, the bandwagon renews (return to Stage 6). Maybe, by summarizing this large body of literature, you identify some really new insight, theory, or result. That returns us to Stage 1.

Monday, May 6, 2019

My Coincidental Journey to Relevance

1 pm Paris time today saw the official release of the long-awaited Global Assessment by the Intergovernmental Platform for Biodiversity and Ecosystem Services (IPBES). The product of intensive work by hundreds of people from more than 50 countries over more than three years, the document summarizes the state of biodiversity on Earth and discusses what we can do to improve that state in the future. The assessment is hundreds of pages long (with many, many more pages of appendices), and the full document is therefore likely to be read by only a small subset of people interested in the topic. For massive documents like this, what is much more likely to be read by many more people is the Summary for Policy Makers (SPM), in this case a 39-page document by itself. Even the SPM is much too long to be read by Important People (Presidents, Prime Ministers, Environment Ministers) and - of course - by reporters. Hence, everything in the SPM is also distilled down to eight "Key Messages" spanning two pages at the very start of the SPM. These key messages are sure to be read by nearly everyone.

I here wish to draw your attention to Key Message #8, which reads in its entirety:

Human-induced changes are creating conditions for fast biological evolution - so rapid that its effects can be seen in only a few years or even more quickly. The consequences can be positive or negative for biodiversity and ecosystems, but can create uncertainty about the sustainability of species, ecosystem functions and the delivery of nature’s contributions to people. Understanding and monitoring these biological evolutionary changes are as important for informed policy decisions as in cases of ecological change. Sustainable management strategies then can be designed to influence evolutionary trajectories so as to protect vulnerable species and reduce the impact of unwanted species (such as weeds, pests or pathogens). The widespread declines in geographic distribution and population sizes of many species make clear that, although evolutionary adaptation to human-caused drivers can be rapid, it has often not been sufficient to mitigate them fully.

I would like to pause at this point to reflect on an astounding fact: rapid evolution is one of the eight Key Messages of the IPBES Global Assessment. This fact isn't astounding because rapid evolution doesn't belong as a key message, but rather because, only 20 years ago, very few people - few scientists, even - would have acknowledged the practical relevance of rapid evolution.

It is with some pride that I can report that, in fact, I wrote much of Key Message #8 - with modifications resulting from many reviewers and also with final tweaks during the Plenary Discussion in Paris. I don't profess to be responsible for, or to favor, each and every word and phrase in the key message; but I do claim to have had a key role in this statement making it into the Key Messages, as well as in the background material provided later in the SPM, and - of course - in the numerous resonances of this "theme" across the rest of the Global Assessment.
I seem to have been relevant. How the Hell did that happen?


Many prospective graduate students start discussions with me by expressing their desire to be relevant - usually to conservation. As a memorable example, one student started our conversation by saying "I am interested in evolutionary biology or conservation biology." My response, of course, was "Well, in my lab, we do evolutionary biology and not conservation biology." To such students wishing to be relevant in conservation biology, my suggestion has always been to NOT do Conservation Biology (note the capitals this second time). I then go on to argue that Conservation Biology is typically imagined to be helping species x or location y or - most commonly - helping species x in location y. Such work is important, I acknowledge, but we don't often do it in my lab. The reason is that work to aid species x in location y often has no influence on anything other than species x in location y - and, often, no influence even on species x in location y. I go on to argue that simple basic science designed to understand "how the world works" is by far the best way to make an impact and be "relevant" on the largest possible scale - that is, far beyond only species x in location y. To make this case to students, I give examples. One is the important work done on the effects of inbreeding on fitness, which started with general evolutionary theory tested on organisms that were not species x in location y. Yet that general work went on to heavily influence policy for many species in many locations.
I then try another example, from my personal experience. I argue that much of my research is based entirely on a fundamental interest in understanding how rapid evolution has shaped the world - yet it turned out that the insights gained from this research have, in fact, become very broadly relevant. That is, purely basic science at the time later became applied in ways that influenced policy and, in fact, many species x in many locations y. As of today, I can point specifically to Key Message #8 from the Global Assessment as a concrete example of the contribution of pure basic evolutionary biology to global policy relevance.

This blog post might seem to smack, at least to some, of arrogance - that I am somehow touting my own awesomeness and importance in science and beyond. The key point, however, is not that I am somehow more intelligent or hard working or dedicated or whatever than other researchers. Rather, the key point is that focused interest in a basic research question led to publications in academic journals that precipitated a few chance events that eventually snowballed into Key Message #8 in the Global Assessment. I tried to keep my description of this series of events short but had trouble doing so. I did consider what I might delete to make it shorter, but then realized that, in fact, each step in this long chain of coincidental (or not) events was necessary for Key Message #8, or at least for my contribution to it.


Part 1. Contemporary evolution …

In 1992-1993, Mike Kinnison and I both started studying rapid evolution in salmon at the University of Washington: Mike worked on New Zealand chinook salmon and I worked on Lake Washington sockeye salmon. Our choice of the overall topic (rapid evolution) and our specific study systems had nothing to do with our own insights or ideas - they were instead the suggestion of our MSc (and later PhD) supervisor Tom Quinn. At the time, we were both focused on salmon, not evolution.

Tom Quinn in 1995.
In 1995, my mother bought me a book for Christmas by Jonathan Weiner called The Beak of the Finch. This book about Darwin’s finches kindled my interest in evolution per se, as opposed to salmon evolution.

In 1998, Mike and I read a “News and Comment” article in Trends in Ecology and Evolution (TREE), written by Erik Svensson, about two 1997 studies. One study was by Jonathan Losos in Nature and the other was by David Reznick in Science, both reporting rapid evolution – the first in Anolis lizards introduced to small islands and the second in Trinidadian guppies introduced to predator-free environments. The key innovation of these new papers was that they calculated evolutionary rates for their studies and compared those rates to evolutionary rates estimated from the fossil record. This comparison revealed that evolution in lizards and guppies was several orders of magnitude faster (RAPID!) than rates of change observed in the fossil record.

Later in 1998, Mike and I wrote a letter to TREE, titled “Taking Time with Microevolution”, in which we criticized the current methods for estimating evolutionary rates. Exploring this question while writing the letter made us realize that much more needed to be said than we could effectively summarize in that short letter.

Also that year, I was invited by Eddie Beall to participate in salmon research at an INRA station (Saint-Pée-sur-Nivelle) in France. Without my friends and girlfriend – and before the internet was really that useful – and staying in a dorm at a small research station in a very small town in the Basque countryside, I had plenty of time. My goal to write a longer paper about evolutionary rates had been nagging at me, and one day – walking from my dorm to the small town – I simply said to myself: “Damnit, time to start writing.” A week later I had a first draft sent to Mike.

In early 1999, Mike and I submitted the paper to Evolution – a real stretch for two salmon-focused students who had never published any of our previous work in an evolutionary journal. Remarkably, Evolution published it as a “Perspective” with the start of the title being “The Pace of Modern Life.”

The paper quickly received considerable interest from the evolutionary community, as it was the first review of rapid evolution (which we argued was better called “contemporary evolution”). This interest included the editors of Genetica contacting me to ask if Mike and I wanted to edit a special issue on rapid evolution. This invitation came before the days of predatory publishers who are constantly asking you to edit special issues, and so we were shocked and agreed instantly. We then contacted all of the leaders in the field and, remarkably, nearly all of them agreed to contribute papers.

I edited this special issue during my postdoc at UBC, where "ecological speciation" was all the rage. All of the discussion I was hearing on this topic inspired me to re-examine my Lake Washington salmon studies for evidence of whether rapid evolution was leading to reduced gene flow between populations: i.e., the rapid evolution of reproductive isolation. Recruiting my friend John Wenburg and his supervisor Paul Bentzen (then both at the University of Washington) to conduct genetic analyses, I submitted the findings to Science and - remarkably - the paper was accepted. (I have since had dozens of submissions rejected from Science & Nature - more about that here.)

Based on the above work on rapid evolution – probably especially the Science paper – I received the American Society of Naturalists Young Investigator Prize in 2001. Winners of this prize all give a talk in a symposium at the Evolution meeting. I did so and was afterward approached by the editor of TREE (Catriona MacCallum) who asked if I wanted to write a paper for them. I agreed and she asked me to send her some possible topics that I thought might be appropriate.

Part 2. … might be relevant for conservation biology …

One of the other people studying rapid evolution in the late 1990s was Craig Stockwell - and his work focused on endangered desert fishes. I had discussed this work with him several times and, on a whim, suggested to TREE that we could write about the relevance of contemporary evolution for conservation biology. This was the least favorite of my suggestions at the time (you know nothing, Andrew Hendry!), and yet it was the one that TREE asked for.

Not knowing much about conservation biology, Mike and I invited Craig to lead the paper for TREE - and we are very thankful to have done so, as Craig was able to help position our shared basic knowledge of contemporary evolution within a solid conservation framework. The result, published in 2003, was the first review paper discussing the importance of contemporary evolution for conservation biology. Just last week it passed 1000 citations.

In 2004, I was invited to interview for a job at Yale University - I was then an Assistant Professor at McGill University, where I had started in 2002. One person I met on the interview was Michael Donoghue. Surprisingly, he didn't talk about my research specifically but rather invited me to bring my contemporary evolution perspective to a group called bioGENESIS, which he outlined was a "core project" of a biodiversity-focused NGO called DIVERSITAS. At that point, I had never heard of DIVERSITAS - and had no knowledge about, or interest in, NGOs in general. I just wanted to study rapid evolution as a basic question. However, I agreed to join bioGENESIS, perhaps because I thought it might help me get the job (it didn't), and perhaps because I was flattered to be asked and have a hard time saying no to direct requests for such help. After all, how much time could it take?

Dinner after my first bioGENESIS meeting.

The first bioGENESIS meeting I can remember attending was held in Paris in 2007. Sitting around the table with a bunch of evolution-focused Professors, I listened to endless discussions of the importance of injecting evolutionary thinking into conservation policy at the national and international levels. Countless NGO acronyms were used and I really had no idea what was going on; yet I could see that, perhaps, if I could eventually figure out what was going on, I might be able to contribute something new: everyone else around the table focused on past evolution, not contemporary evolution. I also remember spending an inordinate amount of time debating the specific logo that would be used for bioGENESIS – and it is a nice logo!

I attended many subsequent bioGENESIS meetings – Brazil (twice), New York, Montreal, Paris again, and many others. My role in these meetings was generally to help inject contemporary evolution into various discussions and documents, such as the bioGENESIS Science Plan and a paper for Evolution titled “Evolutionary biology in biodiversity science, conservation, and policy: a call to action.”

In 2006, I was invited by Louis Bernatchez and Michelle Tseng to join the inaugural editorial board of the new journal Evolutionary Applications. I remain an Associate Editor at the journal, which has been extremely successful, and I have published a number of my own papers there.

I eventually became Chair of bioGENESIS and started to attend the broader DIVERSITAS meetings, where I rubbed elbows with many movers-and-shakers in the international science-policy interface, such as DIVERSITAS Chairs Georgina Mace of University College London and Hal Mooney of Stanford University. Through these contacts, I was also invited to give talks at various general events, such as Darwin’s 200th birthday celebration at the National Academy of Sciences in Washington, DC – events at which many of these movers-and-shakers were again present.

Many global change programs, including DIVERSITAS, had long been funded by governments to provide advice and guidance to the Convention on Biological Diversity (CBD) and other governmental and intergovernmental programs. Around 2012, however, governments – especially the US – decided this piecemeal arrangement was too chaotic, expensive, and time-consuming, and so they asked that all of these programs unite under a common banner, which came to be called FutureEarth. I continued to work with bioGENESIS under the new aegis of FutureEarth.

Part 3 … and IPBES.

For several years in bioGENESIS, I had been hearing about IPBES, the new IPCC-like organization that would be focused on biodiversity and ecosystem services. Some members of bioGENESIS were involved in IPBES as advisors or “observers” but I had not been.

Then, in 2016, I was contacted by Sandra Diaz with a request to participate in the upcoming Global Assessment to be conducted by IPBES. Although Sandra had herself done a lot of work on contemporary evolution, she was very busy as one of the co-Chairs of the assessment and wished to invite the help of another expert on the topic. I presume my name came up through a combination of my previous papers and probably also my visibility to the movers-and-shakers I had encountered through DIVERSITAS, FutureEarth, and so on. Indeed, Hal Mooney and Georgina Mace were both involved in the Global Assessment as advisors/reviewers, and Anne Larigauderie – whom I knew as Executive Director of DIVERSITAS – was now Executive Secretary of IPBES.

I missed the first authors’ meeting for the IPBES Global Assessment owing to a previously planned family trip, and the second owing to a broken leg. However, I was able to attend the third (Cape Town) and fourth (Frankfurt) authors’ meetings, at which I worked, especially with Andy Purvis, on Chapter 2 (Nature), as always charged with bringing a contemporary evolution perspective to the document. To aid this effort, I arranged new meta-analyses of data led by my students Sarah Sanderson and David Hunt, which will be coming soon to a journal near you.

I also agreed, while at the Cape Town meeting, to write the appendix and other information for NCP 18 (Maintenance of Options) in the chapter on Nature’s Contributions to People (NCP). The core of that appendix was then written at a meeting in Montreal with the help of current members of bioGENESIS, with additional help from Rees Kassen from the University of Ottawa and Vicki Friesen from Queen’s University. Vicki’s postdoc Deborah Leigh also became involved and conducted a meta-analysis on rates of change in genetic diversity that will be published soon in Molecular Ecology: “Six percent loss of genetic variation in wild populations since the industrial revolution.”

Toward the end of the Global Assessment process, Sandra Diaz asked for my help in making her case for contemporary evolution to be a Key Message in the Summary for Policy Makers (SPM). I helped Sandra write a draft that went off for review and was returned with the argument that, although interesting, rapid evolution wasn’t “policy relevant” and therefore didn’t belong in the Summary for POLICY Makers.

The way I sought to deal with this was to entirely re-write the Key Message and Background Material for the SPM specifically around the policy relevance of contemporary evolution. That is, knowledge of evolutionary principles can be (and is) used to directly inform specific management actions that then have material effects on biodiversity and humans.

This change in emphasis seemed to make the point effectively and then the discussion became more about the details – the last of which I worked on while defending my house from the great Ottawa/Montreal flood of 2019. However, it still required lots of back and forth between myself and Sandra and Andy and others to make sure that the Key Message was clear – and would be approved by governments as a Key Message.


So ends my as-short-as-possible summary of my 27-year accidental – or coincidental – road to relevance. It started with a suggestion from my supervisor Tom Quinn and then passed through arguments with my office mate Mike Kinnison, to a book from my Mom, to a News & Comment by Erik Svensson, to a side trip to uneventful France, to lucky submissions to Evolution and Science, to an award from ASN, to an invitation from an editor who saw my talk, to a failed job interview, and then to a series of snowballing contacts with movers-and-shakers in the world of international science policy. Throughout this process, I have maintained my conviction that basic science is the way to have the biggest impact and the greatest relevance. Problem-focused applied science is fine but – if you wish to be relevant – basic science is also a viable road, as I hope my own journey illustrates.

Mike and I trying out some new facial hair, not that long after we both became professors.
We had both interviewed for the same job, which Mike got!

I don’t wish to – in any way – criticize people who do applied science or conservation biology. However, funding agencies, the media, and now many students are so focused on “making a difference” that they steer away from basic curiosity-driven research. I am here to tell you that these two things – curiosity-driven research and applied relevance – are not mutually exclusive. Sometimes doing the best possible basic research can be the best possible route to “making a difference.” We need more funding for basic research. We need more people doing basic research. Hopefully my experience will remind a few people of that fact.

Does my experience with bioGENESIS and IPBES motivate me to now parlay my relevance into a more focused emphasis on applied issues? No. Not at all. I remain passionate about basic science – now, most directly, the influence of contemporary evolution on ecological processes. I even wrote an esoteric, very academic book about it, called Eco-Evolutionary Dynamics. I have also started several massive experimental studies of eco-evolutionary dynamics in nature that have no immediate practical relevance whatsoever. They won’t save species x in location y – or any specific species in any specific location, for that matter. Rather, they will continue to bolster our general understanding of how evolution shapes the world around us. If policy makers find that insight useful, I am happy to provide input and advice should I be requested to do so.

Thursday, April 25, 2019

How to Do Field Work

I just got back from three consecutive 7-10 day trips into the field: Trinidad, Galapagos, and Argentina. Much of my research life has been spent in the field. I spent 10 consecutive summers in the Bristol Bay region of Alaska. I have worked in Trinidad in 16 different years. I have made 14 research trips to Galapagos. I have worked on northern Vancouver Island in more than 10 different years. I have done research in Chile, Argentina, Uganda, Panama, Kenai, Haida Gwaii, California, and many other places. Some of these are depicted in the videos that intersperse the suggestions.

From this experience over more than 30 years, I have picked up a few things that can help make field work pleasant and productive – or not. Many posts have been written on important field work topics such as preparation, equipment, and safety. What I will try to do here is focus on other, less often explored, topics in hopes of supplementing the advice of others.

Plan – but be flexible.
Field work can be easy or it can be hard – but most of the time it is hard. It can go according to plan or not – but most of the time it doesn’t. Yet one thing is certain: what seems like it will work on paper back in your office will almost certainly need to be changed when you go to implement it in the real world – even if you are already experienced at your field site. Thus, it is perhaps best to think of your pre-departure plan (including back-ups) as merely a first draft of a plan. That way when you get to the field and the things you planned don’t work out, you won’t feel like your project has failed. Instead, you will enthusiastically work to modify the plan into a second draft or a third draft and so on. Sometimes you even need to start over. But this is field work – and sometimes the complete redo of the plan leads to something better than what you had initially intended.

Be positive – always!
If you spend enough time in the field, things are almost certain to go way south at some point: hurricanes, floods, droughts, difficulties catching (or even finding) the target species, missing supplies, broken equipment, stranded vehicles, power outages, personality conflicts, etc. These problems can derail plans to a small or large degree. Thus, as noted above, you will often need to throw yourself whole-heartedly into some exciting new plan that you develop on the spot. But what will NEVER help is being outwardly negative about the project. Never complain about it to other people on the team. Never be defeatist. Never – to be blunt – be negative about your experience. This attitude will never help – ever – and it can sometimes deeply infect an entire field crew and cause problems that ramify far beyond the initial one. Instead, be positive. Seek a solution. Collect new data. Focus on another species. Publish a paper on the effects of hurricanes or floods. Countless examples exist of this kind of nimbleness that works around, or even takes advantage of, what initially seems like a disaster.

Don’t restrict or dictate a person’s food.
Some people are extremely uptight about their food. Unless supplies are severely limited, let people eat what they want. Nothing rankles and sets people against each other more than trying to dictate what they eat.

If others are working – you should be too.
If someone on your team is working, then you should be too. It can rankle others (and is bad form regardless) if – for example – you sit and read a novel while someone else on the team is processing samples. Ask if you can help. If not, cook dinner, do the dishes, sweep the floor, prepare for tomorrow, write down protocols, look up relevant papers, etc. Only read your book if the person working insists multiple times that there isn’t any work for you to do. Stated another way: try to work harder than everyone else on the team.

Share equally in the cooking and/or other chores.
If someone loves to cook, fine – let them cook as often as they like. But make sure you offer to cook too – or help with the preparation – or do the dishes (this is me!) – or process samples while they cook. Don’t just assume someone else is cooking.

Bring earplugs!
My experience in Latin America is that the dogs bark until 2 am and then the chickens start crowing at 4 am. And my experience everywhere is that people snore. When I discovered earplugs, I was a lot more sanguine about such things.

Make a list of daily equipment – and make sure someone is responsible.
In Trinidad, we once drove 1.5 hours only to find out that we had forgotten the nets – necessitating another 3 hours of driving just to get them. In Galapagos and Trinidad, people have forgotten their field shoes. Probably everywhere, people have forgotten to charge the batteries for this or that piece of equipment. Make a list of the equipment that is needed each day and tape it beside the door. Then assign different types of equipment to particular people. If a person knows they are in charge of the nets, then the nets are much less likely to be forgotten. Ditto for any other type of equipment.

And, of course:
Don’t be abusive – in any way or under any circumstance
Don’t be outwardly obnoxious – even if you can’t stand the person
Don’t be passive aggressive – it is obvious to everyone
If you drink, drink responsibly

The points noted above are important not just for getting the work done but even more so for enjoying the field. And field work is what many of us are in this field for. Best of luck!

Thursday, March 28, 2019

How do you create a lab culture (the social kind, not just cells)?

When starting a new lab from scratch, it is easy to contemplate what I might do differently as a PI. I ran a lab for >13 years at UT Austin, and you get into a bit of a rut: it becomes hard to change the culture, hard to introduce new habits. People have been in place long enough that creating a new style of lab meeting, or suddenly initiating a "put everything on GitHub" rule, doesn't necessarily take root. With a brand new lab, though, it's a clean slate, tabula rasa. But how to articulate all of the many things that make for an effective lab culture - welcoming, curious, ambitious, supportive, and efficient? I decided to try articulating my expectations in a document, a "Lab culture" file.
To begin, I did what comes naturally these days: I tweeted looking for advice and examples. And I got some great feedback. You can see the thread here, with replies from many disparate PIs with different styles of lab culture documents. Using these ideas, I built my own version, in my own style. It's not a finished product. There are things I may decide don't belong, things I need to add, maybe entirely new categories not covered. But it's a start. You can see the current version below.
I hope that by sharing, I can (1) get feedback, (2) inspire others to do the same, and (3) give you a template you might find useful yourself.

The short version:
# 1   Be kind & supportive.
# 2   Have fun doing science!
# 3   Be productive.

A Mentoring Plan is at the end of this document.


 Challenge yourself
  • Be self-motivated. You are here to advance your career, not mine.
  • Be ambitious. Identify your personal definition of success, and aspire to exceed that. Discuss your definition of success with your mentors and peers.
  • We are here to challenge ourselves to learn new ideas. Be curious.
  • If you don’t understand, ask questions, don’t just be silent!
  • Practice asking questions. Write down > 1 question per seminar talk you see
  • Take intellectual risks, but have a “plan B” that is safe. A really ground-breaking experiment might fail, so have another study you could do instead for a reliable publication.

  • We value a supportive work environment where everyone is treated with respect and dignity and is able to work towards their aspirations.
  • We value and support diversity in the workplace.
  • We do not tolerate bigotry, abuse, or harassment.
  • Seek out frank but constructive and kind criticism. Return the favor.
  • Communicate openly with your colleagues.
  • Leave the lab, food area, office, cleaner than you found it.
  • You are a member of a community; contribute to it, and draw upon it when needed.
  • Meet visitors. At conferences make a point of introducing yourself to strangers.

  • Honesty is essential for correct science
  • We prefer to avoid mistakes, but mistakes do happen. Take a deep breath, acknowledge them and fix it.
  • Conserve, reduce, reuse, recycle.

  • Be productive: set clear goals and meet them.
  • A core part of this job is to publish good science in a timely manner. If it’s not published, nobody will know it ever happened except us.

  • Outreach is a key part of our job. Find “your” outreach style and pursue it. Education, science communication, art… there are many strategies. Pick one and do it well.
  • Mentoring undergraduates or other kinds of trainees helps you, and helps them.

  • Health and personal challenges, including mental illness, are common hurdles people in academia face, as in any other walk of life. Engaging with the problem by discussing it with your peers and supervisors can go a long way towards getting help and accommodations. We can’t help if we don’t know.
  • Find a work-life balance that lets you do your job to the level you aspire to and lets you be happy
  • Be safe. In the lab, and in the field:
  • Seek the training you need to avoid, and respond to, emergencies, including First Aid.
  • Plan carefully to avoid emergencies.
  • Find the work schedule that works for you. I work long & late; that does not mean you are required to do so. Whatever your choices about work schedules, be aware of their costs and benefits.

  • Discuss authorship expectations before embarking on a project.
  • You earn first authorship if you do most of the data collection, analysis, and writing.
  • You earn co-first-authorship if you and someone else either did equal amounts of work or each contributed most of different stages (collection, analysis, writing).
  • You earn co-authorship if you contribute essential effort to getting a substantial fraction of the data or writing the paper. There should be some distinct result or intellectual idea that you were the primary source for.
  • You must have read, understood, and approved any paper you are co-author on and be able to defend it.

The PI’s obligations to everyone in the lab:
  • My job is to help you achieve your career and life goals, to the best of my ability.
  • Rapid feedback on ideas, manuscripts, etc.
  • Financial support for salary and research and travel to the extent I am able
  • Regular meeting to discuss science, and careers.
  • I write your recommendation letters. You can take this for granted, but please give me enough advanced warning.
  • I help you network with other scientists
  • We should discuss your aspirations, and realistic ways of realizing your goals.
  • Frank and constructive feedback on your science and career advancement.
  • Conflict resolution is my job. If people aren’t getting along, or something is wrong, talk to me.
  • I never ask about personal problems, because I don’t want to intrude. But, if there are issues at home, or especially with health (mental or otherwise) that are affecting your work, you are always free to talk to me.

Your obligations to the PI:
  • Tell me when there is a problem: in the lab, with your data, or with other people.
  • Be independent to the extent you can, teaching yourself skills, solving problems. But, don’t get stuck doing this: talk to me before you are in a rut. Find a happy balance between independence and the preceding point.
  • Be creative and productive. That involves working efficiently, rather than super-long hours.
Obligations to yourself:  self-education
  • Attend seminars to learn what others are doing
  • Read science papers or books (almost) every day. If you don’t want to read extensively and intensively, then examine whether you are doing the kind of science that really engages you.
  • Keep a lab notebook with ideas, observations, and data.
  • Go to a conference & practice public speaking
  • Read about scientific ethics, philosophy of science, and history of science.
  • Learn to keep a budget of research expenses
  • Take time to read about personnel management
  • Set up literature auto-alerts

Obligations to yourself:  Data habits & repeatable science
  • Back up your data!!!!!!!
  • Everyone generating/analyzing data and papers should have a GitHub account or equivalent to share data, code, and text.
  • Write up Standard Operating Protocols (SOPs) for any commonly-used method so people who follow after you can replicate your methods exactly.

Obligations to yourself:  Self-care
  • None of the above are any good if you are too stressed or unhappy or depressed to benefit from them. Take care of your physical and mental well-being. That includes sleep, exercise, and activities that make you content.
  • If you are having difficulty with health (mental or otherwise), seek help.
  • You can talk to me about problems you are having so we can seek solutions together.
  • I have experience and some limited training in counseling people at risk of suicide, suffering from depression, or having experienced sexual harassment, so please don’t think that you can’t talk to me.

  • We will start you with a basic task to evaluate your reliability and dedication, then as we get a feel for your skills and interests we will start to talk with you about independent project ideas.
  • You should aspire to get co-authorship or even first authorship from your time in the lab.
  • Be punctual and reliable
  • Attend lab meeting to learn the gory details
  • Do some independent reading on the topic you are studying
  • Ask questions
  • Ask more questions
  • Keep copious notes in meetings and in lab.
  • Start to learn statistics, computation, and to embrace applied math

 You are the lynchpin of the lab, making sure core functions keep running.
  • Keep a daily lab notebook of what you do
  • Communicate regularly with me about what you should be doing next.
  • If you have down-time and aren’t sure what to do, ask me.
  • Ask questions & more questions

Graduate students:
  • Your PhD and career goals are yours, not mine. That means you should be self-motivated, and are responsible for your own research ideas.
  • Always ask yourself, “why is this interesting and important?” Be prepared to answer that.
  • Read more than you think you can. Your success is proportional to your mastery of the literature. You are a scholar training to be a world expert on a specific topic.
  • Know the history of the ideas you are studying. This includes reading the old classic papers, and reading theory. Become comfortable with the math in theory papers.
  • Study the natural history of some habitat or group of organisms.
  • Read some history & philosophy of science
  • Develop a thick skin. Your papers and grants will be rejected, and it will not always be kindly phrased. It happens to everyone; it's not personal. The sooner you learn coping strategies, the happier you will be.
  • Just because someone says it won’t work doesn’t make them right.
  • Write regularly
  • Learn to code, and learn principles of reproducible code and data including database management and metadata.
  • Make a website
  • Meet with visiting speakers
  • Publish early and often, don’t wait till the end of your PhD
  • Master statistics
  • I encourage external collaboration. Talk to me about it first, though.

  • You are in the final stages of training to be a professor. What do you need to do/learn to succeed?
  • Have a website
  • Write a mock job application, and go over it with other people to both improve the text and identify weaknesses you need to fill before you go on the market.
  • Teach something
  • Get a grant
  • Learn about personnel management and budgeting.
  • I’m not so concerned about WHEN you work, as I am with output.
  • Go to conferences & network more than you feel comfortable doing.
  • I encourage external collaboration. Talk to me about it first, though.
  • Papers!!!!

Training resources for specific topics include:

