Friday, December 16, 2016

The World Without Evolution

Nine years ago, Alan Weisman posed the scenario “The World Without Us.” The premise was that, all of a sudden, people disappear entirely from the world. What happens next? The rest of the book described the slow decay of buildings, roads, bridges, and other infrastructure, and the gradual encroachment of wildlife on formerly human-dominated landscapes. The same scenario has been postulated in various movies, including Twelve Monkeys, where humans dwelling underground send out hazmat-suited convicts to collect biological samples from the surface in hopes of a cure for the devastating disease that destroyed most of humanity. The images of lions on buildings and bears in streets can seem as jarring – ok, maybe not quite as jarring – as the Nazi symbols on American icons in the adaptation of Philip K. Dick’s The Man in the High Castle.

Twelve Monkeys
The premise of this blog post is related – but even more dramatic – what if evolution stopped – RIGHT NOW? What would happen? The context for this question is rooted in my recent uncertainty, described in a paper and my book, about how eco-evolutionary dynamics might be – mostly – cryptic. That is, whereas most biologists seek to study eco-evolutionary dynamics by asking how evolutionary CHANGE drives ecological CHANGE (or vice versa), contemporary evolution might mostly counteract change. A classic example is encapsulated by so-called Red Queen Dynamics, where it takes all the running one can do just to stay in the same place. More specifically, everything is evolving all around you (as a species) and so, if you don’t evolve too, you will become maladapted relative to the other players in the environment, which will cause you to go extinct. The same idea is embodied – at least in the broad sense – in the concept of evolutionary rescue, whereby populations would go extinct were it not for their continual evolution rescuing them from environmental change.

From Kinnison et al. (2015)

So how does one study cryptic eco-evolutionary dynamics? The current gold standard is to have treatments where a species can evolve and other treatments where they cannot, with ecological dynamics contrasted between the two cases. The classic example of this approach is that implemented by Hairston, Ellner, Fussmann, Yoshida, Jones, Becks, and others that use chemostats to compare predator-prey dynamics between treatments where the prey (phytoplankton) can evolve and treatments where they cannot. This evolution versus no-evolution treatment was achieved by the former having clonal variation present (so selection could drive changes in clone frequencies) and the latter having only a single clone (so selection cannot drive changes – unless new mutations occur). These experiments revealed dramatic effects of evolution on predator-prey cycles, and a number of conceptually similar studies by other investigators have yielded similar results (the figure below is from my book).

One limitation of these experiments is that the evolution versus no-evolution treatments are confounded with variation versus no-variation treatments. That is, ecological differences between the treatments could partly reflect the effects of evolution and partly the effects of variation independent of its evolution. An alternative approach is a replacement study, where the same variation is present in both treatments and, although both might initially respond to selection, genotypes in the no-evolution treatment are continually removed (perhaps each generation) by the experimenter and replaced with the original variation. In this case, you still have an evolution versus no-evolution treatment, but both have variation manifest as multiple genotypes – at least at the outset.

All of these studies – and others like them – impose treatments on a single focal species, and so the question is “what effect does the evolution of ONE species have on populations, communities, and ecosystems?” Estimates of the effect of evolution of one species on ecological variables in nature, regardless of the method, are then compared to non-evolutionary effects of abiotic drivers, with a common driver being variation in rainfall. These comparisons of "ecology" to "evolution" (pioneered by Hairston Jr. et al. 2005) generally find that the evolution of one species can have as large an effect on community and ecosystem parameters as can an important abiotic driver, which is remarkable given how important those abiotic drivers (temperature, rain, nutrients, etc.) are known to be for ecological dynamics (the figure below is from my book).

A more beguiling question is “how important is ALL evolution in a community?” Imagine an experiment could be designed to quantify the total effect of evolution of all species in a community on community and ecosystem parameters. How big would this effect be? Would it explain 1% of the ecological variation? 10%? 90%? Presumably, evolutionary effects of the whole community won’t be a simple summation of the evolutionary effects of each of the component species. I say this mainly because studies conducted thus far show that single species – albeit often “keystone” or “foundation” species – can have very large effects on ecological variables. A simple summation of these effects across multiple species would, very soon, leave no variation to explain. Hence, the evolution of one species is presumably offset to some extent by the evolution of other species when it comes to emergent properties of the community and ecosystem.

It is presumably impossible to have a real experiment with evolution and no-evolution treatments at the entire community level in natural(ish) systems. We must therefore address the question (What would happen if all evolution ceased RIGHT NOW?) as a thought experiment. 

I submit that the outcome of a world without evolution experiment would be:
  1. Within hours to days, the microbial community at every place in the world will shift dramatically. The vast majority of species will go extinct locally and a few will become incredibly abundant - at least in the short term.
  2. Within days to weeks, many plants and animals that interact with microbes (and what organisms don’t?) will show reductions in growth and reproduction. Of course, some benefits will also initially accrue as – all of a sudden – chemotherapy, antibiotics, pesticides, and herbicides become more effective. The main point is that the performance of many plants and animals will begin to shift within a week.
  3. Within months, the relative abundance and biomass of plants and animals will shift dramatically as a result of these effects changing microbial communities and their influence on animal and plant performance.
  4. Within years, many animals and plants will go extinct. Most of these will go extinct because the shorter-lived organisms on which they depend will have non-evolved themselves into extinction.
  5. Within decades, the cascading effects of species extinction will mean that most animals and plants will go extinct, as will the microbes that depend on them. The few species that linger will be those that are very long lived and that have resting eggs or stages.
  6. Within centuries, all life will be gone. Except tardigrades, presumably.

The above sequence, which I think is inevitable, suggests several important points.

1. Microbial diversity – and its evolution – is probably the fundamentally irreducible underpinning of all ecological systems.

2. Investigators need to find ways to study eco-evolutionary STABILITY, as opposed to just eco-evolutionary DYNAMICS.

3. Evolution is by far the most important force shaping the resistance, resilience, stability, diversity, and services of our communities and ecosystems.

Fortunately, evolution is here to stay!

Friday, December 2, 2016

Wrong a lot?

[ This post is by Dan Bolnick; I'm just putting it up.  – B. ]

In college, my roommates and I once saw an advertisement on television that we thought was hilarious. A young guy was talking to a young woman. I don’t quite recall the lead-up, but somehow the guy made an error, and admitted it. Near the end of the ad she said “I like a guy who can admit that he’s wrong”. The clearly-infatuated guy responded a bit over-enthusiastically, saying “Well actually, I’m wrong a LOT!” This became a good-natured joke/mantra in our co-op: when someone failed to do their dishes, or cooked a less-than-edible meal for the group, everyone would chime in “I’m wrong a lot!”

Twenty years later, I find myself admitting I was wrong – but hopefully not a lot.

A bunch of evolutionary ecology theory makes a very reasonable assumption: phenotypically similar individuals, within a population, are likely to have more similar diets and compete more strongly than phenotypically divergent individuals within that same population. This assumption underlies models of sympatric speciation (1) as well as the maintenance of phenotypic variance within populations (2, 3). But it isn’t really tested directly very much. In 2009, a former undergraduate and I published a paper that lent support to this common assumption (4). The idea was simple: we measured morphology and diet on a large number of individual stickleback from a single lake on Vancouver Island, then tested whether pairwise difference in phenotype (between all pairwise combinations of individuals) was correlated with pairwise dissimilarity in diet (measured by stomach contents, or stable isotopes). The prediction was that these should be positively correlated. And that’s what we reported in our paper, with the caveat (in the title!) that the association was weak.

An excerpt from Bolnick and Paull 2009 that still holds, showing the theoretical expectation motivating the work.

Turns out, it was really, really weak. Because we were using pairwise comparisons among individuals, we used a Mantel test to obtain P-values for the correlation between phenotypic distance and dietary overlap (stomach contents) or dietary difference (isotopes). I cannot now reconstruct how this happened, but I clearly thought that the Mantel test function in R, which I was just beginning to learn how to use, reported the cumulative probability rather than the extreme tail probability. So, I took the P reported by the test, subtracted it from 1 to get what I thought was the correct number, and found I had a significant trend. It didn’t look significant to my eye, but it was a dense cloud with many points, so I trusted the statistics and inserted the caveat “weak” into the title. I should have trusted my ‘eye test’. The statistics, as I had interpreted them, were wrong.
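For readers who haven't used one: a Mantel test permutes the rows and columns of one distance matrix and asks how often the permuted correlation is at least as extreme as the observed one. The P it reports is already the tail probability, so small P means significant, and subtracting it from 1 inverts the inference – exactly the mistake described above. Here is a minimal sketch in Python rather than R (the function and variable names are mine, purely for illustration):

```python
import numpy as np

def mantel_p(dist_a, dist_b, n_perm=999, rng=None):
    """One-tailed Mantel test: correlate the upper triangles of two
    distance matrices, then permute rows/columns of one matrix and
    count permutations at least as extreme as the observed correlation."""
    rng = np.random.default_rng(rng)
    n = dist_a.shape[0]
    iu = np.triu_indices(n, k=1)             # each pair of individuals once
    obs = np.corrcoef(dist_a[iu], dist_b[iu])[0, 1]
    hits = 0
    for _ in range(n_perm):
        perm = rng.permutation(n)
        permuted = dist_b[np.ix_(perm, perm)]
        if np.corrcoef(dist_a[iu], permuted[iu])[0, 1] >= obs:
            hits += 1
    # The returned P is already the extreme-tail probability:
    # SMALL values mean significant. Never subtract it from 1.
    return (hits + 1) / (n_perm + 1)
```

A single stray `1 -` applied to that return value is all it takes to turn a negative result into an apparently positive one.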

Recently, Dr. Tony Wilson from CUNY Brooklyn tried to recreate my analysis, so that he could figure out how it worked and apply it to his own data. I had published my raw data from the 2009 study in an R package (5), so he had the data. But he couldn’t quite recreate some of my core results. I dug up my original R code, sent it to him, and after a couple of back-and-forth emails we found my error (the 1-P in the Mantel Test analysis). I immediately sent a retraction email to the journal (Evolutionary Ecology Research), which will be appearing soon in the print version. So let me say this clearly, I was wrong. Hopefully, just this once.

The third and fourth figures in Bolnick and Paull 2009 are wrong. The trend is not significant, and should be considered a negative result.

I want to comment, briefly, on a couple of personal lessons learned from this.

 First of all, this was an honest mistake made by an R-neophyte (me, 8 years ago). Bolnick and Paull was the first paper that I wrote using R for the analyses. Mistakes happen. It is crucial to our collective scientific endeavor that we own up to our individual mistakes, and retract as necessary. It certainly hurt my pride to send that retraction in (Fig. 3), as it stings to write this essay, which I consider a form of penance. Public self-flagellation by blogging isn’t fun, but it is important when justified. We must own up to our failures. Something, by the way, that certain (all?) politicians could learn.

Drowning my R-sorrows in a glass of Hendry Zinfandel.

Second, I suspect that I am not the only biologist out there to make a small mistake in R code that has a big impact. One single solitary line of code, a “1 –” that does not belong, and you have a positive result where there should be a negative result. Errors may arise from a naïve misunderstanding of the code (as was my problem in 2008), or from a simple typographic error. I recently caught a collaborator (who will go unnamed) in a tiny R mistake that accidentally dropped half our data, rendering some cool results non-significant (until we figured out the error while writing the manuscript). So: how many results, negative or positive, in the published literature are tainted by coding mistakes like mine? We just don’t know. Which raises an important question: why don’t we review R code (or other custom software) as part of the peer-review process? The answer of course is that this is tedious, code may be slow to run, it requires a match between the authors’ and reviewers’ programming knowledge, and so on. Yet proof-reading, checking, and reviewing statistical code is at least as essential to ensuring scientific quality as proof-reading our prose in the introduction or discussion of a paper. I now habitually double- and triple-check my own, and my collaborators’, R code.
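Data-dropping mistakes of the kind my collaborator made are easy to recreate in any language. Here is a hypothetical sketch in Python (invented numbers, not our actual data or code) of how a simple comparison silently discards every row containing a missing value:

```python
import numpy as np

# Hypothetical gill-raker counts for six stickleback; half are missing (NaN).
gill_rakers = np.array([18.0, np.nan, 21.0, np.nan, 19.0, np.nan])

# Intended filter: keep fish with at least 18 gill rakers.
kept = gill_rakers[gill_rakers >= 18]

# NaN fails every comparison, so the three missing fish vanish:
# len(kept) is 3, not 6 -- half the data dropped, with no error raised.

# Defensive habit: make every row account for itself.
n_missing = int(np.isnan(gill_rakers).sum())
assert len(kept) + n_missing == len(gill_rakers)
```

Sprinkling explicit row-count assertions like the last line through an analysis script is a cheap way to catch this class of bug before it reaches a manuscript.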

Third, R is a double-edged sword. Statistical programming in R or other languages has taken evolution and ecology by storm in the past decade. This is mostly for the best. It is free, and extremely powerful and flexible. I love writing R code. One can do subtle analyses and beautiful graphics, with a bit of work learning the syntax and style. But with great power comes great responsibility. There is a lot of scope for error in lengthy R scripts, and that worries me. On the plus side, the ability to save R scripts is a great thing. I did my PhD using SYSTAT, doing convoluted analyses with a series of drag-and-drop menus in a snazzy GUI program. It was easy, intuitive, and left no permanent trail of what I did. So, I made sure I could recreate a result a few times before I trusted it wholly. But I simply don’t have the ability to just dust off and instantly redo all the analyses from my PhD.  Saving (and annotating!!!!!) one’s R code provides a long-term record of all the steps, decisions, and analyses tried. This archive is essential to double-checking results, as I had to do 8 years after analyzing data for the Bolnick and Paull paper.

Fourth, I found myself wondering about the balance between retraction and correction. The paper was testing an interesting and relevant idea. The fact that the result is now a negative result, rather than a positive one, does not negate the value of the question, nor does it negate some of the other results presented in the paper about among-individual diet variation. I wavered on whether to retract, or to publish a correction. In the end, I opted for a retraction because the core message of the paper must be converted to a negative result, which would entail fundamentally rewriting more than half the results and most of the discussion. That’s more change than a correction can accommodate. Was that the right approach?

To conclude, I’ve recently learned through painful personal experience how risky it can be to use custom code to analyze data. My confidence in our collective research results will be improved if we can find a way to better monitor such custom code, preferably before publication. As Ronald Reagan once said, “Trust, but verify”. And when something isn’t verified, step forward and say so. I hereby retract my paper:
Daniel I. Bolnick and Jeffrey S. Paull. 2009. Morphological and dietary differences between individuals are weakly but positively correlated within a population of threespine stickleback. Evol. Ecol. Res. 11, 1217–1233.
I still think the paper poses an interesting question, and might be worth reading for that reason. But if you do read (or, God forbid, cite) that paper, keep in mind that the better title would have been “Morphological and dietary differences between individuals are NOT positively correlated within a population of threespine stickleback”, and know that the trends shown in Figures 3 and 4 of the paper are not at all significant. Consider it a negative-result paper now.
The good news is that now we are in greater need of new tests of the prediction illustrated in the first picture, above.

 A more appropriate version of the first page of the newly retracted paper.

1. U. Dieckmann, M. Doebeli, On the origin of species by sympatric speciation. Nature 400, 354-357 (1999).
2. M. Doebeli, Quantitative genetics and population dynamics. Evolution 50, 532-546 (1996).
3. M. Doebeli, An explicit genetic model for ecological character displacement. Ecology 77, 510-520 (1996).
4. D. I. Bolnick, J. Paull, Diet similarity declines with morphological distance between conspecific individuals. Evolutionary Ecology Research 11, 1217-1233 (2009).
5. N. Zaccarelli, D. I. Bolnick, G. Mancinelli, RInsp: an R package for the analysis of intra-specific variation in resource use. Methods in Ecology and Evolution, DOI:10.1111/2041-210X.12079, (2013).

Monday, November 21, 2016

Flexible, interactive simulations: SLiM 2 published in MBE

Hi all!  Back in April 2016, I wrote a post about SLiM 2.0, a software package that I've developed in collaboration with Philipp Messer at Cornell.  SLiM 2 runs genetically-explicit individual-based simulations of evolution, on the Mac or on Linux, either at the command line or (on the Mac) in an interactive graphical modelling environment (great for teaching and labs!).  SLiM 2 is scriptable, with an R-like scripting language, making it extremely flexible; the manual for SLiM 2 has dozens of example "recipes" for different types of models that can be implemented in SLiM, including genetic structure, population structure, complex types of selection, complex mating systems, and complex temporal model structure.  Even relatively complex models (quantitative genetics models backed by explicit loci, kin selection and green-beard models, models of behavioral interactions between individuals, models of social learning, etc.) can be written with just a few lines of script.  And yet despite all this flexibility, it's also quite fast, and it works well on computing clusters if you have projects with long runtimes.
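SLiM models are written in its own Eidos scripting language, but the core idea of a forward, individual-based simulation can be conveyed in a few lines of any language. Here is a deliberately minimal Wright-Fisher sketch in Python – one haploid biallelic locus with selection and drift – just to illustrate what "forward in time" means; real SLiM recipes track explicit genomes, mutation, recombination, population structure, and much more:

```python
import random

def wright_fisher(n=500, p0=0.05, s=0.1, generations=200, seed=1):
    """Minimal forward-in-time simulation: one haploid biallelic locus,
    Wright-Fisher reproduction with selection coefficient s and drift."""
    random.seed(seed)
    p = p0                                   # frequency of the beneficial allele
    for _ in range(generations):
        # Selection: reweight the beneficial allele by (1 + s).
        w = p * (1 + s) / (p * (1 + s) + (1 - p))
        # Drift: each of n offspring draws a parent allele at random.
        p = sum(random.random() < w for _ in range(n)) / n
        if p in (0.0, 1.0):                  # fixation or loss
            break
    return p
```

Everything SLiM adds on top of this skeleton – explicit chromosomes, per-individual fitness callbacks, scripted events at particular generations – is what the recipe scripts in the manual configure.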

What I'm announcing today is that our paper on SLiM 2 has now been published online by Molecular Biology and Evolution.  This paper introduces the software and provides an interesting model as an example (a CRISPR/Cas9-based gene drive in a stepping-stone island model with spatial variation in selection acting on the drive allele).  It also provides performance comparisons with other forward genetic simulation packages (SFS_CODE and fwdpp).  If you're interested in SLiM, this paper is a good place to start; and if you're already using SLiM, it's now the correct paper to cite, not Philipp's 2013 paper on SLiM 1.0.

If you've got questions or feedback about SLiM 2, you can either contact me by email (bhaller squiggly mac point com), or you can post on SLiM's discussion list, slim-discuss.  Enjoy!


Haller, B.C., & Messer, P.W. (2016). SLiM 2: Flexible, interactive forward genetic simulations.  Molecular Biology and Evolution (advance access).  DOI: 10.1093/molbev/msw211

Saturday, November 12, 2016

The healing power of optimism

Recent events can leave one pessimistic about the future of our world and the merits of its humans. Climate change is running amok. Deforestation abounds. Invasive species destroy native communities. Terrorists cause unprecedented fear and suffering. Racist, misogynistic, serial liars are elected to the most powerful positions. Indeed, talking to young people makes clear that they often think the world is spiraling into Hell and taking humanity with it. Biodiversity is destroyed. Our kids have no future. Humans are on the path to extinction. In this miasma of pessimism, it is perhaps useful for us old timers to bring a bit of personal historical perspective.

When I was growing up, nuclear war was the specter hanging over all our heads.

Many people – including all my friends – were almost sure that we were all going to die in a ball of flame or frozen in the subsequent nuclear winter. Bunkers were constructed. Supplies were stockpiled. Fear shaped nearly all aspects of life. Now, the fear is mostly gone. 


Another werewolf of my childhood was smog.

Take Los Angeles as a microcosm. Smog was so bad that people were told not to go outdoors for much of the year. Crops withered. People died of lung problems. Then clean air legislation led to emission control devices, particularly the catalytic converter. Now, smog alerts are much less common.


Then came the ozone layer depletion. 

CFCs and other pollutants were causing it to thin, increasing the bombardment of the world’s DNA with damaging UV radiation. We were all going to need umbrellas all day long. But then regulation banned CFCs, and the ozone layer stabilized to the point that it is no longer a paramount concern.

And don’t forget DDT (solved by legislation), acid rain (reduced through emission controls), mercury poisoning (reduced through awareness), eutrophication (reduced through waste processing), George W. Bush (followed by Obama), Stephen Harper (followed by Trudeau), and so on. Sure, some of these problems still exist, especially in the developing world, but they are nowhere near the front of our consciousness and concerns anymore because – to a point – we have learned how to deal with them and have taken steps to reduce them.

Now we have deforestation, climate change, terrorism, Brexit, and – of course – Trump. Just like nuclear war, smog, ozone depletion, DDT, acid rain, and the rest of it, these problems can make it seem like the end of the world is just around the corner. I would submit, however, that these problems will be solved (or at least reduced) through human ingenuity, legislation, and social change. It won’t be instant, it won’t be everywhere (e.g., smog and eutrophication are still huge problems in the developing world), and it won’t be complete. But – just like seemingly unsolvable problems of the past – today’s problems are also solvable.

As today’s problems fade (some of them – most notably climate change – very slowly), new problems will emerge. Those problems will cause pessimism in the future’s youth. But those of us old timers who have seen unsolvable problems emerge and then be solved will be more sanguine about things – optimistic even. Of course, this optimism is no cause for complacency or inaction - in fact, just the opposite. The key is for all of us scientists, citizens, and humans to do what we can to improve the state of the planet and our society.

I expect this post to engender many thoughts and opinions about how I am glossing over how horrible the state of the world is – and will become. Rest assured, I fully acknowledge that yesterday's problems are not entirely (or maybe even mostly) gone and that today’s problems are huge – and will remain so into the future. My point is simply that a personal historical perspective from us old timers can perhaps bring some healing by promoting optimism. That optimism will then hopefully stimulate action that helps to solve the problems. Yes we can.

Wednesday, November 9, 2016

Street smarts

On a Bajan terrace, under the mystified gaze of local customers, two men stare, unblinking, at a sugar packet placed on the next table. Are they waiting for the sugar packet to reveal the answer to life and the universe, or to show them the way of the holy sugar cane? In fact, these two seemingly enlightened guys are conducting a scientific study. The excellent Simon Ducatez, a French evolutionary biologist, and I, Jean-Nicolas Audet, a neuroethologist from Montreal, are in Barbados to study bird behavior.

Waiting for the bullfinches. Field work is never easy.

Those we are waiting for are Barbados bullfinches. When you sit at a terrace in Barbados, it's almost guaranteed that you will share your table with bullfinches. Among the street smarts they use to forage (see Figure 4 and supplementary movies 1 and 2), bullfinches steal sugar packets and are able to open them to extract the sugar (see movie below). Our multiple terrace visits allowed us to discover that this innovation appeared independently several times (and was not just socially transmitted).

 Barbados bullfinch opening a sugar packet.

But what about bullfinches that live in the countryside, where there are no sugar packets lying around? Would rural bullfinches be capable of accomplishing such feats, if they had the opportunity to do so? Together with my supervisor Louis Lefebvre, we decided to test this idea by comparing the behavior of rural and urban bullfinches.

The goal was to capture bullfinches in places with different degrees of urbanization, from highly rural to highly urbanized (see map below). The northeastern zone of Barbados is one of the few areas relatively untouched by human presence, so rural sites were concentrated in this part of the island. In contrast, the west coast is very populated, partly because of the very high tourist activity. Going out in the wild (and in the human wilderness), into uncharted territories to capture birds, presented some challenges. We often had to chase away monkeys, mongooses, giant bumblebees, or even horses that were too interested in our mist nets, and we were also chased away ourselves by angry farmers who thought we were poaching on their land. We also needed some street smarts of our own to work out the logistics with very limited means, and we even had to manufacture some specialized equipment. In any case, this adventure was a lot of fun, and it is my best field work experience to date.

Our 8 capture sites. Red indicators designate rural sites and yellow, urban sites.

Once we had captured our birds – along with many other wonderful bird species that happened to fly into the nets – we brought the bullfinches to the “lab” at the Bellairs research institute. The “lab” was in fact four walls and a roof; for the rest, we had to figure out how to make it look like an aviary. Again, a lot of street smarts was needed there.

Me, proud of my artisanal mist net installation on a rural site.

And that is when, finally, the real science began. Our first behavioral task aimed to measure the birds’ boldness by recording how long it takes for a bird to come to the feeder after a human disturbance. As expected, the urban birds were bolder, probably because they are more habituated to human presence. We also measured neophobia, the fear of novelty, using the same protocol as for boldness but with a novel object placed beside the feeder. Surprisingly, the urban birds were more neophobic than the birds from rural areas. While we don’t know the real reason for this, it could be that birds living in urbanized areas learn to fear novel situations because of their potential danger, whereas rural birds live in very predictable environments and never learn to fear weird situations. For more details on the temperament results, see the original article.

Our most striking result is the finding that urban bullfinches are more street smart than country birds, as reported by IFLScience. In fact, birds captured in urbanized areas were faster at solving two different problem-solving tasks. Those tasks (see video made by National Geographic, below) were specifically designed to mimic technical foraging innovations in the wild, like opening sugar packets. In a city, a better ability to solve problems could mean the difference between life and death.

We also measured immunocompetence in birds from both environments. To do so, we injected PHA into the wings of bullfinches and, 24 hours later, measured the intensity of the reaction – a proxy for the strength of the immune system. We initially hypothesized that immunity would be reduced in animals with better cognitive abilities, since it is costly to maintain both systems at the same time; immunity seemed a good candidate for a trait that trades off against problem-solving ability. We were wrong. It appears that the urban birds’ immunocompetence is much higher than that of rural birds. It seems that in this case the urban birds have it all, although I find this hard to believe. Another possibility is that city birds live well but die faster than country birds. In fact, in a study involving great tits, telomeres were found to be shorter in urban birds than in rural birds. In any case, if I were a bird, I would probably be an urban bullfinch.

The article “The Town Bird and the Country Bird: problem-solving and immunocompetence vary with urbanization” was published in Behavioral Ecology, 2016; 27(2):637.

Tuesday, October 18, 2016

What if all my papers were reviews?

When I was a young professor, I looked down my nose a bit at professors who only published review papers, leaving all the empirical papers to their students. Now that I am a middle-aged professor, it feels like all I do these days is write review papers. Just these last few years, I have written (or helped to write) reviews for Heredity, Journal of Heredity, Philosophical Transactions of the Royal Society B, Evolutionary Applications (2), Science, Annals of the New York Academy of Sciences, Trends in Ecology and Evolution, and others. I haven’t written a true empirical paper since 2013, when I published two. Moreover, even my forthcoming book can be thought of as one long review paper.

A major reason for this shift is that “older” professors are better known and so are increasingly asked to write reviews. Indeed, most of the review papers listed above were invited by the journals – and the others were requests to participate from younger researchers. As all of the requests came from friends or colleagues, were in good journals, and gave us the opportunity to say pretty much what we wanted, I accepted them. Of course, I also enjoyed writing them and I hope they stimulate new ideas and research interest. Or could this simply be lipstick on a pig?

Review papers have been criticized for not generating new knowledge, which presumably would be much better than simply summarizing existing knowledge. Indeed – on this blog – one post criticized review papers in eco-evolutionary dynamics for being more frequent than empirical papers on the same topic. The basic argument is that people should stop spending their time writing reviews and should instead go out and collect new data and run new experiments. Otherwise, progress in science will be stifled – or, perhaps more appropriately, it will be like one of those “bubbles” they talk about for sub-prime mortgages. Or a house of cards. Etc.

So what are review papers good for then – and should you take the time to write one? I would argue that – while all of the above is true – review papers (some of them anyway) are very valuable and should be deployed early and often in your career.

1. One benefit of review papers is that they bring together and synthesize a large amount of literature. So many papers are being published these days that it is impossible to keep on top of all (or even most) of them. Review papers thus become great ways to see what is in the literature in a single reading and can help to identify empirical papers that you might not have known about.

2. Another potential benefit of review papers is that they often allow more subjectivity in interpretation and more speculation. Writing an empirical paper can constrain you to only asserting conclusions that are strongly supported by the data. This constraint is good, of course, because empirical papers are specifically claiming that original data support a conclusion. At the same time, the constraint can be limiting in that empirical data might inspire new ideas that are not strictly supported by the data, yet are nevertheless good ideas that can move the field forward. Reviews/syntheses/opinions are great places to get these bold new ideas out there even if they aren’t yet supported by (much) data.

3. More pragmatically, review papers are great ways for younger researchers to increase their exposure. In some cases, a student can write an excellent series of empirical papers that don’t gain much attention, simply because so many papers are out there. Review papers often gain more attention (or at least exposure through citations) and can thereby help a young researcher become known as an expert and an original thinker in a particular area, and they can also bring attention to that researcher’s empirical papers. This pragmatic benefit was certainly the case for me: a review paper I published just after my PhD (Hendry and Kinnison 1999 - Evolution) helped to promote the importance of contemporary (rapid) evolution and remains one of my highest-cited papers (495 citations on Web of Science, 682 citations on Google Scholar).
Citations to all of my papers by year of publication (to 2010). Comments, responses, etc. are omitted.

But perhaps now that I am getting longer in the tooth, I should stop writing these things – or at least so many of them. Maybe if I stopped writing so many, I could write more and better empirical papers. Maybe it is all a big trade-off and I have shifted too far toward one pole. These sorts of questions got me wondering: what would my CV look like if I subtracted all of my review papers? So I did precisely that. I downloaded Web of Science data for all of my papers, subtracted commentaries, and divided them into review papers and primary data papers.

Review papers are clearly boosting my stats or, more importantly, increasing awareness of my work. Yet the above table isn’t exactly a fair comparison given that writing review papers presumably reduces the number of empirical papers I can publish. So, on the charitable assumption that I could write roughly one empirical paper for each review paper, I simply replaced my 29 review papers with the average empirical paper (ignoring date of publication). That is, I assumed that my 29 review papers were replaced with 29 copies of my average empirical paper, yielding 149 total papers with 7181 citations and an h index of 48.

To continue the absurdity: what if all my papers were review papers? Here I replaced all 120 empirical papers with the average review paper. The stats are shown below and – if all else were equal and citations were all that mattered – I should simply publish review papers.
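The arithmetic behind these what-if scenarios is easy to reproduce. Here is a minimal sketch (with invented citation counts, not my actual Web of Science record) of how the h-index responds when each review paper is swapped for a copy of the average empirical paper:

```python
def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation counts -- NOT my actual Web of Science data.
empirical = [120, 95, 60, 44, 30, 12, 8, 3]
reviews = [495, 210, 90, 15]

# Scenario: swap each review for a copy of the "average" empirical paper.
avg_empirical = sum(empirical) / len(empirical)
swapped = empirical + [avg_empirical] * len(reviews)

print("actual portfolio h:", h_index(empirical + reviews))
print("reviews swapped out h:", h_index(swapped))
```

Because the h-index counts papers with at least h citations each, a few heavily cited reviews can prop up the index even when most empirical papers sit well below it.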

Of course all else is not equal. For instance, my tendency to be invited to provide reviews might require a firm empirical footing based on original data. Alternatively (or additionally), my empirical papers might be cited more heavily because of the exposure brought by reviews. Or both, which indeed is my point. Not only are reviews good for the reasons described above but they also form a nice conceptual and promotional complement to related data papers.

In short, I strongly recommend that advanced PhD students and postdocs write review papers. Those papers can strongly influence your research field. They will be cited. They will stimulate your own thinking. They will enhance your empirical research.

As for what makes a good review paper, I have a couple of suggestions:

1. Meta-analyses are much better than conceptual thought pieces. Indeed, I have included meta-analyses in my empirical category above. Of course, the best reviews would be both – conceptual and meta-analytical.

2. Don’t just review the evidence – present new ideas and advance new hypotheses. Although you can make some hay from rehashing previous review topics, the way to have a real influence is to come up with new ideas and review new topics.

The day after writing the above, I (by coincidence) was a guest at the annual meeting of the editorial board of the Annual Review of Ecology, Evolution, and Systematics, where I agreed to write another paper. And, of course, I have another paper in preparation for Trends in Ecology and Evolution, and several other review papers beyond those. So this trend will clearly continue for a while longer. I am starting to really miss those empirical papers!

Tuesday, September 27, 2016

Undergraduate Field Research: Making it Happen

A group of female Himalayan Tahr looking at us!
It’s really amazing how far you can get when you aren’t afraid to “just go for it”, no matter what point you are at in your career! My dream started sitting next to a colleague and best friend of mine at our graduation from the Panama Field Study Semester, an amazing program offered to undergraduates at McGill University. We turned to each other that day and made a promise that one day we would work together on a project. A few months later, when I was back in school in the last year of my undergraduate degree in Biology, I got a phone call from this friend, who was all the way in India, with the exciting news that the local community he was working in had approached him with a project. Our golden opportunity was here! As intimidating as starting this project from nothing was, we began doing our research: what is a Himalayan tahr and a Himalayan serow? Where is Uttarakhand, India? How do you write a project proposal? Which grants do we apply to? Slowly but surely, everything started coming together and I learnt many lessons along the way (and am still learning today!).

And so we were off to India! 
One of the most important lessons I learnt is: do not be afraid to contact people to ask for help! Though experts in their field, such as professors, may seem intimidating at first, they are just people! Every single professor we contacted was more than willing to help us out, whether it was through providing specialist knowledge on a species, teaching us parasitology techniques, writing a letter of recommendation, or being a constant support and helping us fund our project (thank you Andrew for being one of those!).

In the end, through dedication, support from researchers and professors we had contacted worldwide, and above all a positive attitude, we received the necessary funding to go out and do our project. And so the Wild Ungulate Research and Conservation Initiative was born! (Check us out on Instagram: @wurc_itindia).

Our two study sites: Rudranath and Sohkark (near Tungnath)
Our project had two main goals: (1) to assess the health impact of livestock grazing on two ungulate species in the Kedarnath Wildlife Sanctuary, Uttarakhand, India, and (2) to conduct an ecological review on these two ungulate species, as there has been little research done on them to date. The two species we were studying were Hemitragus jemlahicus (Himalayan tahr) and Capricornis thar (Himalayan serow). To tackle the first aspect of our project, we collected tahr, serow, and livestock faecal samples and used the FLOTAC method to count parasite eggs. For the ecological data, we took many focal and scan samples to record the behaviour of individuals and the group. At each point of collection we also took a GPS point, and we will map the distribution of both species, with special focus on the serow, as their habitat preference remains unclear. We split our time between two research sites. The first was Rudranath, a site where tahr and serow are known to overlap with livestock herds, located between 3000 and 3800 m along a pilgrimage trail to Rudranath Temple. The second site, Sohkark, was expected to be devoid of livestock (but, as so often happens in the field, this was not the case) and encompassed elevations between 2700 and 4000 m.

Me on the lookout for tahr - gotta get 
low to hide from their view 
Although the fieldwork is still in progress, we have some preliminary results. From our scan samples, we found that, at both sites, Himalayan tahr spend the majority of their time foraging (58% and 65%). However, the focal results indicated that adult females allocated more time to foraging, and less time to resting, in Rudranath compared to Sohkark. Even more interesting, sub-adult and adult males showed seasonal differences in foraging activity. Furthermore, we observed a curious subdivision of groups by age and sex in the Himalayan tahr.

For prevalence and intensity of parasites, we found higher parasite prevalence in both livestock and tahr in Sohkark than in Rudranath; however, there was no significant difference in intensity. This is interesting because livestock were in closer proximity to the tahr in Rudranath than in Sohkark, so we had predicted the opposite to be true.
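For readers unfamiliar with the parasitology terms: prevalence is the proportion of sampled hosts that are infected at all, whereas intensity is the parasite load among infected hosts only – which is why the two can differ between sites. A minimal sketch of the distinction (the eggs-per-gram counts below are invented for illustration, not our FLOTAC data):

```python
def prevalence(egg_counts):
    """Proportion of samples containing at least one parasite egg."""
    return sum(1 for c in egg_counts if c > 0) / len(egg_counts)

def mean_intensity(egg_counts):
    """Mean egg count among infected samples only."""
    infected = [c for c in egg_counts if c > 0]
    return sum(infected) / len(infected) if infected else 0.0

# Invented counts: site B has higher prevalence but similar intensity.
tahr_site_a = [0, 12, 0, 40, 7, 0]
tahr_site_b = [5, 30, 0, 22, 18, 9]

print(prevalence(tahr_site_a), mean_intensity(tahr_site_a))
print(prevalence(tahr_site_b), mean_intensity(tahr_site_b))
```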

Our lab set up in the field
As you might have noticed, the Himalayan serow was mentioned a lot less in our findings. This is because we were not able to obtain any direct observations – we have yet to spot an individual. Nevertheless, we were able to infer some information from indirect observations and informal interviews. The most alarming finding was that Himalayan serow populations are suffering big losses this year, as they are being affected by sarcoptic mange, a skin disease caused by a mite (species unknown). Affected serow have been found on both sides of the Mandal valley. This was further reinforced when we found a diseased and deceased serow near Mandal village (the base for the trek to Rudranath), as well as by the lack of serow spotted this year, which, according to our local guide, is very unusual. This is super interesting and a huge red flag, so we are hoping to keep exploring this issue and learning more about the species.

Deceased Himalayan Serow found near Mandal village

As fascinating as all the research was, what I will really take away from this time in the field was all the lessons learnt. And by this I mean more than just learning how to improve our record-keeping skills and how to adapt our schedule to maximize and improve data collection. This project was deeply satisfying in two main ways. The first was through the confirmation of my love for field biology and research, which has given me the drive and assurance that this is what I want to be doing for the next few years of my life. The second was through shaping my personal growth, mainly by showing me that I am capable of more than I give myself credit for. A lot of the fieldwork was very physically demanding, as it required us to trek for long hours at high altitudes every day, and I came to appreciate my mental strength, proving to myself how many barriers you can overcome by being mentally strong. Furthermore, the fact that this project went from a dream to a reality was proof enough in itself that if you work hard enough and put heart into something you want, results will show!

A great field example of the lesson we have all heard before – something like “don’t give up hope when things don’t seem to be going your way” – was our situation with finding male tahr. For days we had been searching for a herd of male tahr without success. Some team members even embarked on a gruelling hike up to 5000 m in altitude in an attempt to find them. Then suddenly, we got notice that they were a 15-minute hike from our camp! And one night, when we had set out to have tea with a local shepherd in the area, without any intention of collecting data at all, we spotted the group of male tahr!
The group of subadult and adult male tahr found in Rudranath

Perhaps what I developed the most appreciation for this summer is something that too often goes unmentioned: the essential role of local team members. Our local field guide was invaluable when it came to finding our way around the mountains, recognizing animal signs, establishing crucial local contacts, and even in what would appear to be simple tasks, such as shopping for food supplies. Our field cook was another invaluable and not-frequently-enough-mentioned part of our team: by his taking on the cooking duties, we were able to go out into the field for longer periods of time and really dedicate our efforts to doing the research. Our project would never have gotten as far without their collaboration, and the value of local knowledge is something I will always treasure.

Harsh Maithani, our local field guide,
posing for his picture
On a similar note, it was amazing to see how much a project like ours can do for a community. In addition to learning more about the species they treasure, it created well-paid jobs (alas, temporary for now) for locals in a field they are passionate about and gave them the opportunity to work doing something they love. As an example, Prabhat is a boy of 18 who worked as a field assistant and relief cook with us this summer during his summer vacation. Instead of hanging around the village with his friends, he was ecstatic about the opportunity to learn what it is like to be a field guide, and he discovered that it is possible to have a career that matches his love for the mountains.

All the girls at camp and Prabhat
posing in front of the beautiful
I could not be more grateful to have had this incredible experience. We are now working on finding additional funding to continue this project next year and hopefully expand our research initiative. It is astounding what can happen if you really put your mind and heart towards something and give it your best effort. If there is something you have always wanted to do, but have always thought is not quite feasible or is too much work, I encourage you to give it a try. You never know what will come of it!

Our field team for Sohkark, site 2!

Friday, September 16, 2016

No prize for finishing (or starting) your PhD first

The first time I came to Montréal was a couple of years ago. I was just finishing up my undergraduate degree at the University of Notre Dame and had the opportunity to attend the Genomes to/aux Biomes conference where I presented some research I was doing on speciation genomics of apple maggot flies out of Jeffrey Feder’s lab.

discovering the culinary delicacies of the north

It was a pleasant and sunny few days packed with science and poutine. For me, it was also a chance to explore the city and McGill University where I was to begin my Ph.D. in the fall and meet up with Rowan Barrett, my supervisor. I knew Rowan from years back when we were both working in Hopi Hoekstra’s lab and I had already gone out to the sandhills of Nebraska with him a few times to catch mice for the project that I would work on in my Ph.D. I had already been accepted to McGill and the funding was in place. The situation was ideal and everything was going according to plan. But sometimes life has other things in store.

corn thuggin with Rowan

Besides this Ph.D., the one other thing I applied for was the MEME Erasmus Mundus Master Programme in Evolutionary Biology. It’s a joint 2-year master programme between four European universities (University of Groningen, Ludwig Maximilians University of Munich, Uppsala University, and University of Montpellier) and had been somewhat of a dream of mine ever since I found out about it in my freshman year.

our logo is pretty lit

The programme is set up such that students choose to take courses in either Groningen, Netherlands or Uppsala, Sweden in the first semester. Students then go to either Munich, Germany or Montpellier, France for more courses and a half-semester research project. In the final year, students conduct two separate thesis projects in any of the four universities, Harvard University (a partner of the programme), or basically any university or research institution in the world so long as a professor from one of the four universities is willing to supervise the project. In the end, students earn double or even triple M.Sc. degrees and often come out with multiple publications. Having never been to Europe before in my life and having been awarded a full scholarship, MEME was a once-in-a-lifetime opportunity I could not refuse. So I decided to take a detour to my Ph.D. I figured, if it was meant to be, I would find my way back eventually. Thankfully, Rowan agreed. :)

What I can say is that it was simply the best time of my life. We were 22 students representing 17 countries, each bringing something different to the table from our diverse cultural and educational backgrounds. Our discussions ranged from Dawkins and The Selfish Gene, to the insanity of dealing with French banks, to which new country was going to be our next adventure. And I won’t lie, it was fun to see a Syrian doctor interested in evolutionary medicine and bacteriophages struggle on the mudflats of the northern Dutch island of Schiermonnikoog doing fieldwork, wondering out loud why the entire field of evolutionary ecology exists in the first place.

flatness can be beautiful too

Travel became life, and life fit in a backpack. MEME took me from the Netherlands to France to California to Sweden to China, all within a span of 24 months. Each new country came with its own set of challenges, and trying to open and close entire chapters of your life within months wasn’t easy. A Malaysian classmate of mine put it best: it’s like going through breakup after breakup, but with each new relationship, you learn and become more experienced. The projects I worked on were equally diverse, from the genetics of starvation tolerance in European seabass with Bruno Guinand, to genetic mark-recapture of giant pandas with Per Palsbøll, Matt Durnin, Katja Guschanski and Jacob Höglund, to taxonomic assignment of metabarcoding data with Douglas Yu. It was an intense, crazy, unforgettable experience. A rollercoaster or a whirlwind... or a rollercoaster caught in a whirlwind. And don’t get me started on the parties. Oh the parties…

MEME graduation 2016 - Erken, Sweden

So a full 2 years later, I now have 3 M.Sc. degrees in evolutionary biology from 3 countries, 1 paper accepted, 1 submitted, and more to come. I have a deeper understanding of what it really means to be a global citizen and greater personal and scientific maturity to start my new life and Ph.D. at McGill. So if you’re reading this and this all sounds pretty cool to you, the next application cycle opens soon, on October 15, 2016. My advice to any undergrads out there is to take your time. In fact, I almost wish I had taken more. The academic road is a long one and there is no prize for who gets their Ph.D. first. Of course, it’s best to stay productive by becoming a research assistant or doing a masters, especially if you want a career within academia, but if you’re not sure about your next step, it wouldn’t be the wisest idea to jump straight into a Ph.D. Or perhaps this is just the European culture rubbing off on me (which isn’t so bad!). In part because I decided to do MEME before starting my Ph.D. at McGill, I successfully applied for the Vanier Canada Graduate Scholarship, so it seems like I made the right decision after all. The next big challenge for me will be to settle in at McGill, get used to living in one place for more than 5 months, and sink my teeth into some long-term projects, which I now gladly accept.

Tuesday, September 13, 2016

Is Prediction an Exquisite Fiction?

As I described in a previous post, a long-standing topic of discussion is the usefulness of a given scientific endeavor or study. Along these lines, science is often divided into BASIC and APPLIED. Applied science is – by definition – useful. It cures some disease. It improves crop yields. It saves some endangered species. Basic science is – at least classically – not obviously or immediately useful. Instead, it addresses a (hopefully) interesting question – interesting at least to the researcher. Sometimes called “curiosity-driven” science, basic research might one day have great utility but, at the time it is conducted, its uses aren’t obvious.

The motivation for my earlier post on basic vs. applied science.
Basic science was once considered an admirable pursuit – perhaps even preferable as an intellectual, university-based enterprise. More recently, however, universities and funding agencies want to hear how your research – whether basic or applied – will have “broader impacts” or “direct benefit to the people of ...” No longer is it enough for the science itself to be interesting and clever and well designed; it also has to have a clear utility. When justifying a research project, these pay-offs are expected to be clearly and forcefully presented, usually at the outset of a proposal and in an explicit section at the end.

For basic scientists in ecology and evolution, these applied justifications tend to involve conservation (e.g., saving some endangered species or place), management (e.g., of natural resources), discovery (e.g., new drugs), or ecosystem services (e.g., greater biodiversity generates greater productivity or resilience or whatever). In many cases, the specific link between the science and the proffered application is PREDICTION. For example, “we need to be able to predict what is going to happen, in the face of environmental change or management actions, if we are going to design effective strategies for conservation or management.” This sort of justification is a natural and easy one because we can always say “If we don’t understand the system well, we can’t predict it. My research will help us to understand the system better, which will improve prediction, which will be useful, right?”

Just last week I – along with 21 other scientists – published an opinion/review paper in Science amplifying this last point. Specifically, we need to predict what will happen with climate change and – to do so accurately – we need much more information about organisms, communities, and ecosystems than we currently have. In this post, I would like to play Devil’s Advocate to my own paper by arguing that prediction is often hopeless.

From our Science paper.
A first important distinction is whether we wish to make a prediction or whether we wish to make an ACCURATE prediction. It might seem obvious that we want the latter but even the former is sometimes hard. That is, we might not have enough information about a given system to even speculate effectively as to whether or not some action (e.g., climate change) will have a particular effect on a particular species. Most of the time, however, we are able to make some sort of prediction based on intuition or similar systems or mathematical models or experiments or whatever. So the real concern becomes “how correct (accurate/precise) will be our predictions?”

The accuracy of prediction will depend on the type and precision of prediction. For instance, we might first want to predict simply WHETHER a given environmental change or management action will have an effect at all. Here we might be safe in many instances. Will climate change influence biological diversity? Yes! If the environmental change is large, something will respond to it. However, this isn’t the sort of prediction that we – or the public or managers or governments – care about.

We might next want to predict the DIRECTION of an effect. In some cases, this will work fairly well. For instance, we can safely say – based on many examples from nature – that climate warming will advance the timing of reproduction of many plants and animals and that commercial fisheries will lead to smaller body size in harvested populations. A few exceptions will certainly occur but these will tend to be of the type that “prove the rule.” In many other cases, however, predictions as to the direction of an effect will be incorrect. Will climate warming increase or decrease local biodiversity? Hard to say. Will fish harvesting increase or decrease productivity? It depends. In such cases, increased information – including from “basic science” – might improve predictions. 

Experience teaches, however, that expectations developed from theory, from related systems, and from detailed information are – not infrequently – incorrect.
At the most precise level, we might want to predict an effect size, such as a particular rate or endpoint state. How fast will species be lost with climate warming? How many species will be present 25 years from now – and where will they be? How small will harvested fish become, and how quickly will they recover when fishing ceases? I suggest that – in many cases – predictions of this sort will be hopelessly inaccurate, except perhaps by blind luck. Each system (and year) has so much contingency that prior information will not be sufficient. Of course, this is precisely the logic that we invoke when seeking funding: “We can’t make accurate predictions unless we get more information, so give me some money to get it.” It is certainly true that if one had complete information on the driving forces in any given system, and complete information about how those driving forces will change in the future, then accurate predictions of endpoints and rates might be possible. But this “complete” information is generally unattainable.

Another opinion in Science about prediction
In short, many of the arguments one reads in proposals – that the particular basic science being proposed is critical for better prediction – are really just smoke-and-mirrors or, perhaps more accurately, a bait-and-switch. Five years later: “Although I didn’t make better predictions, I did do some cool stuff anyway, no?” Of course, these studies can also weasel out of accountability by saying “Here is some new information that other people might find useful in making better predictions,” or they might say “Here are some new predictions” – with the last being particularly disingenuous because the accuracy of those predictions won’t be known for years, sometimes decades.

My point in this post isn’t that basic science should be abandoned in favor of applied science. My point instead is that it would be nice if we could all just drop the applied BS at the start and end of our proposals. That isn’t why we are doing the study – it is just what we think the reviewers want to hear. The reality is that science has made incredible strides in the past few centuries – and most of those advances, I will speculate, were made by basic rather than applied science. Think of all of the ramifications of Darwin’s theory of natural selection and – coincidentally – all of the incredible and amazing applications. At the time, however, Darwin – and the people who read his book – didn’t focus on its potential applications but rather on its potential to explain how the world around us came to be.

I had better circle back to that Science paper for which I am here playing Devil’s Advocate. It is certainly true that we don’t have enough information to make good predictions of how biodiversity and species ranges will change with climate change. It is also true that getting more information about those species and environments has the potential to improve predictions – although we won’t know if we are correct for decades. Thus, I am not disputing the main arguments we made in the paper. Instead, I am using it as a jumping-off point to argue that additional information is probably even more useful simply in improving our understanding of the world around us, whether or not we attempt predictions. Sometimes this improved basic understanding will eventually have massive benefits for biodiversity and the humans that depend on it.

I think it cheapens, and potentially slows, progress in science to require it (or encourage it) to have obvious immediate applications. The best route to the best possible future applications is to simply turn researchers loose to study what they feel is most interesting, whether applied or basic. Basic research isn’t flawed and in need of an applied crutch to hold it up.


After I posted this, I was told about a similar post on Dynamic Ecology:

A 25-year quest for the Holy Grail of evolutionary biology
