Saturday, March 18, 2017

Over-citation: my papers that should be cited less often

Ever write a great paper – an important one – and publish it to great expectations? “Surely everyone will love this paper,” you think. It is going to be a barn-burner. It is going to bust Web of Science – maybe even Google Scholar – with citations. Then, as the weeks and months and years go by, pretty much nothing happens. The paper gets a few citations (mostly from your own group), a few people seem to have read it, but not much else. And you think, “How did this happen? That was one of my best papers ever – it should be more widely cited.” Perhaps you start to think, “Maybe folks just missed it. If I could only get it in front of them again, people would recognize its greatness and it would go viral.” So you write a blog post about “hidden gems,” or you emphasize the paper on your website, or you send out a few tweets – or all of the above. And … nothing happens. So you carry a (mild) resentment to your retirement, where you give your “exit seminar” and talk about your great work that just didn’t get the attention it deserved. (Yes, I have seen this happen.) Well, this post is about the exact opposite situation – papers that get way more attention than they deserve.

When one applies for a research grant, one usually has to talk about how wonderful one is – at least partly in relation to publications and citations. This need usually takes one to Web of Science or Google Scholar to find out numbers of citations and H-indices and so on. Whenever I do this (such as yesterday while preparing a grant application), I see my top cited papers. I look at some of them and think, “Well, yeah, that paper was indeed useful and influential” but, about as often, I think “What the hell, why does THAT paper have so many citations?” So, I thought I would here take the opposite tack to the usual “papers of mine that should be cited more” and write about “papers of mine that should be cited less.” In doing so, I first need to point out that there isn’t anything wrong with these papers; they simply seem to have received more attention (or at least citations) than their content might deserve – or than we, as authors, expected.

One choice for an over-cited paper might be a short note we published in Conservation Biology about how species distribution models that predict massive extinction under climate change generally ignore evolution and are therefore probably often wrong. Models of this sort look at the abiotic conditions where a species is currently found, ask how the geographical distribution of those conditions is expected to change into the future, and then – if the conditions currently occupied by a given species in a given area shrink excessively – make a prediction of likely extinction. The problem, of course, is that species might evolve to occupy the changing abiotic conditions as selection forces them to do so – which is the only point we made in this paper. This point is certainly correct and many papers have now shown that such modelling is likely to be wrong much of the time, partly because of evolution. Yet it just seems so obvious as to not warrant a citation and – really – all our note did was point out that evolution could be rapid and that it could cause a mismatch between predicted and realized future species distributions. Does this rather obvious insight in a very small note really deserve 200+ citations in 7 years?

And the third most cited paper on Eco-Evolutionary Dynamics is ....
(coauthors redacted to protect the innocent)
Another choice for an over-cited paper might be the introduction we wrote to a Philosophical Transactions of the Royal Society special issue on Eco-Evolutionary Dynamics. The introduction simply pointed out that evolution can be rapid and that evolution can influence ecological processes, before summarizing the papers in the special issue. Again, nothing wrong with the paper, but a summary of papers in a special issue is hardly cause for (soon) 300+ citations – that level of attention is hardly typical of such summaries. I assume that people are citing this paper mainly for the first two general points listed above. This is fine, but excellent papers that treat eco-evolutionary dynamics as a formal research subject, rather than a talking point, are out there and should be cited more. Indeed, several papers in that special issue do precisely that, and yet our introduction is cited more. As a similar example of over-citation, I could also nominate the introduction to another special issue (in Functional Ecology) – which is my fourth most cited paper (437 citations).

Why are these “OK, but not that amazing” papers so highly cited? My guess is that two main factors come into play. The first is that these papers had very good “fill in the box” titles. For instance, our PTRSB paper is the only one in the literature with Eco-Evolutionary Dynamics as the sole words in the title. Thus, any paper writing about eco-evolutionary dynamics can use this citation to “fill in the citation box” after their first sentence on the topic. You know the one – that sentence where you first write “Eco-evolutionary dynamics is a (hot or important or exciting or developing) research topic (REF HERE).” The Functional Ecology introduction has much the same pithy “fill in the box” title (Evolution on Ecological Time Scales) and, now that I look again, so too does the Conservation Biology paper (Evolutionary Response to Climate Change). The second inflation factor is likely that citations beget citations. When “filling in the box”, authors tend to cite papers that other authors used to fill in the same box – perhaps partly because they feel safe in doing so, even if they haven’t read the paper. (In fact, I will bet that few people who cite the above papers have actually read them.) One might say these are “lazy citations” – where you don’t have to read anything but can still show you know the field by citing the commonly cited papers.

Of course, I too sometimes take the lazy citation strategy. Sometimes when I am busting out an introduction and initially write “This [topic here] is a (hot or important or exciting or developing) research area (REF HERE)”, I simply fall back to my usual set of citations that I haven’t looked at for years and years. Doing so is a quick, easy, and safe way to simply move on to the more interesting stuff that really requires reading papers. Or, if I don’t know what to cite, but I know I am stating a well-known fact, I will simply search for the topic on Google Scholar to see what is most cited and then check the title and abstract to make sure citing it is safe. Perhaps this is bad scholarship – or perhaps it is clever efficiency in the sense that these citations don’t really matter. They are generally known phenomena that have been discussed before and for which detailed additional reading would simply be a waste of time – so I am not exactly condemning “lazy citations” here.

My closing point is that the number of citations to a paper doesn’t always reflect the originality, importance, and quality of the paper. Sometimes papers are dramatically under-cited given their quality and potential importance. Sometimes papers are dramatically over-cited given their quality and importance. Of course, this point isn’t a new one, but perhaps I am making it in a slightly novel way.


1. Patrick Nosil first pointed out to me the “fill in the box” citation-inflation phenomenon.
2. While writing this post, I noticed that the Google Scholar link for the Conservation Biology paper doesn’t even list me as an author – irony!
3. No disrespect to my co-authors on the papers discussed above. In fact, my favorite part of all of the above papers was the collaborative writing efforts they involved. Clearly, we did a great job in the writing!
4. Of course, I have my own papers that I think are way under-cited, particularly several awesome ones published in PLoS ONE (an analysis here). Check out how Humans are less morphologically variable (within populations) than are other animals and Bear predation drives the evolution of salmon senescence in unexpected ways. (And, no, I didn’t write this post simply to plug these under-cited papers.)

Friday, March 3, 2017

Maladaptation to chemical exposure – what may be happening and where do we go from here?

Guest post written by Mary Rogalski

Industrial, residential, commercial, and agricultural development greatly benefits human populations, but with the unintended and widespread consequence of increasing the release and availability of chemical pollution. Surface runoff and atmospheric deposition introduce a complex mixture of heavy metals, pesticides, pharmaceuticals, and other contaminants into our water bodies. While some pollutants are regulated in an effort to protect human and environmental health, we have very little knowledge of how pollution exposure affects organisms over the long term. To effectively manage pollution risk, we need a better understanding of these consequences.

Exposure to chemical pollution can have a host of negative effects on organisms, including reduced reproductive output, and at high enough doses, death. Individuals in the wild have been found to vary in their sensitivity to pollution exposure, both within and among populations. Based on these negative effects on individuals and the variation in sensitivity, most evolutionary biologists would likely predict that pollution exposure should select for more tolerant individuals. In other words, populations should be able to adapt to exposure.

This was my hypothesis when I set out to investigate the evolutionary consequences of long-term exposure to heavy metals in Daphnia populations in New England lakes (Rogalski 2017). Daphnia (aka “water fleas”) are tiny crustacean zooplankton that are extremely efficient at grazing algae in lakes but are also pretty sensitive to contaminant exposure. To my surprise, I found the opposite of my predicted trend. Daphnia had evolved to become more sensitive to metal exposure over decades of increasing contamination.

Image of a Daphnia ambigua mother (approx. 1 mm in length). Photo: Eric Lazo-Wasem
When I first started to see this result, I thought there must be some mistake. However, the pattern was repeated. I saw the same increase in sensitivity to copper following historical increases in exposure in two different populations, and also in response to cadmium in a third population. In one lake, the sensitivity remained thirty years after peak copper levels.

Grey points show copper or cadmium contamination through time. Black points show copper or cadmium sensitivity of individual Daphnia clones hatched from different time periods. LC50 is a measure of acute sensitivity, with lower LC50s indicating greater sensitivity. Further details on the study.

I tried to think of what could explain my unexpected result. Surely adaptation must really be happening here, right? But my study is clearly showing the opposite trend. Perhaps there’s an evolutionary trade-off at play? Or some other reason why the populations have not only failed to adapt to metal exposure but also became more sensitive?

Alexander Lake, the study site where Daphnia have become more sensitive to rising cadmium concentrations

While the evolutionary pattern is striking and repeated, at this point I just don’t have enough information to understand the mechanism underlying it. I certainly wouldn’t rule out the possibility that these Daphnia populations are adapting to their changing environmental conditions but just happen to be getting more sensitive to acute copper and cadmium exposure. In particular, I am curious to know if the acute and chronic toxicity responses might be inversely correlated. My assays measured acute toxicity – is it possible that being good at coping with chronic chemical exposure makes a Daphnia worse at dealing with acute exposure? At least one study looked at this question in Daphnia with mixed results, finding no evidence of such a trade-off in response to cadmium, and no obvious pattern in response to copper (Barata et al. 2000).

Yet after having spent a lot of time reflecting on my results, I no longer find the trend so unexpected.
First of all, while maladaptation has received relatively little attention from evolutionary biologists, a meta-analysis by Hereford (2009) suggests that maladaptation happens fairly frequently. Of all the reciprocal transplant studies examined in this meta-analysis, Hereford found that local maladaptation (defined as foreign-population advantage) occurred in 29% of cases. If we see evidence of maladaptation nearly a third of the time when we expect to see local adaptation, my result of increasing sensitivity to metals seems much less unexpected.

My study is not the first evidence of maladaptation to pollution conditions in wild animal populations. Researchers found that barnacles (Balanus amphitrite) in polluted estuarine environments were more sensitive to exposure to an antifouling biocide, copper pyrithione, than animals from less polluted conditions (Romano et al. 2010). Rolshausen et al. (2015) found that Trinidadian guppy (Poecilia reticulata) populations have failed to adapt to crude oil pollution, despite devastating effects of exposure to oil. Also, a PhD student in the lab where I did my dissertation work, Steve Brady, found that wood frog (Rana sylvatica) populations in Connecticut ponds were more sensitive to roadside environments in general, and road salt in particular, compared with frogs from forested ponds (Brady 2013). Steve found that there were also some overall fitness consequences of this increased sensitivity to roadside environments. Interestingly, he found the opposite trend – adaptation to roads and road salt – in another amphibian species, spotted salamanders (Ambystoma maculatum), inhabiting the exact same ponds (Brady 2012).

Results from Brady’s 2013 study of wood frogs.
In addition, even though pollution can have fitness consequences, I don’t think we should expect chemical exposure to act like other forces of selection such as predation, parasitism, or changing temperatures. Pollution exposure can also increase rates of developmental malformations, change sex ratios, and cause cancer. Some pollutants can bioaccumulate in tissues, including those of offspring. When the cadmium chloride that I had ordered for my toxicity trials arrived in the mail, the hazards listed on the safety sheet sounded pretty scary. In particular, cadmium exposure “may cause heritable genetic damage”. The other metal I studied, copper, has been linked with increased mutation rates in exposed Daphnia populations. It’s not hard to imagine how some of these toxicological impacts could have accumulating consequences over the course of many generations of exposure.

One thing that is valuable about my study is that it tracks evolution through time. While local adaptation studies have provided valuable insight into how populations evolve in response to contaminant exposure, they typically miss three critical pieces of information: 1) we don’t know how pollution conditions in a given habitat have changed in the past – we can only compare organisms in polluted and unpolluted conditions; 2) we don’t know what the historical evolutionary trajectories of these populations look like; and 3) in most cases, the phenotypic responses that we observe may include genetic, plastic, epigenetic, and/or maternal effects.

In my study, I was able to address these issues. I used lake sediment archives to track both environmental and evolutionary trajectories over time. I measured metal contaminants in dated sediments to put together the history of exposure experienced by these populations over the past century. I hatched Daphnia from resting egg banks buried in sediments from high and low metal time periods. I then tested these Daphnia for sensitivity to copper or cadmium to see if the populations had evolved in their tolerance for these stressors. Since we can raise Daphnia clonally in the lab I was able to minimize any maternal effects that might have been present.
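To make the LC50 measure concrete: a minimal sketch of how an acute LC50 can be approximated by log-linear interpolation between the two tested concentrations whose mortalities bracket 50%. The doses and mortality values below are invented for illustration – they are not the study's actual data or analysis code (a full probit or logit fit would normally be used).

```python
import math

def lc50(concentrations, mortality):
    """Interpolate the concentration killing 50% of test animals.

    concentrations: increasing doses (e.g. ug/L); mortality: proportion dead
    at each dose. Interpolation is linear in log10(concentration), a common
    quick approximation when a full probit/logit fit is not needed.
    """
    for (c_lo, m_lo), (c_hi, m_hi) in zip(
        zip(concentrations, mortality), zip(concentrations[1:], mortality[1:])
    ):
        if m_lo <= 0.5 <= m_hi:
            frac = (0.5 - m_lo) / (m_hi - m_lo)
            log_lc50 = math.log10(c_lo) + frac * (math.log10(c_hi) - math.log10(c_lo))
            return 10 ** log_lc50
    raise ValueError("50% mortality not bracketed by the tested doses")

# Invented example doses (ug/L) and mortality proportions for two clones:
doses = [1, 10, 100, 1000]
older_clone = [0.0, 0.2, 0.8, 1.0]   # hatched from low-metal sediments
recent_clone = [0.1, 0.5, 0.9, 1.0]  # hatched from high-metal sediments

print(lc50(doses, older_clone))   # LC50 of the clone from the low-metal era
print(lc50(doses, recent_clone))  # lower LC50 = greater acute sensitivity
```

In this invented example, the clone hatched from high-metal sediments has the lower LC50 – the direction of change observed in the study.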

Extracting metals from lake sediments using hot block acid digestion. Photo: Sara Demars
In closing, I’ve come away with two key points from this study. First, maladaptation appears to be fairly common, but our theoretical understanding of why it happens is pretty limited. This leaves us trying to explain seemingly counterintuitive results with a bit of hand-waving and throwing around terms like “genetic drift”, “trade-offs”, and “dispersal rates”. As evolutionary biologists, we need to do more to understand what drivers may lead to maladaptation, to improve our ability to both explain and predict evolutionary trends. Second, if species as different as Daphnia, barnacles, and wood frogs are becoming maladapted to pollution, we should think critically about the risk associated with multi-generational pollution exposure. How common is maladaptation to pollution exposure, and how does it affect the ability of organisms to adapt to other stressors? In what contexts might we expect to see adaptation vs. maladaptation to a contaminant? Could pollution exposure be having long-term damaging impacts on human populations? Those who oppose pollution regulation focus on the financial costs today, but the cost of inadequate pollution control for humans and other species could be much greater over the long term.

Daphnia resting egg cases from one of the study lakes. Photo: Eric Lazo-Wasem.

Friday, February 24, 2017

A Tale of Two Thousand Cities - by Charles Darwin

The number of cities in the world depends on how you count, but it’s a big number. Brilliant Maps says more than 4,000 cities have more than 100,000 inhabitants. The UN says 1,692 cities have more than 300,000 inhabitants and 512 cities have at least 1,000,000 inhabitants (totalling 23% of the world’s population). And, hey, another source helpfully narrows the number of cities and towns down to somewhere between 600,000 and (apparently) 4,784,754,000. No matter how you slice it, I am certain there are at least 2,000 major urban areas with lots of people in them.

"4,037 cities in the world that have over 100,000 people" SOURCE

Cities change everything for the organisms that live in them. Temperature changes. Noise changes. Available habitat changes. Prey changes. Predators change. Food changes. Pollution. Eutrophication. Invasive species. For years, the temptation was simply to write these areas off from the perspective of biodiversity and nature; but – over many years – a shift has occurred to establish the vigorous field of “urban ecology”. The idea is that cities are ecosystems too, and we should manage them and their biodiversity as such. And where ecology goes, evolution follows. That is, any sort of environmental change is expected to impose selection on the organisms that remain in that environment, which should lead to evolutionary adaptation to urban environments. This post is about how urbanization dramatically shapes the evolution of many species. I might have called it “Darwin Comes To Town” but Menno Schilthuizen’s forthcoming book has already appropriated that wonderful title.

The past few decades saw a smattering of studies demonstrating evolution in response to urban environments. Byrne et al. (1999) showed that a new form (species?) of mosquito had evolved in the London Underground. Cheptou et al. (2008) showed that plants evolved reduced dispersal in cities because dispersers were likely to end up in the inhospitable “concrete matrix.” Following from these earlier, somewhat sparse demonstrations, studies of urban evolution have really heated up recently: cool new papers are coming out, books are being written, grants are under review, symposia are being organized, and working groups are being convened. Inspired by this recent enthusiasm, I want to highlight some of my own work in this area, some of the exciting new work that has come out this year, and some attempts to tie it all together through meta-analysis.

Darwin's finches of multiple species near Puerto Ayora pigging out on rice provided by humans.

My own foray into urban evolution started with coincidental discoveries in Darwin’s finches of Galapagos. Up to the 1970s, medium ground finches (Geospiza fortis) at Academy Bay, beside the small town of Puerto Ayora on Santa Cruz Island, were bimodal in beak size: many large or small birds with relatively few intermediates. By the time we started working there in the 2000s, Puerto Ayora had grown dramatically, and the collection of new beak size data did not reveal the same bimodality as in the past. Yet at the same time, we uncovered bimodality at a site (El Garrapatero) well removed from the town where finches were not exposed to urban conditions. Compiling data from 1964 to 2005, we confirmed that beak size bimodality was lost in the finches living in and around Puerto Ayora, coincident with the dramatic human population increase. We then showed in later work that this collapse of diversity was associated with a degradation of the diet differences that normally differentiate the species. In short, all the finches are now feeding on human foods, which has removed the selection pressures formerly favoring diversification in this group. Indeed, additional work we currently have in review shows that urban finches are actively attracted to humans and their foods, whereas finches outside of the city are not.

Darwin's finch beak size distributions, with the arrow showing situations tending toward bimodality.

Acorn ant colonies are entirely contained within acorns, which is pretty darn cool – and makes for a wonderful experimental system. One can pick up an acorn and move it to a new site, or to the lab, and thereby test for thermal tolerances and local adaptation. And – conveniently for the question at hand – oak trees producing lots of acorns are found both inside and outside cities. Sarah Diamond and colleagues tested whether urban acorn ants had different temperature tolerances than rural ants. Consistent with the “urban heat island” effect (temperatures are higher inside cities than outside), the authors found that city acorn ants had higher thermal tolerances, and that this difference could be attributed to a complementary combination of plasticity (warmer rearing temperatures increased thermal tolerance) and genetic differences (city ants had higher tolerances for a given rearing temperature). But the temperature effects of cities might not always be so straightforward.

From Diamond et al. (2017)

One of the most ubiquitous plants in urban environments is clover – as a kid, I spent many hours searching for four-leafed versions. Clover is also abundant outside cities, and so might be a good model for understanding how evolution proceeds in response to urban conditions. Marc Johnson, Ken Thompson, and colleagues hypothesized that the urban heat island effect should lead to the evolution of reduced freeze tolerance in clover, which is controlled by a known genetic polymorphism for hydrogen cyanide. Surprisingly, they found exactly the opposite – freeze-tolerant genotypes were more common inside Toronto than outside. The same result was obtained for New York and for Boston, whereas no pattern was evident for Montreal. After a long trip down the rabbit hole, the authors showed that, because snow cover is less common inside cities than outside them, some cities are actually “urban cold islands” in winter that favor the evolution of increased – rather than decreased – cold tolerance in plants. (Montreal has so much snow both in and out of the city that it doesn’t matter.)

The use of multiple urban–nonurban gradients, as above, allows greater insight than a single gradient alone. Also this year, Liam Revell, Kristin Winchell, and their collaborators studied Anolis lizards on Puerto Rico, comparing those in three cities to those just outside the cities. In forests, these lizards are commonly found on branches that can be quite narrow, whereas in cities they tend to occur on the much broader substrates of walls. Previous work showed that hindlimb length tends to evolve according to substrate size – being longer on broader substrates. That is just what the authors found here: city lizards have longer legs, and common-garden rearing confirmed that at least some of this difference is genetic.


The above examples are just a few studies from this year. Many other studies are also demonstrating trait responses to urbanization, although, in some cases, it isn’t yet clear if the change is genetically based. City birds sing different songs, appear smarter, have different behaviors and stress responses, sometimes have different clutch sizes, and so on. City mice differ in key genes that might reflect adaptation, Daphnia evolve to be smaller in urban ponds, and so on. These wonderful examples of phenotypic changes (at least some evolutionary) in urban environments raise the question: are they exceptional? Humans influence evolution in all sorts of contexts apart from cities (hunting/harvesting, fragmentation, climate change, pollution, eutrophication, invasive species, etc.), as we recently reviewed in a special issue of PTRSB. Are urban environments any different, such as by driving faster rates of change than in other contexts?

Marina Alberti has led the recent charge in reviewing work on urban evolution, and she contacted me with an idea to use our database of rates of phenotypic change to ask quantitatively whether changes were greater in cities than in “natural” or other human-disturbance contexts. The same database had previously been used to show that – among other things – human disturbances accelerate rates of change, that the most dramatic effects are evident when humans act as predators, and that evolution is not exceptionally rapid in the context of invasive species. Georeferencing all the observations in this database and linking them to urbanization estimates, the study – a collaboration among many people – showed that adding information on urbanization substantially improved the ability to predict rates of change, a number of which are confirmed to be genetically based. I speculate that the main reason is that urbanization is associated with many forms of environmental change occurring all together (a subset are listed at the outset of this post), which should impose particularly strong and diverse selection on the organisms that persist.

Locations of rate data used in our analysis.

The next time you walk through a city, take a look past the steel, glass, and concrete, to see the plants and animals that live there. (And, of course, to not see all the microbes.) Each of these organisms is experiencing selective pressures that simply didn’t exist in most places until relatively recently. Selection is the engine of evolution and – indeed – many of these organisms have evolved in ways that better suit them to urban conditions. Indeed, some of those organisms might not exist in cities were it not for adaptive evolution keeping pace with increasing urbanization.

Urban evolution is the new hot Broadway (and off-Broadway) play in the evolutionary – and eco-evolutionary – canon. See it now.

Here are some Darwin's finches pigging out in the Baltra Airport, Galapagos.

Saturday, February 11, 2017

On Sabbatical

Sabbaticals might seem a strange thing to students, administrators, politicians, the general public, and – well – everyone who doesn’t take them. A common perception is that professors who take a sabbatical are “taking a year off” – and certainly that sometimes happens. As a result of perceptions such as this, some countries don’t allow paid sabbaticals, some states within countries don’t allow paid sabbaticals, and some particular universities don’t allow paid sabbaticals. In many other cases, only partial support is provided or the time between sabbaticals increases beyond the normal every-7th-year. In this post, I make the case for fully paid sabbaticals every 7th year as the greatest benefit to everyone.

About the above: I started my Eco-Evolutionary Dynamics book on my first sabbatical and finished it on my second sabbatical! Only sabbaticals made it possible.

Teaching (and service) improves

Most people who do not attend university – and even many people at universities – think that what professors are for is teaching (and various committee-style “services”). Certainly, most professors do a lot of teaching, which is how most students know them. So, if the role of professors is to teach, and they don’t teach on sabbatical, then they aren’t doing their job on sabbatical – so they shouldn’t be paid. This logic is precisely why legislators in some countries and states forbid paid sabbaticals. Professors have other important jobs besides teaching and service – and those other jobs (research!) benefit dramatically from sabbaticals. However, I first want to make the point that even teaching benefits from sabbaticals. The main reason is that: “The biggest thing for the professors is they get the chance to refresh themselves and to escape. They come back … invigorated.”

Teaching the same course year after year after year (or even different courses year after year after year) can whittle away at enthusiasm and the motivation to make major improvements. A year away can completely re-invigorate a professor’s motivation to teach, teach often, and teach well. (Part of this motivation comes from the guilt a professor feels when his/her colleagues have to teach those courses for a year.) From my own experience, I definitely feel this benefit is critical. Just this fall – right after my sabbatical – I taught three courses: my graduate class in Advanced Evolutionary Ecology, an undergrad class in Evolution, and our Introductory Biology class. I also took over coordination of the last of these and gave guest lectures in a number of other classes. Teaching was exciting again – fun again – motivating again. I wanted to do new things, exciting things, more things. This sort of excitement and motivation really improves with a year away from teaching.

Importantly, classes rarely suffer from sabbaticals in the sense that most of the classes are taught anyway – just by other professors. Hence, the long-term benefit to teaching does not come with any major short-term costs. Sabbaticals are good for teaching!

Research improves

The primary thing that many professors do is research. In fact, research at many universities is what professors are supposed to spend most of their time doing. This is critical. Universities are not just about the transfer of information and ideas from experts (professors) to trainees (students), they are just as much about the generation of new ideas and new knowledge. Moreover, this generation of knowledge benefits the transfer of knowledge because students respond much more strongly to professors who are speaking from their own experience – and often injecting examples from their own work. And then undergraduate (and graduate) students can become involved in the research and thereby have real “hands-on” training. In my lectures, I specifically emphasize research conducted by McGill undergraduates who were sitting in the same seats as the current crop of students in the class. Research benefits teaching!

Sabbaticals have a HUGE effect on research because they afford the time and motivation to learn new methods, write new grants, publish that backlog of papers, do intensive field or lab work, etc. Some professors travel to places where they can get training in new technologies. Some professors travel to places where they can be close to their field work, or their collaborators, or important infrastructure. Some professors remain local and focus on publishing papers. On sabbatical, professors have the time to think about science, do science, write science, learn science. Sabbaticals are critically important for research success, particularly “taking it to the next level.”

Apparently not everyone (or every study) finds that average research productivity goes up after sabbaticals. This doesn’t mesh with my experience. Some years ago, Keith Crandall was telling me a story about how he was fighting to convince the administration of a university of the value of sabbaticals. Among other things, he showed a graph of his publication rate in relation to the timing of his sabbaticals. When preparing this post, I asked Keith about the graph, and he was able to recreate it from Web of Science – showing big jumps in publication productivity with each sabbatical.

Keith: thanks for the idea and the graph!

I did the same calculations for myself and found the same thing – big jumps in productivity with each sabbatical. Beyond benefits accruing to the professor and the people influenced by his/her research, universities are often ranked based largely on professor research productivity – and these rankings can have major consequences for funding, recruitment, and continued success of a university.

As an aside, you will see another message in the graph – starting a faculty position is often coincident with a big drop in productivity. For all you new profs out there worried about your slow start, take heart, it is only temporary. It takes time to build up a lab and a research program – and this is the case for EVERYONE.

Sabbaticals rule

In summary, sabbaticals are good for everyone involved. Ok, fine, a politician might say, but “we don’t need to pay the full salary – go out and get some yourself.” To those people, I would say: “Sabbaticals when you travel are extremely expensive, particularly if you have a family.” Without full pay, professors are much less likely to go to new places – and travel is where many of the benefits come from. (Of course, a great sabbatical can also be had while staying in the same location.) My own university provides full support for a sabbatical every 7th year (or 6 months of support after every 3 years) – THANKS MCGILL – KEEP IT UP. However, even then, I lose money. The only way I can make it work is that I can stay almost for free with family in California and, most recently, the wonderful Miller Institute for Basic Research helped fund my sabbatical at UC Berkeley.

So, please everyone, from someone who has now had two sabbaticals, keep full support for sabbaticals every 7th year. Everyone wins – except those countries, states, and universities who don’t have them. 


To be honest, some graduate students might not benefit so much from their professor going away on sabbatical. Physical proximity between a professor and his/her students is more conducive to thesis progress than is Skype. Of course, Skype, joint field work, and visits can help minimize the cost to these students. Personally, I need to be better in this area on my next sabbatical.

Friday, February 3, 2017

Yes, humans influence evolution. Yes, that has consequences for humans!

Some might think it would be empowering to earn the moniker of "world's greatest evolutionary force" where it would seem some superhero can, Superman style, rapidly make something "evolve". Like tossing a common ancestor into a phone booth and out pops all the species of Darwin's Finches. Whelp, the reality is that this might not be such a great thing. What if this greatest evolutionary force might be causing detrimental things to happen such as biodiversity loss? What if this greatest evolutionary force is affecting human society?

Well, what might this greatest evolutionary force be? For better or for worse, it's humans. For most of us living in our privileged world with a roof over our heads and food in our stomachs, it is easy to become ensconced in our little bubble of consumables and disposables while hiding behind some electronic device, and not think about how we, humans, are affecting evolutionary processes. For example, when something is domesticated for human consumption, be it crops or pets, what happens when those domesticated individuals intermix with wild individuals? What happens when we put antibiotics into the wild? What happens when we want to put taxidermy heads on our walls? What happens when we move something from one place to another, intentionally or by accident (does it matter?)? So, perhaps it's time to come out from behind your screen to think about these questions.

All of these questions are important, but it is not enough to just ask how we are influencing evolution – we must also ask what the consequences are. How is this feeding back on to humans and our societies? Well, working with Andrew Hendry from McGill University and Erik Svensson from Lund University, we set out to assemble a special issue of Philosophical Transactions of the Royal Society B focusing on just this question. We came up with the unimaginative, but descriptive, title of "Human influences on evolution, and the ecological and societal consequences".

We structured this issue in a slightly different way, treating each context of human influence as a 'topic'. These topics include domestication, habitat fragmentation, hunting, urbanization, medicine, disease, and more. For most topics, we opted to have two manuscripts: a review-type article focusing on a particular context, how it affects evolution, and, in turn, how this might affect humans; and an empirical paper that set out to test some of the ideas laid out in the review.

We considered everything within two theoretical frameworks. The first is the phenotypic adaptive landscape, in which a three-dimensional surface is pictured with peaks representing high-fitness phenotypes and valleys representing low-fitness phenotypes. Selection then acts to push a population up an adaptive peak. If that landscape is altered (say, by humans introducing a novel predator), then the landscape, and thus selection, will shift. The second framework is eco-evolutionary dynamics, which considers how that change in the distribution of phenotypes would affect the ecology of the population, including changes at the community and ecosystem levels.

As you can imagine, trying to understand how everything is connected can be rather confusing, so we modified the "traditional" eco-evolutionary framework to incorporate all the different parts together.

Using this, we then set out to predict which human influences might have the strongest effects on evolutionary and ecological processes. By no means is what we've done comprehensive; it's merely a tiny stepping stone toward fully comprehending the impacts that we humans are having on evolution and how those impacts are feeding back to human society. One reason that making predictions is so difficult is that evolution can be influenced by a myriad of factors. For instance, how invasive organisms will respond, as well as how native populations will respond, depends on so many biotic and abiotic interactions that it's extremely difficult to predict whether an invasion will succeed and, if so, what its consequences will be and how those will subsequently affect human populations.

Let's take a look at one of our thoughts about human influences on evolutionary processes. Depending on the context, specific components can affect selection itself, or other components of evolution such as standing genetic variation – or, as we put it, evolutionary potential. In some contexts, a strong effect is quite obvious: hunting and harvesting are usually size-specific, which will result in evolutionary changes in size. However, hunting cannot persist forever at high rates, as the population would eventually go extinct. Perhaps more important are our efforts to control pests or perceived "enemies", because those efforts result in increased tolerance and resistance, which then means we need stronger/better ways to control the enemies, which will then again evolve increased tolerance. This perpetual "arms race" could, and has, led to things like superbugs that cannot be controlled with any current medicine.

What about the potential ecological consequences of this human-induced evolutionary change? If we again consider hunting and harvesting, perhaps we are reducing the population size of, or even removing, a keystone species. If targeted species have a strong role in community structure, their removal will have obvious cascading effects. For example, otters are essential in maintaining the giant kelp ecosystem in the Pacific Ocean, and the loss of otters in this ecosystem, perhaps due to pollution, then cascades into human society as important fisheries and a carbon sequestration source are subsequently lost.
I just wanted an excuse to post a photo of cute otters. 

In our introductory article, we actually ask a total of eight questions relating to human influences on evolution and the consequences they have for human society. We know you will have opinions and will agree or disagree with us, so we would love to hear from you in the comments. In a nutshell, we hope this issue makes you realize just how much humans are influencing evolution, and that these human-induced shifts in evolution have societal consequences. Unfortunately, a large number of those consequences are detrimental to humans. And we're not alone on this planet, so those negative consequences are also affecting everything else!!

Link to special issue:
Link to our introductory article:

Friday, January 27, 2017

ALTERNATIVE FACTS: Scientists got 'em too!

“WHAT”, I hear you saying, “That is precisely the point. Science has the facts and the facts are the facts – no alternatives. Anyone who makes statements not supported by the data is presenting fiction, not alternative facts.”

Seen in an Ottawa book store.
Yet maybe we scientists shouldn't be so smug about our FACTS. As the editor and reviewer of countless papers, I am struck by how often two (or more) fully qualified, insightful, and fair scientists can make opposite assertions of fact based on the same data and analysis. Most obviously, data are often interpreted in the light of preconceived expectations (we are all Bayesians) and, so, faced with a series of mixed results, scientists consciously or unconsciously emphasize the subset of results that support their expectations, largely ignoring or discounting or explaining away the rest.  “Ah”, I hear you saying “Those are interpretations, not facts. Facts belong in the results, interpretations are in the discussion.”

I am not so sure. Everyone knows that the same data can be analyzed in multiple ways, each yielding different outcomes. Hence, the statements and statistics in the results are interpretations too, not facts. “Ok, then, the DATA are the facts.” Yet data are collected with error and – sometimes – are biased. Data collected in different ways at different times in different years can yield different outcomes. Different measures of central tendency (mean, median, etc.) yield different numbers. Different data transformations yield different outcomes. Given that data have no meaning without interpretation, even in the narrowest possible sense of a central tendency, data are not facts either.

Alternative facts?
“Ah, but this is what measures of uncertainty are for.” Indeed, standard errors, confidence intervals, credibility intervals, and so on can – and should – be calculated but, again, they have no meaning without interpretation. Any time you look at a result and a credibility interval, you might say, “I therefore conclude the evidence more strongly supports option 1 than option 2.” Sure, but some other scientist can come along and say, “I require a higher degree of confidence, so my conclusion is different.” Or: “I would use a different measure of uncertainty.” Or: “You did not include all sources of uncertainty.” Or: “You didn’t consider option 3.” And, of course, we ultimately need to accept an outcome one way or the other or we will never act on the information we have collected.

So, are there any real facts out there, any “objective truths?” Yes, of course, there are many of them, but many, perhaps most, of those are – ultimately – unknowable. Instead, they are “latent variables” about which we try to make inferences based on our imperfect surrogate measures.

Andy Gonzalez and I with our alternative facts
Alternative facts, which I have just argued are pervasive in science, depend on the level of inference. “Evolution isn’t a theory, it is a fact.” Sure, evolution as a process is a fact. However, evolution in any given case (a particular population facing a particular environmental change) might not be. Don’t believe me? Just look at the knots people studying adaptation to climate change or fisheries have tied themselves into trying to say that evolution has or has not occurred in any given instance. “Look at allele frequency changes,” you might say. But no one cares if those are changing (and they ALWAYS are), we care about the evolution of the traits. “Ok, then, measure the genes influencing those traits.” Yes, that works but what matters is the effect size: how much of the change in phenotype is the result of evolution? That is extremely hard to measure. Climate change is a fact, certainly, but stating that it is a fact is not informative or helpful. We need to know the effect size (rate of change) and its consequences – and then it depends on the place, the time, and the measure taken. Plenty of room for “alternative facts” in this context.

Alternative facts?
So, maybe we scientists shouldn’t be so smug. If we are ever going to state conclusions as facts (as opposed to facts being unknowable), then room exists for alternative “facts” in the same sense. Otherwise, we wouldn’t have debates and arguments. Otherwise, we wouldn’t need to replicate our studies, and those of other scientists. Otherwise, we wouldn’t have retractions.

The difference between science and propaganda is that scientists are willing to give up our “facts” when enough evidence suggests they aren’t “facts” after all. 


This post was motivated by reading scientific papers where authors test a hypothesis, obtain results that are mixed with respect to that hypothesis, emphasize only those results supporting the hypothesis, and then conclude in favor of the hypothesis. Presumably this sequence of events usually transpires without INTENT to deceive – and, regardless, the consequences are rarely dire. The same cannot be said for the current US government.

Wednesday, January 18, 2017

Eco-Evolutionary Dynamics Spotlight Session at Evolution 2017

Want to give a talk in our spotlight session on Eco-Evolutionary Dynamics at the joint SSE-ASN-SSB Evolution 2017 meeting in Portland, Oregon, June 23-27?

To quote the organizers: "A Spotlight Session is a focused, 75 min. session on a specific topic. Spotlight Session talks are solicited in advance, unlike regular sessions that are assembled, often imperfectly, from the total pool of contributed talks. Each Spotlight Session is anchored by three leading experts (each giving a 14 min. talk) and rounded out with six selected speakers (each giving a 5 min. ‘lightning' talk) pursuing exciting projects in the same field. By having a focused session with high-profile researchers on a specific topic, there will be high value in presenting even a 5 min. talk as the room is likely to contain the desired target audience as well as other relevant and well-known speakers in the field. The 14 min. talks are invited by the organizer, while the 5 min. talks are selected via an open application process also run by the organizer." Giving a talk in a spotlight session does NOT preclude also giving a regular talk at the meeting. More information is here.

For our Eco-Evolutionary Dynamics spotlight session, the "leading experts" giving 14 minute talks will be myself, Fanie Pelletier, and Joe Bailey. We are now seeking contributions from six "selected speakers" to round out our session.

Please send me an email with your proposed title and a short abstract by Feb. 10. We will then quickly review the talks and tender an invitation to six of them. Our hope is to highlight exciting new research on interactions between ecology and evolution. While we will consider all contributions, we particularly encourage young investigators (students, postdocs, new profs) and especially those developing new systems for studying eco-evolutionary dynamics.

Thanks to Matt Walsh for encouraging me to organize this spotlight session. 

If you want to see what Eco-Evolutionary Dynamics can do for you, check out #PeopleWhoFellAsleepReadingMyBook

Saturday, January 14, 2017

Blinded by the skills.

OK, I’m just gonna come right out and say it: I ain’t got no skills. I can’t code in R. I can’t run a PCR. I can’t do a Bayesian analysis. I can’t develop novel symbolic math. I can’t implement computer simulations. I don't have a clue how to do bioinformatics. I simply can’t teach you these things.

So why would anyone want to work with me as a graduate supervisor? After all, that’s what you go to graduate school for, right – SKILLS in all capitals. You want to be an R-hero. You want to be a genomics whiz. You want to build the best individual-based simulations. You want these things so you can get a job, right? So clearly your supervisor should be teaching you these skills, yeah?

I most definitely cannot teach you how to code this Christmas tree in R. Fortunately, you can find it here

I will admit that sometimes I feel a bit of angst about my lack of hard skills. Students want to analyze their data in a particular way and I can’t tell them how. “I am sure you can do that in R,” I say. Or they want to do genomic analysis and I say “Well, I have some great collaborators.” I can’t check their code. I can’t check their lab work. I can’t check their math.

Given this angst, I could take either of two approaches. I could get off my ass and take the time and effort to learn some skills, damn it. Alternatively, I might find a way to justify my lack of skills. I have decided to take the latter route.

I think your graduate supervisor should be helping you in ways that you can’t get help for otherwise. Hence, my new catch-phrase is: “If there is an online tutorial for it, then I won't be teaching it to you.” Or, I might as well say: “If a technician can teach it to you, I won't be.” Now it might seem that I am either trying to get out of doing the hard stuff or that I consider myself above such things. Neither is the case - as evidenced by the above-noted angst. Instead, I think that the skills I can – and should be – offering to my graduate students are those one can’t find in an online tutorial and that can’t be taught by a technician.

Check out these crazy-ass impressive equations from my 2001 Am Nat paper. (My coauthor Troy Day figured them out.) 
I should be helping students to come up with interesting questions. I should be putting them in contact with people who have real skills. I should be helping them make connections and forge collaborations. I should be helping them write their proposals and their papers. I should be giving them – or helping them get for themselves – the resources they need to do their work. I should be challenging them, encouraging them, pushing them in new directions with new ideas. These are the things that can’t be found in an online tutorial; the things that a technician can’t teach them. In short, I should be providing added value beyond what they can find elsewhere.

Hey, in 1992, my genetic skills weren't bad - although, to be honest, my allozyme gels usually weren't this pretty
You might say I could, and should, do both – teach hard skills and do all the extra “soft” stuff just noted. Indeed, some of my friends and colleagues are outstanding at teaching hard skills and also at the “soft” skills I am touting. However, certainly for me personally, and – I expect – even for my polymath colleagues, there is a trade-off between teaching hard skills and doing the soft stuff. If a supervisor is an R whiz, then the student will sit back and watch (and learn) the R skills. The supervisor will have less time for the other aspects of supervision, the student will rely on the supervisor for the skills, the student might not take the initiative to learn the skills on their own, and the student might not experience the confidence-building boost of “figuring it out for themselves.”

Beyond my personal shortcomings when it comes to hard skills, it is important to recognize that graduate school is not about learning skills. Yes, hard skills come in handy and are often necessary. Certainly, skills look good on the CV – as long as they are reflected in publications. But, really, graduate school is not about technical aspects, it is about ideas (and, yes, data). PhDs aren’t (or shouldn’t be anyway) about learning bioinformatics or statistics – those are things that happen along the way, they aren’t the things that make you a Doctor of Philosophy. Most research universities don’t hire people with skills, they hire people with ideas. (I realize there are exceptions here – but that is another post.)

So, don’t come to me for skills. Don't come to any supervisor for skills. Come to us for ideas and enthusiasm. Come to us for arguments and challenges. Come to us for big ideas, stupid ideas, crazy ideas, and even a few good ideas. Come to us expecting us to expect you to learn your own skills – and to help point you to the place you can find them. We will tell you who has the skills. You will learn them on your own. 

We supervisors will help you with the things you can’t find on your own.



1. I have mad field-work skills - come to me for those!
2. Max respect to my colleagues who do actually have real skills.
3. Sometimes skills ARE the research/ideas, such as development of new methodologies.
4. Thanks to Fred Guichard (with Steph Weber and Simon Reader) for the "blinded by the skills" title - suggested during our weekly "periodic table" at McGill.

OK, so I do have a few skills I can actually teach my students. I can catch guppies better than most.