Monday, December 16, 2019

Darwin’s finches adapting to human influences

This post was originally published on the British Ornithologists' Union blog here:
https://www.bou.org.uk/blog-gotanda-antipredator-behaviour-darwins-finches/


Can Darwin’s finches adapt to invasive predators and urbanization?

Dr. Kiyoko Gotanda
University of Cambridge

Gotanda, K.M. Human influences on antipredator behaviour in Darwin’s finches. J. Anim. Ecol.
A small ground finch in the Galapagos

"All of [the terrestrial birds] are often approached sufficiently near to be killed with a switch, and sometimes, as I myself tried, with a cap or a hat." -Charles Darwin in "The Voyage of the Beagle"

The Galapagos Islands are renowned for their biodiversity and large numbers of endemic species, including Darwin's finches. When Charles Darwin visited the Galapagos Islands back in 1835, he noted that land birds could be approached near enough to toss a hat over them (Darwin 1860)! These and other Galapagos organisms evolved for millions of years in the absence of humans and mammalian predators, and thus developed a remarkable naïveté toward humans and their associated animals.

Humans now have a permanent presence on the Galapagos, and with that comes a variety of influences. A major contemporary threat is the introduction of non-native predators (Fritts and Rodda 1998; Low 2002), which is often correlated with extinctions on islands (Clavero and García-Berthou 2005; Sax and Gaines 2008; Clavero et al. 2009). House cats (Felis silvestris catus) are particularly problematic because they can decimate bird populations (Lever 1994; Nogales et al. 2004; Wiedenfeld and Jiménez-Uzcátegui 2008; Loss et al. 2013). On the Galapagos, introduced cats (Phillips et al. 2012) opportunistically prey on land bird species, including Darwin's finches, posing a major threat to the biodiversity of the islands (Stone et al. 1994; Wiedenfeld and Jiménez-Uzcátegui 2008). The Galapagos National Park has taken extreme conservation measures and has successfully eradicated cats and rats on some of the islands (Phillips et al. 2005, 2012; Carrión et al. 2008; Harper and Carrión 2011).

The second human influence is urbanization: humans have established permanent populations, creating urban areas. Increasing urbanization can have a strong effect on ecological and evolutionary processes (Alberti 2015; Alberti et al. 2017; Hendry et al. 2017; Johnson and Munshi-South 2017). The human population of the Galapagos Islands has grown rapidly from ~1,000 permanent residents to ~25,000 in just 40 years, presently distributed across four towns, one on each of four different islands (UNESCO 2010; Guerrero et al. 2017). We are already seeing the impact of human habitats on behaviour in Darwin's finches. Urban finches in the largest town on the Galapagos, Puerto Ayora (permanent human population ~12,000), have shifted their behaviour and now exploit human foods, which has changed the finches' ecological niche (De León et al. 2018).
Figure 1. Map of the Galapagos Islands showing islands that vary in the presence, absence, or eradication of invasive predators. Islands in orange have invasive predators, islands in green are pristine, and islands in purple have had invasive predators eradicated. The islands with invasive predators are also the islands with permanent human populations.

So, how have Darwin's finches adapted to these different human influences? I wanted to know how the finches might be adapting to the presence, absence, or eradication of invasive mammalian predators, and to urbanization. Specifically, I was interested in their antipredator behaviour. To study this, I focused on flight initiation distance (FID), the distance at which an individual flees an approaching predator, a commonly used metric of fear.

An invasive house cat in the Galapagos

On islands that have invasive predators, the finches have adapted by increasing their antipredator behaviour. What's most interesting, though, is that on islands where invasive predators had been successfully eradicated either 8 or 13 years prior to my data collection, the increased behaviour has been maintained. This suggests that the increased antipredator behaviour could be an evolved adaptation. However, it could also be due to other processes, such as learned behaviour or cultural transmission of the behaviour across generations. Either way, invasive predators can have a lasting effect on antipredator behaviour in Darwin's finches.

I also compared antipredator behaviour in urban and non-urban populations of Darwin's finches on all four islands that have permanent human populations. I found that on the three islands with the largest human populations, antipredator behaviour was significantly lower in urban finches than in non-urban finches, likely due to habituation. Furthermore, urban antipredator behaviour was even lower than what I found on pristine islands with no history of human influence.
 
Figure 2. Flight initiation distance in finches in relation to the presence, absence, or eradication of invasive predators and between urban and non-urban populations of Darwin’s finches.

Thus, my study shows that Darwin's finches are adapting their antipredator behaviour to different human influences. These findings can help us better understand how the presence and subsequent removal of predators can have lasting effects on antipredator behaviour, and how urbanization, and the likely habituation of Darwin's finches to the presence of humans and other large stimuli, can strongly counteract any effects of invasive predators.


References
Alberti, M. 2015. Eco-evolutionary dynamics in an urbanizing planet. Trends in Ecology & Evolution 30:114–126.
Alberti, M., J. Marzluff, and V. M. Hunt. 2017. Urban driven phenotypic changes: empirical observations and theoretical implications for eco-evolutionary feedback. Philosophical Transactions of the Royal Society B 372:20160029.
Carrión, V., C. Sevilla, and W. Tapia. 2008. Management of introduced animals in Galapagos. Galapagos Research 65:46–48.
Clavero, M., L. Brotons, P. Pons, and D. Sol. 2009. Prominent role of invasive species in avian biodiversity loss. Biological Conservation 142:2043–2049.
Clavero, M., and E. García-Berthou. 2005. Invasive species are a leading cause of animal extinctions. Trends in Ecology & Evolution 20:110.
Darwin, C. 1860. The Voyage of the Beagle. Doubleday and Co., London, UK.
De León, L. F., D. M. T. Sharpe, K. M. Gotanda, J. A. M. Raeymaekers, J. A. Chaves, A. P. Hendry, and J. Podos. 2018. Urbanization erodes niche segregation in Darwin's finches. Evolutionary Applications 12:1329–1343.
Fritts, T. H., and G. H. Rodda. 1998. The role of introduced species in the degradation of island ecosystems: a case history of Guam. Annual Review of Ecology and Systematics 29:113–140.
Guerrero, J. G., R. Castillo, J. Menéndez, M. Nabernegg, L. Naranjo, and M. Paredes. 2017. Memoria Estadística Galápagos.
Harper, G. A., and V. Carrión. 2011. Introduced rodents in the Galapagos: colonisation, removal and the future. Pages 63–66 in C. R. Veitch, M. N. Clout, and D. R. Towns, eds. Island Invasives: Eradication and Management. IUCN, Gland, Switzerland.
Hendry, A. P., K. M. Gotanda, and E. I. Svensson. 2017. Human influences on evolution, and the ecological and societal consequences. Philosophical Transactions of the Royal Society B 372:20160028.
Johnson, M. T. J., and J. Munshi-South. 2017. Evolution of life in urban environments. Science 358:eaam8237.
Lever, C. 1994. Naturalized Animals: The Ecology of Successfully Introduced Species. Poyser Natural History, London.
Loss, S. R., T. Will, and P. P. Marra. 2013. The impact of free-ranging domestic cats on wildlife of the United States. Nature Communications 4:2961.
Low, T. 2002. Feral Future: The Untold Story of Australia’s Exotic Invaders. University of Chicago Press, Chicago.
Nogales, M., A. Martín, B. R. Tershy, C. J. Donlan, D. Veitch, N. Puerta, B. Wood, et al. 2004. A review of feral cat eradication on islands. Conservation Biology 18:310–319.
Phillips, R. B., B. D. Cooke, K. Campbell, V. Carrion, C. Marquez, and H. L. Snell. 2005. Eradicating feral cats to protect Galapagos Land Iguanas: methods and strategies. Pacific Conservation Biology 11:257–267.
Phillips, R. B., D. A. Wiedenfeld, and H. L. Snell. 2012. Current status of alien vertebrates in the Galápagos Islands: invasion history, distribution, and potential impacts. Biological Invasions 14:461–480.
Sax, D. F., and S. D. Gaines. 2008. Species invasions and extinction: the future of native biodiversity on islands. Proceedings of the National Academy of Sciences 105:11490–11497.
Stone, P. A., H. L. Snell, and H. M. Snell. 1994. Behavioral diversity as biological diversity: introduced cats and lava lizard wariness. Conservation Biology 8:569–573.
UNESCO. 2010. Reactive Monitoring Mission Report.
Wiedenfeld, D. A., and G. Jiménez-Uzcátegui. 2008. Critical problems for bird conservation in the Galápagos Islands. Cotinga 29:22–27.

Thursday, December 5, 2019

Games Academics (Do Not) Play

By Andrew Hendry (this post, not the paper under discussion)


I read the paper under discussion with a strange mixture of agreement and annoyance. Agreement comes from the fact that citation metrics indeed are NOT a good way of measuring research quality. Annoyance comes from the fact that the paper represents a very cynical view of academia that is most definitely not in keeping with my own experience. In this post, I want to provide a counterpoint in three parts. First, I want to summarize the points of agreement that I have with the authors – especially in relation to journal impact factors. Second, I will argue that the "gaming" the authors suggest academics engage in is, in fact, extremely uncommon. Third, I will explain how evaluation procedures that do not involve citation metrics are also – indeed probably more so – subject to gaming by academics.

But before that, I need to comment on another extreme cynicism in the paper that sends a very bad and incorrect message to young academics. Specifically, the authors argue that academia is an extremely competitive profession, and that academics are constantly under pressure to "win the game" by somehow defeating their colleagues. I am not doubting that space in any given journal is limited and that jobs in academia are limited. However, I need to provide several less cynical counterpoints. First, space in journals overall is NOT limited, especially in this open access era. There are tons of journals out there that you can publish in – and many do not have any restriction on the number of papers accepted. So, while competition is stiff for some journals, it is not stiff for publication per se. (Yes, rejection is common, but there are plenty of other journals out there – see my post on Rejection and How to Deal with It.) Second, although competition for any given job is stiff, competition for a job in academia or research in general is much less so. For instance, a recent analysis showed that the number of advertised faculty positions in Ecology and Evolution in the USA was roughly the same in a given year as the number of PhD graduates in Ecology and Evolution in the USA. One reason for this (lack of) difference is that many of the jobs were not at institutions that grant PhDs. So, the issue isn't so much the availability of jobs overall, but rather how particular a job seeker is about location or university or whatever – see my post on How to Get a Faculty Position. (Please note that I realize many excellent reasons exist to be picky.) Moreover, many excellent research jobs exist outside of academia. Third, the authors seem to imply that getting tenure is hard. It isn't. Tenure rates are extremely high (>90%) at most universities.

In short, academia is NOT universally an exceptionally competitive endeavor (except for big grants and for big-shot awards) – rather, some individuals within academia, as in other endeavors, are extremely competitive. You do not need to be a cutthroat competitive asshole to have a rewarding career in academics and/or research. (See my post on Should We Cite Mean People?)

Now to the meat of my arguments – a few disclaimers will appear at the end.

Citation metrics are imperfect measures of research quality

The authors point out that some funding agencies in some countries, in essence, "do not count" papers published in journals with low impact factors. This is true. I have been on several editorial boards where the journal bounces back and forth across the impact factor = 5 threshold. Whenever it is over 5, submissions spike, especially from Asia. This is obviously complete nonsense. We (all of us) should be striving to publish our work in ethical journals that are appropriate for the work and that will reach the broadest possible audience. Sometimes these factors correlate with impact factor – sometimes they do not.

As an interesting aside into the nonsense of journal impact factors, the journal Ecology Letters used to have a ridiculously high impact factor that resulted from an error in impact factor calculation. Once that error was corrected, the impact factor decreased dramatically. The impact factor is still relatively high within the field of ecology – perhaps as a lasting legacy of this (un?)fortunate error early in the journal’s history.



The paper goes on to point out that this emphasis on impact factor is killing formerly important (often society-based) journals. This is true, and it is a damn shame. Importantly, however, an even bigger killer of these journals is the pay-to-publish open access journals, especially those that involve referral from more exclusive journals. The "lower tier" journals in a specialty are now in dire straits owing to falling submissions. Yet it is those journals that built our field and that deserve our work.

I also dispute the paper's argument that submissions serially work their way down the impact factor chain. That does certainly sometimes happen but, if one excludes the general journals (Science, Nature, PNAS), I often move up to "better" journals in my submissions – and this is often just as effective as moving "down" the journal pile. In the past year, one paper rejected from The American Naturalist was accepted right away at Ecology Letters, and another paper rejected from Ecology Letters was accepted right away at PNAS.

I also object to the paper's cynical view of reviews and meta-analyses. The authors seem to think these are written to game the citation system. I disagree. They are written to give a synthetic, comparative, and comprehensive view of a given topic. They are a fantastic way for early career researchers to have an important impact beyond just the very specific empirical contributions from their own study systems. Yes, these types of papers are heavily cited – see my post on What if All My Papers Were Reviews – but that reflects their utility in their intended function, not an effort to game the citation system. I highly recommend that early career researchers write these papers for these reasons, and also because they tend to be easier to publish and to attract more attention. (Attracting attention to your ideas is important if you want your research to resonate beyond the confines of your specific study system.)



Very few academics are "gaming" the system

The authors have a very cynical view of academics. They state that researchers try all kinds of tricks to increase their citation rates, including gratuitously adding their names to papers, adding authors who do not deserve to be included on a paper (perhaps the PI's trainees), and creating quid pro quo citation cabals, where the explicit or implicit agreement is "you cite me and I will cite you." I know this gaming does occasionally happen – but it is extremely rare or, at least, minor.


As one concern, the paper argues that many authors currently on papers do not belong there. As evidence, the authors point to the increasing number of authors on papers. No one is disputing this increase; but why is it happening? Some of it – as the authors note – is the result of needing more people with more types of complementary (but not duplicate) expertise for a given paper. This explanation is definitely a major part of the reason for increasing author numbers. For instance, modern statistical (or genomic, etc.) specialization often means that statistical or bioinformatic experts are added to papers when that is all they did for the paper.


However, I suspect that the majority of the increase in the number of authors on papers is more a reflection of how, in the past, deserving authors were EXCLUDED from authorship, whereas now they are more often included. For instance, a recent examination showed that women contributors to research tended to appear in the acknowledgments but not in the author list. I would argue that, now, we more often give credit where it is due. Consider the lead collectors of data in the field. Obviously, not every person who contributed to data collection in, for example, long-term field studies should be included on every paper resulting from those data. However, the key data collectors surely should be included – without them no one would have any data. Stated in a (partly) joking way: once data are collected by the field crew, any hack can write a paper from the data.


Self-citation is another perceived "gaming strategy" that is much discussed. Yes, some people cite their own work much more than others do, a topic about which I have written several posts. (For example, see my post on A Narcissist Index for Academics.) But it is also clear that self-citation has only a minimal direct influence on citation metrics. That is, self-citation obviously doesn't directly influence the number of papers, it has only a modest influence on total citations, and it has minimal influence on a researcher's h-index. (See my post on Self-Citation Revisited.) Both of these latter minor effects become increasingly weak as the author accumulates more and more papers.
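To see why, it helps to look at how the h-index is actually computed. Here is a minimal sketch in Python – with entirely hypothetical citation counts, not anyone's real record – showing that stripping a few self-citations per paper barely moves the h-index of an author with a reasonably deep stack of papers:

```python
# Minimal sketch with hypothetical citation counts: how much does
# removing self-citations change an h-index?

def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# A hypothetical author with 30 papers (totals include self-citations).
total = [120, 95, 80, 64, 50, 44, 40, 36, 33, 30,
         28, 25, 22, 20, 18, 16, 15, 14, 12, 11,
         10, 9, 8, 7, 6, 5, 4, 3, 2, 1]

# Now strip out, say, 3 self-citations per paper.
external = [max(c - 3, 0) for c in total]

print(h_index(total))     # 16
print(h_index(external))  # 15 – nearly unchanged
```

The papers that set the h-index typically sit well above the threshold, so shaving a few citations off each one rarely changes the answer – and, as noted above, the effect weakens further as the publication list grows.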

Beyond all of this, my personal experience is that PIs are extremely sensitive to these gaming issues and would be mortified to be involved in them – regardless of whether or not anyone else knows about it. Most PIs that I know actively seek to not over-cite their own work, to make sure that only deserving authors are on papers (but that no deserving authors are missing), to publish in society journals, and so on.

Other assessment methods are gamed – but more so

The authors argue that, for the above and other reasons, citation metrics should be down-weighted in evaluation for grants, positions, tenure, promotion, raises, and so on. Sure, fine, I agree that it would be great to be able to somehow objectively judge the “quality” of the work independent of these measures. So how does one do that?

Imagine first a committee of experts that reads all of the work (without knowing the journals or citation rates) of the people in a given "competition" and then tries to rank the candidates or proposals. Sounds good in theory, but – in practice – each reviewer is always at least somewhat biased toward what they think is high quality, which I can tell you from experience is highly variable among evaluators. From having been on many grant review panels, award committees, and so on, I can assure you that what one evaluator thinks is important and high quality is often not what another evaluator thinks is important and high quality. If you want a microcosm of this, just compare the two reviews on any one of your submitted papers – how often do they agree in their assessment of the importance and quality of your work?

This evaluator bias (remember, we are postulating that evaluators don't know impact factors or citations or h-indices) compounds at higher levels on increasingly interdisciplinary panels. I have served on a number of these, and I simply have no objective way to judge the relative importance and quality of research from physics, chemistry, math, engineering, and biology – and that is before getting into the social sciences and the humanities. The reason is that I only have direct knowledge of ecology and evolution. So, on these panels, I – like everyone – have to rely on the opinions of others. What does the expert in the room have to say about it? What do the letters of recommendation say? What did the chair of the department have to say? And so on. Of course, these opinions are extremely biased and written to make the work sound as important as possible – and, believe me, there is a major skill to gaming letters of recommendation. In short, letters and explanations and expert evaluations are much more susceptible to "gaming" than are quantitative metrics of impact.

So we are back to considering quantitative metrics again. The reality is that citation rates or h-indices or whatever are not measures of research QUALITY; they are measures of research volume and research popularity. As long as that is kept in mind, and the metrics are standardized for variation among disciplines and countries and so on, they are an important contributor to the evaluation of a researcher's performance. However, journal impact factors are complete nonsense.

Conclusion

Stop worrying about trying to game the system – everyone is definitely NOT "doing it." Study what you like, do good work, seek to publish it in journals where it will reach your audience, and stick with it. The truth will out.



A few disclaimers:

1. I know the first and last authors of the paper reasonably well and consider them friends.
2. I have a high h-index and thus benefit from the current system of emphasis on citation metrics. I note my citation metrics prominently in any grant or award or promotion competition. See my post on Should I Be Proud of My H-Index.

Thursday, November 21, 2019

The parable of Bob and Dr. Doom. Or, "When the seminar speaker craps on your idea"

By Dan Bolnick

Disclaimer: the following story may or may not have actually happened as I describe it, but it does happen; it has happened to my own students, students in my graduate program, and to me personally.


I recently met with a distressed graduate student, whose identity I will conceal. Let's call this person 'Bob'. A few months ago, Bob did what we encourage our graduate students to do: he signed up to meet with a visiting seminar speaker – someone really well-known, highly respected. Let's call the visitor 'Dr. Doom'.



Bob is a first-year PhD student, and is really excited about his research plans. He presents his research plan – on, let's say, the evolution of spontaneous combustion in corals – briefly to Dr. Doom. You know, the five-minute version of the 'elevator pitch'. Dr. Doom listens, asks some polite questions, then pronounces:

"Are you mad? This will never work! It is: [pick one or more of the following]
a) impossible to do
b) too risky
c) too expensive
d) sure to be biased by some uncontrollable variable
e) uninterpretable
f) uninteresting
How can your committee have possibly approved such a hare-brained scheme? Don't waste your time. Instead, you should do [thing Dr. Doom finds interesting]."


Bob, the first-year student, is of course shattered. Here's this famous biologist, a fountain of wisdom and knowledge, crapping on Bob's idea. Bob obsesses over this criticism for weeks and considers completely changing his research. He considers dropping out of graduate school and becoming a terrorist, or an energy company executive, or something equivalent. Finally, Bob came to me. This post is about my advice to Bob. And, to Dr. Doom, whoever you may be. (Note: Dr. Doom is quite possibly a very nice and well-meaning professor who wasn't watching their phrasing carefully, for lack of coffee going into their 9th meeting of the day.)

Point 1: Bob, you have spent the past 6 months obsessively thinking about your research: the theory, the hypotheses, the study system, and the methods. Maybe you have preliminary data, or perhaps not. No matter what, I can almost guarantee that even a few months into your studies, you have thought in more depth about your experiment than Dr. Doom has. Doom got a 5-minute elevator pitch, probably wasn't entirely paying attention the whole time (he has a grant proposal due in 10 minutes, after all), and leapt to some assumptions about your ideas. You have thought this through more than Doom has.

Point 2: If Dr. Doom says your hypothesis/prediction is surely false, and thus not worth testing, there is a chance that Dr. Doom is wrong. Let's be Bayesians for a second. You and Doom have different priors concerning your hypothesis. Dr. Doom knows a great deal, it is true. But that does not mean that Doom's priors are correct. You have chosen to work on a question whose answer is, I hope, not fully known. That's the point of doing it, right? So, in truth, neither you nor Doom knows the correct answer. As long as you have some good reason for your prior, proving Dr. Doom wrong will be all the more worthwhile, right?
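To make that Bayesian framing concrete, here is a minimal sketch in Python – with entirely hypothetical probabilities – of how the same supportive pilot result updates Bob's hopeful prior and Doom's skeptical prior:

```python
# Hypothetical numbers only: the same evidence updates different priors
# to different posteriors, so Doom's skepticism is not a verdict.

def posterior(prior, p_data_if_true=0.8, p_data_if_false=0.2):
    """Bayes' rule for a binary hypothesis after one observation."""
    p_data = prior * p_data_if_true + (1 - prior) * p_data_if_false
    return prior * p_data_if_true / p_data

bob_prior = 0.50   # Bob thinks his hypothesis is a coin flip
doom_prior = 0.05  # Doom is deeply skeptical

print(round(posterior(bob_prior), 2))   # 0.8
print(round(posterior(doom_prior), 2))  # 0.17
```

Neither posterior is the 'right' one; the point is that a skeptical prior is not data, and enough good data will eventually pull both priors toward the truth – which is exactly why the experiment is worth doing.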
Case study: In graduate school, Tom Schoener was on my dissertation committee. I have immense respect for Tom. Such enormous respect, in fact, that I think it's okay to name names here, because his skepticism really drove me to some important projects, so I owe him a great deal. You see, my PhD work was going to focus, in part, on how frequency-dependent selection acts on populations where different individuals eat different things from each other. Competition is thus more severe for common-diet individuals and less so for rare-diet individuals, generating disruptive selection that was the foundation for some models of speciation. So, I presented the idea and Tom said, "But Roughgarden, and Lister, and Taper, and Case all proved that this among-individual diet variation does not happen much. You are barking up the wrong tree; your basic premise that justifies your research is wrong." [I am paraphrasing here; it has been 20 years, after all.] I disagreed. So, I got together a group of fellow grad students and we reviewed the relevant literature together. The result was my most-cited paper, Bolnick et al. 2003 American Naturalist on individual specialization. The very next year, Tom put this paper on the reading list for the Davis Population Biology core course. The point is: the more sure your famous critic is that you are wrong, the more impactful it might be if you turn out to be right.

Point 3: If Dr. Doom says that you have a good question, but the methods are flawed, pay attention. Doom may or may not be right, but you certainly need to take a careful look and give your approach a rethink. That's not a reason to abandon the work, but it is a chance to maybe tweak and improve your protocol to avoid a possible mistake. Sometimes Doom has a point. But, a fixable one.
Case study: My PhD student Chad Brock wanted to study differences in male stickleback color between lakes. We hosted a certain Dr. Hend- er -Doom as a seminar speaker. Dr. H told Chad that male color was too plastic: if you trap males in minnow traps, their color has changed before you can pull them out of the water, and by the time you photograph them, the data have no meaning. If you euthanize them, they change color. If you co-house them with other males, they change color. Chad was devastated, so he rethought his approach. He snorkeled rather than trapped. He hand-caught males and immediately handed them to a boater, who took spec readings on the spot. Over time, we learned that (1) MS-222 euthanasia preserves the color pretty well, whereas some other forms of euthanasia do not, (2) housing males singly in opaque dark containers keeps their colors just fine for hours, and (3) color is stable enough to see really strong effect sizes. So, Dr. Hend... I mean, Doom was wrong in that case (happens to the best of us), but his criticism did push us to change Chad's approach in a way that ended up yielding great dividends. By measuring color on hand-caught males from nests, we knew male nest depth (not knowable when trapping). This led to the discovery of depth-dependent variation in male color within lakes, which became Chad's actual thesis.


Stickleback from various lakes in BC



Point 4: What seems obvious to Dr. Doom (who is, after all, an expert in the field and has read every single paper ever published) might not be obvious to other people. Doom remembers that back in 1958, somebody-or-other published an observational study with low sample size that resembled your hypothesis, and in 1978 Peter Abrams probably wrote 5 papers each containing a paragraph that could be read as establishing your idea [note: Peter Abrams truly has thought of pretty much everything; the guy is amazing]. But the rest of us were still learning to read Dr. Seuss back then and haven't caught up. So, the broader community might be fertile ground for your ideas even if Dr. Doom is already on board.
Case study: In 1999, I was entranced by a couple of theory papers on sympatric speciation that were published in Nature (Dieckmann and Doebeli; Kondrashov and Kondrashov). I designed my thesis around testing whether competition drove disruptive selection, which was a core assumption in both papers' models. Soon after I designed my experiments, Alex Kondrashov himself visited. I showed him my plan, much like Bob showed Dr. Doom. I explained how I wanted to test his model experimentally, with Drosophila. I figured he'd be thrilled. But Alex asked, "Why would you do this?" I was floored. He explained that if the experimental system exactly matched all of the assumptions of the math, then the outcome was not in question. On the other hand, if the experimental system deviated one iota from the math assumptions, then it wasn't a test of the model. In short, he dismissed the very idea of an experimental test of the model's predictions: either the result is inevitable, or it is irrelevant. I realized that in a fundamental epistemological sense, he's not wrong. In fact, he taught me something crucial about the relationship between theory and data that day. Testing a model is a tricky thing. Often we are better off evaluating the model's assumptions: how common and strong are they, and what are the slope and curvature of a function of interest? And yet, I went ahead anyway and did the experiment. The result was my second-ever paper: Bolnick 2001 Nature, a single-authored paper confirming experimentally that resource competition drives niche diversification. I really truly owe my career to the fact that I ignored Alex's critique and Tom Schoener's skepticism (which is why I name them – not to call them out, but to thank them).


So, Doom's critique should be taken seriously, but not too much to heart. Think hard about whether you can improve what you are doing. We should always do that, for every idea and plan, but sometimes an outside catalyst helps. Still, don't jump ship right away. Doom may be wrong, biased, or just not thinking it through. Don't give up hope; rather, proceed with deliberation, caution, and self-confidence. Senior famous people do not have a monopoly on being right. Far from it.


Speaking to Dr. Doom now
There is a fine line between offering constructive advice and being a Mean Person. When we visit, we want to lend our expertise and experience to students at the host institution, give them advice, and warn them away from potential problems. We think we are doing them a favor in the process. But when one does this clumsily, one sometimes steps into the realm of simply being insensitive to their self-esteem. Be constructive, be helpful, offer advice, but don't be Dr. Doom. That's not to say you need to be Dr. Flowers instead and praise everything as flawless and brilliant. The key, of course, is to dole out helpful constructive advice in a way that is compelling but kind.
Case study: Six months after Alex Kondrashov visited, Doug Emlen visited Davis. I showed Doug my preliminary data. He was really encouraging and enthusiastic, and more than anyone else I remember talking to, he made me feel like a colleague and a potential success. Yet, amid the encouragement, he pointed out some real concerns. I didn't feel the sting of the critique because it was delivered so deftly, yet he was right, and as that realization gradually grew on me I revised my approach, ultimately changing how I went about measuring my proxies for fitness.

One of the reasons departments invite seminar speakers to visit is to encourage meetings between top scientists and the department's graduate students, postdocs, and junior faculty. The graduate students get feedback on their ideas and network with possible future postdoc mentors or colleagues. Same for postdocs. Junior faculty get to know people who might write letters for their tenure case and network with possible future collaborators. And yet, all too often we let senior faculty hog the limited meeting slots. Even when programs put a stop to that, graduate students are often reluctant to sign up for meetings, especially individual meetings. I suspect one reason is fear of the Dr. Dooms of the world. Don't be Dr. Doom. Be like Doug Emlen.

