Thursday, December 5, 2019

Games Academics (Do Not) Play

By Andrew Hendry (this post, not the paper under discussion)


I read the above paper with a strange mixture of agreement and annoyance. Agreement comes from the fact that citation metrics indeed are NOT a good way of measuring research quality. Annoyance comes from the fact that the paper presents a very cynical view of academia that is most definitely not in keeping with my own experience. In this post, I want to provide a counterpoint in three parts. First, I will summarize the points of agreement that I have with the authors – especially in relation to journal impact factors. Second, I will argue that the “gaming” the authors suggest academics engage in is, in fact, extremely uncommon. Third, I will explain how evaluation procedures that do not involve citation metrics are also – indeed probably more so – subject to gaming by academics.

But before that, I need to comment on another extreme cynicism in the paper that sends a very bad and incorrect message to young academics. Specifically, the authors argue that academia is an extremely competitive profession, and that academics are constantly under pressure to “win the game” by somehow defeating their colleagues. I am not doubting that space in any given journal is limited and that jobs in academia are limited. However, I need to provide several less cynical counterpoints. First, space in journals overall is NOT limited, especially in this open access era. There are tons of journals out there that you can publish in – and many do not have any restriction on the number of papers accepted. So, while competition is stiff for some journals, it is not stiff for publication per se. (Yes, rejection is common, but there are plenty of other journals out there – see my post on Rejection and How to Deal with It.) Second, although competition for any given job is stiff, competition for a job in academia or research is much less so. For instance, a recent analysis showed that the number of advertised faculty positions in Ecology and Evolution in the USA in a given year was roughly the same as the number of PhD graduates in Ecology and Evolution in the USA that year. One reason for this (lack of) difference is that many of the jobs were not at institutions that grant PhDs. So, the issue isn’t so much the availability of jobs overall, but rather how particular a job seeker is about location or university or whatever – see my post on How to Get a Faculty Position. (Please note that I realize many excellent reasons exist to be picky.) Moreover, many excellent research jobs exist outside of academia. Third, the authors seem to imply that getting tenure is hard. It isn’t. Tenure rates are extremely high (>90%) at most universities.

In short, academia is NOT universally an exceptionally competitive endeavor (except for big grants and for big-shot awards) – rather, some individuals within academia, as in other endeavors, are extremely competitive. You do not need to be a cutthroat competitive asshole to have a rewarding career in academia and/or research. (See my post on Should We Cite Mean People?)

Now to the meat of my arguments – a few disclaimers will appear at the end.

Citation metrics are imperfect measures of research quality

The authors point out that some funding agencies in some countries, in essence, “do not count” papers published in journals with low impact factors. This is true. I have been on several editorial boards where the journal bounces back and forth across the impact factor = 5 threshold. Whenever it is over 5, submissions spike, especially from Asia. This is obviously complete nonsense. We (all of us) should be striving to publish our work in ethical journals that are appropriate for the work and that will reach the broadest possible audience. Sometimes these factors correlate with impact factor – sometimes they do not.

As an interesting aside into the nonsense of journal impact factors, the journal Ecology Letters used to have a ridiculously high impact factor that resulted from an error in impact factor calculation. Once that error was corrected, the impact factor decreased dramatically. The impact factor is still relatively high within the field of ecology – perhaps as a lasting legacy of this (un?)fortunate error early in the journal’s history.



The paper goes on to point out that this emphasis on impact factor is killing formerly important (often society-based) journals. This is true and it is a damn shame. Importantly, however, an even bigger killer of these journals is the pay-to-publish open access journals, especially those that involve referral from more exclusive journals. The “lower tier” journals in a specialty are now in dire straits owing to falling submissions. Yet it is those journals that built our field and that deserve our work.

I also dispute the paper’s argument that submissions serially work their way down the impact factor chain. That certainly does sometimes happen but, if one excludes the general journals (Science, Nature, PNAS), I often move up to “better” journals in my submissions – and this is often just as effective as moving “down” the journal pile. In the past year, one paper rejected from The American Naturalist was accepted right away at Ecology Letters, and another paper rejected from Ecology Letters was accepted right away at PNAS.

I also object to the paper’s cynical view of reviews and meta-analyses. They seem to think these are written to game the citation system. I disagree. They are written to give a synthetic, comparative, and comprehensive view of a given topic. They are fantastic for early career researchers to have an important impact beyond just their very specific empirical contributions from their own study systems. Yes, these types of papers are heavily cited – see my post on What if All My Papers Were Reviews (the table below is from that) – but that reflects their utility in their intended function, not an effort to game the citation system. I highly recommend that early career researchers write these papers for these reasons, and also because they tend to be easier to publish and to attract more attention. (Attracting attention to your ideas is important if you want your research to resonate beyond the confines of your specific study system.)



Very few academics are “gaming” the system

The authors have a very cynical view of academics. They state that researchers try all kinds of tricks to increase their citation rates, including gratuitously adding their names to papers, adding additional authors who do not deserve to be included on a paper (perhaps the PI’s trainees), and creating quid pro quo citation cabals, where the explicit or implicit agreement is “you cite me and I will cite you.” I know this gaming does occasionally happen – but it is extremely rare or, at least, minor.  


As one concern, the paper argues that many authors currently on papers do not belong there. As evidence, they refer to the increasing number of authors on papers (the figure below is from the paper). No one is disputing this increase, but why is it happening? Some of it – as the authors note – is the result of needing more people with more types of complementary (but not duplicated) expertise for a given paper. This explanation is definitely a major part of the reason for increasing author numbers. For instance, modern statistical (or genomic, etc.) specialization often means that statistical or bioinformatic experts are added to papers even when that is all they contributed.


However, I suspect that the majority of the increase in numbers of authors on papers is more a reflection of how, in the past, deserving authors were EXCLUDED from author lists, whereas now they are included. For instance, a recent examination showed that women who contributed to research tended to appear in the acknowledgments but not in the author list. I would argue that, now, we more often give credit where it is due. Consider the lead collectors of data in the field. Obviously, not every person who contributed to data collection in, for example, long-term field studies should be included on every paper resulting from those data. However, the key data collectors surely should be – without them no one would have any data. Stated in a (partly) joking way, once data are collected by the field crew, any hack can write a paper from the data.


Self-citation is another perceived “gaming strategy” that is much discussed. Yes, some people cite their own work much more than do others, a topic about which I have written several posts. (For example, see my post on A Narcissist Index for Academics.) But it is also clear that self-citation has only a minimal direct influence on citation metrics. That is, self-citation obviously doesn’t directly influence the number of papers, it has only a modest influence on total citations, and it has minimal influence on a researcher’s h-index. (See my post on Self-Citation Revisited.) Both of these latter minor effects become increasingly weak as the author accumulates more and more papers.
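To see why the h-index is so insensitive to self-citation, here is a minimal sketch in Python. The per-paper citation counts and self-citation counts below are invented purely for illustration; the point is only that removing self-citations shaves a few citations off each paper and usually leaves the h-index unchanged.

```python
# Minimal sketch of how little self-citation moves the h-index.
# The citation counts below are invented for illustration.

def h_index(citations):
    """Largest h such that at least h papers have >= h citations."""
    counts = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(counts, start=1) if c >= rank)

# (total citations, self-citations) per paper -- hypothetical numbers
papers = [(120, 6), (80, 5), (45, 4), (30, 3), (22, 3), (15, 2), (9, 2), (4, 1)]

with_self = [total for total, _ in papers]
without_self = [total - self_cites for total, self_cites in papers]

print("h-index including self-citations:", h_index(with_self))
print("h-index excluding self-citations:", h_index(without_self))
# Both print 7 for these numbers: total citations drop slightly,
# but the h-index is unchanged.
```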

Beyond all of this, my personal experience is that PIs are extremely sensitive to these gaming issues and would be mortified to be involved in them – regardless of whether or not anyone would ever know about it. Most PIs that I know actively seek to not over-cite their own work, to make sure that only deserving authors are on papers (but that no deserving authors are missing), to publish in society journals, and so on.

Other assessment methods are also gamed – indeed, more so.

The authors argue that, for the above and other reasons, citation metrics should be down-weighted in evaluation for grants, positions, tenure, promotion, raises, and so on. Sure, fine, I agree that it would be great to be able to somehow objectively judge the “quality” of the work independent of these measures. So how does one do that?

Imagine first a committee of experts that reads all of the work (without knowing the journals or citation rates) of the people in a given “competition” and then tries to rank the candidates or proposals. Sounds good in theory, but – in practice – each reviewer is always at least somewhat biased toward what they think is high quality, which I can tell you from experience is highly variable among evaluators. From having been on many grant review panels, award committees, and so on, I can assure you that what one evaluator thinks is important and high quality is often not what another evaluator thinks is important and high quality. If you want a microcosm of this, just compare the two reviews on any one of your submitted papers – how often do they agree in their assessment of the importance and quality of your work?

This evaluator bias (remember, we are postulating that they don’t know impact factors or citations or h-indices) compounds at higher levels of increasingly interdisciplinary panels. I have served on a number of these and I simply have no objective way to judge the relative importance and quality of research from physics, chemistry, math, engineering, and biology – and that is before getting into the social sciences and the humanities. The reason is that I only have direct knowledge of ecology and evolution. So, in these panels, I – like everyone – have to rely on the opinions of others. What does the expert in the room have to say about it? What do the letters of recommendation say? What did the chair of the department have to say? And so on. Of course, these opinions are extremely biased and written to make the work sound as important as possible – and, believe me, there is a major skill to gaming letters of recommendation. In short, letters and explanations and expert evaluations are much more susceptible to “gaming” than are quantitative metrics of impact.

So we are back to considering quantitative metrics again. The reality is that citation rates or h-indices or whatever are not measures of research QUALITY; they are measures of research volume and research popularity. As long as that is kept in mind, and standardized for variation among disciplines and countries and so on, they are an important contributor to the evaluation of a researcher’s performance. Journal impact factors, however, are complete nonsense.

Conclusion

Stop worrying about trying to game the system – everyone is definitely NOT "doing it." Study what you like, do good work, seek to publish it in journals where it will reach your audience, and stick with it. The truth will out.



A few disclaimers:

1. I know the first and last authors of the paper reasonably well and consider them friends.
2. I have a high h-index and thus benefit from the current system of emphasis on citation metrics. I note my citation metrics prominently in any grant or award or promotion competition. See my post on: Should I be Proud of My H-Index.

Thursday, November 21, 2019

The parable of Bob and Dr. Doom. Or, "When the seminar speaker craps on your idea"

By Dan Bolnick

Disclaimer: the following story may or may not have actually happened as I describe it, but it does happen; it has happened to my own students, students in my graduate program, and to me personally.


I recently met with a distressed graduate student, whose identity I will conceal. Let's call this person 'Bob'. A few months ago, Bob did what we encourage our graduate students to do: he signed up to meet with a visiting seminar speaker – someone really well-known, highly respected. Let's call the visitor "Dr. Doom".



Bob is a first-year PhD student, and is really excited about his research plans. He presents his research plan – on, let's say, the evolution of spontaneous combustion in corals – briefly to Dr. Doom. You know, the five-minute version of the 'elevator pitch'. Dr. Doom listens, asks some polite questions, then pronounces:

"Are you mad? This will never work! It is: [pick one or more of the following]
a) impossible to do
b) too risky
c) too expensive
d) sure to be biased by some uncontrollable variable
e) uninterpretable
f) uninteresting
How can your committee have possibly approved such a hare-brained scheme? Don't waste your time. Instead, you should do [thing Dr. Doom finds interesting]."


Bob, the first year student, is of course shattered. Here's this famous biologist, a fountain of wisdom and knowledge, crapping on Bob's idea. Bob obsesses over this criticism for weeks, considers completely changing his research. Considers dropping out of graduate school and becoming a terrorist, or an energy company executive, or something equivalent. Finally, Bob came to me. This post is about my advice to Bob.  And, to Dr. Doom, whoever you may be. (Note, Dr. Doom is quite possibly a very nice and well-meaning professor who wasn't watching their phrasing carefully, for lack of coffee going into their 9th meeting of the day).

Point 1: Bob, you have spent the past 6 months obsessively thinking about your research: the theory, the hypotheses, the study system, and the methods. Maybe you have preliminary data, or perhaps not. No matter what, I can almost guarantee you that even a few months into your studies, you have thought in more depth about your experiment than Dr. Doom has. Doom got a 5-minute elevator pitch, probably wasn't entirely paying attention the whole time (he has a grant proposal due in 10 minutes, after all), and leapt to some assumptions about your ideas. You have thought this through more than Doom.

Point 2: If Dr. Doom says your hypothesis/prediction is surely false, and thus not worth testing, there is a chance that Dr. Doom is wrong. Let's be Bayesians for a second. You and Doom have different priors concerning your hypothesis. Dr. Doom knows a great deal, it is true. But that does not mean that Doom's priors are correct. You have chosen to work on a question whose answer is, I hope, not fully known. That's the point of doing it, right? So, in truth, neither you nor Doom knows the correct answer. As long as you have some good reason for your prior, then proving Dr. Doom wrong will be all the more worthwhile, right?
Case study: In graduate school, Tom Schoener was on my dissertation committee. I have immense respect for Tom. Such enormous respect, in fact, that I think it's okay to name names here, because his skepticism really drove me to some important projects, so I owe him a great deal. You see, my PhD work was going to focus, in part, on how frequency-dependent selection acts on populations where different individuals eat different things from each other. Thus, competition is more severe for common-diet individuals and less so for rare-diet individuals, generating disruptive selection that was the foundation for some models of speciation. So, I present the idea and Tom says "but Roughgarden, and Lister, and Taper, and Case, all proved that this among-individual diet variation does not happen much. You are barking up the wrong tree; your basic premise that justifies your research is wrong." [I am paraphrasing the idea here; it has been 20 years, after all.] I disagreed. So, I got together a group of fellow grad students and we reviewed the relevant literature together. The result was my most-cited paper, Bolnick et al. 2003 American Naturalist, on individual specialization. The very next year, Tom put this paper on the reading list for the Davis Population Biology core course. The point is, the more sure your famous critic is that you are wrong, the more impactful it might be if you turn out to be right.

Point 3: If Dr. Doom says that you have a good question, but the methods are flawed, pay attention. Doom may or may not be right, but you certainly need to take a careful look and give your approach a rethink. That's not a reason to abandon the work, but it is a chance to maybe tweak and improve your protocol to avoid a possible mistake. Sometimes Doom has a point. But, a fixable one.
Case study: My PhD student Chad Brock wanted to study stickleback male color differences between lakes. We hosted a certain Dr. Hend- er -Doom as a seminar speaker. Dr. H told Chad that male color was too plastic: if you trap males in minnow traps, their color changes before you can pull them out of the water, and by the time you photograph them, the data have no meaning. If you euthanize them, they change color. If you co-house them with other males, they change color. Chad was devastated, so he rethought his approach. He snorkeled rather than trapped, hand-caught males and immediately handed them to a boater, who took spec readings on the spot. Over time, we learned that (1) MS-222 euthanasia keeps the color pretty well, whereas some other forms of euthanasia do not, (2) housing males singly in opaque dark containers keeps their colors just fine for hours, and (3) color is stable enough to see really strong effect sizes. So, Dr. Hend... I mean, Doom was wrong in that case (happens to the best of us), but his criticism did push us to change Chad's approach in a way that ended up yielding great dividends. By measuring color on hand-caught males from nests, we knew male nest depth (not knowable when trapping). This led to the discovery of depth-dependent variation in male color within lakes, which became Chad's actual thesis.


Stickleback from various lakes in BC



Point 4: What seems obvious to Dr. Doom (who is, after all, an expert in the field and has read every single paper ever published) might not be obvious to other people. Doom remembers that back in 1958, somebody-or-other published an observational study with low sample size that resembled your hypothesis, and in 1978 Peter Abrams probably wrote 5 papers each containing a paragraph that could be read as establishing your idea [note, Peter Abrams truly has thought of pretty much everything; the guy is amazing]. But the rest of us were still learning to read Dr. Seuss back then and haven't caught up. So, the broader community might be fertile ground for your ideas even if Dr. Doom is already on board.
Case study: In 1999 I was entranced by a couple of theory papers on sympatric speciation that were published in Nature (Dieckmann and Doebeli; Kondrashov and Kondrashov). I designed my thesis around testing whether competition drove disruptive selection, which was a core assumption in both papers' models. Soon after I designed my experiments, Alex Kondrashov himself visited. I showed him my plan, much like Bob showed Dr. Doom. I explained how I wanted to test his model experimentally, with Drosophila. I figured he'd be thrilled. But Alex asked, "Why would you do this?" I was floored. He explained that if the experimental system exactly matched all of the assumptions of the math, then the outcome was not in question. On the other hand, if the experimental system deviated one iota from the math's assumptions, then it wasn't a test of the model. In short, he dismissed the very idea of an experimental test of the model's predictions: either it is inevitable, or irrelevant. I realized that, in a fundamental epistemological sense, he's not wrong. In fact, he taught me something crucial about the relationship between theory and data that day. Testing a model is a tricky thing. Often we are better off evaluating the model's assumptions: how common and strong are they, and what are the slope and curvature of a function of interest? And yet, I went ahead anyway and did the experiment. The result was my second-ever paper: Bolnick 2001 Nature, a single-authored paper confirming experimentally that resource competition drives niche diversification. I really, truly owe my career to the fact that I ignored Alex's critique and Tom Schoener's skepticism (which is why I name them – not to call them out but to thank them).


So, Doom's critique should be taken seriously, but not too close to your heart. Think hard about whether you can improve what you are doing. We should always do that, for every idea and plan, but sometimes an outside catalyst helps. But, don't jump ship right away. Doom may be wrong, biased, or just not thinking it through. Don't give up hope, but rather proceed with deliberation and caution and self-confidence. Senior famous people do not have a monopoly on being right. Far from it.


Speaking to Dr. Doom now
There is a fine line between offering constructive advice and being a Mean Person. When we visit, we want to lend our expertise and experience to students at the host institution, give them advice, and also warn them away from potential problems. We think we are doing them a favor in the process. But, when one does this clumsily, one sometimes steps into the realm of just being insensitive to their self-esteem. Be constructive, be helpful, offer advice, but don't be Dr. Doom. Not that you need to be Dr. Flowers instead and praise everything as flawless and brilliant. The key, of course, is to dole out helpful constructive advice in a way that is compelling but kind.
Case study: Six months after Alex Kondrashov visited, Doug Emlen visited Davis. I showed Doug my preliminary data. He was really encouraging and enthusiastic, and more than anyone else I remember talking to, he made me feel like a colleague and a potential success. Yet, amid the encouragement, he pointed out some real concerns. I didn't feel the sting of the critique because it was delivered so deftly, yet he was right, and as that realization gradually grew on me, I revised my approach, ultimately changing how I went about measuring my proxies for fitness.

One of the reasons departments invite seminar speakers to visit is to encourage meetings between top scientists and the department's graduate students, postdocs, and junior faculty. The graduate students get feedback on their ideas and network with possible future postdoc mentors or colleagues. Same for postdocs. Junior faculty get to know people who might write letters for their tenure case and network with possible future collaborators. And yet, all too often we let senior faculty hog the limited meeting slots. Even when programs put a stop to that, graduate students are often reluctant to sign up for meetings, especially individual meetings. I suspect one reason is fear of the Dr. Dooms of the world. Don't be Dr. Doom. Be like Doug Emlen.


Wednesday, November 13, 2019

How to Manage Your Time - some ideas

The DRYBAR (Hendry-Barrett) lab meeting last week was about time management. Everyone in the group shared their strategies for best managing their time. The most important overall message was that different things work for different people – there isn't any one-size-fits-all (or even fits-many) strategy. Many excellent and diverse ideas were raised, and so it seemed most profitable to simply share them. Try them out – one could work well for you.

I (Andrew) have written a few relevant posts on this already:

How to Be Productive

From Work-Life Balance to Like-Dislike Optimization

The rest of the ideas below are from a diversity of students in the DRYBAR labs - each paragraph in each grouping is generally from a different person.

Organization

I started to use Trello for keeping track of all my tasks. The cool thing about it is that you can share your activities, so it's great for group projects too. Here's a link: https://trello.com/en Also some tips about writing goals: https://www.mindtools.com/pages/article/smart-goals.htm Personally, I like to have everything in different categories: academic-related stuff, personal stuff, ideas, etc. Something that I started recently is keeping somewhere to write down random thoughts to mine for future ideas, projects, or anything really.


I make to-do lists for separate subjects/areas of my life and most days work on my main focus (job/school), but every few days I'll take a day and do a bunch of little things from the other areas so I don't get too far behind in them. It also helps to focus on one area in a given day – I remember things a bit better when I'm working on related tasks.

Having a weekly to-do list versus a daily to-do list helps me achieve a work-life balance. On Sunday evening, I make a list of tasks I wish to complete over the course of the next 7 days, and in my physical planner I break them down into smaller, daily activities. Even though I have a long list in front of me, subdividing tasks into smaller components spread over multiple days allows me to feel I am advancing towards my goals. Thus, after I have achieved some items on my daily task list, I am content and can come home with small victories. Outside the office and lab, I can relax knowing tomorrow will be just as productive as today.

Avoiding distractions

Personally, I found when I started my PhD that I was easily distracted. When I ran into little chunks of empty time (while code ran, or when I got a brief bit of writer’s block, or just lost focus while reading a paper), I’d check the news, or Facebook, or my email, and then get sucked into that. I’d spend a lot of time “working,” but the distractions meant I never was getting as much done as I wanted. For the past couple of years I’ve used https://freedom.to/ to block these distractions. It blocks websites, but when I really need to hunker down I also have it block my email app so that I only check it once or twice a day. I’ve found it really helpful for making me more productive when I’m at work, which means I can relax more in the evenings and on weekends.


I mostly try to focus on one thing at a time, but still reserve some time to work on something else. This allows me to get a break from my main activity while working on something that will be my main activity later on.

Reward systems

Be sure to take holidays and don't feel guilty about it. Do what works for you. Try things out. Sometimes it works, sometimes it doesn't.


Environment

Environment is also very important to me, depending on what I'm doing. If I'm writing or reading, I like to go to the library in my neighborhood, where there is complete silence so I can focus. Oddly enough, the coffee shop also works for me because there is so much noise that it all blends into the background.



Exercise


Commute by bike! It almost always saves time. It also improves mental and physical health (which saves time indirectly). Managing work time is perhaps less important than managing time off. Make sure you get enough time off, and that it is well spent doing things that make you happy and recharged for working again. Binge-watching Netflix might feel good in the moment, but it can often make you feel like crap afterwards, so you won’t feel rejuvenated and motivated to go back to work.


Recharging

I keep myself productive and happy as a researcher by taking frequent breaks throughout the day. Besides needing to rest my eyes after looking at a screen for prolonged periods of time, that downtime gives me an extra boost of energy to finish my task at hand. This often takes the form of brewing a cup of coffee or tea, or calling my parents to check-in. In the spirit of the Pomodoro technique, I like to take a long break after 1 hour of continuous work. 

One thing I also find important, personally, is taking a "chill" or mental health day where I watch my favorite movies and lounge around, etc. I struggle with this sometimes because you feel guilty for "doing nothing" all day; however, doing nothing once in a while can be a good thing!

Fuck it (from Andrew's How To Be Productive Post): Go for a walk. Binge watch Game of Thrones. Read a book. Go to the climbing gym. Play guitar. Cuddle the cat (or dog). Play with the kids. Do the weekly ironing. These mental breaks will make you more efficient when you get back to work. Here is a compiled list of cool procrastination techniques of ecologists and evolutionary biologists.

Task switching

In general, switching between activities also keeps my brain stimulated and prevents me from feeling bogged down. For instance, I tend to dedicate my mornings to reading papers and answering emails, and in the afternoons I prefer to do lab work. I push forward with my morning tasks as I look forward to the exciting lab work I have planned for later on in the day.

Serial multi-task (from Andrew's How To Be Productive Post): By serial, I don’t mean do many things at the same time – unless that works for you. What I mean by serial is that, if you have multiple projects on the go, try to stay on the maximal effort-to-payoff area of the function. If one project is slowing, send it to coauthors, and work on the other projects. If one project looks like it will have a higher payoff overall (first authored papers), then work on that first.


Thursday, October 31, 2019

Evolutionary ecology SHOULD be wet and dry

This is a guest post by Erik Sotka
Twitter @eriksotka 

I am part of an NSF-funded Research Coordination Network on "Evolving Seas" (https://rcn-ecs.github.io). Our goal is to summarize what we know, and what we don’t know but should know, about the evolutionary responses of marine organisms to climate change. In preparation, I had invited a butterfly-ecologist friend to join the effort. His response was, to paraphrase: “so, NSF pays for folks to get together and chat about how marine biology should proceed… Do I really have anything to contribute?”

It is not a surprising question. 

The cynical view is that his response reflects an unfortunate bias, in which terrestrial ecologists pay less attention to wet ecosystems (aquatic and marine). Terrestrial papers on population and community ecology are cited 10x more often by aquatic papers than the reverse (Menge et al. 2009). Look at the journal Oikos, for example (x-axis: biome of a paper; y-axis: proportion of citations from each biome).



This bias extends to the principal tools by which we teach the next generation of ecologists as well. A review of general ecology textbooks (e.g., Ricklefs 1997, Krebs 2001, Begon et al. 1990) found only 5 marine examples out of 186, indicating a clear underrepresentation of aquatic studies (Munguia and Ojanguran 2015).

There’s no reason to think that these biases do not extend to evolutionary ecology. While I didn’t have the time to look at some of our evolutionary journals (e.g., Evolution, Evolutionary Applications), I did survey chapters 9 (Variation) and 11 (Population structure and gene flow) of Futuyma 1997 and counted one (1) marine example out of 61. 

Why does this bias exist? Well, you might suspect that because there are more terrestrial biologists, there are more terrestrial studies. However, while the ratio of terrestrial to aquatic ecologists is 57:43 (Stergiou 2005), the ratio of articles in the top ecological journals is 78:22 (Menge et al. 2009). So, that doesn’t seem to completely explain the pattern.

It seems that the bias is generated by multiple conscious and unconscious decisions made by both terrestrial and marine ecologists. Menge et al. 2009 propose that there is “a tendency for aquatic ecologists to focus more on across-habitat comparisons than do terrestrial ecologists, an increased likelihood that editors and/or reviewers of aquatic papers demand greater citation breadth”, and/or “greater effort by authors to include terrestrial citations to improve chances of acceptance by editors.” 

There is also a “benign neglect” due to the lag in development of marine and terrestrial studies. “Terrestrial ecology as a major subdivision of biology preceded that of aquatic ecology, and especially marine ecology, by several decades.” (Menge et al. 2009). There are likely other reasons, as pointed out by the responses to this Twitter thread earlier this year (https://twitter.com/evolvingseas/status/1112727871775232000).

“So what?” you may ask. Unfortunately, the consequence of this citation asymmetry is clear. With less integration, there is less advancement of generalizable theory. There are some really nice summaries of the benefits of integration in Webb 2012 and Dawson 2012.

The notion of integration was one of the gifts that my PhD advisor Mark Hay wisely gave me 2 decades ago. He suggested that I write my publications or give my oral presentations at ESA so that “the spider biologist in Kansas” would be interested. And as the biome-minority at every Gordon Conference, SICB, ESA or SSE meeting I go to, I try to keep this advice in mind.

At the same time, the spider biologist in Kansas would benefit from writing studies that interest a marine evolutionary ecologist in Charleston, South Carolina (for example). This is because integrated studies get *far* more citations at American Naturalist and Evolution. American Naturalist articles published between 1980 and 2019 had ~85% more citations on average if they had both "terrestrial" and "marine" in the title or abstract, relative to all articles (133.4 vs 71.4). Evolution papers published between 1980 and 2019 had ~40% more citations on average if they had both terms in the title/abstract, relative to Evolution as a whole (80.5 vs 57.5).
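For anyone who wants to try this kind of comparison on their own field, here is a minimal sketch of the calculation. It assumes a hypothetical CSV export of journal records with columns "title", "abstract", and "times_cited" (these column names and the file name are my invention, not a real Web of Science schema).

```python
# Sketch of the comparison described above: mean citations for articles whose
# title/abstract mention both "terrestrial" and "marine", versus all articles.
# The CSV file and its column names are hypothetical placeholders.
import csv

def mean_citations(rows):
    cites = [int(r["times_cited"]) for r in rows]
    return sum(cites) / len(cites) if cites else 0.0

def mentions_both(row):
    text = (row["title"] + " " + row["abstract"]).lower()
    return "terrestrial" in text and "marine" in text

with open("amnat_1980_2019.csv", newline="", encoding="utf-8") as f:
    records = list(csv.DictReader(f))

integrated = [r for r in records if mentions_both(r)]
print("all articles:        ", round(mean_citations(records), 1))
print("terrestrial + marine:", round(mean_citations(integrated), 1))
```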

So, integrated papers get more citations. Integrated papers avoid missing concepts. Integrated papers move the field forward. We need more interplay between marine and terrestrial evolutionary ecology, and for that, I’m really excited by this RCN on Evolving Seas. 


Menge et al 2009a Front Ecol Environ 7:182
Munguia and Ojanguran 2015 Ecosphere 6:25
Stergiou 2005 MEPS 304:292
Webb 2012 TREE 27:535
Dawson 2012 Frontiers of Biogeography 1.2

Wednesday, September 25, 2019

Does startup size predict subsequent grant success?

Warning: the following is a small sample size survey not conducted especially scientifically. This is simply out of curiosity and should not guide policy decisions.



Based on a twitter query about start-up sizes, I found myself wondering whether the size of a professor's start up package has a measurable effect on their subsequent grant writing success. In particular, do people who get larger start-up packages then get more money, representing a larger return on the larger investment? I designed a brief 5-question survey on google forms, advertised it on twitter, and got 65 responses. This blog post is a brief summary of the results, which surprised me only somewhat.

First off, a brief summary of who replied:




I then wondered whether initial start-up package size depends on gender or the university type, and found a clear and expected result: R1 universities have larger start-up packages. Encouragingly, with this small self-reported sample size there was no sign of a gender bias:




I'd show you the statistical results, but they just obviously match the visuals above.
Subject matter had no significant effect on start-up package size (mostly ecology versus evolution).


Now for the big reveal: does initial start up package size matter for later grant income?

Using the first five years as a focus, the answer is...
no, once you account for university type.



Black dots in the figure are R1 universities, green are non-R1 universities, yellow are private colleges, blue is other. There's no significant trend within either of the well-represented categories (R1, non-R1). If we do a single model with grant income as a function of university type and start-up, only university type is significant.  The pattern after 10 years is even stranger:
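For concreteness, here is a minimal sketch of the kind of model described above, written in Python with statsmodels. The file name and column names ("grant_income_5yr", "startup", "university_type") are hypothetical stand-ins for the survey responses, not the actual data file.

```python
# Sketch of the single model described above: grant income over the first
# five years as a function of university type (categorical) and start-up
# package size (continuous). Column and file names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

survey = pd.read_csv("startup_survey.csv")  # hypothetical survey export

model = smf.ols("grant_income_5yr ~ C(university_type) + startup",
                data=survey).fit()
print(model.summary())  # in the survey, only university type came out significant
```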


In both cases it is certainly clear that (1) there's a lot of noise, and (2) people who get start-up packages in excess of about $400,000 do seem to have an advantage at getting more grant money. After 10 years, people who got more than $400,000 in start-up all had at least $2 million in grant intake. That seems like a good return on investment. But more is not always better: the biggest grant recipients were middle-of-the-road start-up recipients. Note that gender had no detectable effect in the trend above, but sample sizes were low and the data self-reported.


Briefly, here's my own story: In 2003 I negotiated a position with the University of Texas at Austin. They gave me just shy of $300,000 (it might have been $275,000), plus renovation costs. Before even arriving at UT, I already had a >$300,000 NSF grant. Within 5 years of arriving, I had also received an $800,000 Packard Foundation grant and a position at HHMI whose total value exceeded $3,000,000. By the time I left UT, I had obtained five additional NSF grants and an NIH R01 and had pulled in somewhere in excess of $10 million cumulatively. I recognize that I have been quite lucky in funding over the years. My point, though, is that I was able to achieve that by leveraging relatively little start-up compared even with many of my peers at the time. That anecdotal experience is confirmed, tentatively, by this survey, which finds that lots of people end up being quite successful with relatively small start-ups. The data seem to suggest that above a certain start-up package size, universities see little additional benefit. It is essential to recognize, however, that this is a small sample with questionable data and poor distribution (a few days via twitter). So, this should not guide policy. But it does make me wonder: surely someone has done a proper job of this analysis?








Thursday, September 19, 2019

Inspiration


Part 2 of a series on choosing a research topic.

One of my favorite songs in college was by Natalie Merchant:

Climbing under 
A barbed wire fence 
By the railroad ties 
Climbing over 
The old stone wall 
I am bound for the riverside 
Well I go to the river 
To soothe my mind 
Ponder over 
The crazy days of my life 
Just sit and watch the river flow 

It still brings me joy to hear the song, and more and more it feels like it's about my research process. Academic life is so hectic: teaching, writing proposals, publishing, committee meetings, editing... where does one find time to contemplate, to let ideas bubble up and mature? Personally, I find that some of the most valuable time is spent sitting by water, hence my continued love of the song above.

Which leads me to the question, where do ideas come from? What can you do to generate ideas?


Suggestion 1: Go Forth And Observe: I used to teach a class on Research Methods at UT Austin, aimed at future K-12 science teachers. The students had to come up with their own questions in any area remotely science-like, and do a project. Four projects, actually, of increasing complexity. And finding questions was always hard. So we walked them through an exercise: everyone blew up balloons, then all at once we popped our balloons. It wakes them up. As an aside: if you do this, check that nobody has a latex allergy or PTSD that might be triggered by the sound [we had a veteran in the class once and learned this], and don't hold the balloons close to eyes or they can tear corneas. Then, everyone wrote down five observations: it split into 3 pieces, the pieces are different sizes, it was loud, the inside of the balloon is damp, there are little ripples on the edge of the tear. Then, they converted those into questions. Then we discussed which questions were testable in the classical sense (yes/no answers), which were quantitative (numbers are the answer), and which were untestable. We'd talk about how a well-phrased question clearly points you towards what you should do to answer it. And about how poorly phrased or vague questions (why did it make a sound?) can be broken down into testable and more specific sub-questions. It's a great exercise, not least because my co-instructor Michael Marder, a physicist, had actually spent two decades working on the physics of that sinusoidal ripple at the margin of the torn rubber (inspired by noticing it at a child's birthday party), and discovered it has applications to predicting how earthquake cracks propagate through the earth's crust. So, students could see how a mundane thing like a balloon can lead to big science.

You can do the balloon exercise, or something like it that's more biological: go out in the woods, or snorkel in a stream or the ocean. Watch the animals around you. Visit the greenhouse. Write down observations, and turn them into questions. Write down 50. There's got to be a good one in there somewhere, right?

Suggestion 2: Don't take the first idea or question that you can do. The exercise described above will almost surely lead you to a question that you can answer. But, is it the question that you SHOULD answer? Will other people care about it? If so, why?  There's this idea in economics of "opportunity cost". Sure, writing this blog is valuable. But it is taking time I could otherwise be spending on revising that manuscript for Ecology Letters, or writing lectures for my Evolutionary Medicine class, or preparing my lecture for a fast-approaching trip to Bern and Uppsala. Is this blog the best thing I could be doing with this hour of my day? Choosing a research project is even more prone to opportunity costs: you are embarking on a project that may take you six months, a year, or five years. Sure, you can do it, and you can publish some results. But is it the best and most impactful (variously defined) thing you can do with that time? In general I bet that the first idea that crosses your mind isn't the best idea you'll have. Personally, I had two ideas for research when I first entered grad school, then I went through a series of maybe 6 ideas over the course of my first year, and only landed on my actual project in early fall of my second year. The other ideas weren't bad, just not as exciting to me (and, I think, to others).
Opportunity cost, by SMBC Comics' Zach Weinersmith


Suggestion 3: Don't get stuck on local optima. I love to think of self-education in a research field as a Bayesian's Markov chain Monte Carlo (MCMC) search on an intellectual landscape. Search widely; visit many different topics and ideas and questions. The ones that you keep coming back to, and spend the most time on, are probably a good indicator of a high posterior probability for your future research. But, again, if you start on an actual project too soon, you limit your ability to explore that intellectual landscape by slowing your search rate, and you might falsely conclude you are on a great peak for an idea when really you've just stopped making long jumps to new places in the landscape of relevant topics.

Suggestion 4: Know your history. There are a vast number of ideas, and study systems, stashed away in the literature, going back decades and beyond. As a mid-stage graduate student, I read Ernst Mayr's Animal Species and Evolution, and I was struck by how many hundreds of study systems were left lying by the wayside because someone lost interest, retired, left academia, or whatever. The questions were never fully resolved, just left partly answered. There are so many great ideas and systems waiting for your attention. And the great thing is, when you pitch an idea to readers or grant reviewers, they tend to love to see the historical context: if it is a topic people have been thinking about for a long time, that helps justify in their own minds that it is something interesting. Also, knowing your history helps you avoid repeating it. Being scooped by a contemporary is frustrating, but being scooped by somebody 40 years ago because you didn't know it was done already – that's worse.
Ernst Mayr

Suggestion 5: Read theory. A lot of evolution and ecology students are wary of mathematical theory. That's unfortunate, because it means you are cutting yourself off from a major fountain of inspiration. Learn to read theory, including the equations. It is absolutely worthwhile. Here's why. From my viewpoint, theory does a lot of things that an empiricist should pay attention to. 

First, it makes our thinking more rigorous. For example, it is intuitive to think that co-evolution between host and parasite can lead to frequency-dependent cycles where the host evolves resistance to parasite A, so the parasite evolves phenotype B, then hosts evolve resistance to B but thereby become susceptible to A again, so the parasites switch back. Cyclical evolution, maintenance of genetic variation in both players. Sure, it's possible, but by writing out the math, theoreticians identified all sorts of requirements that we might have overlooked in our verbal model. This cyclical dynamic is harder to get than we might think, and the math helps us avoid falling into a trap of sloppy thinking that leads us down a blind alley.
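To make that concrete, here is a minimal toy sketch (my own illustration in Python, not any specific published model) of a two-type matching-alleles host-parasite model. The selection coefficients and starting frequencies are arbitrary choices; playing with them shows how easily the tidy verbal prediction of sustained cycles breaks down, which is exactly the kind of requirement the math exposes.

```python
# Toy matching-alleles host-parasite model (illustrative sketch only; the
# parameter values s_host, s_par and starting frequencies are arbitrary).
# Two host types and two parasite types; each parasite type harms, and does
# best on, the host type it "matches".

def simulate(h=0.6, p=0.3, s_host=0.3, s_par=0.6, generations=200):
    """Track the frequencies of host type 1 (h) and parasite type 1 (p)."""
    trajectory = [(h, p)]
    for _ in range(generations):
        # Host fitness: each host type suffers in proportion to the
        # frequency of the parasite type that matches it.
        w_h1, w_h2 = 1 - s_host * p, 1 - s_host * (1 - p)
        # Parasite fitness: each parasite type gains in proportion to the
        # frequency of the host type it can infect.
        w_p1, w_p2 = 1 + s_par * h, 1 + s_par * (1 - h)
        h = h * w_h1 / (h * w_h1 + (1 - h) * w_h2)
        p = p * w_p1 / (p * w_p1 + (1 - p) * w_p2)
        trajectory.append((h, p))
    return trajectory

if __name__ == "__main__":
    # The frequencies chase each other in lagged oscillations; for many
    # parameter choices the oscillations grow until variation is all but
    # lost, rather than settling into the neat cycles of the verbal model.
    for step, (h, p) in enumerate(simulate()[::20]):
        print(f"gen {step * 20:3d}: host type 1 = {h:.2f}, parasite type 1 = {p:.2f}")
```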

Second, and related, the math identifies assumptions that we might not realize we were making. Is assortative mating during sympatric speciation based on a magic trait that affects both mating and adaptation, or on a sexual-signalling trait unrelated to adaptation? Do individuals really tend to compete for food more strongly with phenotypically similar members of their population? When writing out theory, those assumptions are often brought into the light of day (though sometimes theoreticians are unclear about them, making implicit assumptions too). These assumptions are often things we empiricists don't know much about. How strongly do females prefer phenotypically similar males within a panmictic population? I didn't know. How many stickleback males does a searching female visit before settling on a mate? No idea... Theory brought my attention to these assumptions, and they became something I could go and measure. So, the assumptions underlying the equations are an opportunity for empirical investigation, with a ready-made justification: "Theory assumes X, so we need to know if/when/where/how often this is biologically valid".

Third and hardest, theory makes predictions: if X is true, then Y should happen. These predictions can, in principle, be tested. But beware: if the entire set of assumptions X is true, then the math argues that Y is inevitable. Is it really worth testing, then? If you don't know that all features of X are true, then the theory no longer guarantees Y. If you fail to demonstrate Y, arguably you weren't actually testing the theory.



Suggestion 6: P-hack and prepare to be surprised. Having read theory, read the literature, and been observant, go back out and do something. Do a little experiment, start a little observational pilot study, just get some data. Now, do something everyone tells you not to: P-hack it. Analyze the data in many possible ways, look for relationships that you might not have identified a priori. Sure, this can lead to false positives. A lot of people argue strongly against unguided post-hoc data analysis for this reason. But we aren't at the stage of publishing yet; this is exploration, an information-finding foray. Here's a concrete example: most stickleback biologists like myself have long treated each lake as a single genetic population and assumed it is well-mixed in terms of genotypes and phenotypes (except in a few lakes with 2 species present). This has practical consequences. This past summer I watched colleagues throw 10 traps into a lake along a mere 10 meter stretch of shoreline, pull out the first trap, find >100 fish in it, use those fish, and release the fish from the other 9 traps. BAD IDEA. It turns out, we now know, there is a lot of trap-to-trap variation in morphology and size and diet and genotype that arises from microgeographic variation within lakes. Here's how I got clued into this. A graduate student of mine, Chad Brock, hand-collected ~30 nesting male stickleback from each of 15 lakes in British Columbia, and immediately did spectroscopy to measure color wavelength reflectance on each male. He also happened to note the substrate, depth, and so on, of the male's nest. Six months later, back in Texas, he P-hacked, and noticed that in the first lake he examined intensively, male color covaried with nest depth: males 0.5 meters deep were redder and males 1.5 meters deep (just meters away horizontally) were bluer. The different-colored males were within maybe 10 seconds' swimming distance of each other. This clued us in to the fact that something interesting might be going on, and we later confirmed this pattern in 10 other lakes, replicated it across years, and ultimately replicated it experimentally as well. I'm not here to tell you about our male color work, though. The key point is, theory would have told me never to expect trait variation among individuals at this spatial scale, because gene flow should homogenize mobile animals across such short distances. But it doesn't, apparently. Here's a case where theory puts blinders on us, telling us not to bother looking for microgeographic variation. Then, when we P-hacked, we were surprised and ultimately cracked open what turns out to be a very general phenomenon that we might otherwise have overlooked.

(A caveat: P-hacking shouldn't be the end-game, and if you do try it, please at least be totally up front when you write up which analyses were predetermined and which (and how many) were post-hoc.)


Suggestion 7: Have a portfolio. In financial investment theory, it is always recommended that you invest in a portfolio. Some investments (e.g., stocks of start-ups) have the potential to go sky-high, but also the potential to crash out entirely. Other investments are solid, safe bets with little risk. If you put all your money in the former, you will either be spectacularly wealthy or lose everything. If you put all your money in the latter, you are guaranteed to have some savings in the future, but maybe just keeping up with inflation. The recommendation, therefore, is to have a portfolio that mixes these alternatives. The same is true in research. There are projects you could do that would be super-cool if they succeeded and gave you a particular answer. They'd make you famous, get you that Nobel Prize your mom has been pestering you about. But either the project might not work at all, or perhaps a negative result would be uninterpretable or uninteresting. High potential reward, high risk. Or, you could go to the nearest 10 populations of your favorite organism, do some sequencing, and build a phylogenetic tree or a phylogeographic model. Guaranteed to work, not very exciting. Low reward, no risk. Pick some of each to work on, and be aware which is which.

Note also that in economics the optimal ratio of risky to safe investments shifts with time: as you age, you have less time before retirement to recover from a crash, so you want to shift your investments increasingly into the safe category. In science I'd say the opposite is true. A consequence of the tenure system is that once people get tenure, they become less risk-averse, more likely to shoot the moon (a card game reference, not a ballistic one) for that wildly risky but high-reward idea. As a grad student, though, if you want to end up at an R1 university (disclaimer, other careers are great too!) don't get sucked into a safe-bet-only philosophy, because it probably won't make the splash you need to be recognized and excite people.

Suggestion 8: Have a toolbox. Whatever question you pick, you'll need a toolkit of skills to answer it. Bioinformatics. Bayesian hierarchical modeling. ABC. Next generation sequencing. GIS. CRISPR. These skills are "just tools". But, sometimes academic departments choose to hire faculty who can bring in skill sets that existing faculty lack (e.g., so we can have you help us analyze the data we collected but don't really know how to use). And, those "just tools" are often highly sought-after by industry. So, if you are thinking of moving into NGOs, or the private sector, often the skills you gain along the way turn out to be far more valuable for landing a job, than the splashy journal article you published.

Suggestion 9: Don't be dissuaded. Here's the riskiest advice yet. If you have a truly transformative idea, don't be dissuaded by nay-sayers. There will be people on your PhD committee, or among your colleagues and peers, who think you are full of $#it, on the wrong track, or that it just won't be feasible. Listen to them. And defend yourself, rather than just abandoning your idea. Sure, you might be wrong. But they might be wrong too. A personal example: Tom Schoener was on my PhD committee. I was intimidated by him – he was so foundational to ecology, so smart, so prolific. So when I presented my research plan, I was initially dismayed by his response. My ideas on disruptive selection and competition depended on the assumption that individuals within a population eat different foods from each other. So, whoever eats commonly-used foods competes strongly, whoever eats rarely-used foods escapes competition, and voila, you have disruptive selection. Tom, however, pointed to a series of theoretical papers from the 1980s by Taper and Case, and by Roughgarden, to argue that selection should ultimately get rid of among-individual diet variation. Therefore, Tom said, most natural populations should be ecologically homogeneous: every individual eating pretty much the same thing as every other individual if they happen to encounter it. But that didn't jibe with my reading of the fish literature. So, I assembled a group of fellow graduate students (as yet uncontaminated by preconceptions on the topic) and we did a review / meta-analysis of diet variation within populations. In a sense, I did it just to prove to myself, and to Tom Schoener, that the real core of my dissertation wasn't a wild goose chase. The resulting paper has turned out to be my most-cited article by far (Bolnick et al. 2003 American Naturalist). And I did it to prove a PhD committee member wrong, on a minor point of disagreement. To be clear: Tom loved that paper and assigns it in his ecology graduate course, and we get along great. But the point is, your committee members and peers have accumulated wisdom that you should draw on, but they also have preconceptions and biases that may be wrong. Defend your ideas, and if you are able to, you might really be on to something.
Tom Schoener


Fads

Part 1 of a series on choosing your research topic

FADS

I might be guilty of stereotyping here, but I suspect relatively few readers of this blog would consider themselves fashion-conscious. Do you go to fashion shows? Regularly read fashion magazines? Discard last month's clothes in favor of the latest trends? That's not something I normally associate with the crunchy-granola environmentally-conscious caricature of an evolutionary ecologist. [if you do, my apologies for stereotyping]












But we do follow fashions in our own way. Science too has its academic fashions, and in particular I'm thinking of fads in research topics (see "Fads in Ecology" by Abrahamson, Whitham, and Price, 1989 BioScience). My goal today is to contemplate the role of fashions, for good and ill, and what you should do about them when planning your own research. This post is inspired by a discussion I co-led yesterday with Janine Caira, with our first-year Ecology and Evolutionary Biology graduate students at the University of Connecticut. The focal topic was, "How to choose a good research question".

A core rule I tell students is: when choosing a research topic, you must have an audience in mind. Who will want to read your resulting paper? How large is that audience, and how excited will they be? If the audience is small (e.g., researchers studying the same species as you), you aren't going to gain the recognition (citations, speaking invitations, collaboration requests) that you likely crave and that will help your career progress. If your audience is large but you are doing incremental work that will be met with a widespread yawn, that's not very helpful either. Ideally, of course, you want to present something that is really exciting to as many people as possible. But the more exciting and popular it is, the more likely it is that somebody has gotten there first.

Which brings me to fads. A fad is defined (in Google's dictionary) as "an intense and widely shared enthusiasm for something, especially one that is short-lived and without basis in the object's qualities; a craze". Intense. Widely shared. And with at least a hint of irrational exuberance (to borrow a phrase from former Federal Reserve Chairman Alan Greenspan).



Fads happen in science, with the caveat that they aren't always irrational exuberance: there are research topics that genuinely have value, but which nevertheless have a limited lifespan. I'll give an example. When I was a beginning graduate student, Dolph Schluter [for whom I have immense respect] had recently started publishing a series of papers on ecological speciation, along with his book The Ecology of Adaptive Radiation, which I heartily recommend. The core innovation was that ecology plays a role in (1) driving trait divergence between populations that leads incidentally to mating isolation, and (2) eliminating poorly adapted hybrids. Both ideas can be found in the earlier literature, of course; few ideas are truly 100% new. But what Dolph did was crystallize the idea in a simple term, clearly explained and solidly justified with data, making it compelling. And suddenly, it seemed to me, everyone wanted to study ecological speciation. There was a rapid rise in publications (and reviews) on the topic. Then at a certain point it seemed like fatigue set in. I began encountering more skeptical conversations: how often might ecological speciation fail to occur, where and why is it absent, how common is it really? At one point, an applicant for a postdoc position in my lab said he/she wanted to work on ecological speciation, and I couldn't help wondering: okay, that's interesting material, but what do you have to say that's new, or is this yet another case study for our growing stockpile of examples? And I think I wasn't alone: the number of papers and conference talks on the topic seemed to wane. It's not that the subject was misguided, wrong, or uninteresting; I'm not saying it was irrational exuberance. Just that the low-hanging (and medium-hanging) fruit had been picked, and people seemed to move on. To drive that point home, below is a Web of Science graph of the peak and maybe slight decline in the number of publications per year invoking "ecological speciation" in a topic-word search. Interestingly, total citations to articles about "ecological speciation" peaked just three years ago, after a steady rise, and the past two years showed somewhat lower total citations to the topic.
[Figure: Ecological speciation articles by year]

Meanwhile, other topics seem to be on the rise, such as the "speciation continuum" (next bar chart), a phrase that Andrew Hendry, Katie Peichel, and I were the first to use in a paper title, in 2009 (it showed up in sentences in two prior papers), and that was the topic of a session at the recent Gordon Conference on Speciation [still not anywhere near a fad: just 72 papers use the term, and there are reasons to argue it shouldn't catch on].
[Figure: Speciation continuum articles by year]
And of course "eco-evolutionary dynamics" and its permutations are fast-rising and very popular these days:
[Figure: Eco-evolutionary dynamics, total citations]
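(As an aside, if you want to make this kind of publications-per-year chart for your own candidate topic, the sketch below shows one way to do it. It assumes you have exported the per-year record counts from a Web of Science topic search to a CSV file; the file name and column names here are hypothetical, so adjust them to match whatever your export actually contains.)

```python
# Minimal sketch: plot publications per year from an exported topic search.
# Assumes a CSV with one row per year and columns named "Publication Years"
# and "Record Count" (hypothetical names; match them to your own export).
import pandas as pd
import matplotlib.pyplot as plt

counts = pd.read_csv("ecological_speciation_by_year.csv")
counts = counts.sort_values("Publication Years")

plt.bar(counts["Publication Years"], counts["Record Count"])
plt.xlabel("Publication year")
plt.ylabel("Number of articles")
plt.title('Topic search: "ecological speciation"')
plt.tight_layout()
plt.show()
```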


Life cycle of a scientific fad:

1) Birth: someone either has a great new idea, or effectively re-brands an old idea in a way that makes it catch on. Sometimes an old idea gets new life from a clever experiment or model (e.g., both reinforcement and sympatric speciation were old ideas that caught fire in the early 1990s and late 1990s, respectively, after new data or theory rekindled the topics). The simplest, and least valuable, way to start a new fad is re-branding: take a familiar idea that's been in the literature for ages, give it a name (or rename it), and pretend it's an innovative concept. Don't do this. It sometimes works, but it really annoys people.
2) The sales pitch. For the idea to become a fad, someone needs to really hit the streets (or the printed pages) and sell it: giving lots of talks, and writing theory, empirical, and data papers in journals where the idea will be seen.

3) People get excited, and start thinking about what they can contribute. There's a lag here, where the idea spreads slowly at first, then accelerates as people find the time to run models and write papers. For empiricists, there's a lag while people design experiments, get funding, do the experiments, analyze, and write. This takes years and doesn't all come out in one burst, so there's an exponential growth phase. This is a good time to get in on the topic. Personally, as a second-year graduate student I read the Dieckmann & Doebeli (1999) and Kondrashov & Kondrashov (1999) Nature papers on the theory of sympatric speciation, and immediately started designing lab and field experiments to test their model assumptions about disruptive selection and assortative mating. That work started appearing in 2001, peaked around the mid-2000s, and I have touched on it only occasionally since then. In short, I was part of the rising initial tide after their theoretical advance rekindled the topic. In the graph below of "sympatric speciation" papers, you can see an uptick after the 1993 paper by Schliewen et al. on Cameroon crater lake cichlids, and again an acceleration after the 1999 theory papers. I came in right in the middle of the wave, and published my AREES paper with Ben Fitzpatrick in 2007, right as it crested and soon began to fall off again.
[Figure: Sympatric speciation papers by year]


4) Fads don't usually go away entirely. Both ecological speciation and sympatric speciation, for example, declined slightly after their peaks (see the graphs above), but they are very much still with us, because they have value. The initial excitement has passed, though; the honeymoon is over.

5) Fall from favor. At some point, it becomes increasingly hard to say something creative and new about a topic. Not impossible, mind you. But grant reviewers and journal editors become increasingly skeptical, and journals that favor innovative and flashy results get harder to publish in. I hit this, sort of, when I briefly toyed with gut microbiome research: we studied how natural variation in diet among individuals affected the gut microbiome. Science reviewed it, and the Editor was enthusiastic but wanted some more manipulative experiments to prove a core claim of ours in a controlled setting. It took a year (of postdoc salary, time, and $10,000 in sequencing) to get the data the Editor asked for. It confirmed our initial claim, beautifully. But in the intervening year, gut microbiome research had become increasingly saturated. To get a Science paper you now needed molecular mechanisms, not just documentation that a phenomenon occurs. The same Editor who had expressed enthusiasm before now said it wasn't interesting enough. I'm not complaining (too much), but I use this to point out that when you hit a fad at its crest, standards of publication become more stringent and it's harder to impress or surprise.

6) Rebirth. Some fads come in waves. Think bell bottoms, or jazz swing-dancing. I'm wrestling with finding a good scientific example, but Lamarckian evolution seems a safe one. Or even sympatric speciation, which Ernst Mayr in the 1960s said was dead, but which, like the Lernaean hydra, would grow new heads again (which it did).

Avoid or embrace the fad?

Given that fads exist, what should you do about them? On the one hand, they represent a ready-made audience. This is the hot topic of the day, and publishing in that area will surely draw many readers to your work, right? Perhaps. That depends on when you are coming in on the fad. Here are some options:

1) Start a new fad. Come up with an idea so brilliant and widely appealing that many people pile on and build on your work. This is a guaranteed ticket to fame, if not fortune. Of course, it rarely happens, and it requires some combination of exceptional brilliance, luck, and good salesmanship. So don't bank on this approach: a lot of attempted new fads quickly become failed fads (see photo below).




2) Catch the wave: contribute to a fad in its early days. This requires reading the current literature closely and widely, and acting quickly on great new ideas as they appear in print (or in conference talks, etc.). You still need a good intuition for what people in your field will find exciting, but less initial creativity than option (1). This is more or less where I came into the sympatric speciation field, with a couple of somewhat skeptical theory papers and some somewhat supportive lab and field experiments on disruptive selection.



3) As a fad nears its peak, the audience is now very large, but truly new ideas are becoming more and more scarce. Still, there are usually new directions you can take it: sure, we know X and Y are true, but what about Z? Be careful, though: as a fad nears its peak, your audience starts to experience some fatigue with the topic and is more likely to say, "oh, it's another paper on gene expression, yawn". This might be a good time to stay away. Or, do a meta-analysis or review that synthesizes the topic, wrapping it all up in an easily accessible package.



4) Be contrarian. Sure, the phenomenon behind the fad exists. But how common is it? How strong is its effect size relative to other things we might get excited by? Might we be over-interpreting the evidence, or being too simplistic? One of the reasons fads go away is that people shift from being excited that a phenomenon happens at all to taking a more measured, quantitative, and objective view. Sure, there's parallel evolution, but are we just cherry-picking extreme cases and ignoring the bulk of traits and situations where evolution is less parallel?



5) Merge fads. There used to be TV advertisements for Reese's Peanut Butter Cups: two people walking down the street, one eating peanut butter with a spoon (really??? who does this?), the other eating a bar of chocolate. They collide, and discover their combined food is so much better than either alone. Some great scientific papers are like Reese's Peanut Butter Cups: they take two familiar subjects and merge them in an unfamiliar way. Two fads put together can make a super-fad.

6) Revive old fads (zombie ideas). Old fads never truly die; they just settle into a quiet, steady trickle of papers that no longer make a big splash. The key thing is, their audience never truly went away; they just reached a point where they moved on. And as with many failed relationships, you often never truly stop loving your ex. So, if you can locate a former fad and give it new life, you have a ready-made audience and a small field of competitors. This is especially easy to do when a previous fad ran out of steam because people in the old days lacked analytical tools that we have now: sequencers, or flow cytometers, or Bayesian statistics, or whatever. If you can apply modern lab or computational technology to an old fad, you might make fundamental new progress on a widely known topic. Doing this requires reading your history, to know where the good zombies are buried. When I was a graduate student, I spent a summer reading Ernst Mayr's Animal Species and Evolution. It's a seriously dry book, packed to the gills with case studies, examples, and ideas. Many of these were abandoned, for various reasons, and are just waiting around to be exhumed, re-examined in light of new perspectives and tools, and maybe re-animated.



I'm sure there are more variants on this theme, but I think the point is made: fads are a great way to make your name in academic science. They are also a trap if you hop on the bandwagon just as it goes over the cliff into obscurity. To know which is which, you need to read, read, and read, and go to conferences to talk and listen, to get a sense for the pulse of your field.

Now, your turn: 

What do you see as past or passing fads in your field? How can we know whether something is a fad-to-be and get in on it early?




