I read the above paper with a strange mixture of agreement and
annoyance. Agreement comes from the fact that citation metrics indeed are NOT a good
way of measuring research quality. Annoyance comes from the fact that the
paper represents a very cynical view of academia that is most definitely not in
keeping with my own experience. In this post, I want to provide a counterpoint
in three parts. First, I want to summarize the points of agreement that I have with
the authors – especially in relation to journal impact factors. Second, I will argue
that the “gaming” the authors suggest academics engage in is, in fact,
extremely uncommon. Third, I will explain how evaluation procedures that do not
involve citation metrics are also – indeed probably more so – subject to gaming
by academics.
But before that, I need to comment on another instance of extreme cynicism
in the paper that sends a very bad and incorrect message to young academics. Specifically,
the authors argue that academia is an extremely competitive profession, and
that academics are constantly under pressure to “win the game” by
defeating their colleagues. I am not doubting that space in any given journal
is limited and that jobs in academia are limited. However, I need to provide
several less cynical counterpoints. First, space in journals overall is NOT limited,
especially in this open access era. There are tons of journals out there that
you can publish in – and many do not have any restriction on numbers of papers
accepted. So, while competition is stiff for some journals, it is not stiff for
publication per se. (Yes, rejection is common but there are plenty of other
journals out there – see my post on Rejection and How to Deal with It.) Second, although
competition for any given job is stiff, competition for a job in academia or
research is much less so. For instance, a recent analysis showed that the number
of advertised faculty positions in Ecology and Evolution in the USA was roughly
the same in that year as the number of PhD graduates in Ecology and Evolution
in the USA. Part of the explanation for this balance is that many of the jobs were at institutions
that do not grant PhDs. So, the issue isn’t so much the availability of jobs overall,
but rather how particular a job seeker is about location or university or
whatever – see my post on How to Get a Faculty Position. (Please note that I realize many
excellent reasons exist to be picky.) Moreover, many excellent research jobs
exist outside of academia. Third, the authors seem to imply that getting tenure
is hard. It isn’t. Tenure rates are extremely high (>90%) at most universities.
In short, academia is NOT universally an exceptionally competitive endeavor (except for big grants and for big-shot awards) – rather, some individuals within academia, as in other endeavors, are extremely competitive. You do not need to be a cutthroat competitive asshole to have a rewarding career in academics and/or research. (See my post on Should We Cite Mean People?)
Now to the meat of my arguments – a few disclaimers will
appear at the end.
Citation metrics are imperfect measures of research
quality
The authors point out that some funding agencies in some
countries, in essence, “do not count” papers published in journals with low
impact factors. This is true. I have been on several editorial boards where the
journal bounces back and forth across the impact factor = 5 threshold. Whenever
it is over 5, submissions spike, especially from Asia. This is obviously complete
nonsense. We (all of us) should be striving to publish our work in ethical
journals that are appropriate for the work and that will reach the broadest
possible audience. Sometimes these factors correlate with impact factor –
sometimes they do not.
As an interesting aside into the nonsense of journal impact
factors, the journal Ecology Letters used to have a ridiculously high impact
factor that resulted from an error in impact factor calculation. Once that
error was corrected, the impact factor decreased dramatically. The impact
factor is still relatively high within the field of ecology – perhaps as a
lasting legacy of this (un?)fortunate error early in the journal’s history.
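For readers who have never looked under the hood, here is a minimal sketch of what the standard two-year impact factor actually computes – the counts in the example are invented purely for illustration.

# Minimal sketch of the standard two-year journal impact factor:
# citations received this year to the items a journal published in the
# previous two years, divided by the number of citable items it
# published in those two years. All numbers below are invented.

def two_year_impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    return citations_to_prev_two_years / citable_items_prev_two_years

# e.g., 1200 citations in 2023 to the 240 citable items published in 2021-2022:
print(two_year_impact_factor(1200, 240))  # 5.0 -- right at the threshold mentioned above

A ratio like this says something about a journal’s recent average, but nothing about the quality of any individual paper in it – which is part of why leaning on it so heavily is nonsense.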
The paper goes on to point out that this emphasis on impact factor
is killing formerly important (often society-based) journals. This is true and
it is a damn shame. Importantly, however, an even bigger killer of these
journals is the rise of pay-to-publish open access journals, especially those that accept
referrals from more exclusive journals. The “lower tier” journals in a specialty
are now in dire straits owing to falling submissions. Yet it is those
journals that built our field and that deserve our work.
I also dispute the paper’s argument that submissions
serially work their way down the impact factor chain. That certainly does
sometimes happen but, if one excludes the general journals (Science, Nature,
PNAS), I often move up to “better” journals in my submissions – and this is
often just as effective as moving “down” the journal pile. In the past year, one paper rejected from The American Naturalist was accepted right away at Ecology Letters, and another paper rejected from Ecology Letters was accepted right away at PNAS.
I also object to the paper’s cynical view of reviews and meta-analyses.
They seem to think these are written to game the citation system. I disagree.
They are written to give a synthetic, comparative, and comprehensive view of a
given topic. They are a fantastic way for early career researchers to have an
impact beyond just the very specific empirical contributions from their own study
systems. Yes, these types of papers are heavily cited – see my post on What if All My Papers Were Reviews (the table below is from that) –
but that reflects their utility in their intended function, not an effort to
game the citation system. I highly recommend that early career researchers
write these papers for these reasons, and also because they tend to be easier
to publish and to attract more attention. (Attracting attention to your ideas
is important if you want your research to resonate beyond the confines of your
specific study system.)
Very few academics are “gaming” the system
The authors have a very cynical view of academics. They state
that researchers try all kinds of tricks to increase their citation rates,
including gratuitously adding their names to papers, adding additional authors
who do not deserve to be included on a paper (perhaps the PI’s trainees), and
creating quid pro quo citation cabals, where the explicit or implicit agreement
is “you cite me and I will cite you.” I know this gaming does occasionally happen
– but it is extremely rare or, at least, minor.
As one concern, the paper argues that many authors currently
on papers do not belong there. As evidence they refer to the increasing number
of authors on papers (the figure below is from the paper). No one is disputing this increase – but why is it
happening? Some of it – as the authors note – is the result of needing more people
with more types of complementary (but not duplicate) experience for a given paper.
This explanation is definitely a major part of the reason for increasing author
numbers. For instance, modern statistical (or genomic, etc.) specialization
often means that statistical or bioinformatic experts are added to papers even
when that analysis is their only contribution to the paper.
However, I suspect that the majority of the increase in numbers
of authors on papers is more a reflection of how, in the past, deserving
authors were EXCLUDED from authorship whereas now they are more often included. For
instance, a recent examination showed that women contributors to research tended
to be in the acknowledgments but not in the author list. I would argue that,
now, we more often give credit where it is due. Consider the lead collectors of
data in the field. Obviously, every person that contributed to data collection
in, for example, long term field studies shouldn’t be included on every paper
resulting from those data. However, the key data collectors surely should be included –
without them no one would have any data. Stated in a (partly) joking way, once
data are collected by the field crew, any hack can write a paper from the data.
Self-citation is another perceived “gaming strategy” that is
much discussed. Yes, some people cite their own work much more than do others,
a topic about which I have written several posts. (For example, see my post on
A Narcissist Index for Academics.) But it is also clear that self-citation has only a
minimal direct influence on citation metrics. That is, self-citation obviously doesn’t
directly influence numbers of papers, it has only a modest influence on total
citations, and it has minimal influence on a researcher’s h-index. (See my post
on Self-Citation Revisited.) Both of these latter minor effects become increasingly weak as the author
has more and more papers.
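To make that concrete, here is a minimal sketch – with entirely invented citation counts – showing how stripping out self-citations barely moves the h-index once an author has a reasonable number of papers.

# Minimal sketch: h-index with and without self-citations.
# All citation counts below are invented purely for illustration.

def h_index(citation_counts):
    # Largest h such that at least h papers have at least h citations each.
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
    return h

total_cites  = [120, 95, 80, 64, 50, 41, 33, 27, 22, 18, 15, 12, 9, 7, 4]
self_cites   = [5, 4, 4, 3, 3, 2, 2, 2, 1, 1, 1, 0, 0, 0, 0]
without_self = [t - s for t, s in zip(total_cites, self_cites)]

print(h_index(total_cites))   # 12
print(h_index(without_self))  # 12 -- the h-index does not budge in this example

One can, of course, construct cases where self-citation shifts the h-index by a point or two, but the more papers an author has, the harder it is for self-citation alone to move the number.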
Beyond all of this, my personal experience is that PIs are
extremely sensitive to these gaming issues and would be mortified to be involved
in them – regardless of whether or not someone knows about it. Most PIs that I
know actively seek to not over-cite their own work, to make sure that only
deserving authors are on papers (but that deserving authors are not missing), to
publish in society journals, and so on.
Other assessment methods are also gamed – probably even more so
The authors argue that, for the above and other reasons,
citation metrics should be down-weighted in evaluation for grants, positions,
tenure, promotion, raises, and so on. Sure, fine, I agree that it would be great
to be able to somehow objectively judge the “quality” of the work independent
of these measures. So how does one do that?
Imagine first a committee of experts that read all of the
work (without knowing the journals or citation rates) of the people in a given “competition”
and then tries to rank the candidates or proposals. Sounds good in theory, but –
in practice – each reviewer is always at least somewhat biased toward what they think
is high quality, which I can tell you from experience is highly variable among
evaluators. From having been on many grant review panels, award committees, and
so on, I can assure you that what one evaluator thinks is important and high
quality is often not what another evaluator thinks is important and high
quality. If you want a microcosm of this, just compare the two reviews on any
one of your submitted papers – how often do they agree in their assessment of
the importance and quality of your work?
This evaluator bias (remember, we are postulating that they don’t
know impact factors or citations or h-indices) compounds on increasingly
interdisciplinary panels. I have served on a number of these and
I simply have no objective way to judge the relative importance and quality of
research from physics, chemistry, math, engineering, and biology – and that is
before getting into social sciences and the humanities. The reason is that I
only have direct knowledge of ecology and evolution. So, in these panels, I –
like everyone else – have to rely on the opinions of others. What does the expert in the
room have to say about it? What do the letters of recommendation say? What did
the chair of the department have to say? And so on. Of course, these opinions are
extremely biased and written to make the work sound as important as possible –
and, believe me, there is a major skill to gaming letters of recommendation. In
short, letters and explanations and expert evaluations are much more susceptible
to “gaming” than are quantitative metrics of impact.
So we are back to considering quantitative metrics. The
reality is that citation rates or h-indices or whatever are not measures of
research QUALITY; they are measures of research volume and research popularity.
As long as that is kept in mind, and standardized for variation among disciplines
and countries and so on, they are an important contributor to the evaluation of
a researcher’s performance. However, journal impact factors are complete
nonsense.
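On the standardization point above: one simple (and admittedly crude) approach that is sometimes used is to divide each paper’s citations by the average for papers of the same field and year. The baseline values in this sketch are invented; real baselines come from large bibliometric databases.

# Minimal sketch of field- and year-normalized citation impact.
# Baseline values are invented; real ones come from bibliometric databases.

field_year_baseline = {("ecology", 2020): 14.0, ("physics", 2020): 22.0}

def field_normalized_impact(papers):
    # Mean of (citations / field-and-year average); 1.0 means "typical for the field".
    ratios = [p["citations"] / field_year_baseline[(p["field"], p["year"])] for p in papers]
    return sum(ratios) / len(ratios)

papers = [
    {"field": "ecology", "year": 2020, "citations": 28},  # twice the ecology baseline
    {"field": "physics", "year": 2020, "citations": 11},  # half the physics baseline
]
print(field_normalized_impact(papers))  # 1.25

Even then, the result is still a measure of volume and popularity, not quality – the point above stands.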
Conclusion
Stop worrying about trying to game the system – everyone is definitely NOT “doing it.” Study what you like, do good work, seek to publish it in journals where it will reach your audience, and stick with it. The truth will out.
A few disclaimers:
1. I know the first and last authors of the paper reasonably well
and consider them friends.
2. I have a high h-index and thus benefit from the current
system of emphasis on citation metrics. I note my citation metrics prominently
in any grant or award or promotion competition. See my post on Should I be Proud of My H-Index.