Monday, February 5, 2018

Why have a gatekeeper, and who should it be?

Let's face it: scientific publishing is changing, fast. Open access journals. Online-only journals. Preprints. Post-publication peer review. For-profit redistribution services like Academia.edu or ResearchGate that siphon web traffic away from the original publishing journals, to their detriment. Predatory journals. Accelerated peer review. Double-blind peer review. Open peer review. Et cetera, and then some.

There are, without a doubt, good ideas in there. And there are ideas that are maybe utopian but impractical: open to abuse, or just not likely to work well in practice. So, when a new idea pops up on the landscape, it is worth taking skeptical notice. This past week, the President of the Howard Hughes Medical Institute, Erin O'Shea, co-authored a policy essay, "Scientific Publishing in the Digital Age", with the HHMI Chief Strategy Officer Bodo Stern. This is worth taking notice of, because these are two very smart people, and because they run the largest private non-profit biomedical research program in the world. HHMI helped kick-start the PLoS journals. Later, they helped kick-start eLife. So when HHMI's leaders say scientific publishing should change, you can bet they will implement the policy they propose, and implement it with gusto and deep pockets.

So, I'm going to take a skeptical read through their article, and write my reaction as I go. I want to emphasize a few things. The following is my personal opinion. It is also my knee-jerk reaction, which is risky to blog. So if you disagree with me on something, try reasoning me out of it rather than getting agitated. Because people are certainly getting agitated about the changing publishing landscape, or agitated because some of us are traditionalist sticks-in-the-mud.

Before I dive into Stern and O'Shea's article, let's see if we can agree on a few (almost) universally shared goals and values, just to help remind readers we are on the same team, aiming at the same goals, even if we disagree on how to get there.

Research Quality:
   Goal 1) Do not publish papers that are fraudulent, logically flawed, or technically incorrect, that misinterpret or misrepresent results, that are written in incomprehensible prose, and so on... This is the analogue of "do no harm" in medicine. We don't want to promulgate false conclusions, or fraud. We don't want scientific publishing platforms to give voice to unsound anti-vax papers, unscientific young-earth creationist articles, or racist or sexist rants.

   Goal 2) Take flawed papers and, if possible, improve them through (iterative?) review and revision until they are good enough to publish (i.e., they no longer violate Goal 1).
   Goal 3) Maximize the readability of the paper. This spans both the technical veracity of the research and the quality of the scholarship and its presentation (compelling prose, clear figures, etc.).

Speed:
   Goal 4) Get new ideas, methods, data, results, and insights into the public sphere as quickly as possible, without violating Goal 1, and hopefully also meeting Goals 2 and 3. There is a trade-off here. Meeting Goals 1-3 typically requires in-depth review. Hurried review is often cursory. So if we are to get good quality independent reviews, we need to be willing to sacrifice time. If we are to revise thoroughly and properly, we need even more time. So, we often must balance Goal 4 against Goals 1-3. Also, good copy-editing is valuable (it helps mostly with Goal 3), and copy-editing takes time.


Financing:
Publishing costs money (e.g., see this earlier blog), so someone has to cover these costs.
   Goal 5) Make taxpayer-funded research freely available to taxpayers. Proponents of Open Access also argue that open-access papers will be read more, cited more, and thus have more impact, all else being equal. Typically, this means the authors pay. As has been noted elsewhere, this creates a potential conflict of interest in which predatory journals benefit from selling authors the right to publish, leading to violations of Goal 1.
   Goal 6) Let researchers with limited funding publish. Graduate students doing independent work. Postdocs operating on a shoestring budget. Labs between grants. They all have valid ideas, good data, things to say. But an article in some top Open Access journals can cost upwards of $5,000. That's a huge barrier to entry into publishing. So, Goal 5 and Goal 6 pose a Gordian knot of a trade-off. Personally, I prefer the American Naturalist's model, in which those who can pay to publish open access do so, and those who can't get cheap page charges (or even a waiver), with operating costs met by subscription fees.


Reaching a target audience:
   Goal 7) It should be easy for scientific readers to find the highest-quality articles that interest them most, without having to wade through a thicket of irrelevant papers. It's also nice to be steered to the papers that are most likely to change how we think about, or how we do, our own research. This was a problem with PLoS One: too much chaff compared to the wheat (and there are good papers there, to be sure), and poorly organized along conceptual themes. This goal is getting easier and easier with recommendations from software like SciReader, and from social media (though the latter can fall into groupthink and can amplify cultural biases).

   Okay, I'm sure you can add other Goals (post a message!), but that'll do for now. We'll use these in my comments below on the Stern and O'Shea essay.

One more thing before I dive in: I have a few disclaimers. First, I am Editor-In-Chief of The American Naturalist, which is a smallish non-profit society journal. It is still printed on paper and sent to libraries. So, I'm part of the traditional "System". Second, the following is my personal opinion and not the stance of the journal or its publisher.

So here we go. This'll be sort of like a live Twitter feed as I read. If you want to read along with me, here's the link again: http://asapbio.org/digital-age

I've read but won't comment on the abstract. Presumably these ideas are all developed more below.

Introduction:

So the Royal Society started peer review in the 17th century? I wonder how that worked then. I know that The American Naturalist's editors in the 1860s through 1890s used a very informal review system at least sometimes, but that often involved showing it to the professor down the hall, who would write things like "it is one of the most miserable and inadequate things ever printed". Formal peer review as we know it now seems to have started later, maybe as late as the 1950s, when the instructions for authors printed in the journal mentioned the need to submit two copies of a manuscript, for review purposes.

Stern and O'Shea write: "It made sense for publishers to charge consumers subscription fees in exchange for hard copies of journals and to establish editors as the gatekeepers of publishing, when printing and distributing scientific articles was expensive and logistically challenging. These limitations no longer apply." I want to point out that many journals are still in print, and there are benefits to print journals (the serendipity of discovering something unlooked-for when leafing through, for instance; and I still retain info better from the printed page, but maybe that's just me). Also, journals still cost money to run. There's a website to maintain, staff to handle communications with authors and editors, copy editors: there's a lot that goes on behind the scenes, and that's not free. Still.

Next S&O'S are reiterating arguments for Open Access, but saying it isn't enough. They repeat the claim that paywalls are a barrier that slows science and limits who can build on existing knowledge. True, to a point. For many journals (like AmNat), the large majority of universities have subscriptions. The University of Chicago Press gives away thousands of institutional subscriptions for free to universities in impoverished and middle-income countries. So more people have access than you might think. But it is true that you might not have access from home, which might stop you from downloading and reading my students' excellent AmNat papers, if you didn't want to take the time to log in remotely or from campus. And high-school teachers can't easily read paywalled scientific papers for their classes. There is a problem, for sure. Then there are new journals like Nature Ecology and Evolution, which most libraries won't subscribe to for some time. The University of Texas library won't subscribe for at least 5 years, they told me, when I asked whether or how I could access my own publication there (Stuart et al. 2017).

Now S&O'S are arguing that "The subscription price that publishers charge is inflated because it is not based on the specific value that publishers add. By imposing a toll for access to scientific articles that were created and evaluated by scientists for free, publishers hold these scientists’ products “for ransom,” charging for the whole product instead of for the publisher’s specific contributions to that product." There's some truth to this, especially for many journals from commercial publishers. But as the Editor of a not-for-profit journal, I can say that the cost we hand off to consumers covers the publisher's contributions, and that's it. So I do object to S&O'S stating this as a broad generality, painting us with the same brush as some journals from high-profit publishers you might name.

Ah, good, S&O'S do recognize the conflict of interest that Open Access creates, favoring a pay-to-play system where predatory journals and fake editorial boards can thrive (violating Goal 1). Their solution is to make the review process transparent: publish reviews, so that fake-review journals are exposed for the frauds they are. I agree that will help. But at the end of the day, an author who needs more publications on their CV for promotion may still gladly pay to publish something shoddy at a journal that does half-hearted review. I'm not convinced this solution will fix the problem.

Wrapping up the section on Open Access. I agree with most of what they say here, even if they over-generalize a bit (in ways that directly concern the journal I edit). But they totally ignore Goal 6 (cheap publishing for authors), as you might expect for people with a history of great research funding. HHMI started eLife, which initially was free to authors AND readers. But it's no longer free to authors, sadly.

Now we are on to Impact factors and the academic incentive system.
"Journal name is used as ... an indicator of quality". That's mostly true, though we can all think of papers in Nature or Science where we thought "how did THAT get published?" - but maybe that's just sour grapes (more on that soon, I think). I totally agree with them here, that it would be nice if articles were judged on their own merits and not so much on the merits of the articles' neighbors. To use a personal example my 2003 AmNat paper is cited 10 times more than my 2001 Nature paper. But the latter is surely what got me my job interview at UT Austin as a finishing PhD student. Okay, so we should judge papers on their own merits. I don't think anyone disagrees in principle. But I can't read everything, and so I rely on someone (Editors, reviewers) to collate the things most likely to interest me into nice succinct tables of contents (that's meeting Goal 7).

Interesting point here: "the opinions of two to four peer reviewers... by chance [may] not be representative". Everyone with experience as an author knows this - things get published with a casual nod from someone who doesn't take the time. Or a great paper can be misunderstood and savaged by someone with an axe to grind or not enough coffee (though a reminder: if a reviewer misunderstands, it may be that the author wasn't clear enough). But in the context of this essay, this made me think about the role of sample size: the more people who read and rate and comment on a paper, the more accurate the evaluation is. Let's imagine each paper has some 'quality' parameter. Sampling N=2 isn't really enough to estimate that parameter accurately; we have a high standard error. It is only with many reads and ratings/comments by readers that we converge on a high-confidence measure of a paper's quality. Do we need an Amazon 1-to-5-star rating system? But would it be used?
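
Here's a minimal simulation sketch of that standard-error logic, in Python (everything here is hypothetical: a true quality of 3.5 on a 1-to-5 scale, and individual ratings that scatter around it with SD 1):

    import random
    import statistics

    # Each paper has one true 'quality'; each rater reports a noisy estimate of it.
    random.seed(1)
    TRUE_QUALITY = 3.5   # hypothetical true quality, on a 1-to-5 scale
    RATER_SD = 1.0       # hypothetical scatter among individual raters

    def mean_rating(n_raters):
        """Average of n independent, noisy ratings of the same paper."""
        ratings = [random.gauss(TRUE_QUALITY, RATER_SD) for _ in range(n_raters)]
        return statistics.mean(ratings)

    # The standard error of the mean rating shrinks as 1/sqrt(N):
    for n in (2, 10, 100):
        means = [mean_rating(n) for _ in range(5000)]
        print(f"N={n:>3} raters: SD of the estimated quality = {statistics.stdev(means):.2f}")

With N=2 the estimate wanders by about 0.7 quality units; with 100 raters it is pinned down to about 0.1. That's the statistical appeal of crowd-sourced evaluation, whatever its social pitfalls.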

S&O'S point out correctly that hyper-competition for high impact factor journals is creating an incentive system that can drive people to fraud (violating Goal 1).

Integration of peer review with the publishing decision.
On to the next section. Here, they take issue with the privacy of reviews. "Nontransparent". "Most journals keep peer reviews a confidential exchange among editors, reviewers, and authors, which gives editors flexibility to use their own judgement in deciding what to publish". I don't quite see the link between this and impact factor, as they claim, but maybe I'm missing something. This non-transparency is certainly true. Before I read on about why they dislike it, I'll mount a pre-emptive defense for the sake of argument. Submitted manuscripts contain omissions, mistakes, and other potentially embarrassing flaws. Many are minor, but some are big. A young scientist is nervous enough submitting a paper for the first time, exposing themselves to the criticism of strangers. How much more horrifying if that criticism were broadcast for all to see? I suspect many trainees especially, but also senior scientists, when asked, would really rather have a chance to quietly correct mistakes outside the limelight. We always tell our students and postdocs when they write, speak, or interview: "put your best foot forward first", or some variant on that. Public posting of peer review does the opposite. This may be a major disincentive to anyone with self-doubt or anxiety over their place in science. That's my guess, at least.

S&O'S write: "The main purpose of peer review should be to provide feedback to authors in order to improve a manuscript before publication. But, in service of the publishing decision, peer review has morphed into a means of assisting editors in deciding whether a paper is suitable for their journal." This is obviously true, especially for top journals. As Editor at AmNat, I can't publish everything that comes in. We have a limited page budget and limited copy-editing staff. So I have to be picky. And I feel I have a duty to my readers, to bring them the papers that I and my co-Editors think will be most likely to interest them. That's not a decision I take lightly, and I am keenly aware that "whether a paper is 'novel enough'" (as S&O'S put it) is subjective and the hardest criterion to use. But that doesn't mean I totally agree with their characterization of peer review as a means for making this cut. Usually that cut is made without peer review, by just me, or me and an Associate Editor. When I do so, I explain my logic. I handle a few papers every day, and when I make an editorial decline because something isn't suited to our journal, I often write a page, sometimes several pages, of my own comments and recommendations. Our goal, as a journal, is to leave every paper better than when it came to us, whether or not we publish it. In this regard, the intense efforts of the reviewers, Associate Editors, and Editors are very much focused on improving the papers. So I disagree with the claim by S&O'S quoted above. At AmNat, review is still very much focused on helping the paper. If it weren't, the AEs and Editors wouldn't bother writing long and careful commentaries on papers we reject. The fact that we do sets our journal apart, to be sure. We are proud of that (and a bit exhausted).

"The intense competition for publication in high impact factor journ
als likely increases how often and to what extend [sic] scientific articles are revised before publication". Hm. That mistake might have been caught by a reviewer or copy-editor.

"While most papers are significantly improved through revisions suggested by reviewers and editors, there is a sense among scientists that a significant fraction of the time spent on revisions, resubmissions and re-reviews is not adding sufficient value and needlessly delays the sharing of findings" Maybe I'm just a crappy scientific author, but I consistently feel that my articles are improved by review and revision. I am always surprised by the sentiment in the quote above. So, about 6 months ago I did a totally unscientific poll. About half of the responses indicated they felt review greatly improved their paper. About half said it somewhat improved. And only about 5% (if I remember right) said review had no effect or negative effect. That 5% may be a very vocal minority.

"it is time to acknowledge that peer review before publication is just the initial step in scientific evaluation": Interesting. I don't disagree, but that's not a reason to water down peer review either, or change how publish / not publish decisions get made.

Now we hit the authors' recommendations. This is where it gets fun, I bet.

Improvements to peer review.
- Publishing reviews along with the papers. My main problem is what I noted above: the disincentive arising from authors' fear of having their mistakes aired. Would reviewers get credit? Named? Can that go on their CV? That might create an incentive to do more reviews, and more careful reviews. Certainly when I became an Associate Editor, and knew my name would be listed at the end of a paper, I became more cognizant of doing a thorough job.

- Consultative peer review: conversations among reviewers and editor before a decision. I like this idea. It gives everyone a chance to correct each other's misreadings. At present, AmNat's Editorial Manager web system isn't designed to do this in a way that would maintain mutual anonymity, but that's just a technical barrier. The other barrier is the extra time it takes. In another unscientific poll I did on Twitter, it seemed most people would be okay with this as long as it didn't delay publication by more than a couple extra days.

- Peer reviews should focus on technical quality: are the conclusions warranted? I do often see reviewers commenting that they don't think the paper is suitable for our particular journal, though it would be fine elsewhere. Given that we have a constraint on how many pages we can publish, I find that slightly useful, but for the most part I reach that conclusion on my own based on the technical details. I rarely pay close attention to the 'suitability' comments, and sometimes override them.

- Ah, now they are saying 'Give recognition for peer review' (see my comment three paragraphs up). Specifically, they want reviews signed. I agree that recognition for good reviews is important (maybe AmNat should come up with some sort of award for great reviewing). But the objections are well known. When a reviewer is critical, being outed can create animosity that can hurt younger reviewers. There's an unrecognized flip side here: when a reviewer is positive and names herself, this creates a feeling of obligation / patronage. For instance, I know now that Joe Travis and David Reznick reviewed my 2017 Nature paper. And that feels more awkward for me than if I had known they reviewed and rejected it. Because now I feel like I owe them something as a thank-you.

Next up: "Put dissemination of scientific articles in the hands of authors".
This is weird. They argue that funders trust scientists to do the research, so we should trust them to choose what, when, and how to publish. "why don't we ask independent parties to oversee experimental design and execution as well?". Um, two things. First, we do: they are called grant panels. Unlike at HHMI, at NSF and NIH you need to get your experimental design past a critical panel. Second, we do: manuscript review serves this purpose.

So here's what they are arguing for, this is the crux: Authors submit a paper, it gets reviewed. Authors can choose to revise (or not), then decide whether or not to publish. It's sort of like putting something on BioRxiv, but after getting reviews. You can heed the reviews or not, then post on BioRxiv or not. Up to you. The barrier to publishing is not an editor, but your own self-respect: have you gotten enough feedback and done enough revision that you are comfortable posting it?

Okay, right away I have a problem with this. By this criterion, someone could go ahead and publish creationist rants and call them scientific publications. You'd be really surprised what comes in the door to journals: creationism, offensively sexist or racist submissions, and so on. Heck, some people tried to publish the bigfoot genome. It wasn't until it was soundly rejected (with review!) from some respectable journals that the authors bought their own predatory journal and self-published in what they said was an open-access reviewed journal. As soon as you let authors be "the decider", I promise there will be bunk. And that bunk will inflame the creationist movement and intelligent design (the latter was smacked down by the judge in the Kitzmiller v. Dover court case specifically because they weren't publishing in scientific journals. Now we get to let them decide?)

That's my main knee-jerk objection; now let's see what S&O'S have to say in favor:

"Since authors have such a clear self-interest in publishing their own work, nobody would equate the author’s decision to publish with a stamp of quality. This stamp of quality has to come from elsewhere, including the published peer reviews and post-publication evaluations described below"  Okay, I can see that. But that means that newly published papers are not organized into batches of higher-quality articles that are more likely (on average) to be worth my very limited time. That is, this means that Goal 7 is set aside until papers have had time to develop a following, or not.

"the peer reviewers would direct their comments to the authors focusing their peer reviews on improving the manuscript as opposed to advising the editor on suitability for a journal." Yes please!  But actually, this is the standard way people review things, at least at AmNat, and at Evolution, and most society journals. I think this is mostly a problem if you are trying to mostly publish in Nature and Science and PNAS and Cell. At those journals, the reviewers have all had their own rejections. They then have sour grapes, and think "well if my paper wasn't good enough, neither is this". So I think S&O'S are right, but in a limited domain; more often for us publishing mortals we work with journals where reviewers already take this advice.

"the time and resource savings would be significant: authors wouldn’t have to perform experiments that they deem unnecessary; " Again, this is more of a Nature & Science problem, not so common in our fields.

"demanding revisions and multiple rounds of review": When revisions are cosmetic, Associate Editors shouldn't be sending things out to re-review anyway. When revisions are substantial (new data, new statistical analyses, substantially large chunks of new text), it is entirely appropriate to have that new material reviewed, which means another round of review to ensure quality (Goal 1). That's appropriate. To be sure, I've had papers sent out to re-review and thought "you're kidding me, this was cosmetic, just make the decision yourself and speed it up".

Now S&O'S are tackling possible objections. The first one is what I raised above as my knee-jerk reaction. I think I'm more cynical than they are. They write: "Few authors will knowingly want to put out poor-quality work." I'm not so sure. As long as promotion is based on counting papers, this will be a hard sell. And as long as there are crackpots out there with pseudo-scientific ideas, their proposal will be an open invitation. (By the way, is any of that crackpot stuff showing up on BioRxiv?)

Here's the most compelling part: "The peer reviews themselves will be a powerful restraint on the author, since they will be published together with the paper (see above). An author may, for example, prefer to withdraw a paper submitted to a journal if the reviews reveal fundamental flaws that cannot be addressed with revisions. And if an author decides to publish a paper despite serious criticism from reviewers, at least those criticisms will be accessible to readers, who can decide for themselves whether to side with the author’s or the reviewers’ point of view." In a utopian world, I totally agree this is a great model. Most scientists are conscientious, careful, and will use reviews as a source of feedback to improve, then publish or not publish their work. But wait: do we REALLY think most people would say "oh, that's a great point, there was a mistake, I'll just delete this paper that represents a year of my life"? That's a hard thing to do (I can attest personally).

So "where does this leave journals and editors", they ask. They envision a hybrid model, part way between tradition and the open wild-west of BioRxiv or F1000Research. They suggest that papers go to to journals (still organized by theme, to meet my Goal 7). Editors assign reviewers, as we have done. But, Editors stop making the publish/don't publish decisions. Instead, that is up to the authors, once they get reviews.

Ah! Here we go: Here near the end they write: "In rare cases, editors may need to step in and stop publication of an article when the peer review process reveals that publication would be inappropriate – for example, in cases of plagiarism, data fabrication, violation of the law, or reliance on nonscientific methods."  So the Editor still does some triage to keep out the riff-raff.

And here's another nod to something I was objecting to: "At the moment, society journals are between a rock and a hard place. They can’t afford to switch to open access, since the open access fees required to replace their subscription income would be too high for readers. On the other hand, they feel considerable pressure from for-profit publishers who are launching competing journals at breakneck speed. Academic publishers risk becoming obsolete if they don’t adjust." That is a concern. The solution they propose is that society journals charge for peer review, then basically guarantee that authors can publish when they have received the reviews and feel ready to do so. They figure journal income goes up, reducing the per-article open access fee. Okay, but that assumes that the journal's costs are flat. In reality, copy-editing and data-archiving fees and some other features are charged on a per-article basis, so cost per article will be less sensitive to volume than S&O'S think, especially when most reviewed articles eventually get published. And of course this won't work for print journals, which wouldn't be able to keep page numbers within budget for the printer.
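
To see why, here's a back-of-the-envelope cost sketch (all numbers invented purely for illustration):

    # Toy model: a journal has fixed overhead plus a per-article cost
    # (copy-editing, data archiving). Extra publishing volume dilutes
    # only the fixed part; the per-article floor never budges.
    FIXED_OVERHEAD = 500_000    # hypothetical annual fixed costs, in dollars
    PER_ARTICLE_COST = 1_500    # hypothetical copy-editing + archiving per paper

    def cost_per_article(n_articles):
        """Average cost the journal must recover per published article."""
        return FIXED_OVERHEAD / n_articles + PER_ARTICLE_COST

    for n in (100, 200, 400):
        print(f"{n} articles/year -> ${cost_per_article(n):,.0f} per article")
    # 100 -> $6,500; 200 -> $4,000; 400 -> $2,750

So doubling the volume never halves the fee, because the per-article costs scale right along with it.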

The last part of the essay is about post-publication processing. They argue that after an author decides their paper is ready, the paper and its reviews go online. Then, the reviewers and/or subsequent readers can 'tag' the paper with various kinds of tags: for rigor, interest level, data sharing, code review, data downloads, citations, pdf downloads. This is crowd-sourcing the process of rating and ranking papers for my Goal 7. It's like going on Amazon and seeing a product has lots of positive reviews, though more multi-dimensional in the kinds of metrics. What could possibly go wrong giving people the chance to comment & review & tag things online????? (Hint: read this Washington Post article on scientists posting reviews on Amazon, if you haven't already.)
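
For concreteness, here's a minimal sketch of what that multi-dimensional tagging might look like as data (the axes and the 1-to-5 scoring are my own invention, not anything S&O'S specify):

    from collections import defaultdict
    from statistics import mean

    AXES = ("rigor", "interest", "data_sharing", "code_review")
    tags = defaultdict(list)   # axis -> list of reader scores for one paper

    def tag_paper(axis, score):
        """Record one reader's 1-to-5 score on one axis (invented API)."""
        assert axis in AXES and 1 <= score <= 5
        tags[axis].append(score)

    tag_paper("rigor", 5)
    tag_paper("rigor", 4)
    tag_paper("interest", 2)

    # Summarize each axis separately, instead of collapsing to one number:
    summary = {axis: (round(mean(scores), 2), len(scores))
               for axis, scores in tags.items()}
    print(summary)   # {'rigor': (4.5, 2), 'interest': (2, 1)}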



Now here's the interesting point where they start to back-pedal a little bit. They say that although the editor gives up the role of gatekeeper, the editor could place a warning tag on a paper, basically saying the article was published by the authors against the reviewers' recommendations. A "read the reviews carefully & take this with a big grain of salt" tag. The Editor could place high general-interest tags on the articles they would normally publish, and low general-interest tags on articles they wouldn't usually touch.

Well, that's it. I really should have spent tonight doing some data analysis, but their essay was interesting to read and thought-provoking. And I, for one, benefited from writing my thoughts as I read it.

So, will AmNat implement this? No, not soon.

Update: Based on a comment on their article, Bodo Stern shifted from "Tags" to "Badges". To which I must, in a fit of late-night infantile humor, respond:


