Sunday, December 16, 2018

Abiding in the midst of ignorance

"Abiding in the midst of ignorance, thinking themselves wise and learned, fools go aimlessly hither and thither, like blind led by the blind." - Katha Upanishad (800 to 200 BCE; ancient Sanscrit writing with core philosophical ideas in Hinduism and Buddhism)

Double-blind* review is widely seen as a positive step towards a fairer system of publication. We all intuitively expect this to reduce implicit bias on the part of reviewers, making it easier for previously underrepresented groups to publish. But is that true? Not surprisingly, there's a fair amount of research on the impact of double-blind review. Here are a few links:

The Case For and Against Double-blind Reviews is a new BioRxiv paper that I learned about on Twitter, which finds little benefit. But the study has little replication at the level of journals, and lacks information on submission sex ratios.

The effects of double-blind versus single-blind reviewing is a classic 1991 experimental study from economics, which found that double-blind led to more critical reviews across the board, equally for men and women, and lower acceptance rates. The strongest drop in acceptance rate was for people at top institutions.

In contrast "Double blind review favors increased representation of female authors" followed the 2001 shift by Behavioral Ecology to double-blind review, finding an increase in female authorship. But again its not clear whether this is an increased acceptance rate, or an increased submission rate.

Then there's a meta-analysis of the issue that found fairly ambiguous evidence, though with some indication of bias (especially in favor of authors from top-ranked universities).

In short, the literature on the literature isn't a slam dunk. Most people tend to agree that double-blind is a good thing. There are some administrative costs (the time editorial staff spend policing the system, and the time authors spend blinding their own work), but all in all it seems worth it. Indeed, some people won't review for journals that don't do double-blind. (Although some people refuse to review when it IS double-blind, a catch-22 for journals that just want to find good, qualified reviewers.)

To wade into this, I wanted to offer a bit of data from The American Naturalist. The journal went double-blind in early 2015, one of the earlier journals in our field to do so, though not the very first, to be sure. There's a blog post by Trish Morse from November 2015 summarizing the results from the first 10 months. I have the luxury of being able to reach into our historical database and see what double-blind review is doing. The journal is especially valuable in this regard because we have an opt-out double-blind review policy: our default is double-blind, but authors may choose to opt out, and some do. This gives us an imperfect but useful comparison between those who do and do not choose double-blind review. What's their acceptance rate, and how does it depend on the gender of the first author?

Methods:
I'm lazy, doing this on a Sunday afternoon while my kids are watching a movie, so forgive me some imperfections here. This isn't a peer-review-ready study.

I looked back at the 500 most recent acceptances at AmNat. The actual number is a bit less than this because the database includes editorials, notes from the ASN secretary, and so on, which I don't want to count. I also looked at the 500 most recent declines at AmNat (including Decline Without Prejudice decisions that sometimes turn into subsequent acceptances, though I didn't have a simple way to trace this). I did my best to infer the gender of the first author based on their first name, and I didn't count papers where I couldn't tell.
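For anyone curious how this name-to-gender step could be automated (I did it by hand), here's a minimal sketch in Python. It assumes the third-party gender-guesser package, and the names shown are made up; ambiguous names get dropped, just as I dropped papers where I couldn't tell.

```python
# Minimal sketch of first-name gender inference (I actually did this step by hand).
# Assumes the third-party "gender-guesser" package; the names below are hypothetical.
import gender_guesser.detector as gender

detector = gender.Detector()
first_names = ["Maria", "John", "Sasha"]  # hypothetical first authors

calls = {}
for name in first_names:
    g = detector.get_gender(name)  # 'male', 'female', 'mostly_*', 'andy', or 'unknown'
    # Keep only confident calls; ambiguous names are dropped from the tallies.
    calls[name] = g if g in ("male", "female") else None

print(calls)
```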

Note that because we decline more papers than we accept (a 20% acceptance rate), the ~500 acceptances cover a couple-year period, whereas the 500 declines were all from 2018. That's not ideal, I know, but see the first Methods paragraph above. It also means that the acceptance rates reported here are not exact; it is their relative values (double-blind or not; male vs. female) that are useful to us. Here are the data, with marginal totals:

                                       Male   Female   Total
Double blind              Accepted      205      136     341
                          Declined      257      155     412
                          Total         462      291     753
Opt out of double blind   Accepted       84       40     124
                          Declined       69       24      93
                          Total         153       64     217


To digest this for you a bit, here are a few observations:


1) As we might expect, women were less likely to opt out than men.

Proportion submitted:
                 Male   Female
Double blind     0.75     0.82
Opt out          0.25     0.18

This is significant (chi-square test, P = 0.017), though the effect size is modest (a difference of 7 percentage points). This fits with the notion that double-blind review is fixing a bias problem and should protect female authors, whereas men have the privilege of being identified without harm.
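For anyone who wants to reproduce that test, here's a minimal sketch using the submission totals from the table above. I'm assuming a standard Yates-corrected chi-square on the 2x2 table of counts (an illustration, not the exact script I ran):

```python
from scipy.stats import chi2_contingency

# Submissions by gender and review type (totals from the table above).
#               double-blind   opt-out
table = [[462, 153],   # male first authors
         [291,  64]]   # female first authors

chi2, p, dof, expected = chi2_contingency(table)  # Yates correction is the default for 2x2 tables
print(round(chi2, 2), round(p, 3))  # p should land near the 0.017 quoted above
```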


2) Double-blind papers are less likely to be accepted than opt-out papers (P = 0.002). This is partly because things like the ASN Presidential Address and other invited papers by award winners are by necessity not anonymous, and have a higher acceptance rate. Note that women seem to get a stronger benefit than men from NOT being double-blind (though this difference is not significant). So there does seem to be a cost to going double-blind: it appears to hurt authors' prospects overall. And the most widely cited benefit (reducing bias against women) is not visible for our journal in the time period covered here. That matches the 1991 experimental study from economics, linked above, which also found that double-blind review reduces acceptance rates. Here's the detail:


Proportion accepted:
                 Male   Female
Double blind    0.444    0.467
Opt out         0.549    0.625

Remember, these acceptance rates aren't the journal's overall acceptance rate, because I chose to examine 500 accepted and 500 declined papers. But their relative values are telling: opting out is the way to go. It's tempting to suggest that this might be especially true for women authors, but the gender-by-review-type interaction is not significant in a binomial GLM, so gender doesn't seem to matter: there's no gender difference in acceptance rates among double-blind papers, and none among non-double-blind papers. But double-blind reduces everyone's chances by a bit. Not much, but...
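To make that concrete, here's a rough sketch of the kind of binomial GLM I mean, rebuilt from the cell counts in the first table. The package choices and variable names are mine, purely for illustration; and the same chi2_contingency call shown earlier, applied to the accepted/declined totals by review type, reproduces the P = 0.002 comparison in point 2.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Cell counts transcribed from the contingency table above.
counts = pd.DataFrame(
    [("male",   "double_blind", 205, 257),
     ("female", "double_blind", 136, 155),
     ("male",   "opt_out",       84,  69),
     ("female", "opt_out",       40,  24)],
    columns=["gender", "review_type", "accepted", "declined"],
)

# Expand to one row per manuscript with a 0/1 acceptance outcome.
rows = []
for _, r in counts.iterrows():
    rows += [(r.gender, r.review_type, 1)] * int(r.accepted)
    rows += [(r.gender, r.review_type, 0)] * int(r.declined)
papers = pd.DataFrame(rows, columns=["gender", "review_type", "accept"])

# Binomial GLM; the gender:review_type interaction term is the test of interest.
fit = smf.glm("accept ~ gender * review_type", data=papers,
              family=sm.families.Binomial()).fit()
print(fit.summary())
```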

3) And now the bad news: we still aren't at gender parity in submissions. Overall, in the time period covered by this survey, 36.5% of our SUBMISSIONS had women as first authors. That's not good. I'm not sure what to do to fix this, because it concerns the pipeline leading up to our front door. So I'll keep trying to be proactive about encouraging people, especially students and women, to submit articles to us. The good news, I can tell them, is that we have strong evidence of no gender bias between submission and decision.

So, why should double-blind review reduce acceptance rates? That's odd at first glance. But as Editor I've received notes from reviewers. Some say they won't review a manuscript because it's not double-blind. But quite a few have told me they prefer to review non-blinded papers. They note that if they are aware the author is a student, for example, they are more likely to go the extra mile to provide guidance on how to improve the paper. Now, I would hope that we would do that for everyone. All our reviews should clearly identify weaknesses and point toward paths to improvement. But the truth is we feel protective toward students, including other people's students. There's only so much time each of us can afford to put into reviewing (though a ton of thanks to the many AmNat reviewers who put their heart and soul into it), so we make decisions about how much time to invest in a given paper. Knowing someone is a student can make us a bit more generous with our time, which might in the long run help them get accepted. When that information is obscured, perhaps reviewers become equal-opportunity grinches.

Interestingly, a year or two after AmNat went double blind, our Managing Editor and lodestar Trish Morse looked at the results. There was a dip in acceptance rates then, as well. It wasn't quite statistically significant, and we wanted to accrue a few more years' worth of data. But it matches what I find in the more recent data.

An alternative hypothesis is that people opting out of double-blind are famous and influential, and more likely to get a pass. That fits a bit with the observation that men are more likely to opt out. But who are those people? I obviously won't name names. What I will say is that I was surprised. Sure, there were big names who opted out (some by necessity, such as authors of the ASN Presidential Address articles). But there were also many lesser-known authors opting out, and many big names who went ahead with double-blind. In fact, only a small minority of the opt-out crowd were celebrity authors. Many opt-outs were by authors from non-US and non-EU nations who might be less aware of the cultural trend toward double-blind review.

To summarize: double-blind review seems to slightly reduce everyone's acceptance rate, regardless of gender. That matches the results from economics in the late 1980s. Not a strong endorsement of double-blind, which we tend to think of as fixing gender (and other) bias in publication. For the past 1,000 decisions I don't see evidence of such bias. So did we adopt double-blind review to fix a problem that has itself faded away (# see footnote below)?

Some important caveats:
1) I didn't look at last-author or other author names. I didn't categorize by career stage or university.

2) As noted above, I sampled 500 accepts and 500 declines, not over exactly the same time period.

3) AmNat blinds reviewers to author identities. But the Editors (e.g., me, Judie Bronstein before me, Russell Bonduriansky, Alice Winn, Yannis Michalakis) and the Associate Editors can see author names. That's by necessity: someone needs to be able to choose reviewers while avoiding the authors themselves, their advisors, and their closest colleagues, and that requires knowing the names.

4) This might be worth a more careful analysis and publication, but I don't have the time and energy to do that right now. And it's not ethical to give someone else access to all our data on accept/decline decisions and author identities.

Footnotes:
*   I have been told that the phrase double-blind is ableist, and upsetting to some visually impaired people. This is the phrase we have inherited; last year we discussed switching to "doubly anonymous" or something like that, but once a term becomes entrenched it is hard to change.

#  There are clearly other barriers still present, generating the strongly significant male bias in the papers that come in our door. These need to be addressed proactively. So my comment that double-blind might have been meant to fix a problem that has faded away refers only to the review and decision-making process, as a statistical aggregate effect. I also recognize that if there were one or two gender-biased reviewers, affecting decisions on just a few papers, that would be undetectable in this statistical analysis yet would still constitute a problem worth fixing.


Tuesday, November 27, 2018

I spent WHAT?

Journals today are much maligned. We all want to publish our work for free, and read papers without paywalls. Many of us also want to have nicely typeset and carefully copyedited articles. These are incompatible goals. Until we get governments that fund all journals' expenses (good luck with that!) or we get a Paul Allen or Howard Hughes to fund publication (ditto!), we need to make two choices.

1) Do we want the value that journal editorial staff add to the production of a snazzy final version? Journals provide website support, database management, and staff that enable careful anonymous peer review. They provide copyediting that (*if done well*) catches errors rather than introducing them and improves the final product, and typesetting that makes it all look good. And if you publish in a traditional print journal, they put it on nice archival paper so it'll still be readable after the Zombie Apocalypse, unlike those e-articles that'll be forever lost.

For more detail on the value added by journal office work, see this previous blog post of mine on "The Secret Lives of Manuscripts". I did a very informal (i.e., unscientific) Twitter poll on this question: of 500 respondents to my (certainly poorly worded) question, 92% preferred the typeset journal version. The other 8% were all named Andy Kern, I think (kidding).


Today on Twitter, my grad-school friend Andy Kern suggested maybe we need a hybrid approach where authors can choose to pay for all these services, or can simply have their raw manuscript posted. Interesting idea. I wonder how many authors would opt to pay for the services. And would the decision have any impact on how widely our papers are read and, eventually, cited? I don't know.

2) If the answer to (1) is yes, then who pays for the value that journals add and that we desire? There's a lot written about this, and disparate, strongly held opinions. Both funding models have their drawbacks: author-paid open access fees can pose insurmountable barriers to low-funded labs or students (*A STORY ABOUT THIS AT THE END*), while traditional subscription journals can be inconvenient to access: if you don't have institutional subscription access you either need to pay (few people do), contact the author (long lag times or no answer), go to SciHub, or read another article entirely. I'm not going to delve into that here. There be dragons.

Instead of tackling those dragons head-on, I want to address a point that Andy Kern raised in our Twitter conversation today. He pointed out that I had published 8 papers in 2018, and estimated I probably spent $20,000 on page charges ($2,500 per article), plus the indirect costs my university charges on those expenses, for a total of (he estimated) $40,000. What, Andy asked, could I have done with that money instead?

It is a fair question. I always budget for some page charges, but I had never specifically tallied up my page-charge total across all grants and projects. So that's what I did. The table below shows all my papers from 2014-2018 (a five-year period), where they were published, who paid, and how much. I should note (before you read the table too closely) that I couldn't find all of my page-charge invoices from the whole five-year period, having switched computers twice in that interval. So some of the numbers are estimates derived from the journal websites. Others (marked with asterisks) are from memory, because I know the APCs (article processing charges) were lower a few years ago than the websites indicate today.

Year | Journal | Authors | APC (USD) | Notes
2018 | American Naturalist | Bronstein and Bolnick | 1000 | Open Access (voluntary)
2018 | Annual Reviews of Ecology Evolution and Systematics | Bolnick et al | 0 |
2018 | Animal Behavior | Stockmaier | 0 |
2018 | Ecology and Evolution | Brock et al | 1560 |
2018 | PeerJ | French et al | 1095 |
2018 | Science | Kuzmin et al | 0 |
2018 | Evolution | Thompson | 550 |
2017 | Frontiers in Immunology | Lohman et al | 2950 |
2017 | Molecular Ecology | Lohman et al | 0 |
2017 | Molecular Ecology | Stutz and Bolnick | 0 |
2017 | Msystems | Dhielly et al | 1200 | Lead author paid
2017 | Evolution | Brock et al | 700 |
2017 | Molecular Ecology | Veen et al | 0 |
2017 | Frontiers in Immunology | Steinel and Bolnick | 2950 |
2017 | PNAS | Weber et al | 1640 |
2017 | Nature | Bolnick and Stutz | 0 |
2017 | Nature Ecology & Evolution | Stuart et al | 0 |
2017 | Ecology and Evolution | Ahmed et al | 1560 |
2017 | Biol J Linn Soc | Stuart et al | 0 |
2017 | American Naturalist | Lohman et al | 975 |
2017 | Journal of Evolutionary Biology | Jiang et al | 0 |
2017 | Medical Science Educator | Bolnick et al | 2000 |
2017 | Evolution | Weber et al | 700 |
2017 | American Naturalist | Weber et al | 1050 |
2016 | Proceedings of the Royal Society | Pruitt et al | 1600 | Lead author paid
2016 | Molecular Ecology Resources | Lohman et al | 0 |
2016 | Biology Letters | Bolnick et al | 0 |
2016 | Evolutionary Ecology Research | Izen et al | 0 |
2016 | Journal of Evolutionary Biology | Oke et al | 0 |
2015 | ISME J | Smith et al | 3300 | * Estimate
2015 | Evolution | Jiang et al | 350 |
2015 | Molecular Ecology | Stutz et al | 0 |
2015 | Ecology and Evolution | Ingram et al | 1560 |
2015 | PLoS One | Bolnick et al | 1595 |
2015 | Evolution | Bolnick et al | 500 |
2015 | Oecologia | Snowberg et al | 0 |
2014 | Molecular Ecology | Puritz et al | 0 |
2014 | Ecology and Evolution | Parent et al | 1560 |
2014 | Trends in Ecology & Evolution | Warren et al | 0 |
2014 | Molecular Ecology | Bolnick et al | 0 |
2014 | Ecology Letters | Bolnick et al | 0 |
2014 | Nature Communications | Bolnick et al | 2000 | * Estimate
2014 | PloS One | Stutz and Bolnick | 1595 |
2014 | American Naturalist | Stutz et al | 1125 |
2014 | Trends in Ecology & Evolution | Richardson et al | 0 |
2014 | Evolution | Stuart et al | 0 |

The 46 articles listed above add up to a whopping $35,115 over a five-year period (of which I paid $32,315), plus roughly 56% overhead on what I paid, for a total of $50,411. That's roughly $10,000 per year. Ouch.



But let's put this into some perspective. First, I have been exceptionally lucky to be well and continuously funded. For much of the time period covered here I was supported by the Howard Hughes Medical Institute, which paid for open access fees. Come to think of it (as I write this), the $0 APCs listed for some of my 2014 and 2015 papers are probably wrong, because I almost certainly had HHMI pay open access fees for them. That would lift my per-article average and five-year total above what I report here, but I don't have the details on hand. That said, the open access fees mandated by HHMI didn't come out of my research budget, but out of separate HHMI funds. That supplemental publication funding disappeared in 2016 and after, when I ceased to be an HHMI Early Career Scientist (though I was still publishing work from my HHMI time). So the grand total listed above is a lot, and it is an underestimate, but I actually paid less than what is listed (thanks, Uncle Hughes!), and it was a modest fraction of my very generous HHMI funding.

Next, let's look at this another way. That total translates to an average of $702 per article (not including overhead), with a standard deviation of $921. That's a lot of variation:

The point is, there were just a couple of boutique journals that charged an arm and a leg; all of them were open access and for-profit. Then there is a set of mid-range articles, between $1,500 and $2,000 (usually closer to $1,500), mostly at society open access journals. But the majority of my publications cost under $1,000 each. A number of journals were free of page charges because they cover all their expenses from subscription income. Some readers, I know, will object, but given my research output it was frankly either that or not publish these papers at all, since I can't afford to pay OA fees for them all. Maybe some of you will suggest I take the latter approach, in which case see the story at the end, which I promised (in all caps) above.
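If you want to check my arithmetic, here's a minimal sketch that recomputes the totals from the table above. It assumes the roughly 56% overhead rate mentioned earlier and treats the two "lead author paid" rows as $0 to me:

```python
import numpy as np

# APCs I actually paid, transcribed from the table above; every other article
# (including the two "lead author paid" rows) cost me $0.
paid = [1000, 1560, 1095, 550,                               # 2018
        2950, 700, 2950, 1640, 1560, 975, 2000, 700, 1050,   # 2017
        3300, 350, 1560, 1595, 500,                          # 2015
        1560, 2000, 1595, 1125]                              # 2014
apcs = np.array(paid + [0] * (46 - len(paid)), dtype=float)  # pad out to all 46 articles

total_paid = apcs.sum()            # ~$32,315
with_overhead = total_paid * 1.56  # assuming the ~56% indirect-cost rate
print(total_paid, round(with_overhead), round(with_overhead / 5))  # total, with overhead, per year
print(round(apcs.mean()), round(apcs.std(ddof=1)))  # close to the $702 mean and $921 SD quoted above
```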

Let's return now to Andy's radical suggestion: what if we lived in a world where we skipped copyediting, typesetting, distribution, and archiving, and just posted our (reviewed and revised, I hope) manuscripts on servers? It would then be up to us to format our papers, proofread them super carefully, and typeset them as best we can. An interesting notion. I would have saved roughly $50,000 over the last five years. Save that up and it's a year of graduate student support, or a year of postdoc salary (though not enough to also cover fringe and overhead on that salary). Tempting. So what's the downside? First, I'd need to learn LaTeX or something similar to make my manuscripts look better. Second, the copyeditors at (some of) the journals I publish in are excellent and help nit-pick the last few errors in every paper that I inevitably miss. That's an expensive service. To put it another way, my cumulative output of scientific writing in the last five years is well over 500 pages. That's a long book. If I were to copyedit that myself, and typeset that myself... ugh. I'd rather pay a professional. (Again, I know some journals' copyeditors are known for messing things up more than they help, but that's not always true. This is where I make a plug for AmNat's excellent team.)

As an Editor I can also attest that many manuscripts come in poorly written and in desperate need of copyediting, more than the reviewers, the Associate Editors, and I have time to provide. So I have a pretty good idea of what the scientific literature would look like if we all skipped these (costly) services, and it's not pretty. Copyediting is especially valuable to authors for whom English is a second (or third, or...) language. We do sometimes have articles that are excellent science but need a fair bit of wordsmithing to come up to standard.

So, my conclusion? I'm really torn about this. Once I added it all up, I saw that I've spent a lot on publication in the last half decade. That money pays for a service I appreciate and value (especially because I see the process from the journal's perspective as well). But it's a lot of money. Considering the cumulative totals I am spending, there's more science I could do with those funds. Then again, doing all the copyediting and typesetting in house takes us away from doing science. And there's still the issue of archiving on paper for the Zombie Apocalypse. Now that I've seen the totals I am spending, I will certainly choose my target journals even more carefully based on financial and policy concerns, which is a shame because it steps away from my scientific ideals. And maybe I'll hesitate about that extra little paper, and wonder, 'is it worth the APCs to write this?'

I promised you a closing story, so here it is. When I was a graduate student, some friends and I got together to discuss some papers we had read, and it turned into a reading group and then a synthesis paper (Bolnick et al. 2003, American Naturalist). It was an all-student group of authors, self-organized, with no funding. When we finally got it accepted (after at least two revisions), we reckoned with the cost of publication. It had become a really long paper with extensive tables. I was an ASN member, which got me 10 (?) free pages, but the article was way longer. Page charges for the excess pages rang up to something like $1,500, and not one of us had the funds to spare. It almost stopped there, but Michael Turelli offered to pay the charges. None of us were Turelli's students; he's just that kind of generous*. We accepted his offer, the paper was published, and it became my most-cited article by a long shot. So I am very sensitive to the barriers that APCs impose on unfunded authors. This is why I am such a proponent of mixed publishing models (like the one The American Naturalist uses), where people can opt in to pay for Open Access, or pay less for regular page charges, or get waivers if they are members ($20 for students!) who lack funds for APCs. That's a powerfully flexible model. But it is one that is threatened by the EU's Plan S.

*Turelli had one condition: that we list him in the acknowledgements of our paper using a humorous phrase that he specified.
