Wednesday, May 12, 2021

17 months

By Dan Bolnick

This past month, The American Naturalist published what I hope is the final step in the Editorial Board's evaluation of work by Jonathan Pruitt, 17 months after concerns first came to my attention. This wraps up the journal's (and, hopefully, my) role in this process, after two rounds of institutional investigation. The first round, conducted by authors and a small group of the editorial board, was made publicly transparent in January-March 2020, after we had reached clear conclusions. Due to the chilling effect of legal threats, we launched a second, more extensive institutional investigation in March, which predominantly confirmed the conclusions of the first; that second investigation, however, took place entirely behind closed doors. I therefore think it is important to convey some of the lessons learned during this entire process, about the intersections between science, rigor, transparency versus confidentiality, journalism, and legal risk. It is my hope that the following inside retrospective view of the process will (while respecting confidential aspects, such as some whistle-blowers' identities) prove instructive to people interested in evaluating future claims of misconduct or error.

Before I launch into my narrative, I want to emphasize several key points:

1) I have been involved in this saga from several perspectives. I am Editor-in-Chief of The American Naturalist, and so have been central to investigating six of his papers. I will seek to avoid saying anything here that would break reasonable expectations of Editorial confidentiality. But it is also essential that editorial processes be transparent enough to the community to engender trust in the fairness and rigor of the journal. I also was a co-author of Pruitt's on one paper in Proceedings B. Lastly, Jonathan Pruitt had been a personal friend of mine, and at no point did I wish him any ill or wish to harm his career. It pained me to see the collapse of work I had once so respected, and I tried hard to give him paths to come clean.

2) For the purpose of this essay, any opinions are my own, as a private individual with professional expertise in the biological and statistical fields in question, exercising my First Amendment rights to express my views. I will try throughout to flag clearly anything that is 'mere' opinion, but I will seek primarily to stick to factual statements about documented processes and events of which I have had a front-seat view as Editor or co-author. My experience as editor cannot be separated from my experience of this series of events as a whole, but this blog post is being posted on EcoEvoEvoEco and not on The American Naturalist's editor blog, because I am posting from my personal perspective, not as an official journal statement.

3) Where I discuss flaws in data below, I am going to stick solely to what I see as my reasonable professional judgement about what is biologically or statistically plausible. Where data contain implausible patterns, the scientific inferences arising from them are suspect and corrective action is needed by the relevant journal; that has been, in part, my job. How those flaws came to exist is a separate question about which I will not speculate. It is for McMaster University's investigative team to determine whether the biologically implausible data are the result of conscious fraudulent manipulation of data files, or of accidental mismanagement of data (e.g., errors in transcription of data from paper records onto spreadsheets). From the standpoint of scientific conclusions, which was my focus, the distinction is irrelevant because either way the biological inferences are not reliable.

I begin my retrospective with a rough overview of the timeline of events, as documented in a search through my email records (which I assiduously kept, and which have been the subject of a massive Freedom of Information request by Pruitt, presumably to seek evidence against me for legal action). I then move to a description of some of the major lessons that I've learned in the process. In a separate essay I ponder some questions about the ethics and philosophy of retraction.

1. The Chronological Narrative

This chronology is a greatly streamlined version, based on a 112-page document I wrote in early May 2020, in response to a letter from Jonathan Pruitt's lawyer to the University of Chicago Press demanding that I be removed from making any decisions about the case (we'll get to that...), with additions, of course, concerning the events between early May 2020 and today.

1.1 Prehistory. Jonathan Pruitt and I share academic interests in behavioral variation within populations. In the late 2000s he applied for a postdoctoral position in my laboratory, and I was deeply impressed by his research (though he made the short list, I took on another candidate). Shortly thereafter we met when I was visiting the University of Tennessee, where he was a finishing graduate student, and we had an excellent conversation. I followed his career closely thereafter and was consistently impressed with the acuity of his research questions, the elegance of his experiments, and the clarity of their results. We had the opportunity to meet again when he hosted me as a visiting speaker at the University of Pittsburgh, and our interactions were very positive; I in turn hosted him as a seminar speaker at the University of Texas at Austin. I considered him a good friend; on the rare occasions we crossed paths at conferences we would always grab a beer and talk interesting science questions. We also collaborated on a publication (Pruitt et al 2016, Proceedings of the Royal Society of London Ser. B), in which he had invited me to comment on and join an in-progress paper he was working on with data he generated. On the basis of this research record I was happy to write him letters of recommendation for promotion and tenure, first at the University of Pittsburgh, then again at the University of California Santa Barbara. I nominated him for a prestigious 2-year Harrington Fellowship visiting research position at the University of Texas at Austin (where I was working at the time), which he received. However, he did not take the position because he was instead offered an even more prestigious Canada 150 Research Chair (for which I also wrote a letter of recommendation). I wrote another letter recommending him for the Waterman Award from the National Science Foundation in 2015, at the request of others organizing that nomination. And, in fall 2019, I was involved with a group of faculty who sought to nominate him for a Mercer Award from the Ecological Society of America. I have copies of these letters. I mention this history to establish that, far from being an enemy seeking to undermine Dr. Pruitt, I have been a tireless proponent of his career for a decade, going above and beyond the call of duty to advance his reputation and job prospects.

1.2 Genesis. On November 17, 2019 I received an email from Niels Dingemanse, alerting me to concerns about Laskowski et al 2016 (AmNat). Kate Laskowski, the paper's first author, had already been notified and was cc'ed on the email. Two days later, Jonathan emailed me independently to say he had heard of these concerns. He provided an explanation that later proved to be (1) not in the originally published methods, and (2) unable to explain the concerns Niels raised. Specifically, he said spiders were timed in batches, and all responded simultaneously in a group, which is why certain time values occurred for many different individual spiders. I asked Jonathan, hopefully, if he had videos of his behavioral trials, and he said he didn't. A week later (November 26), Erik Postma provided (via Niels) detailed R code documenting their concerns, showing that the excess of duplicated numbers was not restricted to the batches of spiders measured together, undermining Jonathan's claim. This was the point where I began to be genuinely alarmed, because the failed attempt to explain away the concerns struck me as indicative of a much deeper problem. At this point I sought the advice of current and former Editors of the journal. One of them suggested I recruit some external help evaluating the concerns. I did, but while waiting for that evaluation events moved ahead without me. Jonathan sent me a Correction, in which he averaged spiders with duplicated values to avoid the pseudoreplication that would have plagued his published analyses (if we accepted his explanation of timed groups). Soon after, Kate Laskowski emailed me to request retraction. I want to emphasize this point: the retraction came from the authors, and the wording of Kate's emails made it clear that Jonathan agreed to the retraction. These emails were also the first time I learned that other papers, at other journals, were affected by similar concerns. The same day (December 17th) Jonathan emailed me directly, confirming in writing that "We're therefore too iffy now to continue forward with a request to merely correct the manuscript, and would favor retracting the article instead." I immediately forwarded their request to the University of Chicago Press, who replied that we needed a retraction statement written by the authors. At this point we no longer needed the outside opinion I had sought, so I cancelled the request. Note that all emails concerning the events described in this document are retained to prove my statements if needed. If there's one lesson I've learned from this mess, it is the value of keeping all emails.

As an aside - Pruitt has frequently been asked to provide original paper copies of data to validate the information in the digital Dryad repositories, and so far has not done so. I did get confirmation from some former undergraduates in his lab who worked on the affected papers, and who stated "we always collected data in the lab on paper data sheets". They also challenged his claim that spiders were tested in batches of 40 (which he used to explain duplicated numbers, because all spiders in a batch might respond simultaneously and be given the same time). They stated that "the standard practice was to assay between 5-8 individuals at a time, each with a dedicated stopwatch".

In a minor irony, while we were processing the retraction statement at the journal, Niels emailed me to express concern that, given my friendship with Jonathan, I might be too lenient toward him and should hand the case to another Editor.

In early January 2020 I was contacted (note the passive voice) by the editor of one of the other journals considering concerns about a paper for which Pruitt had provided the data. We did not seek to affect each other's decisions on the cases, but simply discussed the processes we were separately using to reach them. Shortly after, we became aware of a third affected journal. At this point it was clear that there was a repeated pattern transcending journals, which (per guidelines from the Committee on Publication Ethics, COPE) merits communication between Editors. We also decided it was essential at this stage to notify the author's current and former institutions that three retractions were in the works. It is no particular secret, I believe, that I sent the emails to the Scientific Integrity Officers of McMaster, UC Santa Barbara, and the University of Pittsburgh, and to his current and former department chairs. I feel this was my obligation as an Editor aware of scientific integrity concerns about their current or former employee.

On January 13th, Jonathan emailed both Spencer Barrett (Editor of Proceedings of the Royal Society B) and me, to say (quoting just a part here): "Thanks again very much for working with us so swiftly to process these retractions." Also in that same email, Jonathan raised the topic of "revisiting data sets old and new to look for similar patterns" - something I had not yet thought to do systematically. Just the day before, one of Pruitt's co-authors had emailed me that an exchange with Jonathan "gave the impression they may not be accidental".

The first retraction became public on January 17, 2020, for Laskowski et al 2016 American Naturalist.

1.3 Collateral Damage. In mid-January, I had a phone conversation with Dr. Laskowski, who was concerned about the damage the pending retraction would do to her career. I sought to reassure her that it is possible to survive retractions, and that in my personal opinion the key was transparency and honesty, which the scientific community would appreciate. A few years previously I had voluntarily retracted a paper of my own, due to a mistake in R code written in 2008, when I was first using R for data analysis. At the time I had written a detailed blog post explaining the retraction, and was proactive about advertising the retraction on Twitter. The community responded very positively to that transparency, and I felt that no harm was done to my career as a result. I relayed that experience to Dr. Laskowski, as a possible strategy to use transparency to gain community support for the retraction process. Based on that conversation she began to consider a blog post or tweeting about the retractions. I want to be clear here that the goal wasn't to cast aspersions against Pruitt, but to clearly articulate the concerns about the data, and the reason for the retraction. For instance, I wrote: "I do think that there will be questions about WHY the paper is being retracted. In that case Kate's choice is either: 1) be entirely silent on why 2) say that the issue is being investigated and so she does not want to comment on the details 3) explain the problems in the data without openly saying that this constitutes evidence of any wrongdoing. I think (2) is the worst possible option for Jonathan, as it implies wrongdoing without explanation. So, as I think about it more I think a clearly explained summary of why the data were judged to be unreliable (option 3; maybe with screenshots from the dataset, especially the second sheet) would be the most open and transparent approach...I’ve come around to saying that being open about this is the best course of action at the moment, while carefully phrasing this to not make accusations". Kate did end up writing a blog post timed to come out with the second retraction (from Proc B). She did ask me for comments on it; I provided very brief feedback by email, but no major input on content or style. I retain a copy of that email and can prove that I provided no substantive guidance about what topics to put in or omit, or what to say.

1.4 Evaluation. On January 18th I learned that another journal was beginning a systematic evaluation of papers by Pruitt. Up to that point I had not planned to do so for The American Naturalist, mostly because I was still focused on managing the first example and hadn't come up for air to consider the bigger picture. The same day, Associate Editor Jeremy Fox emailed me to ask about the retraction. It occurred to me that Jeremy would be a good person to ask to evaluate the data files for the other AmNat papers, because he didn't know Pruitt personally and wasn't a behavioral ecologist, and so would be entirely outside the realm of personal or professional conflicts. Jeremy got quickly to work and rapidly raised concerns about multiple other American Naturalist papers. On January 19th he raised concerns about Pinter-Wollman et al 2016 American Naturalist, representing the first clear indication that the problems transcended a single paper at this journal. On January 21st, Jeremy indicated he found no evidence of problems with the data in Pruitt et al 2012 AmNat (with Stachowicz and Sih). This paper did end up receiving a Correction from the authors and an Editorial Expression of Concern, as I'll detail below. I point this out because it took about 14 months between Jeremy first looking at the data and my reaching a final decision, because this was a particularly tricky case that one might reasonably argue should have been a retraction. Having finished evaluating AmNat datasets, Jeremy kept digging, out of a concern that papers at other journals might not be receiving the evaluation they needed. On January 21st he let me know about formulas embedded in a Dryad-posted Excel file for a Journal of Evolutionary Biology paper, in which Pruitt had calculated the *independent* variable as a function of the *dependent* variable in his analysis. Speaking personally here, it sure looked like formulas were being used to fabricate data, but Pruitt as usual emailed me an explanation that I cannot directly evaluate or reject. Because this case seemed especially egregious, I passed Jeremy's concerns on to Dr. Wolf Blanckenhorn on January 22nd. This was the only instance in which I conveyed initial concerns to the Editor of another journal.

February 4th, I received the second retraction request concerning one of the Pruitt papers in The American Naturalist, from Leticia Aviles. Co-author Chris Oufiero responded to agree. On February 11th I replied that I would like to receive a written retraction statement for publication (unanimous if possible, but not necessary). The same day the remaining author (and Pruitt's PhD advisor) replied, also confirming that she believed retraction was warranted (a position she reiterated on February 27th). The authors did not provide a retraction statement until late fall, for reasons that will become clear later in this narrative. On February 6 I received an email from the lead author of Lichtenstein et al (2018 AmNat) also indicating that he felt retraction was warranted, based on flaws identified by Florence Débarre. Again, this initial momentum was soon derailed, but at the time it seemed like the strategy of relying on co-authors to evaluate and decide whether to correct or retract (if either was needed) would be effective. Our institutional investigation (by Fox and myself) had found problems, co-authors agreed, and co-authors were deciding to retract. On February 10th I received an email from Noa Pinter-Wollman requesting a correction for Pinter-Wollman et al 2016 AmNat, with the agreement of co-authors. Yet again, my request that she submit a text Correction for us to publish was disrupted. If you can't stand the foreshadowing, jump down to the section on "Chilling Effect".

Starting on January 20th I began receiving whistleblower emails from numerous sources expressing concern about Pruitt papers at The American Naturalist, Nature, PNAS, Behavioral Ecology, and other journals. I did not pass these on to the journals in question, but encouraged the writers to do so. Shortly thereafter I started receiving emails from journal Editors (I did not initiate these contacts, contrary to claims by Pruitt's lawyer, which we will get to). Niels Dingemanse and others had begun emailing Editors of various journals alerting them to concerns about papers in their journals, and the Editors (being aware of the AmNat retraction) checked with me to ask whether I considered the concerns legitimate, and how I was proceeding. I confirmed that they should examine the cases and come to their own conclusions, and gave them some advice about how we had proceeded. The most striking thing I noticed, to which I return later, is the divide between the journals that had required data archiving for years (which could evaluate concerns) and those that hadn't adopted the policy (some still hadn't as of these events). I should also note that I argued for due process in each case, for instance indicating that a journal which had Pruitt on its editorial board shouldn't summarily dismiss him, but would be better off with a hiatus while waiting on the results of McMaster University's investigation (which is still ongoing). I argued we shouldn't presuppose the outcome of their investigation, and should avoid a witch-hunt mentality. I continued to be cc'ed or addressed on whistle-blower emails for several months, including being cc'ed on a complaint to Nature filed on January 30th 2020 (they posted an Editor's note in February 2021 indicating that an evaluation was in progress for Pruitt and Goodnight 2015).

January 29th, the Proceedings B retraction became public. Where before there was one solitary retraction, now there was a pattern of repeated flaws. Kate Laskowski tried to get ahead of this by publishing a blog post documenting her reasoning in depth (which she asked me to proofread; I provided only very light typo corrections). There is an emerging theme over the past year that many retraction statements are brief and ambiguous as to the scientific details. This is changing, and more recent retractions and Expressions of Concern have been more forthcoming. But at the time the PRSB retraction was vague, and Kate's blog served to explain the reasoning in depth. Ambika Kamath and several other authors also posted a blog that same day, which I was not aware of in advance. Also on this date, I asked a second Associate Editor (Alex Jordan) if he would be willing to take a second look at Jeremy Fox's findings, to see if Jeremy was being fair and thorough. A day later, Current Biology notified me of a pending retraction (later paused due to lawyer involvement), which I had not been aware of or involved in. A couple of days later I also asked Flo Débarre to look at the data files, because she (1) has no association with the intellectual subject matter, and (2) is very effective at theory and coding in R. Like Jeremy Fox, she quickly found numerous flaws and felt compelled to document them thoroughly. Within a week Flo had emailed me a detailed evaluation of 17 papers with Pruitt as an author, including the five remaining American Naturalist papers, identifying serious concerns affecting many of them, including four of the five AmNat papers. A typical example is provided here:



The two yellow blocks are supposedly independent replicates with exactly duplicated sequences of numbers. The same is true for the blue blocks, and for the rose-colored blocks.
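To make the pattern in that screenshot concrete, here is a minimal sketch of the kind of check involved, written in Python (the actual screening was done by eye and then in R; the numbers below are hypothetical stand-ins, not Pruitt's data). It scans two supposedly independent replicates for identical consecutive runs of values:

```python
def shared_runs(series_a, series_b, min_len=5):
    """Find maximal consecutive runs of length >= min_len that appear
    identically in two supposedly independent series of measurements."""
    runs = []
    n, m = len(series_a), len(series_b)
    for i in range(n - min_len + 1):
        for j in range(m - min_len + 1):
            # Skip positions inside a longer run we have already reported.
            if i > 0 and j > 0 and series_a[i - 1] == series_b[j - 1]:
                continue
            k = 0
            while i + k < n and j + k < m and series_a[i + k] == series_b[j + k]:
                k += 1
            if k >= min_len:
                runs.append(tuple(series_a[i:i + k]))
    return runs

# Hypothetical replicates sharing a five-value block, as in the screenshot:
rep1 = [12.4, 340.0, 55.1, 7.9, 210.3, 98.2, 14.0]
rep2 = [61.5, 12.4, 340.0, 55.1, 7.9, 210.3, 33.8]
print(shared_runs(rep1, rep2))  # [(12.4, 340.0, 55.1, 7.9, 210.3)]
```

For continuous behavioral measurements, even a single shared run of five exact values between independent replicates is wildly improbable by chance.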


1.5 Suggesting a mea culpa. During the evaluation process within the journal by Jeremy Fox, another paper (from 2012) seemed problematic. Pruitt reported size-matched spider and cricket body masses that seemed implausibly precise, measured to a precision of 0.00001 grams (see image below). In my email exchange with Jonathan over this, asking for an explanation, I raised the question of whether he should own up to which data sets were flawed, to save the rest of us time. I wrote: "the behavioral ecology community as a whole is expending enormous energy the past week to dig into data files.  People are, understandably, grumpy about the extent of errors, and more seriously about the suspicion of deception, but most of all there is frustration over the impact this has on colleagues and the time that is being robbed of them even now to sort through the mess.... If, and I emphasize “if”, there is any truth at all to suspicions of data fabrication, I think you would best come clean about it as soon as possible, to save your colleagues’ time right now sorting the wheat from the chaff."



Screen shot from a data file from a 2012 paper, in which spider masses were paired with cricket masses (columns M and P), and simply multiplying column M by 0.3 could precisely reproduce the measured cricket mass to a precision of 0.00001 grams (compare calculated column O against observed column P).
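Checking for this pattern takes only a few lines. Here is a minimal sketch in Python (the column letters follow the screenshot, but the masses are hypothetical values I supply for illustration, not the actual file's contents):

```python
# Flag rows where the cricket mass equals exactly 0.3 times the paired spider
# mass, to within the file's reported precision of 0.00001 g.
spider_mass = [0.08213, 0.09147, 0.07566]   # column M (hypothetical values)
cricket_mass = [0.02464, 0.02744, 0.02270]  # column P (hypothetical values)

for s, c in zip(spider_mass, cricket_mass):
    calculated = round(0.3 * s, 5)          # column O in the screenshot
    flag = "suspicious" if abs(calculated - c) < 1e-5 else "ok"
    print(f"0.3 * {s:.5f} = {calculated:.5f} vs observed {c:.5f} -> {flag}")
```

Two independently weighed animals should essentially never satisfy this identity across many rows; genuine size matching to a target ratio would produce approximate multiples, not exact ones at five decimal places.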

For a time, Jonathan expressed interest in using a platform like this blog to address the community. Andrew Hendry and I offered to post whatever he chose to say. Ironically, it was Andrew who pointed out that Jonathan might be in legal jeopardy (for instance if any of his flawed data were used to obtain federal research grants) and so he might want to talk to a lawyer before writing anything public. Yeah, well, you'll see for yourself how that worked out, if you keep reading.

1.6 Public Coordination.  On January 29th, a behavioral ecologist contacted me to suggest that I create a database to track papers that had been cleared of errors. Their motive is worth quoting: "One idea I have is to set up a website or google doc that lists which papers have been retracted, which have been vetted and are now ok to cite, which are still in the process of being checked, etc.  I'm hesitant to cite any papers that may be unreliable, but I also don't want to deprive any legitimate papers of well-deserved citations, so I think this resource would be helpful“ (from an email addressed to me from a colleague). The following day, I created a Google Forms document to help track evaluations of papers, to identify papers considered in the clear and to reduce redundancy in data investigations and minimize wasted effort. I did so as a member of the scientific community. I posted no content and provided no information that was not otherwise public, and allowed others to populate the table. Note that all of the above retractions, pending retractions, and whistle-blower reports preceded the online database. Because I did not curate the table, and did not personally check every claim added to it, this database later became Pruitt's primary line of criticism against me. Although the table was clearly an exercise in free speech, and I posted nothing that was false or misleading (e.g., no libel or defamation), I later decided that because I could not vouch for other people's entries (even though I am not responsible for content other people add), I would take the table out of public access, and I have subsequently refused to share it. This was, in my view, genuinely unfortunate for Pruitt's co-authors (and indeed for Pruitt himself), because most people seem to use guilt-by-association to judge all his papers, even those whose data were generated by his colleagues or students. Thus, citations to his work, even unretracted work, have been greatly reduced by the retractions. The core motive of the database was to highlight papers that had been checked and found to have no flaws, especially those whose data were collected by other people, and thus to encourage continued citation of that work. By removing the document in response to legal threats (again, even though I see those threats as groundless), I fear I removed a crucial tool for mitigating collateral damage to others.

The retractions, blog posts, and online spreadsheet attracted attention and on February 2nd I received requests for interviews by reporters for Science and Nature. The published articles did not always represent my statements accurately, a complaint also raised by Niels Dingemanse and others.

In the subsequent days I regularly received numerous emails from people identifying flaws in existing data repositories, or from Editors asking for advice. Additional concerns were raised about American Naturalist papers, prompting me to email the co-authors on all of Pruitt's American Naturalist articles, asking them to examine their data and let me know whether they had concerns. I specifically stated that guilt-by-association was not our approach. Here is the core text of these emails:

If you collectively conclude that your paper reports results which are fundamentally not reliable, and can document the specific reasons for this concern, then you should submit a retraction statement to the journal, which we will then check. If the Editors and the University of Chicago Press concur, then we will copy edit, typeset, and publish the retraction statement.

 If you believe that some of the results are not reliable for specific documented reasons, but core elements of the paper remain intact in your view, then we would be happy to consider publishing a correction.

 If you lack confidence in the data simply because Pruitt provided them, this is not in my view sufficient grounds for a retraction without specific evidence of wrongdoing or error. I would be willing to consider publishing a brief statement, under the guise of a Correction (which would be appended to the online & pdf paper), making a statement about your concern on behalf of some or all authors without specific evidence undercutting this particular paper’s conclusions.

 If you retain confidence in the paper in all regards, I recognize that readers may not reach the same conclusion. I would be willing to publish a brief Comment allowing you to effectively confirm validity. This is an unprecedented thing to do, but I think is warranted in this unprecedented situation.

 You may of course choose to do none of the above. Whichever path you think is best, I’d encourage you to document your thinking fully, take time to judge, seek feedback from co-authors or others, and not rush into a final decision that you may not be confident about.

My preference at this point was for the authors to judge their own papers and request retractions, corrections, or statements clearing the work, as appropriate. One of six papers was already retracted, and we had received email requests for retraction of two more papers and for one correction (but the authors had not yet supplied a retraction statement).

The public coordination had another benefit: it generated the potential for Editors to consult with each other about best practices in handling the situation, which was new for all of us. In particular, Proceedings B notified me on Feb 5 of their procedure, in which they appointed a committee to generate an internal report, allowed Pruitt to respond, allowed co-authors to comment on the report and response, and finally had the committee re-evaluate and make a recommendation. I had begun an informal version of this with Jeremy Fox first, then adding Alex Jordan and Florence Débarre. I made this a more formal committee on March 15 2020. I reproduce the entirety of my email here because I think it is a useful template for others in this situation.

Dear Flo, Jeremy, Steve, Emma, Jay, and Alex.

 I am writing to ask whether you would be willing to submit a report, to me and the American Naturalist journal office, evaluating and explaining concerns about the papers that Jonathan Pruitt has published in The American Naturalist (excluding the one that was already retracted, though your report can certainly comment on it if you feel that is warranted).

 I am asking the six of you because (1) Flo and Jeremy have both already expended significant energy analyzing Pruitt’s papers and datasets for this journal, and I’d like to see a document summarizing this evaluation. (2) Flo and Jeremy and Alex and Jay are Associate Editors invested in the journal’s success and scientific reputation, which stands to be harmed should scientifically flawed papers go uncorrected or unretracted. (3) Flo and Jeremy are both distant from the behavioral ecology field and do not know Pruitt personally, and so have no formal association with any intellectual disputes nor any reason to harbor personal biases. (4) Alex and Steve and Emma are very close to Pruitt’s intellectual field and so are well placed to contextualize the concerns in terms of their intellectual value and to evaluate technical aspects of conducting behavioral ecology experiments in practice. (5) Jay and Alex and Steve all do know Pruitt personally, and to my knowledge have no personal reason to hold biases against him (please correct me if that is incorrect), and (6) Steve and Emma are not AEs for AmNat, and so I am hoping they can serve as outside observers to confirm that there is no biased process and we are evaluating Pruitt’s work fairly and in a scientifically sound and rigorous manner. Lastly, Jay is both an AE, and a co-author, and former mentor of Pruitt’s who therefore could be expected to be a fair advocate for Jonathan but also a rigorous critic where criticism is needed.

 I am hoping a written report to me, as a single document, will:

1) identify and document any concerns for each of the remaining papers with Pruitt as an author. Flo has done a great job of this already with some online documents, so much of this is done.  Conversely, when you find no grounds for concern please do indicate this, and explain what you did to reach that conclusion.

 2) Treat each paper independently, in the sense that evidence of flawed data for one paper should not lead us to presuppose the other papers must be flawed as well.

 3) Present a list of questions that we would need Jonathan to answer to clarify the nature of the problems (if any) identified in (1). He would be given two weeks to respond to those questions, then you would be shown his answers and given a chance to comment.

 4) If you identify concerns about a particular paper, please comment on your recommendation for a course of action. In particular, our options appear to be:

 i) there are no errors

 ii) any errors are minor and do not need any public comment

 iii) the dataset can be fixed (e.g., by excluding all duplicated values) and re-analyzed to reach scientifically reliable inferences that could be published as a correction.

 iv) certain parts of a paper no longer are reliable and we require a correction to indicate what elements of the paper should be retroactively considered redacted, but other aspects of the paper remain valid and useful scientific contributions.  Note that in my opinion, a novel idea or question is not sufficient to be published in the journal, that idea must be backed by an effective model or data. Therefore, a paper might contain an innovative hypothesis or viewpoint but if the data to demonstrate this point is flawed, then the paper should be retracted as opposed to simply issuing a correction that eliminates the empirical evidence.

 v) a retraction. Typically these should be submitted by the authors. They should succinctly explain the rationale for the scientific decision, without suggesting any cause for irregularities or leveling accusations about motive.

 vi)  An Editorial Expression of Concern.  As Editor, I have the right to publish an explanation, based on your report (you would be a coauthor unless you opted to be anonymous, which is your right), of concerns that lead us to question the reliability of a previously published paper. This is confirmed by the court case Saad vs the American Diabetes Association. For this, we do not require approval by any author(s) though obviously we’d prefer their agreement if we went this route.

If you are willing to do this, in the current troubled times of many COVID distractions, please let me know. If you cannot, I understand fully; these are remarkably challenging times to stay focused on work.

I would share your report with the editorial office (including University of Chicago Press lawyers, for our own collective peace of mind), then with Pruitt to request answers to questions you pose. Once we get his answers, you have a chance to respond to them. Then I will make a decision (subject to your recommendations) about options i - vi above, for each paper, and if necessary write Expressions of Concern or invite co-authors to write retractions or corrections. If your report judges retractions or corrections to be scientifically necessary, but the authors do not write retraction or correction statements (perhaps due to the chilling effect of Pruitt’s lawyer’s threats of legal action), I would opt for an Expression of Concern.

 Thank you for the help on this matter, so we can reach a transparent and fair and scientifically rigorous final decision.

I especially want to draw attention to the second paragraph, where I outlined my logic in choosing these people: some because they are experts in behavioral ecology; some because they are statistically savvy and far enough outside the field that they have no personal or professional bone to pick with Jonathan; and Jay Stachowicz precisely because I might expect him to be sympathetic to Jonathan (a former postdoc of Jay's). I wanted to stack the deck in Jonathan's favor to make the committee's fairness unimpeachable (* hold that thought).

A month later (April 19, 2020) I received the committee's report and forwarded it to Jonathan Pruitt, and to all his co-authors, inviting them to respond. All co-authors confirmed receipt of the email. No co-author contested the critiques of the data, and most confirmed that they agreed with them. All co-authors who responded affirmed that the committee membership was fair and gave no cause for concern about bias.

Jonathan Pruitt rapidly responded asking that the Laskowski et al paper, which kicked off the whole process, also be subjected to evaluation. I declined, noting that we had already completed that retraction for what I judged to be valid reasons, at the request of all coauthors including himself. More importantly, Jonathan criticized the choice of all members of the committee, claiming that all of them were biased and inappropriate to judge his data (Flo and Jeremy for instance because they had already done a lot of work judging his data and already posted findings). I repeatedly offered to add other arbiters that he might suggest (hoping he would commit to names that he would then be unable to criticize), but he never offered such names. In my personal interpretation, had he offered any names he would have then been unable to sustain the ad hominem attack strategy against the jury, and so he ignored the request.

The other main subject of discussion at this stage (April 30, 2020) was whether Jonathan could simply delete the duplicated blocks of data, re-run his analyses, and publish corrections. Jonathan repeatedly (at many journals) pushed this solution. The reason for denying it is nicely summed up by one committee member, who wrote: "In my opinion a confirmed case of duplicated data calls into question the validity of the entire dataset. Simply excising the cases we’ve been able to identify does not give me confidence that the rest of the data are OK, and if this were a case in normal editorial process where I had a reviewer point out anomalies of this type in the data I would be incredibly uncomfortable letting the study see the light of day, no matter which data were censored. While I know we must reserve judgement about how these anomalies crept in, the simple fact they are present at all suggests the entire datasets in which these sequences appear are suspect." This view was unanimously supported. Moreover, we noted that the duplicated blocks of data, if a copy-and-paste error, must have overwritten previous data (otherwise they would have greatly inflated his sample size and been noticed as a mismatch between experimental design and dataset size). To make this really clear, if we have a string of numbers (1, 2, 3, 4, 5, 6, 7, 8, 9, 10) there are only two ways to get duplicated blocks:

a)  (1, 2, 3, 4, 5, 1, 2, 3, 4, 5)    - which overwrites data, so it would be inappropriate to just delete one block - which one? what numbers were overwritten?  or,

b) (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 1, 2, 3, 4, 5) - which inflates the sample size. As Pruitt didn't note any sample-size inflation, we must infer that (a) was the issue, in which case re-analysis with duplications omitted would be inappropriate.

Without the original paper data sheets to determine the correct values that were pasted over, simply deleting the duplicates is not enough, because the data that were obscured would also need to be recovered. No paper data sheets have been provided at any point, despite undergraduate assistants' assertions that using paper data sheets was standard practice.
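For clarity, the inference can be restated as a short sketch (Python; the counts are hypothetical): comparing the dataset's size against the experimental design distinguishes the two scenarios, and shows why deletion alone cannot rescue scenario (a).

```python
def diagnose_duplicated_block(observed_n, design_n, block_len):
    """Infer how a duplicated block of block_len values entered a dataset,
    given the design's intended sample size."""
    if observed_n == design_n:
        # Scenario (a): the duplicate replaced real values. Deleting it cannot
        # recover the overwritten observations without the paper data sheets.
        return "overwrite"
    if observed_n == design_n + block_len:
        # Scenario (b): the duplicate was appended, inflating the sample size.
        return "inflation"
    return "unclear"

# Hypothetical example: the design called for 10 spiders, the file holds 10
# rows, and 5 of them duplicate an earlier block -> scenario (a).
print(diagnose_duplicated_block(observed_n=10, design_n=10, block_len=5))
```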


1.7 Intimidation. And now we get to the step where things became even more stressful for co-authors and editors. On February 12th I was alerted that other journals, which were actively pursuing corrections or retractions or Editorial Expressions of Concern, had received threatening letters from an attorney, Mr. McCann, who insisted that all journal evaluations be paused until McMaster University concluded its investigation. Note that COPE guidelines do not, in fact, require that journals wait for University investigations - they say "institution", and the journal is an institution. Moreover, an investigation is only necessary if the case for retraction is unclear.

I later learned that co-authors who were seeking retractions began to also receive such letters. The thinly veiled threat of legal action was enough to have a massive chilling effect, as evidenced by the three different sets of authors who had specifically told me they wanted a retraction or correction at AmNat, but who then received such letters and were not comfortable continuing the process.

A couple of weeks after this (February 29th) I received a lengthy email from Jonathan Pruitt demanding answers to a series of questions that sounded to me like they were written with a lawyer's input. The questions mostly focused on the public database I had created and allowed others to populate with information. His letter claimed that the database contained incorrect information and was modified by individuals with conflicts of interest, and asked whether I accepted responsibility for its contents. This was the first allegation I had heard of any inaccuracies in the data sheet. Namely, some people had posted that retractions were in progress even though they had not been finalized by the journals in question. The delay in approving retractions was due in large part to the chilling effect of his lawyer's letters; in short, his legal actions had created the situation in which the spreadsheet was not quite accurate. The contents of the spreadsheet were the collective work of many people reporting what they genuinely understood to be true. It is clear (based on my own consultation with multiple lawyers) that what we did was defensible, based on First Amendment protections for free speech in the US, the SPEECH Act, and the definitions of libel and defamation. Nevertheless, I felt extremely threatened and immediately removed public access to the spreadsheet, which has remained closed since (with all requests to view it denied). Someone unknown to me did create and publish a copy without my consent, and someone else created a table of retractions on Jonathan's Wikipedia page.

On April 30, while exchanging emails with Jonathan and the committee about his responses to their concerns, Proceedings of the Royal Society B published five Editorial Expressions of Concern, noting that data concerns existed and evaluation was ongoing. Realizing that Jonathan might take time to respond to the AmNat committee's concerns, plus time for co-authors and the committee to re-evaluate, and maybe another cycle of comments, I decided we were looking at a lengthy process ahead of us. It would be appropriate, per COPE guidelines, to publish Expressions of Concern noting that the papers were under evaluation: basically a place-holder to alert the community pending a final decision. This is a common use of EoCs, one approved by the Committee on Publication Ethics. Court decisions in the US have established the precedent that academic journal Editors have the right to publish such Expressions of Concern. And yes, I was reading a lot of law and court decisions in February through April 2020. So, on May 1 2020 I emailed the University of Chicago Press, which publishes and owns The American Naturalist, with a copy of the report and a request to publish Editorial Expressions of Concern. The publisher examined the report and my proposed text, and approved this on May 8. Out of courtesy I notified Jonathan of our intent to publish the EoCs. The same day, Jonathan replied indicating that he thought EoCs made sense and were understandable, and thanking me for alerting him. A few hours later, his lawyer sent a letter to the University of Chicago Press critiquing my conduct, my credibility, and my choice of committee members to evaluate the work, and demanding that I recuse myself from any further involvement. The truth is, my heart leapt at the idea of handing the whole unpleasant, labor-intensive mess off to someone else, and I eagerly offered to recuse myself as requested. The Press asked me to sit tight while they thought this over.

A copy of his lawyer's letter is appended at the very end of this blog post. It claims to be confidential, but I have asked five different lawyers, all of whom agree that there is no basis for such a demand. I post the letter despite its false characterization of my actions and motives, and I post it without comment (though I did give a detailed rebuttal to the University of Chicago Press), because I do not think it should be taken seriously enough to merit a detailed point-by-point public rebuttal.

So, to recap: on May 8th we got this letter demanding I be removed from the case, and the Press asked me to pause the process. It wasn't until August 4th that the Press confirmed that I was within my rights to proceed with the evaluation process as I had planned. They recommended against my recusal (a recommendation I grudgingly accepted), because in the Press's view I had the full right as Editor to decide upon the process and outcome.

I received no further communications from Jonathan's lawyer, and only minimal direct communications from Jonathan himself after this (he and I had emailed extensively from November 2019 through April 30, often many times per day, many days in a row). The only other element I'll note is that when our process resumed, a number of co-authors, Associate Editors, and Editors (myself included) were subject to Freedom of Information requests asking for all emails pertaining to "Pruitt".

If nothing else, the letters and FOI requests had the chilling effect of delaying the evaluation process. I use the term 'chilling effect' deliberately here, as it is a key legal criterion for when threats and intimidation become suppression of free speech and scientific publishing, in contradiction of the SPEECH Act and the US First Amendment. Co-authors who had stated in writing their intent to submit retraction text did not do so. Journals that had approved retractions put them on pause (and in so doing, rendered the Google sheet temporarily inaccurate). But eventually chills wear off, and the thaw follows.

1.8 The Thaw. In August, Pruitt provided rebuttals and explanations in response to the committee's report. These were sent by his lawyer to the University of Chicago Press, who sent them to me. The co-authors commented on those rebuttals (indicating skepticism). The committee then made recommendations to me based on their original evaluation, the rebuttals, and the co-author comments. In all cases I followed the committee's recommendations. One was a minor alteration to a data file on Dryad, which we requested. One was a Correction noting suspect data in a supplement that was immaterial to the main point of the paper (a theoretical model neither parameterized with the data nor tested against them). Two were additional retractions. And the most recent was a paired Correction (at the request of all three authors) that the editorial committee and Editors found unconvincing, so an Expression of Concern (coauthored by the whole committee) was published alongside the Correction.

The process of these closing steps was notable in several ways. 

First, Pruitt did not acknowledge or reply to offers to sign on to any of the retraction notices, though he did sign on to the Correction to Pruitt, Sih, and Stachowicz 2012 (and to the first retraction, made public in January 2020). For all of the later retractions, every author other than Pruitt signed on.

Second, we were on the verge of accepting the most recent Correction (to the paper on starfish and snails) when the journal received an anonymous submission (via Editorial Manager) of a critique of this same 2012 paper. Our investigation had not identified the kinds of large blocks of repeated data that were the hallmarks of multiple other retractions. There were blocks of duplicates, but only marginally more common than the null expectation, so not strong evidence of a problem. There were more minor errors, and some weird inconsistencies in rounding (far too many x.5's, not enough x.4's) that could be attributed to a careless undergraduate (as Pruitt implied), but nothing that called into question the validity of the data. But this Comment raised some new issues we had missed, with a detailed statistical analysis showing greater similarity in snail shells between replicate blocks than could be explained by random assignment. In his Correction, Pruitt replied that snails were not actually assigned randomly to blocks (contradicting the Methods text as originally published), but he provided no statistical or simulation evidence that his non-random process could generate the odd data overlaps. Conversely, the anonymous commenter then showed that Pruitt's explanation is unlikely to be valid or to explain the problem. The details are provided in the recent Expression of Concern. What I want to note clearly here is this: the snail size data have patterns of repeated numbers, much like previous retractions, but not in blocks. So it might seem reasonable in this case to retract. Why didn't we? The logic is this. First, this is the only paper where the co-authors both supported a Correction rather than a Retraction. Second, the patterns in the data identified by the anonymous individual were less shockingly egregious than in other cases. Third, the Committee on Publication Ethics (rightly or wrongly) suggests that retractions are warranted when core conclusions are affected, and in this case the snail size data were ancillary to the main point (snail behavior interacts with predator behavior; snail size was not in fact under selection, nor was selection contingent on starfish behavior). Together these considerations left me balanced between retraction and the Correction + Expression of Concern approach. I opted for the latter because it allowed Pruitt to have his own voice in a public response, while also allowing us to clearly and publicly evaluate the claims he makes. Personally, I do feel that retraction would be warranted for this paper, but the Correction and EoC approach had its advantages as well, allowing the authors to make their case while still permitting editorial rebuttal.

The final point is an essential one. One of the papers was a mathematical model inspired by some data hidden away in a supplement. The data were not used to choose parameter values, or anything formal. But the data exhibited many of the same kinds of problems we had seen already. So the authors (Pinter-Wollman et al) wished to note their mistrust of the empirical data, but their continued support for the core focus, goals, and findings of the paper. This is a good example of a case where the flaws are secondary to the focus of the paper, to the point where a Correction seemed like a reasonable route, in keeping with COPE recommendations. However, the day the Correction was published, we were notified that the empirical data invoked in this paper (ostensibly from a species of spider, Stegodyphus dumicola, collected in 2014 in the Kalahari) were in large part identical to data from a Behavioral Ecology paper (Keiser et al) that described the same numbers as coming from two other species of spiders (Theridion murarium and Larinioides cornutus) in Tennessee in 2010. It is thus plain that data were duplicated across studies and repurposed for different biological settings. Whether this was intentional or a result of carelessness, I again cannot say. But in my own personal view this is clearly malpractice with data, whether intentional or careless. The question then is whether we retract a valid mathematical model, out of guilt-by-association with tainted data, in order to punish (it is not just a question of correcting an error; the mathematical model is itself self-consistent and valid). In my view it is not the role of editors to punish, but to act to ensure the high quality of published work. Punishment is the purview of the employer of the scientist responsible for malpractice.

In parallel to all this, I was proceeding with a process as a co-author of a Proceedings of the Royal Society B paper. Our initial investigations into the paper in question (on 'behavioral hypervolumes') didn't reveal evidence of serious flaws, and we were close to signing off on a minor Correction. But a series of observations raised new concerns. Namely, for a set of observations in the study, it appeared likely that the numbers were typed in a non-random way. On a laptop keyboard, the digits are arranged from 1 on the left to 9 on the right in a single row. When typing in numbers "at random", people readily type adjacent digits, or certain ending digits, more often than expected. In this dataset I observed that certain combinations were vastly over-represented. For example, numbers ending in 78 (adjacent keys) were far more common than numbers ending in 77 or 79. The same was true for 67 (relative to 66 or 68), and for almost all adjacent pairs of digits. I can think of no biological reason why times on a stopwatch should fall into such clusters, and so the co-authors and I (except Jonathan) asked to be removed from the paper when the journal decided to request a Correction from Jonathan.
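For the curious, here is a minimal sketch of such a terminal-digit tally in Python (the actual analysis was more formal, and the latencies below are hypothetical): count the final two digits of each value and compare adjacent-key endings like 78 against their neighbors 77 and 79.

```python
from collections import Counter

def terminal_pair_counts(values):
    """Tally the final two digits of each integer value (e.g., 1278 -> '78')."""
    return Counter(str(v)[-2:] for v in values if v >= 10)

# Hypothetical stopwatch times (seconds); genuinely measured times should show
# no preference for keyboard-adjacent endings such as '78' or '67'.
latencies = [1278, 345, 967, 1178, 467, 978, 567, 867, 579, 778]
counts = terminal_pair_counts(latencies)
for ending in ["77", "78", "79", "66", "67", "68"]:
    print(ending, counts[ending])
```

In a dataset of honest measurements, the six endings above should occur at roughly equal frequencies; a consistent excess of the adjacent-key pairs is a fingerprint of hand-typed numbers.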


2. Major lessons learned:

One preliminary lesson is that this was an immensely long process, generating vast numbers of emails, R code files, and images of data. And it feels very cathartic to get the experience written down here. So thanks for reading. But the real lessons, as I see them, are:

2.1. The central role of good data sharing. The journals that required data archives were vastly better able to evaluate the data and suspicions than journals that didn't require archiving. All journals should require this. We also found that quite a few data archives were incomplete, highlighting the need for better enforcement of compliance: good meta-data, and all relevant variables included.

2.2.  Even with data sharing, we can't detect all forms of fraud or error. Although there were some recurrent themes (e.g., repeated blocks of data), this isn't something we normally check for when a colleague emails us data to publish. People had to build new software in R to detect the problems that were first noticed (by Kate Laskowski) by eye. Sometimes it was terminal digit analysis (like the 78 repeats I just noted); sometimes it was excessive overlap of numbers between mesocosms. There are an infinite number of ways to introduce errors into data, by accident or intent, and we just can't catch them all.
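As an illustration of the kind of ad hoc tooling this required, here is a sketch of one possible test for excessive overlap of values between mesocosms (in Python rather than the R actually used, with a simple Monte Carlo null model that I am supplying for illustration; the data are hypothetical):

```python
import random

def overlap(a, b):
    """Count distinct values appearing in both mesocosms."""
    return len(set(a) & set(b))

def overlap_p_value(meso1, meso2, n_sim=10_000, seed=1):
    """Monte Carlo p-value: how often would two mesocosms drawn independently
    from the pooled empirical distribution share at least as many distinct
    values as the observed pair?"""
    rng = random.Random(seed)
    pooled = list(meso1) + list(meso2)
    observed = overlap(meso1, meso2)
    hits = sum(
        overlap([rng.choice(pooled) for _ in meso1],
                [rng.choice(pooled) for _ in meso2]) >= observed
        for _ in range(n_sim)
    )
    return observed, hits / n_sim

# Hypothetical mesocosm measurements sharing suspiciously many exact values:
meso1 = [4.1, 7.3, 9.8, 12.6, 15.2, 18.9]
meso2 = [4.1, 7.3, 9.8, 12.6, 21.4, 25.0]
print(overlap_p_value(meso1, meso2))  # high observed overlap, small p-value
```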

2.3 The importance of coordination between journals. The journals' Editors were extremely careful not to bias each other's evaluations of papers. But discussions were essential for learning best practices from each other, such as the suitable use of Expressions of Concern, or how to set up committees to evaluate concerns. This was a new experience for almost all of us, and so having a community of peers to discuss due process with was valuable. But even more crucially, each of us might not have known what to even look for, without some indication from the others of what they had found. This is particularly evident from the more recently emerging evidence that some data sets were duplicated across papers in different journals, ostensibly about different species of spiders on different continents. This recycling of data is blatant (though out of an abundance of caution I'll say again that I don't know whether it was intentional), and can only be detected by coordination among journals and comparisons across papers. Thus, a collaborative process between journals is not only helpful, it is crucial. Note added later: journals use iThenticate to cross-check prose for plagiarism from other papers. Can we do the same for data? Of course some data recycling is entirely appropriate, when asking different questions and acknowledging the intersection between papers. But some is clearly done to mislead.
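A crude 'iThenticate for data' could start from something as simple as this sketch (Python; the values and paper labels are hypothetical): compare the distinct values in two archives and flag pairs whose overlap is implausibly high.

```python
def jaccard(a, b):
    """Jaccard similarity of the distinct values in two data archives."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb)

# Hypothetical values extracted from the archives of two papers that
# ostensibly describe different spider species on different continents:
paper_a = [31.2, 44.7, 58.1, 12.9, 77.3, 91.0]
paper_b = [31.2, 44.7, 58.1, 12.9, 102.4, 66.6]

print(f"Jaccard similarity: {jaccard(paper_a, paper_b):.2f}")  # 0.50 here
```

Real continuous measurements from different field seasons and species should share essentially no exact values, so even modest overlap between archives is a red flag worth a human look.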

2.4 Should we be Bayesians about misconduct? Throughout, we sought to treat each paper in isolation. But many colleagues object, saying that we should be updating our priors, so to speak: with each additional example of errors in data, we should grow more skeptical about the validity of as-yet-untarnished datasets by the same author. That's a defensible position, in my personal opinion, but I went against my own conscience in trying not to be Bayesian here, to make the process as objective as possible. The simplest reason is that a fair number of his papers were based on data generated by others. We absolutely should not leap to the conclusion that everyone he collaborated with was equally problematic in their data management practices. Having said this, it is absolutely pertinent that there is a repeated pattern across many papers and journals. If there are duplicated blocks of data in one, and only one, dataset, I can readily ascribe that to a transcription or copy-and-paste error. If most datasets have such errors, accident seems highly improbable and the case for systematic intentional fraud becomes ever stronger. But even if the systematic errors are unconscious (e.g., difficulty copying data from paper into a spreadsheet due to a cognitive disability), as a community we cannot trust the work of someone who systematically generates flawed data.

2.5 Why are manuscripts guilty until proven innocent when we review them before publication, but innocent until proven guilty when it comes to flaws and retraction after publication? The simple answer is that the impact on individuals' lives is asymmetric. Reject a manuscript, and it gets revised and published elsewhere. Retraction has massive negative effects on someone's psyche, career, and reputation. Because the personal and professional impacts are asymmetric, the standards of evidence for making decisions are similarly asymmetric. Now, there's another approach that might be better. If we destigmatize retraction (while retaining the stigma for fraud and misconduct), we make it easier for people to retract or correct honest mistakes. The result is an improved scientific literature, in which retraction becomes an encouraged norm when warranted. Again, see my recent blog post about the philosophy of retraction for more detail.

2.6 Minimizing collateral damage. During a process such as this, co-authors' work comes under scrutiny as well, because any paper with the central individual as a co-author is questioned. This is especially true in this case, where Pruitt had an established and self-acknowledged habit of providing data to others for them to analyze and write up. The online database served first and foremost to 'rescue' the reputation of papers that (i) were based on data collected, analyzed, and written up by other individuals, (ii) were theory or reviews that did not entail data, or (iii) were cleared of errors through the investigation process. The primary hope of everyone involved was to place as many papers as possible into these categories, to retain the reputation of as many papers and authors as possible and minimize collateral damage (and, at first, damage to Pruitt as well). This is why co-authors eagerly contributed to the database, adding retraction requests as they made them (not realizing that a requested retraction might then be delayed or denied by the journal due to the chilling effect of lawyers' letters). But on balance, the value of the database was primarily to encourage continued citation of papers that were untouched by the data problems. The removal of the database from the public eye, at Pruitt's demand, exacerbated the collateral damage to his co-authors. I regularly received emails asking for access to the database, which I denied out of fear. Often those emails asked for help in judging whether a paper could be safely cited, and I felt that the spurious and unfounded legal threats against me obligated me to be unhelpful. So I would reply that the researchers needed to come to their own conclusions about citing the paper in question. I deeply regret that, for a period of time, I wasn't more proactively helpful in supporting citations of papers that remain sound science. Even today, I think there is no resource where researchers can go to check whether a paper has been judged safe to cite; they can only find the negatives (the retractions and Corrections), as the public and finalized retractions are listed on his Wikipedia page. Given that some journals are still conducting evaluations, this one-sided information only serves to harm his co-authors.

2.7. Think about the mental health of authors. Retraction is stressful and might induce depression or worse. Conversely, we can't let authors hold publication decisions (including retraction) hostage with threats of self-harm. This is a tough tension to resolve.

2.8 Editors will sleep better at night if they buy liability insurance. The letters from Pruitt's lawyers were remarkably effective at generating stress among many editors, slowing or stopping actions by journals and by co-authors. As noted above, I had received confirmation from three sets of co-authors that they wished to request retraction on the basis of concerns about data that they, Associate Editors of AmNat, or third parties had identified. After receiving the lawyers' letters, none of those authors felt safe actually writing the retraction statements, and we received none until the journal had completed its investigation process in the fall of 2020. Even within the journal, the lawyer's letter (provided in full below) caused a pause in all deliberations from early April through early August. This is what is known as a "chilling effect", a topic on which there are lengthy legal opinions protecting Editors' and scientists' decisions and actions in the face of legal threats. But as most of us scientists are not legal experts, it is extraordinarily stressful to be looking down the barrel of a potentially costly lawsuit, even when one is fully confident that the scientific facts are on one's side. I talked to lawyers in private, at the University of Chicago Press, and at the University of Connecticut, and all were confident that the threats had no teeth, but the letters still kept me up at night. When they did, I only had to crack open some of the Dryad files and examine the patterns in the data to reassure myself that the evidence of scientific error and biological implausibility was clear and incontrovertible, and thus that the actions and statements I made were correct.

2.9 Public statements. A retraction or correction that is done quietly has no impact on people's beliefs about published results. When a prominent paper is retracted or corrected, it is essential that the action be publicized widely enough for the community to be aware of it, because the publicity makes people aware of changes in what we understand to be scientifically valid and changes in our understanding of biology (e.g., removing a case study from the buffet of examples of a concept). The purpose of the publicity is not to harm the author involved. Far from it: in my experience, when authors are proactive about publicizing their own corrections or retractions, they receive adulation and respect from the community for their transparency and honesty (e.g., Kate Laskowski). A public process of disseminating information about corrections or retractions only becomes harmful to the author when it is clear that the changes stem from gross scientific error that should have been readily avoidable, or from fraud or other misconduct, or when it is clear that the author fought retraction tooth and nail to create a chilling effect. In such cases, it is the author's own actions that are the source of the harm, and the dissemination of information about retractions is a service to the scientific community, correcting erroneous knowledge arising from the author's improper actions.

2.10 Be patient. When you submit a complaint to a journal, there are many steps we go through to ensure a fair and correct outcome. We screen the initial complaint, and if it seems valid we assemble a committee to evaluate it. We obtain a reply from the authors, sometimes separately if the authors don't see eye to eye, sometimes as a group. If the authors disagree with the critique, we send the critique and the rebuttal for review by experts who know the statistics, or the biology, well enough to give a detailed evaluation. We then synthesize the reviews, the critique, and the rebuttal to formulate a decision. Some journals did many rounds of back-and-forth with the author in question. Note also that when an author is facing criticism on many fronts (dozens of papers at multiple journals), they aren't going to be fast about any one paper. This is where Editorial Expressions of Concern (which I sought to publish, but was blocked by legal threats spooking my publisher) come into play - they can alert the community early on that an evaluation is underway, giving breathing room for a thorough and fair evaluation. PubPeer also serves the role of early notification to the community. But in the particular case of Pruitt's papers, some PubPeer posts were later found to be incorrect. Leveling incorrect accusations in a non-peer-reviewed venue troubles me, which is why I prefer the slower but more thorough review process inside a journal.

Above all else, I believe that science requires open discussion of data, clear documentation of due process, and dissemination of findings. We followed due process and reached findings that resulted in author-requested retractions of three papers (with full agreement of the journal, the entire Editorial Board of 3 editors, the 6-person committee of Associate Editors, and all but one author, in each case). Two other papers have received Corrections, and one of these also has an Expression of Concern. Now that the back-room deliberations are complete, in the spirit of scientific openness about process, I felt it was time to clearly and publicly explain the logic and process of my involvement in this series of events. As a community, we can only learn to (1) prevent and detect such cases, (2) adjust our understanding of biology, and (3) improve procedures for future cases when the details of the events are clearly known.

Coda

This may not be the end of this story, though I sincerely hope it is the end for me. Investigations are ongoing at other journals and at institutions. But my task, as both Editor and co-author, is done. The threat of legal action still hovers, and I worry that posting this blog will stir that hornet's nest. But with each new retraction at another journal, arrived at independently through processes outside my control and without my influence, the evidence grows that there was a deep and pervasive problem. Should this ever wind up in court, it will be easy to point to the data and make clear that there was a strong biological and statistical rationale to doubt the data in these papers. We bent over backwards to pursue a fair and equitable process, treating each paper separately and bringing on advisers who were, if anything, likely to be on Jonathan's side or to be neutral arbiters. We coordinated between journals, because that is essential to learn from each other and to detect problems that cross journals (e.g., data reused for multiple papers ostensibly about different species). In short, I've learned a great deal about effective ways of handling these kinds of problems. And I've seen journals that performed admirably, and journals that didn't (yet).


Acknowledgements

This post is dedicated to the committee members who assisted The American Naturalist with its investigation - Jeremy Fox, Florence Débarre, Jay Stachowicz, Alex Jordan, Steve Phelps, and Emma Dietrich - and to the many co-authors who assiduously worked to evaluate concerns about data in the face of intimidation.


Supplement

As a supplement to this document, I am providing a copy of Pruitt's lawyer's letter. I am providing it without comment, though nearly every paragraph contains statements that are demonstrably false or misrepresentations, which I can prove with emails as needed. To pick a couple of examples: at no point had I "contacted the editors of more than 20 academic journals to ask them to investigate Dr. Pruitt" - they received whistleblower complaints from someone else, and I had no role at all in that. Many of them then emailed me to ask what my procedures had been for responding. Nor was I involved in "guiding [Laskowski] through the analysis that led her to conclude that the paper should be retracted" - she did that on her own, after concerns were raised by Erik Postma and Niels Dingemanse, with zero input from me about the analysis. The letter is riddled with such errors, and it casts aspersions on me, my motives, the committee that served the journal in evaluating his work, and many others. So please read the following with an appropriate level of skepticism as to its contents. I should also state up front that multiple lawyers confirmed for me that all of my actions were appropriate, ethical, and protected under the US First Amendment's free speech clause and the SPEECH Act, and that the request for confidentiality at the top of the letter has no legal basis.

[Images of the lawyer's letter appeared here.]
6 comments:

  1. I was wondering if you were going to write a post along these lines...

    Thanks for the kind and generous dedication Dan. The field owes you an even bigger thank you.

  2. Wow. Thank you, Dan. As I wrote to a friend just the other day, even before you posted this saga, "[Dan] is such a careful, thoughtful person. The ideal journal editor." This blog post proves my point. I'm sorry you had to go through all this.

    1. That is absolutely spot on, Ben. I second every word.

  3. I am working with others to get a fraudulent study retracted on the breeding biology of the Basra Reed-Warbler (an endangered bird species that breeds only in Iraq), so I am very interested in this case. I strongly support your decision to publish this blog post. I also strongly support your decision to publish the legal letter.

    I am an old-school field biologist, and in my opinion the key sentence in your blog post is: "No paper data sheets have been provided at any point, despite undergraduate assistants' assertions that using paper data sheets was standard practice."

    In my opinion it is also remarkable that the legal letter does not refer to requests for access to this information (given that these requests predate the letter). The legal letter uses the term 'campaign'. It is my very strong opinion that you and all the others are simply conducting science.

    I have also received legal letters, lots of them. Some contain severe threats. I have decades of experience in the field of nature conservation, so I am used to all kinds of threats. My response to these legal letters is always the same: 'please show me the raw research data'. The response to this request is always the same: (a) no response, (b) a new legal threat, or (c) directing me to entities or persons who do not communicate.

    The first author of the Basra Reed-Warbler study was willing to retract his article, but he was later overruled by his Italian co-author, who was not involved in collecting the data. The Italian co-author did not communicate with me about my requests to see the raw research data. Instead, I received a legal threat from the University of Pisa, his employer. This all took place in 2015. The University of Pisa has still not responded to a complaint I filed in July 2017 (almost 4 years ago); I was only able to communicate, through social media, with a student member of that university's Ethics Committee. I have meanwhile corresponded with many editors of peer-reviewed journals about a preprint on this topic. Some responses are highly remarkable, and that is a mild judgement.

  4. Dear Dan,
    Thank you for a detailed account of the whole incident. There are many lessons to learn here for researchers of every age. I have two questions:

    1. What's your take on the review process in light of this incident? How can it be made stronger, so that problematic manuscripts don't slip through the gaps? Where does the buck stop in such cases? Without blaming anyone who was involved in the reviewing and publishing process, how can we make the process more resilient? More specifically, should the reviewers and associate/handling editors of those particular manuscripts be gently warned? What are the journals going to do? Have such talks been happening in the closed-door meetings?

    2. If the data were fabricated, what does that say about the culture of scientific research? The 'why' question is important. Why would someone do this? This culture of publication-at-any-cost is deeply troubling. The culture of more and higher of everything -- grants, impact factors, publications, chairs, committees -- can be toxic. How does, or should, the community deal with that? In the end, are we going to rely merely on an individual's scientific integrity, or can we create better structures to avoid these falls?

    I am an early-career scientist. You can understand my worries and dismay on seeing this case.

    1. I'm not Dan but am an editor-in-chief of a journal and have dealt with retractions, so can I have a go at your questions?

      1. "What are the journals going to do?" No editor or reviewer could have been expected to catch the well-hidden data fabrications within individual papers by Pruitt. I don't think anyone involved in the prepublication peer review of those papers was at fault or has earned any kind of warning. I think the lessons are that postpublication peer review is important and necessary, and that obligatory data sharing in public repositories is a necessary step toward postpublication review that's open and transparent.

      2. "Why would someone do this? This culture of publication-at-any-cost, is deeply troubling." I agree the first question is important, and the second statement may be true, but I don't think they necessarily have anything to do with each other in this case. Indeed, this is letting Pruitt off the hook with a Twinkie-style defense. There have always been incentives to cut corners, cherry-pick results, or fake data, but the vast majority of researchers don't do these things. I don't think Twinkies made Pruitt do it. Hoping the report by McMaster will tell us more.

