Sunday, February 2, 2020

#IntegrityAndTrust Idea 2. Teamwork in research ethics and integrity

Maintaining Trust AND Data Integrity - a forum for discussion. INFO HERE.

Steven J. Cooke, Institute of Environmental and Interdisciplinary Science, Carleton University, Ottawa, Canada

About ten years ago I created an “expectations” document that I share and discuss with all new team members.  It consists of twelve topics and usually takes me about two hours to work my way through.  The very first topic on the list is ethics and the document states “Lab members will adhere to the highest level of standards related to ethical practices of data collection, analysis and reporting”.  Despite it being the FIRST thing I cover, I usually spend a grand total of 30 seconds as I fumble around the idea that they shouldn’t commit fraud, trying to do so without making them sound like criminals.  I think one of the reasons that I move past that point so quickly is that it seems so obvious – don’t fudge your data or do “unethical” things.  Upon reflection over the last few days I have concluded that I must do better and engage in a more meaningful discussion about ethics and integrity.  However, I am still struggling with what that means.  I look forward to learning from our community and formalizing how I approach issues of research ethics and integrity not just upon entry into our lab but as an ongoing conversation.

As I reflect on recent events, I am left wondering how this could happen.  A common thread is that data were collected alone.  This concept is somewhat alien to me and has been throughout my training and career.  I can’t think of a SINGLE empirically-based paper among those that I have authored or that has been done by my team members for which the data were collected by a single individual without help from others.  To some this may seem odd, but I consider my type of research to be a team sport.  As a fish ecologist (who incorporates behavioural and physiological concepts and tools), I need to catch fish, move them about, handle them, care for them, maintain environmental conditions, process samples, record data, etc – nothing that can be handled by one person without fish welfare or data quality being compromised.  Beyond that, although I am a researcher, I am also (and perhaps foremost) an educator.  And the best way for students to learn is by exposing them to diverse experiences which means having them help one another.  So – in our lab – collaborative teamwork during data collection is essential and the norm.

So, is having students (and post docs and/or technicians) work together to collect data sufficient to protect against ethical breaches? Unlikely… yet it goes a long way to mitigating some aspects of this potential problem. Small (or large) teams working together means more eyes (and hearts and minds) thinking about how research is being done. This creates opportunities for self-correction as different values, worldviews and personal ethics intersect. It also enhances learning opportunities, especially if there are opportunities to circle back with mentors. I try to visit team members (especially on the front end of studies) or connect with them via phone or email to provide opportunity for adjustments beyond the “plans” that may have appeared in a proposal. When I do so I don’t just communicate with the “lead” student. Rather, I want to interact with all of those working on a given project. This provides an opportunity to reinforce the culture of intellectual sharing. It is not uncommon in our lab for well-designed plans to be reworked based on input from assistants during the early phases of a project: adjustments in what we do and how we do it. This level of ownership means that there is collective impetus to get it right (and I don’t mean finding statistically significant findings). Creating an environment where all voices matter (regardless of status) has the potential to reduce bias and improve the quality of the science. As a community we already do this with respect to workplace safety, where safety is “everybody’s business”. Why can’t this become the norm where “research ethics” and study quality are everybody’s business?

There is much to think about, ranging from the role of open science and pre-registration of scientific studies to reframing our training programs to build an ethical culture. Yet, an obvious practice that I will be continuing is one where students (and others) work in teams during the data collection phase. There is a need to empower individuals to challenge their peers, but to do so in a constructive and collaborative manner. By working together in data collection there is opportunity for continual interaction and engagement that can only benefit science and enable the development of ethical researchers. There are also ways in which this can be extended to the data analysis phase (e.g., using GitHub), where team members work collaboratively on analysis and take time to double (and triple) check their code and analysis (I am not the best one to comment on this approach but we are actively pursuing how to do this in our lab).

Saturday, February 1, 2020

#IntegrityAndTrust 1. Publish statements on how data integrity was assured

Maintaining Trust AND Data Integrity - a forum for discussion. INFO HERE.

By Pedro R. Peres-Neto

It is great to see how much effort is being put into scrutinizing these data and even understanding how the reported issues happened. Perhaps this was a one-time incident, though data issues of less severe proportions are likely to occur from time to time. Perhaps we should take this opportunity to start thinking about a more robust system to reduce future potential issues. Obviously, the number of papers is increasing dramatically, so data issues (from minor to huge) are likely to increase as a result of poor data management, lack of scientific rigor, or even data fraud. Collating data from different sources beyond single studies, for example, has become common practice in many fields. Simple or systematic ways to improve data integrity and scrutiny may be required by different fields and data sizes.

There are strong signs that the number of co-authors per paper is increasing through time in many fields, including ecology. Perhaps there can be some support to adhere to a publication policy in which co-authors explain which steps they took to assure data integrity. These could be in the form of published statements. One way could be to have more than one co-author (perhaps not all) spend a meaningful amount of time scrutinizing the data and analyses to assure that (we) co-authors did our best to reduce potential data issues. By no means am I saying that co-authors of retracted papers are to blame. We know all too well that issues like that can happen. What I’m saying is that having more than one co-author involved from the beginning is one way to increase data integrity control. There are certainly other solutions, and that’s why I’m suggesting that authors publish statements on how they did their best to assure data integrity.

It is time for a discussion on potential solutions to hold and preserve data integrity to the highest standards possible. Solutions (in my mind) are needed to maintain continuing public support in trusting and using research outcomes, and the support of the taxpayers who fund so much of our research.

Maintaining Trust AND Data Integrity - a forum

So many discussions are happening right now about ways to improve data integrity in published papers, while also maintaining trust in collaborators, supervisors, and students.

I am sure that many of these ideas will be expressed in other venues. However, we here wish to make a space available for constructive suggestions by those who wish to comment in depth but do not have a forum to do so.

These ideas will play out as a series of guests posts - as opposed to the back and forth in comment sections. We are striving for well-considered and deliberate ideas, rather than knee-jerk comments, criticisms, or the like.

These posts will not be about the Pruitt data debate specifically - but rather more general comments on how to improve the process overall - regardless of what happens with those papers.

These posts will be moderated by myself (Andrew Hendry). Please send me an email (andrew.hendry@mcgill.ca) if you would like to write one. The first post should be up soon.


#IntegrityAndTrust 1. Publish statements on how data integrity was assured. 
By Pedro Peres-Neto.

#IntegrityAndTrust 2. The role of teamwork in research ethics and integrity.
By Steven Cooke.
Additional external posts

Trust Your Collaborators 
By Joan Strassmann

Data Dilemmas in Animal Behavior


-------------------------------------------------------

Although this forum will not be about the Pruitt data or controversy, I should make clear that:

1. I have never evaluated Jonathan or his work in any context. That is, I never reviewed any of his proposals or papers as reviewer, editor, or panel member. Nor have I provided assessment letters for his job applications or tenure or promotion. I have not been involved in any of the assessments of the data that is currently being questioned - although I have certainly heard about these issues as they unfolded. I have not collaborated with Jonathan, nor did I have any plans in place to do so.

2. I have met Jonathan at several meetings and when I visited his university for a seminar. I enjoyed meeting and talking with Jonathan, and he contributed a joke picture to #PeopleWhoFellAsleepReadingMyBook that I posted on Twitter. Since the controversy erupted into public consciousness, I have emailed Jonathan offering him this blog as a venue should he wish to respond to the criticisms - and then later to let him know that he might want to consult legal advice before doing so.

Thursday, January 30, 2020

The Pruitt retraction storm Part 1: The current state

At the suggestion of a colleague, this blog post is meant to document the status of Dr. Jonathan Pruitt's publications, two of which have formally been retracted.

Which papers are retracted?
Which have been checked and confirmed to be sound (and, by whom), so we may continue to confidently cite them?
Which papers are currently being discussed but are not yet to the point of being retracted or cleared?

HERE IS THE LIST  (**** A WORK IN PROGRESS)

The key motivation here is to highlight papers that remain reliable, for instance when data were generated by students or postdocs other than Pruitt.  At present, pending the findings of a formal inquiry, it appears that Pruitt-generated data are the common theme in the known retractions and concerns. It is crucial that we not discard sound science produced by junior scientists working with Pruitt, merely by association. For this reason, I personally would discourage people from dismissing all Pruitt-co-authored papers out of hand.

I am generating this list based on email and social media communications; I welcome new information by email or otherwise and will keep this page up to date (daniel.bolnick@uconn.edu). I am also following the pubpeer website which seems to be a spot where ongoing concerns are being discussed, but keep in mind these do not represent retractions, just conversations.

Another reason to maintain this page: many people may be simultaneously checking past data files for the kinds of flaws that led to the two recent (and one pending) retractions. The pattern so far seems to be that Pruitt's co-authors are taking a great deal of time re-examining their past publications: checking for patterns in data files, redoing analyses on the existing data. This is a massive drain on their time, when what they probably need most is to focus on new research to help them recover. At the same time I am aware of some non-authors who are delving into the data files as well. This redundancy is perhaps good to a point (in at least one case a co-author did not find problems in the data that were then identified by someone more familiar with the other cases), but it is also a drain on the field's collective time. Therefore, if a paper is actively being re-examined, and IF the researchers involved wish to be named, I can include their information on this page as well as contacts. However, so concerns do not linger indefinitely, please let me know when papers are cleared of concern, or when retractions are public.

Please note that for legal reasons the journals typically will not list in-progress retractions until they are official, as retractions are usually vetted by the publisher and lawyers before being made public.

The Google Spreadsheet documenting the current status of Pruitt papers can be found here

If you have additions or amendments or corrections please contact me at daniel.bolnick@uconn.edu

-Dan Bolnick


The Pruitt retraction storm Part 2: An Editor's narrative

This blog post documents my experience of what social media seems to be calling #SpiderGate or #PruittGate. I am writing this from three perspectives. First, as Editor of a journal affected by the series of recent retractions by Dr. Jonathan Pruitt and colleagues. Second, as a one-time co-author with Pruitt. Third, as a friend of Jonathan's from long discussions about science at conferences and pubs over the past decade.

A companion post will provide a summary of the current state of retractions and validations of his papers [please email me updates at daniel.bolnick@uconn.edu].

A third companion post will contain reflections on what this all means for science broadly and behavioral ecology specifically.

Before diving in, I want to emphasize that parts 1 & 2 of this series are meant to be a strictly factual record of the sequence of events and communications that do not imply any judgement about guilt or innocence for Dr. Pruitt. For transparency, I should also reiterate that although I do not know Jonathan well, we have been academic friends for quite a few years.

1. A narrative of events

On November 19, 2019, a colleague (Niels Dingemanse) emailed me a specific and credible critique of the data underlying a paper by Dr. Jonathan Pruitt and colleagues that was published in The American Naturalist, for which I am Editor In Chief. The critique (from an analysis by Erik Postma) identified biologically implausible patterns in the data file used in the paper (Laskowski et al 2016). Specifically, certain numbers appeared far more often than one would expect from any plausible probability distribution. For specifics, see the recent blog post explanation by Kate Laskowski. The complaint did not make specific accusations about how the suspect data might have come to be.
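To give a flavour of the kind of screen that can reveal such patterns (this is my own sketch with made-up numbers, not the analysis Erik Postma actually ran), one can simply tally how often each exact value recurs in a column of measurements:

```python
from collections import Counter

def top_repeats(values, k=5):
    """Return up to k values that recur, with their counts, most frequent first."""
    return [(v, n) for v, n in Counter(values).most_common(k) if n > 1]

# Toy data: continuous behavioural measurements should rarely repeat exactly.
latencies = [12.3, 45.1, 600.0, 600.0, 600.0, 600.0, 7.8, 45.1, 600.0]
print(top_repeats(latencies))  # [(600.0, 5), (45.1, 2)]
```

An excess of exact repeats is not proof of anything by itself (capped or rounded measurements repeat legitimately), but it flags a file for closer inspection.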

The lead author on this paper, Dr. Kate Laskowski, had received data from the last author, Jonathan Pruitt. Pruitt does not contest this point. She had analyzed and written the paper, trusting that the data were accurate. On hearing about the odd patterns in the data, she did exactly the right thing: she examined the data files herself very carefully. She found additional odd patterns that have no obvious biological cause. She asked Jonathan about these patterns, as did I. His initial explanation to me did not satisfy us. He said the duplicated numbers were because up to 40 spiders were measured simultaneously and often responded simultaneously, and were recorded with a single stopwatch. The methods and analyses did not reflect this pseudoreplication, so Jonathan offered to redo the analyses. The new analyses did not recover the same results as the original paper. Moreover, the duplicated numbers were in fact spread across spiders from different time points, webs, etc, so his initial rationale did not explain the data. At this point, Dr. Laskowski decided to retract the American Naturalist paper because she could not determine how the data were generated. She obtained consent from her co-authors, including Dr. Pruitt, on wording that included the acknowledgement that the data were not reliable (without specifying how), and that Pruitt had provided the data to the other authors. This retraction was then run past the University of Chicago Press publisher and lawyers, then copyedited and typeset, and published online in mid-January.

Dr. Laskowski also examined two other Pruitt-provided datasets, one for a paper she also lead-authored with Pruitt in Proceedings of the Royal Society B, and one for a paper she co-authored. The former paper is now officially retracted. Her analysis and request for retraction of this PRSB paper was concurrent with the American Naturalist one, and PRSB Editor Spencer Barrett and I were in close contact through this process. Problems with the third paper were brought to our attention on January 13, by Dr. Laskowski. The retraction of the third paper is being processed by the journal, and we were asked not to publicize specifics until the journal posts the retraction statement.

My involvement in this process was to field the initial complaint, email a series of queries with Jonathan seeking explanation, and to accept the retraction of the American Naturalist article without passing judgement on the cause of the problems with the data.  However, once the authors requested the retraction (of both the AmNat paper and the PRSB paper), I consulted the Committee on Publication Ethics guidelines, in depth.  Four points emerged.

First, it is clear that when oddly flawed data lead to a retraction, the Editor is supposed to report this to the author's Academic Integrity Officer (or equivalent).  I contacted the relevant personnel at Pruitt's current and former institutions to notify them of concerns. Pruitt's current employer is best positioned to conduct an inquiry. It is not my job, nor is it even my right, to render judgement about whether data were handled carelessly to accidentally introduce errors, or whether the data were fabricated, or whether there is a real biological explanation for the repeated patterns. So I encourage community members to not engage in summary judgement and await the (likely slow) process of official inquiry.

Second, Editors of multiple affected journals are encouraged to communicate with each other (which I have done with Spencer Barrett of PRSB and other Editors elsewhere) to identify recurrent patterns that might not be clear for our own journals' smaller sample of papers.

Third, it seemed wise to investigate the data underlying other articles that Pruitt published in The American Naturalist, for which I am responsible as the journal's Editor. I asked an Associate Editor with strong analytical skills (which could be any of them) who is not tied up in behavioral ecology debates (e.g., a neutral arbiter) to examine the original concern and then to examine the data files for other papers. The AE put in an impressive effort to do so, and reported to me that at least one paper appeared to have legitimate data and results (on Pisaster sea stars), but other papers had flaws that to varying degrees resembled the problems that drove retraction. The Pisaster paper apparently involved data collected entirely by Pruitt, so make of that what you will, but we found no evidence of unrealistic patterns. Analysis and discussion concerning the other papers is ongoing; we have not yet rendered a judgement. It is the author's prerogative to request a retraction, and in a desire to approach this fairly we are giving authors time to examine their data closely and exchange concerns with Pruitt (who is in the field with limited connectivity) before reaching a final decision on retraction. The Associate Editor also examined a few files for Pruitt articles at other journals, and found some problems which we conveyed to the relevant co-authors and journal Editors.

Fourth, it seems clear at this point that the data underlying a number of Pruitt papers are not reliable. Whether the problem is data handling error, or intentional manipulation, the outcome will be both a series of retractions (the two public ones are just the beginning I fear), and mistrust of unretracted papers. This is harmful to the field, and harmful especially to the authors and co-authors on those papers. Many of them (myself included) were involved in Pruitt-authored papers on the basis of lively conversations generating ideas that he turned into exciting articles. Or, by giving feedback on ideas and papers he already had in progress (Charles Goodnight, for example, is second of two authors on a Nature paper with Jonathan, having been invited on after giving feedback on the manuscript). Or, often they were first authors who analyzed data provided by Pruitt and wrote up the results.  These people have seen their CVs get shorter, and tarnished by the fact of retraction. They have experienced emotional stress, and concern for how this impacts their careers.  I want to emphasize that regardless of the root cause of the data problems (error or intent), these people are victims who have been harmed by trusting data that they themselves did not generate. Having spent days sifting through these data files I can also attest to the fact that the suspect patterns are often non-obvious, so we should not be blaming these victims for failing to see something that requires significant effort to uncover by examining the data in ways that are not standard for any of this. So to be clear, the co-authors have in every instance I know of reacted admirably and honorably to a difficult and stressful situation. They should in no way be penalized for being the victims of either carelessness or fraud by another whom they had reason to trust.

  As the realization dawned on me that (1) many people were going to be affected, and (2) they are victims, I felt that a proactive approach was necessary to help them.  Dr. Laskowski for example was seeing some of her favorite articles retracted, while she is junior faculty at a top-notch institution. For some of Pruitt's more recent students, the majority of their publication list may be at risk.  With this in mind, I agreed with Dr. Laskowski that public acknowledgement of the retractions was the best strategy (via twitter and her blog).  I was deeply relieved to see the intense outpouring of support, sympathy, and respect that she and her fellow victims deserve.

Fundamentally I believe that if we stigmatize retractions, we will see fewer of them and the scientific record will retain its errors longer than we'd like.  When mistakes are found, transparency helps science progress and move on more quickly. I experienced this myself when I had to retract a paper because of a R code error (it was the first paper I published using R for the analyses), and received very positive support for the actions (blog about that retraction is here). So I encourage you to continue to support the affected co-authors.

Because the first retraction came out in The American Naturalist, and because of Dr. Laskowski's tweets tagging me, I inadvertently became a go-to participant in the process. I have received numerous emails every day this January about data concerns, retraction requests, and related communications. The process has often engulfed half to all of my day several days per week. Most of these I responded to as I could, or forwarded to the relevant people (Editors, Academic Integrity Officers, etc), redacting details when the initial sender requested anonymity.  Analyses and discussion of some of the emerging concerns can be found here.

The Associate Editor I mentioned above went as far back as digging into some of Pruitt's PhD work, when he was a student with Susan Riechert at the University of Tennessee Knoxville. Similar problems were identified in those data, including formulas in Excel spreadsheets where logic and biology would suggest no formula belongs. Seeking an explanation, I had the dubious role of emailing and then calling his PhD mentor, Susan Riechert, to discuss the biology of the spiders, his data collection habits, and his integrity. She was shocked, and disturbed, and surprised. That someone who knew him so well for many years could be unaware of this problem (and its extent) highlights for me how reasonable it is that the rest of us could be caught unaware.

Meanwhile, I have delved into the one dataset underlying my co-authorship with Pruitt (a PRSB paper on behavioral hypervolumes). The analytical concept remains interesting and relevant, so not all of that paper is problematic. But, the analytical approach presented there was test-run on social spider behavior data (DRYAD data) that does turn out to have several apparent problems: an unexpected distribution of the data (not as overdispersed as we'd think it should be for behavior data); some runs of increasingly large numbers that do not make sense; the mean of the raw data file of 1800 individuals is basically exactly 100.0; there are many duplicate raw values; and an excess of certain last digits that data forensics suggests can be a red flag of data manipulation BUT IS NOT CONCLUSIVE. None of these problems is a smoking gun, and none is as clear as those of other articles. We have requested a response from Pruitt, who is traveling doing field work in remote locations at the moment, and are holding off on deciding to retract the paper until we see a response.
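As an aside on the last-digit check mentioned above: the idea, borrowed from data forensics, is that the terminal digits of genuine measurements are often close to uniformly distributed, so a strong excess of one digit can be a red flag (though, again, never conclusive on its own). A minimal sketch of such a tally, using made-up values rather than the actual dataset:

```python
from collections import Counter

def terminal_digit_counts(recorded):
    """Tally the final digit of each value exactly as it was recorded."""
    return Counter(s[-1] for s in recorded)

# Made-up records; a real check would also compare the tallies against a
# uniform expectation (e.g., with a chi-square test).
obs = ["12.5", "30.5", "41.5", "22.5", "17.3", "90.5"]
print(terminal_digit_counts(obs))  # Counter({'5': 5, '3': 1})
```

Note that the tally runs on the values as recorded (strings), since trailing zeros and rounding conventions are lost once numbers are parsed.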



The last thing I want to say is that I am increasingly intrigued and troubled by the lack of first-hand witnesses who actually did the raw data collection, and the lack of raw data sheets. If anyone was an undergrad with Pruitt who can attest to how these data were collected, their perspective would be very, very welcome. ****(I have since been contacted by two undergraduates who did collect data for Pruitt; they confirm that data were recorded on paper, and that the experiments they were involved with were actually done).

-Dan Bolnick
January 30, 2020


Some follow up thoughts added later:
1. These investigations take a great deal of time per paper (the AmNat retraction took 2 months of back and forth, data examination, and wrangling over retraction wording), and there are many papers. Be patient and do not assume every paper is unreliable, please.

2. It is true that the co-authors did not catch the flaws in the datasets. But having been deeply involved in examining these data, I can say that the red flags were all revealed by the kinds of analyses, and the digging through original data looking for duplicated runs of numbers, that are not habitual, automatic things to do to raw data. Not having had reason to mistrust, what they did (proceeding to analyze the data) was quite natural.


Friday, January 24, 2020

Writing Retreats


This last fall, my students organized a writing retreat at the McGill University Gault Nature Reserve on Mt. St. Hilaire. We all had such an amazingly positive and productive experience that I wanted to tell all the PIs out there about it – and all the students too – so that they can lobby their Profs for something similar.


In the hopes of creating a shared experience, we decided that everyone would, then and there, start and (ideally) finish the draft of the introduction to a new paper that they needed/wanted to write based on data they had or were collecting. To further generate a shared narrative, I started by presenting my baby – werewolf – silver bullet metaphor for writing papers: detailed here. We challenged ourselves to – within 15-20 min – each come up with a single sentence for the baby (what people care about with respect to the overall topic of your paper), a single sentence for the werewolf (something that is not well understood about, or is a problem with, that topic), and a single sentence for the silver bullet (how a study can fill that understanding and therefore kill the werewolf and save the baby).

Each student then quickly presented their baby-werewolf-silver bullet sequence to the group for rapid feedback. Then it was off to the races. Each student worked on expanding their ideas into a true introduction while I circled around the room from one person to another to provide help and advice and to quickly read over what was being written. Babies came and went to be replaced with other, more adorable, babies. Werewolves were found to be not very scary – or unkillable – and so were replaced with other werewolves. Silver bullets were polished and refined. Introductions took shape.
 
Then it was time for an awesome chilli dinner and then trivia (biodiversity related) and scientific karaoke (each student randomly presented the research of another student based on 1-3 slides provided by that student). Then we told war stories from the field until late in the evening (early in the next morning). Ticks. Bears. Snakes. Cliffs. Bear attacks. Deer attacks. The next morning we continued our work, had a good walk and headed back to the real world.


We all really liked this writing retreat. I had a great time working with everyone on the formative stages of their papers. The students enjoyed hearing how each other student’s work could be interpreted in the context of a baby-werewolf-silver bullet context. Several students noted that it was hard to get writer’s block because quickly exchanging ideas with me or the other students would immediately allow them to progress down new avenues. We all felt excited about writing and invigorated about our various writing projects.

This enthusiasm has continued with the implementation of follow-up mini-writing retreats. Now, every Friday, we reserve a room in the graduate student house at McGill and continue the process. Students sit around at tables or on couches and – simply – write. I walk around, have a seat by one or another, read their work, discuss their ideas, and just all around enjoy the process.

Perhaps it isn’t too late to teach an old professor new tricks. My normal way of writing was to have meetings with students individually to discuss things, then they would go off and write, then I would receive the paper and edit it intensively by myself at home, then I would send it back, rinse and repeat. Now we will write papers – or at least parts of them – together. I can’t wait until next Friday afternoon. But, in the meantime, I had better get back to editing this MS that a student sent me.

Monday, December 16, 2019

Darwin’s finches adapting to human influences

This post was originally posted on the British Ornithologist's Union Blog here:
https://www.bou.org.uk/blog-gotanda-antipredator-behaviour-darwins-finches/


Can Darwin’s finches adapt to invasive predators and urbanization?

Dr. Kiyoko Gotanda
University of Cambridge

Gotanda, K.M. Human influences on antipredator behaviour in Darwin’s finches. J. Anim. Ecol.
A small ground finch in the Galapagos

"All of [the terrestrial birds] are often approached sufficiently near to be killed with a switch, and sometimes, as I myself tried, with a cap or a hat." -Charles Darwin in "The Voyage of the Beagle"

The Galapagos Islands are renowned for their biodiversity and large numbers of endemic species, including the Darwin’s finches. When Charles Darwin visited the Galapagos Islands back in 1835, he noted that land birds could be approached near enough to toss a hat over the birds (Darwin 1860)! These and other Galápagos organisms evolved for millions of years in the absence of humans and mammalian predators, and thus developed a remarkable naïveté to humans and their associated animals.

Humans now have a permanent presence on the Galapagos and with that comes a variety of influences. A major, contemporary threat is the introduction of non-native predators (Fritts and Rodda 1998; Low 2002), which is often correlated with extinctions on islands (Clavero and García-Berthou 2005; Sax and Gaines 2008; Clavero et al. 2009). House cats (Felis silvestris catus) are particularly problematic because they can decimate bird populations (Lever 1994; Nogales et al. 2004; Wiedenfeld and Jiménez-Uzcátegui 2008; Loss et al. 2013). Humans have recently introduced invasive house cats to the Galapagos Islands (Phillips et al. 2012), where they opportunistically prey on land bird species, including Darwin’s finches, posing a major threat to the biodiversity of the islands (Stone et al. 1994; Wiedenfeld and Jiménez-Uzcátegui 2008). The Galapagos National Park has taken extreme conservation measures and successfully eradicated cats and rats on some of the islands (Phillips et al. 2005, 2012; Carrión et al. 2008; Harper and Carrión 2011).

The second human influence is urbanization: permanent human populations have established towns on the islands. Urbanization can have a strong effect on ecological and evolutionary processes (Alberti 2015; Alberti et al. 2017; Hendry et al. 2017; Johnson and Munshi-South 2017). The human population of the Galápagos Islands has grown rapidly from ~1,000 permanent residents to ~25,000 in just 40 years, presently distributed across four towns, one on each of four islands (UNESCO 2010; Guerrero et al. 2017). We are already seeing the impact of human habitats on behaviour in Darwin’s finches: urban finches in Puerto Ayora, the largest town on the Galapagos (permanent human population ~12,000), have shifted their behaviour and now exploit human foods, which has changed the finches’ ecological niche (De León et al. 2018).
Figure 1. Map of the Galapagos Islands showing islands that vary in the presence, absence, or eradication of invasive predators. Islands in orange have invasive predators, islands in green are pristine, and islands in purple have had invasive predators eradicated. Islands that have the presence of invasive predators are also the islands with permanent human populations.

So, how have Darwin’s finches adapted to these different human influences? I wanted to know how the finches might be adapting to the presence, absence, or eradication of invasive mammalian predators, and to urbanization. Specifically, I was interested in their antipredator behaviour. To study this, I focused on flight initiation distance (FID), the distance at which an individual flees an approaching predator, which serves as a metric of fear.

An invasive house cat in the Galapagos

On islands that have invasive predators, the finches have adapted by increasing their antipredator behaviour. What’s most interesting, though, is that on islands where invasive predators had been successfully eradicated either 8 or 13 years before my data collection, the heightened behaviour has been maintained. This suggests that the increased antipredator behaviour could be an evolved adaptation, though it could also reflect other mechanisms, such as learned behaviour or cultural transmission of the behaviour through the generations. Either way, invasive predators can have a lasting effect on antipredator behaviour in Darwin’s finches.

I also compared antipredator behaviour in urban and non-urban populations of Darwin’s finches on all four islands that have permanent human populations. I found that on the three islands with the largest human populations, antipredator behaviour was significantly lower in urban finches than in non-urban finches, likely due to habituation. Indeed, urban finches’ antipredator behaviour was even lower than what I found on pristine islands with no history of human influence.
 
Figure 2. Flight initiation distance in finches in relation to the presence, absence, or eradication of invasive predators and between urban and non-urban populations of Darwin’s finches.

Thus, my study shows that Darwin’s finches are adapting their antipredator behaviour to different human influences. These findings help us better understand how the presence and subsequent removal of predators can have lasting effects on antipredator behaviour, and how urbanization, with the likely habituation of Darwin’s finches to humans and other large stimuli, can strongly counteract the effects of invasive predators.


References
Alberti, M. 2015. Eco-evolutionary dynamics in an urbanizing planet. Trends in Ecology & Evolution 30:114–126.
Alberti, M., J. Marzluff, and V. M. Hunt. 2017. Urban driven phenotypic changes: empirical observations and theoretical implications for eco-evolutionary feedback. Philosophical Transactions of the Royal Society B 372:20160029.
Carrión, V., C. Sevilla, and W. Tapia. 2008. Management of introduced animals in Galapagos. Galapagos Research 65:46–48.
Clavero, M., L. Brotons, P. Pons, and D. Sol. 2009. Prominent role of invasive species in avian biodiversity loss. Biological Conservation 142:2043–2049.
Clavero, M., and E. García-Berthou. 2005. Invasive species are a leading cause of animal extinctions. Trends in Ecology & Evolution 20:110.
Darwin, C. 1860. The Voyage of the Beagle (Natural Hi.). Doubleday and Co., London, UK.
De León, L. F., D. M. T. Sharpe, K. M. Gotanda, J. A. M. Raeymaekers, J. A. Chaves, A. P. Hendry, and J. Podos. 2018. Urbanization erodes niche segregation in Darwin’s finches. Evolutionary Applications 12:1329–1343.
Fritts, T. H., and G. H. Rodda. 1998. The role of introduced species in the degradation of island ecosystems: a case history of Guam. Annual Review of Ecology and Systematics 29:113–140.
Guerrero, J. G., R. Castillo, J. Menéndez, M. Nabernegg, L. Naranjo, and M. Paredes. 2017. Memoria Estadística Galápagos.
Harper, G. A., and V. Carrión. 2011. Introduced rodents in the Galapagos: colonisation, removal and the future. Pages 63–66 in C. R. Veitch, M. N. Clout, and D. R. Towns, eds. Island Invasives: Eradication and Management. IUCN, Gland, Switzerland.
Hendry, A. P., K. M. Gotanda, and E. I. Svensson. 2017. Human influences on evolution, and the ecological and societal consequences. Philosophical Transactions of the Royal Society B 372:20160028.
Johnson, M. T. J., and J. Munshi-South. 2017. Evolution of life in urban environments. Science 358:eaam8237.
Lever, C. 1994. Naturalized Animals: The Ecology of Successfully Introduced Species. Poyser Natural History, London.
Loss, S. R., T. Will, and P. P. Marra. 2013. The impact of free-ranging domestic cats on wildlife of the United States. Nature Communications 4:2961.
Low, T. 2002. Feral Future: The Untold Story of Australia’s Exotic Invaders. University of Chicago Press, Chicago.
Nogales, M., A. Martín, B. R. Tershy, C. J. Donlan, D. Veitch, N. Puerta, B. Wood, et al. 2004. A review of feral cat eradication on islands. Conservation Biology 18:310–319.
Phillips, R. B., B. D. Cooke, K. Campbell, V. Carrion, C. Marquez, and H. L. Snell. 2005. Eradicating feral cats to protect Galapagos Land Iguanas: methods and strategies. Pacific Conservation Biology 11:257–267.
Phillips, R. B., D. A. Wiedenfeld, and H. L. Snell. 2012. Current status of alien vertebrates in the Galápagos Islands: invasion history, distribution, and potential impacts. Biological Invasions 14:461–480.
Sax, D. F., and S. D. Gaines. 2008. Species invasions and extinction: the future of native biodiversity on islands. Proceedings of the National Academy of Sciences 105:11490–11497.
Stone, P. A., H. L. Snell, and H. M. Snell. 1994. Behavioral diversity as biological diversity: introduced cats and lava lizard wariness. Conservation Biology 8:569–573.
UNESCO. 2010. Reactive Monitoring Mission Report.
Wiedenfeld, D. A., and G. Jiménez-Uzcátegui. 2008. Critical problems for bird conservation in the Galápagos Islands. Cotinga 29:22–27.