Restoring trust in science: What are questionable research practices?
Credibility is everything in scientific research. Both scientists and the community at large hold the scientific research community to high standards of rigour, transparency and reporting. Medical decisions are based on a shared understanding of publicly reported clinical trials and other health and medical research, and we have a moral obligation to ensure that published research findings are accurate.
However, over recent years, major reviews and evaluations of the scientific literature have increasingly highlighted the inherent biases and questionable research practices of which many researchers are unaware. Some researchers are unaware due to lack of appropriate training. Others may be aware of the problems but choose to continue because there is pressure (actual or perceived) to publish their research or risk losing their funding and/or jobs – widely described by the maxim ‘publish or perish’.
Over the past decade something quite remarkable has been occurring in the scientific world. Scientists across several disciplines (e.g. medicine, psychology, biology, economics, political science) have begun to redefine the way we conduct and report research – many go as far as to call it an academic revolution.
In this article, we describe some of the practices that are considered questionable in scientific research, focusing on the non-publication of negative results.
A blog post on 21 November 2016 by Professor Brian Wansink, founding director of the Food & Brand Lab at Cornell University in the United States, described the work of Ozge Sigirci, a PhD candidate. Sigirci had come to do some voluntary work in his laboratory. Prof Wansink wrote:
I gave her a data set of a self-funded, failed study which had null results […] I said, ‘This cost us a lot of time and our own money to collect. There’s got to be something here we can salvage because it’s a cool (rich & unique) data set.’ I had three ideas for potential Plan B, C, & D directions (since Plan A had failed). I told her what the analyses should be and what the tables should look like. I then asked her if she wanted to do them.
Every day she came back with puzzling new results, and every day we would scratch our heads, ask ‘Why,’ and come up with another way to reanalyze the data with yet another set of plausible hypotheses. Eventually we started discovering solutions that held up regardless of how we pressure-tested them. I outlined the first paper, and she wrote it up…
Many researchers will read this quote and nod in recognition of the creativity that is sometimes needed in scientific work. Indeed, many researchers around the world have been explicitly trained to conduct scientific experiments in just the same way.
However, the practices described by Prof Wansink are in fact considered scientific misconduct, because the published articles did not report transparently on the process of producing the results. That is, a reader could get the impression that Wansink and colleagues had predicted one particular outcome, tested it in one or a few defined statistical analyses, and then recorded the results. In fact, the blog post showed that the true process had involved a number of predictions and statistical tests that were never mentioned in the final report. This is highly questionable because it leads readers astray: the results are falsely described as having a stronger evidential value than they actually had. That is not good science; it is very bad science.
According to Mobley and colleagues (2013), about 30 per cent of surveyed doctoral students and post-doctoral researchers in cancer research have experienced pressure from senior researchers to publish positive results they know or believe are incorrect. A meta-analysis from 2009 showed that roughly 30 per cent of researchers admitted to employing questionable research practices such as those described by Prof Wansink. A 2012 survey of 2200 research psychologists suggested that such practices are the prevailing norm in the discipline of psychology. Similar results were reported in 2017 from a study involving 277 Italian psychologists, and from a recent international survey of 807 researchers in ecology and evolution.
Consequences of scientific misconduct
By April 2018, the blog post by Prof Brian Wansink had resulted in seven of his published papers being retracted, 15 being corrected, and 52 being found to contain minor to severe errors. Prof Wansink remains in his position at Cornell University.
Not so Dr Jens Förster, who was recently exposed by the University of Amsterdam as having employed unspecified questionable research practices in his own work and in collaboration with PhD students he supervised. Dr Förster chose to leave academia in 2017, as did Dr Amy Cuddy of Harvard and Berkeley universities after receiving heavy public criticism for her alleged use of questionable research practices – practices later disclosed in a public statement by her colleague at Berkeley, Prof Dana Carney.
Perhaps the most severe consequences to date of engaging in scientific misconduct seem to have been suffered by Dr Jodi Whitaker, whose PhD qualification was revoked in 2017 by the Ohio State University due to altered data points found in the raw data on her supervisor’s computer.
It appears that leaders in academia have not yet discovered the best way to deal with disclosed questionable practices. Given that a majority of scientific researchers admit to engaging in questionable research practices, it could reasonably be concluded that a high proportion of scholarly leaders in the academy have been just as inadequately trained as their students and staff. Clearly, the solution needs to involve transparency and full disclosure in the publishing of research.
Criticism of the use of questionable research practices is far from a new topic in science; it was raised as early as 1830 by Charles Babbage in his book Reflections on the Decline of Science in England, and on Some of Its Causes. Thus, there is nothing revolutionary about these concerns. The true revolution is in the raised standards requiring transparent practices such as preregistration, open publication of data and materials, and complete and transparent reporting of methods and results. These practices are currently being embraced by a growing number of journals, funding agencies, reviewers, laboratories and institutions. In ten years, it will likely be very difficult to publish research in the life sciences and social sciences that does not fulfil these transparency standards.
Why publication of negative results is important
One reason preregistration is regarded as among the most important transparent practices is that it increases the likelihood that negative results will be published. Preregistration of clinical trials has been required by the International Committee of Medical Journal Editors since 2005, but the practice has only recently been adopted for other types of studies, including qualitative research.
Another important initiative, with the potential to eliminate the non-publication of negative results entirely, is Registered Reports (RR), in which the main peer review takes place before data collection begins. Once the study plan is approved by reviewers and the editor, the journal commits to publishing the study regardless of what is found. As of April 2018, 105 journals had adopted RR as a new article format, and the number of journals on board is increasing rapidly.
In addition to these raised standards and requirements, we argue that there is a moral obligation on the part of the researcher to publish all the results of their studies, regardless of the results, because:
- For the most part, research is supported from public funds.
- Patients and other participants who give up their time to participate do so in good faith that the research will be properly conducted, completed and reported.
- Patients and other participants may have accepted the risks of negative side effects to support the research.
- The research community and the public deserve to get the full picture of research outcomes – positive and negative.
- Knowledge accumulation is severely hampered when results are not published or are not reported transparently.
In a 2012 interview, Dr Haiko Sprott, head of the Pain Clinic in Basel, Switzerland, described the value of publishing negative results, particularly in studies of rare diseases:
If we only consider positive results, it will result in a biased impression of the effectiveness of a specific treatment. Therefore because these negative data haven’t been published, nobody knows that this specific treatment failed in many more cases. So the patient will be given the wrong treatment, which may not help or even cause serious problems.
‘No result is worthless’, stated Gabriella Anderson in a blog for BMC’s Journal of Negative Results in Biomedicine. Indeed, a null result can lead to new and unexpected discoveries. In her blog, Dr Anderson cited the example of Albert Michelson and Edward Morley, who in the late 19th century conducted many experiments yielding null results that later played an important role in Albert Einstein’s special theory of relativity, proposed in 1905.
Raising our standards in scientific research
Ensuring that scientific research provides robust and trustworthy results is a shared responsibility:
- It requires researchers to follow agreed scientific methods in planning, conducting and reporting their research.
- Editorial boards and journal reviewers need to be open in the ways that they assess submissions reporting negative results.
- Funding bodies also need to be courageous about supporting sound scientific proposals, even if the results turn out to be ‘negative, unexpected or even controversial’.
If you’re a researcher or a clinician interested in getting involved in health or medical research, some of the concepts discussed in the literature can seem overwhelming, especially since many of us do not have enough time as it is – and this might be particularly true for researchers sharing their time between clinical and research responsibilities. However, as scientists we can (and should) do so much better.
Future generations will look on the term ‘open science’ as a tautology – a throwback from an era before science woke up. ‘Open science’ will simply become known as science, and the closed, secretive practices that define our current culture will seem as primitive to them as alchemy is to us.
Dr Rebecca Willén is a researcher who was taught according to some of the questionable practices discussed above, and was forced to educate herself on how to conduct high-quality scientific research. In February 2018 she published several retroactive disclosure statements to supplement her earlier published articles – an initiative that was covered by one of the best-known transparent science podcasts and has gained positive responses from the research community.
In 2016, she founded the Institute for Globally Distributed Open Research and Education (IGDORE) with the explicit mission of educating and training scientists and students in best scientific practice. IGDORE offers support for researchers and students who want assistance and guidance in learning about scientific openness and transparent practices. IGDORE provides a co-working space in Ubud, Bali, where scientists and students are welcome to spend dedicated time focusing on scientific transparency and replicability, among other aspects of quality scientific endeavour. In future, IGDORE will host Open Science Retreats offering lectures and workshops.
Western Alliance provides training and resources on research methods to promote quality standards in health and medical research, as well as networking and collaboration opportunities between health service professionals and academic researchers in western Victoria.