
Reproducibility in science — where the MRC comes in

The MRC and a group of partner organisations have today published a report and joint statement about the reproducibility and reliability of research, and what can be done to improve them. Here, Jim Smith, MRC Deputy Chief Executive and Director of Strategy, thinks about how discussions of reproducibility offer us the opportunity to improve the way science is done.


From basic discovery science to clinical studies, medical research works. When a new drug saves or extends lives, a new screen permits early detection of disease, or we find a new use for an old treatment, we can be confident in the long research journey that got us to that point.

But things aren’t perfect. For some years there have been rumblings in the scientific community and beyond that all is not well. In 2005, John Ioannidis published a paper in PLOS Medicine [1] provocatively titled ‘Why most published research findings are false’. In it he argued that most study designs will lead to conclusions that are more likely to be false than true.
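To make the arithmetic behind that claim concrete, here is a minimal sketch of the positive-predictive-value reasoning involved. The prior, power and significance values below are illustrative assumptions, not figures from the paper:

```python
# Post-study probability that a 'significant' finding is true, following
# the positive-predictive-value logic of Ioannidis (2005).
# All numbers are illustrative assumptions, not values from the paper.

prior = 0.10   # assumed fraction of tested hypotheses that are actually true
power = 0.20   # assumed statistical power (often low in small studies)
alpha = 0.05   # conventional significance threshold

true_positives = power * prior          # true effects that reach significance
false_positives = alpha * (1 - prior)   # null effects that reach significance

ppv = true_positives / (true_positives + false_positives)
print(f"P(hypothesis is true | significant result) = {ppv:.2f}")  # ~0.31
```

Under these assumptions, fewer than a third of ‘significant’ findings are true; only higher power and more plausible prior hypotheses push that figure above a half.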

And sure enough, a report [2] by the Open Science Collaboration, published in Science this year, described replications of 100 studies from psychology journals: 97 of the original studies reported significant results, but this was true of only 36 of the replications.

Some might argue that science is self-correcting, and that the truth will eventually out. But wouldn’t we prefer the truth to come out a little sooner? Unreliable results hinder and delay scientific progress. They also waste valuable resources, an issue not to be taken lightly when research budgets are being squeezed.

I would not agree with those who have called the issue of reproducibility a crisis, but I concur with researcher and open science advocate Prof Marcus Munafò when he says that the current interest provides an opportunity to look at the way we work and to improve it.

A global problem with global solutions

To this end, the MRC got together in April this year with the Academy of Medical Sciences, the Biotechnology and Biological Sciences Research Council (BBSRC) and the Wellcome Trust to organise a symposium on reproducibility in basic biomedical research and how to improve it. The report of that meeting, and a joint statement, are published today.

We conclude that there is no single cause of irreproducibility. There does not seem to be, for example, an epidemic of fraudulent behaviour. Rather, problems seem to arise from cumulative effects at different stages of the scientific process, from experimental design to the vicissitudes of publication.

For this reason there is no simple solution to make the problem go away, and nor can any one group of people solve it alone. But we can all — research funders, publishers, research institutions, professional bodies and individual researchers — play a role.

One thing the meeting noted is that the research system is set up to reward new, innovative findings. These are easier to publish in the kinds of journals people pay attention to; they add gloss to CVs; they excite people recruiting for new positions; and they make funding panels sit up and take notice. They make headlines and they capture the imaginations of the public and policymakers. For these reasons the incentive to find something new is greater than the incentive to be right, and this needs to change. We need a cultural shift in medical research that values and rewards robust methodology and valid findings.

This is not to say that we should remove competition altogether: there are positive aspects of competition, such as the need to excel, which are good for science. It is the perverse incentives that need to go.

There are general areas in which progress can be made. The importance of laboratory standards and quality control cannot be overstated. For example, people need to be using the cell lines and antibodies that they think they are. Cell line authentication by short tandem repeat profiling is a simple task, and will prevent the use of misidentified or contaminated cell lines in experiments [3].

We should also address bias in experimental design, data analysis and data presentation. Taking a more ‘open science’ approach, in which protocols are registered before the work begins and journals commit to publishing the results regardless of outcome, could be one way of doing this. This would help to remove practices such as p-hacking (choosing when to stop recording data, selecting which variables to use, or publishing whichever statistically significant result you find) or HARKing (hypothesising after results are known).
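To see why this matters, here is a minimal simulation of optional stopping; the sample sizes, peeking schedule and trial count are assumptions chosen purely for illustration. The data contain no real effect, yet testing repeatedly as observations accumulate and stopping at the first p < 0.05 yields far more than the nominal 5% of ‘significant’ results:

```python
# Simulation: optional stopping ('peeking') inflates false positives.
# Data are drawn from a null distribution (no real effect), yet stopping
# at the first p < 0.05 produces well over 5% 'significant' outcomes.
# Sample sizes, peeking schedule and trial count are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments = 2000
max_n = 100          # maximum sample size per experiment
check_every = 10     # peek at the data after every 10 observations

false_positives = 0
for _ in range(n_experiments):
    data = rng.normal(0, 1, max_n)  # null is true: the mean really is 0
    for n in range(check_every, max_n + 1, check_every):
        _, p = stats.ttest_1samp(data[:n], 0.0)
        if p < 0.05:                # stop as soon as the test 'works'
            false_positives += 1
            break

print(f"False positive rate with peeking: {false_positives / n_experiments:.2%}")
# Typically around 15-20%, versus the nominal 5% for one pre-specified test
```

Fixing the sample size in a registered protocol removes exactly this degree of freedom.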

Post-publication peer review provides a near-instant way for researchers in a field to register their thoughts alongside the data in the scientific literature, speeding up self-correction.

We do not want such efforts to lead to huge increases in bureaucracy, and nor should they stifle creativity or inhibit the pursuit of new ideas — we certainly don’t want biomedical research to become a giant results-verifying machine. But we do want an environment in which conclusions can be trusted and taken forward, and in which negative results of well-designed and conducted experiments are valued in and of themselves.

The role of the MRC

While everyone must accept a share of responsibility, both for the problems of reproducibility and for their solutions, the MRC, as a major funder of medical research, has an important role to play.

We have already made some changes. For example, applications for funding that do not provide enough detail to judge the rigour of animal experiments are sent straight back, and our boards and panels are all instructed not to look at where a paper was published but at its content (a key component of the San Francisco Declaration on Research Assessment).

In the future we might consider increased analysis and dissemination of the outcomes of grants when they finish, in order to record negative as well as positive results. This is an additional burden on researchers, but it could provide a useful way to promulgate results that otherwise would not be published and to give credit where it’s due. Of course, this would not be allowed to jeopardise the publication of important positive results.

We might also establish additional training, to ensure that cadres of young researchers have a good understanding of the scientific method and statistical analysis, together with guidance in the skills needed to ensure scientific integrity. We are already exploring this idea with partners including the Wellcome Trust.

Finally, we could also consider supporting research into the scientific method itself. And while I wouldn’t want to see the MRC supporting research solely intended to reproduce results, we may consider incorporating elements of this into funded work.

Like our partner organisations, we will be developing and implementing changes to our own practices, as well as working alongside others to tackle this question. We’ll update you on our progress within the next year, and in the meantime I welcome the comments of colleagues. Email me at jim.smith@headoffice.mrc.ac.uk.

Jim Smith

References

[1] Why most published research findings are false. PLOS Medicine (2005). doi:10.1371/journal.pmed.0020124

[2] Estimating the reproducibility of psychological science. Science (2015). doi:10.1126/science.aac4716

[3] Reproducibility: changing the policies and culture of cell line authentication. Nature Methods (2015). doi:10.1038/nmeth.3403
