The replication crisis
This section will highlight some of the issues around replication in quantitative research. Replication is also possible in qualitative research, and many qualitative researchers see its value, so if you are a qualitative researcher this section is still relevant to you. It will allow you to explore key issues faced by your quantitative colleagues, learn how to read quantitative research papers more critically, and think about whether these issues could also apply to qualitative research, albeit manifested differently.
If we consider relatively direct replications, using the same materials as the original authors but conducted by different researchers, what percentage of published results do you imagine would replicate? It would be tempting to think that most published research findings are true, and therefore that a replication of a published finding would be pretty likely to find the same result. However, researchers have found that the following proportions of published findings could not be replicated:
- Psychology: up to 60%
- Cancer biology: up to 55%
- Economics: up to 40%
- Philosophy: up to 30%
The proportion of studies that could not be replicated was much higher than expected in certain fields, which has led some to refer to this as a ‘replication crisis’.
Why is it that so many quantitative studies cannot be replicated? It’s complicated!
Previously, you learned about three possible explanations for a failed replication: the original result was a false positive, the replication result was a false negative, or differences between the two studies could have been responsible for the different results. However, these three interpretations are not all equally likely, and there are ways to try to work out which of them is most likely.
The original result being a false positive is more likely than you might think. Researchers often do not publish all the research that they do, and they have an incentive to publish papers in ‘high impact’ journals (journals that are regarded highly in the researcher’s discipline and that publish papers that receive a high number of citations). Historically, it has been harder to publish negative (null) results than positive (statistically significant) results, as journals have prioritised headline-grabbing results that confirm popular or contemporary positions. This has been the case for all journals, but especially high-impact ones.
This means that researchers have an incentive to get positive results in their research and can feel disappointed, stressed, and even ashamed if they don’t get a significant result. This can tempt them to turn to questionable research practices, which increase the likelihood of a false positive result.
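To see how these incentives can add up, below is a minimal sketch in Python (not from the original text; the base rate of true effects, statistical power, and significance threshold are all assumptions chosen purely for illustration). It simulates a field in which only a small fraction of tested hypotheses are true, studies have modest power, and only statistically significant results get published.

```python
# Illustrative sketch only: a toy model of how publication bias can fill the
# literature with false positives that later fail to replicate.
import random

random.seed(1)

N_STUDIES = 100_000     # hypothetical studies run across a field
P_TRUE_EFFECT = 0.10    # assumed: only 10% of tested hypotheses are really true
POWER = 0.50            # assumed: chance a real effect gives a significant result
ALPHA = 0.05            # conventional 5% false positive rate per test

published_true = 0
published_false = 0

for _ in range(N_STUDIES):
    effect_is_real = random.random() < P_TRUE_EFFECT
    # A study comes out significant with probability POWER if the effect is
    # real, and probability ALPHA (a false positive) if it is not.
    significant = random.random() < (POWER if effect_is_real else ALPHA)
    # Publication bias: only significant results make it into journals.
    if significant:
        if effect_is_real:
            published_true += 1
        else:
            published_false += 1

published = published_true + published_false
print(f"Published findings: {published}")
print(f"Share that are false positives: {published_false / published:.0%}")

# Expected replication rate if each published finding were rerun once
# under the same power and significance threshold.
expected = published_true * POWER + published_false * ALPHA
print(f"Expected to replicate: {expected / published:.0%}")
```

Under these assumed numbers, nearly half of the published findings are false positives, so a low replication rate follows even though every individual study used the conventional 5% significance threshold.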
Limits to replication
