Flimsy interpretations

In Week 3, you learned how researchers can be biased towards statistically significant results, and towards results that fit the story they are trying to tell in their paper. One way to spot this bias is when results and conclusions don’t follow from each other. Tenuous links between results and conclusions are not always obvious, but they become easier to spot once you are familiar with papers in your particular research area. Here are some common pitfalls to avoid in your own research:

P-value interpretation

In quantitative papers where a specific p-value threshold is used to determine whether a result is statistically significant, researchers should state at the beginning of the analysis section what that threshold will be (e.g. p < 0.05). It’s important that p-values are then interpreted consistently throughout the analysis: you shouldn’t find the same threshold used to declare a result significant in one case but not in another. Any p-value larger than your stated threshold should not be presented as evidence of an effect or association, however much you may wish it to be so!
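The idea of fixing one threshold up front and applying it to every test can be made concrete in code. This is a minimal sketch, not part of the original text: the variable names and p-values below are invented for illustration, and a real analysis would compute p-values from data (and consider corrections for multiple comparisons).

```python
# A single significance threshold, declared once before any results are inspected.
ALPHA = 0.05

# Hypothetical p-values from three tests in the same analysis (invented numbers).
results = {
    "group_difference": 0.012,
    "interaction_effect": 0.049,
    "follow_up_test": 0.061,
}

# Every test is judged by the same rule: p < ALPHA.
# No test gets a looser or stricter standard after the results are seen.
for name, p in results.items():
    verdict = "significant" if p < ALPHA else "not significant"
    print(f"{name}: p = {p:.3f} -> {verdict}")
```

Because the rule `p < ALPHA` is applied uniformly, a p-value of 0.049 counts as significant and 0.061 does not; the point is that neither verdict can be quietly reversed to suit the paper’s story.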

Support for theory

Sometimes, researchers are so invested in a particular theory that they cannot see other ways their results could be interpreted. You should always try to think about alternative explanations for your results, and include these in the manuscript discussion. When reading other researchers’ papers, think about other possible interpretations of their results, and the evidence for and against those interpretations. It can be difficult to see beyond your own theoretical position, so it is helpful to ask researchers with different experiences or expertise to read your work before submitting it. You can offer to do the same for them when they are writing a manuscript.

Burying results

Sometimes researchers present several results in an article but ‘cherry-pick’ which of these to highlight in the discussion, overemphasising results that fit the story they’re trying to tell and underemphasising those that seem to contradict it. It’s important that any contradictory results are included in the discussion section, along with possible explanations for why they may have occurred.


In Week 3, we pointed out that these biases are largely due to problematic incentive structures in academia. Researchers are incentivised to publish exciting, significant results in their papers, as these are more easily accepted by highly-regarded journals. Knowing this, it isn’t surprising that researchers are often biased to tell a simple, effective story in their papers, even though research is messy!

Slowly, though, these norms do seem to be shifting, and it is becoming more common to be fully transparent in your manuscript writing: including potentially confusing results and being honest about uncertainties.
