Writing transparently

When writing manuscripts, researchers should aim to be as transparent as possible: honest about what happened in the study, how it was conducted, and when and why decisions were made. Questionable research practices make it more likely that researchers will obtain a false positive result, which partly explains low replicability rates.

In the video, Priya introduced another important consideration for evaluating replication results: sample size (the number of observations in your study, e.g. participants). Smaller sample sizes make both false positive and false negative results more likely. This is because a smaller sample provides less information about the population you are studying, which increases the variability and uncertainty in your results. With a small sample, random variation (or ‘noise’) can more easily overshadow the true effect you are trying to measure. This means you might detect an effect that isn’t really there (a false positive) or miss an effect that actually exists (a false negative).

For instance, imagine trying to judge the average height of a population by looking at just a few individuals. Your estimate is more likely to be off compared to measuring a larger group, because you may happen to have either a very tall or very short person in your sample. So, if you have an original study with a small sample size and a (well-designed) replication with a large sample size, you could be more confident in the result of the replication than the result of the original study.
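The height example above can be sketched with a small simulation. This is not part of the course materials: the population values (a mean height of 170 cm with a standard deviation of 10 cm) are illustrative assumptions, and the point is simply that estimates from small samples bounce around far more from study to study than estimates from large samples.

```python
import random
import statistics

random.seed(42)  # fixed seed so the sketch is reproducible

def spread_of_estimates(sample_size, n_studies=1000):
    """Repeat a 'study' many times: draw a sample from a population with
    true mean height 170 cm (sd 10 cm) and record each sample's mean.
    Return how much those means vary from study to study."""
    estimates = [
        statistics.mean(random.gauss(170, 10) for _ in range(sample_size))
        for _ in range(n_studies)
    ]
    return statistics.stdev(estimates)

small = spread_of_estimates(5)    # a few individuals per study
large = spread_of_estimates(100)  # a large group per study
print(small, large)  # the small-sample estimates vary much more
```

With samples of 5 people, one unusually tall or short person can shift the average noticeably; with 100 people, individual extremes average out, so the study-to-study variation shrinks.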

  

Activity 2:

What not to do!

Allow about 30 minutes

So far, you have considered good and bad writing practices. With these in mind, have a go at this ‘hack your way to scientific glory’ activity. First, choose a political party: Republican [UK equivalent: Conservative] or Democrat [UK equivalent: Labour]. Then predict whether the party has a positive or negative impact on the economy. When you have done that, change aspects of the research (e.g. participant inclusion criteria and how you’re measuring your dependent variable) and see whether you can find a significant result (p < 0.05) in your predicted direction.

The reason this is an example of ‘what not to do’ is that when you first choose a political party and predict whether it will have a positive or negative impact on the economy, you are forming a hypothesis. If you then play around with the data until you get the result you wanted, stopping only once you do, you are fixing the result.

The activity involves various questionable research practices, such as p-hacking, HARKing, and selective reporting. However, there is a way to run different analyses on the same data without any of these being a problem. If, instead of deciding on a hypothesis first and then confirming it, you conducted purely exploratory research (without a hypothesis), you could be transparent about all of the different ways you looked at the data and how the results differed when you tried different things. This could even lead others to conduct their own future studies to confirm your exploratory results!
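Why does trying many analyses until one ‘works’ count as p-hacking? A small simulation can make the arithmetic concrete. This is a simplified sketch, not part of the course: it assumes each alternative analysis behaves like an independent test with a 5% false positive rate, whereas real analyses of the same data are correlated, so the inflation is smaller in practice but still substantial.

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

def chance_of_false_positive(n_analyses, alpha=0.05, n_sims=10000):
    """Simulate studies of pure-noise data. In each simulated study,
    the researcher runs n_analyses tests; each test has probability
    alpha of coming out 'significant' by chance alone. Return the
    fraction of studies with at least one significant result."""
    hits = sum(
        any(random.random() < alpha for _ in range(n_analyses))
        for _ in range(n_sims)
    )
    return hits / n_sims

print(chance_of_false_positive(1))   # one pre-planned test: about 5%
print(chance_of_false_positive(20))  # twenty ways of slicing the data: well over half
```

With one pre-registered test, the false positive rate stays near the nominal 5%; with twenty alternative analyses and the freedom to stop at the first significant one, finding ‘an effect’ in pure noise becomes more likely than not. Transparent exploratory reporting avoids this by disclosing all twenty attempts rather than presenting the one that succeeded.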

When reading an academic paper, it’s important to read with a critical mindset and feel free to disagree with the methodology or analysis strategy, the interpretation of the results, or the conclusions drawn. Although there are rare instances of outright fraud in science, we would generally expect researchers to describe truthfully what happened in the study, how it was conducted, and when and why decisions were made.
