Unless otherwise stated, copyright © 2025 The Open University, all rights reserved.

Integrity: Supporting robust interpretations

Introduction

  

The image shows a person on a boat in an empty sea, observing their surroundings with a telescope.

  

Researchers are human, and humans have their own experiences, perspectives, values, and biases. For this reason, we must be alert to the possibility that any individual researcher’s observations could be influenced by their position.

In Week 3, you looked at the principle of integrity in open research; this week, in Week 5, you will complete that consideration of integrity. You will learn how to protect yourself from unintended bias, spot the signs of flimsy analysis, and scrutinise the robustness of your own research. By the end of this week, you will have a toolkit of techniques you can use to support the integrity of your research.

  

Conflicts of interest

It is standard in all types of research to disclose any specific ‘conflicts of interest’ – factors that could make the researcher biased towards particular results. Conflicts of interest are especially common in research projects where money is involved: would the researcher (or the company funding the researcher) benefit from the results of the research coming out in a particular direction?

Here are some examples of potential conflicts of interest:

  • A pharmaceutical company funding a clinical trial for its own drug may create a conflict of interest if the company stands to benefit financially from positive results.
  • A researcher receiving funding from an oil company to study the environmental impacts of drilling may face a conflict of interest if they are under pressure to downplay negative findings.
  • Researchers studying the effectiveness of educational interventions funded by the companies producing those interventions may have conflicts of interest if their findings recommend using the products of those companies.

  

While it is common to disclose conflicts of interest, in many fields it is far less common to consider the less clear-cut ways in which our experiences, perspectives, values, and biases might affect our research.

Positionality

Positionality refers to an individual's social and political position within society, including their identity, background, experiences, and beliefs. These factors can influence the way researchers perceive and interpret data, potentially impacting the research process and outcomes. Positionality is not always negative – a researcher can also be uniquely positioned to study something because they have a deeper lived understanding of it.

Here are some positive and negative examples of ways that positionality could influence research:

  • Gender biases among medical professionals may affect the way certain health conditions are studied or treated.
  • Economic researchers' ideological beliefs and political affiliations can shape their interpretations of data and policy recommendations.
  • Indigenous researchers may offer traditional ecological knowledge that complements non-indigenous approaches, leading to innovative conservation strategies.
  • An activist-researcher may draw on their lived experience of life under a dictatorship when conducting a study on political systems.

Positionality statements

The image shows a person’s hand in the foreground, appearing very large, and a man standing on a rock in the background, looking very small.

Positionality statements allow readers to assess the positionality of a researcher, and how it might affect their research. They are common in qualitative research, and are starting to be considered in quantitative research, too.

Researchers can consider including a ‘positionality statement’ in papers, to contextualise themselves and their research environment, and define the boundaries of their research output. This can provide additional context around how the study was conducted, including the researcher’s experiences, perspectives, and potential biases.

Positionality statements and conflict of interest statements both serve to disclose personal biases or influences that might affect an individual's work or perspective. However, they differ in scope and intent. Positionality statements focus more heavily on the author's social and cultural identity factors, such as race, gender, and socioeconomic status, to provide context on how these factors might influence their viewpoint. Conflict of interest statements, on the other hand, disclose financial or personal relationships that could compromise the integrity of one's work or create a perception of bias. While both aim for transparency, positionality statements address broader socio-cultural influences, whereas conflict of interest statements specifically target potential financial or relational biases.

Qualitative researchers often talk about reflexivity – a researcher's ability to reflect critically on their own position, and how it influences the research process. This is a key skill in the social sciences and other disciplines that use qualitative research methods for studying social interaction, interpersonal relations, or cultural practices. A dash of reflexivity is particularly helpful for writing a positionality statement: see Lacy (2017), listed in the References, for more on reflexivity and positionality statements.

  

Activity 1:

Allow about 30 minutes

Write a positionality statement for yourself. You can do this for a research project you are involved in, one you have been involved in in the past, or for a fictional project in a field you are interested in.

As you do so, think about whether your positionality might influence the way you perceive and interpret the data in the project you have chosen, and whether it might impact the research process and outcomes in both negative and positive ways. For instance, a negative experience with the police after witnessing a crime might influence your interpretation of police data relating to lineup processes in a way that does not reflect reality. In contrast, your experience of having a child with a rare disease might help you to ask the right questions when interviewing other parents with similar experiences.


Flimsy interpretations

In Week 3, you learned how researchers can be biased towards statistically significant results, and towards results that fit the story they are trying to tell in their paper. One way to spot this is when the conclusions don’t follow from the results. Tenuous links between results and conclusions are not always obvious, but they become easier to spot once you are familiar with papers in your particular research area. Here are some common pitfalls to avoid in your own research:

P-value interpretation

In quantitative papers where a specific p-value threshold is being used to determine whether a result is statistically significant or not, researchers should specify at the beginning of their analysis section what their threshold for interpreting significance will be (e.g. p < 0.05). It’s important that p-values are interpreted consistently throughout the analysis: you shouldn’t find p < 0.05 being used to show a significant difference in one case but not in another. Any p-value that is larger than your identified threshold should not be presented as evidence of an effect or association, however much you may wish it to be so!
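
To make this concrete, here is a minimal sketch in Python (not part of the course materials) of what consistent p-value interpretation looks like in practice: the threshold is declared once, before any results are seen, and every test is judged against that same value. The data, group names, and the scipy-based t-tests are all invented for illustration.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    ALPHA = 0.05  # the significance threshold, declared once, up front

    # Two illustrative comparisons on simulated data
    group_a, group_b = rng.normal(0.0, 1, 50), rng.normal(0.4, 1, 50)
    group_c, group_d = rng.normal(0.0, 1, 50), rng.normal(0.1, 1, 50)

    for label, (x, y) in {"A vs B": (group_a, group_b),
                          "C vs D": (group_c, group_d)}.items():
        t_stat, p = stats.ttest_ind(x, y)
        verdict = "significant" if p < ALPHA else "not significant"
        print(f"{label}: p = {p:.3f} -> {verdict} at alpha = {ALPHA}")

Whether the chosen threshold itself is sensible is a separate question; the point is simply that it does not change between analyses.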

Support for theory

Sometimes, researchers can be so invested in a particular theory that they are unable to see other ways their results could be interpreted. You should always try to think about alternative explanations for your results, and include these in the manuscript discussion. When reading other researchers’ papers, think about other possible interpretations of their results, and the evidence for and against those different interpretations. It can be difficult to see beyond your own theoretical position, so it is helpful to get other researchers with different experiences or expertise to read your work before submitting it. You can offer to do the same for them when they are writing a manuscript.

Burying results

Sometimes researchers present several results in an article, but ‘cherry-pick’ which of these to highlight in the discussion, overemphasising results that fit the story they’re trying to tell in their paper, and underemphasising those that seem to be contradictory. It’s important that any contradictory results are included in the discussion section, with speculation about why they may have occurred.

  

In Week 3, we pointed out that these biases are largely due to problematic incentive structures in academia. Researchers are incentivised to publish exciting, significant results in their papers, as these are more easily accepted by highly-regarded journals. Knowing this, it isn’t surprising that researchers are often biased to tell a simple, effective story in their papers, even though research is messy!

Slowly, the norms do seem to be shifting, so it is becoming more common to be fully transparent in your manuscript writing, by including potentially confusing results and being honest about uncertainties.

Avoiding the pitfalls

Academic journals sometimes impose strict word limits, and it can be difficult to fit a lot of nuance and several additional analyses into the body of the paper. Where this is the case, it can be helpful to include all this information as supplementary information accompanying the paper (e.g. as a document uploaded to the Open Science Framework) rather than in the paper itself.

Preregistration, which you learned about in Week 4, can also help you to avoid the pitfalls described above. If you plan to use p-values to decide whether your statistical tests are significant or not, you will need to outline a significance threshold in your preregistration. You will also need to outline how you will interpret different results, including whether they would support a specific theory.

Preregistration can also protect you from burying results that don’t support your conclusions: if you preregister that you will run certain analyses, you will need to report the results of these, regardless of what they were.

Importantly, without anyone checking your preregistration, you could still make decisions in the preregistration that make your results less credible. For example, you could pick a very high significance threshold (e.g. p < 0.1), which would make false positive results more likely. You could say that a result supports a theory that doesn’t make sense, or you could ‘bury’ additional analyses that negate the results you’d prefer to draw attention to, if they weren’t included in your preregistration. Preregistration doesn’t automatically give your work more integrity, but it can help you to think through your research decisions more clearly before you start, and stop you from tricking yourself later.
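
As a rough illustration of the point about thresholds, here is a small simulation sketch in Python (not from the course; the sample sizes and number of simulations are arbitrary). It draws two groups from the same population, so there is no true effect, and counts how often a t-test comes out ‘significant’ under two preregistered thresholds: roughly 5% of the time at p < 0.05, and roughly 10% at p < 0.10.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_sims, n = 10_000, 30
    false_positives = {0.05: 0, 0.10: 0}

    for _ in range(n_sims):
        # Both groups come from the same distribution, so any 'effect' is noise
        x, y = rng.normal(size=n), rng.normal(size=n)
        p = stats.ttest_ind(x, y).pvalue
        for alpha in false_positives:
            false_positives[alpha] += p < alpha

    for alpha, count in false_positives.items():
        print(f"alpha = {alpha}: false positive rate = {count / n_sims:.3f}")

Preregistering the looser threshold would be transparent, but it would not make the resulting claims any more credible.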

  

Activity 2:

Allow about 30 minutes

Think of a disagreement you’ve had with someone. Write down three versions of the disagreement: one where you’re completely right, one where the other person is completely right, and one where you explain the complicated truth!



Discussion

Reflect on the activity – was it more difficult to write someone else’s perspective rather than your own? How might this manifest in research? Might it be easier to write about how your results support your preferred theory than to think about alternatives? Could stepping into the shoes of another researcher help you to try to work out alternative explanations for your results?

Robustness

Robustness refers to the strength and reliability of results. Results can be considered more robust (and therefore have more integrity) if they hold up under various conditions, e.g. different data analyses. When results are robust to different data analyses, this indicates that the conclusions drawn from the research are not overly dependent on specific features of one type of analysis, and are therefore likely more widely applicable.

The two images show a ruined wooden bridge on the left and a cast iron bridge on the right. The bridge on the left is likely to be less robust to extreme weather events than the bridge on the right.

In some fields, it’s common to run many robustness analyses. For example, in economics, it is typical for a paper to include dozens of pages showing that a particular result holds up no matter how you measure the variables, which participants you include, or which statistical model you use, and even when you control for a variety of factors that could offer an alternative explanation for the effect.

However, in other fields it’s less common. For example, in psychology, papers are often published where only one key analysis is performed to examine the results. If data and materials aren’t shared openly, others outside the original research team cannot even run these analyses themselves to check the robustness of the results. This is a good example of how integrity is difficult to check without transparency.
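
The sketch below (Python, not part of the course materials) shows the general shape of a simple robustness check: the same relationship is estimated under a handful of different analysis choices, and the key coefficient is compared across them. The data, variable names, and the statsmodels-based specifications are all invented for illustration.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 200
    covariate = rng.normal(size=n)
    treatment = rng.normal(size=n)
    outcome = 0.5 * treatment + 0.3 * covariate + rng.normal(size=n)

    def treatment_coefficient(include_covariate, trim_outliers):
        # One 'specification': optionally trim extreme outcomes and/or add a covariate
        keep = np.abs(outcome) < 2.5 if trim_outliers else np.full(n, True)
        columns = [treatment[keep]] + ([covariate[keep]] if include_covariate else [])
        X = sm.add_constant(np.column_stack(columns))
        return sm.OLS(outcome[keep], X).fit().params[1]

    for include_cov in (False, True):
        for trim in (False, True):
            coef = treatment_coefficient(include_cov, trim)
            print(f"covariate={include_cov}, trimmed={trim}: treatment coefficient = {coef:.3f}")

If the coefficient stays in the same ballpark across all four specifications, the result is robust to those particular choices; if it flips sign or vanishes, that is worth reporting rather than hiding.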

Multiverse analysis

One way to take robustness to the extreme is to perform ‘multiverse analysis’. Although this sounds like something out of Doctor Who – a science-fiction TV show featuring intrepid explorers of time and space – it’s actually a lot less out-of-this-world (though still very cool). Multiverse analysis is where researchers try to perform all possible reasonable analyses on the data, in order to explore which analyses show the effect they’re interested in and which don’t.
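
As a very small-scale sketch of the idea (Python, not from the course, with entirely hypothetical data and analysis choices), the code below loops over every combination of three binary decisions (a participant exclusion rule, an outlier-trimming rule, and the choice of statistical test) and records the p-value from each of the resulting eight ‘universes’.

    from itertools import product

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    n = 120
    age = rng.integers(18, 70, n)
    group = rng.integers(0, 2, n)                      # 0 = control, 1 = intervention
    outcome = 50 + 2.5 * group + rng.normal(0, 10, n)

    results = []
    # Three binary analysis choices -> 2 x 2 x 2 = 8 universes
    for exclude_older, trim_outliers, nonparametric in product((False, True), repeat=3):
        keep = (age <= 60) if exclude_older else np.full(n, True)
        if trim_outliers:
            z = np.abs(outcome - outcome.mean()) / outcome.std()
            keep = keep & (z < 2.5)
        a, b = outcome[keep & (group == 0)], outcome[keep & (group == 1)]
        test = stats.mannwhitneyu if nonparametric else stats.ttest_ind
        results.append(((exclude_older, trim_outliers, nonparametric), test(a, b).pvalue))

    for spec, p in results:
        print(f"universe {spec}: p = {p:.3f}")
    print(f"{sum(p < 0.05 for _, p in results)} of {len(results)} universes give p < 0.05")

A real multiverse analysis works through far more choices than this, but even a small grid makes it obvious when a finding depends on one particular analytical decision.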

The image shows a distant galaxy, surrounded by stars.

Activity 3:

Allow 15 minutes

Using the ‘Hack your way to scientific glory’ activity from Week 3, write down as many different combinations of analysis parameters as you can, together with the results you get from them (perform and record as many as you can in fifteen minutes). In effect, you will be doing a mini-multiverse analysis. What does it mean that the results are so variable, depending on the analysis performed?



Discussion

There are actually NINE HUNDRED different analysis combinations you could use in this activity! Conducting a full multiverse analysis on data like these would take a very long time, and go beyond the scope of the usual research project. But perhaps thinking about some of the most important robustness checks you could do with your research could be a first step towards enhanced robustness, without exploring the entire multiverse.

The open research decision tree

Allow 15 minutes

Here’s another chance to explore the open research decision tree (see the References for the link). Go into the decision tree and click ‘Actions’. The ‘Actions’ pathway suggests concrete steps you can take at three different stages: the planning stage, when you are actively collecting data, and after you have finished your analysis. Explore each of these and try to decide which actions support the integrity of a piece of research, and why you think they do so.


Note down your ideas about which actions in the decision tree relate to integrity.



Discussion

You might have chosen actions like preregistration, positionality statements, or robustness analysis as actions that are related to integrity. It is sometimes tricky to state that an action supports integrity alone, because transparency and integrity often relate to each other: without transparency, we cannot assess whether or not research has integrity.

Quiz

The image shows an abstract pattern reminiscent of a brain or a maze.

By now, you should be alert to issues relating to integrity in open research. Throughout the course, we offer you self-test quizzes to help you test your understanding of course concepts. These quizzes are there to help you consolidate your knowledge. It is important to check your answers and read the feedback we have written, to develop your understanding.

Answer the following questions:

  Question 1

  Question 2

  Question 3

  Question 4

  Question 5

Summary

This week you explored more aspects of the principle of integrity: conflicts of interest, positionality, the link between results and conclusions, and robustness.

Positionality is an individual's social and political position, shaped by factors like identity, background, experiences, values, and beliefs. It can influence the link between results and conclusions: when researchers interpret their data, there is a chance their positionality could colour their interpretation. Potential researcher biases can also affect how researchers write up the conclusions that follow from their results. It’s important to try to avoid, or at least be explicit about, potential biases in your own writing. Finally, robustness is whether or not results hold up across multiple analyses – where they do, we can have more confidence in the results. This helps researchers defend themselves against potential bias.

In Week 6 you’ll be moving on from the principle of integrity to the principle of accessibility.

References

Fivethirtyeight.com: Hack your way to scientific glory
Available at: https://projects.fivethirtyeight.com/p-hacking/

Lacy, M. (2017): Just tell me what I need to know: reflexivity and positionality statements
Available at: https://medium.com/@Marvette/just-tell-me-what-i-need-to-know-reflexivity-and-positionality-statements-fb52ec0f4e17

The Open University (2024): The open research decision tree
Available at: https://www.open.edu/openlearncreate/course/view.php?id=11974

 


Glossary

Multiverse analysis
Systematically sampling a vast set of specifications, known as a multiverse, to estimate the uncertainty surrounding the validity of a scientific claim.
Positionality
Refers to an individual's social and political position within society, including their identity, background, experiences, and beliefs.
P-value
A p-value is the probability of obtaining a result at least as extreme as the one observed, assuming there is no true effect (the null hypothesis). The lower the p-value, the stronger the evidence against the null hypothesis.
Preregistration
The practice of publishing the plan for a study, including research questions/hypotheses, research design, and/or data analysis plans, before the data has been collected or examined.
Reflexivity
An idea borrowed from qualitative research, reflexivity involves critical reflection on the researcher's position and how it influences the research process.
Robustness
Refers to the strength and reliability of results.