4. Reasons to Evaluate
Evaluations are often required by funders (e.g. by nearly all government funds for education and development projects). There are, however, many more reasons to conduct evaluations. These include:
- Cost-effectiveness Determining whether a project affected its intended outcomes, and the size of those effects, makes it possible to choose between projects and ensures that resources are allocated to those that are most effective.
- Unintended consequences Human beings are complex, and social systems are even more
so. Social initiatives conducted over the past century provide substantial
evidence that even the simplest interventions and efforts to help others can
have significant and far-reaching unintended consequences. Sometimes these are
beneficial, but it is surprisingly easy to cause inadvertent harm. Evaluation provides
a way to check for harm and mitigate the risk of repeating or ‘scaling up’
an approach that causes unforeseen harm.
- Improve practice over time Another consequence of social complexity is that it is
difficult to design a project with a large impact at the first attempt. It usually takes time
and a ‘learning cycle’ to arrive at an approach that really works. Evaluation
enables lessons about what works to be incorporated into future
practice, so that project designs improve and become more effective over time
(this is the learning cycle embedded in the OAD Impact Cycle).
- Contribute to knowledge and practice Evaluation offers lessons for others about what worked and what did not. Since evaluation findings can be shared, evaluations can be used to improve not only OAD project designs and practice but also broader understanding of what works in related fields (e.g. effective techniques for science communication), and to improve the design of similar projects conducted by other organisations and actors (e.g. other science unions, AstroEdu, UNAWE etc.).
- Demonstrating impact to stakeholders For a project to be sustainable, it must
be supported by all key stakeholders. These include the people who deliver the
project (e.g. OAD project leaders and teams); those who fund it (e.g.
the OAD, the IAU, Kickstarter donations etc.); those who participate (e.g.
students, teachers); and any other affected communities or parties (e.g. school
districts). Conducting an evaluation gives stakeholders evidence that the project
team takes its objectives seriously and prioritises achieving positive outcomes.
If an evaluation shows positive results, the team can use them
to build trust with target participants and attract support for the project's
continuation.
- Increasing funding and scale of delivery Most large international funders will
not allocate significant resources to any intervention that has not been
demonstrated to bring about positive outcomes. Organisations that require
rigorous impact evaluation before scaling up include the Gates Foundation (the
largest private philanthropic foundation in the world), UN agencies, Oxfam, and most
government aid departments.
Evaluations are particularly important to conduct when projects are:
- using innovative methods that have not been tested
- operating in a domain where knowledge is lacking about what works
- working on problems whose mechanisms are poorly understood
- attempting to change behaviours, attitudes or social structures
- trying to affect outcomes that are difficult to observe
For example, suppose an OAD project proposal focuses on improving gender inclusion in the sciences. Such a project would be a strong candidate for a built-in evaluation, because we know that there are multiple interacting causes of disparities in gender representation in the sciences, and that these causes are likely to vary over time and across contexts.
We also do not know of any highly effective solutions to these causes, and probably do not know what all the causes are. Furthermore, there have been several high-profile examples of projects that sought to enhance girls’ entry into science and failed to do so, or actually had the opposite effect. An evidence-based design (incorporating what is known), combined with a carefully designed evaluation framework, would help mitigate the risk of causing unforeseen harm and ensure that the project contributes to evolving knowledge and practice in this area. At best, the project would be found to be effective and could be adopted by others; at worst, it would contribute to our understanding of what does (and does not) work and thus help future projects design more effective approaches.