
Theories in technology evaluation

3 Politics, assessment and evaluation

I would argue that a good number of the issues that arise when we discuss stakeholders and participation in evaluation illustrate – to a greater or lesser extent – the relationship between politics and evaluation, and the various forms this relationship may take. Context is once again a key factor here. Consequently, it’s likely that some of you may seldom or never (at least knowingly) come into contact with the political dimensions of evaluation that I discuss below, while others will experience all of them.

For reasons which I hope are obvious, I’ve also included a very brief discussion of some of the functions of evaluation that are often ‘hidden’ – that is, examples of evaluation being used for purposes other than those that would generally be regarded as legitimate or ethical. Again, context is important. And again, therefore, some of you will have experience of the hidden uses of evaluation, while others may well never come across or be involved in them.

Having worked your way through this section you will be able to judge for yourself whether an explicit awareness of these features of evaluation is of value to you.

Bamberger et al. illustrate the role that politics can play in evaluation when they state that:

We use the term political influences and constraints in a broad sense to refer not only to pressures from government agencies and politicians but also include the requirements of funding or regulatory agencies, pressure from stakeholders, and differences of opinion within an evaluation team regarding evaluation approaches and methods.

(Bamberger et al., 2006, p. 26, original emphasis)

They then identify the various types of political influences as:

  • individual
  • professional
  • stakeholder
  • societal.

Table 1 provides a wide range of examples of how politics can influence the design, application and outcome of project and programme-type evaluation. I also provide a brief summary of Bamberger et al.’s discussion of each type of political influence below, adding, where appropriate, a technology bias to highlight the relevance of the argument to this course.

Table 1 Some of the ways that political influences affect evaluations

During evaluation design

The criteria for selecting evaluators
Evaluators may be selected:
  • for their impartiality or their professional expertise
  • for their sympathy toward the program
  • for their known criticisms of the program (in cases where the client wishes to use the evaluation to curtail the program)
  • for the ease with which they can be controlled
  • because of their citizenship in the country or state of the program’s funding agency.

The choice of evaluation design and data collection methods
The decision to use either a quantitative or qualitative approach, or to collect data that can be put into a certain kind of analytical model (e.g. collecting student achievement or econometric data on an education program), can predetermine what the evaluation will and will not address.

Example of a specific design choice: whether to use control groups (i.e. quasi-experimental design)
Control groups may be excluded for political or ethical rather than methodological reasons, such as:
  • to avoid creating expectations of compensation
  • to avoid denial of needed benefits to parts of a community
  • to avoid pressures to expand the project to the control areas
  • to avoid covering politically sensitive or volatile groups.
On the other hand, evaluators may insist on including control groups in the evaluation design to give an impression of rigor even when they contribute little to addressing the evaluation questions.

The choice of indicators and instruments
The decision to use only quantitative indicators can lead (intentionally or otherwise) to certain kinds of findings and exclude the analysis of other, potentially sensitive topics. For example, issues of domestic violence or sexual harassment on public transport will probably not be mentioned if only structured questionnaires are used.

The choice of stakeholders to involve or consult
The design of the evaluation and the issues addressed may be quite different if only government officials are consulted, compared with an evaluation of the same programme in which community organisations, male and female household heads, and NGOs are consulted. The evaluator may be formally or informally discouraged from collecting data from certain sensitive groups – for example, by limiting the available time or budget, a subtle way to exclude difficult-to-reach groups.

Professional orientation of the evaluators
The choice of, for example, economists, sociologists, political scientists, or anthropologists to conduct an evaluation will have a major impact on design and outcomes.

The selection of internal or external evaluation
Evaluations conducted internally by project or agency staff have a different kind of political dynamic and are subject to different political pressures compared with evaluations conducted by external consultants, who are generally believed to be more independent.
The use of national versus international evaluators also changes the dynamic of the evaluation. For example, although national evaluators are likely to be more familiar with the history and context of the program, they may be less willing to be critical of programs administered by their regular clients.

Allocations of budget and time
While budget and time constraints are beyond the total control of some clients, others may try to limit time and resources to discourage the addressing of certain issues or to preclude thorough, critical analysis.

During implementation

The changing role of the evaluator
The evaluator may have to negotiate between the roles of guide, publicist, advocate, confidante, hanging judge, and critical friend.

The selection of audiences for progress reports and initial findings
A subtle way for a client to avoid criticism is to exclude potential critics from the distribution list for progress reports. Distribution to managers only (excluding program staff), or to engineers and architects (excluding social workers and extension agents), will shape the nature of findings and the kinds of feedback to which the evaluation is exposed.

Evolving social dynamics
Often, at the start of the evaluation, relations are cordial. But they can quickly sour when negative findings begin to emerge or the evaluator does not follow the client’s advice on how to conduct the evaluation (e.g. from whom to collect data).

Dissemination and use

Selection of reviewers
If only people with a stake in the continuation of the project are asked to review the evaluation, the feedback is likely to be more positive than if known critics are involved. Short deadlines, innocent or not, may leave insufficient time for some groups to make any significant comments or to include their perspectives, introducing a systematic bias against these groups.

Choice of language
In developing countries, few evaluation reports are translated into local languages, excluding significant stakeholders. Budget is usually given as the reason, suggesting that informing stakeholders is not what the client considers valuable and needed. Language is also an issue in the United States, Canada, and Europe, where many evaluations concern immigrant populations.

Report distribution
Often, an effective way to avoid criticism is not to share the report with critics. The public interest may be at stake, as when clients have a clear and narrow view of how the evaluation results should be disseminated or used and will not consider other possible uses.
Source: Bamberger, Rugh and Mabry, 2006, pp. 116–117.