Theories in technology evaluation


2.3 Criticisms and complications of a participatory approach

Most, if not all, evaluation methods have drawbacks, and participatory approaches are no exception. Building on the previous discussion of issues of scale, scope and purpose, my intention here is simply to highlight four further examples that can be highly problematic. All of these scenarios and their outcomes are ‘not necessarily desirable, but they correspond to reality’ (Braun, 1998, p. 142).

  • Complexity. One of the most obvious sources of potential problems is that the complexity of an evaluation increases as the scope and degree of stakeholder participation grow. This, in turn, makes the process of evaluation harder to organise and manage. In addition, the more stakeholders there are, the more likely it is that the interests of one stakeholder, or a group of stakeholders, will run counter to those of others. Dealing with these competing interests puts further pressure on the organisation, management and application of the evaluation.
  • Stakeholders’ legitimacy. By this I mean the actual extent to which an identified stakeholder represents the constituents they claim to represent. This is an issue you may be familiar with from national and international politics rather than from evaluation. In an industrial dispute, for example, it is relatively common for company managers and governments to question the extent to which trade unions actually represent the wishes and interests of their members.

    In the context of evaluation similar situations may arise, particularly where there is a strict hierarchy of management or seniority. Evaluations that are carried out in these settings often end up collecting data on, for example, the implementation and/or use of technology by junior staff, from supervisors, managers or military officers who claim to speak on their behalf. One of the primary issues here is that the legitimacy of the spokesperson may be compromised because as a manager or military officer they have other interests that may compete with providing a frank account of what has really happened in the workplace.

  • ‘Missing’ stakeholder(s). The scenario I want to outline here is where no recognised stakeholder exists, or where one does exist but lacks the resources, profile and/or voice to articulate their interests, yet it is clear that people could have an effect on, and/or be affected by, a development, project or programme. How do evaluators respond? Do they put resources into enabling and empowering a stakeholder to come forward, or simply turn a blind eye? I suspect the latter is the option chosen in many cases.
  • Mobilising bias. By this I mean the identification of ‘preferred’ stakeholders: those whose interests are easier to accommodate in an evaluation. Examples include stakeholders who support a particular technology or project, a particular outcome or a particular design of evaluation, or who are more likely to be supportive of its possible findings.