1.2 Methodological issues for evaluation
The paradigm problem
The conflict between the universalist and contingency approaches to evaluation illustrates a far more significant tension between paradigms, one you may recognise if you are familiar with the social sciences. The ‘paradigm problem’ or ‘paradigm wars’ refers to the hostility that developed in the social sciences from the 1960s, and became particularly pronounced through the 1980s, between advocates of the ‘naturalistic’ (or ‘interpretive’) paradigm and advocates of the ‘scientific’ paradigm, each with its own standpoint on the most appropriate approaches and methods for research. Their contrasting positions are outlined in Box 1.
Box 1 Contrasting paradigms
Research is a systematic investigation to find answers to a problem. Research in professional social science areas, like research in other subjects, has generally followed the traditional scientific method. Since the 1960s, however, a strong move towards a more qualitative, naturalistic and subjective approach has left social science research divided between two competing approaches: the scientific empirical tradition and the naturalistic phenomenological mode.
In the scientific method, quantitative methods are employed in an attempt to establish general laws or principles. Such a scientific approach is often termed nomothetic and assumes social reality is objective and external to the individual.
The naturalistic [or interpretive] approach to research emphasises the importance of the subjective experience of individuals, with a focus on qualitative analysis. Social reality is regarded as a creation of individual consciousness, with meaning and the evaluation of events seen as a personal and subjective construction. Such a focus on the individual case rather than general law-making is termed an idiographic approach.
Each of these two perspectives on the study of human behaviour has profound implications for the way in which research is conducted.
Although it is fair to say that the tension between the naturalistic and scientific positions has nowadays eased and debate about the relative merits of each paradigm has cooled, the division still exerts an influence. There are, for example, plenty of scholars and practitioners of evaluation and research who consider qualitative research non-scientific and non-rigorous, dismissing its results as merely ‘descriptive’. This, as Bamberger et al. (2006) note, is tantamount to saying that it has lower value than quantitative research. Meanwhile, those who favour the qualitative paradigm can be equally sharp in their criticism of quantitative research. Some of the other important differences between the two approaches are outlined in Box 2; others form part of the discussion that follows.
Box 2 Reductionism and expansionism
Research in the quantitative paradigm is reductionist, reducing data to numbers to measure outcomes and support correlations, comparisons, trends, and probabilities. But qualitative data cannot be reduced to numbers without unacceptable loss of meaning. Because the qualitative view of a program involves interdependent aspects, too many and too dynamic to isolate meaningfully, qualitative practitioners attempt complex overall or holistic views. As new data reveal ever more complexity, the inquiry is expansionist. Theoretical triangulation in analysis increases the expansion as evaluators consider data from different conceptual or theoretical frameworks and surface competing findings and rival explanations ... The limitations of each methodology underlie the realisation that using mixed-method designs can capitalise on the strengths of both qualitative and quantitative strategies, and each can help compensate for the weaknesses of the other.
Given that the quantitative versus qualitative issue still features in a good deal of the literature on evaluation, albeit implicitly, I agree with Clarke that ‘An insight into this debate is essential if evaluators are to avoid becoming slavishly attached to one particular methodological paradigm’ (1999, pp. 37–38). Consequently, a fuller discussion of the topic follows.
I would also highlight one further reason why an insight into this subject is important to evaluators: funding bodies and other potential sponsors and clients of evaluation will almost certainly have their own methodological preferences. This applies to other stakeholders too, although for obvious reasons the most influential are likely to be those funding the evaluation. As we have already seen, such preferences may stem from whatever paradigms and philosophies an individual, group or society subscribes to. They may also be adopted for purely instrumental reasons, as Jamison and Baark’s example of a ‘practical’ evaluation trajectory illustrates (Evaluating technology (T887_1)). A discussion of the ‘hidden’ functions of evaluation later in the unit also highlights this point. In short, a particular paradigmatic position might be chosen to ensure a particular outcome from an evaluation. Ultimately, it is important to be aware of both your own and others’ biases and preferences when seeking support for a particular approach to the design, application and reporting of evaluation.