5.6 Developing the Open University hard systems method
When the writers of the course T301 Complexity Management and Change, the predecessor to T306 (the course from which this unit is taken), started in 1982, they had to decide what to include and what to leave out (just as we have). They started with the systems analysis approach of the engineers De Neufville and Stafford (1971), which had been developed in a civil engineering group at the Massachusetts Institute of Technology (MIT). De Neufville and Stafford defined systems analysis as ‘a coordinated set of procedures that addresses the fundamental issue of design and management: that of specifying how men, money, and materials should be combined to achieve a larger purpose’ (p. 1). Unlike this course, De Neufville and Stafford had a definition of system much closer to its everyday meaning: any complex, large combination of facilities. They also conceived of a systems analyst, who would use the power of computing to be explicit ‘perhaps as never before, about what is involved in the creation of a design and what goes on while this takes place’ (p. 3). They believed a systems analyst would:
be more aware of their objectives because of being forced to make explicit statements about what they were and how they were to be measured;
have mechanisms at their disposal for predicting future demands on a system, which often are not observable in advance but must be determined from an interaction of social and economic factors;
have a procedure for generating a large number of possible solutions and for determining efficient methods to search through them;
use optimisation techniques that can pick out favourable alternatives;
be aware of strategies of decision-making that could be used to select among possible alternatives.
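The cycle implied by these points – make objectives explicit and measurable, generate a large set of alternatives, then use optimisation to pick out favourable ones – can be illustrated with a toy sketch. The objective function, candidate designs, weights, and feasibility constraint below are invented for illustration; they are not drawn from De Neufville and Stafford.

```python
# Toy sketch of the analyst's cycle described above: an explicit,
# measurable objective; a systematically generated set of alternatives;
# and an optimisation step that selects a favoured one.
# All figures are illustrative assumptions.

from itertools import product

def objective(alternative):
    """Explicit statement of purpose: here, weighted benefit minus cost."""
    capacity, cost = alternative
    return 3 * capacity - cost  # the weight 3 is an invented preference

# Generate a large number of candidate designs (capacity, cost pairs),
# keeping only those that satisfy a crude feasibility constraint.
alternatives = [(cap, cost)
                for cap, cost in product(range(1, 6), range(1, 10))
                if cost >= cap]

# Optimisation: search the space for the best-scoring alternative.
best = max(alternatives, key=objective)
print(best)  # → (5, 5)
```

The point of the sketch is the structure, not the numbers: changing the objective function or the constraint changes which alternative is ‘favourable’, which is why the method forces these choices to be stated explicitly.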
De Neufville and Stafford (1971) regarded systems analysis as being consistent with a scientific approach, which they thought followed the process described in Figures 40 and 41. The T301 course team was not fully satisfied with the original method developed by De Neufville and Stafford. They developed a new starting sequence for the application of the hard systems approach (HSA) because they knew from experience that the initial conditions often shaped how the analysis proceeded (something that chaos theory has subsequently made more generally known – see Appendix C).
The T301 team saw the starting point as a decision, problem, or opportunity, moving away from the previous focus on starting with just a problem. They did this so that the HS-method could be taught as dealing with aspects of decision making designed to prevent problems and messes from occurring, as well as with recognising opportunities and seizing them in an optimal way (Tait, 1982). With this change of emphasis at the start, the academics recognised it was no longer essential to have a client–consultant relationship – the approach was available to everyday decision making. The revised OU HS-method began with describing the system using what was known as the system-description method. The process is described in Figure 42. Joyce Tait, one of the contributors to the development of the OU hard systems method, says in reflection:
I think what was innovative at the time was the fairly strong warning about inappropriate application of the HS-method and the inclusion of a soft stage 1, under ‘system description’. The existing ‘method for system description’ was incorporated into this first stage for reasons of consistency in our courses, but I think it was a mistake – not a robust enough method. Thinking back, it would have been a good idea to use the early stages of the soft systems approach (Checkland's SS-method) as a preliminary to HS-method decisions on ‘what are the objectives’ but we would never have got away with it. Peter Checkland was external assessor and he was against any softening of the HS-method. I think this aspect is still innovative. There is not much sign, given the huge amount of misapplication of HS-method, particularly in the public sector, e.g. the whole measures of performance industry, that the points we were making in T301 have got through to anybody.
In other words, Joyce Tait believes that generally the systematic HS-method would be better contextualised within a systemic approach. My colleague, Simon Bell, gives the following example:
In 1996, I worked with an Indian researcher from an agricultural research centre on a project concerned with monitoring the uptake of recommendations about fertilizer use by small farmers in remote regions of southern India. He was in a complete fix about how to do this and had difficulty scoping the problem.
We started off with SS-method and got a picture of why current monitoring did not work. This boiled down to farmers being busy when monitoring people turned up during the working day and the fear experienced by officers of the local research centre about being spied on by staff from the research centre. When he finished his SS-method and produced an activity plan it indicated the need for some quantitative modelling of the current situation as a base-line survey. He then used the HS-method to identify how such a survey could be conducted and what it should measure.
As with any modelling activity (see OpenLearn unit T553 Systems Modelling) the cost of measurement and data collection must be taken into account. The HS-method does not take this explicitly into account; nor do most methods whether these are systems approaches or not.
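The cost-of-measurement point can be made concrete with a hedged sketch: when designing a base-line survey of the kind Simon Bell describes, the value of extra precision has to be weighed against the cost of collecting the data. The square-root precision model, the cost per visit, and the value placed on precision below are all invented assumptions, used only to show the shape of the trade-off.

```python
# Illustrative trade-off between survey precision and data-collection cost.
# Toy model: precision improves with sample size n (diminishing returns,
# roughly as 1 - 1/sqrt(n)), while each visit to a remote farm has a cost.
# All figures are invented for illustration.

import math

COST_PER_VISIT = 40        # assumed cost of one monitoring visit
VALUE_OF_PRECISION = 2000  # assumed value placed on a fully precise survey

def net_benefit(n):
    """Value of the survey's precision minus the cost of collecting it."""
    precision = 1 - 1 / math.sqrt(n)
    return VALUE_OF_PRECISION * precision - COST_PER_VISIT * n

# Choose the sample size with the best net benefit - not simply the most data.
best_n = max(range(1, 200), key=net_benefit)
print(best_n)  # → 9
```

Under these invented figures the optimum is a fairly small sample: beyond it, each extra visit costs more than the precision it buys. A method that ignored collection cost would keep recommending more data indefinitely.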
So-called hard systems methods attained the dominant position Peter Checkland refers to in Appendix D (see Activity 49 below, where this appendix is attached for your convenience), because of some powerful organisational pressures. It is argued that they were desperately needed because of the increasing complexity of decision making in organisations. It was considered no longer safe to leave decisions that could affect the whole organisation to the hunches and best guesses of one person, however experienced they might be. As organisations increased in size after World War II and the number of variables to be controlled increased, the levels of risk and uncertainty associated with decisions rose. The environment of organisations also changed rapidly, so that organisations had to become more adaptive, more ready to change, and more aware of the political and social implications of their decisions. In addition to these factors, there was a rapid expansion in the domain of knowledge, so that the amount of data that could bear on a particular decision or problem exceeded the scope of one person. A further factor favouring the development of hard systems methods was their strong emphasis on the supposed logical and scientific nature of the decision-making process (see Figure 40). Stafford Beer (1966), in the preface to Decision and Control: the Meaning of Operational Research and Management Cybernetics, states: ‘This book is about management and the way in which it may invoke the use of science to help solve problems of decision and control’. Given the dominance of the scientific method in our culture, this heavy emphasis on the scientific nature of hard systems thinking helped to give it academic weight and respectability. It is interesting to note the same cycle repeating itself with regard to ‘the sciences of complexity’ as evidenced in Horgan (1996) (see also Appendix C).
Many of the early hard systems methods were developed by observing good managers, decision-makers or designers in action and codifying what they did. As defects in these early methods became apparent the methods were modified and changed. The OU hard systems method evolved as a result of teaching the approach to students at summer school and in response to certain criticisms of hard systems methods in general (see Tables 3 and 8).
The OU experience with the predecessor course to T306 was that students, who had to choose a project based on a hard systems, soft systems or systems failures approach, were often unable to appreciate the relationships between approach and context. Often, those students who were predisposed to technological or quantitative approaches finished the course still seeing the world only in hard systems terms. They were unable to make the epistemological distinctions I have outlined as being desirable in my ideal of a systems practitioner. In the process they metaphorically dropped the C ball!
Unfortunately, the terms hard and soft are now part of the systems tradition. The reasons for this are outlined in more detail in Appendix D. In this unit, we do not wish to perpetuate the hard and soft distinctions but prefer the distinctions systemic and systematic (Table 3) used in an holistic way, i.e. not either/or but both/and.