4.1.7 Micro-level technology assessment
I defined micro-level technology assessment as an activity that takes place within public or private organisations or within government. It therefore focuses on making some form of systematic attempt to foresee the costs and benefits (consequences) of an investment in a technology, or in a technology-based project or programme, within a particular organisation or institution. Clearly the type and range of potential environmental and contextual variables that need factoring into micro-level assessment will differ from those that are relevant at the macro and meso levels. As we have already seen, however, this does not mean that assessment at this level is free from the influence of features of the macro and meso domains. For example, the assessments and evaluations of information systems development in English local government that I carried out with colleagues in the 1990s often highlighted the significance of EU policy to local councils in England (Bellamy, Horrocks and Webb, 1995).
Another feature of assessment in the micro sphere is that, as we are dealing with a context that can be defined fairly easily, it might also be possible to control it (although this potential obviously varies depending on the size of the organisation). However, while scope and controllability may work in an evaluator’s favour, they can also create other challenges. One of the most significant is that the customers, commissioners and stakeholders of an assessment are likely to expect far more precise and accurate (i.e. realistic) findings than they would expect from meso and macro assessments. This may be difficult to achieve, particularly in larger organisations where there may be a variety of ‘sub’ organisations (e.g. departments and units) and/or where an organisation is split over several locations. (These are, of course, formal ‘sub’ organisations. In many organisations there are also likely to be numerous informal ‘sub’ organisations. Identifying these, let alone factoring them into an assessment, is likely to be a significant challenge.) Again, I have direct experience of this: an organisation had decentralised its management and functions and, by the time I arrived to carry out an evaluation of the implementation of certain technologies, many parts of the organisation had developed their own systems and applications (largely without the knowledge of corporate management).
Situations such as these mean that micro assessment takes on some of the characteristics of meso assessment. An evaluator will recognise that this returns complexity to the equation. However, the commissioners of the assessment may not wish to acknowledge this. If they are senior managers – which is likely – they may, for example, believe (or wish to give the impression) that the organisation they manage functions as a cohesive entity. They may also believe that the management team is able to exert a high degree of control over the various parts and players of the organisation. In most cases the evaluator has to take these claims at face value, but once inside the organisation they may soon find that reality is somewhat different.
A significant amount of research into organisations and organisational behaviour supports this view, highlighting that even quite small organisations can be far more complex than they look ‘on paper’. Managers may be able to produce detailed diagrams of organisational structures and responsibilities but, as I know from my own experience, and as many of you will also know, in practice organisations are seldom what they look like on paper.
One way to address this tension is to appoint evaluators from within an organisation because they already possess this tacit knowledge. This course of action is often dismissed, however, not least because of fears about the objectivity of internal evaluators being compromised. That said, there are plenty of instances where the objectivity of external assessors and evaluators is equally open to question.
Another factor that can work against the production of a realistic assessment is time. Any assessment or evaluation requires a reasonable and sensible amount of production time. Again, the scale and scope of the exercise have a direct bearing on what this might be. Micro-level assessments are typically seen as relatively short exercises, which might well be a reasonable assumption to make except where the kind of ‘hidden’ complexity I noted above is present. Then what might have looked like a one-week exercise on the basis of the briefing given by the commissioners becomes either impossible to complete to the desired quality in that time, or results in a request for more time or resources.
Ultimately, although micro-level technology assessment may seem more straightforward than either meso- or macro-level assessment, in practice the real issue is that the nature of the challenge changes. As the authors of one of the leading books on the measurement and management of IT costs and benefits remark ‘... predictive evaluations [assessments] are complex. The evaluator has to understand the existing system in order to predict and understand the future investment [in technology], as well as be able to estimate the potential impact of the future situation’ (Remenyi et al., 2005, p. 27).