7 Designing evaluation
So far in this course we have considered the information used in making a selection. What about information about consultants thereafter? Evaluating consultants’ work was highlighted as one of the difficulties in this area, yet such information is important. You may need to evaluate a small initiative (perhaps a limited diagnosis, feedback and initial planning contract) with a view to deciding whether to work more extensively with the consultant. You may need measures to contribute to managing the consultant’s performance while the project is ongoing. You may need to justify the expenditure to your superiors. You may be considering further use of the consultants, or you may be asked for your opinion by colleagues considering their services. And you may wish to learn from the experience, with a view to making more effective use of consultants in future. Even in fairly straightforward interventions, measurement can be difficult and interpreting measures even more so.
For this reason it is important to think about the information needed to evaluate a proposed consultancy, and the processes for obtaining this information, from the very first stages. In addition to clarifying the purposes of evaluation, you need to consider the nature of the information sought. Often a ‘measure’ is merely an indicator: it may be possible for the person measured to influence the indicator directly without changing the substance of what it purports to measure. (This is the equivalent of winding back the odometer on a car, and you will be familiar with many such ways of influencing alleged measures from your experience of performance appraisal.) When establishing ‘measures of effectiveness’, you need to be alert to this possibility.
Imagine that you have hired a consultancy firm to improve your graduate recruitment. They have convinced you that the best way forward is for them to design and run a new assessment centre to select graduate recruits. How might you measure the success of such consultants, and what questions might such measures leave unanswered?
You may have identified a number of potential measures, for example:
the extent to which agreed price and timescale targets were met – easy to measure
the cost of the process compared with the previous methods
acceptance rates for those offered jobs compared with previous experience
retention rates for those selected by the new method, in the short and longer term
the percentage of recruits selected as potential high fliers at the end of probation
candidate satisfaction with the process (for successful and unsuccessful candidates)
candidate assessment of the organisation
the identification of development needs in successful candidates.
In the scenario described, a total disaster would become apparent before too long. If you had measures for the previous system, you might be able to detect less dramatic strengths and weaknesses over a suitable period. What is much harder is knowing whether another solution would have been better. Might another consultancy have recommended training internal staff to run the assessment centre? While the initial investment might have been higher, the overall costs might have been lower, and there might have been other benefits. The consultancy-run approach may also have had hidden disadvantages: the better graduates rapidly become aware when off-the-shelf exercises are being used, as they encounter the same ones time after time, and become ‘sophisticated testees’. Furthermore, the organisation loses the opportunity to ‘sell itself’ from the outset to strong candidates, and managers miss the chance to start building a relationship with their new staff at the very first stage – I have inherited many staff selected for me by others, and have never had the same relationship with them that I have with those I was instrumental in choosing. Relative success is far harder to evaluate than outright failure: it is virtually impossible to say whether a consultancy was superb or merely competent.
You can begin to see from the above discussion why evaluation is so problematic. Designing graduate recruitment is simple relative to some of the change interventions with which HR consultants may be involved, yet even here evaluation presents significant problems. In any initiative which takes months, as many do, it may be possible to compare ‘before’ and ‘after’, but many other factors may be influencing the measures. Key personnel may have changed, the external environment may have changed, and either or both may be affecting any of your chosen measures. And even if you can be relatively sure that the intervention is the major influence, you will never know whether a different intervention, or one managed differently by another consultancy, might have been more effective.
Despite the difficulties, it is important to attempt to evaluate any initiative, not only for the reasons outlined above, but to maximise the learning (for both client and consultant) from the experience. Such evaluation is helped by discussing it from the outset, and in particular by:
clarity about what the consultancy is initially intended to achieve, and within what constraints (essential in any case)
clarity about the measures of effectiveness to be used
careful logging of any changes to these initial objectives, with a discussion of the reasons for these changes
notes on what went well and what went less well in the project, together with an assessment of the contribution of client and consultant in each case
notes, at the time, on any lessons learnt
an overall ‘open and honest’ assessment of the assignment with the consultant once it is complete.
If your organisation makes extensive use of consultants, and does not already have a system for sharing evaluation information, you may wish to investigate whether such a system should be devised. However, it is important to ensure that evaluation does not work against the collaboration that is necessary in many forms of HR consultancy. For this reason the evaluation needs to be clearly designed to aid learning by both parties. For collaborative working, ‘evaluating the relationship’ will be a key part of this learning.
If you are involved in hiring consultants, review your experience to date and, in the light of the factors described in this course, consider how the quality of future hiring decisions might be improved. If you work as a consultant, then use the same material as the basis for considering how you might increase your chances of being hired by a potential client. Draw up an action plan for taking the necessary steps to bring about relevant improvements.