It all sounds so simple. Our public services - health, education, social services and so on - aren’t perfect; just about everyone would agree on that. We want their performance to improve. But how are we going to tell if a particular service is really improving? We’ve got to have some way of recording and measuring its performance.
But no public service is going to become perfect overnight, so let’s tell the people in charge just how much we want the performance to improve, and when. In other words, let’s set some targets that will identify the priorities and make sure that the service actually moves in the right direction, at the right speed.
Ratings in the NHS
In practice, of course, it’s not as easy as that. Let’s take the British National Health Service (NHS). One problem is that the NHS is such a large organisation, and does so many different things. Therefore, any sensible list of targets is going to have to be rather long. If there are no targets set for an area of NHS activity, there’s a risk that too many of its resources will be diverted away to meet targets elsewhere.
But there are also pressures for targets and performance indicators to be simple: they need to be understood and acted on by a wide range of staff; they also attract political debate and media comment. But can a simple number do the job?
For a few years, up to 2005, NHS trusts in England were given an annual ‘star rating’, from no stars up to three stars. Every year, there was intense media discussion, nationally and locally, about how different hospitals were moving up and down the star ratings, but rather little about their performance on the many separate indicators (40 in 2005) on which the stars were based, and even less on aspects of performance that were not covered by the indicators.
So, in one sense, the outcome was simple - just a number of stars - but actually the ‘star rating’ was only the tip of a large iceberg of targets and performance measures, and the rest of the iceberg was often ignored.
From 2006, star ratings were replaced by an ‘annual health check’, which looks at a much broader range of issues in assessing the performance of healthcare organisations. Also, each NHS trust ends up with overall ratings on two different scales, representing ‘quality of services’ and ‘use of resources’, rather than a single number of stars. But again, most news coverage of these ratings has concentrated entirely on the overall ratings and not on the detail that goes into them.
Rating secondary schools
In secondary school education, in England, there used to be a simple way of rating the performance of schools. Take all the students in a school who are in their last year of compulsory schooling (aged 15 at the beginning of the school year), see what percentage of them get five or more GCSEs at grades A* to C, and publish these percentages in a league table. The best schools get the highest percentages. Or do they?
Surely some schools get a high score simply because their students were doing well even before they got to secondary school? This must be at least part of the reason that most of the schools near the top of the league were selective. So for several years, so-called ‘value added’ measures were also produced, that took account of students’ performance in national tests at ages 10–11.
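The idea behind a ‘value added’ measure can be sketched in a few lines of code. This is a deliberately simplified, hypothetical illustration, not the official calculation: it fits a least-squares line predicting each school’s exam score from its students’ prior attainment at ages 10–11, and takes ‘value added’ to be the gap between the actual score and the prediction. All the figures are invented.

```python
# Simplified illustration of the 'value added' idea (hypothetical data
# and a hypothetical linear model; not the official methodology).

def value_added(prior, actual):
    """Value added = actual score minus the score predicted from prior
    attainment, using a least-squares line fitted across all schools."""
    n = len(prior)
    mean_p = sum(prior) / n
    mean_a = sum(actual) / n
    slope = (sum((p - mean_p) * (a - mean_a) for p, a in zip(prior, actual))
             / sum((p - mean_p) ** 2 for p in prior))
    intercept = mean_a - slope * mean_p
    return [a - (intercept + slope * p) for p, a in zip(prior, actual)]

# Invented example: three schools' average scores.
prior = [26.0, 29.0, 33.0]   # mean age-10/11 test score per school
actual = [38.0, 46.0, 50.0]  # mean GCSE points score per school

va = value_added(prior, actual)
```

A positive value means a school’s results beat the prediction; on this measure a selective school with very high prior attainment can still show low or negative value added, which is exactly the adjustment the league-table critics were asking for.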
While most people agreed that these measures made the comparisons between schools fairer, many felt that they were still not as fair as they should be. For instance, there may be other reasons outside the school’s control, other than the students’ performance at age 10–11, why a particular school’s exam results are worse than average. Maybe the school is in an economically deprived area, or maybe it has more than its share of students whose first language is not English.
The 2005 English secondary school tables included a ‘contextual value added’ pilot study, where 430 schools were given a score that allowed for several such additional factors.
From 2006, a slightly revised version of these scores was published for all schools, and the original ‘value added’ tables were dropped. Yet there is still room for controversy over which factors should be taken account of, and exactly how the calculations are done.
And things are now far from simple: we have the original percentage score, several other similar percentage scores taking account, among other things, of whether students have good passes in English and Maths, and the ‘contextual value added’ score. From 2007 an additional, different, contextual value added score was also included, and also yet another percentage (for GCSE passes in Science).
And all this is only for the GCSE stage of schooling; there are yet more tables for A levels and for standard tests taken at younger ages.
Different parts of the UK have taken a different line. In Scotland, summaries of public examination results for individual schools are published, but not in any form resembling league tables. In Wales and in Northern Ireland, even less information about individual schools is made available: the devolved administrations in these countries argue that the kind of data published in England is just too misleading.
Arguments against targets
Meanwhile there are general arguments against targets from many sides. In May 2006, the deputy chairman of the British Medical Association, a doctors’ organisation, blamed bullying of staff in the NHS on a “survival of the fittest culture” caused by “the highly pressurised target ethos in the health service”.
But, if we accept such arguments, should we have no targets or performance indicators at all? In that case, would we know how our local schools or hospitals were doing? Would it matter if we didn’t know? Perhaps not, some people argue, if we can be sure that all hospitals and schools are up to standard. But how can we be sure even of that, unless performance is being monitored somehow?
Others have argued that we do not need national targets or performance indicators: people in different places have different priorities, and the targets should reflect that. Published school and hospital performance data already have a different basis in Manchester and Glasgow, because of devolution. Why shouldn’t they be different in Manchester and Swindon as well?
But when there are differences between areas in service provision, the press are full of stories about ‘postcode lotteries’. Collectively, we don’t seem to be keen on postcode lotteries, but without national performance indicators, we might not even be sure whether they existed.
These are not easy questions. What do you think?