"67% of people more likely to believe a news report containing a number than one that doesn't" *Source: me imagining the answers that three people I know well I think might have given to a vaguely related question.
Watch any TV or radio news broadcast, or read through any newspaper, and you're likely to find numbers being used to give the story an additional sense of authority. In the same way, numbers are used by politicians to support policy claims and decisions made at local, national and international levels of government.
But where do these numbers come from, why are they there, and should we trust them?
Where do the numbers come from?
Different numbers come from different places:
- sports pages tend to be filled with results and league tables bought in from specialist wire services;
- business pages buy in information about stock market prices and price movements from financial data providers;
- lifestyle pages are often replete with the results of surveys commissioned by PR companies or lobby groups;
- politics pages are filled with poll results, often commissioned by the news organisations themselves;
- home affairs pages detail information released as official statistics, often, but not always, published in the UK by the Office for National Statistics;
- and world news pages draw on data collected by other governments, as well as international organisations such as the EU, the United Nations (and its various offshoots) and the World Bank.
In some cases, there will be little argument about the "truthiness" of a number: if the score at the end of the football match is 2-0, then that's the result. Sports such as horse racing and Formula One provide an exception - the result at the end of the race is provisional, and subject to confirmation by the race stewards before being released as a firm result.
In other cases, the extent to which the numbers can be deemed "true" is more open to question, even where there is a strong inclination to trust the source of the data.
This is particularly the case for national statistics. Numbers such as population figures are accurately determined at the time of a census, but in intervening periods are derived from forecasts projected from the data collected in the census year. Economic indicators such as GDP (gross domestic product) are often released as provisional figures, subject to later revision, as non-profit fact checkers Full Fact describe in This week's GDP figures explained [28 January, 2014]: "[t]he ONS measures GDP in three different ways, what's known as the output (what we produce), income (what we earn) and expenditure (what we spend) approaches. But they don't measure all three at the same time. ... along with preliminary estimates, there are second estimates, quarterly national accounts and the annual blue book, as well as occasional general changes, each of which uses progressively more data and better methods for estimating GDP, giving us a fuller and more accurate picture".
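As a purely illustrative sketch of why a provisional figure gets revised (the numbers below are invented, and this is not how the ONS actually weights its data sources), imagine an estimate of a total built up from survey returns that arrive over time: each successive estimate is based on more of the returns, so the headline figure shifts as the picture fills in.

```python
# Invented example: estimating a quarterly total from survey returns that
# arrive in stages. Early estimates scale up a partial sample; later ones
# use more (eventually all) of the returns, so the figure gets revised.
all_returns = [12, 9, 15, 7, 11, 14, 8, 10, 13, 6]   # made-up firm-level output

def estimate_total(returns_received, total_expected):
    """Scale up the returns received so far to the expected number of firms."""
    return sum(returns_received) * total_expected / len(returns_received)

n = len(all_returns)
preliminary = estimate_total(all_returns[:4], n)   # based on the first 40% of returns
second      = estimate_total(all_returns[:7], n)   # based on 70% of returns
final       = estimate_total(all_returns, n)       # all returns in

print(f"Preliminary estimate: {preliminary:.0f}")
print(f"Second estimate:      {second:.0f}")
print(f"Final figure:         {final:.0f}")
```

The point of the toy is simply that each estimate is the best available at the time it is made, and none of them is "wrong" - they just rest on different amounts of evidence.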
Is GDP a useful measure anyway? Listen to this July 2011 episode of Thinking Allowed, in which philosopher Martha Nussbaum outlines an alternative "human capabilities" approach for measuring how populations are flourishing or failing.
A further striking example of how figures can change dramatically when base measures change was reported by Morten Jerven, author of a book on development statistics in Africa, writing in the Guardian in 2012 (on how Ghana went from being one of the poorest countries in the world one day to an aspiring middle-income one the next):
GDP is typically calculated as a sum of the "value added" of the production of goods and services in all sectors of the economy. In order to compare one year's value added with another, and thus get an idea of whether the economy is expanding or contracting, a new set of sums for all the sectors are computed. In order for these two amounts to be comparable, they are expressed in constant prices. The easiest way of doing this, particularly if data are sparse, which they are at most African statistical offices, is to generate "base year" estimates for future level estimates. When picking a base year the statistical office chooses a year when it has more information on the economy than normally available; such as data from a household, agricultural or industrial survey. The information from these survey instruments is added to the normally available administrative data to form a new GDP estimate. This new total is then weighted by sectors, thereafter other indicators and proxies are used to calculate new annual estimates.
With a previous base year of 1993, "the statistical office was increasingly aware that they were underestimating the size of the Ghanaian economy"; rebasing using 2006 figures that included sectors that didn't even exist in 1993 (such as mobile telephony and the internet) saw its GDP estimates jump by 60%.
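To get a feel for how rebasing can change the headline number, here is a minimal, entirely made-up sketch in Python. The sector names and figures are hypothetical and are not Ghana's actual accounts; the point is simply that adding previously unmeasured sectors to the base-year estimate lifts every GDP figure built on top of it.

```python
# Toy illustration of GDP rebasing (all numbers are invented for illustration).
# GDP is approximated here as the sum of "value added" across sectors.

old_base_1993 = {            # sectors captured in the old base year
    "agriculture": 40,
    "manufacturing": 30,
    "services": 30,
}

new_base_2006 = dict(old_base_1993)   # rebasing starts from a fresh survey...
new_base_2006.update({                # ...which now captures sectors that
    "mobile telephony": 20,           # simply didn't exist (or weren't measured)
    "internet services": 10,          # when the 1993 base year was set
})

old_gdp = sum(old_base_1993.values())
new_gdp = sum(new_base_2006.values())

print(f"Old-base GDP estimate: {old_gdp}")
print(f"Rebased GDP estimate:  {new_gdp}")
print(f"Jump on rebasing:      {100 * (new_gdp - old_gdp) / old_gdp:.0f}%")
```

Nothing in the real economy changed overnight; what changed was the yardstick.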
In many cases, then, official government numbers come from predictions based on estimates derived from officially collected returns at some point in the past.
At the other end of the spectrum, we see numbers with much weaker provenance entering the public sphere via the press in their reporting of popular surveys and polls. As an example of how polls and surveys make it into the news, survey company Opinium publish a list of news stories based on their polls.
To make it easier to search for original polls and surveys, as well as how they are reported in the press, see this quickly-put-together SurveyWatch custom search engine that searches over a variety of UK pollster and national news organisation websites.
Somewhere in between, we can also find evidence of numbers that gain the veneer of official respectability whilst coming from a far less authoritative source, as when politicians use numbers from dodgy polls to justify a particular policy choice (for example, When A Government Minister’s Data Laundry is Hung Out to Dry...).
Why are the numbers there and how are they communicated?
Given that numbers are widely used in news reports and policy briefings, it makes sense to ask why they are included at all. Although there seems to be a surprising lack of research on this topic by mass communication researchers, there have been claims that "using numbers in news increases story credibility" (Willem M. Koetsenruijter in the presumably partisan, albeit peer reviewed, Newspaper Research Journal, Vol. 32, No. 2, Spring 2011). Citing other sources, Koetsenruijter suggests that his results support others' claims that "the use of numbers is more about creating a particular impression than it is about the actual content that the numbers provide". That is, numbers are used for rhetorical effect, for example to help establish the credibility of the author or support the development of an apparently logically constructed argument.
Drawing on his Institute for Fiscal Studies (IFS) Annual Lecture of November 2012, Andrew Dilnot, Chair of the UK Statistics Authority, wrote in Numbers and Public Policy: The Power of Official Statistics and Statistical Communication in Public Policymaking of "the power of precise thinking and the ways in which numerical analysis and analytical thought can give you, first, clarity and, second, an important sense of understanding, both of which are powerful tools in public policymaking". To this extent, mathematical models, and the use of numbers, can be used to help communicate a particular line of analytical reasoning.
So what does Dilnot think are the important issues associated with statistical communications? He identifies the following seven areas which we can also use as the basis for a checklist against which to evaluate the way in which numbers are used as part of a communication:
- uncertainty: statistics are estimates based on samples, so it is important to make clear how much uncertainty there is around any particular measure (in a numbers game, this would correspond to identifying the variability, error or confidence limits associated with a statistic; a minimal sketch of a confidence interval calculation appears after this list). Is the uncertainty communicated?
- trends: in this, Dilnot is unequivocal - "we should always be clear that the number makes most sense when it is set in its long-run context and we must look at those trends". Are trends discussed?
- accessibility: numbers should be communicated in meaningful ways that help the audience get a feel for how big (or small) the numbers actually are. I heard a news report the other day saying that Lloyds Bank's total payment protection insurance claims bill had reached about £10 billion. I did a quick estimate to get some bounds on that number: if there are a million claimants (2% or so of the UK population) they get £10,000 each; if they had been paying insurance for about 10 years on 5 policies and were just claiming a refund, that would be £200 per year per policy, or about £17 a month per policy (this back-of-envelope calculation is sketched in code after the list). Note that I don't know if these estimates are correct, I was just trying to get a feel for what £10 billion might actually relate to in the absence of other information. Do we get any sense of whether the numbers are "big" numbers, or "small" numbers?
- international comparisons: where international comparisons are meaningful, this can put national figures into some sort of international context, particularly when reported on a per capita - i.e. per head of population - basis; are comparisons made with other countries (and if so, on what basis might those other countries have been chosen?), which leads directly to:
- context: in the grander scheme of things, how does this piece of information sit? For example, "if you are talking about manufacturing industry, for example, it might be useful to tell people how large a share of the national economy it is". Do you get a sense of the wider context?
- attribution and causation: if you don't have strong grounds to make a claim, don't. Presumably, on the other hand, if there is a strong link, maybe it should be communicated? Are any claims about causality made, how strong are they, and how well supported do they appear to be?
- comprehensible words: undue subtlety and the overuse of jargon are the enemies of effective communication. (From which we might also take the message: beware of people using such techniques and ask why they might be doing so?!) Is the piece written in plain English? Does it hide subtle claims that may change the meaning or interpretation of the piece?
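To make the uncertainty point concrete, here is a minimal Python sketch using an invented opinion-poll example (the sample size and proportion are assumptions, not figures from any real poll) of how a margin of error and approximate 95% confidence interval might be reported alongside a headline percentage:

```python
import math

# Invented example: a poll of 1,000 people finds 40% support for a policy.
n = 1000          # sample size (assumed)
p = 0.40          # observed proportion (assumed)

# Standard error of a proportion, and an approximate 95% confidence interval
# (headline figure plus or minus roughly two standard errors).
se = math.sqrt(p * (1 - p) / n)
margin = 1.96 * se

print(f"Headline figure: {p:.0%}")
print(f"Margin of error: +/- {margin:.1%}")
print(f"95% confidence interval: {p - margin:.1%} to {p + margin:.1%}")
```

A report that simply says "40% support the policy" hides the fact that, on this sample size, the true figure could plausibly sit anywhere from about 37% to 43%.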
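And here, purely as a sketch of the back-of-envelope reasoning in the accessibility point above (the claimant count, policy count and time period are the same rough guesses as in the text, not actual figures), is the £10 billion estimate worked through in Python:

```python
# Rough bounds on a "£10 billion" headline figure (all inputs are guesses).
total_bill = 10_000_000_000   # reported total PPI claims bill, in pounds
claimants = 1_000_000         # guess: roughly 2% of the UK population
years_paying = 10             # guess
policies_each = 5             # guess

per_claimant = total_bill / claimants                                 # £10,000 each
per_policy_per_year = per_claimant / (years_paying * policies_each)   # £200 a year
per_policy_per_month = per_policy_per_year / 12                       # about £17 a month

print(f"Per claimant:          £{per_claimant:,.0f}")
print(f"Per policy, per year:  £{per_policy_per_year:,.0f}")
print(f"Per policy, per month: £{per_policy_per_month:,.0f}")
```

The answer isn't the point; the point is that a few rough divisions turn an unimaginable headline figure into sums we can actually picture.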
With such checklists in mind, we may be better able to identify how numbers are being used to persuade us as well as inform us.
Should we trust them?
Having briefly reviewed where some of the numbers that influence decisions come from, and why and how they are presented to us at all, it is also worth considering the extent to which we should trust them.
In the abstract to a presentation given at the 58th World Congress of the International Statistical Institute, Dublin, during the fourth week of August 2011, Richard Alldritt and Jacob Wilcock of the UK Statistics Authority suggest that: "[o]fficial statistics are not inherently trustworthy. They are rarely complete counts like football scores. For the most part, they are estimates drawn from whatever sources are available. It is right to expect them to be the best estimates that can be produced - at the time and given available resources. But that is not the same as being ‘right’. Even when statistics are known to be accurate, such as those of school examination results, they can tend to mislead. These and many other statistics do not measure quite what the user might assume they measure. So what is it we are asking the public to trust?"
For those of us who aren't statisticians, how can we start to view numbers critically? This handy guide from the OU's Professor of Applied Statistics, Kevin McConway, and Winton Professor for the Public Understanding of Risk at Cambridge University, David Spiegelhalter, gives a quick checklist for making sense of health statistics: Radio listener’s guide to ignoring health stories
Alldritt and Wilcock then identify three different takes on what it means to trust the outputs of statistical offices: being trusted to explain, being trusted to be the best available, and being trusted to be the most relevant. Taking these in reverse order, the choice of which statistics to measure comes under the banner of identifying the most relevant statistics. Relevance may in part depend on the use to which the statistics are put, as for example described in The Use Made of Official Statistics, a 2007 report from the Statistics Commission, forerunner to the UK Statistics Authority. Requiring the best available statistics gets across the idea that, since statistics are very often estimated, the means by which those estimates are obtained should be as robust as possible. Being trusted to explain is a more nuanced requirement, based on the fact that a characteristic shared by many statistics - whether accurate or not - is that they do not always mean quite what they seem to mean. As Alldritt and Wilcock further illustrate:
[a] wealthy area of England can have low GDP per capita - perhaps because it has a large retired population. A rise in recorded thefts from shops can be due to increased police attention to such crimes. Statistics of migration exclude short term migrants and a rise in their numbers may not be visible. Population statistics for a city centre will usually exclude the large day-time population of tourists and commuters. Statistics of road accidents may be missing large numbers of accidents about which the police were not informed. The common strand in these examples is that the statistics have some important limitations that are not obvious from the figures themselves. We want to be able to trust that these limitations will be frankly and honestly explained to the user [my emphasis].
As we can see, contextualising statistics and understanding the conditions under which they were generated is an essential part of communicating them if we aren't to misinterpret them. And where this (at first glance, pedantic) approach is not followed, we should not be too quick to jump to conclusions about what the numbers actually say.
Making sense of it all
Numbers are everywhere: a special class of often memorable facts that we can appeal to and share in everyday conversations as well as at the highest levels of international decision-making. But many such numbers are estimates, and it's worth remembering that. The more precise a number, the more conditions and caveats are likely to be attached to the way it was actually calculated (or, in the limiting case, actually counted). But whilst the big numbers used in big decisions may often have an element of uncertainty associated with them, that does not mean we shouldn't trust them. Rather, we should check the means by which those numbers came to be, and by which they entered the argument; in short, we should ask ourselves: why this number in particular, not that one? And who says so?