Surveys are an inescapable part of our society. Hardly a day passes without a mention in the media that “A survey has revealed such and such”. If you’ve managed to get through life without ever being approached by someone with a clipboard asking you to answer a few questions, or being called up, or written to, at home asking you to take part in some survey research, well, you’re a pretty unusual person. But can surveys really tell us anything useful?
Okay, suppose I want to know how many people in the UK enjoy eating mangos. I could ask everyone, but that would take a very long time, and in any case it’s not necessary. If I make some stew, and I want to know how it tastes, I don’t have to eat the whole saucepanful to find out. I taste it - that is, I take a sample. For this to work, however, I have to do it properly. It would be no good just taking a spoonful from the top, without stirring first. I’d get the herbs that had floated to the top, and miss the onions that had burnt onto the bottom. I want my spoonful to be representative of the whole panful, so I stir it up first. It would also be no good to eat a salty biscuit and taste the stew straight afterwards without rinsing my mouth out. The salt in the biscuit would bias my perception of the taste of the stew.
To answer my mango question, then, I don’t have to ask all 60 million people in Britain. I just have to ask some of them. But my sample of people has to be representative of the whole UK population, in terms of mango-eating habits at least. And I have to ask them about mangos in an unbiased way — it would be no good saying “What do you think of these delicious and health-giving fruits?”, for instance.
The art of carrying out a good survey is a complicated one, and involves compromises and trade-offs as well as scientific and statistical principles. For instance, the best approach to choosing a sample of people for a survey is usually to choose them at random. (In a way, this is the statistical equivalent of stirring the stew before tasting it.) But there are many ways to do that. In my mango survey, I could choose a random sample of people from right across the UK. But they would be spread many miles apart from one another, and in going round the country to interview them, I’d spend most of the time travelling. It would be better if I could group my interviews together into a few towns, and then interview several people in each town. Other things being equal, that would make my results less accurate, because the towns I pick might not be representative, even if I choose them at random. On the other hand, by grouping my interviews into towns, I could reduce my travelling costs, so I could afford to do a lot more interviews, and the gain in accuracy from these extra interviews could well outweigh the loss in accuracy from clustering the interviews in towns.
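The trade-off just described - accuracy lost per interview by clustering, versus the extra interviews the saved travel money buys - can be sketched in a toy simulation. Everything here is invented for illustration: the fifty made-up towns, their mango-liking rates, and the assumption that one travel budget buys either 100 scattered interviews or 5 towns with 100 interviews each.

```python
import random

random.seed(42)

# Fifty hypothetical towns; in each, a different (invented) share of
# residents who like mangos, somewhere between 30% and 70%.
towns = [0.3 + 0.4 * random.random() for _ in range(50)]
true_rate = sum(towns) / len(towns)  # what a perfect census would find

def simple_random_sample(n):
    """Interview n people scattered across the whole country: each
    respondent comes from a town picked uniformly at random (this
    assumes, for simplicity, that all towns are the same size)."""
    hits = sum(random.random() < random.choice(towns) for _ in range(n))
    return hits / n

def cluster_sample(n_towns, per_town):
    """Pick a few towns at random, then interview several people in each."""
    chosen = random.sample(towns, n_towns)
    hits = sum(random.random() < rate
               for rate in chosen for _ in range(per_town))
    return hits / (n_towns * per_town)

def mean_sq_error(estimator, trials=2000):
    """Average squared distance of the estimate from the true rate."""
    return sum((estimator() - true_rate) ** 2 for _ in range(trials)) / trials

# Suppose the budget buys EITHER 100 scattered interviews OR
# 5 towns x 100 interviews each (clustering is cheaper per interview).
mse_simple = mean_sq_error(lambda: simple_random_sample(100))
mse_cluster = mean_sq_error(lambda: cluster_sample(5, 100))
```

Which design comes out ahead depends on how much the towns differ from one another: the more varied the towns, the more the clustered sample is hostage to its five picks, and the more those 400 extra interviews have to make up. That judgement, under a real budget, is exactly the compromise the article describes.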
There are also compromises in the way that the questions are asked. In an earlier More or Less programme, you might have heard an item on the UK Government’s International Passenger Survey (IPS). The IPS uses interviewers to collect data from travellers entering or leaving the UK, on things like travel plans, nationality, and amounts spent on accommodation. These data provide information on migration to and from the UK, economic information for the Government, useful information for the travel industry, and much more besides. The programme followed a team of IPS interviewers working on a cross-Channel ferry. The interviewers choose passengers on a particular random basis as the passengers board the ship. But the interviews do not take place at the moment these passengers are chosen; instead the interviewers note details of their clothing and so on, and then try to find them and interview them later, during the voyage. It isn’t always straightforward to find them again.
In a way this all sounds slightly chaotic and messy, and indeed the IPS interviewers don’t always find their chosen interviewees. But the question to ask yourself is not, “Is this a perfect way to collect the data?”, but “Is this the best way to collect the data, given all the constraints and restrictions involved?”
The interviewers could avoid having to go back and find people again if they interviewed them as they came onto the ship - at the moment they are chosen for the sample.
But that would disrupt the flow of passengers onto the ship, and possibly hold up the journey for everyone. It would annoy people who are just coming on board and are probably keen to find a place to sit or a cup of tea, and annoyed people are likely to turn down a request for an interview.
The IPS team could just give out survey questionnaires for the passengers to fill in, and hope that enough of them would be returned later.
But again, many people might not bother. And a survey of international travellers raises particular issues of language, which could cause problems for a self-completed paper questionnaire like this.
Or the sample of people could be chosen during the voyage, rather than when people are coming aboard.
The trouble with this is that some people would be much easier for the interviewers to come across during the voyage than others, because they are sitting in easily accessible parts of the ship. The interviewers would be more likely to select these easy-to-find people for interview, and such a sample would not be representative of the whole group of passengers. When passengers come on board, they all have to enter through a limited set of doors, so everyone can be sampled on an equal basis.
So, while the IPS method isn’t ideal, it may well be a very good practicable method of gathering these data. IPS interviewers on sea crossings obtain usable information from over 85% of the travellers they sample - a much better response rate than most surveys achieve.
So, in a survey, you have to keep a lot of balls in the air. You have to ask enough people to take part, and you have to make sure that not too many of them turn you down. You have to make sure the people you ask are representative enough. You have to ask questions that don’t bias the answers. And there’s a lot more to take into account that I haven’t had space to mention. The general idea might be like tasting a stew, but the whole thing is rather more complicated. With all this going on, no survey is going to give you perfect answers. The skill in conducting a good survey lies in making sure that the results are as good as they can be.