The Worst Survey Ever?

I recently popped into a branch of one of the main high street banks and couldn’t resist picking up the in-branch service questionnaire. These things are relatively common these days, with every organisation from high street shops to the police asking you to rate the quality of the service you received. This one, though, has more issues than most.

Paper survey

The first issue is the poor design and quality of the survey. It has been roughly torn along one edge, uses too many colours (including red and green together, which is a disaster for many red-green colour-blind readers), and has a large organisational chart taken straight from Microsoft Word in the middle of the page. Any literature you produce and give to customers reflects your brand and image, so the poor presentation and finish of this survey reflects badly on the bank.

The first sentence is one of the most obvious examples of a loaded question I’ve seen, one of the cardinal sins of survey and questionnaire design.

The next four questions are similarly problematic. I don’t think it’s clear that they are questions, as the alignment (centre-aligned) is ambiguous, and there are no places to mark an answer, like a simple empty box. The provided answers are also entirely arbitrary: why can you answer ‘excellent/very good’ for queue experience and not for ‘making you feel valued?’ Is the queue experience actually how long you had to queue, or is it the entertainment provided while you queue that respondents are asked to comment on? And, finally for this section, ‘making you feel valued’ is so vague it’s practically meaningless.

Next, the centre of the paper is taken up with a Microsoft Office organisational chart explaining how this particular bank decides to interpret the 0-10 scale. Good practice for questionnaires is simply to ask respondents to rate on a scale of 0-10, with 10 always being the ‘best’ score. People are often asked what they think of something ‘out of 10,’ so most respondents can cope with such scales quite easily (provided there aren’t too many of them). Only when analysing the results do you apply an interpretation to the numbers, with 10-9 commonly seen as ‘excellent,’ 8-7 as ‘very good,’ and so on. So not only is it wrong to ask respondents to reply based on your scale, but it’s also grossly incorrect to say 8-7 is ‘no opinion.’ This scale effectively holds respondents to ransom, telling them to give ‘excellent’ or their opinion won’t count.
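To illustrate the point about applying the interpretation only at the analysis stage (a purely hypothetical sketch; the band boundaries below are my own example, not anything this bank prescribes), the mapping from raw scores to descriptive labels might look something like this:

```python
# Hypothetical example: collect raw 0-10 scores, interpret them at analysis time.
# The band boundaries are illustrative only.
def interpret_score(score: int) -> str:
    if not 0 <= score <= 10:
        raise ValueError("score must be between 0 and 10")
    if score >= 9:
        return "excellent"
    if score >= 7:
        return "very good"
    if score >= 5:
        return "good"
    if score >= 3:
        return "poor"
    return "very poor"

responses = [10, 8, 8, 6, 9, 3]  # made-up raw answers
print([interpret_score(r) for r in responses])
# ['excellent', 'very good', 'very good', 'good', 'excellent', 'poor']
```

The respondent only ever sees the simple 0-10 question; the labels are applied afterwards, by the analyst, and can be revised without reprinting the survey.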

Ignoring the fact that the grammar of the following question is poor, the question itself is reasonable: ‘overall, how was the service at this branch?’ I would query the next question, though; with national high-street chains I think most people are unlikely to recommend a particular branch, but are more likely to recommend the bank as a whole. For example, I am not likely to recommend a specific branch of my bank, but I might say my bank offers a good service. Furthermore, the 0-10 scale doesn’t really apply here. The question is ‘how likely are you to recommend this branch…,’ which would require an answer scale of ‘very likely’ to ‘very unlikely.’

Seeing an open question is good, as it provides a ‘catch all’ opportunity for respondents to make any final comments. But it is highly disheartening to see personal details being asked for. I doubt they’re necessary and, given the unprofessional appearance of the rest of the survey, it seems unlikely the personal data will be properly stored and secured. It’s another basic tenet of market research that respondents are anonymous unless there’s a good reason to identify an individual, and it doesn’t seem necessary here. Is it for a prize draw, or to contact you for further comments? And if it is, why does it not say what it’s for? At best, it will make respondents think twice about completing the form, and at worst it might prevent respondents from completing the survey at all.

So, all in all I think this is probably the worst example of a survey I’ve ever seen. It’s damaging for the image of the individual branch, and for the bank in question. I would probably discard any results from the survey given its serious flaws, and would certainly question any actions that were proposed as a result of the survey. I would also question why the survey was produced; there doesn’t seem to be a strong purpose for asking the questions other than the vague statement to ‘improve customer service.’ How? What aspect or aspects of customer service? How will you measure it? How will you know you’ve improved? Are you even sure there is a problem? I could go on…

The really sad thing is that it would cost a minuscule amount of money – perhaps at most half a day’s work – to have a professional researcher look over the proposed questions and provide critique and feedback that would have prevented such a calamity from occurring in the first place.
