Lord Ashcroft KCMG PC is an international businessman, philanthropist, author and pollster. For more information on his work, visit lordashcroft.com
How do you measure the accuracy of an opinion poll?
Obviously, you compare its findings to the result of the election it was asking about. The closer the numbers to each party’s actual vote share, the more accurate the poll.
This is all very well for surveys carried out a few days before an election, when voters’ minds are made up and there is little time for events to intervene. But what about a poll conducted months or even years before anyone casts a real vote: how can we judge whether it is “accurate”?
Most pollsters ask people which party they would choose if an election were held tomorrow. Let’s say the survey takes place on a Wednesday, with the supposed election on the Thursday. By the usual standard, we should – on the Friday – take Wednesday’s survey and line it up against the outcome of the election that didn’t happen, and… you see the problem?
The Schrödinger’s Cat aspect of voting intention polls is, ironically, heightened by a device many researchers use to try to make their surveys more accurate. This is the “turnout question” – asking respondents how likely they would be to get out and vote in an imminent election. But since that contest is, by definition, hypothetical, it amounts to asking whether they will turn out and vote in the election that won’t be happening tomorrow. Psephology meets metaphysics.
My own approach has been to scrap the pretend election and ask how likely people think they are, on a scale from zero to one hundred, to end up voting for each party at the next general election when it comes. This has the advantage of showing the intensity of each party’s support, and the other options each party’s potential voters are considering. It is also, arguably, more consistent with the way people really look at a decision that is still a long way off.
I have quantified party support by counting those who put their likelihood of voting for one party at fifty or more out of a hundred – effectively, those currently saying they are more likely than not to vote for a particular party next time round. Last October, this method put Reform nine points ahead of its nearest rivals; in March, this had narrowed to a dead heat between Reform, the Conservatives and the Greens.
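For the technically minded, the counting rule is simple enough to sketch in a few lines of code. The Python below is purely illustrative – the 50-point threshold follows the description above, but the party names, scores and data structure are invented for the example and are not my actual data or processing.

```python
# A minimal sketch of the tallying rule described above. The respondents,
# scores and party names are invented for illustration only.
from collections import Counter

THRESHOLD = 50  # "more likely than not" to vote for the party

# Each respondent gives a 0-100 likelihood score for every party.
responses = [
    {"Reform": 80, "Conservative": 30, "Labour": 10, "Green": 5},
    {"Reform": 40, "Conservative": 60, "Labour": 55, "Green": 0},
    {"Reform": 50, "Conservative": 50, "Labour": 20, "Green": 70},
]

support = Counter()
for person in responses:
    for party, likelihood in person.items():
        if likelihood >= THRESHOLD:
            support[party] += 1  # count everyone at fifty or more

for party, count in support.most_common():
    share = count / len(responses)
    print(f"{party}: {share:.0%} more likely than not to vote this way")
```

Note that a respondent who puts two parties at fifty or more counts towards both – which is precisely the point: the measure captures overlapping pools of potential support rather than forcing a single choice.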
This tells a story that is broadly consistent with the trend in other published polls, despite the ferocious protestations of partisan observers. Some who don’t like my findings like to suggest that since I am a Conservative donor I must be artificially inflating the Tories’ standing in my surveys. For the record, I’m not. But the accusation has always baffled me.
What would be the point of producing polls suggesting your party is doing better than it really is? It was to counter such “comfort polling” – a term I coined – that I got into the business in the first place.
In 2004, the Tory high command was putting it about that despite Tony Blair’s national lead, their private polls put them ahead in the marginal seats and on course to win the next election. This sounded so unlikely to me that I commissioned my own research to find out whether it was true. It wasn’t.
As I argued in Smell The Coffee, my analysis of the 2005 campaign, by kidding themselves that they were doing better than they really were, the Conservatives failed to understand how voters saw them, spent precious time and money in constituencies they had no chance of winning, and failed to gain seats they could have taken if resources had been allocated more realistically. Twenty years on, I’m hardly likely to be encouraging Kemi Badenoch to make the same mistake.
Though the accuracy of midterm voting intention polls is by definition unknowable, there is nothing wrong with trying to track the parties’ fortunes in broad terms over time. The problem is that despite occasional admonitions from pollsters themselves, and despite the fact that all polls have margins of error (meaning that each party’s “true” level of support is usually within two or three points either side of the published figure, depending on the number of people interviewed), voting intention polls are not widely understood in this way. Furious debates break out over whether the Conservatives are “really” on 18 per cent or 21 per cent, or whether it is more “accurate” to talk of a two or four or six-point Reform lead. Last month, I saw polling enthusiasts (they really do exist) reporting my supposed “voting intention” results to two decimal places – a ludicrous, angels-on-pinheads exercise in spurious specificity.
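To see where that “two or three points” rule of thumb comes from, here is the textbook margin-of-error calculation, sketched in Python. The sample sizes and published shares are illustrative, not taken from any actual poll.

```python
# Textbook 95% margin of error for a simple random sample,
# in percentage points. Illustrative figures only.
from math import sqrt

def margin_of_error(share: float, n: int, z: float = 1.96) -> float:
    """Margin of error, in points, for a published share (%) and sample size n."""
    p = share / 100
    return z * sqrt(p * (1 - p) / n) * 100

for n in (1000, 2000):
    for share in (20, 50):
        print(f"n={n}, published share {share}%: "
              f"+/- {margin_of_error(share, n):.1f} points")
```

With 1,000 respondents and a published share of 20 per cent, the margin works out at roughly two and a half points either way; with 2,000 respondents, it falls to under two.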
The fact is that these numbers are the least important, the least interesting and the least meaningful part of a political survey. I am interested in what voters think, what matters to them, what motivates them, how they interpret (and whether they even notice) the political events in the headlines. “Voting intention” figures are really a by-product of this more significant material, not the main event. In focus groups, which I consider at least as instructive as polls as a way of discerning the national mood, people will talk animatedly for an hour about whatever is on their mind: migration, tax, food prices, gas bills, waiting lists, crime, welfare, the state of their town centre, their children’s chances of ever buying a house. They will have a wise perspective on recent events, and plenty of shrewd and punchy things to say about the politicians. But ask them at the end how they think they’ll end up voting, and as likely as not they will puff their cheeks and shrug: who knows? “I’ll have to read the manifestos,” they often add.
This is why I frame the question as I do: asking which way people are leaning, rather than how they would vote in a pretend election.
I believe this method meets them where they are in the moment, prompting them to consider the question in a different way from when they are asked to make a definitive choice about a hypothetical event. The answers show the proportions saying they are inclining towards each party at a given moment, and allow us to explore how opinion on other issues varies between each party’s potential backers. But the question is not designed to be a precision tool – it gives a broad indication of political sentiment and how it moves over time.
Sometimes the figures that emerge will look different from those of other pollsters: I am asking a different question, after all. There might also be some results which at first glance look unusual. With three or four parties clustered together, any one of them could pop into an unexpected lead – in which case, statistical quirks and margins of error could be a more likely explanation than a party suddenly taking the nation by storm.
I have nothing but respect for my colleagues in the polling business, all of whom are trying to measure public opinion in good faith and, yes, as accurately as they can. But when it comes to voting intentions, take the numbers with a pinch of salt – whoever asks the question, and however they ask it.