The question of whether uniform national swing (UNS) calculations are a sensible way of turning national opinion poll vote figures into seat estimates has been much debated in recent months. So how did UNS do this time round?
Here is how the May 2010 result compares with a UNS projection based on the actual vote changes which occurred between 2005 (notional results) and this time:
Conservatives: 305 seats*. UNS prediction: 291 (-14)
Labour: 258 seats. UNS: 266 (+8)
Liberal Democrats: 57 seats. UNS: 62 (+5)
* Excluding Thirsk & Malton from calculations
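The mechanics behind a projection like the one above are simple enough to sketch in a few lines. This is a minimal illustration, not the exact calculation used for the figures quoted: the constituency shares and the `uns_projection` helper below are hypothetical, chosen purely to show the method of adding each party’s national change in vote share to its previous share in every seat and re-counting the winners.

```python
# Sketch of a uniform national swing (UNS) projection.
# All constituency figures below are hypothetical, for illustration only.

def uns_projection(constituencies, national_swing):
    """Apply the same national change in vote share to every seat
    and count the projected winners."""
    seats = {}
    for prev_shares in constituencies:
        projected = {party: share + national_swing.get(party, 0.0)
                     for party, share in prev_shares.items()}
        winner = max(projected, key=projected.get)
        seats[winner] = seats.get(winner, 0) + 1
    return seats

# Two hypothetical seats (previous vote shares, %):
constituencies = [
    {"Con": 38.0, "Lab": 40.0, "LD": 18.0},  # Labour-held marginal
    {"Con": 30.0, "Lab": 45.0, "LD": 20.0},  # safer Labour seat
]
# Hypothetical national change since the last election (percentage points):
national_swing = {"Con": 4.0, "Lab": -6.0, "LD": 1.0}

print(uns_projection(constituencies, national_swing))
# The marginal flips to the Conservatives; the safer seat stays Labour.
```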
In a close election the errors between UNS and the actual result carry political significance. Under UNS the vote shares the parties secured would have produced a Parliament where Labour and the Liberal Democrats together held 328 seats rather than 315. That 13-seat difference could have made the hung Parliament negotiations very different.
That said, the overall record of UNS is far better than you might guess from some of the comments made about it in the run-up to polling day. UNS didn’t turn in a perfect prediction but it got pretty close. The biggest error – on the Conservative Party’s seats – was 14 seats. After all the talk about Ashcroft money, differing performances in the marginals and the like, that is more a mouse than a mountain.
Whilst it’s certainly true that UNS-based predictions are sometimes quoted as if they are certain to be correct – and that’s wrong, they should carry a health warning – you’re probably more likely to have been misled by people saying, “Oh, the result will be nothing like UNS because of this long set of reasons…” than by someone relying on UNS.
UNS also had a pretty decent record in both 2001 and 2005. Prior to this election it was a fair question to ask whether a change of government would see, as happened in 1997, uniform national swing projections turn in a much less accurate prediction. As it was, that didn’t happen and it is UNS’s poor performance in 1997 that looks to be the exception:
2005 general election
Conservatives: 197 seats. UNS prediction: 184 (-13)
Labour: 355 seats. UNS: 369 (+14)
Liberal Democrat: 62 seats. UNS: 62 (+/-0)
2001 general election
Conservatives: 166 seats. UNS prediction: 181 (+15)
Labour: 412 seats. UNS: 402 (-10)
Liberal Democrat: 52 seats. UNS: 47 (-5)
1997 general election
Conservatives: 165 seats. UNS prediction: 207 (+42)
Labour: 418 seats. UNS: 395 (-23)
Liberal Democrat: 46 seats. UNS: 28 (-18)
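The signed errors quoted in the tables above (UNS prediction minus actual seats) can be checked directly from the figures given in this post; doing so also makes 1997’s outlier status obvious. The data below is copied from the tables, and the structure of the dictionary is just one convenient way to hold it:

```python
# (actual seats, UNS prediction) per party, per election, from the tables above.
elections = {
    "2010": {"Con": (305, 291), "Lab": (258, 266), "LD": (57, 62)},
    "2005": {"Con": (197, 184), "Lab": (355, 369), "LD": (62, 62)},
    "2001": {"Con": (166, 181), "Lab": (412, 402), "LD": (52, 47)},
    "1997": {"Con": (165, 207), "Lab": (418, 395), "LD": (46, 28)},
}

for year, parties in elections.items():
    errors = {p: uns - actual for p, (actual, uns) in parties.items()}
    worst = max(abs(e) for e in errors.values())
    print(year, errors, "largest error:", worst)
# Every year's largest error is 15 seats or fewer, except 1997's 42.
```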
8 Comments
True. UNS does work if the LD vote share stays close to the same, which it did.
But if the LD vote share had gone up substantially, as was being predicted by lots of people up until the last few days, UNS would’ve broken horribly. Our vote collapsed down to pretty close to the same as last time, and I’m definitely in the “people changed their minds/it was the undecideds wot lost it” school on that one.
If our vote share had been 28%+, UNS would’ve failed horribly. It wasn’t, so it didn’t…
Like many others, I was sceptical of the likelihood that UNS could predict the election result accurately. However, much of that scepticism is based on the presumption that we were going to see a Lib Dem surge of some degree (as pretty much everyone expected until – and indeed after – the exit poll on election night). If we do see a seismic change of party support at a future election then I suspect a “gearing” mechanism of seat shifts will lead to a big failure for UNS. We can already see that the Tories outperformed UNS in their marginal gains and I suspect that effect would have been magnified yet more for the Lib Dems if our poll ratings had translated into actual votes.
“You’re probably more likely to have been misled by people saying, “Oh, the result will be nothing like UNS because of this long set of reasons…” than by someone relying on UNS.”
Rubbish. Strictly true, but nonetheless vacuous as a point. The only reason it holds is that everyone was misled by the polling figures prior to the exit poll. Of course a cruder system which minimised the impact of changes in our support (which were the main ‘mistake’ in the post-debate polling figures) came out of that looking better. But that’s not an argument for UNS; it’s an argument for something we already knew going in: the post-debate polling turned out to be wrong. Probably because the polling made hung Parliament considerations come to the fore, votes were pushed back towards the Tories and Labour, and the disparity between the amount of support we’d won from Labour and the amount we’d won from the Tories (the most significant difference between Nate Silver’s model and UNS) became a non-issue.
A fair comparison would be: how does UNS compare to, say, 538’s model on the number of seats predicted by the exit poll versus the number we actually got? But of course 538’s model relies on detailed data for its calibration to work correctly, so given that the earlier polls badly misrepresented voter intentions on the day, you’d need to calibrate against what the exit polls showed for particular seats. And that’s tantamount to saying ‘when you use a better model based on election results to predict the election result, you get a result closer to the election result’. Which is trivially true.
But to read this as some kind of vindication of UNS is exactly like the following: you are trying to get from point A to point B. Two people draw you two different maps, one detailed, one vague. The detailed map is drawn by someone who, as it happens, is mistaken about where you want to go, because you didn’t describe it correctly. The vague one is so vague it could fit either destination. Once you finally reach your destination, you conclude that the vague map is superior because the more accurate map actively steered you in the wrong direction. The correct conclusion, of course, is that you should have described your destination correctly. Just as the correct conclusion in this case is to blame the polls, not the superior psephological models.
UNS is to result prediction models as FPTP is to election systems; transparently inferior.
Uniform national swing works as a general predictor of the number of seats when the local factors which vary it are themselves distributed randomly.
For example, if we achieve a higher swing in seats where we are close, we will win more than predicted. But if there is no correlation between a seat’s existing vote share and the size of the swing there, we will win about as many as predicted. Only, they won’t all be the ones that were predicted.
So, simply because it works OK for the total number of seats does not mean it works well for an individual seat, or that local effort counts for little.
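This point can be demonstrated with a toy two-party example (all numbers hypothetical): apply the same average swing twice, once uniformly and once varying seat by seat with zero mean. The seat totals come out the same, but the individual seats won do not:

```python
# Toy illustration: 40 hypothetical seats where Labour's lead over the
# Conservatives runs from -20 to +19 points. The average swing to the
# Conservatives is 5 points in both scenarios; in the second it varies
# seat by seat (+3 or -3) with zero mean.

leads = list(range(-20, 20))         # Labour's lead over Con, in points
uniform_swing = [5] * 40             # every seat swings 5 points
varied_swing = [5 + (3 if i % 2 == 0 else -3) for i in range(40)]  # mean 5

def lab_holds(leads, swings):
    """Indices of seats Labour retains after the swing is applied."""
    return {i for i, (lead, s) in enumerate(zip(leads, swings)) if lead - s > 0}

held_uniform = lab_holds(leads, uniform_swing)
held_varied = lab_holds(leads, varied_swing)

print(len(held_uniform), len(held_varied))  # 14 14 -> same total either way
print(held_uniform == held_varied)          # False -> but different seats
```

The randomly distributed local variation cancels out in the total, which is exactly why UNS can get the national seat count right while saying little about any single constituency.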
Duncan: one of the problems with more complicated prediction models (such as Nate Silver’s) is that they require data which isn’t available outside of election time. So if you’ve got opinion polls at other times and want some idea of what they might mean for a general election, the question of how useful UNS is matters. That it mostly (but not always) comes out as a pretty good general indicator is useful to know. As for your comment that my point was “Rubbish”: well, it’s also true, as you go on to say 🙂 I think it’s far from vacuous, for that very outside-of-election-time reason.
The short way to say it: UNS is based on the assumption that the next election will be pretty much the same as the last, an assumption which has held for decades. Even the occasional “landslide” elections have involved fairly small changes, and those changes have followed patterns consistent with other elections. As long as this continues to hold, UNS will be pretty accurate. In any election where it changes, UNS will be junk.
Switching to a new voting system is likely to be one of those changes that invalidates UNS for that year.
Of course there are always local factors, but across the country they cancel each other out.
It is noticeable that the Tories did better than UNS predicted.
One possible reason for this is that they were better funded and could make better use of their resources in getting their vote out.
If so, this is something we should be very concerned about. Technology-wise, they are probably a long way ahead of us.