Inside Story

Did late deciders confound the polls?

Predictions of the 2019 election result were way off the mark. But we still don’t know why

Murray Goot 19 September 2019 4665 words

Second thoughts? Voters queued on election day, 18 May 2019. William West/Getty Images


Everyone who believes the polls failed — individually and collectively — at the last election has a theory about why. Perhaps the pollsters had changed the way they found respondents and interviewed them? (Yet every mode — face-to-face interviewing, computer-assisted telephone interviewing via landlines and mobiles, robopolling, and interviewing online — produced more or less the same misleading result.) Perhaps the pollsters weighted their data inadequately? Did they over-sample the better-educated and under-sample people with little interest in politics? Perhaps, lemming-like, they all charged off in the same direction, following one or two wonky polls over the cliff? The list goes on…

But the theory that has got most traction in the post-election polling is one that has teased poll-watchers for longer than almost any of these, and has done so since the advent of pre-election polling in Australia in the 1940s. This is the theory that large discrepancies between what polls “predict” and what voters do can be explained by the existence of a large number of late deciders — voters who don’t really make up their minds until sometime after the last of the opinion polls are taken.

In 2019, if that theory is right, the late deciders would need to have either switched their support to the Coalition after telling the pollsters they intended to vote for another party, or shifted to the Coalition after telling the pollsters that they didn’t know which party to support. It was, after all, the Coalition that the polls underestimated, and Labor that they overestimated. On a weighted average of all the final polls — Essential, Ipsos, Newspoll, Roy Morgan and YouGov Galaxy — the Coalition’s support was 38.7 per cent (though it went on to win 41.4 per cent of the vote) and Labor’s 35.8 per cent (though it secured just 33.3 per cent of the vote). Variation around these figures, poll by poll, wasn’t very marked. Nor was there much to separate the polls on the two-party-preferred vote: every poll underestimated the gap between the Coalition’s and Labor’s two-party-preferred vote by between 2.5 and 3.5 percentage points.
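
As a crude check on the scale of those errors, here is a minimal sketch (in Python, using only the figures just quoted) of the gap between the weighted averages and the actual first-preference results:

```python
# Weighted averages of the final polls versus the election result,
# first preferences (per cent), as quoted in the text above.
final_polls = {"Coalition": 38.7, "Labor": 35.8}
election = {"Coalition": 41.4, "Labor": 33.3}

for party, polled in final_polls.items():
    actual = election[party]
    print(f"{party}: polled {polled}, actual {actual}, error {polled - actual:+.1f} points")
# Coalition: polled 38.7, actual 41.4, error -2.7 points
# Labor: polled 35.8, actual 33.3, error +2.5 points
```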

The most recent, and most widely reported, research to have concluded that late deciders made the difference is a study of voters interviewed a month before the election and reinterviewed a month after it, published recently by the ANU Centre for Social Research and Methods. According to Nicholas Biddle, the author of the study, the group that determined the result comprised those who were “undecided in the lead-up to the election and those who said they were going to vote for the non-major parties but swung to the Coalition.” At the beginning of July, the Australian’s national affairs editor, Simon Benson, also argued that those who were only “leaning” towards Labor ahead of the election had “moved violently away from Labor” once they entered the polling booths and had a pencil in their hands; the “hard” undecided, those who registered in the opinion polls as “don’t knows,” also decided not to vote Labor. At the beginning of June, an Essential poll, conducted shortly after the election, had presented evidence for much the same point.

Over the years, the idea that polls may fail to pick the winner because they stop polling too early had become part of the industry’s stock-in-trade. Especially in the period before 1972, when there was only one pollster, Roy Morgan, the argument had been difficult to refute. By 2019, it was an oldie — but was it also a goodie?

The ANU study: For the Biddle report, two sets of data were collected from the Life in Australia panel, an online poll conducted by the Centre for Social Research and Methods. The first was collected between 8 and 26 April, with half the responses gathered by 11 April, five weeks ahead of the 18 May election; the second between 3 and 17 June, with half gathered by 6 June, three weeks after the election. The analysis was based on the 1844 people who participated in both.

In the first survey, respondents were asked how they intended to vote. Among those who would go on to participate in the June survey, the Coalition led Labor by 3.8 percentage points. In the second survey, respondents were asked how they had voted; among those who had participated in the April survey, the Coalition led Labor by 6.4 percentage points. These figures, based on first preferences, included those who said they didn’t know how they were going to vote (April) and those who didn’t vote (June).

Although Biddle says that the data “on actual voting behaviour and voting intentions” were collected “without recourse to recall,” this is misleading. While the data on voting intentions were collected “without recourse to recall” — this is axiomatic — the same cannot be said for the data on voting behaviour. The validity of the data on voting behaviour, collected well after the election, is wholly dependent on the accuracy of respondents’ recall and their willingness to be open about how they remember voting. It can’t be taken for granted.

Among those who participated in both waves and reported either intending to vote (April) or having voted (June), support shifted. The Coalition’s support increased from 38.0 to 42.2 per cent, Labor’s increased from 34.1 to 35.4 per cent, while the “Other” vote fell from 14.4 to 8.7 per cent. Only the Greens (13.6 per cent in April, 13.7 per cent recalled in June) recorded essentially no shift.

The panel slightly overshot the Coalition’s primary vote at the election (41.4 per cent) and, as the polls had done, also overshot Labor’s (33.3 per cent). More importantly, it overshot the Greens (10.4 per cent) and undershot the vote for Other (14.9 per cent), and did so by sizeable margins. It overestimated the Greens by 3.3 percentage points, or about one-third, and underestimated Other by 6.2 percentage points, or more than two-fifths. These are errors the polls did not make. A problem with “Australia’s first and only probability-based panel,” as the ANU study is billed, or a problem with its respondents’ recall of how they really voted? None of these figures — or the comparisons with the polls — appears in Biddle’s report; I’ve derived them from the report’s Table 3. Of course, the gross movement of support between parties was much greater than these net figures indicate.
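
For readers who want to check the derivation, a minimal sketch of the comparison, using only the figures quoted above (the panel’s recalled June vote against the actual first-preference results):

```python
# Panel recall (June) versus the election result, first preferences (per cent),
# as quoted above; the panel figures are derived from Biddle's Table 3.
panel_recall = {"Coalition": 42.2, "Labor": 35.4, "Greens": 13.7, "Other": 8.7}
election = {"Coalition": 41.4, "Labor": 33.3, "Greens": 10.4, "Other": 14.9}

for party, recalled in panel_recall.items():
    actual = election[party]
    error = recalled - actual
    print(f"{party}: panel {recalled}, actual {actual}, "
          f"error {error:+.1f} points ({error / actual:+.0%} of the actual vote)")
# Coalition +0.8 (+2%), Labor +2.1 (+6%), Greens +3.3 (+32%), Other -6.2 (-42%)
```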

From his data, Biddle draws three conclusions: that “voter volatility… appears to have been a key determinant of why the election result was different from that predicted by the polls”; that part of the “swing towards the Coalition during the election campaign” came “from those who had intended to vote for minor parties,” a group from which he excludes the Greens; and that the swing also came from those “who did not know who they would vote for.”

None of these inferences necessarily follows from the data. Indeed, some are plainly wrong. First, voter volatility only comes into the picture on the assumption that the polls were accurate at the time they were taken. And before settling on “volatility” to explain why they didn’t work as predictions, one needs to judge that against competing explanations. Nothing in the study’s findings discounts the possibility that the public polls — which varied remarkably little during the campaign, hence the suspicions of “herding” — were plagued by problems of the sort he notes in relation to the 2015 polls in Britain (too many Labour voters in the pollsters’ samples) and the 2016 polls in the United States (inadequate weighting for education, in particular), alternative explanations he never seriously considers.

Second, while positing a last-minute switch to the Coalition among those who had intended to vote for the minor parties might work with the data from Biddle’s panel, it cannot explain the problem with the polls. Had its vote swung to the Coalition, the minor-party vote would have finished up being a good deal smaller than that estimated by the polls. But at 25.3 per cent, minor-party support turned out almost exactly as the polls expected (25.7 per cent, on a weighted average). In its estimate of the minor-party vote — the Greens vote, the Other vote, or both — the ANU panel, as we have seen, turned out to be less accurate (21.4 per cent).


Third, in the absence of a swing from minor-party voters to the Coalition, a last-minute swing by those in the panel “who did not know who they would vote for” can’t explain the result. That’s the case even if the swing among panel members, reported by Biddle, occurred entirely on the day of the election and not at some earlier time between April and 18 May, the only timeline the data allow. In the final pre-election polls, those classified as “don’t know” — 5.7 per cent on the weighted average — would have had to split about 4–1 in favour of the Coalition over Labor on election day in order to boost the Coalition’s vote share to something close to 41.4 per cent and reduce Labor’s vote share to something close to 33.3 per cent (unavoidably rendering the polls’ estimate of the minor-party and independent vote slightly less accurate). In the ANU panel, those who had registered as “don’t know” in April recalled dividing 42 per cent (Coalition), 21 per cent (Labor) and 36 per cent (Other) in May. That is certainly a lopsided result (2–1 in favour of the Coalition over Labor) but nowhere near as lopsided as would be required to increase the gap between the Coalition and Labor in the polls (roughly three percentage points) to eight percentage points, the gap between the Coalition and Labor at the election.
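
To see how far such a reallocation can take the published figures, here is a minimal back-of-envelope sketch. It assumes, as the polls effectively did when they “removed” the “don’t knows,” that the weighted averages describe decided voters only, and it reallocates the 5.7 per cent of “don’t knows” according to the 42–21 (Coalition–Labor) split recorded by the ANU panel:

```python
# Back-of-envelope reallocation of the "don't knows" in the final polls.
# Assumption: the published weighted averages (Coalition 38.7, Labor 35.8)
# describe decided voters only, with 5.7 per cent of respondents undecided.
decided = {"Coalition": 38.7, "Labor": 35.8}
dk_share = 5.7  # per cent of respondents classified as "don't know"

# The 42-21 (Coalition-Labor) split the ANU panel's April "don't knows"
# recalled in June; the remaining 36 per cent went to "Other".
dk_split = {"Coalition": 42.0, "Labor": 21.0}

implied = {
    party: decided[party] * (100 - dk_share) / 100 + dk_split[party] * dk_share / 100
    for party in decided
}
print({party: round(share, 1) for party, share in implied.items()})
# {'Coalition': 38.9, 'Labor': 35.0}
print(f"gap: {implied['Coalition'] - implied['Labor']:.1f} points")
# gap: 3.9 points -- wider than the polls' 2.9, nowhere near the election's 8.1
```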

The C|T research: Biddle wasn’t the first to argue that there was a late swing — a swing that the polls couldn’t help but miss — and to produce new data that purported to show it. Already, the Australian’s Simon Benson had publicised another piece of research — “the most comprehensive and intelligent analysis so far” — said to show the effect on the election of “[hard] undecided and ‘soft’” voters who had swung late.

This research was conducted by “a private research firm” (the C|T Group, in fact — the political consultancy that polled for the Liberal Party) and “provided to senior Liberals and shown to the Weekend Australian.” Its findings — released without the knowledge or endorsement of any senior member in the Group — were said to show that: (a) ahead of the election, “many Labor voters” had been “only leaning towards Labor” — having been classified initially as “don’t knows,” nominating Labor, presumably, only after being pressed about the party for which they were likely to vote (in the jargon of the trade, after being asked “a leaner”); (b) “on the day of the election,” these Labor “leaners” — plus the “‘hard’ undecided” who remained “don’t knows” after the “leaner” (“about 5 per cent”) — “couldn’t bring themselves to back Labor” and “largely went with a minor party”; and (c) via the minor parties, the preferences of both “came over to the Coalition.” Benson quotes the “research briefing” as saying, “Rather than Newspoll results suggesting Newspoll ‘got it wrong,’ a more informed interpretation is that the ‘“hard” undecided’ voters (those still undecided on May 17) did not support Labor on election day.”

But the story doesn’t survive the most cursory of checks. If the “soft” Labor voters — the “leaners” — and the “don’t knows” (the “hard” undecided) moved to the minor parties on election day, Newspoll’s estimate of the vote for the minor parties must have been an underestimate. In fact, Newspoll’s estimate of the vote for “others” was an overestimate: “others” in the final Newspoll were 16 per cent; at the election, they accounted for 14.9 per cent of the vote. (I exclude the Greens from what the analysis calls “others,” a group otherwise made up almost entirely of supporters of Pauline Hanson’s One Nation or the United Australia Party, simply because it makes little sense to assume that many “softly” committed Labor supporters switched to the Greens and then preferenced the Coalition.) Newspoll didn’t underestimate the vote for “others,” and neither did the final polls from Essential, Galaxy, Ipsos and Morgan.

“Labor and Shorten,” Benson says, may “have made it very difficult for their soft supporter base to stick with them” — and, he might have added, difficult for the “hard” undecided to swing to them. But not all the polls overestimated Labor’s vote, as Newspoll did. Ipsos, which provided a better estimate than Newspoll of the Coalition’s first-preference lead over Labor, estimated Labor’s support at 33 per cent — almost exactly the proportion that voted Labor. Roy Morgan was also closer than Newspoll in estimating Labor’s first-preference vote; so, too, was Essential.

Newspoll estimated the “don’t knows” (the “hard” undecided) at 4 per cent — not 5 per cent, the estimate in the post-election private polling. By ignoring the “don’t knows,” as all the polls did, it was effectively assuming that they would split in much the same way as those who had nominated a party: 38 (Coalition), 37 (Labor), 9 (Greens) and 16 (Other). If we assume, for the sake of the argument, that three times as many of the “don’t knows” voted Coalition as voted Labor, then a more accurate Newspoll would have been one in which the “don’t knows” were split 60–20–10–10. Had that happened, Newspoll would have estimated the Coalition’s first-preference vote at 39 per cent, Labor’s at 36 per cent (allowing for rounding) — a Coalition lead of three percentage points compared with the lead of one percentage point it actually reported. Since the Coalition’s winning margin was 8.1 percentage points, an estimate of three percentage points would have been better than an estimate of one percentage point, but not much better. On the other hand, Newspoll’s estimate of the Coalition’s share of the two-party-preferred vote would have been 50.4 per cent (50.5 per cent if it stuck to 0.5 per cent as its smallest unit), not 48.5 per cent. Compared with the actual tally of 51.5 per cent, this estimate of the two-party-preferred would have been considerably better.
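
The arithmetic of that redistribution can be checked with a minimal sketch, using only the Newspoll figures quoted above:

```python
# Newspoll's published final primaries (don't knows excluded) and the
# hypothetical 60-20-10-10 split of the 4 per cent of "don't knows".
published = {"Coalition": 38.0, "Labor": 37.0, "Greens": 9.0, "Other": 16.0}
dk_share = 4.0
dk_split = {"Coalition": 60.0, "Labor": 20.0, "Greens": 10.0, "Other": 10.0}

adjusted = {
    party: round(published[party] * (100 - dk_share) / 100
                 + dk_split[party] * dk_share / 100, 1)
    for party in published
}
print(adjusted)
# {'Coalition': 38.9, 'Labor': 36.3, 'Greens': 9.0, 'Other': 15.8}
# i.e. roughly 39-36: the three-point Coalition lead described above.
```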

Newspoll was at liberty to adjust its figures along these lines. It didn’t, presumably because it wasn’t persuaded that there were good reasons to do so. But if Newspoll’s figures are to be rejigged, why not those of the polls that don’t appear in the Australian? Most of them had a higher proportion of “don’t knows” to redistribute (5 per cent for Morgan; 7 per cent for Ipsos and YouGov Galaxy), and all (except Morgan on the two-party-preferred) had results as close, if not closer, to the mark than Newspoll’s. Rejigged, their figures would have improved even more substantially than Newspoll’s. For some reason, the research Benson cites appears not to have noticed this; and if Benson noticed it, he didn’t draw it to the attention of his readers.

Like all the polls, Newspoll got the election wrong. But Newspoll’s performance, on some measures, was worse than that of the other polls: both its overestimate of the Labor vote and its underestimate of the Coalition vote were greater than any other poll’s. Having stayed in the field later than all the others — it ran its final poll from Tuesday through to Friday — and having boosted its sample size to 3038, nearly twice the number used by anyone else, Newspoll had given itself the best possible chance of picking up the late swing to the Coalition for which, no doubt, both the Australian and its readers were hoping. But a late swing to the Coalition is something it did not pick up. Its final poll detected what Benson described, literally though ludicrously, as a “half-point break” not to the Coalition but “towards Labor.”

The fact that C|T’s findings surfaced in the Weekend Australian is no great surprise. Where better for the C|T researcher to drop the findings than into the hands of the Australian, the newspaper that commissioned Newspoll, hence the newspaper where the research was most likely to get a run and least likely to be critically examined? The findings, as Benson wrote, offered “another explanation” for why Labor hadn’t done as well as the polls expected. And they seemed to get Newspoll off the hook: “Even polling on Friday night would not have picked up what was going to happen.”

The Essential poll: The C|T Group wasn’t the first to produce research purporting to show a late swing either. That honour belongs to Essential Media — a firm that conducted polls on its own account and then placed them with the Guardian.

Immediately after the election, Essential’s Peter Lewis wondered whether the polls had erred by simply “removing” the “don’t knows” — what the C|T research would call the “hard” undecided — from the poll; “removing” them, as all the pollsters had done, meant, in effect, assuming that they would split along the same lines as the respondents who said that they did know how they were going to vote. Essential had not only “removed” that 8 per cent of the sample categorised as “undecided” — a figure Lewis revealed for the first time — which was “nearly double the number from previous elections,” it had also given insufficient thought to another 18 per cent of respondents who “told us they hadn’t been paying a lot of attention to the campaign.” As a result, Lewis conceded, the company may have missed the “possibility” that “the most disengaged 10 per cent” — why 10 per cent? — had “turned up on election day and voted overwhelmingly for the Coalition.”

To test this theory, Essential conducted another poll. According to this poll (or at least the Guardian’s reporting of it — Essential has not responded to a request for a copy), the result “underscore[d] the fact that undecided voters broke the Coalition’s way in the final weeks of the campaign, with 40 per cent of people who made up their minds in the closing week backing the Coalition, compared to 31 per cent for Labor.” Of those who had been “undecided” on election day — 11 per cent of the post-election sample — “38 per cent broke Morrison’s way and 27 per cent Bill Shorten’s way.” From this, the Guardian inferred that the Coalition did especially well from late deciders. Though the story didn’t say it, Lewis’s theory, it seemed, had been confirmed.

But had it? One problem with the analysis is that those who made up their mind either during the last week or on election day weren’t necessarily those categorised as “don’t know” in Essential’s final poll; that group may have included respondents who indicated a party preference but hadn’t made up their minds about whether that was the way they would actually vote. Another problem is that the report doesn’t say what proportion of respondents made up their minds in the final week. And we are not told what proportion was changing from one party (which?) to another party (which?) rather than simply confirming an earlier decision to vote for one of the parties and not another. Without knowing any of this, there is no way of estimating the impact a 40–31 split among the “undecided” in the final week would have had on the overall voting-intention figures.

The figures for election day itself give us more to work with; but they don’t do much to confirm the thesis. To see what difference a 38–27 split would have made (38–27–35, allowing for “others”) requires us to compare it with the 40–36–24 split assumed when the “undecided” were taken to divide in much the same way as the rest of the sample. Since the proportion of “don’t knows” in Essential’s final pre-election poll was 8 per cent (not 11 per cent), the new ratios imply that 3 per cent of the whole sample (unchanged) would have voted for the Coalition, 2 per cent (rather than 3 per cent) would have voted Labor, and 3 per cent (instead of 2 per cent) would have voted for some other party.

On these figures, had the final Essential poll been conducted at the same time as the final Newspoll, the Coalition’s share of the distribution would have remained unchanged (about 40 per cent) and Labor’s would have come down from 36 to 35 per cent. In terms of the two-party-preferred vote, the Coalition’s share would have risen from 48.5 to 48.8 per cent (49 per cent, if we round to the nearest 0.5 per cent) — the two-party estimate produced by YouGov Galaxy and Ipsos. For Essential, this would have been a better set of figures — but no cigar.
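
A minimal sketch of that recalculation, using the Essential figures quoted above (the “Other” line here bundles the Greens with the rest, since the published 40–36–24 split doesn’t separate them):

```python
# Essential's published final figures (Coalition-Labor-Other), with the 8 per cent
# of "don't knows" reallocated according to the post-election 38-27-35 split.
published = {"Coalition": 40.0, "Labor": 36.0, "Other": 24.0}
dk_share = 8.0
dk_split = {"Coalition": 38.0, "Labor": 27.0, "Other": 35.0}

adjusted = {
    party: round(published[party] * (100 - dk_share) / 100
                 + dk_split[party] * dk_share / 100, 1)
    for party in published
}
print(adjusted)
# {'Coalition': 39.8, 'Labor': 35.3, 'Other': 24.9}
# i.e. the Coalition stays at about 40 per cent and Labor comes down to about 35.
```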

Evidence, post-election, that the “don’t knows” favoured the Coalition — let alone did so by a wide margin — is not unequivocal. A poll conducted by JWS in the two days after the election shows virtually no difference, among those who said they had decided their vote in the last week (including election day), between the size of the Coalition vote (39 per cent) and the size of the Labor vote (37 per cent). Moreover, among those who had voted Greens, 49 per cent also said they had decided late.

The Guardian’s account of Essential’s analysis, like all the arguments for a “late swing,” fails to mention the exit poll conducted by YouGov Galaxy for Channel 9, which purported to show that the government was headed for defeat — and by a similar margin to that predicted by the pre-election polls.


Oldies can be goodies. Given that the polls, using different techniques, missed the mark by roughly the same amount, all pointing in the wrong direction, the idea of a late swing might be especially tempting. But if the decisions of late switchers are to explain why the polls performed poorly, these voters would have to have switched from Labor to the Coalition — not from minor parties to the Coalition. For respondents who said they “didn’t know” how they were going to vote to have made the difference, they would have to have voted for the Coalition — and to have done so overwhelmingly.

Responding to pollsters, respondents can be strategic or they can be sincere. If they are strategic, “late deciders” may not be late deciders at all; for the most part, they will simply be respondents who dissemble. Mark Textor, co-founder of the C|T Group, and Australia’s most internationally experienced pollster, insists that respondents now “game” the polls. “Knowing the results will be published,” he observed after the 2015 British election (referring, presumably, to Britain’s public rather than its private polls), “leads many respondents to give the most dramatic choice that sends the message of the day… they are using their answers to ‘tickle up’ one party or another.”

Misleading pollsters, though not necessarily in this way, has a long history. As early as 1940, the American Institute of Public Opinion used a “secret ballot” to encourage respondents to be honest about how they intended to vote. The Australian Gallup Poll, worried about its under-reporting of the DLP vote, would later introduce a similar device. More recently, it has become fashionable to talk knowingly about the “shy Tory” — respondents who may be perfectly certain that they are going to vote for a party on the right but don’t feel comfortable admitting it to a pollster, not only in a face-to-face interview, apparently, but also in a robopoll or in response to an online questionnaire. If the polls were “gamed” in 2019, the gaming won’t have shown up, and won’t have mattered, provided it affected the Labor and Coalition votes in equal measure. That voters didn’t drift from the minor parties to the Coalition at the last minute is clear from a comparison of the final polls and the election results. The final destination of the “don’t knows,” however, cannot be established in this way.

A “late swing,” including claims about the “don’t knows” dividing disproportionately in favour of one party or another, has long been invoked by pollsters who believe their polls were fine at the time they were taken — only to be overtaken by events. This line of defence can take a pollster only so far. Always, the challenge has been to find a narrative that fills the gap between what the last poll showed and how the electorate actually voted — the last-minute intervention of Archbishop Mannix in the 1958 campaign, for example, or the death of President Kennedy shortly before the 1963 election — and to do so plausibly, if not always persuasively.

Reviewing the performance of the polls this time, Graham Young, a pollster who claims to have “pioneer[ed] the use of the Internet for qualitative and quantitative polling in Australia,” also concluded that “undecided voters, and a late swing to the government, rather than problems with methodologies” explained the polls’ “failure.” His narrative? That Bill Shorten had “pulled up stumps” too early and gone “for a beer on Friday, while Morrison was still working hard, just as people were making their final decision.” Benson’s narrative turned out to be very much the same. The idea that over 350,000 voters responded to the last-minute campaigning — or the absence of it — by switching their vote from Labor to the Coalition stretches belief.

Strikingly, none of those responsible for actually producing the polls sought refuge in Benson’s or Biddle’s or Young’s line of argument — certainly not on its own. For Lewis, “the quality of poll sampling” also merited examination — not only in the online polling of the kind Essential used but also in the modes used by other pollsters. He thought that the problems pollsters encountered around the weighting of their data, especially data gathered from groups reluctant to respond, warranted investigation as well.

John Utting, another pollster, though not one involved in this election, wasn’t buying the last-minute-change argument at all. He thought more structural factors were at work. Had the kinds of problems that had brought the polls undone, he wondered, existed undetected for a long time? “Did polling create a parallel universe where all the activity of the past few years, especially the leadership coups and prime ministerial changes, were based on illusions, phantoms of public opinion that did not exist?”

Not, apparently, in some of the private polling. The last of the Liberal Party’s “track polling” — polling conducted nightly during the campaign using rolling samples in twenty seats — put the Liberals ahead of Labor, on the final Thursday, 43–33, according to a report that quotes Andrew Hirst, the party director, as its source. Exactly which seats — five of them Labor, fifteen Liberal — were polled is not disclosed, and Hirst has declined to name them. Nor are we told how the polling was conducted. But if the polling was as accurate as the story implies, it follows that we don’t have to posit a last-minute swing; that we don’t have to worry about the need to track down “shy Tories” or similar respondents (and non-respondents) who may have gamed the polls; and that we can accept that whatever mode C|T chose, its polling worked. Not only did it work during the campaign; the polling showed that the government’s fortunes had “turned around immediately after the [early April] budget.” This suggests that the problems encountered by those pollsters that used the same mode — and there must have been some that did, given the range of modes deployed during the campaign — could have been overcome had they (or their impecunious paymasters) been both willing and able to invest in their polling properly.

The fact that none of the post-election surveys has succeeded in identifying a last-minute swing of the kind that could explain the polls’ failure suggests that a swing of any great significance simply didn’t happen. While it’s conceivable that evidence of such a swing will still emerge, this line of inquiry seems to have reached a dead end for now. It’s one thing to go back to the past in search of explanations; it’s another thing to be trapped in it. •

Murray Goot is a member of the Association of Market and Social Research Organisations panel inquiring into the 2019 federal election polls. The views expressed here are his own. For comments on an earlier draft of this article, he is indebted to Ian Watson and Peter Browne.