Public polls overwhelmingly show support falling for a constitutionally entrenched Voice to Parliament, and opposition growing. With the gap between Yes and No narrowing — hardly a recent phenomenon, as several charts make clear — Yes campaigners will be increasingly concerned about how to stem the flow both nationally and in the required four states. The more ambitious of the Yes campaigners may also be examining ways of not just stemming the flow but reversing it, with the level of support nationally in the latest Resolve poll having dipped below 50 per cent (a 49–51 split) and support in three of the states also less than half.
A key question for campaigners is whether voters are switching from “undecided” to No or from Yes to No. “What worries the government,” says columnist George Megalogenis, “is the recent narrowing of the gap between committed Yes and No voters, which reflects a greater shift from the undecided to the No column than from Yes to No.” Another columnist, Janet Albrechtsen, calls Noel Pearson’s highly personal attacks on those disagreeing with him a boon to the No side because “more undecided voters might ask themselves ‘would I want this man running the Voice?’ and shift into the No side of the ledger.”
Is the rise in No being driven by “undecided” voters coming off the fence or by less “committed” Yes voters jumping the fence? That could depend on how “undecided” is defined. In talking about the “undecided,” Albrechtsen and Megalogenis may be focusing on quite different sets of voters.
In any poll, the “undecided” are defined not by the poll’s question but by the question’s “choice architecture” — the range of possible responses the pollster offers respondents. On the Voice, the polls have attempted to measure the “undecided” in at least three different ways. Some polls have offered respondents the opportunity to indicate they have no clear opinion; hence, the “Don’t know” option, or something similar. Some polls have encouraged respondents to express an opinion that has more nuance than Yes or No, enjoining them to indicate whether their views are held “strongly” or “not strongly”; views not strongly held, arguably, are another form of indecision. And some polls have presented respondents with a similar range of responses, but with another possible response — “Neither support nor oppose” — in the middle.
These don’t exhaust the range of possibilities. Some polls have asked respondents, directly, how likely they are to change their positions — “somewhat” or “very” likely — which is another way of indicating that while they appear to have made a choice, their decision is not final. Others have asked respondents who have indicated support for Yes or No how likely they are to turn out and vote.
Still other architectures remove the “undecided” option altogether. The most favourable and the least favourable recent polls for the Yes side are both polls of this kind: the latest Essential poll, which has support for Yes a long way ahead of support for No (60–40), and the latest Resolve poll, which has Yes trailing No; each restricted respondents to a Yes or a No.
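To make the taxonomy concrete, here is a minimal sketch of the four architectures as sets of response options (the labels are illustrative only; as the examples below show, the exact wording varies from pollster to pollster):

```python
# Illustrative only: the response architectures discussed in this piece,
# represented as lists of options. Exact wording varies by pollster.
ARCHITECTURES = {
    # Gallup-style standard: two substantive options plus a residual
    "standard": ["Yes", "No", "Don't know"],
    # non-standard: graded options, residual at the end, no middle option
    "non_standard": ["Yes, definitely", "Yes, probably",
                     "No, probably not", "No, definitely not",
                     "Undecided"],
    # Likert item: five-point scale with a neutral middle option
    "likert": ["Strongly support", "Somewhat support",
               "Neither support nor oppose",
               "Somewhat oppose", "Strongly oppose"],
    # binary: the residual option is designed out
    "binary": ["Yes", "No"],
}
```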
Not to distinguish among these response architectures — some of which allow for further variations — is to risk drawing comparisons between polls that can’t readily be compared, even where the questions asked are similar. It is also to risk inferring trends based on polls that offer respondents very different choices: none of the graphs tracking the narrowing of the gap between Yes and No appears to take any account of the various choice architectures involved in generating the numbers. Not to be aware of these different architectures also risks focusing on only one version of what is going on. Thus, the attention paid to the latest forced-choice Resolve poll or the latest Essential poll is disproportionate.
Depending on the chosen architecture, the “undecided” vote can vary enormously — from more than half, when respondents are invited to consider a middle option in a five-point scale, to zero, when being “undecided” is designed out of the choices on offer. In other words, the contribution to the No vote of the “undecided” is a function, in part, of the choice architecture. Nonetheless, across all choice architectures, the boost to the No vote by the “undecided” appears to have been much smaller than the contribution of those who switched from Yes.
Three types of response architecture

In the standard architecture — following the kinds of questions pollster George Gallup promoted in the 1940s as a “sampling referendum” — respondents are presented with two options (Yes/No, Support/Oppose, and so on) plus a third, for those who don’t want to choose either.
On whether to put a Voice into the Constitution, the standard architecture offers various choices: Yes/No/Don’t know (Newspoll’s most recent polling for the Australian; YouGov for the Daily Telegraph); Yes/No/Undecided–Prefer not to say (Freshwater Strategy for the Australian Financial Review); Yes/No/Undecided (Roy Morgan Research); Yes/No/Unsure (Dynata for the Institute of Public Affairs); Support/Oppose/Don’t know–Not sure (Dynata for the Australia Institute); Yes/No/Need more information–Can’t say (JWS).
Three things are worth noting. One is that these polls don’t imagine respondents having no opinion. The third choice they offer allows for respondents who have conflicting opinions that leave them “undecided,” qualified opinions that don’t readily fit a straight Yes or No, or Yes/No opinions that reticent respondents may prefer not to declare (a possibility acknowledged explicitly only by Freshwater).
A second point to note is the near-universal assumption that anyone who ticks Yes/No (Support/Oppose) has decided where they stand, at least for the moment. Those who haven’t decided are captured under a residual term: Undecided, Unsure, Don’t know, Can’t say. If some of those — perhaps most of those — who tick Yes/No (Support/Oppose) are still not entirely decided, this particular architecture provides no way of indicating it.
Third, some pollsters (JWS; Resolve Strategic, below) have offered respondents a residual category that conflates two quite different things: not wanting to align one’s views with Yes/No (Support/Oppose) and having a particular reason (“lack of information”) for not wanting to do so. Not only might those in the residual category place themselves there for reasons other than wanting more information, but respondents who answer Yes/No (Support/Oppose) might welcome more information too.
In Gallup’s day, a response other than Yes/No, Support/Oppose and so on was usually left to respondents to volunteer. Pollsters have always been keen to promote the idea that the public’s views fit whatever categories the pollsters choose; a choice outside these categories is not something they are generally keen to encourage. With online polling, which now accounts for almost all polls, respondents can only be offered a residual option — as they should be — in the form of an explicit alternative.
In what we might call the non-standard architecture, pollsters offer a set of response categories designed to distinguish respondents who hold their views (in favour/against) strongly from those who don’t hold their views strongly — the latter sometimes described as being “softly” in favour or “softly” against.
This is one of the two architectures Resolve has used. Since August 2022, it has asked whether respondents support a Voice in the Constitution and, it seems, offered these alternatives: Yes, definitely; Yes, probably; No, probably not; No, definitely not; Undecided/Not enough information. Since April, though, and possibly earlier, the final alternative has read Undecided/Not enough information/May not vote, a category that mixes up the one thing that necessarily distinguishes these respondents from the others (Undecided in the sense of “none of the above”) with other things that may not (Not enough information and/or May not vote).
Before switching to a standard format at the end of May 2023, Newspoll used a similar non-standard response set — something that has been a hallmark of its issue polling over nearly forty years. On three occasions, Newspoll sought to identify those “strongly in favour,” “partly in favour,” “partly against” and “strongly against,” offering “Don’t know” as a residual category. (In principle, there is no reason why one could not also distinguish a strong “Don’t know” from a weak one, but that is a distinction pollsters never draw.)
In the third architecture — one that resembles the non-standard architecture but needs to be distinguished from it — response options take the form of a five-point scale with “Neither support nor oppose” (or some neutral equivalent) in the middle. These scales are known in the trade as Likert items, after the American survey researcher Rensis Likert. The use of “Neither support nor oppose” distinguishes a Likert item from the non-standard architecture, which has a “don’t know” at the end but no middle option.
SEC Newgate has asked respondents regularly whether they “Strongly support,” “Somewhat support,” “Neither support nor oppose,” “Somewhat oppose,” or “Strongly oppose” the “creation of an Indigenous Voice to Parliament.” The Scanlon Foundation has adopted a similar approach. So, too, has Essential — but only once, with another option, “Unsure,” added at the end of the scale.
Accepting versus squeezing: architectures that make the “undecided” visible

Do the various choice architectures affect the proportion of respondents who are “undecided”? If we compare the “undecided” in the standard architecture (Yes/No/Don’t know) with those who tick “Neither support nor oppose” on the Likert items, the answer may be no. In the standard format, the proportion “undecided” about a constitutionally enshrined Voice averaged as follows: 27 per cent (across three questions) between May and September 2022; 19.5 per cent (two questions) between October 2022 and January 2023; and 22 per cent (five questions) between February and May 2023. Given other variations among questions, these are not very different from the proportions ticking “Neither support nor oppose” in the Likert items: 23 per cent between May and September 2022 (four items); 25 per cent between October 2022 and January 2023 (one item); and 23 per cent between February and May 2023 (two items).
Eliminating the “undecided” — architectures of denial and removal

Pollsters have developed ways not only of reducing the “undecided” vote but of making it disappear. The most extreme of these methods is a binary response architecture that imposes a strict two-way choice: Yes/No, Support/Oppose, and so on. These polls give no other option. If we ask whether the choice architecture affects the proportion that shows up as “undecided,” nowhere is the answer clearer than here.
How many respondents have refused to answer when the question is asked in this way is nowhere disclosed; Essential Research, whose polls are published in the Guardian, says it doesn’t know the number. What happens to respondents who refuse to answer is not something pollsters are keen to disclose either. Resolve, which has used the binary format in relation to the Voice since August 2022, appears not to block these respondents from taking any further part in the poll. But in the Essential poll, respondents who baulk at the binary are removed from the sample.
What the process of deleting respondents does to the representativeness of a sample is something pollsters don’t openly address. In an industry that encourages the belief that sampling error is the only kind of error that matters, this is not entirely surprising.
In estimating support for a constitutional Voice, a number of pollsters have resorted to the binary format either wholly (Essential, Compass, and Painted Dog in Western Australia) or in part (Resolve). Their justification for offering respondents just two options is that at the referendum these are the two choices that voters will face. This is misleading. Voters will have other choices: not to turn out (acknowledged by Resolve in the response options it offers in the preceding question) or to turn out but not cast a valid vote. On the ABC’s Insiders, independent senator Lidia Thorpe said she was contemplating turning out but writing “sovereignty” on the ballot.
Binaries are not favoured by the market research industry. In Britain, the Market Research Society Code of Conduct states that “members must take reasonable action when undertaking data collection to ensure… that participants are able to provide information in a way that reflects the view they want to express, including don’t know/prefer not to say.” This code covers all members, including those whose global reach extends from Britain to Australia (YouGov, Ipsos and Dynata).
In Australia, a similar guideline published by the Research Society (formerly the Market Research Society of Australia) advises members to “make sure participants are able to provide information in a way that reflects the view they want to express” — a guideline almost identical with that of the MRS, even if it stops short of noting that this should allow for a “don’t know/prefer not to say.” Whether such guidelines make a difference to how members actually conduct polls is another matter; of the firms that have offered binary choices on the Voice, some (Essential) are members of the Research Society, others are not (Compass, Resolve).
But a binary is not the only way to make the “undecided” disappear. Some pollsters publish a set of figures, based on the standard architecture, from which respondents registered as “undecided” have been removed using a quite different technique. In its latest release, for example, Morgan publishes one set of figures (Yes, 46 per cent; No, 36 per cent; Undecided, 18 per cent) followed by another (Yes, 56 per cent; No, 44 per cent), the latter derived from ignoring the “undecided” and repercentaging the rest to a base of 82 (46+36). This is equivalent to assuming the “undecided” will ultimately split along the same lines as those who expressed a choice. In publishing its figures, with the “undecided” removed, Freshwater appears to do something similar.
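The arithmetic of this reallocation is easy to check. A minimal sketch in Python, using the Morgan figures just quoted:

```python
def repercentage(yes: float, no: float) -> tuple[float, float]:
    """Drop the 'undecided' and rescale Yes and No to sum to 100.

    Equivalent to assuming the undecided will split in the same
    proportions as those who expressed a choice.
    """
    base = yes + no
    return 100 * yes / base, 100 * no / base

# Morgan's latest release: Yes 46, No 36, Undecided 18
yes_r, no_r = repercentage(46, 36)
print(round(yes_r), round(no_r))  # 56 44 -- Morgan's published second set
```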
Whether the basis on which Morgan (or Freshwater) reallocates the “undecided” is correct is open to doubt. Morgan acknowledges this: “past experience,” it cautions, “shows that ‘undecided’ voters are far more likely to end up as a ‘No’ rather than a ‘Yes’ vote.” Indigenous Australians minister Linda Burney, who is said to be “completely confident the Yes campaign will convince undecided voters to back the Voice,” expresses the opposite view.
In considering the narrowing lead of Yes over No, we should ask how the “undecided” have been acknowledged, defined and dealt with in each poll’s response architecture.
What the standard architecture (Yes/No/Don’t Know) shows

Between June and September 2022, the three polls that used a “Yes/No/Don’t Know” response architecture (two by Dynata for the Australia Institute, one by JWS) reported that an average of 55 per cent of respondents said they would have voted Yes, 18 per cent would have voted No, and 27 per cent would not have put their hand up for either.
Across the following four months, the corresponding averages (for the two questions asked by Freshwater and Morgan) were 51.5 per cent, 28.5 per cent, and 20 per cent. (Omitted is a poorly constructed question asked by Dynata for the Institute of Public Affairs.) From February 2023 to the end of May, when Freshwater, Morgan, and JWS asked five questions between them, support for a Voice in the Constitution averaged 43 per cent, opposition 34.5 per cent, and the “undecided” 22 per cent.
Since May 2022, support for Yes has declined (from 55 per cent in the first four months to 43 per cent in the most recent quarter) and support for No has risen (from 18 to 34.5 per cent), quarter by quarter, but the decline in the proportion supporting neither Yes nor No (from 27 to 22 per cent) has been relatively small. So, while the 16.5 percentage point rise in the No vote is not entirely accounted for by the 12 percentage point fall in the Yes vote, the contribution to the No vote of the “undecided” appears to have been much smaller than the contribution of those who switched from Yes.
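A quick check of that accounting, using the quarterly averages just quoted (net changes only; cross-sectional polls cannot reveal gross flows):

```python
# Quarterly averages from the standard (Yes/No/Don't know) questions
first = {"yes": 55.0, "no": 18.0, "undecided": 27.0}   # Jun-Sep 2022
latest = {"yes": 43.0, "no": 34.5, "undecided": 22.0}  # Feb-May 2023

change = {k: latest[k] - first[k] for k in first}
print(change)  # {'yes': -12.0, 'no': 16.5, 'undecided': -5.0}

# No's 16.5-point net gain is roughly the sum of Yes's 12-point net loss
# and the undecided's 5-point net fall (rounding accounts for the half
# point). On these figures, ex-Yes voters contribute far more to No's
# growth than the undecided do.
```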
In some cases, pollsters have tried to reduce the number of “don’t knows” by asking these respondents a follow-up question — known in the trade as a “leaner” — designed to get them to reconsider; this might be seen as a way of distinguishing “soft” don’t knows from “hard” don’t knows.
Some of these pollsters have published the figures both before and after the leaner (JWS) or made them available (Freshwater). On these figures (one set from JWS; three sets from Freshwater), the proportion of “undecided” respondents was 8 percentage points smaller, on average, after the leaner than before. Except for one occasion when they split evenly, more chose the Yes side than chose the No side. So, far from contributing to a narrowing of the gap between Yes and No, squeezing the undecided widened the gap.
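To illustrate what a leaner does to the published figures, here is a sketch with hypothetical pre-leaner numbers, chosen only to mirror the averages just described (an 8-point squeeze that favours Yes):

```python
# Hypothetical pre-leaner figures (not from any single poll)
pre = {"yes": 44.0, "no": 34.0, "undecided": 22.0}

# Suppose the leaner shifts 8 of the 22 undecided points, mostly to Yes
leaned = {"yes": 5.0, "no": 3.0}  # assumed split, favouring Yes

post = {
    "yes": pre["yes"] + leaned["yes"],                      # 49.0
    "no": pre["no"] + leaned["no"],                         # 37.0
    "undecided": pre["undecided"] - sum(leaned.values()),   # 14.0
}
print(pre["yes"] - pre["no"], "->", post["yes"] - post["no"])  # 10.0 -> 12.0
# Squeezing the undecided widens, rather than narrows, the Yes lead.
```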
What the non-standard architecture (Yes, strong/weak; No, strong/weak; Undecided) shows

In the first four months after the 2022 election, none of the pollsters who asked questions about support for the Voice used the non-standard architecture. That was to change, first through Resolve, then through Newspoll.
Between September 2022 and January 2023, Resolve adopted this architecture twice. Averaging the two polls, support stood at 50 per cent, opposition 29.5 per cent, Undecided/Not enough information 21 per cent. Between February and May, across three more polls, the corresponding figures were 45 per cent Yes; 34 per cent No; 20 per cent Undecided/Not enough information/May not vote. So, over the two periods, Yes dropped by 5 points, No rose by 4.5, and those opting for the residual category dropped by just 1 point. The rise in opposition is almost entirely accounted for by the fall in support.
Taken at face value, the three Newspoll surveys, conducted in the last quarter, tell a rather different story: 54 per cent Yes; 38 per cent No; 8 per cent Don’t know. But they can throw no light on the shift from quarter to quarter because Newspoll’s figures indicate the size of the “don’t knows” after the leaner; asked to divulge the proportion before the leaner, Newspoll declined.
Could the leaner — or the “squeeze,” as Freshwater prefers to call it — explain the difference between the size of the “don’t know” response in the standard architecture and its size in the non-standard architecture? In the standard (Freshwater) format, the “don’t knows” averaged 15 per cent, squeezed; in the non-standard (Newspoll) format, they averaged just 8 per cent, squeezed. (Resolve’s data is not squeezed.) This suggests that, compared with the standard architecture, offering a non-standard set of response options markedly lowers the number of respondents who finish in the “undecided” column.
What the Likert items (Yes, strong/weak; Neither…nor; No, strong/weak) show

The Likert items confirm these shifts. In the first four months, when four Likert items (from Essential, SEC Newgate and the Scanlon Foundation) featured in the polls, the level of support for the Voice (“strongly support” plus “somewhat support”) averaged 57 per cent; the level of opposition (“somewhat oppose” plus “strongly oppose”), 17.5 per cent; those inclined neither one way nor the other, 24.5 per cent. In the next quarter, SEC Newgate produced the only Likert item: 55 per cent supported the Voice, 19 per cent opposed, and 25 per cent neither supported nor opposed. In the most recent period, which saw two (SEC Newgate) items, support averaged 52.5 per cent, opposition 24 per cent, and 23 per cent were neither for nor against.
While the proportion of respondents only partly in support appears to have declined (from 24.5 to 21 per cent), the proportion strongly opposed appears to have increased (from 17.5 to 24 per cent). But the proportions strongly in support or partly opposed have barely shifted. This lends some support to Dennis Shanahan’s remark, seemingly based on private polling, about the “start” of a “drift from soft Yes to hard No.” But on whether this is due to “young people and Labor supporters,” as Shanahan believes, there is room for doubt: although SEC Newgate does not report separately on the demographics of those who are partly or strongly in support, in its polling the drift away from the Voice has been much more marked among older than among younger voters, and much more marked among Coalition than among Labor voters.
Compared with results obtained with the standard set of responses, the Likert items point to much smaller shifts away from support and towards opposition: a drop in the level of support for the Voice of just 4.5 percentage points, not 12; a rise in the level of opposition of just 6.5 points, not 16.5; and a falling away of the “undecided” vote — here, the proportion neither in favour nor opposed — of just 1.5 percentage points, not 5. As with the standard architecture, most of the additional No vote appears to have come from those who supported (strongly or somewhat) the Voice in earlier polls, with the decline in the “Neither… nor” group appearing to contribute much less to the growth in the No vote.
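Placed side by side, the two sets of net shifts (the quarterly averages quoted above) make the point that the direction of movement is the same while the apparent magnitude depends on the architecture:

```python
# Net shifts, first four months to most recent period (percentage points)
net_shift = {
    "standard": {"yes": -12.0, "no": +16.5, "undecided": -5.0},
    "likert":   {"yes": -4.5,  "no": +6.5,  "neither":   -1.5},
}
for arch, shifts in net_shift.items():
    print(f"{arch:>8}: {shifts}")
# Same story, different scale: the choice architecture shapes how big
# the movement from Yes to No appears to be.
```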
What the binary architecture (Yes/No) shows

Binaries are designed to eliminate the “undecided.” But when they are asked in the wake of response architectures that recognise the undecided, they can tell us one important thing: what happens to the “undecided” when they are forced to choose.
If we compare the results Resolve produced when it used the non-standard architecture and followed up with a binary, it is clear that the Yes side enjoyed a greater boost than the No side when the “undecided” were forced to choose. In other words, far from contributing to a narrowing of the gap between Yes and No, eliminating the undecided widened the Yes vote’s lead; this is consistent with the picture that emerges from other architectures when the “undecided” are squeezed. The one exception was Resolve’s June poll, its most recent, where the “don’t knows,” given a binary choice, appear to have split in favour of the No side (7 Yes, 11 No), causing the overall balance to shift to the No side (49–51).
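The June figures can be reconstructed from the numbers just cited. Working backwards from the binary result (49–51) and the reported split of the “don’t knows” (7 to Yes, 11 to No):

```python
binary = {"yes": 49, "no": 51}    # Resolve's June forced-choice result
dk_split = {"yes": 7, "no": 11}   # how the don't-knows divided

# Implied pre-binary figures under the non-standard architecture
pre = {side: binary[side] - dk_split[side] for side in binary}
pre["undecided"] = sum(dk_split.values())
print(pre)  # {'yes': 42, 'no': 40, 'undecided': 18}

# Before the forced choice, Yes led 42-40; the No-leaning split of the
# "don't knows" flipped the overall balance to 49-51.
```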
“Undecided” — differences across the complete catalogue of measures

Across the pollsters’ questions, “Undecided” is hardly a fixed category. Typically, moreover, the “undecided” vote varies with the choice architecture.
Some commentators base their discussion of the “undecided” on the standard response format: Yes/No/Don’t know, “can’t say,” “not sure,” and so on. Megalogenis is one; constitutional lawyer and columnist Greg Craven is another. Each estimates the “undecided” vote to be “around 20 per cent” — an estimate clearly based on the (unsqueezed) figures published for questions that offered the standard response options. The proportion was lower in polls that used a leaner: 20–22 per cent, quarter by quarter, before the leaner; around 15 per cent, it seems, after it.
What of the non-standard format? Though the Resolve poll asks respondents to classify themselves as either “definitely” or “probably” (Yes/No), the Sydney Morning Herald and the Age have never published a set of results for any of the samples that separates the “definitely” from the “probably.” Looking at the figures, and the limited detail about the polls that the papers choose to publish, a reader could be excused for thinking that Resolve used the standard rather than a non-standard response architecture. A reader could certainly conclude that its publisher didn’t think the distinction mattered.
In Newspoll, those who described themselves as “partly” in favour (28 per cent) or “partly” against (13 per cent) represented a much bigger proportion of the electorate than is represented by the “undecided” (even before the leaner) in polls that used the standard format. If we add those who answered “Don’t know” (8 per cent), we get a combined figure of 49 per cent — half the electorate — who are neither strongly Yes nor strongly No.
Craven speculates that “Once someone congeals [sic] to No” — after shifting from “Don’t know,” presumably — “they will not be shifted.” This implies that even someone only partly against the Voice should not be considered “undecided.” But in support of his opinion, he offers no evidence.
The use of Likert items lifts the proportion of the electorate we might regard as “undecided” to a slightly higher level still. Adding in those only somewhat in support (21 per cent), those neither in support nor opposed (23 per cent) and those only somewhat against (9 per cent), we reach a number of 53 per cent for the most recent four months; that is, over half.
“Undecided”: further questions, different answers

Some questions in the polls have sought to establish how many respondents are “undecided” about the Voice not in any of these ways but by asking respondents how sure they are that their preferences won’t change. In response to a question Freshwater asked in December 2022, and repeated in April and in May 2023, only 39 per cent (on average) of those who favoured a constitutional change were “certain” they would “vote this way”; among those opposed to a constitutional change, the average was 61 per cent; these are figures not previously published.
Nonetheless, the proportions that said they “could change” their mind or were “currently undecided” remained substantial: 34 per cent (December), 31 per cent (April), 31 per cent (May). Of these, about a third could change their mind, the other two-thirds being currently “undecided.” The proportion who could change their mind was consistently higher among those who intended to vote Yes than among those who intended to vote No: 17 against 11 per cent (December), 12 against 6 (April), and 10 against 7 (May).
The number of voters who are persuadable could be even greater. Common Cause is reported to have “identified” 20 per cent of the non-Indigenous population as “strong Voice supporters,” 15 per cent as “opponents,” with the other 65 per cent “open to being persuaded either way.”
Two polls also asked respondents how likely they were to actually turn out and vote. Here, too, the response architecture mattered, with JWS using the non-standard response architecture and Resolve using the standard architecture. In February, when JWS asked how likely respondents were “to attend a polling booth (or source a postal vote) and cast a formal vote in this referendum,” more than a third of its respondents said “somewhat likely” (17 per cent), “unlikely” (8 per cent) or “can’t say” (10 per cent). In April, when Resolve asked how likely it was that respondents would “be registered to vote” and would “turn out to cast a vote in this referendum about the Voice,” similar proportions said they were unlikely to cast a vote (10 per cent) or were “undecided” (9 per cent); in the absence of the other JWS categories — extremely likely, very likely and somewhat likely — the rest of the sample (81 per cent) could only say that they were likely to cast a vote.
How different were the likelihoods of Yes and No supporters actually turning out? In the JWS poll, fewer of the Yes (48 per cent) than of the No supporters (56 per cent) said they were extremely likely to cast a formal vote — though the gap narrowed (72–69) when those very likely to do so were added. Between those in the Resolve poll who intended to vote Yes (89 per cent of whom said they were likely to turn out) and those who intended to vote No (87 per cent of whom said they were likely to turn out), there was hardly any difference. In both polls, more No supporters than Yes supporters said they were unlikely to turn out: 11 per cent of No supporters compared with 4 per cent of Yes supporters in the JWS poll; 10 compared with 8 in the Resolve poll.
More striking than either of these sets of figures were Resolve’s figures for those “undecided” about whether they favoured Yes or No: 44 per cent of these respondents said they were either unlikely to vote (14 per cent) or were “undecided” about whether they would vote (30 per cent). If nearly half of the “undecided” (on the standard measure) were not to vote (JWS did not publish its figures), allocating the “undecided” to either the Yes or No side would be defensible only if the allocation didn’t assume that these respondents would cast their lot with the No side (Morgan’s hunch) or with the Yes side (Burney’s hope).
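The arithmetic behind that caution, as a sketch (the 18-point undecided bloc is hypothetical, in line with the unsqueezed standard-format figures quoted earlier; the 44 per cent is Resolve’s):

```python
undecided_points = 18.0   # hypothetical undecided share of the electorate
may_not_vote = 0.44       # Resolve: 14% unlikely to vote + 30% undecided about voting

# Points actually available to be reallocated to Yes or No
available = undecided_points * (1 - may_not_vote)
print(round(available, 1))  # 10.1 of the 18 points

# Handing the full bloc to No (Morgan's hunch) or to Yes (Burney's hope)
# would overstate the undecided's weight by nearly half if 44 per cent
# of them stay home.
```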
The government’s explanation for the “narrowing of the gap between committed Yes and No voters,” as reported by George Megalogenis, is not borne out by any of our measures. On the standard format, the “narrowing of the gap” between May 2022 and May 2023 appears to have been due to respondents moving from Yes (down 12 percentage points) to No (up 16.5); the shift to No from among the “undecided” (down 5) appears to explain much less of what has happened. In the non-standard architecture, the combined support for Yes has slipped (down 5) over the last eight months while the combined support for No has grown (up 4.5), the “undecided” (down 1) having hardly moved.
Moreover, any narrowing of the gap between those “strongly” committed to a Yes vote and those “strongly” committed to a No vote has been due to the number “strongly” Yes shrinking and the number “strongly” No expanding; it has not been due to a reduction in the proportion that “neither supports nor opposes” having the Voice inscribed in the Constitution. Responses to the Likert items over the last year also suggest a decline in support (down 4.5) and a rise in opposition (up 6.5) without a marked reduction in the proportion registered as “neither… nor” (down 1.5). Binaries, posed hard on the heels of questions that have offered a non-standard set of responses, have not narrowed the gap between Yes and No; except for the most recent of these questions, they have widened it.
Every measure leads to the same conclusion: the gap has narrowed because the Yes side has lost support and the No side has gained support. Each of these measures, it has to be conceded, is based on cross-sectional data — data derived from polls conducted at a particular time that reveal only the net movement across categories. Since the gross movement is certain to have been bigger, panel data — data derived by interviewing the same respondents at different times — might tell a different story. But every claim about how opinions have moved has appealed, if only implicitly, to the evidence provided by the cross-sectional data; panel data have not rated a mention. (So far as we know, no panel data exist.)
The choice architecture makes no difference in establishing that the gap between Yes and No has narrowed. It makes some difference in apportioning the narrowing between a gain of support on the No side and a loss of support on the Yes side, with the standard and non-standard architectures suggesting much larger movements than the Likert items. And it makes a big difference in determining the size of the Yes and No vote (the binary architecture being particularly powerful), in estimating the proportion of respondents who are undecided (lower with the standard architecture than with the Likert items), and in identifying the proportion that might be persuaded to change their minds.
To say that the choice architecture makes a difference is also to say that it may not be possible to express one form of the architecture in terms of another; when Newspoll switched from the non-standard to the standard form of response, the previous results could not be converted into the standard form. It follows that changes in support may be difficult to track when the choice architecture changes.
This should not be read as an argument against changing architectures; the more closely the response architecture mimics a referendum, the better it is likely to be. Gallup’s standard architecture — with or without a leaner — is to be preferred to a binary, a form that offers too restricted a range of choice. The standard architecture is also to be preferred to the non-standard architecture or to a Likert item, forms that offer too wide a choice.
Nor does this analysis mean that other, more direct measures of uncertainty should be discarded or never introduced. On the contrary, different measures may serve well as forms of validation and as sources of insight. •