“What should journalists do when the facts don’t matter?” asked the American media academic Michael Socolow in the Conversation after Donald Trump won the presidential election. Underlying his question was the assumption that “facts” have become unmoored from reality — that there may be no going back to a time when they could be distinguished from opinions. But several intriguing studies suggest that this might not be entirely correct. They show that most people across the political spectrum still have a shared grasp of facts about the economy and other politically contentious issues, though they don’t always admit it.
A little over a decade ago, the American political scientists John Bullock, Alan Gerber, Seth Hill and Gregory Huber set out to discover why Democrats and Republicans responding to social science surveys offered up markedly different “facts.” Was it because the two groups were acquiring and processing information in ways that produced “separate realities,” as some researchers have hypothesised? Or did the responses reflect “the joy of partisan cheerleading”?
To get a better grip on this phenomenon, Bullock and his colleagues updated an experiment they’d run once before, in 2008. They posed a set of factual questions about things like unemployment, economic growth and climate change to a large group of randomly selected Americans (in this case almost 1200 respondents). Members of the control group were simply asked the basic set of questions; others were given a modest reward for a correct answer, and a smaller (but safer) incentive to answer “don’t know” if they weren’t able to offer an informed response.
Participants were offered the rewards only after they had answered five questions designed to test partisan responses. Once the rewards were made clear, the survey posed two extra questions and then repeated the original five.
The difference between respondents’ answers before and after the incentives were offered was striking: “Even small incentives reduced partisan divergence substantially — on average, by about 55 per cent and 60 per cent across the questions for which partisan gaps appear when subjects are not incentivised.” A rise in “don’t know” responses revealed that the partisan divide was being driven not just by “expressive behaviour” — that partisan cheerleading again — but also by respondents’ awareness that they didn’t actually know the answers and had resorted to the party line.
Summing up, the researchers write: “Republicans and Democrats do hold different factual beliefs, but their differences are likely not as large as naive analysis of survey data suggests. Just as people enjoy rooting for their favourite sports teams and arguing that their teams’ players are superior, even when they are not, surveys give citizens an opportunity to cheer for their partisan teams. Deep down, however, many individuals understand the true merits of different teams and players — or, at minimum, they understand that they don’t know enough to support their expressive responding as correct.”
That’s one study, and perhaps its design influenced the results. But another team of political scientists, Markus Prior, Gaurav Sood and Kabir Khanna, had been struck by the fact that supporters of a US president’s party often report more positive economic conditions than its opponents. They hypothesised (like Bullock and his co-researchers) that “partisans give partisan-congenial answers even when they have, or could have inferred, information less flattering to the party they identify with.” They too surveyed representative groups of voters, “experimentally manipulating respondents’ motivation to be accurate via monetary incentives and on-screen appeals.” And they too found that the incentives significantly reduced the partisan differences in responses to questions about petrol prices, the size of the federal debt, the percentage of Americans living in poverty and so on.
“Many partisans interpret factual questions about economic conditions as opinion questions, unless motivated to see them otherwise,” they write. “Typical survey conditions thus reveal a mix of what partisans know about the economy, and what they would like to be true.”
But what if polarisation has worsened since those studies? A young PhD student, Seth Emblem, was troubled by the fact that the Covid-19 pandemic, a focus of heightened partisanship, had not only divided opinion but also clearly shaped behaviour, which tended to undermine the idea that differences in opinion were mainly rhetorical. Perhaps partisanship — supercharged by increasingly partisan media — really was “guiding people into information bubbles”?
In Essays on Political Economy and Behavior Emblem outlines his own late-2021 survey of 300 Americans, which took the questions from the two earlier studies, updated them with the latest data, and added new questions about issues, such as vaccine efficacy, that had been subject to partisan misinformation campaigns. As a control he included questions on topics, such as levels of farm subsidies, that hadn’t.
The results were broadly in line with the two earlier studies. “We found that partisan gaps were smaller than we expected,” he wrote, “but that monetary incentives tended to diminish partisan gaps where they existed.” The biggest gaps were in the misinformation category, and that’s where the incentives reduced the gaps most strikingly:
Our findings suggest that while certainly not all partisans agree, any disagreements are not as large and as fundamental as often reported on in the popular press. What is accounting for this discrepancy is the fact that when responding to standard surveys and polls, people do not seem to actually provide answers based on what they know to be true. Instead they may be providing answers that flatter their political perspective. When there is no clear reason that they should provide accurate answers, why should we expect anything else?
One other study took a slightly different approach. Again the respondents were Americans, though this time the researchers — the University of Hamburg’s Mike Farjam and Linnaeus University’s Giangiacomo Bravo — were based in Europe. Using the same system of rewards, they posed a series of questions on just two themes, climate change and race and crime, and asked their 900 respondents to deal with each question in three ways: by indicating their general attitude to the issue, by estimating the relevant data, and by guessing how much their own estimate differed from those of opposing partisans.
Their conclusion, though more cautiously expressed, is in line with the other studies: “despite apparent polarisation in expressed attitudes and mutual perceptions, partisan groups may be less divided in their actual beliefs about real-world data.” This was especially the case with climate change; interestingly, the effect was not significant for race and crime, where perceptions and facts were already more closely aligned. Farjam and Bravo also found that “partisans’ expectations about the opposing group’s adherence to data tend to exaggerate the actual differences.”
What of Australia? Although the limited evidence suggests we’re as polarised as Americans in party-political terms, Australia’s parties haven’t split anywhere near as dramatically on vaccination, for example. Yet the three studies do raise a question about how we interpret opinion polls. When the Australian Financial Review reported this week that “about 16 per cent” of respondents to a recent Freshwater poll “nominated immigration as one of their top issues, up from just 5 per cent in September 2023,” was that a real shift in opinion? Or were many of those newly concerned respondents, roughly one in ten of those polled, Coalition voters swinging in behind Peter Dutton’s attacks on Labor’s management of migrant numbers?
Leaving aside the fact that eleven percentage points is not a great return for Mr Dutton’s depressing strategy, the American research does suggest we should be wary of polling on any issue that tends to split respondents along partisan lines. •