Why were the “quiet Australians” missed in the election pre-polls?

The inaccuracy of the polling data came as a big shock in the recent Federal election. After consistent predictions of a Labor victory, with bookmakers even paying out in advance, the Liberal–National Coalition retained government.

As late as Friday night, most major polls had Labor ahead 51-49 on the two-party preferred vote. This result mirrors a number of polling “upsets” globally over the past few years, from Trump’s election in the US to Brexit.

It’s clear polls are missing a cohort of conservative voters, which the Australian Liberal party has now named “the quiet Australians”. This conservative vote was also under-represented in the 2015 UK general election, the 2016 Brexit referendum and the 2016 US election. This group is getting harder to quantify using traditional polling methods.

There are several reasons why the polls can paint an inaccurate picture of an election, and in particular why they did so over the past weekend in Australia.

  1. Increased disengagement

In research, we get much stronger levels of accuracy and participation when people feel engaged in the issues at hand. The current level of disengagement with political issues may be driving some of this under-representation. We often hear more strongly in research from those with the loudest, most vested voices. In this case, those who wanted change may have been more visible and audible in the polls than those content with the status quo.

  2. Fragmented audiences

Political opinion polling is a science that depends on highly representative samples, at a time when achieving them is harder than ever. There is no single way of reaching every Australian cohort in an opinion poll. This compares to several years ago, when almost everyone had a landline and would answer the phone to do a quick survey. Today, increasing numbers of Australians have internet-only home connections, if they have a fixed line at all, and rely exclusively on mobiles. Analysis of ACMA data predicts only 50 per cent of Australian homes will still have a landline in 2021, down from 83 per cent in 2011.

In the 2015 UK general election, the main cause of polling inaccuracy was determined to be unrepresentative samples: people who took part in polls did not accurately reflect the population as a whole.

Analysis of the 2016 UK Brexit polling indicated that online polls performed better than phone polls: 63 per cent of online polls correctly predicted a Leave victory, while 78 per cent of phone polls wrongly predicted that Remain would win. Some of the discrepancy may have been due to younger voters supporting Remain but not bothering to vote. A key difference in Australia, of course, is that because voting is compulsory, the entire electorate is represented.

  3. Asking the right questions

In the 2016 US election, the vast majority of polls predicted that Hillary Clinton would beat Donald Trump. One poll that did correctly put Trump in the lead was the USC/Los Angeles Times Daybreak tracking poll. It allowed people to assign themselves a probability of voting for either candidate, rather than having to declare a single preference with 100 per cent certainty. Different types of questioning – rather than the current, standard voting-intention question – may end up providing greater insight. And we are now seeing discussions about whether social media data is a valuable predictor of voting intention.
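To make that idea concrete, here is a minimal sketch of how probabilistic responses can be aggregated, using invented respondent data; it is a simplified illustration, not the Daybreak poll’s actual methodology.

```python
# Sketch: aggregating probabilistic voting intentions (invented data; a
# simplified illustration, not the USC/LA Times Daybreak methodology).
# Each respondent states a 0-100 probability of voting for candidate A
# instead of being forced into a single choice.

# Hypothetical responses: probability (in per cent) of voting for candidate A.
responses = [90, 80, 70, 65, 60, 55, 45, 40, 30, 20]

# Forced-choice polling collapses each response to whichever side is above 50.
forced_choice_share = 100 * sum(p > 50 for p in responses) / len(responses)

# Probabilistic polling averages the stated probabilities instead,
# keeping the information about how certain each respondent is.
probabilistic_share = sum(responses) / len(responses)

print(f"Forced-choice estimate for A: {forced_choice_share:.1f}%")
print(f"Probabilistic estimate for A: {probabilistic_share:.1f}%")
```

The same ten respondents give a 60 per cent estimate under a forced choice but only 55.5 per cent when their uncertainty is retained – exactly the kind of softness in support that a standard voting-intention question throws away.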

So when it comes to research, there is increasing inaccuracy built into the methodologies, and it takes time and money to overcome.

Political opinion polling, such as that we see in our media, is an unusual form of research in terms of the degree of accuracy required and the level of public scrutiny placed on it throughout an election cycle. Commercial organisations recognise that insight and strategy are based on investing time and money in seeking viewpoints from a variety of data sources to ensure all views are accurately represented.


12 thoughts on “Why were the “quiet Australians” missed in the election pre-polls?”

  1. Another factor which has been missed is that the media also missed the story. In elections past, reporters were given weeks to travel across the country, gauging the mood of the electorates in their vast diversity. Today newsrooms just don’t have that capacity, and journalists take their lead from the (incorrect) polls. It’s a cycle that never gets beyond the Beltway.

    1. Good observation. In fact, the Age published a story in the last week before the election, written by a journalist who had travelled from Hobart to Cairns on public transport. His article captured the disengagement of “average punters” and their conservative tendencies. That article was the main reason the election result wasn’t a big surprise for me. Reinforces your point exactly.

  2. If you aren’t getting enough of the “quiet Australians” in your results, then your sampling and weighting needs more nuance beyond age/sex/region/2016 vote.
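One hypothetical illustration of what extra nuance in weighting can mean is the raking (iterative proportional fitting) sketch below, which weights on a “political engagement” dimension alongside age. The sample composition and the population targets are invented for illustration only.

```python
# Sketch: iterative proportional fitting ("raking") on two dimensions.
# Sample composition and population targets are invented for illustration.
from collections import defaultdict

# Each respondent: (age group, political engagement), starting weight 1.0.
sample = [("18-34", "high"), ("18-34", "high"), ("35-54", "high"),
          ("35-54", "low"), ("55+", "high"), ("55+", "low")]
weights = [1.0] * len(sample)

# Assumed population proportions for each margin.
targets = {
    0: {"18-34": 0.30, "35-54": 0.35, "55+": 0.35},  # age group
    1: {"high": 0.40, "low": 0.60},                  # political engagement
}

for _ in range(50):  # iterate until the weights stabilise
    for dim, target in targets.items():
        totals = defaultdict(float)
        for person, w in zip(sample, weights):
            totals[person[dim]] += w
        total_weight = sum(weights)
        for i, person in enumerate(sample):
            # Scale each respondent so this margin matches the population.
            weights[i] *= target[person[dim]] * total_weight / totals[person[dim]]

for person, w in zip(sample, weights):
    print(person, round(w, 2))
```

In this toy example the less engaged respondents are under-sampled, so raking gives them the largest weights – the mechanism a pollster would rely on to recover a quieter cohort, provided the survey has captured at least some of them.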

  3. Thank you for the insight. Missing as a potential Point 4 is the large cultural shift that has occurred under the monopolies of Alphabet (Google) and Facebook. It has made people comfortable anonymously attacking and marginalising everyday Australians who hold a different position or political opinion, which reduces open and honest dialogue. Never before has it been so easy to label, harass and target people in a town square that has become infinitely bigger – and it’s a shame.

  4. The problem with polling extends beyond sampling. The lack of variability in consecutive polls’ estimates of the two-party preferred (2PP) vote for an extended period leading up to the election was statistically improbable. Basically, the estimates sat within a 1 per cent range poll after poll. Even if the true 2PP was in that range, random variation means that pattern had less than one chance in 200,000 of happening (a simulation sketch after this comment illustrates the point).

    There is a lot of post-poll processing of the data that goes into producing the published 2PP estimates. The national results need to be adjusted for preference flows and for local variation at the seat level, and different polling houses use different assumptions and modelling to do that. This post-processing clearly biased the published results towards the observed consistency. Galaxy have a history of producing improbably consistent long-term trends from their polls, and when they took over Newspoll, the latter’s results took on the same characteristic. Previously, Newspoll had tended to show the expected statistical variation in its estimates.

    It seems likely there was also a significant herding effect, which arises when pollsters don’t want to be the outlier that gets it wrong. So they massage their results to stay closer to the crowd.

    There is usually a house effect in 2PP polling as well, whereby certain houses produce results biased towards one major party or the other – see above re post-processing and modelling. It seems not to have been such a factor in the polling for this election.
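A minimal simulation sketch of the variance argument in the comment above: it asks how often a run of independent polls would all land within a point of one another purely through sampling noise. The true 2PP, sample size and number of polls are assumptions for illustration, not the inputs behind the one-in-200,000 figure.

```python
# Sketch: how often would independent polls cluster within a 1-point band
# by chance alone? All inputs below are illustrative assumptions.
import math
import random

TRUE_2PP = 51.5       # assumed "true" Labor two-party-preferred share (points)
SAMPLE_SIZE = 1500    # assumed sample size of each published poll
NUM_POLLS = 16        # assumed number of consecutive published polls
TRIALS = 200_000

# Standard error of one poll's 2PP estimate (normal approximation to the
# binomial), in percentage points.
p = TRUE_2PP / 100
se = 100 * math.sqrt(p * (1 - p) / SAMPLE_SIZE)

clustered = 0
for _ in range(TRIALS):
    # Each poll: draw a noisy estimate and round to the nearest point,
    # as published polls do.
    results = [round(random.gauss(TRUE_2PP, se)) for _ in range(NUM_POLLS)]
    # A "herded-looking" run: every poll within one point of every other.
    if max(results) - min(results) <= 1:
        clustered += 1

print(f"Standard error of a single poll: {se:.2f} points")
print(f"Share of runs with all {NUM_POLLS} polls inside a 1-point band: "
      f"{clustered / TRIALS:.6f}")
```

Under these assumptions the tight clustering appears in only a tiny fraction of simulated runs, which is the intuition behind the herding concern: published results bunched that closely are far more consistent than raw sampling noise allows.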

  5. This statement from Ipsos was interesting:

    https://www.ipsos.com/en-au/statement-ipsos

    A number of broad trends present challenges to polling in Australia, and to polling organisations globally, which Ipsos is actively reviewing. Areas which will be reviewed are currently being identified and to date include:
    In Australia the voting trend away from the two major parties appears to have increased again in this election with both parties showing a reduction in vote share since 2016 in the vicinity of 0.5%-1.0%. This makes predicting the preference flows from an increasingly fragmented base of minor parties and independent candidates difficult.

    1. Politics is inherently local, challenging a national poll in the heat of the last throes of an election campaign that is fought on local issues.

    2. Globally pollsters are challenged by more highly educated people being more politically engaged, and therefore more likely to complete a polling survey. With randomly generated mobile and landline phone numbers this is less likely to be an issue of coverage, but more one of likelihood of different types of people to respond and is not easy to address through weighting.

    3. 7% of Ipsos’ final poll respondents said they “didn’t know” who they would support at the polling booth. The reporting of primary and two-party preferred voting intention effectively assumes these people will vote in the same way as those who have provided an intention. It is unclear whether this is a reasonable assumption.

    4. Any poll is only as accurate as the stated intention people give on what they will do when they vote. Stated intention may change between the date polled and when a voting ballot is completed. For most of the people responding to the Ipsos pre-election poll who had not postal voted or pre-voted already this gap was between 3 and 6 days.

    5. Finally, Ipsos notes that political polling has a range of goals, most beyond the measurement of the overall two-party preferred vote. Ipsos’ polls look to understand key issues, leadership qualities and personalities, the primary voting intentions of voters and many other issues. Predicting outcomes is always difficult when the race is tight, and Ipsos remains committed to finessing its approach moving forward.
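To make the preference-flow challenge in the Ipsos statement concrete, the sketch below shows how primary votes and assumed flow rates combine into a 2PP estimate. All vote shares and flow percentages are invented for illustration and are not figures from any published poll.

```python
# Sketch: turning primary votes into a two-party-preferred (2PP) estimate
# via assumed preference flows. All figures are invented for illustration.

primary = {"Coalition": 0.41, "Labor": 0.34, "Greens": 0.10,
           "One Nation": 0.05, "Others": 0.10}

# Assumed share of each minor party's preferences flowing to Labor.
flow_to_labor = {"Greens": 0.82, "One Nation": 0.35, "Others": 0.50}

labor_2pp = primary["Labor"] + sum(primary[party] * flow
                                   for party, flow in flow_to_labor.items())
coalition_2pp = 1 - labor_2pp
print(f"Labor 2PP: {labor_2pp:.1%}   Coalition 2PP: {coalition_2pp:.1%}")

# A modest error in one assumed flow moves the headline number: shifting
# the "Others" flow by ten points changes the 2PP estimate by a full point.
flow_to_labor["Others"] = 0.40
labor_2pp_alt = primary["Labor"] + sum(primary[party] * flow
                                       for party, flow in flow_to_labor.items())
print(f"Labor 2PP with a weaker 'Others' flow: {labor_2pp_alt:.1%}")
```

The second calculation shows why a fragmented minor-party vote matters: a ten-point error in one assumed flow shifts the headline 2PP by a full point, which is significant when the published gap between the major parties is only a couple of points.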

  6. I always wonder if the polls themselves are the problem.

    Hearing over and over before an election that the result is a foregone conclusion and X will win can prompt supporters of X not to bother voting (in countries where voting is optional) and opposing supporters to try harder (by turning out and persuading their circle to vote too).

    Even in Australia, swinging voters could easily have thought that a Labor victory was a foregone conclusion (as the media told them over and over) and concluded that Labor didn’t need any more support, so they would vote the other way for the supposed underdog of the election. This is why a grassroots independent campaign, such as Zali Steggall’s in Warringah, can be so successful: it can mobilise forces around a perceived underdog and instil the “every vote counts” sense of urgency.

    1. An interesting perspective, Ania, and possibly one of the reasons pollsters are always under such pressure to get it right.

  7. When I was travelling in regional Queensland in 2016-2018, I heard voices that weren’t picked up in polling. I knew Queensland would swing to the LNP and away from Labor.

    1. Thanks for the comment, Warren. Qualitative research always provides a depth of insight not readily seen in the polls, and can provide signals that the vote, as represented through polls, may not be as decided as it appears.

  8. Thanks Amanda – insightful analysis, and thank you for sharing. Representative sampling that relies on out-of-date traditional polling through landlines now has three strikes, as you state. The question I am interested in is how social media analysis could provide a more accurate understanding of preferences, and whether this is something we, with our CBMA / SR7 capabilities, could undertake. If we could provide much more granular and accurate pre-election advanced analytics, then the question we need to consider is whether we should. Interested in your thoughts?

    1. You raise an interesting point, James. The variety of data sources used for commercial research can certainly include social listening – definitely worth further exploration.
