Why the polls may be wrong

Sylvia Estrada Claudio

Opinion polls do not reflect the processes of change that come from the everyday exchanges and discussions that are happening in a nation which is in political ferment

Quantitative methods, such as opinion surveys, are valuable; they are often the only way we can ascertain the opinions of very large numbers of people. Indeed, I told people upset with our leading polling companies, which were consistently indicating a Duterte win ahead of the May 2016 elections, to get ready for a Duterte presidency.

On the other hand, any good teacher will tell you to be critical of statistics, even those released by honest and expert statisticians. There are limits to what statistics and opinion polls can tell us. Even the most competent and objective pollsters will tell you that there are many situations where their methods should not be applied. As a teacher, I ask my students to give equal value to qualitative methods. The study of a few persons, for example, has led to valid theories about the psychology of many. Thus, I urge most students to consider a mix of qualitative and quantitative methods when examining social phenomena.

Most experienced researchers will also tell you that even objective measures can be influenced by any number of factors, and that statistical methods need to match the subjective experiences of people at some point. In short, all data needs to be interpreted. Experienced researchers often ask, “Was this something expected even before the data came in, or was it counter-intuitive?” There are many labels and explanations for this sense of what the data will show. I just call it “my gut sense”.

As a researcher, I know how to tame this gut feeling to ensure that it does not bias me toward certain interpretations. Most feminist activists I knew were not voting for President Rodrigo Duterte. And yet my gut was tamed because I knew that my feminist friends were but a small fraction of the voting population.

Something doesn’t fit

But recently, the “scientific gut” of several analysts and scientists has been screaming that something is not quite right about the continuing, and even improving, net trust and approval ratings of President Duterte in the surveys.

I write this after the biggest protest rally against the administration so far, and after the counter-rally that has yet again proven that far fewer people come to pro-Duterte rallies. I also write this at a time when mainstream and social media are increasingly critical of the President. Furthermore, we are beginning to see a number of politicians come out with critical remarks against the administration. Even the more loyal ones sometimes contradict themselves. A recent example is the retreat of the House of Representatives from its move to defund the Commission on Human Rights.

As everyone knows, a qualitative measure of how the political winds are blowing is the behavior of our politicians.

And so, like any good researcher, I looked into why well-run polls could be wrong. My research tells me that we may not be getting a real picture of how much support President Duterte has.

My first clue is something from polling science itself: polls can be wrong in unusual situations. With thousands dead from the drug war and a climate of impunity, “unusual” is an understatement for our times. Other analysts have noted what has been termed the “creeping dictatorship” of this regime.

Polls in the time of dictatorship

There is very little data on polling during the Marcos dictatorship. It would seem that only one poll was conducted towards the end of that regime. That poll found respondents evenly split on the desirability of Martial Law, less than a year before the regime was ended by a popular revolt.

For more analysis, I needed to look at studies from other countries that have also experienced dictatorship.

Rodrigo Patto Sá Motta writes of the dictatorship in Brazil that the concept of “solid support” is problematic in situations where there is no freedom to criticize and the voices of opposition are repressed. One of the reasons many analysts have begun to doubt the poll findings is that the threat of death is real in our communities.

It is also true that critical voices have been massively trolled, and death and rape threats against critics have become a common experience. Add to this the continuing filing of cases by the VACC, aka the DOJ, against opposition figures, and the imprisonment of Senator Leila de Lima.

Pollsters ask participants to answer questions, which, in and of itself, aggravates the perception of danger. Let us say a pollster says, “I have here a list of names of people. Please tell me your opinion of their performance by pointing to the part of this rating board that expresses your opinion.”

In situations where people are unafraid and understand confidentiality, we may get good answers. But this is not the case in the poor communities where the majority of respondents reside. In situations where people die for failing to answer accusations made on a drug list, people will feel the need to answer, and in a manner that they believe will protect them from repercussions. In these situations, the number of people who refuse to answer, what pollsters call the “refusal rate”, may not adequately capture people’s fear.

Julia Paley, writing of responses to the question “which political party do you like best” in light of Chile’s experience with dictatorship, notes that this had become a loaded question, since affiliation with certain parties could lead to harm. She notes that when asked, the subject of her research chose a political party that was actually very low in his family’s esteem.

As Motta also notes, perhaps the interviewee is actually uninterested in the question, uninformed about the issue, or interested in another question. In situations of duress, however, the question demands an answer.

I will add that our people’s literacy with regard to research ethics is very low. Even if it is for their protection, they are loath to sign consent forms. This is why verbal consent is acceptable in many situations in Philippine social science research. It is a guarantee of confidentiality, but it is often not sufficient to ensure that respondents do not fear exposure.

Social desirability

Answering out of fear is an extreme case of what polling science calls “social desirability bias”. This is the tendency of people to answer in whatever way they guess will make the pollster, their neighbors, or the barangay officials think well of them. Picture now the respondent in a barangay where the pollster picks every third house. In situations of extreme polarization, and knowing the Filipino penchant for congruence with others, it is possible that social desirability bias is driving approval and trust ratings up.

Thirty years ago, Sikolohiyang Pilipino expert Rogelia Pe-Pua, attempting to indigenize research methods, had already come to the conclusion that the type of private and confidential interview assumed in polling research was inappropriate in Filipino communities.

Even where private and confidential interviews are possible, social desirability factors can become so grave that voters will repeatedly mislead pollsters by refusing to choose an option they think is unpopular. Social desirability bias is the explanation most pollsters give for their failure to predict a Trump victory in the US. In fact, many predictions were way off.

Another explanation for the failure to predict a Trump victory is that pollsters were unable to cover certain sectors. In the US case these were remote rural areas. In the Philippines these are the gated subdivisions of the rich or middle class where pollsters are not allowed.

Ways forward

The points I have been discussing are inherent limitations of polling science. They are not meant to disparage reliable polling agencies. Nor is it my intent to dissuade people from using polls. Rather, it is an attempt to help us understand polls properly, something even the scientists who run polls wish to achieve. It is also an appeal to fellow scientists to find ways to overcome what I now believe are factors that are skewing their data.

There are ways by which these limitations can be mitigated. One way is to ask a different set of questions, as when one poll showed that, despite high ratings for the drug war, 71% of respondents said they find it “very important” that drug suspects be kept alive by police personnel. This finding contradicts the excellent net satisfaction rating for the drug war garnered during the same period.

Another way to overcome these limitations is to increase the number of people sampled. This is essentially what happened when the recent Brexit polls were “corrected” by the actual vote. The actual vote is, after all, a sample far larger than any poll’s.
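
To make the arithmetic concrete, here is a minimal sketch, in Python, of the standard sampling margin of error for a proportion, which shrinks only as fast as one over the square root of the sample size. The sample sizes and the 95% confidence level below are illustrative assumptions of mine, not parameters reported by any polling company.

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Worst-case sampling margin of error for a proportion.

    n is the sample size, p the assumed proportion (0.5 maximizes
    the error), and z the critical value (1.96 for 95% confidence).
    """
    return z * math.sqrt(p * (1 - p) / n)

# Illustrative sample sizes only. Error shrinks as 1/sqrt(n), which is
# why a full vote count "corrects" a small-sample poll.
for n in (1_200, 4_800, 30_000_000):
    print(f"n = {n:>10,} -> +/-{margin_of_error(n):.2%}")
# n =      1,200 -> +/-2.83%
# n =      4,800 -> +/-1.41%
# n = 30,000,000 -> +/-0.02%
```

Note, however, that this formula captures only sampling error. The fear and social desirability effects described above are non-sampling errors that no increase in sample size can fix.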

This is where we need more transparency from our polling companies. Statisticians generally use a formula called “Slovin’s Formula” to determine sample size. Statistician Billy Almarinez notes in a Facebook post, however, that this is not the formula that the leading poll agencies have been using. Almarinez notes further that these agencies do not explain how they determine sample size, an observation I validated by looking for such information on the websites of the two top polling companies.
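
For readers unfamiliar with it, Slovin’s formula is n = N / (1 + N * e^2), where N is the population size and e is the tolerated margin of error. The short sketch below, which assumes an illustrative voting population of 60 million, shows how strongly the prescribed sample size depends on the chosen e; the figures are my assumptions for demonstration, not the parameters of any actual survey.

```python
def slovin(N: int, e: float) -> int:
    """Sample size prescribed by Slovin's formula: n = N / (1 + N * e**2)."""
    return round(N / (1 + N * e ** 2))

# Illustrative only: N approximates a national voting population.
N = 60_000_000
for e in (0.05, 0.03, 0.01):
    print(f"e = {e:.2f} -> n = {slovin(N, e):,}")
# e = 0.05 -> n = 400
# e = 0.03 -> n = 1,111
# e = 0.01 -> n = 9,998
```

Notice how sharply the prescribed sample grows as the tolerated error tightens. Without knowing an agency’s target margin of error, one cannot say whether its samples are adequate, which is precisely the transparency problem Almarinez raises.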

Almarinez notes that there are ways other than Slovin’s formula to determine sample size. As we have seen, these methods have worked in correctly predicting election results. But the sample sizes are relatively small compared to what Slovin’s formula would prescribe. It is therefore not unfair to speculate that sample sizes may be too small to correct for our unusual situation.

In conclusion

Our view of the polls must also take into account that we are getting a slice of people’s views captured at a very specific time. Opinion polls do not reflect the processes of change that come from the everyday exchanges and discussions happening in a nation in political ferment. Opinion change is nuanced and incremental. Polls do not measure at this level of detail.

Motta and Paley further caution that the dictatorships they studied inappropriately used the polls to blunt criticism and shore up their legitimacy. Obviously, this did not prevent an eventual shift in their political fortunes.

In the meantime, I suggest that those of us who wish to assess the workings of our government not use opinion polls as a gauge of the government’s failures and successes. We must also be cautious about what they say about the mood of our people. Lastly, if my analysis is correct, we cannot assume the polls are indicators of what will happen next. – Rappler.com

Sylvia Estrada Claudio, MD, PhD, is a professor at the Department of Women and Development Studies, College of Social Work and Community Development, University of the Philippines.
