Polls in the time of dictatorship
There is very little data on polling during the time of the Marcos dictatorship. It would seem that only one poll was conducted toward the end of that regime. Respondents in that poll were evenly split on the desirability of Martial Law, less than a year before the regime was ended by a popular revolt.
For more analysis, I needed to look at studies from other countries that also experienced dictatorship.
Rodrigo Patto Sá Motta writes of the dictatorship in Brazil that the concept of “solid support” is problematic in situations where there is no freedom to criticize and the voices of opposition are repressed. One of the reasons many analysts have begun to doubt the poll findings is that the threat of death is real in our communities.
It is also true that critical voices have been massively trolled, and death and rape threats against critics have become a common experience. Add to this the continuing filing of cases by the VACC, aka the DOJ, against opposition figures, and the imprisonment of Senator Leila de Lima.
Pollsters ask participants to answer questions which, in and of itself, aggravates the perception of danger. Let us say a pollster asks, “I have here a list of names of people. Please tell me your opinion of their performance by pointing to the part of this rating board that expresses your opinion.”
In situations where people are unafraid and have an understanding of confidentiality, we may get good answers. But this is not the case in poor communities, where the majority of respondents reside. In situations where deaths occur because of failing to answer accusations made on a drug list, people will feel the need to answer, and in a manner that they believe will protect them from repercussions. In these situations, the number of people who refuse to answer, what pollsters call the "refusal rate", may not adequately capture people's fear.
Julia Paley, writing of responses to the question “which political party do you like best” in the light of Chile’s experiences with dictatorship, notes that this had become a loaded question where affiliation to certain parties could lead to harm. She notes that when asked, the subject of her research chose a political party that was actually very low in his family’s esteem.
As Motta also notes, perhaps the interviewee is actually uninterested in the question, is uninformed of the issue, or is interested in another question. In situations of duress however, the question demands an answer.
I will add that our people's literacy with regard to research ethics is very low. Even if it is for their protection, they are loath to sign consent forms. This is why verbal consent is acceptable in many situations in Philippine social science research. Verbal consent guarantees confidentiality, but it is often not sufficient to ensure that respondents do not fear exposure.
Answering due to fear is an extreme case of what polling science calls "social desirability bias". This is a tendency for people to answer in whatever way they guess will make the pollster, their neighbors, or the barangay officials think well of them. Picture now the respondent in a barangay where the pollster picks every third house. In situations of extreme polarization, and knowing the Filipino penchant for congruence with others, it is possible that social desirability bias is driving approval and trust ratings up.
Thirty years ago, Sikolohiyang Pilipino expert Rogelia Pe-Pua, attempting to indigenize research methods, had already come to the conclusion that the type of private and confidential interview assumed in polling research was inappropriate in Filipino communities.
Even where private and confidential interviews are possible, social desirability factors can become so grave that voters will repeatedly mislead pollsters by refusing to choose an option they think is unpopular. Social desirability bias is the explanation most pollsters give for how they failed to predict a Trump victory in the US. In fact many predictions were way off.
Another explanation for the failure to predict a Trump victory is that pollsters were unable to cover certain sectors. In the US case these were remote rural areas. In the Philippines these are the gated subdivisions of the rich or middle class where pollsters are not allowed.
The points I have been discussing are inherent limitations of polling science. They are not meant to disparage reliable polling agencies. Nor is it my intent to dissuade people from using polls. Rather, it is an attempt to help us understand polls properly, something even the scientists who run polls wish to achieve. It is also an appeal to fellow scientists to find ways to overcome what I now believe are factors skewing their data.
There are ways by which these limitations can be mitigated. One way is to ask a different set of questions such as when one poll showed that despite high ratings for the drug war, 71% of respondents said they find it "very important" that drug suspects be kept alive by police personnel. This finding is contradictory to the excellent net satisfaction rating for the drug war garnered during the same time.
Another way to overcome these limitations is to increase the number of people sampled. This is essentially what happened when the polls on the recent Brexit vote were "corrected" by the actual vote. The actual vote, after all, involves far more people than any poll sample.
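The intuition that a larger sample narrows uncertainty can be made concrete with the standard margin-of-error formula for a simple random sample, using the worst case of a 50-50 split at 95% confidence. This is a textbook sketch, not any polling agency's actual methodology, and the sample sizes used are illustrative:

```python
import math

def margin_of_error(n: int, z: float = 1.96) -> float:
    """Worst-case (p = 0.5) margin of error for a simple random
    sample of size n, at roughly 95% confidence (z = 1.96)."""
    return z * math.sqrt(0.25 / n)

# Illustrative sizes: two typical national poll samples,
# then a turnout on the scale of an actual vote.
for n in (600, 1200, 33_500_000):
    print(f"n = {n:>10,}: about ±{margin_of_error(n) * 100:.2f} percentage points")
```

A sample of 1,200 yields a margin of error of roughly ±3 percentage points; an actual vote of tens of millions shrinks sampling error to effectively zero. Note that a bigger sample only reduces sampling error; it does not, by itself, remove biases such as fear-driven answers.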
This is where we need more transparency from our polling companies. Statisticians generally use a formula called "Slovin's Formula" to determine sample size. Statistician Billy Almarinez notes in a Facebook post, however, that this is not the formula that the leading poll agencies have been using. Almarinez notes further that these agencies do not explain how they determine sample size, an observation I validated by looking for information on the websites of the two top polling companies.
Almarinez notes that there are ways other than the Slovin formula to determine sample size. As we have seen, these formulas have worked in correctly predicting election results. But the sample sizes are relatively small compared to what the Slovin formula would calculate. It is therefore not unfair to speculate that sample sizes may be too small to correct for our unusual situation.
Our view of the polls must also take into consideration the caution that we are getting a slice of people’s views that captures a very specific time period. Opinion polls do not reflect the processes of change that come from the everyday exchanges and discussions that are happening in a nation which is in political ferment. Opinion change is nuanced and incremental. Polls do not measure at this level of detail.
Motta and Paley further caution that the dictatorships they studied inappropriately used the polls to blunt criticism and shore up their legitimacy. Obviously, this did not prevent an eventual shift in their political fortunes.
In the meantime, I suggest that those of us who wish to assess the workings of our government not use opinion polls as a gauge of the government's failures and successes. We must also be cautious about what they are saying about the mood of our people. Lastly, if my analysis is correct, we cannot assume the polls are indicators of what will happen next. – Rappler.com
Sylvia Estrada Claudio, MD, PhD is a Professor of the Department of Women and Development Studies of the College of Social Work and Community Development, University of the Philippines.