Western academia helps build China’s automated racism

Researchers in China are developing new and more invasive techniques to surveil Uyghurs. Some of their work is being supported by academia in the West.

Last summer, a respected U.S. academic journal about data mining published a study titled “Facial feature discovery for ethnicity recognition”, authored by four professors in China and one in Australia. The study found that an effective way for facial recognition systems to automatically predict the ethnicity of minorities in China was to focus on specific, T-shaped regions of their faces. To reach this conclusion, the researchers took over 7,000 photographs of 300 Uyghur, Tibetan, and Korean students at Dalian Minzu University in northeastern China.

The study, which received funding from Chinese government foundations, attracted little attention when it was published, but went viral at the end of May when PhD student Os Keyes tweeted out its abstract, writing: “TIL [today I learned] there’s a shitton of computer vision literature in 2017-2018 that COINCIDENTALLY tries to build facial recognition for Uyghur people. How. Curious.” Keyes’ post was retweeted over 500 times.

The study sparked concern for good reason. China’s government is waging a well-documented mass surveillance and internment campaign against the Uyghurs, a predominantly Muslim people in the country’s far western region of Xinjiang, where around one million of them have been detained in “re-education” camps. From facial recognition cameras in mosques to mass DNA collection and iris scans, biometrics are being deployed in Xinjiang to track Uyghurs and other minorities on an unprecedented scale. Most of China’s billion-dollar facial recognition startups now sell ethnicity analytics software for police to automatically distinguish Uyghurs from others.

Despite this, papers refining facial recognition techniques to identify Uyghurs are being published in U.S. and European academic journals and presented at international computer science conferences. China’s largest biometrics research conference, last held in Xinjiang in 2018, featured prominent U.S. artificial intelligence (AI) researchers as keynote speakers, including one from Microsoft. One paper at the conference, co-authored by local police, discussed ways to find “terrorism” and “extreme religion” content in Uyghur script.

Separately, Imperial College London is hosting an open facial recognition competition sponsored in part by DeepGlint, a Chinese AI startup that advertises its Uyghur ethnicity recognition capabilities to police on its Chinese-language website, which also boasts of several Xinjiang security projects. The competition’s organizer said he was not aware of DeepGlint’s role in tracking Uyghurs and that he would not accept funding from the company in the future. (Update, August 9, 2019: Imperial College organizers have removed DeepGlint as one of the competition’s sponsors.)

The U.S. journal that published the viral study on recognizing Chinese minorities is called Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery. It is part of Wiley, a publicly traded, multibillion-dollar academic publishing house based in New Jersey. After the paper was heavily criticized on Twitter and Reddit, the journal’s editor-in-chief, Witold Pedrycz, released a statement defending its publication, saying the paper had undergone a “stringent editorial process” and absolving Wiley of responsibility if its results were used in a harmful way.

“Like other technologies in the area of intelligent systems, facial recognition can and will have far-reaching implications, both positive and negative, and potentially can be used for possible unexpected malicious purposes,” the statement read. “The editors and Wiley do not agree with or support such usages of the developed concept or methods.” There is no evidence the paper’s conclusions were directly applied by Xinjiang’s police or Chinese surveillance companies. Still, its authors note that “face recognition has great application potential in border control, customs check, and public security.” The paper remains freely accessible.

James Leibold, an expert on ethnicity and race in China and professor at Australia’s La Trobe University, thinks the risk the paper poses is unacceptable. “There is a significant possibility that this research will adversely affect minority communities in China due to the widespread use of pre-emptive racial profiling, which violates the presumption of innocence and other rights of the individual,” he said. 

Leibold, whose work focuses on Xinjiang, also raised concerns about the students whose faces were used for the paper, which makes no mention of consent, informed or otherwise. “Ethical procedures aren’t as stringent in China, and it’s possible that the minority students who participated in the project at Dalian Minzu University were not fully aware of how their data was going to be used, and the implications for their own communities.” 

Neither the paper’s authors, nor Pedrycz, nor Wiley replied to requests for comment for this article. Nor did Curtin University, the Australian institution where co-author Wan Quan Liu is an associate professor of computer science; Liu has co-written other studies on ethnic recognition techniques in China. Last month, Curtin University announced it is reviewing its research approval procedures after journalists revealed Liu’s role in developing AI techniques to better identify ethnic minorities in China.

Western academia in China

The Wiley journal was not the first Western venue to welcome Uyghur facial recognition research. In 2017, a study by professors at Xinjiang University stated that the school’s College of Information Science and Engineering had begun building a facial recognition database dubbed XJU1, containing “at least” 800 Uyghur and Kazakh “volunteers”. The participants would be captured in different poses and under different lighting conditions – issues that have long plagued accurate facial recognition – in order to “test various baseline algorithms” on them.

The resulting paper, also funded by Chinese government science foundations, was published online by the Institute of Electrical and Electronics Engineers, or IEEE, a U.S. professional association with over 400,000 members in more than 160 countries. The paper was also presented at the International Conference on Machine Vision and Information Technology in Singapore. Although there’s no evidence this paper has been used by police or Chinese surveillance firms, human rights researchers have concluded that Uyghurs and Kazakhs are the two main minorities at the heart of China’s Xinjiang crackdown.

The same school behind the XJU1 facial recognition database, Xinjiang University’s College of Information Science and Engineering, has strong connections to China’s booming biometrics industry. Last August, it hosted China’s largest academic conference for biometrics, the Chinese Conference on Biometric Recognition or CCBR 2018. Xinjiang was chosen by the conference’s organizers, two Chinese government-backed AI research entities, at a troubling time. In the months leading up to the conference, Human Rights Watch had raised the alarm over the use of biometrics in Xinjiang, stating that authorities were building a system called the Integrated Joint Operations Platform that systematically surveilled minorities, gathered their personal data, and potentially flagged them for detention. Facial recognition was a critical tool. 

All this didn’t stop Springer Nature, a London and Berlin-based academic publishing giant, from being listed as a “technical sponsor” of CCBR 2018 or from publishing all the conference’s papers on its website. It also didn’t stop three prominent U.S. facial recognition researchers from flying to Xinjiang’s capital of Urumqi to give keynote speeches at the conference. 

Anil Jain, head of Michigan State University’s Biometrics Research Group and a researcher often quoted on U.S. facial recognition issues in outlets like Wired and Slate, sat on the CCBR’s advisory board and was pictured receiving an honorary certificate.

Top Microsoft researcher Gang Hua, who has since left the company, and Qiang Ji, a professor at Rensselaer Polytechnic Institute (RPI), both gave speeches. All three researchers were listed as “special guests” on the conference’s website. None of them responded to requests for comment about the trip, and none mentions the conference on his website. A Microsoft spokesperson said the company “is not working with the Chinese government on any surveillance projects” but did not comment on Gang Hua specifically. Michigan State University said it “declined to participate” in this article. RPI stated that “like their colleagues at other research universities, Rensselaer faculty members regularly present at international conferences.”

Though the 2018 CCBR conference didn’t tackle facial recognition of Uyghurs, other surveillance techniques were discussed. One study, authored by researchers at Xinjiang Police College and Xinjiang University, described a new technique for finding “harmful text information” hidden in images related to “extreme religion and terrorism information”.

These kinds of tools are already being abused in Xinjiang. When journalists reverse engineered a Xinjiang police app, they found it was automatically scanning smartphones for innocuous Islamic material. 

Another paper presented at the CCBR, written by researchers at the Ministry of Public Security in Beijing, analyzed different ways to foil attempts to bypass or trick police iris scans, noting that “in 2017, the Xinjiang Uygur Autonomous Region began to collect various biometric information including iris.” Refusing to submit to iris scans can reportedly result in Uyghurs and other minorities being sent to re-education camps.

Facial recognition for Uyghurs is something the CCBR has tackled before, just not at the Urumqi meetup. At its 2016 conference in Chengdu, a paper titled “Facial Ethnicity Classification with Deep Convolutional Neural Networks” presented new and better ways to tell Uyghur faces apart from those of China’s Han ethnic majority.

A Springer Nature spokesperson said the company had a publishing contract for CCBR conference proceedings and also supported the Urumqi conference’s Best Paper Award. “CCBR has a well-regarded reputation as a high-quality technical conference focused on biometric research. The conference provides a pivotal platform for the open discussion of, and engagement with, all research,” the spokesperson said.

Academia and ethics

So far, the social media pushback to the Wiley study has been one of the few signs of public resistance in academia to these types of collaboration. Some have started speaking up publicly. Dave Churchill, an assistant computer science professor in Canada, tweeted in May that “the Chinese government is literally using AI to track Muslims and put them in concentration camps. But AI researchers don’t speak out because they’re worried about their fucking citation counts / invited speaking opportunities”. 

Churchill said part of the issue was that, for researchers, the risks of speaking out so clearly outweighed the benefits. “There’s so little that you actually do by speaking out, especially against something like the Chinese Communist Party, and losing out is so much worse,” he said.

Chinese researchers and companies are having more discussions about AI ethics, but the emergence of such a field is unlikely to staunch big data-driven repression. Lorand Laskai, a visiting researcher at Georgetown University’s Center for Security and Emerging Technology, said minority and even individual rights are typically sidestepped in such discussions. “During a recent debate on AI ethics and norms, Chinese scholar Zhang Wei said that Chinese values mean that China will value the security of the collective over the rights of the individual when it comes to AI,” Laskai said. “I think this is indicative of the Chinese approach to AI ethics.” – Rappler.com

Charles Rollet covers video surveillance for IPVM. He also contributes to Foreign Policy and Wired.

This article has been republished from Coda Story with permission.
