MANILA, Philippines – Programs used to detect whether a piece of writing was produced by a human or by artificial intelligence "frequently misclassify non-native English writing as AI generated," according to a study published in the journal Patterns on Monday, July 10.
The study, written by a team of scientists led by James Zou, an assistant professor of biomedical data science at Stanford University, ran 91 Test of English as a Foreign Language (TOEFL) essays written by non-native English speakers and 88 US eighth-grade essays from the Hewlett Foundation's ASAP dataset through seven AI writing detectors – or Generative Pre-trained Transformer (GPT) detectors – to determine what biases, if any, the programs showed.
According to the study, “While the detectors accurately classified the US student essays, they incorrectly labeled more than half of the TOEFL essays as ‘AI-generated’ (average false-positive rate: 61.3%). All detectors unanimously identified 19.8% of the human-written TOEFL essays as AI authored, and at least one detector flagged 97.8% of TOEFL essays as AI generated.”
The study pointed out that the difference between the two sets of essays lay primarily in text perplexity, defined as a measure of how "surprised" or "confused" a generative language model is when trying to guess the next word in a sentence. The TOEFL essays showed lower text perplexity.
GPT detectors that use text perplexity to distinguish AI-generated from human-written text would thus penalize non-native writers, whose range of linguistic expression tends to be narrower than that of native English writers.
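To make the perplexity idea concrete, here is a toy sketch in Python. It is not the detectors' actual models: it uses only a tiny unigram language model with add-one smoothing, and the "background corpus" and sentences are invented for illustration. Words the model has seen often get high probability, so a sentence built from them is less "surprising" and scores lower perplexity, while rarer word choices push perplexity up.

```python
import math
from collections import Counter

def unigram_perplexity(text, counts, total, vocab_size):
    """Perplexity of `text` under a unigram model with add-one smoothing.
    Lower perplexity means the model finds each word less 'surprising'."""
    words = text.lower().split()
    log_prob = 0.0
    for w in words:
        p = (counts.get(w, 0) + 1) / (total + vocab_size)
        log_prob += math.log(p)
    return math.exp(-log_prob / len(words))

# Hypothetical 'background' corpus standing in for the detector's language model.
corpus = ("the students write essays about school and the teachers "
          "read the essays and the students learn").split()
counts = Counter(corpus)
total = len(corpus)
vocab_size = len(counts)

common = "the students write essays"            # frequent, predictable words
rare = "pupils compose exuberant compositions"  # unseen words surprise the model

print(unigram_perplexity(common, counts, total, vocab_size))  # lower
print(unigram_perplexity(rare, counts, total, vocab_size))    # higher
```

The same intuition scales up: a writer drawing on a smaller, more common vocabulary produces text the model predicts easily, which is exactly the statistical signature these detectors associate with AI output.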
Catching AI plagiarism also a problem
The study also found that GPT detectors had an even harder time distinguishing human-written from AI-written essays once prompts were adjusted to increase text perplexity.
When ChatGPT-generated essays were fed back through ChatGPT with a prompt to use more literary language, "detection rates plummeted to near zero," the study noted.
Said the study, “These findings underscore the vulnerabilities of current detection techniques, indicating that a simple manipulation in prompt design can easily bypass current GPT detectors.”
The study points out the implications of GPT detectors for non-native writers as a point of potential discrimination, especially in educational settings where non-native English writers and speakers risk accusations of cheating.
The study added, “Within social media, GPT detectors could spuriously flag non-native authors’ content as AI plagiarism, paving the way for undue harassment of specific non-native communities. Internet search engines, such as Google, that implement mechanisms to devalue AI-generated content may inadvertently restrict the visibility of non-native communities, potentially silencing diverse perspectives. Academic conferences or journals prohibiting use of GPT may penalize researchers from non-English-speaking countries.”
As such, the detector tools tend to foster an atmosphere of "presumption of guilt": students are presumed dishonest and must prove themselves trustworthy instead.
Paradoxically, non-native English writers, academics, and students may even be more inclined to use AI tools to get past such barriers, since ChatGPT and the like can add text perplexity that fools GPT detectors more readily.
The study thus makes four recommendations to support the responsible use of GPT detectors and the development of more equitable methods.
The study first cautions against “the use of GPT detectors in evaluative or educational settings, particularly when assessing the work of non-native English speakers.”
Second, it asks for a more comprehensive evaluation of GPT detectors. Said the study, “To mitigate unjust outcomes stemming from biased detection, it is crucial to benchmark GPT detectors with diverse writing samples that reflect the heterogeneity of users.”
Third, it asks that "the design and use of GPT detectors should not follow a one-size-fits-all approach. Rather, they should be designed by domain experts and used in collaboration with users." GPT detectors should thus be evaluated within the specific domains they are meant to serve, and any risks or biases that remain should be clearly communicated to users.
Lastly, the study calls for inclusive conversations involving all stakeholders, including developers, students, educators, policymakers, ethicists, and those affected by GPT. "It's essential to define the acceptable use of GPT models in various contexts, especially in academic and professional settings," the study said, pointing out how GPT models can enhance writing, especially for those without the same mastery of the English language.
"Could it be considered as a legitimate use case where GPT augments, not supplants, human efforts, assisting in language construction without undermining the originality of ideas?" Such a dialogue, the study's authors say, can help develop more enlightened and fair policies governing AI use in writing, allowing for its benefits while minimizing the harms it can cause. – Rappler.com