
WHO warns against bias, misinformation in using AI in healthcare




The words AI and Artificial Intelligence are seen in this illustration taken on May 4, 2023

Dado Ruvic/Reuters

The World Health Organization is enthusiastic about AI use but has concerns over potentially biased training data and misleading or inaccurate information

The World Health Organization called for caution on Tuesday, May 16, in using artificial intelligence for public healthcare, saying data used by AI to reach decisions could be biased or misused.

The WHO said it was enthusiastic about the potential of AI but had concerns over how it will be used to improve access to health information, as a decision-support tool and to improve diagnostic care.

The WHO said in a statement that the data used to train AI may be biased and generate misleading or inaccurate information, and that the models can be misused to spread disinformation.

It was “imperative” to assess the risks of using large language model (LLM) tools, like ChatGPT, to protect and promote human wellbeing and protect public health, the UN health body said.

Its cautionary note comes as artificial intelligence applications are rapidly gaining in popularity, highlighting a technology that could upend the way businesses and society operate. –
