AI expert, medical professionals raise some concerns over Huawei diagnostic tool
MANILA, Philippines – A COVID-19 diagnostic tool by Chinese tech giant Huawei began operation at a Baguio hospital on Tuesday, March 24.
Trained with data from the Wuhan outbreak, the machine learning and AI-powered Huawei system is designed to help speed up COVID-19 diagnoses using computed tomography (CT) scans. It identifies patterns in the scans that may indicate the presence and progression of the disease. (READ: Artificial intelligence may be pandemic lifesaver... one day)
Huawei began offering the system to Philippine hospitals a week ago, with Baguio General Hospital being the first to publicly announce its adoption of the technology.
But some groups are skeptical.
One of them is the Philippine College of Radiology, a society of Philippine radiologists, which questioned the use of CT scans as a viable diagnostic option, saying screening still relies on the examination and testing of samples from the upper and lower respiratory tracts. It said that neither chest radiographs nor CT scans should be used as the “first-line test” for diagnosis.
It added that CT scanning may unnecessarily expose patients being scanned for other ailments to the coronavirus, that ventilation in scanning rooms may facilitate the virus' spread, and that Philippine data privacy laws could potentially be overstepped. Its full statement, addressed to the Department of Health and hospital administrators, can be viewed here.
The Society of Medical Physicists in the Republic of the Philippines released its own statement echoing similar concerns and adding issues related to cost. The organization said it joined an online meeting with Huawei but in the end still could not recommend the technology, saying “several challenges” need to be addressed from both a medical and a data privacy standpoint.
A source from Huawei, speaking anonymously, said that while CT scan data is uploaded to Huawei's cloud platform, only the hospital will have access to the data, which it can download, delete, or anonymize. The company has also reported diagnosis accuracy rates of up to 98% for the tool, which was first used in China. (READ: AI tool predicts which coronavirus patients get deadly 'wet lung')
The technology is said to help doctors distinguish between early, advanced, and severe stages of COVID-19, and more quickly evaluate a patient's progress and the effects drugs may be having on them. (READ: Huawei looks to partner with PH hospitals for AI-assisted coronavirus diagnosis)
The debate over the use of CT scans in COVID-19 diagnosis remains unsettled.
The American College of Radiology “strongly urges caution” when using CT scans as an interim measure amid the shortage of testing kits, noting similar concerns about ventilation and exposure to infected equipment, and that chest imaging findings in COVID-19 may overlap with those of other viral infections.
But some research has also shown that COVID-19 produces patterns detectable in CT scans.
Guidelines to protect local data
Data privacy concerns were also raised by an AI professional – Ralph Regalado, CEO of local startup Senti AI, who is currently working with the Department of Health – over the use of AI-powered tools from third parties.
Regalado, in a written piece, welcomes such technology but warns that the right contracts and guidelines must be put in place to protect local data.
Regalado didn’t name Huawei and its AI diagnostic tool specifically, as his piece discusses AI tools in general that could also be made and offered by other companies.
Regalado said that it should be discussed who would own the data, where it would be stored, and how to ensure that the third-party provider will delete it. “Hospitals that primarily store [the patient data] and the government have the authority to dictate how it should be used and set usage limitations, making sure that these patient data are protected,” he wrote.
Will the data be anonymized? Are the patients aware that their data will be used? Who will be accountable in case of a data breach? Who will be accountable if the predictions are wrong? Who has approved the system as ready for commercial use? And how accurate will the system be if it’s trained on data collected from another country? These are some of the key questions Regalado asks.
“The list of items to factor in is long and demanding but this doesn’t mean that we shouldn’t welcome these AI systems. It’s just a matter of writing the right contracts and setting guidelines to protect our local data. It’s a step that we must do to be ready to use these AI innovations,” said Regalado. – Rappler.com