Workers at Samsung’s semiconductor arm inadvertently leaked confidential company data to ChatGPT on at least three separate occasions, TechRadar reported on Tuesday, April 4.
Samsung had lifted its ban on ChatGPT about three weeks earlier; the ban had originally been imposed precisely to prevent company data from leaking in the first place.
Gizmodo, in a report citing Korean media, noted that two of the incidents involved troubleshooting faulty code. In one, a Samsung employee asked ChatGPT to help diagnose source code from a faulty semiconductor database. In another, an employee shared confidential code with ChatGPT while seeking fixes for defective equipment.
In the third case, an employee reportedly submitted an entire meeting so ChatGPT could create meeting minutes.
OpenAI saves data from ChatGPT prompts to improve its AI models unless users opt out. It has previously warned against sharing sensitive information because, as it notes in its overview of how the service works, it is “not able to delete specific prompts.”
A research blog post from data detection and response platform provider Cyberhaven says, “As of March 21, 8.2% of employees have used ChatGPT in the workplace and 6.5% have pasted company data into it since it launched” – a notable security risk if there ever was one.
Samsung Semiconductor is now developing an artificial intelligence program for in-house use, which will limit prompts to a maximum of 1,024 bytes. – Rappler.com