Samsung bans employees from using ChatGPT-like technology following data leak

Generative AI's impressive capabilities have made their way into the workplace, with AI chatbots helping people focus on their primary responsibilities rather than mundane tasks. However, using ChatGPT in workflows often involves feeding it sensitive information, and that has raised alarms at Samsung HQ.


Recently, Samsung Electronics Co. discovered that employees were uploading sensitive code to ChatGPT, prompting the South Korea-based company to bar staff from using the AI chatbot and other generative AI tools. According to Bloomberg News, Samsung notified employees of one of its largest divisions about the new policy via a memo on Monday.

The memo noted that Samsung is concerned that data transmitted to AI platforms like ChatGPT, Google Bard, and Bing is stored on external servers. This makes the data difficult to retrieve and delete, and creates a risk of it being disclosed to other users.

While Samsung has seen growing interest in generative AI platforms such as ChatGPT, it has also noted mounting concern about the security risks these tools present. As a result, the ban covers company-owned computers, tablets, and phones, as well as Samsung's internal networks. Any disclosure of confidential or personal company information to generative AI tools is prohibited and could result in termination of employment.

The memo also highlighted that Samsung engineers accidentally leaked internal source code by uploading it to ChatGPT last month, though it did not specify what the code pertained to. Alongside the ban, Samsung is developing its own internal AI tools for translation, document summarization, and software development, and it is working on ways to block the upload of sensitive information to external services.

