The Risks and Benefits of AI Assistants in the Workplace: Lessons from Samsung Semiconductor

Russell Kidson
Apr 4, 2023

Let's learn some lessons about using AI assistants in the workplace, thanks to Samsung Semiconductor.


Samsung Semiconductor recently allowed its fabrication engineers to use the AI assistant ChatGPT. It was soon discovered that engineers were inadvertently sharing confidential information, such as internal meeting notes and data on the performance and yield of their fabrication processes, while using the tool to quickly fix errors in their source code. Samsung Semiconductor now plans to build a ChatGPT-like AI service for internal use; in the meantime, it has limited the length of questions that can be submitted to the tool to 1024 bytes, according to a report by the Economist.
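
A cap like the reported 1024-byte limit can be enforced at a gateway before any prompt leaves the corporate network. The following is a minimal sketch of such a check; the function and error names are invented for illustration, and nothing here reflects Samsung's actual implementation:

```python
# Sketch of a prompt-length gate at an outbound proxy.
# MAX_PROMPT_BYTES mirrors the 1024-byte cap described in the article;
# the function and exception names are hypothetical.

MAX_PROMPT_BYTES = 1024

class PromptTooLargeError(ValueError):
    """Raised when a prompt exceeds the configured byte limit."""

def check_prompt_size(prompt: str, limit: int = MAX_PROMPT_BYTES) -> str:
    """Return the prompt unchanged if its UTF-8 encoding fits the limit."""
    size = len(prompt.encode("utf-8"))
    if size > limit:
        raise PromptTooLargeError(
            f"prompt is {size} bytes, exceeds limit of {limit}"
        )
    return prompt
```

Measuring bytes rather than characters matters here: multi-byte scripts such as Korean consume up to three UTF-8 bytes per character, so a 1024-byte cap is considerably tighter than a 1024-character one.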

Samsung Semiconductor has reported three instances in which the use of ChatGPT resulted in data leaks. Three may not sound like many, but all of the incidents occurred within a span of 20 days, which is cause for concern.

One of the reported incidents involved a Samsung Semiconductor employee who used ChatGPT to correct errors in the source code of a proprietary program. This action, however, unintentionally revealed the code of a highly classified application to an external company's artificial intelligence.
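
One way to reduce this particular risk is to redact proprietary identifiers from a snippet before it ever leaves the network. Below is a minimal sketch of that idea; the identifier names and their placeholders are invented for illustration, and a real tool would draw them from an internal registry rather than a hard-coded dictionary:

```python
import re

# Hypothetical mapping of proprietary identifiers to neutral placeholders.
# In practice this would come from an internal code registry, not a literal.
SENSITIVE_NAMES = {
    "fab_yield_model": "module_a",
    "chip_test_seq": "func_b",
}

def redact(source: str, mapping: dict[str, str] = SENSITIVE_NAMES) -> str:
    """Replace each sensitive identifier with its placeholder, whole-word only."""
    for name, placeholder in mapping.items():
        source = re.sub(rf"\b{re.escape(name)}\b", placeholder, source)
    return source
```

Redaction of this kind only masks known names; it does nothing for the logic of the code itself, which may be sensitive on its own, so it complements rather than replaces a usage policy.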

The second incident was even more concerning. An employee entered confidential test patterns, designed to identify defective chips, into ChatGPT and asked it to optimize the sequences. These test sequences are closely guarded intellectual property: an optimized sequence shortens the silicon test and verification process, which translates into significant time and cost savings, so exposing them hands that advantage to anyone who obtains them.

Another employee used the Naver Clova application to convert a recorded meeting into a document, which was then submitted to ChatGPT for use in creating a presentation. However, these actions posed a significant risk to confidential information, prompting Samsung to caution its employees about the potential dangers of using ChatGPT.

Samsung Electronics has informed its executives and staff that any data entered into ChatGPT is transmitted to and stored on external servers, making it difficult for the company to retrieve or delete it and increasing the risk of data leakage. Although ChatGPT is a useful tool, prompts submitted to it may be used as training data, which can expose sensitive information to third parties, something that is unacceptable in the highly competitive semiconductor industry.

Samsung is taking steps to prevent similar incidents from occurring in the future. If another data breach happens, even after implementing emergency information protection measures, access to ChatGPT may be restricted on the company network. Nonetheless, it's evident that generative AI and other AI-powered electronic design automation tools will play a crucial role in the future of chip manufacturing.

Regarding the data leakage incident, a Samsung Electronics spokesperson declined to confirm or deny any details, citing the sensitive nature of the matter as an internal issue.

Balancing the benefits and risks of AI assistants in the workplace

The integration of AI assistants in the workplace has the potential to bring about numerous benefits, including increased productivity and efficiency. AI tools such as ChatGPT can swiftly detect and rectify errors, freeing up valuable time for employees to focus on other tasks. Moreover, AI assistants can learn from previous interactions and adapt to better serve the needs of employees, resulting in even greater productivity gains.

However, the use of AI assistants in industries that deal with sensitive information raises significant concerns about data privacy and security. The inadvertent sharing of confidential information by Samsung Semiconductor's employees using ChatGPT highlights the potential risks of using AI tools in high-stakes environments. If not managed properly, the use of AI assistants can lead to data breaches and leaks, exposing confidential information to unauthorized third parties.

It is important to note that the risks associated with AI assistants are not limited to intentional data breaches or malicious attacks. Even well-intentioned employees can unintentionally share sensitive information by entering it into AI tools without fully comprehending the consequences. Additionally, because many AI assistants may use submitted prompts as training data, sensitive information can later resurface in front of third parties, further increasing the risk of data breaches.

Given these risks, it is crucial for companies to carefully evaluate the potential benefits and drawbacks of using AI assistants in the workplace, especially in industries that handle sensitive information. Companies must establish clear guidelines for the use of AI tools, including limitations on the types of data that can be entered and stored in these systems. Furthermore, companies must implement strong data privacy and security measures, such as encryption and access controls, to minimize the risk of data breaches and leaks.
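
Guidelines like these are often backed by an automated pre-submission filter that blocks obviously sensitive material before it reaches an external service. The sketch below shows the general shape of such a check; the patterns and labels are invented for illustration, and production data-loss-prevention systems use far richer rule sets:

```python
import re

# Illustrative patterns a company might flag before a prompt leaves the
# network. Real DLP deployments use much more sophisticated rules.
BLOCKED_PATTERNS = [
    re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),
    re.compile(r"\byield\s+report\b", re.IGNORECASE),
    re.compile(r"\btest\s+sequence\b", re.IGNORECASE),
]

def screen_prompt(prompt: str) -> list[str]:
    """Return the patterns the prompt violates; an empty list means allowed."""
    return [p.pattern for p in BLOCKED_PATTERNS if p.search(prompt)]

def is_allowed(prompt: str) -> bool:
    """True if the prompt triggers no blocking rule."""
    return not screen_prompt(prompt)
```

Returning the list of triggered patterns, rather than a bare boolean, lets the gateway tell the employee why a prompt was blocked, which makes the policy easier to follow than a silent rejection.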

AI: A double-edged sword if there ever was one

The recent data leaks resulting from Samsung Semiconductor employees' use of ChatGPT have raised crucial concerns about the potential risks and benefits of incorporating AI assistants in the workplace. While AI tools like ChatGPT can significantly enhance productivity, they can also expose companies to data privacy and security risks.

Samsung Semiconductor has acknowledged these risks and is implementing measures to prevent future incidents. Nonetheless, this situation serves as a valuable reminder for other companies considering the adoption of AI assistants in their operations. To mitigate the risks associated with AI tools, companies should establish well-defined guidelines for their use and implement robust data privacy and security measures.

The case of Samsung Semiconductor emphasizes the importance of striking a balance between the potential benefits and risks of AI assistants in the workplace. As AI tools continue to proliferate, companies must take proactive measures to ensure that confidential information is shielded from unauthorized access and disclosure. Ultimately, the successful integration of AI assistants in the workplace necessitates thoughtful consideration of the potential risks and benefits, as well as the implementation of effective risk management strategies.



