ChatGPT data breach: Did you use it on March 20?

Emre Çitak
May 4, 2023

OpenAI, the company behind the AI-powered conversational agent ChatGPT, has acknowledged a data leak in the chatbot caused by a bug in an open-source library it relies on.

OpenAI revealed that a vulnerability in the open-source Redis client library used by ChatGPT allowed certain users to see the titles of another active user's chat history. In some cases, those users may also have been able to view the first message of a newly created conversation, but only if both parties were active on the chatbot at the same time.
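The failure mode here is a classic one for pooled connections: if a request is cancelled before its reply is read, and the connection is returned to the pool without being reset, the next user of that connection can receive the previous user's data. The sketch below is a hypothetical, simplified illustration of that pattern; the class and function names are invented for the example and do not come from the actual Redis library.

```python
from collections import deque


class FakeConnection:
    """Toy stand-in for a Redis connection with an in-order reply queue."""

    def __init__(self):
        self._replies = deque()

    def send_command(self, command):
        # The "server" queues one reply per command, in order.
        self._replies.append(f"reply-for:{command}")

    def read_reply(self):
        return self._replies.popleft()

    def reset(self):
        # Drop any replies left over from an abandoned request.
        self._replies.clear()


class ConnectionPool:
    def __init__(self, reset_on_release):
        self._free = deque([FakeConnection()])
        self._reset_on_release = reset_on_release

    def acquire(self):
        return self._free.popleft()

    def release(self, conn):
        if self._reset_on_release:
            conn.reset()  # the fix: clear stale state before reuse
        self._free.append(conn)


def simulate(reset_on_release):
    pool = ConnectionPool(reset_on_release)

    # User A sends a request but is cancelled before reading the reply,
    # then the connection goes back to the pool.
    conn = pool.acquire()
    conn.send_command("GET titles:user_a")
    pool.release(conn)  # A's un-read reply may still be buffered

    # User B reuses the same pooled connection.
    conn = pool.acquire()
    conn.send_command("GET titles:user_b")
    return conn.read_reply()  # which user's data does B see?


print(simulate(reset_on_release=False))  # reply-for:GET titles:user_a  <- cross-user leak
print(simulate(reset_on_release=True))   # reply-for:GET titles:user_b  <- correct
```

Without the reset, user B's first read pulls the reply that user A abandoned, which is the same shape of bug that let ChatGPT users see titles from other people's chat histories.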

The company also acknowledged that the same defect may have accidentally exposed payment-related information for paying ChatGPT subscribers who were active on the platform between 1 a.m. and 10 a.m. PST on March 20.

The affected users' names, email addresses, payment addresses, credit card type, and the last four digits of their payment card numbers may have been visible to other users during this period, although full payment card details were never exposed.

The ChatGPT data breach occurred between 1 a.m. and 10 a.m. PST on March 20

OpenAI says the number of users impacted by the bug was small: affected paying subscribers accounted for less than 1% of the total user base, and the leak was promptly resolved with limited damage. Even so, the incident raises concerns about the risks that chatbots and their users could face in the future.

This incident should serve as a warning about the importance of prioritizing and strengthening the security measures built into AI-powered conversational agents.

How did OpenAI deal with the ChatGPT data breach?

OpenAI addressed the issue promptly, taking ChatGPT offline as soon as the bug was detected on March 20 and resolving it the same day. The company has also assured the public that it is in the process of notifying everyone who may have been affected by the leak.

OpenAI has partnered with Bugcrowd to launch a bug bounty program, which the company states is part of its "commitment to secure AI" and to "recognize and reward the valuable insights of security researchers who contribute to keeping our technology and company secure." 

The program, run on the Bugcrowd platform, lets individuals report security flaws, vulnerabilities, or bugs found in OpenAI's systems in exchange for monetary rewards, ranging from $200 for low-severity findings to $20,000 for exceptional discoveries.

As chatbots grow more capable and more widely used, they introduce new attack surfaces, whether through their enhanced language capabilities or their sheer reach, making them an attractive target for cyber attackers. It is crucial for developers to remain vigilant and implement robust security measures to mitigate these risks and safeguard their users' privacy and security.

