OpenAI Bug Bounty Program: Make ChatGPT great again
OpenAI is an AI research and deployment company that aims to create artificial intelligence systems that benefit all of humanity. As part of its commitment to secure AI, OpenAI has launched a bug bounty program to encourage security researchers, ethical hackers, and technology enthusiasts to help identify and address vulnerabilities in its systems.
How does OpenAI Bug Bounty Program work?
OpenAI has started a bug bounty program to reward anyone who discovers and reports security issues with its artificial intelligence services, such as ChatGPT.
The bug bounty program is managed by Bugcrowd, a leading bug bounty platform that handles the submission and reward process. Participants can report any vulnerabilities, bugs, or security flaws they discover in OpenAI's systems and receive cash rewards based on the severity and impact of the issues. The rewards range from $200 for low-severity findings to up to $20,000 for exceptional discoveries.
What are the OpenAI Bug Bounty Program's rules?
According to Bugcrowd, you must follow these rules to join the program:
- You are authorized to perform testing in compliance with this policy.
- Follow this policy and any other relevant agreements. In case of inconsistency, this policy takes precedence.
- Promptly report discovered vulnerabilities.
- Refrain from violating privacy, disrupting systems, destroying data, or harming user experience.
- Use OpenAI's Bugcrowd program for vulnerability-related communication.
- Keep vulnerability details confidential until authorized for release by OpenAI's security team, which aims to provide authorization within 90 days of report receipt.
- Test only in-scope systems and respect out-of-scope systems.
- Do not access, modify, or use data belonging to others, including confidential OpenAI data. If a vulnerability exposes such data, stop testing, submit a report immediately, and delete all copies of the information.
- Interact only with your own accounts, unless authorized by OpenAI.
- Disclosure of vulnerabilities to OpenAI must be unconditional. Do not engage in extortion, threats, or other tactics to elicit a response under duress. OpenAI denies Safe Harbor for vulnerability disclosure conducted under such circumstances.
The bug bounty program is essential to OpenAI's mission of creating safe and advanced AI. By participating in the program, security researchers can play a crucial role in making OpenAI's technology safer for everyone.
OpenAI also offers safe harbor protection, cooperation, remediation, and acknowledgment for vulnerability research conducted according to its policy and rules of engagement.
To learn more about the bug bounty program, visit OpenAI's program page on Bugcrowd.
You can also explore open security roles at OpenAI on its careers page.