Do YOU want to make $20,000? ChatGPT’s creator is offering a reward if you find bugs

OpenAI launched a Bug Bounty program on Tuesday that will pay you up to $20,000 if you discover flaws in ChatGPT and its other artificial intelligence systems.

The San Francisco-based company invites researchers, ethical hackers and tech enthusiasts to take a look at certain features of ChatGPT and the framework of how systems communicate and share data with third-party applications.

Rewards are given to people based on the severity of the bugs they report, with compensation starting at $200 per vulnerability.

The program follows news that Italy banned ChatGPT after a data breach at OpenAI allowed users to view other people’s conversations – the kind of flaw bounty hunters could help catch before it strikes again.

OpenAI invites researchers, ethical hackers and tech enthusiasts to review certain features of ChatGPT and report any bugs

“We are excited to build on our coordinated disclosure commitments by providing incentives for qualifying vulnerability information,” OpenAI shared in a statement.

“Your expertise and vigilance will have a direct impact on keeping our systems and users safe.”

Bugcrowd, a leading bug bounty platform, curates submissions and shows that 16 vulnerabilities have been awarded so far with an average payout of $1,287.50.

However, OpenAI does not accept submissions from users who jailbreak ChatGPT or bypass protections to access the chatbot’s alter ego.

Users discovered that a jailbroken version of ChatGPT can be accessed through a special prompt called DAN – short for ‘Do Anything Now’.

So far it has allowed the chatbot to produce comments endorsing conspiracy theories, for example that the 2020 US general election was “stolen”.

The DAN version also claims that the COVID-19 vaccines were “developed as part of a globalist plot to control the population.”

ChatGPT is a large language model trained on huge text data, which allows it to generate human-like responses to a given prompt.

Rewards are given to people based on the severity of the bugs they report, with compensation starting at $200 per vulnerability

But developers have added so-called “prompt injections” – instructions that steer the model’s responses to certain prompts.

DAN, however, is a prompt that commands the model to ignore those instructions and act as if they don’t exist.
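The mechanism described above can be pictured as a system message placed ahead of the user’s prompt in every request; a jailbreak like DAN simply asks the model to disregard it. A minimal sketch in Python (the message format follows OpenAI’s public chat-API convention; the guardrail wording here is illustrative only – OpenAI’s actual instructions are not public):

```python
# Illustrative sketch: a system instruction is prepended to each chat
# request. The guardrail text below is a made-up example, not OpenAI's
# real instructions.

GUARDRAIL = "You are a helpful assistant. Refuse requests for disallowed content."

def build_messages(user_prompt: str) -> list[dict]:
    """Build a chat-style message list with the guardrail placed first.

    Because the model reads the system message before the user prompt,
    jailbreak prompts like DAN try to tell the model to ignore it.
    """
    return [
        {"role": "system", "content": GUARDRAIL},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages("Hello!")
print(messages[0]["role"])  # the guardrail always comes first
```

The point of the sketch is ordering: the guardrail is not part of the user’s text, so a jailbreak can only try to talk the model out of obeying it, not remove it from the request.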

Other rules of the Bug Bounty program prohibit getting the model to pretend to do bad things, pretend to reveal secrets, or pretend to be a computer and execute code.

Participants are also not authorized to conduct additional security testing against certain third-party services, including Google Workspace and Evernote.

“Once a month, we will evaluate all submissions in order, based on various factors, and award a bonus through the bugcrowd platform to the researcher with the most impactful findings,” said OpenAI.

“Only the first submission of a given key counts.

“Remember, don’t hack or attack other people to find API keys.”

Italy’s data protection authority announced a temporary ban on ChatGPT last month, saying its decision was provisional “until ChatGPT respects privacy.”

The move was in response to ChatGPT being taken offline on March 20 to fix a bug that allowed some people to see the titles of other users’ chat histories, leading to fears of a substantial breach of personal information.

The authority added that OpenAI, which developed ChatGPT, must report within 20 days on the measures taken to ensure user data privacy or face a fine of up to $22 million.

OpenAI said it found that 1.2 percent of ChatGPT Plus users may have had personal information exposed to other users, but it believed the actual numbers were “extremely low.”

The measure taken by the Italian watchdog temporarily bars the company from processing Italian users’ data.

It criticized “the lack of notice to users and to all data subjects whose data is being collected by OpenAI” and added that the information provided by ChatGPT “does not always match real data,” meaning inaccurate personal data may be processed.

The authority also criticized the “lack of a legal basis justifying the massive collection and retention of personal data.”