
Anthropic launches $15K jailbreak bounty program for its unreleased next-gen AI


Source: Coin Telegraph

The program will be open to a limited number of participants initially but will expand at a later date.

Artificial intelligence firm Anthropic announced the launch of an expanded bug bounty program on Aug. 8, with rewards as high as $15,000 for participants who can “jailbreak” the company’s unreleased, “next generation” AI model.

Anthropic’s flagship AI model, Claude-3, is a generative AI system similar to OpenAI’s ChatGPT and Google’s Gemini. As part of the company’s efforts to ensure that Claude and its other models are capable of operating safely, it conducts what’s called “red teaming.”

Red teaming is the practice of deliberately trying to break a system. In Claude’s case, the point of red teaming is to find all of the ways the model could be prompted, forced, or otherwise perturbed into generating unwanted outputs.


Author: Tristan Greene