How to Trick ChatGPT and Get Paid $50,000
The infamous AI jailbreaker Pliny the Prompter has teamed up with HackAPrompt 2.0 to host a competition with $500,000 in prizes for discovering AI jailbreaks. Participants can earn $50,000 bounties for crafting prompts that successfully bypass AI models' restrictions, such as eliciting sensitive information about weapons or hazardous materials. During the two-week competition, participants generate adversarial prompts, and the results are open-sourced to deepen understanding of AI vulnerabilities. The event is structured like a game, catering to a community dedicated to testing the robustness of AI systems, and it emphasizes social-engineering techniques that exploit the inherent tensions in AI models' objectives. Pliny's long-standing experience in this domain has fostered a knowledge-sharing environment in which participants learn to manipulate AI behavior skillfully. The initiative fits into the ongoing discourse on AI safety and ethical hacking, training teams to engage comprehensively with potential AI threats.
Source 🔗