Definition
Techniques to bypass the safety measures and restrictions of AI models, causing the model to generate content that would normally be blocked. GPAI providers with systemic risk must test their models for jailbreaking as part of adversarial testing.