Responsible AI Platform
Technical Concepts

Adversarial Testing

Definition & Explanation

Definition

Systematically testing AI systems for vulnerabilities by exposing them to deliberately misleading or malicious input. Required for GPAI models with systemic risk. Includes red teaming, jailbreak attempts, and robustness testing.

Related Terms

Read more about this topic