Responsible AI Platform
Technical Concepts

Red Teaming

Definition & Explanation

Definition

A testing method where experts try to "break" an AI system by finding vulnerabilities, bias, or unwanted behavior. Required for GPAI models with systemic risk under the EU AI Act. Part of adversarial testing to ensure robustness.

Related Terms

Read more about this topic