Definition
A testing method where experts try to "break" an AI system by finding vulnerabilities, bias, or unwanted behavior. Required for GPAI models with systemic risk under the EU AI Act. Part of adversarial testing to ensure robustness.
Definition & Explanation
A testing method where experts try to "break" an AI system by finding vulnerabilities, bias, or unwanted behavior. Required for GPAI models with systemic risk under the EU AI Act. Part of adversarial testing to ensure robustness.