Responsible AI Platform

Annex XI AI Act

Technical documentation referred to in Article 53 — for GPAI models

Official text

Technical documentation referred to in Article 53(1), point (a) — technical documentation for providers of general-purpose AI models

Section 1 Information to be provided by all providers of general-purpose AI models

The technical documentation referred to in Article 53(1), point (a) shall contain at least the following information as appropriate to the size and risk profile of the model:

1. A general description of the general-purpose AI model including:

(a) the tasks that the model is intended to perform and the type and nature of AI systems in which it can be integrated;

(b) the acceptable use policies applicable;

(c) the date of release and methods of distribution;

(d) the architecture and number of parameters;

(e) the modality (e.g. text, image) and format of inputs and outputs;

(f) the licence.

2. A detailed description of the elements of the model referred to in point 1, and relevant information of the process for the development, including the following elements:

(a) the technical means (e.g. instructions of use, infrastructure, tools) required for the general-purpose AI model to be integrated in AI systems;

(b) the design specifications of the model and training process, including training methodologies and techniques, the key design choices including the rationale and assumptions made; what the model is designed to optimise for and the relevance of the different parameters, as applicable;

(c) information on the data used for training, testing and validation, where applicable, including the type and provenance of data and curation methodologies (e.g. cleaning, filtering, etc.), the number of data points, their scope and main characteristics; how the data was obtained and selected as well as all other measures to detect the unsuitability of data sources and methods to detect identifiable biases, where applicable;

(d) the computational resources used to train the model (e.g. number of floating point operations), training time, and other relevant details related to the training;

(e) known or estimated energy consumption of the model.

With regard to point (e), where the energy consumption of the model is unknown, the energy consumption may be based on information about computational resources used.

Section 2 Additional information to be provided by providers of general-purpose AI models with systemic risk

1. A detailed description of the evaluation strategies, including evaluation results, on the basis of available public evaluation protocols and tools or otherwise of other evaluation methodologies. Evaluation strategies shall include evaluation criteria, metrics and the methodology on the identification of limitations.

2. Where applicable, a detailed description of the measures put in place for the purpose of conducting internal and/or external adversarial testing (e.g. red teaming), model adaptations, including alignment and fine-tuning.

3. Where applicable, a detailed description of the system architecture explaining how software components build or feed into each other and integrate into the overall processing.

Source: EUR-Lex, Regulation (EU) 2024/1689 — text reproduced verbatim.


Frequently asked questions

What does Annex XI of the AI Act regulate?

Annex XI describes the technical documentation that providers of general-purpose AI (GPAI) models must prepare under Article 53(1)(a). It has two sections: one for all GPAI providers and an additional section for models with systemic risk.

What must be in the technical documentation of a GPAI model?

Section 1 requires: (1) a general description including tasks, acceptable use policies, release date and distribution methods, architecture, number of parameters, input/output modalities and the licence, and (2) a detailed description of the development process, including training methodologies, data, computational resources and energy consumption.

Do I need to document my AI model's energy consumption?

Yes, Section 1, point 2(e) requires the known or estimated energy consumption of the model. If exact consumption is unknown, an estimate based on the computational resources used may be provided.
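Annex XI does not prescribe an estimation method. One common compute-based approach multiplies accelerator count, training time and average board power, scaled by datacentre power usage effectiveness (PUE). The function name and every figure below are illustrative assumptions, not values from the Regulation:

```python
def estimate_training_energy_kwh(num_accelerators: int,
                                 training_hours: float,
                                 avg_power_watts: float,
                                 pue: float = 1.2) -> float:
    """Rough training-energy estimate: accelerators x hours x average
    board power, scaled by datacentre PUE, converted W·h -> kWh.
    Annex XI does not mandate this formula; all inputs must be
    documented by the provider."""
    return num_accelerators * training_hours * avg_power_watts * pue / 1000.0

# Illustrative figures only: 512 accelerators, 30 days, 400 W average draw.
energy = estimate_training_energy_kwh(512, 30 * 24, 400.0, pue=1.2)
print(round(energy))  # prints 176947 (kWh) under these assumed inputs
```

Whatever method is used, the assumptions behind it (hardware, utilisation, PUE) belong in the documentation alongside the resulting figure.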

What extra documentation is required for GPAI with systemic risk?

Section 2 requires three additional elements: (1) detailed evaluation strategies with results and methods, (2) description of adversarial testing (red teaming), alignment and fine-tuning, and (3) description of system architecture.

Do I need to describe my training data?

Yes, Section 1, point 2(c) requires information on training, testing and validation data, including type, provenance, curation methods, number of data points, scope, characteristics, and measures to detect unsuitable data sources and identifiable biases.
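Annex XI prescribes the information, not a schema. As a minimal sketch, the point 2(c) items could be captured per data source in a structured record; the class and field names below are this sketch's own, not terms from the Regulation:

```python
from dataclasses import dataclass, field

@dataclass
class DataSourceRecord:
    """Illustrative per-source record covering Section 1, point 2(c):
    type, provenance, volume, curation and bias-detection measures."""
    name: str
    data_type: str                 # e.g. "web text", "licensed images"
    provenance: str                # how the data was obtained and selected
    num_data_points: int
    curation_steps: list[str] = field(default_factory=list)  # cleaning, filtering, ...
    bias_checks: list[str] = field(default_factory=list)     # measures to detect biases

# Hypothetical example entry.
record = DataSourceRecord(
    name="example-corpus",
    data_type="web text",
    provenance="crawled subject to robots.txt, then deduplicated",
    num_data_points=1_200_000,
    curation_steps=["language filtering", "near-duplicate removal"],
    bias_checks=["toxicity sampling audit"],
)
```

Keeping one such record per source makes it straightforward to report scope and main characteristics across the whole training corpus.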

How does Annex XI differ from Annex IV?

Annex IV is for high-risk AI systems (Article 11), while Annex XI is specifically for GPAI models (Article 53). Annex XI focuses more on model architecture, training data and compute, while Annex IV is broader with requirements for risk management and human oversight.

What is red teaming and why is it required?

Red teaming is adversarial testing in which testers deliberately probe the model to uncover vulnerabilities and harmful behaviours. Section 2, point 2 requires a description of such internal and/or external testing for GPAI models with systemic risk, including alignment and fine-tuning measures.

How many parameters must my model have to fall under Annex XI?

Annex XI applies to all GPAI models regardless of size. The number of parameters must be documented (point 1d), but there is no minimum threshold. The extra Section 2 requirements only apply to models with systemic risk (Article 51).
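While parameter count itself carries no threshold, Article 51(2) presumes systemic risk when cumulative training compute exceeds 10^25 floating point operations. A widely used rule of thumb for dense transformers (not mandated by the AI Act) estimates training FLOPs as roughly 6 × parameters × training tokens; the model size and token count below are illustrative assumptions:

```python
def approx_training_flops(params: float, tokens: float) -> float:
    """Rule-of-thumb training compute for a dense transformer:
    roughly 6 FLOPs per parameter per training token. An estimate
    only; the Regulation does not prescribe this formula."""
    return 6.0 * params * tokens

SYSTEMIC_RISK_THRESHOLD = 1e25  # Article 51(2) presumption threshold, in FLOPs

# Illustrative: a 70-billion-parameter model trained on 2 trillion tokens.
flops = approx_training_flops(70e9, 2e12)
print(f"{flops:.2e}", flops > SYSTEMIC_RISK_THRESHOLD)  # prints: 8.40e+23 False
```

Under these assumed figures the model would sit well below the presumption threshold, but the documented FLOPs count under point 2(d) is what matters, not the rule of thumb.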