Guidelines dated 29 July 2025: The European Commission published guidelines (C(2025) 5053) on the definition of an AI system under Article 3(1) of the AI Act. These guidelines are not legally binding, but they provide the most authoritative interpretation currently available for organizations that need to determine whether their software falls under the AI Act.
Perhaps the most fundamental question in AI Act compliance is also the most frequently overlooked: does our software actually fall under the regulation? Many organizations assume they are working with ordinary software, until an external audit or a new procurement process forces them to test that assumption. On 29 July 2025 the European Commission published guidelines that answer exactly that question.
Why the definition is so decisive
The AI Act is not a generic digital law applying to all software. It applies exclusively to systems that qualify as an "AI system" within the meaning of Article 3(1). That makes the definition the threshold concept of the entire regulation: without an AI system, there are no AI Act obligations, no prohibited practices, no high-risk assessment.
At the same time, the definition had already sparked intense debate during the negotiations in the European Parliament. How broadly or narrowly should AI be defined? Too broadly, and the AI Act becomes a kind of digital operating licence for all software. Too narrowly, and systems that genuinely should fall under it escape regulation. The Commission was authorized under Article 96 of the AI Act to issue guidelines on this very point, and it has done so.
The guidelines are not legally binding. Only the Court of Justice of the European Union can provide a definitive interpretation of the AI Act. In practice, however, supervisory authorities and courts will rely heavily on the Commission's interpretation, especially during the early phase of enforcement. Organizations that deviate from that interpretation bear an additional burden of proof.
Seven elements, one definition
Article 3(1) of the AI Act defines an AI system as: "a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments."
The Commission distils seven elements from this definition. Not all of them need to be present at every moment: some manifest themselves in the build phase, others only during operational use, and one (adaptiveness) is optional altogether.
Machine-based system
Hardware and software together - from classical servers to quantum computing. The system must run on a machine and be computationally driven.
Varying levels of autonomy
The system is designed to operate with some degree of independence from direct human control. Purely manually operated systems fall outside the definition.
Adaptiveness after deployment (optional)
The system may exhibit self-learning behaviour after going live. Note the word "may" in the regulation: this element is not required. A system without self-learning capacity can still qualify as an AI system.
Explicit or implicit objectives
The system works towards internal goals, whether or not explicitly defined by the developer. These objectives are distinct from the "intended purpose" that plays a role elsewhere in the AI Act.
Inference - the key element
The system infers how to generate outputs from the input it receives. This is the indispensable condition. Techniques include supervised learning, unsupervised learning, reinforcement learning, deep learning, and logic-based methods (a minimal code sketch follows this overview).
Outputs: predictions, content, recommendations or decisions
The result falls into one of these four categories. Think of a risk score (prediction), generated text (content), a product recommendation, or an automated decision.
Influence on physical or virtual environments
The outputs are not passive: they affect the world. Whether it is a decision that impacts a person or an action in a digital environment - the system produces effects beyond itself.
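To make the inference element concrete, here is a minimal sketch in the supervised-learning sense. It assumes scikit-learn is available; the use case (predicting energy demand from outdoor temperature) and all numbers are invented for illustration. The point is that the input-output relationship is not written out as a rule by the developer but derived from example data.

```python
# Illustrative only: a minimal supervised-learning model that infers, from the
# input it receives, how to generate an output (here: a numerical prediction).
# The use case and all data are invented for the sake of the example.
from sklearn.linear_model import LinearRegression

# Historical observations: outdoor temperature (degrees C) -> daily energy demand (kWh)
X_train = [[-5], [0], [5], [10], [15], [20]]
y_train = [95, 80, 68, 55, 47, 40]

model = LinearRegression()
model.fit(X_train, y_train)   # the input-output relationship is learned from data,
                              # not written out as an explicit rule by a developer

print(model.predict([[12]]))  # a prediction for unseen input: inference at work
```

A spreadsheet formula computing the same quantity from a fixed coefficient would produce similar numbers, but it would not infer anything: the coefficient was chosen by a human rather than derived from the data.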
The boundary with ordinary software
In practice, the key question is where the line falls between an AI system and ordinary software. Recital 12 of the AI Act explicitly excludes certain categories: simple traditional software, rule-based systems operating solely on rules defined by humans, systems for purely mathematical optimization, classical heuristic methods, and simple prediction systems with limited capacity to analyze patterns autonomously.
The distinction turns on the capacity for autonomous pattern analysis. A calculator processes input and produces output but infers nothing along the way. A spell checker operates on fixed dictionary rules. A simple if-then-else system does exactly what the programmer defined, nothing more. None of these systems analyze patterns autonomously or adjust their output based on what they discover in the data.
A CV screening tool that evaluates candidates based on historical hiring data does. A recommendation engine that learns from user behavior and makes suggestions accordingly does. An expert system that delegates process automation to an inference model does as well. The line is not razor-sharp, but the guidelines provide sufficient guidance for a substantiated analysis.
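The contrast can be made tangible in a few lines of code. The sketch below is purely illustrative and assumes scikit-learn; the criteria, features, and historical outcomes are invented. It places a human-defined screening rule next to a classifier that derives its own decision boundary from past hiring data: only the second analyses patterns autonomously.

```python
# Illustrative contrast only; criteria, features and data are invented.
from sklearn.tree import DecisionTreeClassifier

# 1) Ordinary software: a rule defined entirely by a human.
#    The system applies the rule; it analyses no patterns and learns nothing.
def rule_based_screen(years_experience: int, has_degree: bool) -> bool:
    return years_experience >= 3 and has_degree

# 2) A system with inference: the decision logic is derived from historical
#    hiring outcomes rather than written out by the programmer.
X_train = [[1, 0], [2, 1], [4, 0], [5, 1], [8, 1], [10, 0]]  # [years, degree]
y_train = [0, 0, 0, 1, 1, 1]                                 # past outcomes

learned_screen = DecisionTreeClassifier().fit(X_train, y_train)

print(rule_based_screen(4, True))        # follows the human-defined rule
print(learned_screen.predict([[4, 1]]))  # follows patterns found in the data
```

Both take the same input and return a comparable output; the difference that matters under Article 3(1) is where the decision logic comes from.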
Grey areas and a persistent misconception
The Commission explicitly acknowledges that grey areas exist. Not every system is easy to classify, and the guidelines do not provide an exhaustive list of examples. What they do provide is an analytical framework: assess each system on the basis of its specific characteristics, against the seven elements, and look in particular at whether the system is capable of analyzing patterns autonomously and adjusting its output accordingly.
A persistent misconception is that organizations that have programmed their own rules automatically fall outside the AI Act. That is incorrect. A system built on rules defined by humans, but which subsequently analyzes patterns autonomously and adjusts its output based on data insights, may still qualify as an AI system. The question is not who wrote the rules, but whether the system itself learns and infers.
That distinction is more relevant in practice than it may seem. Many systems started as rule-based tools and were later extended with machine learning components without the compliance documentation being updated. Hybrid systems in particular deserve a critical reassessment.
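As an illustration of such a hybrid, the sketch below shows a recommender that started with a human-defined rule ("suggest items from the same category") and was later extended with a learned ranking step trained on click behaviour. It assumes scikit-learn; the catalogue, features, and click data are invented. The point is that the added component introduces inference even though the original rule remains in place.

```python
# Illustrative hybrid: a human-defined rule later extended with a learned
# ranking component. Catalogue, features and click data are invented.
from sklearn.linear_model import LogisticRegression

CATALOGUE = {
    "kettle":  {"category": "kitchen", "features": [1, 0]},
    "toaster": {"category": "kitchen", "features": [1, 1]},
    "lamp":    {"category": "living",  "features": [0, 1]},
}

# Original stage, defined entirely by a human: recommend items from the same category.
def rule_based_candidates(viewed_category: str) -> list:
    return [name for name, item in CATALOGUE.items()
            if item["category"] == viewed_category]

# Later addition: a model trained on historical click behaviour.
# From this point on, the ordering of recommendations is inferred from data.
X_clicks = [[1, 0], [1, 1], [0, 1], [1, 1], [1, 0]]
y_clicks = [0, 1, 0, 1, 0]                      # 1 = the shown item was clicked
ranker = LogisticRegression().fit(X_clicks, y_clicks)

def recommend(viewed_category: str) -> list:
    candidates = rule_based_candidates(viewed_category)   # still the human rule
    return sorted(                                        # learned re-ranking
        candidates,
        key=lambda name: ranker.predict_proba([CATALOGUE[name]["features"]])[0][1],
        reverse=True,
    )

print(recommend("kitchen"))
```

A compliance file written for the original rule-based version would miss the second half of this function entirely, which is exactly why hybrid systems deserve reassessment.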
What this means for your organization
From 2 February 2025 onwards, both the definition and the prohibited AI practices of the AI Act are in force. That means organizations must already be able to demonstrate that they know which of their systems qualify as AI systems.
That assessment has direct consequences. A system that qualifies as an AI system brings with it at minimum the AI literacy obligation under Article 4. If the system also falls into a high-risk category, additional obligations around transparency, data governance, and human oversight will apply from August 2026.
For organizations procuring software from vendors, a simple but effective rule applies: ask your supplier explicitly whether the product is an AI system within the meaning of Article 3(1) of the AI Act. Good suppliers can answer that question and have documentation to back it up. If they cannot, that is itself a signal that more due diligence is required before signing the contract.
A definition that works as a compass
The Commission's guidelines do not offer a watertight checklist, but they do provide a clear compass. Inference is the key concept: if a system does not infer how to generate outputs based on patterns in its input, it is not an AI system. But once that capacity is present, even if the system itself was built by humans based on human-defined rules, the AI Act deserves serious attention.
For compliance professionals, lawyers, and product managers, the practical lesson is clear: start with the definition. The rest of the AI Act obligations follow only once you know whether your system has cleared the first hurdle. That hurdle has seven elements, but the fifth one, inference, is by far the most decisive.