Critical for scope determination: on July 29, 2025, the European Commission published guidelines explaining when software qualifies as an AI system under the AI Act. This seven-element definition determines the entire scope of the regulation. Inference, autonomy and adaptivity are the key concepts that determine in practice whether or not your tools fall under compliance obligations.
The AI Act is in force and the first obligations already apply. Yet one question keeps coming up in organizations: when is a tool actually an "AI system" within the meaning of the AI Act, and when is it just "ordinary software"? On July 29, 2025, the European Commission published guidelines that address precisely this question: the Commission Guidelines on the definition of an artificial intelligence system. This blog outlines the key points from those guidelines in plain language, with attention to the implications for lawyers, compliance officers, product teams and data specialists.
Why these guidelines exist
The AI Act does not apply to all software, but only to systems that fall within the definition of an "AI system" in Article 3(1). That definition therefore determines the scope of the entire regulation. Article 96 AI Act requires the Commission to provide guidance on that definition, precisely because it is decisive for questions such as: do I need to conduct a high-risk assessment, does my use case fall under prohibited practices, or is my dashboard just ordinary data analysis?
Importantly, the guidelines are not binding. They provide direction and interpretation, but ultimately it is the Court of Justice that will definitively rule on the interpretation of the AI Act. At the same time, practice among supervisors and companies in the EU is highly dependent on how the Commission interprets this definition. The guidelines were published in parallel with the guidelines on prohibited AI practices, precisely because the definition of an AI system also determines which prohibited practices apply. Since February 2, 2025, both the definition and the prohibited practices have been in force, meaning organizations now need clarity on what does and does not fall under the regulation.
The core of the AI system definition
The definition from Article 3(1) AI Act reads as follows: "'AI system' means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptivity after deployment, and that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments."
That definition contains seven elements that together determine whether a system qualifies as an AI system. The Commission emphasizes a lifecycle approach: there is a building phase (pre-deployment) and a usage phase (post-deployment), and not every element needs to be continuously present in both phases. Some elements appear mainly in one phase, others in the other. This approach reflects the complexity and diversity of AI systems and ensures that the definition aligns with the objectives of the AI Act by covering a wide range of systems.
| Element | Description | Phase |
|---|---|---|
| 1. Machine-based system | Software running on hardware: processors, memory, storage, network | Both phases |
| 2. Autonomy | Designed to operate with a degree of independence | Usage |
| 3. Adaptivity | Possible adaptive behavior after deployment (optional) | Usage |
| 4. Objectives | Operates with explicit or implicit objectives | Both phases |
| 5. Inference | Infers from input how to generate outputs | Both phases |
| 6. Outputs | Predictions, content, recommendations or decisions | Usage |
| 7. Environmental influence | Outputs can influence physical or virtual environments | Usage |
Machine-based: broader than you think
The first element sounds technical, but is essentially simple: "machine-based" means that AI systems are developed with and run on machines. The term "machine" encompasses both hardware and software. Hardware refers to physical elements such as processors, memory, storage devices, network components and input/output interfaces that provide the infrastructure for computations. Software includes computer code, instructions, programs, operating systems and applications that determine how hardware processes data and executes tasks.
All AI systems are machine-based because they need machines to function, such as for model training, data processing, predictive modeling and large-scale automated decision-making. The entire lifecycle of advanced AI systems depends on machines that can comprise many hardware or software components. This element in the definition underscores that AI systems must be computationally driven and based on machine operations.
The term "machine-based" covers a wide range of computational systems. Even the most advanced emerging quantum computing systems, which represent a significant departure from traditional computer systems, constitute machine-based systems, despite their unique operational principles and use of quantum mechanical phenomena. Biological or organic systems can also fall under this, as long as they provide computing capacity. In other words: the definition is not limited to big tech or to GPU clusters. Even a relatively modest model running on a server of a medium-sized organization can fall under "machine-based system".
Autonomy: something must really happen "by itself"
The second element refers to the system being "designed to operate with varying levels of autonomy". Recital 12 of the AI Act clarifies that the term "varying levels of autonomy" means that AI systems are designed to operate with "some degree of independence of actions from human involvement and of capabilities to operate without human intervention".
The concepts of autonomy and inference go hand in hand: the inference capability of an AI system (that is, its ability to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments) is crucial to establishing its autonomy. Central to the concept of autonomy is "human involvement" and "human intervention" and thus human-machine interaction. At one extreme of possible human-machine interaction are systems designed to perform all tasks via manually operated functions. At the other extreme are systems that can operate fully autonomously without any human involvement or intervention.
The reference to "some degree of independence of action" in recital 12 AI Act excludes systems designed to operate exclusively with complete manual human involvement and intervention. Human involvement and human intervention can be direct, for example via manual control, or indirect, for example via automated system-based control that allows people to delegate or supervise system operations.
Practical examples: a system that requires manually provided inputs but then generates an output by itself is a system with "some degree of independence of action", because it is designed with the ability to generate that output without it being manually checked or explicitly and exactly specified in advance by a human. Similarly, an expert system to which humans have delegated process automation and that, based on input provided by a human, can independently produce an output such as a recommendation, is a system with "some degree of independence of action".
The reference in the definition to "machine-based system designed to operate with varying levels of autonomy" underscores the system's ability to interact with its external environment, rather than a choice for a specific technique, such as machine learning, or model architecture for system development. The level of autonomy is therefore a necessary condition for determining whether a system qualifies as an AI system. All systems designed to operate with a reasonable degree of independence of actions meet the autonomy condition in the AI system definition.
Systems that have the ability to operate with limited or no human intervention in specific usage contexts, such as in the high-risk areas identified in Annex I and Annex III AI Act, may under certain circumstances bring additional potential risks and considerations for human oversight. The level of autonomy is an important consideration for a provider when designing, for example, the human oversight or risk mitigation measures of the system in the context of the intended purpose of a system.
Adaptivity: important but not decisive
The third element of the definition in Article 3(1) AI Act is that the system "may exhibit adaptivity after deployment". Autonomy and adaptivity are distinct but closely related concepts: they are often discussed together, but they represent different dimensions of an AI system's functionality. Recital 12 AI Act clarifies that "adaptivity" refers to self-learning capabilities, allowing the system's behavior to change during use. As a result, the adapted system may produce different results for the same inputs than it did before.
The use of the term "may" in relation to this element of the definition indicates that a system may, but does not necessarily need to possess adaptivity or self-learning capabilities after deployment to constitute an AI system. Accordingly, the ability of a system to automatically learn, discover new patterns or identify relationships in the data that go beyond what it was initially trained for, is an optional and therefore not a decisive condition for determining whether the system qualifies as an AI system.
This is a crucial point that prevents many misunderstandings. A model that is trained in the development phase and then deployed "frozen" without further adaptations can therefore still be an AI system, as long as it meets the other elements - particularly the ability to infer.
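To make this concrete, below is a minimal sketch assuming scikit-learn and invented data (neither appears in the guidelines): the same classifier can be deployed "frozen" or kept learning after deployment, and in both variants it infers its outputs from a learned model.

```python
# Minimal sketch with invented data: adaptivity is an optional element.
# Both variants below infer their outputs from a learned model, so both can qualify
# as an AI system; only variant B adapts after deployment.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(200, 4)), rng.integers(0, 2, 200)

# Building phase: the model is trained before deployment.
model = SGDClassifier(random_state=0)
model.partial_fit(X_train, y_train, classes=np.array([0, 1]))

# Usage phase, variant A: "frozen" deployment - the model only infers, it never changes.
X_new = rng.normal(size=(5, 4))
print(model.predict(X_new))

# Usage phase, variant B: adaptive deployment - the model keeps learning from feedback,
# so the same input may later yield a different output (recital 12's self-learning).
feedback_labels = rng.integers(0, 2, 5)
model.partial_fit(X_new, feedback_labels)
```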
Objectives: the difference between internal goals and intended purpose
The fourth element of the definition concerns the objectives of the AI system. AI systems are designed to operate according to one or more objectives. The objectives of the system can be defined explicitly or implicitly. Explicit objectives refer to clearly formulated goals that are directly coded into the system by the developer. They can, for example, be specified as the optimization of a particular cost function, a probability or a cumulative reward. Implicit objectives refer to goals that are not explicitly stated, but that can be inferred from the behavior or underlying assumptions of the system. These objectives may arise from the training data or from the AI system's interaction with its environment.
Recital 12 AI Act clarifies that "the objectives of the AI system may differ from the intended purpose of the AI system in a specific context". The objectives of an AI system are internal to the system and refer to the goals of the tasks to be performed and their results. A virtual AI assistant system for businesses may, for example, have as objectives to answer user questions about a range of documents with high accuracy and a low error rate.
In contrast, the intended purpose is externally oriented and encompasses the context in which the system is designed to be deployed and how it should be used. According to Article 3(12) AI Act, the intended purpose of an AI system refers to "the use for which an AI system is intended by the provider". In the case of a virtual AI assistant system for businesses, the intended purpose may, for example, be to help a specific department of a company perform certain tasks. This may require that the documents the virtual assistant uses meet certain requirements (e.g. length, format) and that user questions are limited to the domain in which the system is intended to operate.
This intended purpose is fulfilled not only by the internal operation of the system to achieve its objectives, but also by other factors, such as the integration of the system into a broader customer service workflow, the data used by the system, or usage instructions. For organizations, this means that both the technical objectives and the deployment context must be documented. This is later relevant when determining whether a system counts as "high risk", for example.
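As an illustration of the distinction, here is a minimal, hypothetical sketch (invented data, plain NumPy): the mean squared error being minimized is the system's internal, explicit objective, while the intended purpose is something the provider documents outside the code for a specific deployment context.

```python
# Hypothetical sketch with invented data: the explicit objective is the cost function
# that the system is built to minimize; the intended purpose lives outside the code.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=100)

weights = np.zeros(3)
learning_rate = 0.1

def cost(w):
    # Explicit objective: mean squared error between predictions and targets.
    return np.mean((X @ w - y) ** 2)

for _ in range(500):
    gradient = 2 * X.T @ (X @ weights - y) / len(y)  # gradient of the explicit objective
    weights -= learning_rate * gradient

# The intended purpose (e.g. "estimate energy usage for facility planning") never appears
# in the code: the provider defines and documents it for a specific deployment context.
print(f"explicit objective after training: {cost(weights):.4f}")
```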
Inference: the heart of the AI definition
The fifth element of an AI system is that it must be able to infer, from the input it receives, how to generate outputs. Recital 12 AI Act clarifies that "[a]n important characteristic of AI systems is their ability to infer." As further explained in that recital, AI systems must be distinguished from "simpler traditional software systems or programming approaches and should not cover systems that are based on rules solely defined by natural persons to automatically execute operations." This ability to infer is therefore an important, indispensable condition that distinguishes AI systems from other types of systems.
Recital 12 also explains that '[t]his ability to infer refers to the process of obtaining outputs, such as predictions, content, recommendations or decisions, that can influence physical and virtual environments, and to an ability of AI systems to derive models or algorithms, or both, from inputs or data.' This understanding of the concept 'inference' is not inconsistent with the ISO/IEC 22989 standard, which defines inference 'as reasoning where conclusions are drawn from known premises', and this standard contains an AI-specific note stating: '[i]n AI, a premise is a fact, a rule, a model, a feature or raw data.'
The 'process of obtaining outputs, such as predictions, content, recommendations or decisions, that can influence physical and virtual environments', refers to the AI system's ability, mainly in the 'usage phase', to generate outputs based on inputs. An 'ability of AI systems to derive models or algorithms, or both, from inputs or data' refers primarily, but is not limited to, the 'building phase' of the system and underscores the relevance of the techniques used for building a system.
The words 'infers how', used in Article 3(1) and clarified in recital 12 AI Act, are broader than, and not limited to, a narrow understanding of inference as a system's ability to derive outputs from given inputs and thus to infer the result. Accordingly, the wording of Article 3(1) AI Act, 'infers how to generate outputs', should also be understood as referring to the building phase, in which a system derives how to generate outputs through AI techniques that enable inferencing.
Two main categories of AI techniques that enable inference
With specific focus on the building phase of the AI system, recital 12 AI Act further clarifies that '[t]he techniques that enable inference while building an AI system include machine learning approaches that learn from data how to achieve certain objectives, and logic- and knowledge-based approaches that infer from encoded knowledge or symbolic representation of the task to be solved.' These techniques should be understood as 'AI techniques'.
| Machine Learning approaches | Logic & Knowledge-based approaches |
|---|---|
| Supervised learning: Learning from labeled data (spam detection, image classification, fraud detection) | Expert systems: Medical diagnosis based on encoded knowledge from experts |
| Unsupervised learning: Finding patterns without labels (clustering, drug discovery) | Knowledge bases: Facts, rules and relationships encoded by humans |
| Self-supervised learning: Data creates its own labels (language models, image recognition) | Symbolic reasoning: Logical inference, deductive engines |
| Reinforcement learning: Learning through trial-and-error (robot arms, autonomous vehicles) | Search and optimization: Sorting, matching, chaining operations |
| Deep learning: Neural networks with layered architectures (GPT models) | Classical NLP: Grammatical analysis, syntactic parsing |
The first category of AI techniques includes a wide variety of approaches that enable a system to 'learn'. In supervised learning, the AI system learns from annotations (labeled data), where input data is linked to the correct output. An email spam detection system is an example of this: during the building phase, the system is trained on emails that have been labeled by humans as 'spam' or 'not spam'. Other examples are image classification systems, diagnostic medical systems and fraud detection systems.
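A minimal sketch of what this looks like in code, assuming scikit-learn and a tiny invented dataset (not taken from the guidelines): the spam/not-spam mapping is learned from human-labeled examples in the building phase and then applied to new inputs in the usage phase.

```python
# Minimal supervised-learning sketch with an invented dataset: the system is trained on
# emails labeled by humans (building phase) and then infers labels for new emails
# (usage phase). The mapping from text to label is learned, not hand-coded as rules.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "Win a free prize now",            # labeled "spam" by a human annotator
    "Meeting rescheduled to Monday",   # labeled "not spam"
    "Cheap loans, act immediately",    # labeled "spam"
    "Quarterly report attached",       # labeled "not spam"
]
labels = ["spam", "not spam", "spam", "not spam"]

classifier = make_pipeline(CountVectorizer(), MultinomialNB())
classifier.fit(emails, labels)                      # building phase: learning from labeled data
print(classifier.predict(["Free prize attached"]))  # usage phase: inferring an output
```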
In unsupervised learning, the AI system learns from data that is not labeled. The model is trained to find patterns, structures or relationships in the data without explicit guidance. AI systems for drug discovery, for example, use clustering and anomaly detection to group chemical compounds and predict potential new treatments.
Self-supervised learning is a subcategory where the AI system learns from unlabeled data in a supervised manner, where the data itself is used to create its own labels. Language models that predict the next token in a sentence or image recognition systems that predict missing pixels are examples of this.
Reinforcement learning systems learn through trial and error, refining their strategy based on feedback from the environment. A robot arm learning to grasp objects or systems for personalized content recommendations are examples.
Deep learning is a subset of machine learning that uses layered architectures (neural networks) for representation learning. AI systems based on deep learning can automatically learn features from raw data and are the technology behind many recent breakthroughs in AI.
In addition to machine learning approaches, the second category of techniques is logic- and knowledge-based approaches. Instead of learning from data, these AI systems learn from knowledge, including rules, facts and relationships encoded by human experts. Classical language processing models based on grammatical knowledge, expert systems for medical diagnosis and systems with deductive engines are examples of this.
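For contrast, a minimal, hypothetical sketch of a logic- and knowledge-based approach in plain Python (the rules and symptoms are invented for illustration): the system infers conclusions from rules encoded by a human expert rather than learning them from data.

```python
# Hypothetical rule-based expert-system sketch: knowledge is encoded by humans as rules,
# and the system infers conclusions from that encoded knowledge (forward chaining)
# rather than learning the mapping from data.
RULES = [
    # (required facts, conclusion) - written by a domain expert, not learned
    ({"fever", "cough"}, "suspected flu"),
    ({"fever", "rash"}, "suspected measles"),
    ({"suspected flu", "shortness of breath"}, "refer to physician"),
]

def infer(facts: set) -> set:
    """Repeatedly apply the encoded rules until no new conclusions can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived - set(facts)

print(infer({"fever", "cough", "shortness of breath"}))
# -> {'suspected flu', 'refer to physician'}
```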
Four types of outputs and their impact on environments
The sixth element of the AI system definition in Article 3(1) AI Act is that the system infers 'how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments'. The ability of a system to generate outputs is fundamental to what AI systems do and what distinguishes those systems from other forms of software.
Outputs of AI systems belong to four broad categories: predictions, content, recommendations and decisions. Each category differs in the level of human involvement.
Predictions are one of the most common outputs that AI systems produce. A prediction is an estimate about an unknown value from known values. AI systems that use machine learning are able to generate predictions that reveal complex patterns in data. AI systems deployed in self-driving cars, for example, make real-time predictions in an extremely complex environment. AI systems for energy consumption estimate energy usage by analyzing data from smart meters, weather forecasts and behavioral patterns.
Content refers to the generation of new material by an AI system: text, images, videos, music. There is an increasing number of AI systems that use machine learning models (e.g. GPT technologies) to generate content. Although content can be understood from a technical perspective in terms of a series of 'predictions', it is mentioned as a separate category in recital 12 AI Act due to its prevalence in generative AI systems.
Recommendations refer to suggestions for specific actions, products or services based on preferences, behavior or other data inputs. AI-based recommendation systems can leverage large-scale data, adapt to user behavior in real-time and provide highly personalized recommendations. When recommendations are automatically applied, they become decisions.
Decisions refer to conclusions or choices made by a system. An AI system that has a decision as output automates processes that are traditionally handled by human judgment. Such a system implies a fully automated process where an outcome in the environment is produced without any human intervention.
The seventh element of the definition is that the outputs of the system 'can influence physical or virtual environments'. That element emphasizes that AI systems are not passive, but actively impact the environments in which they are deployed. The reference to 'physical or virtual environments' indicates that the influence can be on tangible, physical objects (e.g. robot arm) as well as on virtual environments, including digital spaces, data streams and software ecosystems.
What precisely does not fall under the definition?
Recital 12 explains that the AI system definition must distinguish AI systems from "simpler traditional software systems or programming approaches and should not cover systems that are based on rules solely defined by natural persons to automatically execute operations." Some systems have a limited ability to infer, but may nevertheless fall outside the scope due to their limited capacity to analyze patterns and autonomously adjust their output. The Commission mentions four important categories:
1. Systems for improving mathematical optimization. Systems used to improve mathematical optimization or to accelerate and approximate traditional, established optimization methods fall outside the scope. This is because, although those models have the ability to infer, they do not go beyond 'basic data processing'. An indication may be that the system has been used in a consolidated manner for many years. Physics-based systems can, for example, use machine learning to improve computational performance or accelerate traditional simulations. Satellite telecommunication systems can use ML to optimize bandwidth allocation with performance similar to established methods. Although these systems may contain automatic self-adjustments, these are aimed at optimizing operation by improving computational performance, not at adapting decision-making models in an intelligent way.
2. Basic data processing. This refers to systems that follow predefined, explicit instructions or operations without any 'learning, reasoning or modeling' during the system lifecycle. They operate based on fixed, human-programmed rules, without AI techniques. Examples are database management systems used to sort or filter data according to fixed criteria ("find all customers who bought product X"), standard spreadsheet software without AI functionalities, and software that calculates a population average. Systems for descriptive analysis, hypothesis testing and visualization also fall under this category. A sales dashboard can use statistical methods to show total sales and trends, but does not recommend how to improve sales.
3. Systems based on classical heuristics. Classical heuristics are problem-solving techniques that rely on experience-based methods to efficiently find approximate solutions. They typically involve rule-based approaches or trial-and-error strategies rather than data-driven learning. A chess program that uses a minimax algorithm with heuristic evaluation functions can evaluate board positions without prior learning from data. Heuristic methods may lack adaptability and generalization compared to AI systems that learn from experience.
4. Simple prediction systems. Machine-based systems whose performance can be achieved by applying a basic statistical learning rule fall outside the scope precisely because of that limited performance. In financial forecasting, for example, systems may be used that always predict the historical average price. Such baseline benchmarking methods help assess whether more advanced models add value, but do not reach the performance of more complex systems. Static estimation techniques and trivial predictors that do no more than return an average are other examples.
The common thread is clear: a system that does not go beyond basic statistics, fixed rules or a marginal acceleration of a classical model is in principle not an AI system within the meaning of the AI Act.
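The contrast can be made concrete with a minimal, hypothetical sketch (invented figures, scikit-learn assumed as an illustrative library): a trivial average predictor that applies a fixed statistical rule versus a model that derives its output from patterns learned from data.

```python
# Hypothetical contrast with invented figures: a trivial average predictor versus a model
# that derives its output from patterns learned from data.
import numpy as np
from sklearn.linear_model import LinearRegression

# Invented historical sales figures and an explanatory feature (advertising spend).
ad_spend = np.array([[1.0], [2.0], [3.0], [4.0], [5.0]])
sales = np.array([10.0, 14.0, 19.0, 25.0, 29.0])

# 1. Trivial predictor: always returns the historical average - a fixed statistical rule,
#    which the guidelines place outside the AI system definition.
baseline_forecast = sales.mean()

# 2. Learned model: derives a relationship from the data and infers a new output from it,
#    which points toward the AI system definition being met.
model = LinearRegression().fit(ad_spend, sales)
learned_forecast = model.predict([[6.0]])[0]

print(f"baseline: {baseline_forecast:.1f}, learned: {learned_forecast:.1f}")
```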
What does this mean for your organization?
The determination of whether a software system is an AI system must be based on the specific architecture and functionality of a given system and must take into account the seven elements of the definition set out in Article 3(1) AI Act. No automatic determination or exhaustive lists of systems falling within or outside the definition are possible. The guidelines explicitly emphasize this: each assessment must be based on the actual characteristics of the system.
For organizations, this means that a thorough inventory is essential. Whether you work with an AI register, an AI use case inventory or a broader AI governance process, the first step is determining whether something is an AI system at all. These guidelines provide arguments to explicitly place certain BI tools, dashboards or simple scripts out of scope, provided this is well substantiated. Document briefly per system why it does or does not fall under the definition, preferably with reference to the specific elements and categories from the guidelines.
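One lightweight way to capture that substantiation is sketched below as a hypothetical register entry; the field names and the example system are invented and not prescribed by the guidelines.

```python
# Hypothetical AI-register entry: one record per system, documenting the assessment against
# the seven definitional elements of Article 3(1) AI Act. Field names are invented.
from dataclasses import dataclass

@dataclass
class AISystemAssessment:
    system_name: str
    machine_based: bool
    autonomy: bool
    adaptivity: bool          # optional element: may be False while the system still qualifies
    objectives: str
    inference_technique: str  # e.g. "supervised learning", "expert system", "none (fixed rules)"
    output_type: str          # prediction, content, recommendation or decision
    influences_environment: bool
    conclusion: str           # "in scope" / "outside scope"
    substantiation: str = ""

sales_dashboard = AISystemAssessment(
    system_name="Sales dashboard",
    machine_based=True,
    autonomy=False,
    adaptivity=False,
    objectives="show total sales and trends",
    inference_technique="none (descriptive statistics, fixed rules)",
    output_type="visualization",
    influences_environment=False,
    conclusion="outside scope",
    substantiation="Basic data processing per the Commission guidelines; no AI technique used.",
)
```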
Once there is machine learning, generative models, recommendation algorithms or expert systems that derive outputs from data or knowledge rules, the definition will quickly apply. Modern applications such as chatbots based on LLMs, HR screening tools with scoring, dynamic pricing algorithms or internal assistants that search documents with a transformer model typically fall under the definition.
Many software platforms now add AI functionality, for example an "AI assistant" in a CRM or text editor. The underlying application may not fall under the definition, while the AI module does. In governance documentation, it is then advisable to describe separately which component constitutes the AI system. This prevents you from unnecessarily bringing a complete application under the AI Act obligations, when only one module falls under it.
It is important to realize that only certain AI systems are subject to regulatory obligations and supervision under the AI Act. The risk-based approach of the AI Act means that only those systems that pose the main risks to fundamental rights and freedoms will be subject to the prohibition provisions in Article 5 AI Act, the regulatory framework for high-risk AI systems covered by Article 6 AI Act and the transparency requirements for a limited number of predefined AI systems set out in Article 50 AI Act. The vast majority of systems, even if they qualify as AI systems within the meaning of Article 3(1) AI Act, will not be subject to any regulatory requirements under the AI Act.
Need help with scope determination? 💡
Do you want to know whether your systems fall under the AI Act? Or do you have questions about setting up an AI register according to the new guidelines? Contact us for a no-obligation conversation about how you can practically apply the definition within your organization.