CLVPartners

Artificial Intelligence

Questions regarding the scope of the AI Act

Reading time: 5 minutes

A new era in the regulation of artificial intelligence (“AI”) began with the adoption of the European Union’s Artificial Intelligence Regulation (“AI Act”) in June 2024 and its publication on 12 July 2024. The purpose of the legislation is to provide a framework for the safe, transparent, and responsible development and use of AI in the European Union. In order to properly interpret the obligations set out in the AI Act, it is first necessary to clarify exactly which organizations, activities, and technological solutions are covered by the regulation.

In the second part of our series of articles, we will therefore examine the most important rules relating to the scope of the AI Act in order to help our clients start preparing for compliance in advance and to identify whether they are developing or using AI systems in their operations that are subject to the provisions of the AI Act.

The most important rules relating to the scope of the AI Act

Subject of the AI Act

The AI Act essentially covers the regulation of AI systems. Accordingly, it is first important to identify what qualifies as an AI system.

According to the AI Act, an AI system is a machine-based system that is designed to operate with varying levels of autonomy and that may adapt after deployment. These systems infer, from the inputs they receive, how to generate outputs—such as predictions, content, recommendations, or decisions—that can influence the physical or virtual environment, for explicit or implicit objectives.

What fundamentally distinguishes AI systems from traditional software solutions is their ability to learn from input data, draw conclusions based on that data, and create models. In contrast, simpler software systems based on classic programming approaches—including systems that perform automated operations based solely on predefined, human-set rules—do not have such learning or adaptation capabilities. As a result, these solutions are not considered AI systems and fall outside the scope of the AI Act.

Traditional software solutions include applications that operate entirely on predetermined rules and are incapable of independent learning or adaptation. These include traditional calculators, certain basic functions of Microsoft Excel, and performance evaluation software used for financial forecasting, which are only capable of processing historical data and drawing simple statistical conclusions.

It is also important to note that, as a general rule, the AI Act does not apply to certain specific areas. The scope of the regulation does not cover AI systems used for military, defence, or national security purposes, nor does it cover systems, models, and results created specifically for scientific research and development. In addition, the AI Act does not apply to natural persons who use AI systems solely for personal, non-professional purposes.

Territorial and personal scope

The scope of the AI Act is not limited to operators located within the European Union. The regulation applies to all AI systems that are placed on the market, put into service, or used within the internal market of the European Union. Accordingly, in certain cases, the regulation also applies to operators located outside the Union.

The regulation essentially covers all operators who come into contact with AI systems, from development to use. Accordingly, the scope of the AI Act may extend, among others, to providers (developers), deployers, importers, distributors, and authorised representatives of AI systems.

Temporal scope

The provisions of the AI Act become applicable in stages.

However, it is important to note that certain provisions of the AI Act already apply. These include, among others, provisions on definitions, rules on AI systems that pose an unacceptable risk (i.e., prohibited AI systems), obligations governing general-purpose AI models, and provisions on AI literacy, regulatory oversight, and sanctions.

The requirement for so-called AI literacy has particular significance. Under this provision, organizations using AI systems are required to ensure that the persons managing or operating the system have an adequate level of knowledge about AI.

According to the current provisions of the AI Act, the majority of the provisions will become applicable on 2 August 2026. However, as part of the Digital Omnibus Package, the European Commission proposed in November 2025 that the application of certain rules be postponed by up to 18 months.

Further uncertainty regarding implementation arises from the fact that the Commission was supposed to publish guidance on the classification of high-risk AI systems by 2 February 2026. These guidelines are key to determining whether a given AI application is considered high-risk and, as a result, subject to stricter documentation, compliance, and oversight requirements. However, the guidelines have not yet been published. In addition, several Member States have encountered difficulties in designating the authorities responsible for implementing the regulation.

As a result, there is still considerable uncertainty regarding the entry into force and practical application of certain provisions.

Domestic regulation

Given that the AI Act is a directly applicable EU regulation, Hungarian legislation plays an essentially supplementary role. Accordingly, Act LXXV of 2025 on the implementation of the European Union’s Artificial Intelligence Regulation in Hungary (“Hungarian AI Act”) applies only to matters, organizations, and AI systems connected to Hungary or its territory.

In terms of temporal scope, the Hungarian AI Act has been generally applicable since December 2025, with the exception of the provision on the regulatory sandbox, which will enter into force on 2 August 2026.

Summary

The AI Act regulates systems that can learn from input data, draw conclusions, and autonomously generate outputs, while traditional software that operates solely according to predefined rules is not covered by its scope. The territorial scope of the AI Act is broad, as it applies not only to entities established in the EU, but also to those who place AI systems on the EU market or whose systems’ outputs are used in the EU. The regulation covers all relevant actors, from the development to the use of the system. Although the rules of the AI Act become applicable in stages, several key provisions already apply; at the same time, the lack of guidelines and of the institutional conditions for implementation is currently causing uncertainty in practical application.

Photo source: pexels.com, Tara Winstead


The foundations of artificial intelligence regulation in the European Union

Reading time: 4 minutes

In 2024, the European Union adopted its Artificial Intelligence Regulation (the “AI Regulation”), which established the world’s first comprehensive regulatory framework for artificial intelligence. The provisions of the AI Regulation will gradually become applicable, with the last obligations taking effect on 2 August 2027. The AI Regulation refers certain implementation and supervisory tasks to the Member States, as a result of which a domestic regulatory framework for the use of artificial intelligence (“AI”) was also promulgated in Hungary in the fall of 2025.

Given that the AI Regulation will have to be applied almost in its entirety from August 2026, CLVPartners is launching a series of newsletters on artificial intelligence to help with preparations. The aim of the series of articles is to present the legal issues related to the use of artificial intelligence in a practical yet easy-to-understand way. In the first part of the series, we will outline the basic concept of the current EU and Hungarian regulatory framework and its main objectives.

Purpose and regulatory concept of the AI Regulation

AI is one of the fastest-growing areas of technology, and according to some forecasts, its application could bring significant benefits across a wide range of economic and social activities. At the same time, the European Union has recognized that the use of AI also carries a number of risks, such as the risk that its inappropriate use could jeopardize the fundamental rights and freedoms protected by EU law.

The purpose of the AI Regulation is to ensure that the development and use of AI systems takes place within a responsible framework. It is important to note that the AI Regulation applies not only to manufacturers, importers, distributors, and service providers operating in the European Union, but also to companies outside the EU if their products or services are available on the EU market or have an impact on EU citizens. To this end, the AI Regulation imposes obligations on developers and users of AI systems and establishes a uniform regulatory system for their authorization on the EU market. The AI Regulation stipulates that its regulatory framework serves to strengthen transparency and accountability and to promote the spread of human-centered and reliable artificial intelligence. It also aims to eliminate discrimination and bias, while ensuring that EU fundamental values and rights are upheld and providing effective protection against the risks posed by AI systems.

The AI Regulation takes a risk-based approach, classifying AI systems into four risk categories and assigning different rules and obligations to each category. AI systems that pose an unacceptable risk—such as those used for cognitive behavioral manipulation or emotion recognition in the workplace—are already prohibited in the European Union. High-risk AI systems are subject to strict requirements, in particular testing, transparency, and human oversight obligations, and may only be placed on the market once these requirements have been met. These include, among others, systems used in medical diagnostics, self-driving vehicles, or biometric identification. For limited-risk AI systems, such as chatbots, transparency obligations are the main requirement, while the AI Regulation does not set out specific rules for minimal-risk AI systems.

The AI Regulation is directly applicable in all EU Member States and, due to its nature as a source of law, cannot be transposed into national law and does not need to be promulgated separately. As a result, the AI Regulation creates a uniform legal framework for the regulation of artificial intelligence throughout the European Union.

Hungarian regulations

In addition to creating a uniform EU regulatory framework, the AI Regulation also imposes several obligations on Member States. Accordingly, Member States, including Hungary, have begun to develop the institutional and legal frameworks necessary to ensure the effective implementation and supervision of the provisions of the AI Regulation.

Under the AI Regulation, the supervision of compliance with the requirements for AI systems classified in each risk category will be the responsibility of the Member States. Accordingly, Member States are required to designate a market surveillance authority and a notifying authority responsible for assessing technical compliance. In addition, each Member State must establish regulatory test environments to support the development of safe and lawful AI.

To ensure compliance with these requirements, in the fall of 2025, the Hungarian Parliament passed Act LXXV of 2025 on the implementation of the European Union’s Artificial Intelligence Regulation in Hungary (“AI Act”), which lays the foundations for the domestic regulatory and institutional structure. The AI Act is supplemented by Government Decree 344/2025 (X. 31.) on its implementation (“AI Government Decree”), which lays down detailed rules on the functioning of the authorities performing tasks related to artificial intelligence.

Under the AI Act, the notifying authority tasks are performed by a single body. This authority is responsible for designating conformity assessment bodies that examine and certify the technical conformity of high-risk AI systems in advance. Under the provisions of the AI Government Decree, the National Accreditation Authority performs this task.

Under the AI Act, market surveillance tasks are also performed by a single authority. The market surveillance authority is responsible for examining the lawful use of AI systems after they have been placed on the market. The Act also requires the AI market surveillance authority to establish and operate an AI regulatory test environment from August 2026 and to act as a point of contact. Under the provisions of the AI Government Decree, the Minister for National Economy is responsible for performing these tasks.

The AI Act also establishes the Hungarian Artificial Intelligence Council, which acts as a coordinating and advisory body. The task of the Hungarian Artificial Intelligence Council is to promote the uniform interpretation of the AI Regulation in Hungary through guidelines and position statements.

Summary

In summary, it can be said that in 2024, the European Union was the first in the world to adopt a comprehensive regulatory framework whose primary objectives are to promote the spread of human-centered, transparent, and reliable artificial intelligence, protect EU fundamental values and rights, and adequately address the risks arising from AI systems. The AI Regulation applies a risk-based regulatory approach, setting differentiated requirements according to the risk posed by each AI system.

The AI Regulation is directly applicable in all Member States, but leaves the implementation and supervisory tasks to national authorities. As a result, in the fall of 2025, Hungary enacted the AI Act and the related AI Government Decree to ensure the domestic implementation of the AI Regulation.

Photo source: pexels.com, Dušan Cvetanović

