CLVPartners


Data protection considerations related to the development of AI models

Reading time: 5 minutes

Artificial intelligence (“AI”) is a rapidly evolving family of technologies that contributes to a wide range of economic, environmental, and social benefits across all sectors and social activities. By improving predictive accuracy, optimizing operational processes and the allocation of resources, and enabling the personalization of digital solutions available to individuals and organizations, the use of AI can confer a decisive competitive advantage on businesses while also delivering beneficial social and environmental outcomes.

The use of artificial intelligence, alongside its potential benefits, is also associated with certain risks. In order to mitigate these risks, Regulation (EU) 2024/1689 of the European Parliament and of the Council on artificial intelligence (“AI Act”) has been adopted, several provisions of which have already entered into force. At the same time, the development of many AI models involves the use of personal data, which raises the question of how the AI Act affects data processing activities related to AI systems.

The relationship between the AI Act and the GDPR

The AI Act makes it clear that it does not amend the application of existing EU rules on the processing of personal data, including the requirements set out in the GDPR. Accordingly, organizations falling within the scope of the AI Act must, in the course of their data processing activities, comply fully with the provisions of the GDPR.

Through the enforcement of the right to the protection of personal data, the GDPR also supports the effective exercise of other fundamental rights, including, inter alia, freedom of thought and expression, the right to information and education, and the freedom to conduct a business. On this basis, it can be concluded that the GDPR establishes a legal framework that facilitates responsible innovation, including the responsible development and deployment of AI-related technologies.

Data protection considerations in relation to the development of AI models

In connection with the development of AI models, the European Data Protection Board (“EDPB”) adopted a standalone opinion on data protection aspects arising in relation to the processing of personal data in the context of artificial intelligence models (“Opinion”).

The Opinion examines how personal data may be used in the development of AI models and highlights the issues requiring particular attention when placing AI systems developed using personal data on the market.

Lifecycle of AI Models

The EDPB divides the lifecycle of AI models into two stages, emphasizing that data processing may occur in either of them. The first stage covers the processes preceding the deployment of the model (including, for example, its creation, development, training, and fine-tuning). The second stage relates to the deployment phase, encompassing the use of the model following its development.

Existence of a legal basis for data processing by data controllers

One of the cornerstones of data protection regulation is that personal data may only be processed where a specific legal basis exists. The Opinion reiterates the general expectation that data controllers must determine the appropriate legal basis for their processing activities.

However, the EDPB found that, as a general rule, an AI model developer may rely on legitimate interest as a legal basis, provided that the existence of such legitimate interest is duly substantiated. For this purpose, a three-step test – already familiar to those with experience in data protection compliance practice – serves to properly assess whether a legitimate interest genuinely exists.

The EDPB emphasizes that the balancing test must take into account whether the data subjects can reasonably expect their personal data to be used. The Opinion is significant in this regard because it sets out several criteria intended to assist data protection authorities in assessing the “reasonably foreseeable” criterion.

The Opinion also recalls that, where the interests, rights, and freedoms of data subjects appear to override the legitimate interests of the data controller or of a third party, this does not necessarily render the processing unlawful. The data controller may implement mitigating measures to limit such adverse effects. These may include, for example, pseudonymization, or measures aimed at masking personal data or replacing them with fictitious personal data within the training dataset. The introduction of appropriate data protection measures can render the data processing lawful.
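To make the mitigating measures above more concrete, the following is a purely illustrative sketch of pseudonymizing and masking direct identifiers in a record before it enters a training dataset. The field names, the keyed-hash pseudonymization, and the fictitious replacement value are our own assumptions for illustration, not a method prescribed by the Opinion or the GDPR.

```python
# Illustrative sketch only: pseudonymize and mask direct identifiers in a
# training record. The field names and the HMAC-based approach are
# assumptions for illustration, not a legally mandated technique.
import hmac
import hashlib

# The key must be stored separately from the dataset; only the controller
# holding it could link a pseudonym back to an individual.
SECRET_KEY = b"keep-this-key-separate-from-the-dataset"

def pseudonymize(value: str) -> str:
    """Replace an identifier with a keyed hash (a stable pseudonym)."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

def mask_record(record: dict) -> dict:
    """Return a copy of the record with direct identifiers pseudonymized or masked."""
    cleaned = dict(record)
    cleaned["name"] = pseudonymize(record["name"])  # pseudonym instead of real name
    cleaned["email"] = "user@example.invalid"       # fictitious replacement value
    return cleaned

record = {"name": "Jane Doe", "email": "jane@company.com", "purchase_total": 120}
masked = mask_record(record)
print(masked["name"] != "Jane Doe")  # real name no longer present
print(masked["purchase_total"])      # non-identifying data kept for training
```

Note that pseudonymized data generally remain personal data under the GDPR, since re-identification is still possible with the key; such measures mitigate risk in the balancing test but do not, by themselves, amount to anonymization.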

Anonymity

The GDPR classifies as personal data any information relating to an identified or identifiable natural person, whether directly or indirectly. According to the EDPB, in the context of AI model development, personal data may only be used where they are properly anonymized, such that even in the event of a potential reverse engineering of the model, the identification of data subjects is not possible. With regard to anonymization, the EDPB emphasizes that the competent data protection authorities must assess, on a case-by-case basis, whether the organization developing the AI model has complied with this requirement. The body also sets out several recommended techniques that may be suitable for preserving anonymity (e.g. measures that prevent or limit the extraction of personal data used for training purposes).

Summary

The EU body emphasizes in its Opinion that compliance with data protection requirements governing the processing of personal data must be ensured throughout both the development and deployment of AI models. It is evident that the expansion of AI and its potential risks are being treated and monitored as a priority by regulators and enforcement authorities, and therefore numerous regulatory guidelines can be expected in the near future.

Photo source: pexels.com, Tara Winstead


The foundations of artificial intelligence regulation in the European Union

Reading time: 4 minutes

In 2024, the European Union adopted its Artificial Intelligence Regulation (the “AI Regulation”), which established the world’s first comprehensive regulatory framework for artificial intelligence. The provisions of the AI Regulation will become applicable gradually, with full application by August 2, 2027. The AI Regulation refers certain implementation and supervisory tasks to the Member States, as a result of which a domestic regulatory framework for the use of artificial intelligence (“AI”) was also promulgated in Hungary in the fall of 2025.

Given that the AI Regulation will have to be applied almost in its entirety from August this year, CLVPartners is launching a series of newsletters on artificial intelligence to help with preparations. The aim of the series of articles is to present the legal issues related to the use of artificial intelligence in a practical yet easy-to-understand way. In the first part of the series, we will outline the basic concept of the current EU and Hungarian regulatory framework and its main objectives.

Purpose and regulatory concept of the AI Regulation

AI is one of the fastest-growing areas of technology, and according to some forecasts, its application could bring significant benefits across a wide range of economic and social activities. At the same time, the European Union has recognized that the use of AI also carries a number of risks, such as the risk that its inappropriate use could jeopardize the fundamental rights and freedoms protected by EU law.

The purpose of the AI Regulation is to ensure that the development and use of AI systems takes place within a responsible framework. It is important to note that the AI Regulation applies not only to manufacturers, importers, distributors, and service providers operating in the European Union, but also to companies outside the EU if their products or services are available on the EU market or have an impact on EU citizens. To this end, the AI Regulation imposes obligations on developers and users of AI systems and establishes a uniform regulatory system for their authorization on the EU market. The AI Regulation stipulates that its regulatory framework serves to strengthen transparency and accountability and to promote the spread of human-centered and reliable artificial intelligence. It also aims to eliminate discrimination and bias, while ensuring that EU fundamental values and rights are upheld and providing effective protection against the risks posed by AI systems.

The AI Regulation takes a risk-based approach, classifying AI systems into four risk categories and assigning different rules and obligations to each category. The use of so-called prohibited AI systems that pose an unacceptable risk, such as cognitive behavioral manipulation or emotion recognition in the workplace, is already prohibited in the European Union. High-risk AI systems are subject to strict requirements, in particular testing, transparency, and human oversight obligations, and may only be placed on the market once these requirements have been met. These include, among others, systems used in medical diagnostics, self-driving vehicles, or biometric identification. For limited-risk AI systems, such as chatbots, transparency obligations are the main requirement, while the AI Regulation does not set out specific rules for AI systems posing minimal or no risk.

The AI Regulation is directly applicable in all EU Member States and, due to its nature as a source of law, cannot be transposed into national law and does not need to be promulgated separately. As a result, the AI Regulation creates a uniform legal framework for the regulation of artificial intelligence throughout the European Union.

Hungarian regulations

In addition to creating a uniform EU regulatory framework, the AI Regulation also imposes several obligations on Member States. Accordingly, Member States, including Hungary, have begun to develop the institutional and legal frameworks necessary to ensure the effective implementation and supervision of the provisions of the AI Regulation.

Under the AI Regulation, the supervision of compliance with the requirements for AI systems classified in each risk category will be the responsibility of the Member States. Accordingly, Member States are required to designate a market surveillance authority and a notifying authority responsible for assessing technical compliance. In addition, each Member State must establish regulatory test environments to support the development of safe and lawful AI.

To ensure compliance with these requirements, in the fall of 2025, the Hungarian Parliament passed Act LXXV of 2025 on the implementation of the European Union’s Artificial Intelligence Regulation in Hungary (“AI Act”), which lays the foundations for the domestic regulatory and institutional structure. The AI Act is implemented by Government Decree 344/2025 (X. 31.) on the implementation of Act LXXV of 2025 on the implementation of the European Union’s regulation on artificial intelligence in Hungary (“AI Government Decree”), which lays down detailed rules on the operation of authorities performing tasks related to artificial intelligence.

Under the AI Act, the notifying authority tasks are performed by a single body, the AI notifying authority. This authority is responsible for designating conformity assessment bodies that examine and certify, in advance, the technical conformity of high-risk AI systems. Under the provisions of the AI Government Decree, the National Accreditation Authority performs this task.

Under the AI Act, market surveillance tasks are also performed by a single authority. The market surveillance authority is responsible for examining the lawful use of AI systems after they have been placed on the market. The Act also requires the AI market surveillance authority to establish and operate an AI regulatory test environment from August 2026 and to act as a point of contact. Under the provisions of the AI Government Decree, the Minister for National Economy is responsible for performing these tasks.

The AI Act also establishes the Hungarian Artificial Intelligence Council, which acts as a coordinating and advisory body. The task of the Hungarian Artificial Intelligence Council is to promote the uniform interpretation of the AI Regulation in Hungary through guidelines and position statements.

Summary

In summary, it can be said that in 2024, the European Union was the first in the world to adopt a comprehensive regulatory framework whose primary objectives are to promote the spread of human-centered, transparent, and reliable artificial intelligence, protect EU fundamental values and rights, and adequately address the risks arising from AI systems. The AI Regulation applies a risk-based regulatory approach, setting differentiated requirements according to the risk posed by each AI system.

The AI Regulation is directly applicable in all Member States, but leaves the implementation and supervisory tasks to national authorities. As a result, in the fall of 2025, Hungary enacted the AI Act and the related AI Government Decree to ensure the domestic implementation of the AI Regulation.

Photo source: pexels.com, Dušan Cvetanović

