CLVPartners


Questions regarding the scope of the AI Act

Reading time: 5 minutes

On 12 June 2024, a new era began in the regulation of artificial intelligence (“AI”) with the adoption and publication of the European Union’s Artificial Intelligence Regulation (“AI Act”). The purpose of the legislation is to provide a framework for the safe, transparent, and responsible development and use of AI in the European Union. In order to properly interpret the obligations set out in the AI Act, it is first necessary to clarify exactly which organizations, activities, or technological solutions are covered by the regulation.

In the second part of our series of articles, we will therefore examine the most important rules relating to the scope of the AI Act in order to help our clients start preparing for compliance in advance and to identify whether they are developing or using AI systems in their operations that are subject to the provisions of the AI Act.

The most important rules relating to the scope of the AI Act

Subject of the AI Act

The AI Act essentially covers the regulation of AI systems. Accordingly, it is first important to identify what qualifies as an AI system.

According to the AI Act, an AI system is a machine-based system that is specifically designed to operate with varying levels of autonomy and to be capable of adapting after deployment. These systems analyze inputs for explicit or implicit purposes and generate outputs—such as predictions, content, recommendations, or decisions—that may have an impact on the physical or virtual environment.

What fundamentally distinguishes AI systems from traditional software solutions is their ability to learn from input data, draw conclusions based on that data, and create models. In contrast, simpler software systems based on classic programming approaches—including systems that perform automated operations based solely on predefined, human-set rules—do not have such learning or adaptation capabilities. As a result, these solutions are not considered AI systems and are therefore not covered by the relevant regulations.

Traditional software solutions include applications that operate entirely on predetermined rules and are incapable of independent learning or adaptation. These include traditional calculators, certain basic functions of Microsoft Excel, and performance evaluation software used for financial forecasting, which are only capable of processing historical data and drawing simple statistical conclusions.

It is also important to note that, as a general rule, the AI Act does not apply to certain specific areas. The scope of the regulation does not cover AI systems used for military, defence, or national security purposes, nor does it cover systems, models, and results created specifically for scientific research and development. In addition, the AI Act does not apply to natural persons who use AI systems solely for personal, non-professional purposes.

Territorial and personal scope

The scope of the AI Act is not limited to operators located within the European Union. The regulation applies to all AI systems that are placed on the market, put into service, or used within the internal market of the European Union. Accordingly, in certain cases, the regulation also applies to operators located outside the Union.

The regulation essentially covers all operators who come into contact with AI systems, from development to use. Accordingly, the scope of the AI Act may include, among others, developers, service providers, distributors, importers, installers, operators, and users of the systems.

Temporal scope

The provisions of the AI Act become applicable in stages.

However, it is important to note that certain provisions of the AI Act are already in force. These include, among others, provisions on definitions, rules on AI systems that pose an unacceptable risk (i.e., prohibited AI systems), obligations governing general-purpose AI models, and provisions on AI literacy, regulatory oversight, and sanctions.

The requirement for so-called AI literacy has particular significance. Under this provision, organizations using AI systems are required to ensure that the persons managing or operating the system have an adequate level of knowledge about AI.

According to the current provisions of the AI Act, the majority of the provisions will become applicable on 2 August 2026. However, as part of the Digital Omnibus Package, the European Commission proposed in November 2025 that the application of certain rules be postponed by up to 18 months.

Further uncertainty regarding implementation arises from the fact that the Commission was supposed to publish guidance on the classification of high-risk AI systems by 2 February 2026. These guidelines are key to determining whether a given AI application is considered high-risk and, as a result, subject to stricter documentation, compliance, and oversight requirements. However, the guidelines have not yet been published. In addition, several Member States have encountered difficulties in designating the authorities responsible for implementing the regulation.

As a result, there is still considerable uncertainty regarding the entry into force and practical application of certain provisions.

Domestic regulation

Due to its legislative form, Hungarian regulation is essentially supplementary to EU rules. Accordingly, Act LXXV of 2025 on the implementation of the European Union’s Artificial Intelligence Act in Hungary (“Hungarian AI Act”) applies only to matters, organizations, and AI systems that affect Hungary or its territory.

In terms of temporal scope, the Hungarian AI Act is generally applicable from December 2025, with the exception of the provision on the regulatory sandbox, which will enter into force on 2 August 2026.

Summary

The AI Act regulates systems that can learn from input data, draw conclusions, and generate outputs autonomously, while traditional software that operates solely according to predefined rules falls outside its scope. The territorial scope of the AI Act is broad: it applies not only to entities established in the EU, but also to those who place AI systems on the EU market or use their outputs in the EU. The regulation covers all relevant actors, from the development of a system to its use. Although the rules of the AI Act become applicable in stages, several key provisions are already in force; however, the lack of guidelines and of the institutional conditions for implementation is currently causing uncertainty in practical application.

Photo source: pexels.com, Tara Winstead


Current Activities of the European Data Protection Board to Support GDPR Compliance

Reading time: 7 minutes

The European Data Protection Board has published its Work Programme for 2026–2027 (hereinafter: the “Programme”), adopted on 11 February 2026. The Programme provides not only strategic directions but also concrete tools to support organisations’ day-to-day compliance. This article summarises the consultation results and the key plans set out in the European Data Protection Board’s Programme.

Background

In its 2024–2027 strategy, the European Data Protection Board identified four interlinked priorities. These include strengthening the consistent application of data protection rules and supporting organisations in complying with the law; a further priority is deepening cooperation among data protection authorities, particularly in cross-border cases. The strategy also emphasises ensuring that data protection remains effective in a fast-evolving digital environment affecting multiple regulatory areas, including applications of artificial intelligence. Moreover, the European Data Protection Board aims to actively foster and shape international dialogue on privacy and personal data protection. The Programme supports the implementation of the European Data Protection Board’s 2024–2027 strategy, based on the identified priorities and the most important needs of stakeholders.

Main elements of the programme

The Programme builds on the consistent application of Regulation (EU) 2016/679 of the European Parliament and of the Council (“GDPR”) and sets out the European Data Protection Board’s activities for 2026–2027 along four pillars: harmonisation and compliance, a common culture of enforcement, challenges in the digital regulatory environment, and global data protection dialogue.

Harmonisation and legal clarity

The European Data Protection Board will continue to issue detailed yet accessible guidance on topics considered critical by stakeholders during events and consultations, such as anonymisation and pseudonymisation, data processing based on legitimate interests, “consent or pay” models, and targeted updates on guidance for data protection officers.

The European Data Protection Board also intends to facilitate GDPR compliance with new practical tools, particularly for small and medium-sized enterprises (SMEs), including templates and guidance. To this end, a public consultation was conducted between 5 November and 3 December 2025 to identify which practical templates would most effectively support GDPR compliance.

The consultation highlighted the greatest demand for templates on records of processing activities, data protection impact assessments, legitimate interest assessments, privacy notices, transfer impact assessments, data processing agreements, data breach notification forms, and risk assessment templates. The European Data Protection Board has prioritised three templates in the Programme—legitimate interest assessment, records of processing activities, and privacy notices—to provide consistent, practical support, especially for organisations with limited resources.

In addition, the European Data Protection Board supports controllers and processors in developing and implementing compliance measures, for instance, through opinions on certification schemes, codes of conduct, and accreditation.

Stronger enforcement culture and cooperation

The second pillar aims to ensure consistency in the application and enforcement of the GDPR and to enhance cooperation among the members of the European Data Protection Board. The Board will continue to support the development of cooperation and enforcement tools, promote the sharing of expertise, and devote greater attention to priority issues in order to foster consistent enforcement.

In line with these objectives, the European Data Protection Board will focus on the consistent application of the GDPR and effective cooperation between authorities. To this end, it will update, among other things, its guidelines on handling cross-border cases, its principles on imposing fines, and its rules on mutual assistance and emergency procedures. As part of its action on the Coordinated Enforcement Framework (CEF), in 2026 it will focus on fulfilling the obligations under Articles 12-14 of the GDPR regarding transparent information, communication and measures for the exercise of data subjects’ rights. Where necessary, it will set up working groups to provide operational platforms for cases requiring cooperation on enforcement matters. To ensure the effective functioning of the consistency mechanism, it will adopt opinions addressed to national supervisory authorities with a view to supporting consistent decision-making.

Data protection at the intersection of digital legislation

The European Data Protection Board’s priority is to ensure coherence across EU digital legislation. In the rapidly evolving technological and market environment, data protection interacts closely with multiple other EU laws, such as the AI Regulation. This increases the importance of consistent interpretation, coordinated action by authorities, and clear guidance. The European Data Protection Board collaborates with other regulators, including competition and consumer protection authorities, to support the new cross-regulatory environment. Key technological topics include generative AI, telemetry and diagnostic data, and blockchain-related data protection issues.

Global data protection dialogue and data transfers

The European Data Protection Board continues to promote global dialogue on privacy and data protection, focusing on international cooperation between its members and third-country authorities, especially those with EU adequacy decisions.

Conclusion: more support, greater legal certainty

A key message of the Programme is that GDPR compliance is not merely a matter of regulatory oversight, but a process that can be actively supported and structured. Templates, harmonised guidance, and enhanced authority cooperation aim to make GDPR application more predictable and practical. At the same time, each organisation must tailor its data processing documents and procedures to its own business processes and risks. The European Data Protection Board seeks to strengthen fundamental rights, support organisational compliance, and ensure that European data protection remains coherent and competitive in a fast-changing digital environment.

Photo source: pexels.com, MART PRODUCTION


Termination based on employer-related reasons and their legal framework in practice

Reading time: 5 minutes

The termination of an employment relationship is one of the most complex areas of labour law, requiring particular care. To ensure compliance with the law, it is essential that the employer has a thorough understanding of the relevant legislation, as well as of the rights and obligations of both parties. This article therefore reviews the possible grounds for termination by the employer and, within that context, provides a detailed overview of the practical considerations regarding terminations based on reasons related to the employer’s operations.

Key rules governing termination of employment by the employer

The purpose of labour law regulation is primarily determined by its social function and by the hierarchical relationship between the parties. Consequently, Act I of 2012 on the Labour Code (“Labour Code”) sets forth in detail the substantive and procedural conditions under which an employer is entitled to terminate an employee’s employment relationship.

One way to terminate an employment relationship is through a termination notice given by the employer. When the employer decides to terminate the employment relationship by giving a termination notice, it must be determined whether there is a valid basis for doing so as required by law.

In the case of an indefinite-term employment relationship, the grounds for termination may be based solely on

the employee’s ability,

the employee’s behaviour or

reasons related to the employer’s operations.

It is thus clear that the groups of reasons can be divided into two main categories, depending on whether they relate to the employee or the employer.

In practice, it is often difficult to draw a clear line between whether the disputed circumstance is related to the employee’s ability or behaviour (e.g., in cases of performance issues, it is often not entirely clear whether they are caused by the employee’s attitude or a lack of ability). In other cases, however, these circumstances are clearly distinct (e.g., an employee’s regular tardiness is typically a behavioural issue, while medical unfitness or a lack of required language skills indicate deficiencies in ability).

Of course, any of these circumstances may justify the employer’s right to terminate the employment contract.

Another important category involves reasons related to the employer’s operations, which, as the name implies, are independent of the employee’s conduct or abilities. It may happen that the employer’s order volume decreases, the economic environment deteriorates, or organizational restructuring, reorganization, or outsourcing becomes necessary to maintain competitiveness. In such cases, the need to terminate certain employment relationships may arise, and this can indeed serve as a lawful basis for termination by the employer. Since this article focuses specifically on terminations based on reasons related to the employer’s operations, we will now describe this category of reasons in detail.

Reasons for and rules governing termination of employment related to the employer’s operations

First, it is worth emphasizing that grounds for termination based on the employer’s operations constitute a broad category, as they may encompass numerous specialized, economically motivated decisions for which an exhaustive statutory list would not be practical. Below, we describe a few typical scenarios, noting that these may occur even in combination.

We can speak of the elimination of a position when an employer completely eliminates a specific position within the organization, and as a result, the employment relationship of all employees working in that position is terminated.

In contrast to the above, the situation involves a reduction in headcount rather than the elimination of a position when the employer does not eliminate the position itself but reduces the number of employees in that role (e.g., due to a decrease in tasks or digitization). Although the court does not examine the economic rationality of the decision or the criteria for selection, the fundamental principles must, of course, be observed in such cases as well—with particular regard, for example, to the requirement of equal treatment and the prohibition of abuse of rights.

Replacement for better qualifications is also one of the grounds for termination that fall within the employer’s sphere of interest and decision-making. The rationale behind such a replacement is that the employer decides to fill the position in question with an employee who possesses additional qualifications; for example, the employer may require proficiency in a specific language or additional training.

Another common scenario is when an employer decides to reorganize the performance of tasks in the future, for example, by establishing temporary agency work, simplified employment, or contractor/service relationships instead of employing workers under a traditional employment relationship.

A common feature of these grounds is that the practicality of the employer’s organizational or business decisions cannot be questioned on its own merits. Accordingly, the court cannot deem terminations related to reorganization to be unlawful merely because the practicality or economic rationality of the decision is debatable. Similarly, an employee cannot successfully argue that the measure serving as the basis for the termination was not economically rational. It is important, however, that the reason for terminating the employee’s employment must be reasonable, meaning that the selected employee’s dismissal must be related to the economic reason.

Application of the rules governing collective redundancies

If an employer terminates the employment of a specific number of employees within a relatively short period of time, citing operational reasons, the decision may be classified as a collective redundancy procedure. In such cases, the employer is subject to specific procedural, consultation, and notification obligations.

Since a large number of employees may suddenly enter the labour market, the law requires compliance with a set of procedures that include specific safeguards to counterbalance this. These rules are intended to ensure that the affected employees, the labour market, and the Government Employment Service—which assists employees in finding new employment as quickly as possible—can all prepare for the change. We will discuss the detailed rules in the next part of our series of articles.

Summary

Overall, it can be said that the termination of employment is one of the most complicated areas of employment law, the primary purpose of which is to balance the power between the employee and the employer. One possible ground for termination by the employer is a reason related to the employer’s operations, which may serve as a means of restructuring the organization or maintaining economic stability. In cases of termination based on such grounds, a thorough understanding of the regulations is particularly important, since if the employer terminates the employment of a specified number of employees within a short period of time citing this reason, the decision may qualify as a collective redundancy, which entails specific procedural obligations.

Photo source: pexels.com, Jahoo Clouseau


The managing director’s liability, particularly in relation to the annual financial statements, and possible ways to limit liability

Reading time: 7 minutes

Introduction

A key figure in Hungarian company law is the executive officer responsible for the operational management of business associations—typically the managing director in the case of a limited liability company. Act V of 2013 on the Civil Code (“Civil Code”) sets out in detail the fundamental rules applicable to executive officers, including their liability, which extends not only to day-to-day operational decisions but also, in particular, to the company’s financial and sustainability reporting. This becomes particularly important as the general reporting period approaches. For companies operating on a financial year identical to the calendar year, May is of particular importance: major tax returns must be submitted by May 20, and the annual financial statements must be formally approved and published by May 31. In light of these upcoming deadlines, it is important for our clients to be aware of the key rules governing the managing director’s duties and responsibilities in this context.

In this article, we examine the liability of managing directors primarily in relation to the annual financial statements. Within this framework, we outline the boundaries of this liability and how these risks can be consciously mitigated by establishing appropriate safeguards.

General overview of the managing director’s liability

The Civil Code clearly establishes the fundamental principles of executive management. The managing director is responsible for the operational management of the company and must perform this activity with the company’s interests as a priority.

The most important expectation imposed on the managing director is the so-called “prudent businessperson” standard. This objective standard requires that the executive officer make decisions in all cases based on adequate information, in good faith, after assessing potential business and legal risks, and exclusively with the company’s interests in mind.

In practice, this means that the managing director may not make decisions that prioritize their own interests or those of third parties (including members) over the company’s financial stability. The Civil Code makes it clear that limited liability protects only the members; the managing director’s underlying personal liability for breaches of duty and unlawful conduct remains independent of this.

Liability related to the preparation and approval of financial statements

One of the managing director’s key responsibilities is the preparation, approval, and publication of the annual financial statements in accordance with accounting law.

Although bookkeeping and the preparation of tax returns are often carried out by internal or external accountants and financial professionals, the liability and legal responsibility for the proper keeping of accounts and the accuracy of the financial statements rests with the managing director.

The approval of the financial statements and the decision on the use of after-tax profit fall within the exclusive competence of the company’s supreme body. In this context, the managing director’s obligation is to prepare the draft, submit it to the supreme body, and provide a written proposal regarding the use of profits.

Furthermore, if the managing director detects that the company’s capital position is inadequate, they are obliged to convene the general meeting and initiate prompt remedial measures.


Mitigating managing director liability – practical options

Granting of discharge

The Civil Code itself provides a tool for limiting the liability of managing directors: the so-called discharge. The essence of discharge is that the supreme body confirms that the managing director’s activities in the previous financial year were appropriate. If granted, the company generally cannot subsequently enforce claims for damages against the managing director for breaches of duty during the given period.

However, it is important to note that discharge does not provide absolute protection. If it is later proven that the facts or data underlying the discharge were false or incomplete, the company may still assert claims for damages against the managing director.

Establishing internal procedures

Liability can also be mitigated by implementing appropriate internal procedures for significant decision-making within the company. This can take various forms, such as introducing joint signature systems or approval processes based on specific areas of responsibility.

Decisions subject to approval by the supreme body

It is common for companies to require prior approval by the supreme body for certain decisions that would otherwise fall within the managing director’s competence. These may be defined in various ways (e.g., based on subject matter or financial thresholds). It is also possible for the managing director to seek approval from the general meeting even in matters where it is not legally required. In such cases, if approval is granted, the managing director’s liability is naturally reduced.

Involvement of external experts

As noted above, the managing director is fundamentally responsible for all operational decisions. However, they are often required to make decisions on specialized matters that may fall outside their expertise. In such cases, it is strongly recommended to assess potential risks and decision options with the involvement of experts in advance. This can lead to more well-founded decisions and may also reduce the extent of the managing director’s potential liability.

Directors’ and Officers’ (D&O) liability insurance

The liability of managing directors is extremely broad, as their decisions affect nearly all aspects of the company’s operations—from financial management and legal compliance to obligations toward employees and business partners. Moreover, in certain cases, they may be held liable with their personal assets for damages they cause, representing a significant personal risk. In an increasingly complex and strictly regulated economic environment, it is easier than ever to make inadvertent mistakes with serious legal and financial consequences. D&O liability insurance offers a solution to mitigate these risks by providing coverage for claims brought against executive officers. Typically, such insurance covers legal defense costs and, subject to contractual terms, awarded damages, thereby enabling managing directors to make responsible decisions with greater security.

Conclusion

It is clear that the scope of duties for managing directors is extremely wide-ranging, while their responsibilities are also particularly strict, often involving significant personal risk. In this complex and constantly evolving environment, conscious preparation and the establishment of appropriate safeguards are essential. These include, among others, well-designed and documented decision-making processes, strengthened internal control systems, and, where appropriate, adequate insurance coverage. Together, these measures help ensure that managing directors can perform their duties within transparent, lawful, and secure frameworks, reducing the risk of errors and the associated liability. Since the way companies operate is constantly changing, it is advisable to conduct periodic reviews of management procedures established years ago to ensure that they comply with updated protocols and that management responsibilities are aligned accordingly.

Photo source: Pexels.com, Vlada Karpovich


Changes to Occupational Safety Rules at the Beginning of the Year

Reading time: 7 minutes

As we reported in our extraordinary newsletter, Act XCIII of 1993 on Labour Safety (“Labour Safety and Health Act”) introduces new rules as of 1 January 2026 for employer organizations regarding the provision of conditions for occupational safety and health. In this article, we summarize the requirements necessary to comply with these obligations.

Principles and requirements

The Labour Safety and Health Act sets out in detail the requirements that employers must take into account to ensure occupational safety and health. In this context, employers must strive to avoid hazards, assess risks that cannot be avoided, and combat hazards at their source. Furthermore, undertakings are required to take human factors into consideration when designing workplaces and selecting work equipment and work processes, to apply the achievements of technical progress, to replace hazardous solutions with less hazardous ones, and to provide appropriate instructions to employees. Companies must develop a coherent and comprehensive prevention strategy covering work processes, technology, work organization, working conditions, social relationships, and the effects of workplace environmental factors.

The role of risk assessment

One of the employer’s most important obligations is the preparation and maintenance of a risk assessment, including risk management and the determination of preventive measures. The assessment is carried out by a specialist, who identifies the hazard sources, determines the group of employees exposed to risks, and assesses the nature of the hazards and the extent of exposure. The risk assessment must be carried out before the commencement of the activity and reviewed when justified—at least every five years. Justifiable cases include changes in technology, work equipment, the method of work, or the scope of the employer’s activities. A risk assessment is likewise justified and required if a work accident or occupational disease occurs in connection with deficiencies in the applied activity, technology, work equipment, or method of work. These tasks qualify in all cases as occupational safety and occupational health professional activities and may only be performed by persons with the prescribed qualifications.

Persons authorized to carry out risk assessments

The Labour Safety and Health Act also contains differentiated rules regarding the qualifications required to carry out risk assessments and to define the occupational safety and occupational health content of the prevention strategy, with particular regard to the hazard class and the number of employees. The detailed rules are set out in Decree 5/1993. (XII. 26.) MüM (hereinafter: “MüM Decree“), which classifies employers into hazard categories and stipulates the qualifications required to perform the tasks accordingly.

In the case of employers classified in hazard class III with a maximum of 50 employees (e.g., labour market service providers, IT infrastructure providers, and wholesale and retail trade in general), there has been no change: since 1 July 2025, in accordance with the MüM Decree, the activity may also be carried out by a person holding a specialist medical qualification in occupational medicine, industrial medicine, occupational hygiene, public health and epidemiology, or preventive medicine and public health, or by a person holding a qualification as a public health or epidemiological inspector or supervisor.

As of 1 January 2026, a new rule provides that, for employers employing at least 50 employees, the occupational safety content of the prevention strategy must be developed by a person with higher-level occupational safety qualifications in the case of activities classified under Hazard Classes I and II pursuant to the MüM Decree, such as paper manufacturing, pharmaceutical manufacturing, machinery manufacturing, computer, electronic and optical product manufacturing, and tobacco product manufacturing.

Also introduced as of this year is the rule that, for activities classified under Hazard Class I pursuant to the MüM Decree—such as paper manufacturing, pharmaceutical manufacturing, and machinery manufacturing—the preparation of the risk assessment at employers employing at least 50 employees must be carried out by a person with higher-level occupational safety qualifications.

Special rules for teleworking

In the case of teleworking, the employee performs work for part or all of their working time at a location separate from the employer’s premises. In such cases, work may be performed using equipment provided by the employer or, by agreement, by the employee. Where equipment is provided by the employee, the employer must, as part of the risk assessment, ensure that the work equipment is in a safe condition that does not endanger health, while maintaining this condition is the employee’s responsibility.

If work is not performed using IT equipment, it may only be carried out at a remote workplace that has been preliminarily assessed by the employer as appropriate from an occupational safety perspective, and the employer must regularly monitor working conditions and compliance with the applicable rules.

The situation differs when work is performed using IT equipment. In such cases, the employer is not required to conduct a risk assessment; it is sufficient for the employer to inform the employee of the rules for ensuring safe and healthy working conditions and to oblige the employee to comply with these rules, and the employer may obtain a declaration from the employee acknowledging this obligation. The employer may keep a register of work equipment. The employee is required to select the place of remote work in compliance with these conditions. Compliance with the rules may, of course, be monitored remotely by the employer through the use of IT tools. Although an individual risk assessment is not required in this case, proper employee information and regular monitoring remain part of the employer’s occupational safety obligations.

Employer obligations and liability

The employer’s ongoing responsibility does not end with the preparation of documentation. Employers must ensure proper information and instruction for employees, regularly monitor working conditions and compliance with regulations, provide safe work equipment, and promptly investigate irregularities and reports. In addition, employers must ensure the proper usability and condition of personal protective equipment, as well as the lawful investigation of work accidents and occupational diseases.

Compliance with occupational safety regulations is also of outstanding importance from the perspective of employer liability for damages, as under Act I of 2012 on the Labour Code the employer bears objective liability for damage caused to employees in connection with the employment relationship. To be exempted from liability, the employer must prove that the damage was caused by a circumstance beyond its control that it could not have foreseen and that it was not reasonably expected to prevent or mitigate. Under this strict regulatory framework, any failure to comply with occupational safety regulations is necessarily assessed to the detriment of the employer. For these reasons, it is particularly important that employers always have up-to-date occupational safety measures in force and that these are properly and verifiably documented.

Summary

Occupational safety regulations make it clear that ensuring occupational safety and health is not merely a formal obligation, but one of the most important elements of employer responsibility. Failure to properly prepare and regularly review the risk assessment and prevention strategy, as well as failure to actually comply with occupational safety requirements, entails not only regulatory sanctions but also significant compensation risks, given the employer’s objective liability. Our firm is pleased to assist in preparing for regulatory changes and in establishing operations that comply with applicable legislation.

Photo source: pexels.com, suntorn somtong

Changes to Occupational Safety Rules at the Beginning of the Year

Data protection considerations related to the development of AI models

Reading time: 5 minutes

Artificial intelligence (“AI“) is a rapidly evolving family of technologies that contributes to a wide range of economic, environmental, and social benefits across all sectors and social activities. By improving predictive accuracy, optimizing operational processes and the allocation of resources, and enabling the personalization of digital solutions available to individuals and organizations, the use of AI can confer a decisive competitive advantage on businesses while also delivering beneficial social and environmental outcomes.

The use of artificial intelligence, alongside its potential benefits, is also associated with certain risks. In order to mitigate these risks, Regulation (EU) 2024/1689 of the European Parliament and of the Council on artificial intelligence (“AI Act”) has been adopted, several provisions of which have already entered into force. At the same time, the development of many AI models involves the use of personal data, which raises the question of how the AI Act affects data processing activities related to AI systems.

The relationship between the AI Act and the GDPR

The AI Act makes it clear that it does not amend the application of existing EU rules on the processing of personal data, including the requirements set out in the GDPR. Accordingly, organizations falling within the scope of the AI Act must, in the course of their data processing activities, comply fully with the provisions of the GDPR.

Through the enforcement of the right to the protection of personal data, the GDPR also supports the effective exercise of other fundamental rights, including, inter alia, freedom of thought and expression, the right to information and education, and the freedom to conduct a business. On this basis, it can be concluded that the GDPR establishes a legal framework that facilitates responsible innovation, including the responsible development and deployment of AI-related technologies.

Data protection considerations in relation to the development of AI models

In connection with the development of AI models, the European Data Protection Board (“EDPB”) adopted a standalone opinion on data protection aspects arising in relation to the processing of personal data in the context of artificial intelligence models (“Opinion”).

The Opinion examines how personal data may be used in the development of AI models and highlights the issues requiring particular attention when placing on the market AI systems developed using personal data.

Lifecycle of AI Models

The EDPB divides the lifecycle of AI models into two stages, emphasizing that data processing may occur in either of them. The first stage covers the processes preceding the deployment of the model (including, for example, its creation, development, training, and fine-tuning). The second stage relates to the deployment phase, encompassing the use of the model following its development.

Existence of a legal basis for data processing by data controllers

One of the cornerstones of data protection regulation is that personal data may only be processed where a specific legal basis exists. The Opinion reiterates the general expectation that data controllers must determine the appropriate legal basis for their processing activities.

However, the EDPB found that, as a general rule, an AI model developer may rely on legitimate interest as a legal basis, provided that the existence of such legitimate interest is duly substantiated. For this purpose, a three-step test – already familiar to those with experience in data protection compliance practice – serves to properly assess whether a legitimate interest genuinely exists.

The EDPB emphasizes that the balancing test must take into account whether the data subjects can reasonably expect their personal data to be used. The Opinion is significant in this regard because it sets out several criteria intended to assist data protection authorities in assessing the "reasonably foreseeable" criterion.

The Opinion also recalls that, where it appears that the interests, rights, and freedoms of data subjects override the legitimate interests of the data controller or of a third party, all is not lost. In such cases, the data controller may consider implementing mitigating measures to limit the adverse effects. These may include, for example, pseudonymization, or measures aimed at masking personal data or replacing them with fictitious personal data within the training dataset. The introduction of appropriate data protection measures can render the processing lawful.
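To make the pseudonymization mitigation more concrete, the sketch below shows one common technique: replacing direct identifiers in a training dataset with keyed hashes before the data reach the model. All names, field values, and the key-handling approach here are illustrative assumptions, not requirements taken from the Opinion; note that because the key allows re-linking, this remains pseudonymization (the data are still personal data under the GDPR), not anonymization.

```python
import hashlib
import hmac

# Hypothetical secret key. It must be stored separately from the training
# pipeline; whoever holds it can re-link records, which is why this
# technique is pseudonymization rather than anonymization.
SECRET_KEY = b"keep-this-key-outside-the-training-pipeline"

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256)."""
    digest = hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]

# Illustrative training records containing a direct identifier ("name").
training_records = [
    {"name": "Anna Kovacs", "age_band": "30-39", "text": "support ticket ..."},
    {"name": "Peter Nagy", "age_band": "40-49", "text": "complaint ..."},
]

# Mask the identifier while leaving the rest of the record usable for training.
masked = [{**record, "name": pseudonymize(record["name"])} for record in training_records]
```

Because the same identifier always maps to the same pseudonym, records belonging to one person can still be grouped during training without exposing the underlying name.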

Anonymity

The GDPR classifies as personal data any information relating to an identified or identifiable natural person, whether directly or indirectly. According to the EDPB's position, in the context of AI model development, personal data may only be used where they are properly anonymized, such that even in the event of a potential reverse engineering of the model, the identification of data subjects is not possible. With regard to anonymization, the EDPB emphasizes that the competent data protection authorities must assess, on a case-by-case basis, whether the organization developing the AI model has complied with this requirement. The body also sets out several recommended techniques that may be suitable for preserving anonymity (e.g., preventing or limiting the extraction of personal data used for training purposes).

Summary

The EU body emphasizes in its Opinion that compliance with data protection requirements governing the processing of personal data must be ensured throughout both the development and deployment of AI models. It is evident that the expansion of AI and its potential risks are being treated and monitored as a priority in law enforcement, and therefore numerous regulatory guidelines from authorities can be expected in the near future.

Photo source: pexels.com, Tara Winstead


The foundations of artificial intelligence regulation in the European Union

Reading time: 4 minutes

In 2024, the European Union adopted its Artificial Intelligence Regulation (the “AI Regulation“), which established the world’s first comprehensive regulatory framework for artificial intelligence. The provisions of the AI Regulation will gradually become mandatory until August 2, 2027. The AI Regulation refers certain implementation and supervisory tasks to the Member States, as a result of which a domestic regulatory framework for the use of artificial intelligence (“AI“) was also promulgated in Hungary in the fall of 2025.

Given that the AI Regulation will have to be applied almost in its entirety from August this year, CLVPartners is launching a series of newsletters on artificial intelligence to help with preparations. The aim of the series of articles is to present the legal issues related to the use of artificial intelligence in a practical yet easy-to-understand way. In the first part of the series, we will outline the basic concept of the current EU and Hungarian regulatory framework and its main objectives.

Purpose of the AI Regulation, concept of its regulation

AI is one of the fastest-growing areas of technology, and according to some forecasts, its application could bring significant benefits across a wide range of economic and social activities. At the same time, the European Union has recognized that the use of AI also carries a number of risks, such as the risk that its inappropriate use could jeopardize the fundamental rights and freedoms protected by EU law.

The purpose of the AI Regulation is to ensure that the development and use of AI systems takes place within a responsible framework. It is important to note that the AI Regulation applies not only to manufacturers, importers, distributors, and service providers operating in the European Union, but also to companies outside the EU if their products or services are available on the EU market or have an impact on EU citizens. To this end, the AI Regulation imposes obligations on developers and users of AI systems and establishes a uniform regulatory system for their authorization on the EU market. The AI Regulation stipulates that its regulatory framework serves to strengthen transparency and accountability and to promote the spread of human-centered and reliable artificial intelligence. It also aims to eliminate discrimination and bias, while ensuring that EU fundamental values and rights are upheld and providing effective protection against the risks posed by AI systems.

The AI Regulation takes a risk-based approach, classifying AI systems into four risk categories and assigning different rules and obligations to each category. The use of so-called prohibited AI systems that pose an unacceptable risk, such as cognitive behavioral manipulation or emotion recognition in the workplace, is already prohibited in the European Union. High-risk AI systems are subject to strict requirements, in particular testing, transparency, and human oversight obligations, and may only be placed on the market once these requirements have been met. These include, among others, systems used in medical diagnostics, self-driving vehicles, or biometric identification. For low-risk AI systems, such as chatbots, transparency obligations are the main requirement, while the AI Regulation does not set out specific rules for minimal or risk-free AI systems.

The AI Regulation is directly applicable in all EU Member States and, due to its nature as a source of law, cannot be transposed into national law and does not need to be promulgated separately. As a result, the AI Regulation creates a uniform legal framework for the regulation of artificial intelligence throughout the European Union.

Hungarian regulations

In addition to creating a uniform EU regulatory framework, the AI Regulation also imposes several obligations on Member States. Accordingly, Member States, including Hungary, have begun to develop the institutional and legal frameworks necessary to ensure the effective implementation and supervision of the provisions of the AI Regulation.

Under the AI Regulation, the supervision of compliance with the requirements for AI systems classified in each risk category will be the responsibility of the Member States. Accordingly, Member States are required to designate a market surveillance authority and a notifying authority responsible for assessing technical compliance. In addition, each Member State must establish regulatory test environments to support the development of safe and lawful AI.

To ensure compliance with these requirements, in the fall of 2025, the Hungarian Parliament passed Act LXXV of 2025 on the implementation of the European Union's Artificial Intelligence Regulation in Hungary ("AI Act"), which lays the foundations for the domestic regulatory and institutional structure. The AI Act is implemented by Government Decree 344/2025 (X. 31.) on the implementation of Act LXXV of 2025 on the implementation of the European Union's regulation on artificial intelligence in Hungary ("AI Government Decree"), which lays down detailed rules on the operation of the authorities performing tasks related to artificial intelligence.

Under the AI Act, the notifying authority tasks are performed by a single body. This authority is responsible for designating conformity assessment bodies that examine and certify the technical conformity of high-risk AI systems in advance. Under the provisions of the AI Government Decree, the National Accreditation Authority performs this task.

Under the AI Act, market surveillance tasks are also performed by a single authority. The market surveillance authority is responsible for examining the lawful use of AI systems after they have been placed on the market. The Act also requires the AI market surveillance authority to establish and operate an AI regulatory test environment from August 2026 and to act as a point of contact. Under the provisions of the AI Government Decree, the Minister for National Economy is responsible for performing these tasks.

The AI Act also establishes the Hungarian Artificial Intelligence Council, which acts as a coordinating and advisory body. The task of the Hungarian Artificial Intelligence Council is to promote the uniform interpretation of the AI Regulation in Hungary through guidelines and position statements.

Summary

In summary, it can be said that in 2024, the European Union was the first in the world to adopt a comprehensive regulatory framework whose primary objectives are to promote the spread of human-centered, transparent, and reliable artificial intelligence, protect EU fundamental values and rights, and adequately address the risks arising from AI systems. The AI Regulation applies a risk-based regulatory approach, setting differentiated requirements according to the risk posed by each AI system.

The AI Regulation is directly applicable in all Member States, but leaves the implementation and supervisory tasks to national authorities. As a result, in the fall of 2025, Hungary enacted the AI Act and the related AI Government Decree to ensure the domestic implementation of the AI Regulation.

Photo source: pexels.com, Dušan Cvetanović


Cybersecurity – new regulations, new tasks

On January 1 this year, Act LXIX of 2024 on cybersecurity in Hungary (the "Cybersecurity Act") came into force. It was adopted in accordance with Directive (EU) 2022/2555 of the European Parliament and of the Council of 14 December 2022 on measures for a high common level of cybersecurity across the Union, amending Regulation (EU) No 910/2014 and Directive (EU) 2018/1972, and repealing Directive (EU) 2016/1148 ("NIS2 Directive"), which aims to mitigate threats to electronic information systems and to ensure the continuity of services in key sectors. The Cybersecurity Act and related legislation impose strict requirements and provide for serious legal consequences in the event of non-compliance.

As we support many companies in preparing for compliance with the NIS2 Directive and the Cybersecurity Act, the purpose of this article is to draw the attention of all potentially affected companies to the provisions of the Cybersecurity Act that will become relevant in the near future, namely the obligations and deadlines related to contracting and conducting cybersecurity audits.

Scope of affected organizations

The Cybersecurity Act broadly defines the organizations that are required to monitor the security of their electronic systems and audit them. Private sector companies that reach a certain size and engage in activities classified as high-risk or risky fall into this category, as follows:

  • In terms of size, the companies concerned are those that qualify as medium-sized enterprises or exceed the thresholds set for medium-sized enterprises, i.e. those with a total workforce of more than 50 and an annual net turnover or balance sheet total exceeding the equivalent of EUR 10 million in Hungarian forints.
  • The condition relating to the scope of activity is that the enterprises operate in (highly) risky sectors, such as healthcare, telecommunications services, digital infrastructure (cloud service providers, data center service providers), food production, processing and distribution, or the manufacture of computers, electronic and optical products, or machinery and equipment.

If it is unclear whether the obligations under the regulation apply to a given company, it is recommended to clarify this as soon as possible by reviewing the legislation.
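As a first orientation, the two cumulative conditions above (size and sector) can be sketched as a simple check. The function name and parameters below are illustrative assumptions; the statutory definitions in the Cybersecurity Act and the NIS2 Directive are decisive, so this is a rough screening aid, not legal advice.

```python
# EUR 10 million threshold for annual net turnover or balance sheet total,
# per the size criterion described above.
EUR_THRESHOLD = 10_000_000

def likely_in_scope(employees: int,
                    annual_turnover_eur: float,
                    balance_sheet_total_eur: float,
                    risky_sector: bool) -> bool:
    """Rough first-pass check of the two cumulative conditions:
    (1) medium-sized or larger, and (2) active in a (highly) risky sector.
    Simplified for illustration; the legal definitions are decisive."""
    meets_size = employees > 50 and (
        annual_turnover_eur > EUR_THRESHOLD
        or balance_sheet_total_eur > EUR_THRESHOLD
    )
    return meets_size and risky_sector

# Example: a 120-person cloud service provider with EUR 25M turnover.
print(likely_in_scope(120, 25_000_000, 8_000_000, risky_sector=True))  # True
```

A company that fails either condition (for example, a 30-person firm, or one outside the listed sectors) would fall outside this rough check, though sector-specific rules may still apply.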

Cybersecurity obligations

  • Audit contract:

The current obligation of the enterprises concerned is to enter into a contract with an independent economic operator that is authorized to perform cybersecurity audits and registered by the Supervisory Authority for Regulatory Affairs of Hungary (SZTFH), in order to verify the cybersecurity of their electronic systems. The SZTFH is already sending out notifications to potentially affected parties, requiring them to provide proof of the conclusion of such a contract by September 15, 2025. Failure to comply with this obligation may result in a fine of between HUF 1 million and HUF 15 million being imposed on the company.

  • Cybersecurity audit:

Following the conclusion of the contract with the auditor, a cybersecurity audit must be carried out by June 30, 2026, during which the security classification of electronic information systems and the adequacy of protective measures according to the security classification will be checked. Failure to perform the audit may result in severe penalties, including fines of up to 2% of the previous year’s turnover, but at least HUF 1 million and up to HUF 150 million.
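On one common reading of the fine rule above, the maximum exposure is 2% of the previous year's turnover, floored at HUF 1 million and capped at HUF 150 million. The short sketch below illustrates that arithmetic; the function name is an assumption for illustration, and the actual fine imposed in a given case is a matter for the authority.

```python
def max_audit_fine_huf(previous_year_turnover_huf: float) -> float:
    """Upper bound of the fine for a missed audit, as described above:
    2% of the previous year's turnover, with a floor of HUF 1 million
    and a cap of HUF 150 million. Illustrative reading of the rule."""
    return min(max(0.02 * previous_year_turnover_huf, 1_000_000), 150_000_000)

# Example: a company with HUF 2 billion turnover faces a cap of HUF 40 million.
print(f"{max_audit_fine_huf(2_000_000_000):,.0f}")  # 40,000,000
```

For very small turnovers the HUF 1 million floor applies, and for turnovers above HUF 7.5 billion the HUF 150 million cap becomes the binding limit.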

A cybersecurity audit may take longer depending on the size of the business and the technological and organizational complexity of its activities. For this reason, it is advisable to plan the timing and schedule of the review in advance so that the process not only serves the purpose of compliance, but also actually identifies areas where further action or deficiencies may exist. Examples include reviewing data protection compliance, updating information security policies, or fine-tuning risk management procedures.

The importance of compliance

Due to stricter cybersecurity regulations and the risk of high fines, compliance is not only a legal obligation but also a key business interest. The main benefits include:

  • Reduced financial and reputational risk;
  • Strengthened cybersecurity protection and digital stability for the business;
  • With the right contract, the content, schedule, and definition of tasks and responsibilities of the audit become predictable;
  • At the same time, data protection aspects can be reviewed and, if necessary, data protection impact assessment documents can be revised, thus fulfilling the NAIH’s expectation of compliance with the principle of accountability.

Image source: Brian Penny, pixabay.com

