Changes to Occupational Safety Rules at the Beginning of the Year

Reading time: 7 minutes

As we reported in our extraordinary newsletter, Act XCIII of 1993 on Labour Safety (“Labour Safety and Health Act”) introduces new rules as of 1 January 2026 for employer organizations regarding the provision of conditions for occupational safety and health. In this article, we summarize the requirements necessary to comply with these obligations.

Principles and requirements

The Labour Safety and Health Act sets out in detail the requirements that employers must take into account to ensure occupational safety and health. In this context, employers must strive to avoid hazards, assess risks that cannot be avoided, and combat hazards at their source. Furthermore, undertakings are required to take human factors into consideration when designing workplaces and selecting work equipment and work processes, to apply the achievements of technical progress, to replace hazardous solutions with less hazardous ones, and to provide appropriate instructions to employees. Companies must develop a coherent and comprehensive prevention strategy covering work processes, technology, work organization, working conditions, social relationships, and the effects of workplace environmental factors.

The role of risk assessment

One of the employer’s most important obligations is the preparation and maintenance of a risk assessment, including risk management and the determination of preventive measures. The assessment is carried out by a specialist, who identifies the hazard sources, determines the group of employees exposed to risks, and assesses the nature of the hazards and the extent of exposure. The risk assessment must be carried out before the commencement of the activity and reviewed when justified—at least every five years. Justifiable cases include changes in technology, work equipment, the method of work, or the scope of the employer’s activities. A risk assessment is likewise justified and required if a work accident or occupational disease occurs in connection with deficiencies in the applied activity, technology, work equipment, or method of work. These tasks qualify in all cases as occupational safety and occupational health professional activities and may only be performed by persons with the prescribed qualifications.

Persons authorized to carry out risk assessments

The Labour Safety and Health Act also contains differentiated rules regarding the qualifications required to carry out risk assessments and to define the occupational safety and occupational health content of the prevention strategy, with particular regard to the hazard class and the number of employees. The detailed rules are set out in Decree 5/1993. (XII. 26.) MüM (hereinafter: “MüM Decree”), which classifies employers into hazard categories and stipulates the qualifications required to perform the tasks accordingly.

In the case of employers classified in hazard class III with a maximum of 50 employees (e.g., labour market service providers, IT infrastructure providers, and wholesale and retail trade in general), there has been no change since 1 July 2025: in accordance with the MüM Decree, the activity may also be carried out by a person holding a specialist medical qualification in occupational medicine, industrial medicine, occupational hygiene, public health and epidemiology, or preventive medicine and public health, or by a person qualified as a public health or epidemiological inspector or supervisor.

As of 1 January 2026, a new rule provides that, for employers employing at least 50 employees, the occupational safety content of the prevention strategy must be developed by a person with higher-level occupational safety qualifications in the case of activities classified under Hazard Classes I and II pursuant to the MüM Decree, such as paper manufacturing, pharmaceutical manufacturing, machinery manufacturing, computer, electronic and optical product manufacturing, and tobacco product manufacturing.

Also introduced as of this year is the rule that, for activities classified under Hazard Class I pursuant to the MüM Decree—such as paper manufacturing, pharmaceutical manufacturing, and machinery manufacturing—the preparation of the risk assessment at employers employing at least 50 employees must be carried out by a person with higher-level occupational safety qualifications.

Special rules for teleworking

In the case of teleworking, the employee performs work for part or all of their working time at a location separate from the employer’s premises. In such cases, work may be performed using equipment provided by the employer or, by agreement, by the employee. Where equipment is provided by the employee, the employer must, as part of the risk assessment, ensure that the work equipment is in a safe condition that does not endanger health, while maintaining this condition is the employee’s responsibility.

If work is not performed using IT equipment, it may only be carried out at a remote workplace that has been preliminarily assessed by the employer as appropriate from an occupational safety perspective, and the employer must regularly monitor working conditions and compliance with the applicable rules.

The situation differs when work is performed using IT equipment. In such cases, the employer is not required to conduct a risk assessment; it is sufficient to inform the employee of the rules for ensuring safe and healthy working conditions and to oblige the employee to comply with them, and the employer may obtain a declaration from the employee acknowledging this obligation. The employer may also keep a register of work equipment. The employee is required to select the place of remote work in compliance with these conditions, and the employer may, of course, monitor compliance remotely through the use of IT tools. Although an individual risk assessment is not required in this case, proper employee information and regular monitoring remain part of the employer’s occupational safety obligations.

Employer obligations and liability

The employer’s ongoing responsibility does not end with the preparation of documentation. Employers must ensure proper information and instruction for employees, regularly monitor working conditions and compliance with regulations, provide safe work equipment, and promptly investigate irregularities and reports. In addition, employers must ensure the proper usability and condition of personal protective equipment, as well as the lawful investigation of work accidents and occupational diseases.

Compliance with occupational safety regulations is also of outstanding importance from the perspective of employer liability for damages, as under Act I of 2012 on the Labour Code the employer bears objective liability for damage caused to employees in connection with the employment relationship. To be exempted from liability, the employer must prove that the damage was caused by a circumstance beyond its control that it could not have foreseen and that it was not reasonably expected to prevent or mitigate. Under this strict regulatory framework, any failure to comply with occupational safety regulations is necessarily assessed to the detriment of the employer. For these reasons, it is particularly important that employers always have up-to-date occupational safety measures in force and that these are properly and verifiably documented.

Summary

Occupational safety regulations make it clear that ensuring occupational safety and health is not merely a formal obligation, but one of the most important elements of employer responsibility. Failure to properly prepare and regularly review the risk assessment and prevention strategy, as well as failure to actually comply with occupational safety requirements, entails not only regulatory sanctions but also significant compensation risks, given the employer’s objective liability. Our firm is pleased to assist in preparing for regulatory changes and in establishing operations that comply with applicable legislation.

Photo source: pexels.com, suntorn somtong


Data protection considerations related to the development of AI models

Reading time: 5 minutes

Artificial intelligence (“AI“) is a rapidly evolving family of technologies that contributes to a wide range of economic, environmental, and social benefits across all sectors and social activities. By improving predictive accuracy, optimizing operational processes and the allocation of resources, and enabling the personalization of digital solutions available to individuals and organizations, the use of AI can confer a decisive competitive advantage on businesses while also delivering beneficial social and environmental outcomes.

The use of artificial intelligence, alongside its potential benefits, is also associated with certain risks. In order to mitigate these risks, Regulation (EU) 2024/1689 of the European Parliament and of the Council on artificial intelligence (“AI Act”) has been adopted, several provisions of which have already entered into force. At the same time, the development of many AI models involves the use of personal data, which raises the question of how the AI Act affects data processing activities related to AI systems.

The relationship between the AI Act and the GDPR

The AI Act makes it clear that it does not amend the application of existing EU rules on the processing of personal data, including the requirements set out in the GDPR. Accordingly, organizations falling within the scope of the AI Act must, in the course of their data processing activities, comply fully with the provisions of the GDPR.

Through the enforcement of the right to the protection of personal data, the GDPR also supports the effective exercise of other fundamental rights, including, inter alia, freedom of thought and expression, the right to information and education, and the freedom to conduct a business. On this basis, it can be concluded that the GDPR establishes a legal framework that facilitates responsible innovation, including the responsible development and deployment of AI-related technologies.

Data protection considerations in relation to the development of AI models

In connection with the development of AI models, the European Data Protection Board (“EDPB”) adopted a standalone opinion on data protection aspects arising in relation to the processing of personal data in the context of artificial intelligence models (“Opinion”).

The Opinion examines how personal data may be used in the development of AI models and highlights the issues requiring particular attention when placing on the market AI systems developed using personal data.

Lifecycle of AI Models

The EDPB divides the lifecycle of AI models into two stages, emphasizing that data processing may occur in either of them. The first stage covers the processes preceding the deployment of the model (including, for example, its creation, development, training, and fine-tuning). The second stage relates to the deployment phase, encompassing the use of the model following its development.

Existence of a legal basis for data processing by data controllers

One of the cornerstones of data protection regulation is that personal data may only be processed where a specific legal basis exists. The Opinion reiterates the general expectation that data controllers must determine the appropriate legal basis for their processing activities.

However, the EDPB found that, as a general rule, an AI model developer may rely on legitimate interest as a legal basis, provided that the existence of such legitimate interest is duly substantiated. For this purpose, a three-step test – already familiar to those with experience in data protection compliance practice – serves to properly assess whether a legitimate interest genuinely exists.

The EDPB emphasizes that the balancing test must take into account whether the data subjects can reasonably expect their personal data to be used. The Opinion is significant in this regard because it sets out several criteria intended to assist data protection authorities in assessing the “reasonably foreseeable” criterion.

The Opinion also recalls that, where the interests, rights, and freedoms of data subjects appear to override the legitimate interests of the data controller or of a third party, the processing is not necessarily ruled out: the data controller may consider implementing mitigating measures to limit such adverse effects. These may include, for example, pseudonymization, or measures aimed at masking personal data or replacing them with fictitious personal data within the training dataset. The introduction of appropriate data protection measures can tip the balance back in favour of lawful processing.

Anonymity

The GDPR classifies as personal data any information relating to an identified or identifiable natural person, whether directly or indirectly. According to the EDPB, in the context of AI model development, personal data may only be used where they are properly anonymized, such that even in the event of a potential reverse engineering of the model, the identification of data subjects is not possible. With regard to anonymization, the EDPB emphasizes that the competent data protection authorities must assess, on a case-by-case basis, whether the organization developing the AI model has complied with this requirement. The body also sets out several recommended techniques that may be suitable for preserving anonymity (e.g. measures that prevent or limit the extraction of personal data used for training purposes).

Summary

The EU body emphasizes in its Opinion that compliance with data protection requirements governing the processing of personal data must be ensured throughout both the development and deployment of AI models. It is evident that the expansion of AI and its potential risks are being treated and monitored as an enforcement priority, and therefore numerous regulatory guidelines from authorities can be expected in the near future.

Photo source: pexels.com, Tara Winstead


The foundations of artificial intelligence regulation in the European Union

Reading time: 4 minutes

In 2024, the European Union adopted its Artificial Intelligence Regulation (the “AI Regulation”), which established the world’s first comprehensive regulatory framework for artificial intelligence. The provisions of the AI Regulation become applicable in stages, with the final stage taking effect on August 2, 2027. The AI Regulation refers certain implementation and supervisory tasks to the Member States, as a result of which a domestic regulatory framework for the use of artificial intelligence (“AI”) was also promulgated in Hungary in the fall of 2025.

Given that the AI Regulation will have to be applied almost in its entirety from August this year, CLVPartners is launching a series of newsletters on artificial intelligence to help with preparations. The aim of the series of articles is to present the legal issues related to the use of artificial intelligence in a practical yet easy-to-understand way. In the first part of the series, we will outline the basic concept of the current EU and Hungarian regulatory framework and its main objectives.

Purpose of the AI Regulation, concept of its regulation

AI is one of the fastest-growing areas of technology, and according to some forecasts, its application could bring significant benefits across a wide range of economic and social activities. At the same time, the European Union has recognized that the use of AI also carries a number of risks, such as the risk that its inappropriate use could jeopardize the fundamental rights and freedoms protected by EU law.

The purpose of the AI Regulation is to ensure that the development and use of AI systems takes place within a responsible framework. It is important to note that the AI Regulation applies not only to manufacturers, importers, distributors, and service providers operating in the European Union, but also to companies outside the EU if their products or services are available on the EU market or have an impact on EU citizens. To this end, the AI Regulation imposes obligations on developers and users of AI systems and establishes a uniform regulatory system for their authorization on the EU market. The AI Regulation stipulates that its regulatory framework serves to strengthen transparency and accountability and to promote the spread of human-centered and reliable artificial intelligence. It also aims to eliminate discrimination and bias, while ensuring that EU fundamental values and rights are upheld and providing effective protection against the risks posed by AI systems.

The AI Regulation takes a risk-based approach, classifying AI systems into four risk categories and assigning different rules and obligations to each category. The use of so-called prohibited AI systems that pose an unacceptable risk, such as cognitive behavioral manipulation or emotion recognition in the workplace, is already prohibited in the European Union. High-risk AI systems are subject to strict requirements, in particular testing, transparency, and human oversight obligations, and may only be placed on the market once these requirements have been met. These include, among others, systems used in medical diagnostics, self-driving vehicles, or biometric identification. For limited-risk AI systems, such as chatbots, transparency obligations are the main requirement, while the AI Regulation does not set out specific rules for AI systems posing minimal or no risk.

The AI Regulation is directly applicable in all EU Member States and, due to its nature as a source of law, cannot be transposed into national law and does not need to be promulgated separately. As a result, the AI Regulation creates a uniform legal framework for the regulation of artificial intelligence throughout the European Union.

Hungarian regulations

In addition to creating a uniform EU regulatory framework, the AI Regulation also imposes several obligations on Member States. Accordingly, Member States, including Hungary, have begun to develop the institutional and legal frameworks necessary to ensure the effective implementation and supervision of the provisions of the AI Regulation.

Under the AI Regulation, the supervision of compliance with the requirements for AI systems classified in each risk category will be the responsibility of the Member States. Accordingly, Member States are required to designate a market surveillance authority and a notifying authority responsible for assessing technical compliance. In addition, each Member State must establish regulatory test environments to support the development of safe and lawful AI.

To ensure compliance with these requirements, in the fall of 2025, the Hungarian Parliament passed Act LXXV of 2025 on the implementation of the European Union’s Artificial Intelligence Regulation in Hungary (“AI Act”), which lays the foundations for the domestic regulatory and institutional structure. The AI Act is implemented by Government Decree 344/2025 (X. 31.) on the implementation of Act LXXV of 2025 on the implementation of the European Union’s regulation on artificial intelligence in Hungary (“AI Government Decree”), which lays down detailed rules on the functioning of the authorities performing tasks related to artificial intelligence.

Under the AI Act, the notifying authority tasks are performed by a single body. This authority is responsible for designating the conformity assessment bodies that examine and certify in advance the technical conformity of high-risk AI systems. Under the provisions of the AI Government Decree, the National Accreditation Authority performs this task.

Under the AI Act, market surveillance tasks are also performed by a single authority. The market surveillance authority is responsible for examining the lawful use of AI systems after they have been placed on the market. The Act also requires the AI market surveillance authority to establish and operate an AI regulatory test environment from August 2026 and to act as a point of contact. Under the provisions of the AI Government Decree, the Minister for National Economy is responsible for performing these tasks.

The AI Act also establishes the Hungarian Artificial Intelligence Council, which acts as a coordinating and advisory body. The task of the Hungarian Artificial Intelligence Council is to promote the uniform interpretation of the AI Regulation in Hungary through guidelines and position statements.

Summary

In summary, it can be said that in 2024, the European Union was the first in the world to adopt a comprehensive regulatory framework whose primary objectives are to promote the spread of human-centered, transparent, and reliable artificial intelligence, protect EU fundamental values and rights, and adequately address the risks arising from AI systems. The AI Regulation applies a risk-based regulatory approach, setting differentiated requirements according to the risk posed by each AI system.

The AI Regulation is directly applicable in all Member States, but leaves the implementation and supervisory tasks to national authorities. As a result, in the fall of 2025, Hungary enacted the AI Act and the related AI Government Decree to ensure the domestic implementation of the AI Regulation.

Photo source: pexels.com, Dušan Cvetanović


Cybersecurity – new regulations, new tasks

On January 1 this year, Act LXIX of 2024 on cybersecurity in Hungary (the “Cybersecurity Act”) came into force. It was adopted in accordance with Directive (EU) 2022/2555 of the European Parliament and of the Council of 14 December 2022 on measures for a high common level of cybersecurity across the Union, amending Regulation (EU) No 910/2014 and Directive (EU) 2018/1972, and repealing Directive (EU) 2016/1148 (“NIS2 Directive”), which aims to mitigate threats to electronic information systems in the information society and to ensure the continuity of services in key sectors. The Cybersecurity Act and related legislation impose strict requirements and provide for serious legal consequences in the event of non-compliance.

As we support many companies in preparing for compliance with the NIS2 Directive and the Cybersecurity Act, the purpose of this article is to draw the attention of all potentially affected companies to the provisions of the Cybersecurity Act that will become relevant in the near future, namely the obligations and deadlines related to contracting and conducting cybersecurity audits.

Scope of affected organizations

The Cybersecurity Act broadly defines the organizations that are required to monitor the security of their electronic information systems and have them audited. Private sector companies fall into this category if they reach a certain size and engage in activities classified as risky or highly risky, as follows:

  • In terms of size, the companies concerned are those that qualify as medium-sized enterprises or exceed the thresholds set for medium-sized enterprises, i.e. those with a total workforce of more than 50 and an annual net turnover or balance sheet total exceeding the equivalent of EUR 10 million in Hungarian forints.
  • The condition relating to the scope of activity is that the enterprises operate in (highly) risky sectors, such as healthcare, telecommunications services, digital infrastructure (cloud service providers, data center service providers), food production, processing and distribution, computers, electronics, optical product manufacturing, or machinery and equipment manufacturing.

If it is unclear whether the obligations under the regulation apply to a given company, it is recommended to clarify this as soon as possible by reviewing the legislation.

Cybersecurity obligations

  • Audit contract:

The current obligation of the enterprises concerned is to enter into a contract with an independent economic operator, registered by the Supervisory Authority for Regulatory Affairs of Hungary (SZTFH), that is authorized to perform cybersecurity audits, in order to verify the cybersecurity of their electronic systems. The SZTFH is already sending out notifications to potentially affected parties, requiring them to provide proof of the conclusion of such a contract by September 15, 2025. Failure to comply with this obligation may result in a fine of between HUF 1 million and HUF 15 million being imposed on the company.

  • Cybersecurity audit:

Following the conclusion of the contract with the auditor, a cybersecurity audit must be carried out by June 30, 2026, during which the security classification of electronic information systems and the adequacy of protective measures according to the security classification will be checked. Failure to perform the audit may result in severe penalties, including fines of up to 2% of the previous year’s turnover, but at least HUF 1 million and up to HUF 150 million.

A cybersecurity audit may take longer depending on the size of the business and the technological and organizational complexity of its activities. For this reason, it is advisable to plan the timing and schedule of the review in advance so that the process not only serves the purpose of compliance, but also actually identifies areas where further action or deficiencies may exist. Examples include reviewing data protection compliance, updating information security policies, or fine-tuning risk management procedures.

The importance of compliance

Due to stricter cybersecurity regulations and the risk of high fines, compliance is not only a legal obligation but also a key business interest. It offers several benefits:

  • Reduced financial and reputational risk;
  • Strengthened cybersecurity protection and digital stability for the business;
  • With the right contract, the scope, schedule, and allocation of the audit’s tasks and responsibilities become predictable;
  • At the same time, data protection aspects can be reviewed and, if necessary, data protection impact assessments can be revised, thereby also meeting the NAIH’s expectations regarding the principle of accountability.

Image source: Brian Penny, pixabay.com

