What significance and consequences does the European law regulating artificial intelligence, the Artificial Intelligence Act (AI Act), which came into force on August 1, 2024, have for companies in Germany? This article provides an initial overview of the new regulation.

1. Background to the new legal regulation

Artificial intelligence is considered one of the most critical technologies of the future, and its impact is already described as the next industrial revolution. In addition to the undeniable benefits for authorities, institutions, and commercial enterprises of all sectors and sizes, European legislators are also aware of the risks and dangers associated with this technology. The EU is countering this with a new, risk-based regulation: the Artificial Intelligence Act.

The AI Act is the first comprehensive set of regulations of its kind in the EU and comprises more than 200 pages in the German version. It aims to limit the already foreseeable negative consequences and risks of a completely new technology whose disruptive development cannot yet be fully estimated. At the same time, the innovative possibilities of artificial intelligence for companies, science, and research should not be unnecessarily hindered but promoted as much as possible. After around five years of work, the regulation entered into force on August 1, 2024; its provisions become applicable in stages starting on February 2, 2025, with most applying from August 2, 2026. As an EU regulation, the AI Act applies directly in Germany and the other member states; national legislators must nevertheless flank it, for example by designating supervisory authorities, as part of their digital strategies.

2. Classification according to risk levels

In line with its intention, the AI Act divides the areas of application of artificial intelligence and the associated obligations and legal consequences into risk categories:

2.1 Prohibited, unacceptable areas of application

These are AI practices that are incompatible with the rule of law and the fundamental democratic rights of the member states and are therefore prohibited. This includes, for example, evaluating people based on their social behavior using AI applications, known as social scoring.

2.2 Areas of application with a high risk

The part that takes up the most space in the AI Act concerns high-risk systems: Article 6 of the regulation, in conjunction with Annexes I and III, contains an extensive catalog of models associated with a high risk to humans.

These are primarily systems that could affect health (e.g., AI-based medical treatment or operations), fundamental rights (e.g., AI-based monitoring or recruitment of employees), or security. The requirements for specific safety components are decisive in the risk classification. Companies and institutions that offer or use such systems in critical infrastructure, transportation, justice, and law enforcement (e.g., use of biometric data) are just as affected as those working in education or healthcare.

2.3 Areas of application with a limited risk

For these applications, a transparency requirement applies: end users must be informed that artificial intelligence is involved, for example when interacting with chatbots, and AI-generated content such as deepfakes must be labeled as such.

2.4 Low-risk artificial intelligence applications

Finally, there are uncomplicated, low-risk artificial intelligence applications that require no further regulation, such as AI-based spam filters.
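To make the four tiers concrete, the classification above can be sketched as a simple first-pass inventory screen. The tier names follow the regulation, but the example use cases and the keyword mapping are hypothetical illustrations and no substitute for a legal assessment of Articles 5 and 6 and the annexes.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited practice (e.g., social scoring)"
    HIGH = "high risk (Annex I/III use cases)"
    LIMITED = "limited risk (transparency duties)"
    MINIMAL = "minimal risk (no further obligations)"

# Hypothetical first-pass mapping of internal use cases to tiers;
# real classification requires legal review of the AI Act itself.
USE_CASE_TIERS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "applicant screening": RiskTier.HIGH,
    "customer chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def screen(use_case: str) -> RiskTier:
    # Unknown systems default to HIGH so they are reviewed, not overlooked.
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)
```

Defaulting unknown systems to the strictest plausible tier reflects the cautious approach recommended below: classify first, then relax only after a reliable assessment.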

3. Obligations

The requirements for the respective users of AI models and their obligations are defined according to the classification into risk groups.

The most stringent obligations under the AI Act apply to providers and downstream users of high-risk AI applications. The EU regulation distinguishes between providers and users of AI systems as follows.

Providers are the developers and operators of artificial intelligence systems offered in the EU, particularly high-risk systems, regardless of whether they are based in the EU or in a third country.

Users are the institutions, companies, and professional groups that use the providers’ AI models for business purposes. Purely private use is not covered by the provisions of the AI Act.

Particularly in the case of high-risk artificial intelligence systems, the affected companies bear substantial responsibility for risk assessment and IT risk management. This concerns the quality of the data sets, the traceability and verifiability of the results, and informing and cooperating with the supervisory authorities. The companies involved must also ensure that qualified human intervention always remains possible.

4. Implementation in practice

Companies working with AI models should first carry out a risk assessment of their systems according to the categories above, if necessary with the help of qualified consultants.

After reliable classification, the measures required by the AI Act must be applied to the IT infrastructure, data records, and risk management. Finally, the awareness and training of those responsible for AI and of the employees working with the systems must be addressed.

The high requirements of the AI Act, which are still unspecific overall despite their scope, require regular checks and monitoring of the effectiveness of the measures implemented and, if necessary, adaptation to changing requirements. Using qualified AI consultants and IT security experts is also recommended here.
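The sequence described in this section, classify, implement measures, train, then monitor on an ongoing basis, can be tracked as a minimal checklist. The step names below paraphrase this article and are illustrative, not taken from the regulation.

```python
from dataclasses import dataclass

@dataclass
class Step:
    name: str
    done: bool = False

def ai_act_checklist() -> list[Step]:
    # Step names paraphrase the implementation sequence described above.
    return [
        Step("Inventory AI systems and classify them by risk category"),
        Step("Apply required measures to IT infrastructure, data sets, and risk management"),
        Step("Train AI owners and the employees working with the systems"),
        Step("Monitor effectiveness regularly and adapt to changing requirements"),
    ]

def open_items(steps: list[Step]) -> list[str]:
    # The last step never truly closes: monitoring is a recurring duty.
    return [s.name for s in steps if not s.done]
```

Because the AI Act's requirements remain unspecific in places, such a checklist is best treated as a living document that is revisited as guidance from the authorities evolves.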

5. Reporting obligations and sanctions

Compliance with the requirements and reporting obligations of the AI Act is monitored by the European AI Office together with the national supervisory authorities of the member states and, if necessary, enforced with appropriate measures. The use of systems prohibited under Article 5 of the AI Act can result in fines for companies of up to 35 million euros or up to 7% of annual global turnover, whichever is higher. Other violations are subject to staggered fines of up to 15 million euros or up to 3% of annual global turnover. Proportionately adjusted fines apply to small and medium-sized enterprises (SMEs). In some cases, these penalties exceed the already high fines that can be imposed under the EU General Data Protection Regulation (GDPR).
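As a rough illustration of the fine arithmetic for companies: the cap is the higher of the fixed amount and the turnover percentage. This minimal sketch deliberately omits the special adjustments for SMEs mentioned above.

```python
def max_fine_eur(annual_global_turnover_eur: float, prohibited_practice: bool) -> float:
    """Upper bound of the fine for companies (simplified; SME adjustments omitted).

    Prohibited practices (Art. 5): up to EUR 35 million or 7% of annual
    global turnover, whichever is higher. Other violations: up to
    EUR 15 million or 3%, whichever is higher.
    """
    if prohibited_practice:
        return max(35_000_000.0, 0.07 * annual_global_turnover_eur)
    return max(15_000_000.0, 0.03 * annual_global_turnover_eur)
```

For a group with one billion euros in annual turnover, the cap for a prohibited practice would thus be around 70 million euros, twice the fixed amount.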

6. Conclusion and outlook

Entrepreneurs in Germany should prepare their companies and employees for the requirements of the AI Act. Even those who are not providers may well qualify as users of AI models, including high-risk ones. In case of doubt, qualified legal and technical advice should be sought. Inaction or the unprepared, careless use of AI models can otherwise become unnecessarily expensive.

This article provides an overview of the new provisions of the AI Act. The basics have been carefully researched. Please keep in mind that legal and technical developments remain fluid, especially with regard to the AI Act. This article cannot constitute or replace legal advice in individual cases, and no liability is accepted for any incorrect decisions.
