Services

Implement the AI Act

What you should know about the AI Act - FAQ

  • Which AI systems are affected?

    Which AI systems fall under the AI Act

    The AI Act applies to systems that are based on AI - especially if they support or automate decision-making. The decisive factor is not only the technology, but also its planned and foreseeable intended use.

    Affected systems include, for example:

    • AI-based control and assistance systems
    • Image processing and object recognition
    • Prediction and optimization algorithms
    • Autonomous or semi-autonomous functions

    Not affected by the AI Act

    AI systems used exclusively for national security, defense and military purposes are excluded from the AI Act. In addition, the Act does not apply to pure research, testing or development activities as long as the systems or models have not yet been placed on the market or put into service.

  • Roles and responsibilities

    Similar to the CRA, the AI Act distinguishes between different actors along the value chain (it uses the terms "provider" and "deployer" rather than "manufacturer" and "operator"):

    • Providers (development and placing on the market of AI systems)
    • Importers (introduction to the EU market)
    • Deployers (use of the AI system)
    • Authorized representatives (representation of the provider within the EU)

    Depending on the role, different obligations arise - especially between providers and deployers.

  • Risk classification as a key difference

    Requirements depend on the risk of the AI system

    The AI Act distinguishes between risk categories ranging from prohibited practices through high-risk AI to limited and minimal risk:

    • Prohibited practices: banned outright
    • High-risk AI: comprehensive regulatory obligations
    • Limited and minimal risk: transparency obligations or no additional requirements

    High-risk systems are particularly relevant in practice, for example in safety-critical applications or as safety components of machinery.

    General-purpose AI (GPAI) - AI models that can be used for a wide range of applications - is also covered. Depending on the application and capabilities of the model, extensive requirements may apply here as well.

  • What does the AI Act specifically require in relation to high-risk AI?

    Requirements along the AI life cycle

    The requirements of the AI Act relate to both the development and operation of AI systems.

    Development and design

    Key requirements must already be met during development:

    • Risk management system
    • Data quality and data management
    • Technical documentation
    • Ensuring accuracy, robustness and cyber security

    AI systems must be designed to be comprehensible and controllable.

    Transparency and control

    A key difference to other sets of rules is the requirements for how decisions are handled:

    • Transparency about functionality and use
    • Traceability of results
    • Possibilities for human supervision

    This ensures that AI systems do not remain "black boxes".

    Conformity and placing on the market

    The following evidence is required for regulated AI systems:

    • Conformity assessment
    • CE marking
    • Declaration of conformity
    • Registration in relevant databases

    Operation and monitoring

    There are also extensive obligations during operation:

    • Monitoring of system behavior
    • Documentation of events
    • Reporting of serious incidents
    • Ongoing evaluation of system performance
  • Why implementation is complex

    The challenge lies not only in the technology, but also in the combination of technology, regulation and application context.

    Practice shows that:

    • requirements depend heavily on the specific application
    • AI systems change through data and updates
    • there are interfaces with the Machinery Regulation and the Cyber Resilience Act
    • documentation and traceability requirements are high

    The combination of AI development, system integration and regulatory assessment is particularly challenging.

  • When does the AI Act come into force?

    The AI Act entered into force on August 1, 2024, and applies in stages: the bans on prohibited practices have applied since February 2, 2025, the obligations for GPAI models since August 2, 2025, and most other provisions apply from August 2, 2026. The final transition periods - for example for high-risk AI systems in regulated products - end on August 2, 2027.

  • What happens in the event of violations?

    Infringement and maximum penalty (whichever amount is higher):

    • Use of prohibited AI systems: up to €35 million or 7% of the total worldwide annual turnover of the previous financial year
    • Non-compliance with obligations (other than the use of prohibited AI systems): up to €15 million or 3% of the total worldwide annual turnover of the previous financial year
    • False, misleading or incomplete information provided to notified bodies or market surveillance authorities: up to €7.5 million or 1% of the total worldwide annual turnover of the previous financial year
    • Non-compliance with obligations by providers of general-purpose AI models: up to €15 million or 3% of the total worldwide annual turnover of the previous financial year

    Note: For newly established companies and SMEs, the lower amount applies instead of the higher amount.
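    The penalty logic above - the higher of a fixed amount and a turnover share, but the lower of the two for SMEs and start-ups - can be sketched as follows. This is a minimal illustration; the function name and the example figures are ours, not taken from the Act's text.

```python
def fine_cap(fixed_cap_eur: int, turnover_pct: int, turnover_eur: int,
             is_sme: bool = False) -> int:
    """Illustrative ceiling of an administrative fine under the AI Act.

    The ceiling is the higher of a fixed amount and a percentage of the
    total worldwide annual turnover of the previous financial year; for
    SMEs and newly established companies the lower amount applies instead.
    """
    turnover_share = turnover_eur * turnover_pct // 100
    if is_sme:
        return min(fixed_cap_eur, turnover_share)
    return max(fixed_cap_eur, turnover_share)

# Prohibited-practice tier (€35 million or 7%) for a hypothetical company
# with €1 billion turnover: the 7% share (€70 million) is the higher amount.
print(fine_cap(35_000_000, 7, 1_000_000_000))               # 70000000
print(fine_cap(35_000_000, 7, 1_000_000_000, is_sme=True))  # 35000000
```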

    If non-compliance with the requirements of the EU AI Act is identified — for example in the case of high-risk AI systems — and is not remedied within the deadline set by the competent market surveillance authority, this has immediate consequences for placing the system on the EU market: authorities may restrict or prohibit the further provision of the AI system.

    If the system is found to pose a significant risk to health, safety or fundamental rights, authorities may also order the withdrawal or recall of systems that have already been put into operation.

Do not hesitate to contact me!
Michael Weidinger
Innovation & Engineering