How will the European Union AI Act impact you?

The EU Artificial Intelligence Act (AI Act) is the world’s first major AI regulation aimed at establishing a comprehensive legal framework for AI. It entered into force on 1st August 2024 and aims to ensure that AI systems used in the EU are safe, transparent, ethical, and respect fundamental rights. Developers, integrators, and users of AI systems need to understand the EU AI Act and the related AI standards, as the Act has a significant impact on companies developing and/or producing AI within the EU and the UK.

The AI Act is being implemented in phases to allow organisations time to comply with its provisions. On 2nd February 2025, provisions concerning prohibited AI practices and AI literacy requirements became applicable. This includes bans on certain AI applications deemed harmful, such as social scoring by governments and AI systems that exploit vulnerabilities of specific groups. Organisations are also required to implement AI literacy programmes to ensure appropriate training and awareness.

From 2nd August 2026, most of the AI Act’s provisions, including obligations for high-risk AI systems, will come into effect. This encompasses requirements for risk management, data governance, transparency, and human oversight for AI systems classified as high-risk.

The Act follows a risk-based approach, categorising AI systems into four levels of risk:

  1. Unacceptable risk – AI applications that pose a clear threat to safety or rights (e.g., social scoring by governments) are banned
  2. High risk – AI used in critical areas (e.g., healthcare, hiring, law enforcement) must comply with strict requirements, including risk assessments and human oversight
  3. Limited risk – AI systems like chatbots must meet transparency obligations, such as informing users they are interacting with AI
  4. Minimal risk – AI with little to no risk (e.g. spam filters, AI-powered video games) remains largely unregulated
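The four tiers above can be thought of as a simple classification with escalating obligations. The sketch below is purely illustrative (the use-case-to-tier mapping is a simplification drawn from the article's examples, not a legal determination):

```python
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # strict requirements apply
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # largely unregulated

# Illustrative mapping of the article's example use cases to the Act's
# four tiers. A real classification depends on the specific deployment
# context and the Act's detailed annexes.
EXAMPLE_USE_CASES = {
    "social scoring by governments": RiskLevel.UNACCEPTABLE,
    "ai-assisted hiring": RiskLevel.HIGH,
    "customer service chatbot": RiskLevel.LIMITED,
    "spam filter": RiskLevel.MINIMAL,
}

def obligations(level: RiskLevel) -> str:
    """One-line summary of the obligations attached to each tier."""
    return {
        RiskLevel.UNACCEPTABLE: "prohibited - must not be placed on the EU market",
        RiskLevel.HIGH: "risk management, data governance, transparency, human oversight",
        RiskLevel.LIMITED: "transparency - users must be told they are interacting with AI",
        RiskLevel.MINIMAL: "no specific obligations under the Act",
    }[level]

for use_case, level in EXAMPLE_USE_CASES.items():
    print(f"{use_case}: {level.value} -> {obligations(level)}")
```

The point of the structure is that obligations attach to the tier, not the technology: the same underlying model can fall into different tiers depending on how it is deployed.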

Integrating AI for safety

ISO/IEC TR 5469:2024, ‘Artificial intelligence – Functional safety and AI systems’, is a technical report that addresses the integration of AI into safety-related functions across various industries. The report emphasises the need for thorough risk assessments, validation, and verification techniques tailored to AI technologies.

The document outlines properties, risk factors, methods, and processes related to:

  • Incorporating AI within safety-related functions to achieve specific functionalities
  • Employing traditional (non-AI) safety functions to ensure the safety of equipment controlled by AI
  • Using AI systems in the design and development of safety-related functions

This guidance helps organisations integrate AI into products effectively while maintaining high safety standards, using approaches specialised for the safe operation of AI systems.

Maturity levels

When developing an AI model, it is vital to understand the risks you are facing and analyse your organisation’s maturity.

Because the Act has a significant impact on companies developing and/or producing products for placement on the EU market, an AI quality framework should be developed, based on international standards and regulations.

In December 2023, ISO/IEC 42001 was published, enabling organisations to implement a management system for artificial intelligence (AIMS). This is a key element of EU AI Act compliance.

ISO/IEC TR 5469:2024, ‘Artificial intelligence – Functional safety and AI systems’, followed in January 2024, describing the properties, related risk factors, and available methods and processes for combining AI and safety-related functions outlined above.

Are you prepared?

Companies already employing AI should introduce guidelines and processes to ensure awareness when using limited-risk or high-risk applications. This means emphasising risk mitigation throughout the entire project lifecycle, from defining requirements through deployment to decommissioning. Establishing appropriate safeguards is vital.

Organisations must also ensure that their quality management approaches are up to date, because AI risk is often addressed only narrowly rather than across the entire life cycle. Furthermore, to avoid pitfalls later, it is crucial to focus on data governance, ensuring it aligns with risk management requirements.

There is also the question of whether existing products already on the market will be required to comply with the AI Act. As the law is not retroactive, a system or product already on the market does not need to comply unless it undergoes a significant change after the Act enters into force. The exception is the prohibited category: such systems must be removed from the market.

Organisations involved in the development, deployment, or use of AI systems within the EU should familiarise themselves with the AI Act’s requirements and ensure compliance to avoid potential penalties. Acting now is vital: at the very least, start researching the requirements today, even if implementation is still some way off.

This article originally appeared in the October’25 magazine issue of Electronic Specifier Design – see ES’s Magazine Archives for more featured publications.

Author: Joe Lomako, Business Development Manager (IoT), TÜV SÜD
