COMMUNICATIONS AND SECURITY  June 2025  www.drivesncontrols.com

The EU AI Act: how should you prepare?

The EU's Artificial Intelligence Act (AI Act) came into force on 1 August 2024. It is the world's first major law establishing a comprehensive legal framework for AI, and it aims to ensure that AI systems used in the EU are safe, transparent, ethical and respect fundamental rights. Developers, integrators and users of AI systems need to understand the EU AI Act and the related AI standards, because they have a significant impact on companies developing and/or producing within the EU and UK.

The Act's key provisions include:
■ strict rules for high-risk AI, including risk management, data governance and human oversight;
■ prohibition of harmful AI – for example, biometric surveillance in public spaces;
■ transparency obligations for AI-generated content; and
■ fines for non-compliance of up to €35m, or 7% of global revenues.

The Act is being implemented in phases to give organisations time to comply. Provisions concerning prohibited AI practices and AI literacy requirements became applicable on 2 February 2025. This includes bans on certain AI applications deemed to be harmful – such as AI systems that exploit the vulnerabilities of specific groups. Organisations are also required to implement AI literacy programmes to ensure appropriate training and awareness.

Most of the AI Act's provisions, including obligations for high-risk AI systems, come into effect from 2 August 2026. This encompasses requirements for risk management, data governance, transparency and human oversight for AI systems classified as high-risk.

The Act categorises AI systems into four risk levels:
1. Unacceptable risk: AI applications that pose a clear threat to safety or rights are banned.
2. High risk: AI used in critical areas – for example, healthcare, hiring and law enforcement – must comply with strict requirements, including risk assessments and human oversight.
3. Limited risk: AI systems such as chatbots must meet transparency obligations – for example, informing users that they are interacting with AI.
4. Minimal risk: AI with little to no risk remains largely unregulated.

Integrating AI into machinery

ISO/IEC TR 5469:2024, Artificial intelligence – Functional safety and AI systems, is a technical report that addresses the integration of AI into safety-related functions across various industries. It emphasises the need for thorough risk assessments, and for validation and verification techniques tailored to AI technologies. Its importance to machinery safety stems from its comprehensive guidance on managing the complexities and risks associated with AI in safety-critical applications.

The document outlines properties, risk factors, methods and processes related to:
■ incorporating AI in safety-related functions to achieve specific functions;
■ employing traditional (non-AI) safety functions to ensure the safety of equipment controlled by AI; and
■ using AI systems in the design and development of safety-related functions.

ISO/IEC TR 5469:2024 helps organisations to integrate AI effectively into machinery while maintaining high safety standards. This is crucial because AI systems can exhibit unpredictable behaviours, and ensuring their safe operation requires specialised approaches. The report also enables organisations to implement Quality Management Systems (QMS) for AI – a key element of EU AI Act compliance.

Ensuring preparedness

Companies already using AI should introduce guidelines and processes to ensure awareness when using limited-risk or high-risk applications. This could occur in almost any organisational function – including procurement, product development and HR. There is a parallel need to raise awareness among those procuring or developing AI, emphasising risk mitigation throughout the project lifecycle, from defining requirements to deployment and decommissioning.
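The report's second approach – employing a traditional, deterministic safety function to supervise equipment controlled by AI – can be illustrated with a minimal sketch. This is not taken from ISO/IEC TR 5469:2024 itself; the function names, limits and units below are hypothetical, and a real implementation would run on certified safety hardware. The idea is simply that the AI may propose any command, while an independently validated, non-AI layer clamps it to a safe envelope and forces a safe state when an interlock trips.

```python
from dataclasses import dataclass

@dataclass
class SafetyLimits:
    """Deterministic limits validated independently of the AI controller (illustrative units)."""
    min_speed: float
    max_speed: float

def safety_envelope(ai_command: float, limits: SafetyLimits, guard_open: bool) -> float:
    """Traditional (non-AI) safety function supervising an AI controller's output.

    The AI may propose any speed; this deterministic layer clamps the proposal
    to the validated envelope, and commands a safe stop if a guard is open.
    """
    if guard_open:
        return 0.0  # interlock: the safe state overrides the AI entirely
    # Clamp the AI's proposal to the independently validated range
    return max(limits.min_speed, min(limits.max_speed, ai_command))

limits = SafetyLimits(min_speed=0.0, max_speed=100.0)
print(safety_envelope(250.0, limits, guard_open=False))  # clamped to 100.0
print(safety_envelope(50.0, limits, guard_open=True))    # forced to 0.0
```

The key design point, in the spirit of the report, is that the safety argument rests entirely on the simple deterministic layer, so the AI controller's unpredictable behaviour never has to be verified exhaustively.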
Organisations must also ensure that their quality management approaches are up to date, because AI risk is usually addressed only narrowly, rather than across the entire lifecycle. Furthermore, to avoid pitfalls later, it is crucial to focus on data governance, ensuring that it aligns with risk management requirements.

Developing an AI model, demonstrating an ethical approach and remaining compliant is a challenge. It is vital to understand the risks you are facing and to analyse your organisation's maturity. An AI quality framework should be developed, based on international standards and regulations.

Organisations involved in the development, deployment or use of AI systems in the EU should familiarise themselves with the AI Act's requirements and ensure compliance to avoid potential penalties. It is vital to act now: at the very least, you should start researching today, even though you might not be implementing for quite some time.

With many provisions of the EU's AI Act coming into force next year, Joe Lomako, business development manager for IoT at the product testing and certification organisation TÜV SÜD, looks at the implications of integrating AI into machinery while maintaining safety.