AI regulation, currently a hot topic, is set for a shake-up. The European Union is leading the way with the EU AI Act (Regulation (EU) 2024/1689), a comprehensive framework for regulating artificial intelligence and its applications. Enforcement of the Act's prohibitions on certain use cases may begin, possibly through private action, in February 2025, with broader oversight extending to General-Purpose AI (GPAI) models, and hence the Large Language Models (LLMs) behind generative AI, from August 2025.
“The complexity and opacity of these systems pose risks to fundamental rights, health and safety”
The newly established EU AI Office, in collaboration with data protection authorities, is expected to take an active role in monitoring compliance, and enforcement will likely be swift and decisive. Make no mistake: by the end of 2025, we could see the first penalties imposed on GPAI providers for non-compliance (fines of up to 3% or 7% of worldwide annual turnover, or fixed caps of €15 million and €35 million respectively, whichever is higher, depending on which provisions are breached). For businesses across sectors, the implications are profound, especially given the prolific use of general-purpose AI systems.
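To put those percentages in perspective, here is a minimal sketch of how the Act's "whichever is higher" fine structure plays out for a hypothetical company. The percentage and fixed-cap figures reflect the Act's published maximums; the turnover figure is purely illustrative:

```python
# Illustrative only: fine exposure under the EU AI Act's "whichever is higher" rule.
# Percentages and fixed caps reflect the Act's published maximums; turnover is hypothetical.

def max_fine(turnover_eur: float, pct: float, fixed_cap_eur: float) -> float:
    """Return the maximum possible fine: the higher of pct * turnover or the fixed cap."""
    return max(turnover_eur * pct, fixed_cap_eur)

turnover = 2_000_000_000  # hypothetical company with €2B worldwide annual turnover

# Prohibited practices: up to 7% of turnover or €35M, whichever is higher.
print(f"Prohibited practices: up to €{max_fine(turnover, 0.07, 35_000_000):,.0f}")

# Most other breaches, including GPAI obligations: up to 3% of turnover or €15M.
print(f"Other breaches:       up to €{max_fine(turnover, 0.03, 15_000_000):,.0f}")
```

At a €2 billion turnover, the percentage caps dominate the fixed floors, which is why larger providers face nine-figure exposure.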
The Core of the EU AI Act: What It Means for GPAI Providers
The EU AI Act establishes rigorous requirements for GPAI providers, focusing on transparency and accountability. Key mandates include:
- Enhanced accountability: Providers face heightened accountability measures, such as mandatory registration in the EU AI database.
- Disclosure of training sources: Providers must disclose the data sources used to train their models, document them fully, and ensure they meet ethical and legal standards.
- Model evaluation results: Organizations must share the outcomes of comprehensive evaluations, including adversarial testing, to demonstrate model safety and fairness.
- Shared responsibility across AI actors: Compliance extends beyond providers, creating interlinked obligations for users, deployers, developers, and integrators of AI systems.
While the Act’s primary aim is to regulate GPAI providers, its cascading compliance obligations pose significant risks for businesses utilizing these technologies.
The Compliance Challenge: Third-Party Risks and Accountability
As generative AI becomes increasingly integral to business operations, companies are diversifying the models they employ. The cascading nature of compliance obligations places businesses at risk even if non-compliance originates with an AI provider. This creates a third-party risk management challenge that cannot be ignored. Organizations must:
- Vet providers thoroughly: Assess the compliance posture of AI providers to ensure adherence to the EU AI Act, focusing on data quality, fairness, and risk management.
- Document evidence of compliance: Maintain comprehensive records, including agreements, certifications, and audit results, to demonstrate diligence (a structured record like the sketch after this list can help).
- Monitor AI use cases: Ensure AI applications align with the Act's requirements, particularly regarding prohibited uses.
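One way to operationalize these duties is to keep a structured vetting record per provider and model. The sketch below is a hypothetical schema; none of the field names or checks are prescribed by the Act, and a real programme would track considerably more:

```python
# Hypothetical schema for tracking third-party AI provider compliance evidence.
# Field names are illustrative; the EU AI Act does not prescribe this structure.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ProviderVettingRecord:
    provider_name: str
    model_name: str
    eu_db_registered: bool               # provider's registration status, if applicable
    training_data_summary_on_file: bool  # disclosure of training sources received?
    evaluation_report_on_file: bool      # model evaluation / adversarial test results
    contract_clauses: list[str] = field(default_factory=list)  # compliance warranties
    prohibited_use_review: date | None = None  # last check against prohibited uses
    next_review_due: date | None = None

    def gaps(self) -> list[str]:
        """List missing evidence that needs follow-up with the provider."""
        missing = []
        if not self.eu_db_registered:
            missing.append("EU database registration unconfirmed")
        if not self.training_data_summary_on_file:
            missing.append("no training data summary on file")
        if not self.evaluation_report_on_file:
            missing.append("no evaluation report on file")
        return missing
```

Reviewing each record's `gaps()` output on a fixed cadence gives the audit trail of diligence that a regulator would expect to see.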
Failure to address these areas could expose companies to investigations, reputational damage, and financial penalties, even when the original breach lies with their AI provider.
What Businesses Should Do Now
Preparation is critical to avoiding the pitfalls of non-compliance. Here’s how companies can get ahead:
- Conduct a compliance audit: Assess your current use of GPAI models and identify gaps in compliance with the EU AI Act.
- Identify high-risk AI systems: These are systems that pose significant risks to health, safety, or fundamental rights, including applications in areas like law enforcement, migration, healthcare, education, and critical infrastructure management. They are subject to rigorous requirements, including risk assessments, conformity assessments, and robust documentation.
- Engage legal and regulatory experts: Consult legal, technical, and ethics specialists to understand the nuances of the Act and ensure compliance.
- Establish internal protocols: Build a governance framework with clear policies and procedures for vetting AI providers, monitoring AI usage, assigning roles and responsibilities, and responding to regulatory inquiries.
- Invest in testing: Regularly test AI models for vulnerabilities and biases to ensure they meet ethical and regulatory standards (a minimal example of one such check follows this list).
- Educate staff: Develop AI literacy across teams and stakeholders. Responsibility for AI models, their usage, and their outcomes rests with developers, data scientists, architects, operations teams, and business users alike. Ultimately the business pays the fine, so start embedding the principles of the Act throughout your workforce now.
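As promised above, here is a minimal sketch of one bias check a team might run: a demographic parity gap over model decisions. The data, groups, and threshold are all hypothetical, and genuine pre-deployment testing would span many more metrics, data slices, and adversarial scenarios:

```python
# Minimal, hypothetical bias check: demographic parity gap across groups.
# Real pre-deployment testing would cover many more metrics and adversarial cases.
from collections import defaultdict

def positive_rates(outcomes: list[tuple[str, int]]) -> dict[str, float]:
    """Outcomes are (group, decision) pairs; decision 1 = favourable outcome."""
    totals: dict[str, int] = defaultdict(int)
    positives: dict[str, int] = defaultdict(int)
    for group, decision in outcomes:
        totals[group] += 1
        positives[group] += decision
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical model decisions tagged with a protected attribute.
sample = [("group_a", 1), ("group_a", 1), ("group_a", 0),
          ("group_b", 1), ("group_b", 0), ("group_b", 0)]

rates = positive_rates(sample)
gap = max(rates.values()) - min(rates.values())
print(f"Positive rates: {rates}, parity gap: {gap:.2f}")

THRESHOLD = 0.2  # hypothetical internal tolerance, set by your governance policy
if gap > THRESHOLD:
    print("Flag for review: disparity exceeds internal threshold.")
```

The point is less the specific metric than the habit: automated checks like this, run on every model release, turn "invest in testing" from a slogan into evidence.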
The Path Forward: Building Trust and Resilience
The EU AI Act represents a landmark in artificial intelligence regulation, establishing a global benchmark for transparency and accountability. While compliance may appear challenging, the Act also offers businesses a strategic opportunity: demonstrating a commitment to ethical AI practices builds stakeholder trust and strengthens relationships with customers, partners, and regulators.
By taking proactive measures to align with the Act, organizations can not only mitigate the risks of fines and regulatory scrutiny but also distinguish themselves as pioneers in responsible AI adoption, enhancing their reputation. Acting now, well before regulatory focus intensifies, is critical.
With enforcement set to begin in 2025, companies that prepare in advance will gain a competitive edge in navigating this transformative regulatory landscape. Don’t wait for penalties to set the tone—initiate your compliance efforts today.