A year after its adoption, the EU AI Act has shifted from policy to practice, compelling businesses, regulators, and innovators to define what “responsible AI” means in reality.
Positioned as the world’s first comprehensive AI regulation, the Act has set a global benchmark, creating both compliance challenges and fresh opportunities for competitive advantage.
A major milestone came with the August rollout of the General-Purpose AI (GPAI) provisions. Providers of GPAI models – such as foundation models – are now required to meet strict obligations, including transparency reporting, technical documentation, copyright safeguards, and risk management processes.
This phase also triggers enforcement. National regulators across the EU are empowered to apply penalties of up to €35 million or 7% of global annual revenue for non-compliance. Member states must also strengthen governance: designating oversight authorities, notifying conformity assessment bodies, and beginning formal reporting on AI oversight.
What the EU AI Act means for companies
For businesses, the Act is now an operational reality. With a phased rollout, the European Commission has signalled that adherence is non-negotiable, and the cost of non-compliance is steep. Organisations are moving quickly to build the infrastructure needed for rigorous oversight, documentation, and accountability in AI use.
Agur Jõgi, CTO of Pipedrive, said: “As AI becomes integral to software used by small, medium, and large firms, organisations face a dilemma: how to adopt new features before competitors without compromising safety or trust. Recent Pipedrive research found that 48% of businesses cite lack of knowledge as the primary barrier to AI adoption.
“The EU AI Act puts a premium on governance and transparency. That requires all tech talent to commit to continuous education and to gold-plated standards to deploy AI responsibly. Teams, and the technology industry, must share strong and easily implemented best practices, because SaaS suppliers are not simply providing tools – they empower people with the solutions to real problems. That requires effectiveness, transparency, and trust in AI features.
“Coming soon after the GPAI Code of Practice, this is the EU sending a strong signal to industry that Europe is serious about AI safety and ensuring AI solutions perform as promised.”
How best to tackle compliance with the EU AI Act
When it comes to compliance, organisations need the right expertise in place to manage data, ensure transparency, and maintain proper documentation – all essential for standing up to regulatory scrutiny.
Exploring this as a skills issue, Nikolaz Foucaud, Managing Director EMEA at Coursera, said: “The next phase of the EU AI Act marks a turning point for AI oversight, obliging organisations to meet rigorous compliance standards. General-Purpose AI obligations require transparency, detailed technical documentation, data mapping, and risk management. With 78% of companies now deploying AI in at least one business function, according to McKinsey, it’s essential that enterprises cultivate AI literacy across their organisations to manage it effectively and legally.
“This compliance challenge is also a skills challenge. Coursera research shows that UK tech leaders are already concerned about cross-functional GenAI literacy, with 63% of technology leaders observing that non-technical teams consistently underestimate the resources and training needed to achieve GenAI objectives. For the cross-functional teams that will be involved in ensuring compliance with this legislation, this represents an urgent imperative, particularly as British tech leaders are already seeing skills gaps jeopardising their strategic priorities: more than half (52%) do not think their current team has the skills to meet business transformation goals in the next 12-18 months.
“With AI transformation high on the agenda and organisations now open to significant penalties for non-compliance, attracting and developing AI-literate talent needs to be top of the agenda as British businesses seek to thrive in an age of accelerating AI regulation.”
Why keeping humans in the loop is critical
As the Act calls for transparency and accountability, organisations need to scrutinise the way they maintain ‘human-in-the-loop’ protection measures. Those checks and balances still need to be in place for emerging technologies. With a propensity for hallucination – producing incorrect responses without self-correcting – GenAI can miss the mark, even as its operation is fine-tuned and improved. This is why keeping humans central to AI’s deployment and oversight is critical.
Focusing on this as an additional key compliance strategy for the EU AI Act, Eduardo Crespo, VP EMEA at PagerDuty, said: “As the EU AI Act begins implementation, enterprise leaders must focus on operationalising AI responsibly, not just compliantly. That means understanding how AI systems behave in real time and having the ability to intervene immediately when something goes wrong. For organisations applying AI in critical operations, such as incident management or infrastructure monitoring, this level of oversight is essential to ensure good customer service and to prevent problems from spreading and actively harming that service.
“These growing regulations enforce the need for transparency, traceability, and control in AI systems. Enterprises should treat this as an opportunity to embed more robust AI operations practices, ensuring that models perform reliably and align with business and ethical standards. Operational resilience and human-in-the-loop safeguards are best practices, ensuring users are supported by growing AI capabilities rather than alienated by poor experiences that damage trust in technology providers.”
Year 2 for the EU AI Act: what’s in store?
Although the EU AI Act is only a year old, it has already transformed the conversation, shifting from abstract debates on ethics to binding, enforceable standards. While industries are adapting at different paces, one point is undeniable: the Act is a catalyst redefining how AI is developed, governed, and trusted worldwide.
Described as “the first comprehensive regulation on AI by a major regulator anywhere”, its long-term influence will extend far beyond Europe, shaping global approaches to emerging technology governance. The real question now is how other countries and regions will respond with their own regulatory frameworks.
History offers a clear warning. With GDPR generating more than €6 billion in fines as of August 2025, enforcement tends to follow legislation. The EU AI Act is likely to follow a similar trajectory – making it essential for businesses to establish compliance strategies early to avoid costly missteps later on.