On March 30, 2026, the most consequential provisions of the European Union's AI Act — the rules governing general-purpose AI systems and high-risk AI applications — became applicable. The moment marks the culmination of a five-year legislative process that began with a proposal from the European Commission in April 2021 and survived intense lobbying, multiple revisions, and a last-minute crisis over how to handle foundation models.
The AI Act is the world's first comprehensive legal framework for artificial intelligence, and its implications extend far beyond Europe's borders. Any company that offers AI products or services to EU customers — regardless of where the company is headquartered — must comply with its requirements. This means that OpenAI, Anthropic, Google, Microsoft, and virtually every other major AI company in the world is now subject to EU AI regulation, making the Act effectively a global standard for the industry.
What the AI Act Actually Requires
The AI Act takes a risk-based approach to regulation, classifying AI systems into four categories: unacceptable risk (prohibited), high risk (subject to strict requirements), limited risk (subject to transparency obligations), and minimal risk (largely unregulated). The classification of a particular AI system depends on its intended use and the potential harm it could cause.
Prohibited AI systems include social scoring systems used by governments, real-time biometric surveillance in public spaces (with narrow exceptions for law enforcement), and AI systems that exploit psychological vulnerabilities to manipulate behavior. These prohibitions took effect in February 2025 and have already resulted in several enforcement actions against companies operating in the EU.

High-Risk AI: The Compliance Challenge
The high-risk category is where the AI Act's requirements are most demanding and where compliance costs are highest. High-risk AI systems include those used in critical infrastructure, education, employment, essential services, law enforcement, migration, and the administration of justice. For these systems, the AI Act requires conformity assessments, technical documentation, data governance measures, human oversight mechanisms, and registration in an EU database.
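The tiering described above amounts to a decision rule keyed on intended use. The sketch below encodes it in Python for illustration only: the domain and practice lists are just the examples named in this article, the function name is hypothetical, and real classification turns on the Act's detailed legal definitions rather than string matching.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "strict requirements"
    LIMITED = "transparency obligations"
    MINIMAL = "largely unregulated"

# High-risk intended-use domains, as summarized in this article.
HIGH_RISK_DOMAINS = {
    "critical infrastructure", "education", "employment",
    "essential services", "law enforcement", "migration",
    "administration of justice",
}

# Prohibited practices named in this article (illustrative labels).
PROHIBITED_PRACTICES = {
    "government social scoring",
    "real-time public biometric surveillance",
    "exploiting psychological vulnerabilities",
}

def classify(intended_use: str) -> RiskTier:
    """Rough triage by intended use (illustrative, not legal advice).

    The limited-risk tier (transparency obligations, e.g. for chatbots)
    needs its own rule set and is omitted from this sketch.
    """
    if intended_use in PROHIBITED_PRACTICES:
        return RiskTier.UNACCEPTABLE
    if intended_use in HIGH_RISK_DOMAINS:
        return RiskTier.HIGH
    return RiskTier.MINIMAL
```

The point of the sketch is structural: prohibition checks come first, then the high-risk list, and everything that matches neither falls through to lighter regimes.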
[Chart: EU AI Act Compliance Costs by Company Size (Estimated Annual, 2026) — series: High-Risk AI ($M), GPAI Systems ($M)]
"The AI Act is not a barrier to innovation — it is a framework for trustworthy innovation. Companies that invest in compliance now will have a competitive advantage as the world moves toward regulated AI markets."
— Margrethe Vestager, Executive VP, European Commission
General-Purpose AI: The Foundation Model Rules
The most contentious provisions of the AI Act concern general-purpose AI systems — foundation models like GPT-5, Claude Mythos, and Gemini Ultra that can be used for a wide range of tasks. These provisions were added late in the legislative process, following intense debate about whether and how to regulate systems whose capabilities and risks are difficult to assess in advance.
Under the final rules, providers of general-purpose AI systems must maintain technical documentation, comply with EU copyright law in their training data, and publish summaries of the content used for training. Systems that are deemed to pose 'systemic risk' — defined as those trained with more than 10^25 FLOPs of compute — face additional requirements including adversarial testing, incident reporting, and cybersecurity measures.
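The two-level GPAI regime described above is effectively a threshold check on training compute. A minimal sketch, assuming a hypothetical `gpai_obligations` helper; the obligation strings are the ones listed in this article and the 10^25 FLOP cutoff is the figure cited above:

```python
# Systemic-risk threshold cited above: training compute above 1e25 FLOPs.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def gpai_obligations(training_flops: float) -> list[str]:
    """Obligations for a general-purpose AI provider (illustrative)."""
    # Baseline duties apply to all GPAI providers.
    obligations = [
        "technical documentation",
        "EU copyright compliance in training data",
        "published training-content summary",
    ]
    # Models over the compute threshold pick up the systemic-risk duties.
    if training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD:
        obligations += [
            "adversarial testing",
            "incident reporting",
            "cybersecurity measures",
        ]
    return obligations
```

Note the design of the threshold: it is a proxy based on training compute, chosen precisely because capability and risk are hard to assess directly in advance.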
[Chart: AI Act Compliance Timeline: Key Milestones — series: Provisions in Force]
Industry Response and Compliance Strategies
The AI industry's response to the AI Act has been a mixture of adaptation and resistance. Several major companies have established dedicated EU compliance teams and are investing heavily in the documentation, testing, and governance infrastructure required by the Act. Others have indicated that they may limit the availability of certain AI features in the EU rather than bear the compliance costs.
Apple has announced that several AI features introduced in iOS 20 will not be available in the EU at launch, citing the complexity of complying with the AI Act's requirements for on-device AI systems. Meta has delayed the European rollout of its AI assistant features. These decisions have drawn criticism from EU officials, who argue that companies are using compliance complexity as a pretext for market withdrawal.
The Brussels Effect: Global Regulatory Convergence
The most significant long-term impact of the EU AI Act may not be on AI companies operating in Europe, but on the global regulatory landscape. The 'Brussels Effect' — the tendency for EU regulations to become de facto global standards because companies find it more efficient to comply globally than to maintain separate compliance regimes — is already visible in AI.
Several major AI companies have indicated that they will apply EU AI Act requirements globally rather than only in the EU, citing the operational complexity of maintaining different compliance standards in different markets. The UK, Canada, and Australia are all developing AI regulations that draw heavily on the EU framework. Even the US, which has historically resisted prescriptive technology regulation, is seeing increased legislative activity at the state level that mirrors EU approaches.
The AI Act represents a bet by the EU that trustworthy AI — AI that is transparent, accountable, and subject to human oversight — is not just ethically preferable but commercially advantageous in the long run. Whether that bet pays off will depend on whether the compliance costs imposed by the Act are offset by the trust premium that regulated AI commands in the market. The next two years will provide the first real evidence on that question.