The European Commission has reiterated its commitment to the scheduled implementation of the AI Act, rejecting renewed appeals from major technology firms and some EU member states for a delay.
Despite lobbying efforts by companies including Alphabet, Meta, ASML and Mistral, the Commission confirmed that the legal deadlines set out in the legislation will remain in force.
Speaking at a press briefing on 4 July, Commission spokesperson Thomas Regnier stated unequivocally: “There is no stop the clock. There is no grace period. There is no pause.” His comments follow a series of industry letters and public statements calling for a multi-year postponement of the Act’s provisions, citing high compliance costs, a lack of technical guidance, and fears of reduced competitiveness.
The AI Act, passed by the European Parliament in March 2024 and in force since 1 August 2024, is the first comprehensive legal framework of its kind. It adopts a risk-based classification of AI systems, with escalating obligations for general-purpose and high-risk applications. The first obligations took effect in February 2025, covering prohibited practices and AI literacy requirements. Obligations for general-purpose AI (GPAI) models, such as those developed by OpenAI, Google and Mistral, will become binding on 2 August 2025. Requirements for high-risk systems will follow from 2 August 2026.
Some firms argue that, in the absence of the Code of Practice originally scheduled for publication on 2 May, the legislation leaves them facing legal uncertainty. The Commission has acknowledged the delay in releasing the Code, which is intended to guide the development and deployment of GPAI systems. However, officials have confirmed that it will be presented in the coming days, with voluntary sign-up expected by August and application likely by the end of 2025.
While signing the Code will remain optional, non-signatory companies will not benefit from the legal assurances it provides. According to the Commission, the Code will help clarify expectations around transparency, robustness, and accountability — key concerns for downstream users of GPAI systems. The Future Society, an AI advocacy group, described the Code as a “central instrument” in ensuring quality standards across the AI value chain.
The Commission has also indicated that it will propose limited simplifications to digital regulation later this year. These are expected to include reduced reporting requirements for small and medium-sized enterprises. However, these adjustments will not affect the core enforcement deadlines of the AI Act.
Industry figures have nonetheless continued to voice concern. A coalition of European and American companies warned that the compliance burden risks stifling innovation and shifting development outside the EU. Firms such as ASML and Mistral, alongside U.S. giants Alphabet and Meta, have requested a delay of up to two years.
The Commission’s firm stance has been welcomed by civil society groups. Bram Vranken of Corporate Europe Observatory said that industry pressure to delay implementation is part of a broader attempt to weaken the regulation. “Delay. Pause. Deregulate. That is Big Tech’s lobby playbook to fatally weaken rules that should protect us from biased and unfair AI systems,” he said.
Commission officials argue that the regulation is necessary to safeguard fundamental rights, ensure legal certainty, and build public trust in AI technologies. The legislation includes outright bans on certain AI practices, such as real-time remote biometric identification in public spaces and systems that exploit vulnerable individuals, and sets strict conditions for deployment in sensitive areas like health, law enforcement, and employment.
While the United States and China continue to lead the development of large-scale AI systems, the EU has taken a regulatory-first approach, positioning itself as a global standard-setter. With the Commission now preparing final guidance and technical documentation, the countdown to full enforcement is proceeding as planned.