As Brussels considers scaling back key rules, leaders from Airbus, BNP Paribas, and others warn that the AI law could stifle innovation and global competitiveness.

A coalition of 44 of Europe’s most powerful business leaders has issued a stark warning to Brussels: pause the European Union’s landmark Artificial Intelligence Act or risk throttling the region’s technological and economic competitiveness.
The executives — representing major players including Airbus, Siemens, BMW, and BNP Paribas — sent an open letter this week to top EU lawmakers urging them to halt or significantly revise the legislation, which is set to come into force in August. They argue that the sweeping regulations, which would impose strict controls on “high-risk” AI systems, threaten to undermine the very innovation the bloc hopes to foster.
“The current draft of the AI Act places a disproportionate burden on European companies, while global competitors, particularly from the U.S. and China, remain relatively unregulated,” the letter states. “This imbalance could hinder European firms’ ability to scale and compete globally.”
The AI Act is the world’s first comprehensive legislative effort to regulate artificial intelligence. It includes a classification system that restricts or bans AI systems deemed risky — including those used in facial recognition, predictive policing, or employment screening — and mandates transparency and accountability requirements for AI developers and users.
While the law has been lauded by digital rights advocates as a milestone in ethical technology governance, Europe’s business elite is sounding the alarm over what it sees as the legislation’s unintended consequences.
Industry leaders argue that compliance costs, bureaucratic red tape, and legal uncertainty could discourage investment and push AI talent and startups out of the EU. Some warn of a “brain drain” similar to what followed the implementation of the GDPR privacy rules in 2018.
“Europe has the talent and infrastructure to lead in AI,” said Guillaume Faury, CEO of Airbus. “But without a regulatory environment that supports growth and innovation, we risk becoming dependent on foreign technologies.”
In response to mounting pressure, EU officials are reportedly considering softening several provisions, particularly those related to foundation models — the large, general-purpose AI systems that power tools like ChatGPT. The bloc’s Internal Market Commissioner, Thierry Breton, acknowledged the debate, stating, “We must find a balance between innovation and safeguards.”
The tension underscores a broader dilemma facing Europe: how to uphold democratic values and consumer protections while ensuring the continent remains competitive in one of the most transformative technologies of the century.
Digital ministers from France and Germany have also expressed concern, advocating for a more flexible regulatory framework. Meanwhile, civil society groups caution against weakening the rules, stressing the importance of trust and accountability in emerging technologies.
“The EU has a chance to set the global standard,” said Eva König, a policy fellow at the Centre for Digital Democracy. “But doing so means leading by example — not compromising under corporate pressure.”
The next few weeks will be critical. With the AI Act slated for final approval before the end of the summer, lawmakers must weigh the economic stakes against their ambitions for ethical leadership in AI.
As Europe grapples with how to regulate the future, the voices of its largest employers — and fiercest critics — are growing louder.