Tech firms argue that the E.U.'s regulations on artificial intelligence prevent its citizens from accessing the latest and greatest products. A number of civil society organizations disagree, arguing that AI developers must build products that protect their customers' privacy and safety.
Some tech companies have delayed their E.U. launches.
There have been cases where the E.U. launches of AI products were delayed or canceled as a result of the rules. For example, Meta's Llama 4 line of AI models was released this month in every region except Europe, and Meta's AI assistants integrated into Instagram, WhatsApp, and Messenger were only permitted to enter the union 18 months after their U.S. debut.
Similarly, Google's Bard and Gemini models saw delayed European releases, and its AI Overviews are currently available in only eight member states. Apple Intelligence only recently became available in the E.U. with iOS 18.4, after "regulatory difficulties brought about by the Digital Markets Act" prevented its earlier launch in the region.
Consumers are not missing out on these products because of the rules; rather, the products are simply not yet safe to be released on the E.U. market, according to Sébastien Pant, deputy head of communications at the European consumer organisation BEUC, speaking to Euronews.
It is not the law that vetoes new features from tech companies. Rather, businesses must ensure that new features, products, or technologies comply with existing laws before they enter the E.U. market.
SEE: The EU AI Act, Europe's new rules for artificial intelligence.
EU laws encourage businesses to create more privacy-conscious tools.
Rather than excluding E.U. citizens from AI products, E.U. legislation has frequently forced tech companies to adapt and provide them with better, more privacy-conscious versions. For instance:
- After the Data Protection Commission took it to court, X agreed to permanently stop collecting personal data from E.U. users' public posts to train its AI model Grok.
- DeepSeek, the Chinese AI model, was banned in Italy over concerns about how it handled citizens' data.
- Meta paused the training of its large language models on public content shared on Facebook and Instagram last June, after E.U. regulators suggested it might require explicit consent from content owners, and it has not yet resumed.
Users typically don't expect their public posts to be used to train AI models, said Kleanthi Sardeli, a data protection lawyer at the advocacy group noyb, but that is exactly what many tech companies are doing, often with little regard for transparency. Data protection is a fundamental human right, she added, and it should be respected when AI tools are created and deployed.
Google and Meta argue that EU AI laws shortchange citizens, while their own revenue is also at risk.
Google and Meta have openly criticized European AI regulation, suggesting that it will stifle the region’s potential for innovation.
In a report released last year, Google detailed how Europe lags behind other world powers in AI innovation. Only 34% of E.U. businesses used cloud computing in 2022, a key enabler of AI development, far behind the European Commission's goal of 75% by 2030. In the same year, only 2% of global AI patents originated in Europe, compared with 61% and 21%, respectively, for China and the U.S., the two largest producers.
The report attributed a large portion of the region's challenges to E.U. regulation, noting that the EU has introduced more than 100 pieces of legislation affecting the digital economy and society since 2019. The challenge lies not just in the sheer number of regulations but in their complexity, wrote Google EMEA president Matt Brittin in an accompanying blog post, arguing that moving away from a "regulation-first" approach would help AI development.
However, Google, Meta, and other tech giants stand to lose financially if the regulations stop them from launching products in the E.U., a sizable market of 448 million people. On the other hand, if they launch products that violate the regulations, they could face severe fines of up to €35 million, or 7% of global turnover, in the case of the AI Act.
Europe is currently engaged in numerous regulatory battles with major U.S. tech companies, many of which have already resulted in significant fines. In February, Meta stated that it was prepared to take its concerns about what it perceived as unfair regulation to the U.S. president.
At the World Economic Forum in January, U.S. President Donald Trump referred to the fines as "a form of taxation." U.S. Vice President JD Vance criticized Europe's "excessive regulation" of AI in a speech at the Paris AI Action Summit, arguing that the international strategy should "foster the creation of AI technology rather than strangle it."