In a recent divergence of approaches to AI regulation, Microsoft has signaled its likely commitment to the European Union's AI code of practice, while Meta has firmly rejected the guidelines. This split highlights the growing regulatory tensions between US tech firms and European regulators concerning artificial intelligence.
The EU's AI code of practice, officially known as the General-Purpose AI (GPAI) Code of Practice, is a voluntary framework designed to help companies comply with the EU AI Act's obligations on safety, transparency, and copyright. The code requires signatories to publish summaries of the content used to train their general-purpose AI models and to implement policies complying with EU copyright law. Drawn up by 13 independent experts, the code aims to provide legal certainty to those who sign it. The AI Act itself entered into force in August 2024 and applies to a wide range of tech companies, including Alphabet (Google), Meta, OpenAI, Anthropic, and Mistral. The European Commission will begin enforcing fines for non-compliance by August 2026.
Microsoft's willingness to engage with the EU's AI code aligns with its broader strategy of advocating for balanced regulation that fosters innovation while addressing societal concerns. Microsoft President Brad Smith stated that the company intends to sign the code, pending a review of the documents. This position underscores Microsoft's commitment to adhering to regulatory standards and promoting ethical practices in the AI industry. Microsoft is investing heavily in artificial intelligence, with plans to spend approximately $80 billion on data centers to train AI models. The company has also asserted that its use of AI tools internally has enhanced productivity across various departments, including sales, customer support, and software development.
In contrast, Meta has taken a firm stance against the EU's AI code. Meta's chief global affairs officer, Joel Kaplan, argued that the code creates legal ambiguities for model developers and imposes requirements that extend well beyond the scope of the AI Act. Kaplan also voiced concerns that the code's "over-reach" would hinder the development and deployment of frontier AI models in Europe, potentially stunting the growth of European companies. Meta's decision to reject the code aligns with concerns expressed by a group of 45 European companies. These companies share Meta's apprehension that the EU's approach to AI regulation could stifle innovation and competition.
Meta's rejection of the EU's AI code exposes the company to potential legal challenges and enforcement actions from the EU AI Office. By forgoing the voluntary code, Meta may face increased scrutiny and legal risk. Thomas Regnier, a European Commission spokesperson on digital matters, emphasized that companies not participating in the code must demonstrate alternative "compliance measures" or face more rigorous regulatory scrutiny. The AI Act empowers the EU to impose fines of up to 7% of a company's annual global revenue for non-compliance.
The divergent stances of Microsoft and Meta illustrate the ongoing debate surrounding AI regulation. Microsoft's support for the EU's code positions the company as a leader in responsible AI development, while Meta's resistance reflects concerns about overregulation and its impact on innovation. The EU's approach, as exemplified by the AI Act and the GPAI Code of Practice, seeks to balance the promotion of innovation with the need to address ethical and societal concerns. The coming years will reveal the long-term consequences of these differing approaches and their impact on the development and deployment of AI technologies in Europe and beyond.