EU Approves Landmark AI Regulation Law

The European Union has officially approved the Artificial Intelligence (AI) Act, a landmark piece of legislation that aims to regulate AI systems within the EU. This comprehensive law, considered the first of its kind globally, seeks to foster trustworthy AI, protect fundamental rights, and boost innovation. The AI Act establishes a risk-based framework, categorizing AI systems based on their potential risks and imposing corresponding obligations on developers and deployers.

The EU AI Act, formally known as Regulation (EU) 2024/1689, was published in the EU's Official Journal on July 12, 2024, and entered into force on August 1, 2024. The journey to approval involved multiple stages, including a proposal by the European Commission in April 2021, endorsement by the European Parliament in March 2024, and final approval by the EU Council in May 2024. The Act's provisions will be implemented gradually over the next few years.

A central principle of the AI Act is its risk-based approach. AI systems are classified into categories based on their potential risks to society and individuals. Systems deemed to pose an "unacceptable risk," such as those that manipulate human behavior or enable social scoring by governments, are prohibited outright. High-risk AI systems, which include those used in critical infrastructure, education, employment, and law enforcement, are subject to strict requirements regarding data quality, transparency, human oversight, and accuracy. Systems posing only limited risk, such as chatbots, mainly face transparency obligations, while minimal-risk systems face no new requirements under the Act.
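The tiered structure described above can be pictured as a simple decision ladder. The following sketch is purely illustrative: the keyword sets and the `triage` helper are hypothetical, and real classification under the Act requires legal analysis of the system's actual use against the Regulation's annexes, not string matching.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict requirements (data quality, oversight, accuracy)"
    LIMITED = "transparency obligations"
    MINIMAL = "no new obligations"

# Hypothetical examples drawn from the categories named in the article.
PROHIBITED_USES = {"social scoring", "behavioral manipulation"}
HIGH_RISK_DOMAINS = {"critical infrastructure", "education",
                     "employment", "law enforcement"}

def triage(use_case: str) -> RiskTier:
    """Illustrative first-pass triage of a use case into a risk tier."""
    if use_case in PROHIBITED_USES:
        return RiskTier.UNACCEPTABLE
    if use_case in HIGH_RISK_DOMAINS:
        return RiskTier.HIGH
    if use_case == "chatbot":
        return RiskTier.LIMITED
    return RiskTier.MINIMAL
```

The point of the ladder is that obligations are checked in order of severity: a system is evaluated against the prohibitions first, then the high-risk list, before lighter transparency rules apply.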

The EU AI Act introduces specific disclosure obligations to ensure transparency and build trust in AI systems. For example, individuals must be told when they are interacting with an AI system such as a chatbot. Furthermore, providers of generative AI models are required to ensure that AI-generated content is identifiable, and certain AI-generated content, such as deepfakes, must be clearly labeled.

The AI Act applies not only to AI systems developed within the EU but also to those deployed or used within the EU, regardless of the provider's location. This broad scope means that companies from around the world must comply with the AI Act if they offer AI products or services to EU citizens or businesses. The Act outlines obligations for various actors in the AI value chain, including providers, deployers, importers, and distributors.

The AI Act establishes a complex governance system involving authorities at both the EU and national levels. The European AI Office will be responsible for overseeing the implementation of the Act at the EU level, while member states will be required to establish national supervisory authorities. This multi-layered governance structure may lead to organizations facing inquiries or enforcement actions in multiple EU jurisdictions simultaneously.

The EU AI Act has a staggered implementation timeline. Prohibitions on unacceptable-risk AI systems take effect starting February 2, 2025. Rules for general-purpose AI (GPAI) models become applicable on August 2, 2025, with a delayed compliance date of August 2, 2027, for GPAI models already on the market. Requirements for high-risk AI systems will be enforced from August 2, 2026. Finally, rules for high-risk AI systems used as safety components in regulated products will be implemented from August 2, 2027.

Non-compliance with the AI Act can result in significant fines. The most serious violations, involving prohibited AI practices, carry penalties of up to EUR 35 million or 7% of worldwide annual turnover, whichever is higher; at the lower end, supplying incorrect information to authorities can draw up to EUR 7.5 million or 1% of worldwide annual turnover. These substantial penalties underscore the importance of compliance for organizations operating in the EU.
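The penalty ceilings combine a fixed cap with a share of turnover. A minimal sketch of that arithmetic, assuming the "whichever is higher" rule that applies to large undertakings (for SMEs the lower of the two amounts applies instead), might look like this; the middle tier for other obligations is an assumption not spelled out in the article above:

```python
def max_fine_eur(violation: str, worldwide_turnover_eur: float) -> float:
    """Illustrative ceiling on an AI Act fine: the higher of a fixed cap
    or a percentage of worldwide annual turnover (large undertakings)."""
    tiers = {
        "prohibited_practice": (35_000_000, 0.07),
        "other_obligation": (15_000_000, 0.03),    # assumed middle tier
        "incorrect_information": (7_500_000, 0.01),
    }
    fixed_cap, turnover_share = tiers[violation]
    return max(fixed_cap, turnover_share * worldwide_turnover_eur)
```

For a company with EUR 1 billion in worldwide turnover, a prohibited-practice violation would be capped at 7% of turnover (EUR 70 million), since that exceeds the EUR 35 million fixed cap.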

The EU AI Act is expected to have a significant impact on the development and deployment of AI systems globally. By setting clear standards for AI safety, transparency, and ethical considerations, the Act aims to foster innovation while mitigating potential risks. It is anticipated that the EU AI Act will serve as a model for AI regulation in other countries, much as the EU's General Data Protection Regulation (GDPR) influenced data privacy laws worldwide. Businesses that use AI should assess the risk level of their AI applications and prepare for the new law by implementing risk management, oversight, and other compliance measures.


Writer - Avani Desai
Avani Desai is a seasoned tech news writer with a passion for uncovering the latest trends and innovations in the digital world. She possesses a keen ability to translate complex technical concepts into engaging and accessible narratives. Avani is highly regarded for her sharp wit, meticulous research, and unwavering commitment to delivering accurate and informative content, making her a trusted voice in tech journalism.

© 2025 TechScoop360