European Union’s AI Act: First Take

Dr. Santanu Bhattacharya
Mar 14, 2024

Today, members of the European Parliament voted to finalize the text of the much-awaited AI Act, slated to come into effect this May. While this marks an exciting milestone, questions linger over which regulators will take charge, how governance will be divided between the EU and individual member states, and how generative AI will be handled.

European Union Building. Photo by Guillaume Périgois on Unsplash

The EU’s Artificial Intelligence Act is crafted to regulate AI systems by categorizing them based on risk levels and enforcing development and usage standards, with a focus on transparency, data integrity, and human oversight. It tackles ethical dilemmas and operational hurdles across diverse sectors such as healthcare, education, finance, and energy.

Categories of AI Risk

This Act stands as the most extensive AI legislation globally, and its extraterritorial reach means it will significantly affect multinational corporations: any entity developing or deploying AI systems within the EU's borders must comply with its provisions, wherever it is headquartered.

A pivotal aspect of the Act is its emphasis on ethical AI, mandating adherence to fundamental rights and safety standards. It outright bans practices such as social scoring and certain forms of biometric surveillance, and it mandates disclosure of AI-generated content, underscoring the EU's commitment to fostering trustworthy AI aligned with European values.

AI systems are categorized into four tiers (minimal, limited, high, and unacceptable risk), each carrying escalating obligations and penalties.

While tasks like email spam filtering pose minimal risk, practices like social scoring and real-time facial recognition are deemed unacceptable and face outright bans.
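As a rough illustration of how an organization might map its systems onto these tiers, here is a minimal Python sketch; the RiskTier enum, the example systems, and their assignments are illustrative assumptions for this article, not classifications taken from the Act's legal text.

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four risk tiers, from least to most restricted."""
    MINIMAL = "no new obligations"
    LIMITED = "disclosure obligations, e.g. labelling AI-generated content"
    HIGH = "impact assessments, operational monitoring, transparency"
    UNACCEPTABLE = "banned outright"

# Illustrative first-pass inventory of a company's AI systems.
# The assignments below are examples only, not legal advice.
portfolio = {
    "email spam filter": RiskTier.MINIMAL,
    "customer-support chatbot": RiskTier.LIMITED,
    "credit-scoring model": RiskTier.HIGH,
    "social scoring engine": RiskTier.UNACCEPTABLE,
    "real-time facial recognition": RiskTier.UNACCEPTABLE,
}

for system, tier in portfolio.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```

In practice, classification depends on a system's intended use and the Act's annexes, so any such inventory would need legal review before being relied on.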

Penalties

Non-compliance with the Act will result in substantial fines: up to €35 million or 7% of global annual turnover, whichever is higher, for banned AI usage, and up to €15 million or 3% of global annual turnover for inadequate risk assessments. Companies utilizing externally developed AI systems must adhere to developer instructions and assume additional responsibilities in critical domains like public services or finance.
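Because the ceiling is the higher of the fixed amount and the turnover percentage, the maximum exposure scales with company size. A minimal sketch of that arithmetic, with a function name and violation labels of our own invention:

```python
def max_fine(turnover_eur: float, violation: str) -> float:
    """Upper bound on an AI Act fine, per the figures cited above.

    Takes the higher of the fixed cap and the percentage of global
    annual turnover ("whichever is higher").
    """
    caps = {
        "banned_ai_use": (35_000_000, 0.07),               # €35M or 7%
        "inadequate_risk_assessment": (15_000_000, 0.03),  # €15M or 3%
    }
    fixed_cap, pct = caps[violation]
    return max(fixed_cap, pct * turnover_eur)

# A firm with €2bn in global turnover: the 7% prong dominates.
print(f"€{max_fine(2_000_000_000, 'banned_ai_use'):,.0f}")  # €140,000,000
```

For a firm with €2 billion in global annual turnover, the ceiling for banned AI usage is therefore €140 million, four times the fixed €35 million cap.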

Firms developing high-risk AI systems must proactively assess and prepare to comply with new obligations, including impact assessments, operational monitoring, and transparency measures. Numerous mission-critical AI systems across industries like banking, insurance, and healthcare will fall under the “high risk” category, subjecting them to new legal obligations and hefty penalties.

Reactions

Following the European Parliament's endorsement of the draft Act in June 2023, European companies, including Renault and Heineken, expressed concerns over its potential impact on competitiveness and technological sovereignty. With implementation now underway, industry reactions will be closely watched in the coming weeks.

