EU’s AI Act Enforces Strict Regulations on High-Risk Systems

The Artificial Intelligence (AI) Act has been officially published in the Official Journal of the European Union, marking the beginning of a comprehensive regulatory framework for AI development and usage within the EU. The Act, proposed by the European Commission in April 2021 and adopted by the European Parliament in March 2024, stipulates a phased implementation approach: it enters into force on August 1, 2024, and full compliance is expected by August 2, 2026.

The AI Act differentiates AI systems by risk level, imposing varying degrees of regulatory requirements. Systems found to be of “unacceptable risk” are outright banned. High-risk systems must meet stringent security and transparency standards and undergo formal conformity testing. Systems with limited risk are primarily subject to transparency obligations, whereas minimal-risk AI systems face no regulations.
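
The tiered scheme amounts to a simple lookup from risk level to obligations. The Python sketch below is purely illustrative: the RiskTier enum and OBLIGATIONS table are hypothetical names condensing the obligations described above, not a legal reading of the Act.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk levels the Act distinguishes (names are ours)."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Simplified summary of each tier's obligations as described above.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["banned outright"],
    RiskTier.HIGH: [
        "meet security and transparency standards",
        "undergo formal conformity testing",
    ],
    RiskTier.LIMITED: ["meet transparency obligations"],
    RiskTier.MINIMAL: [],  # no regulatory requirements
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the simplified obligations attached to a risk tier."""
    return OBLIGATIONS[tier]

for tier in RiskTier:
    print(f"{tier.value}: {', '.join(obligations_for(tier)) or 'none'}")
```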

A centerpiece of the AI Act is its regulation of facial recognition technology in public spaces, classified as high-risk but not prohibited. This decision has sparked debate, with organizations like Amnesty International advocating for an outright ban on general usage of facial recognition technology.

Dan Nechita, head of cabinet for Dragoş Tudorache, the member of the European Parliament who guided the Act through numerous votes, emphasized its broad influence. According to Nechita, “Like with the GDPR, where we decided, okay, this is how to protect personal data. GDPR is not perfect, but it has had a global influence. The AI Act will be the same.”

The AI Act establishes several new bodies to ensure consistent application and compliance. These include the AI Office, housed within the European Commission, and the European Artificial Intelligence Board (EAIB), composed of representatives from the member states. The AI Office will oversee major AI system developers, while the EAIB is tasked with ensuring consistent application of the regulation across member states. In addition, an Advisory Forum and a Scientific Panel will integrate input from industry, academia, and civil society.

The Act’s risk-based approach aims to balance innovation with safeguarding the public interest. It targets applications that significantly impact fundamental rights, such as those used in employment, law enforcement, and social-benefits decisions. Limited-risk applications, such as chatbots and deepfakes, are subject to transparency obligations.
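
Continuing the sketch above, the article’s example applications could be mapped onto those tiers roughly as follows; the EXAMPLE_TIERS table and classify() helper are hypothetical illustrations, not classifications drawn from the Act’s annexes.

```python
# Hypothetical mapping of the applications mentioned in this article
# onto the risk tiers sketched earlier; illustrative only.
EXAMPLE_TIERS = {
    "employment decision system": "high",
    "law enforcement tool": "high",
    "social benefits decision system": "high",
    "public facial recognition": "high",
    "chatbot": "limited",
    "deepfake generator": "limited",
}

def classify(application: str) -> str:
    """Look up an application's tier in the illustrative table (hypothetical helper)."""
    return EXAMPLE_TIERS.get(application, "minimal")  # simplification: assume minimal if unlisted

print(classify("chatbot"))               # "limited" -> transparency obligations
print(classify("law enforcement tool"))  # "high" -> conformity testing
```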

Nechita explains, “The bulk of the regulation applies to AI systems that have a very, very significant impact on the fundamental rights of humans.” These high-risk use cases sit near the top of the regulatory pyramid, below only the outright prohibitions.