The AI Act is Here

Alvaro Montoro - Aug 4 - Dev Community

August 1st, 2024 marked a milestone for both politics and Artificial Intelligence (AI). It was the day the European Artificial Intelligence Act (AI Act) entered into force, making it the first comprehensive regulation of AI in the world.

Anything related to AI restrictions receives a divided reception on social networks and in AI circles, and the AI Act was no exception. Some people viewed it as an unnecessary rein on development and integration, while others welcomed it as much-needed control over systems operating "lawlessly."

What is the AI Act?

The Artificial Intelligence Act (AI Act) is a comprehensive regulation by the European Union to ensure the safe and trustworthy development of Artificial Intelligence (AI). It prioritizes protecting people's interests over corporations' interests.

This legislation is part of a larger effort to prioritize human rights and fairness in developing and deploying AI systems within the European Union's borders.

It intends to balance the many benefits AI potentially brings against the new risks the technology entails: embracing the societal benefits and economic growth while preventing abuse that could endanger people's safety (including physical safety) and rights.

Risk Classification

The AI Act sets rules to address different AI risks and how providers and developers should handle them. It categorizes the risks into four types (ordered from lowest to highest severity):

  • Minimal or no risk AI
  • Limited risk AI
  • High-risk AI
  • Unacceptable risk AI

Pyramid of risk categories (source: European Commission)

In the following sections, we'll review each risk level, explain what it means, and provide some examples.

Minimal or No Risk

These AI systems pose little to no risk to people's rights and safety and can operate without additional obligations. However, companies could voluntarily adopt codes of conduct to provide more transparency.

Some examples of minimal-risk AI:

  • Spam filters
  • AI-enabled video games
  • AI-enabled recommendation systems

According to the European Commission, most of the AI systems currently operating in the EU would fall under this category.

Limited Risk

Limited risk is associated with a lack of information and transparency in AI usage. Providers must disclose the origin and nature of the content and identify what is or was AI-generated in a way that is readable by both humans and machines.

The idea is to give people a choice: continue interacting with AI or step back from it. People should be aware of whether they are dealing with AI or another person, and AI-generated/modified videos and images must be clearly identified as such.
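
The Act does not prescribe a concrete labeling format, so here is a purely illustrative sketch of how a provider could attach both a human-readable notice and machine-readable metadata to generated text. The label_ai_content function and its field names are made up for this example; they are not part of the regulation:

    import json
    from datetime import datetime, timezone

    def label_ai_content(text: str, model_name: str) -> dict:
        # Pair the AI-generated text with a notice that people can read
        # and a metadata block that machines can parse. All field names
        # are invented for illustration; the AI Act mandates disclosure,
        # not a specific schema.
        return {
            "content": text,
            "human_notice": f"This text was generated by AI ({model_name}).",
            "metadata": {
                "ai_generated": True,
                "generator": model_name,
                "labeled_at": datetime.now(timezone.utc).isoformat(),
            },
        }

    print(json.dumps(label_ai_content("Sample output...", "example-model"), indent=2))

Emerging standards such as C2PA content credentials aim to provide this kind of machine-readable provenance in a more robust, tamper-evident way.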

Some examples of limited-risk AI:

  • Artificially generated text
  • AI chatbots
  • Deepfake videos
  • Biometric categorization
  • Emotion recognition systems

Notice how this category applies to more than artificially generated text; it also covers audio, video, images, and other media.

High Risk

This category covers more complex apps and systems that pose a greater safety risk. These systems must meet strict requirements to ensure they include risk-mitigation systems, high-quality datasets, activity logging, detailed documentation, clear user information, human oversight, and high levels of robustness, accuracy, and cybersecurity.
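
The Act describes these obligations at a high level and leaves the implementation to providers. As a rough sketch under that assumption (the predict_with_audit helper, log fields, and file name are all hypothetical), the activity-logging requirement could start as simply as recording every inference event in an audit trail:

    import logging
    from datetime import datetime, timezone

    # Append every inference event to an audit log. The format below is
    # invented for illustration; the AI Act requires automatic event
    # recording for high-risk systems but does not dictate a schema.
    logging.basicConfig(filename="ai_audit.log", level=logging.INFO)

    def predict_with_audit(model_version: str, input_id: str, score: float) -> float:
        # Record what ran, on which input, and when, before returning
        # the model's output.
        logging.info(
            "ts=%s model=%s input=%s score=%.3f",
            datetime.now(timezone.utc).isoformat(),
            model_version,
            input_id,
            score,
        )
        return score

    predict_with_audit("example-model-v2", "applicant-123", 0.87)

A real deployment would also need tamper-resistant storage and retention policies, but the principle is the same: every decision the system makes should be traceable after the fact.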

Because of their nature and impact on society, these systems will be subject to greater scrutiny by the authorities and must go through an approval process before entering the European market, both after development and after every substantial change they undergo.

Development process for high-risk AI systems (source: European Commission): 1) develop the AI system; 2) undergo a conformity assessment; 3) register the system in the EU AI database; 4) once approved, the system can be placed on the market. If any substantial change happens, the process returns to step 2.

Some examples of high-risk AI:

  • Remote biometric identification systems
  • AI systems to handle critical infrastructure
  • Migration, asylum, and border control management
  • Programs to determine admission to education at all levels
  • Apps used by a judicial authority to research and interpret facts and the law
  • Recruitment systems to analyze and filter job applications and evaluate candidates
  • AI systems intended to influence the outcome of an election or referendum or people's voting behavior

Some of the high-risk systems may resemble the unacceptable risks from the next section. Sometimes, the line may be blurry, and some companies will push the limits.

Unacceptable Risk

The AI Act bans AI systems and applications that are considered a clear threat to people's fundamental rights. These systems try to manipulate human behavior, circumvent free will, or enable indiscriminate policing.

Some examples of unacceptable-risk AI:

  • Emotion recognition systems in the workplace
  • Toys that encourage dangerous behavior in children
  • AI systems used by governments or companies to create "social scores"
  • Real-time remote biometric identification in public spaces for law enforcement purposes
  • Indiscriminate scraping of Internet or CCTV content for facial images to build or expand databases and training sets

As you may have noticed, these restrictions target not only companies and private entities but also governments and public institutions.

There has been heavy criticism online because the bans on these unacceptable risks allow for "narrow exceptions," leaving a door open for governments and corporations to abuse the AI Act.

What's Next?

Though the AI Act entered into force on August 1st, that doesn't mean all companies must comply with all requirements immediately. There are adjustment periods and deadlines to meet.

Here's a timeline of key deadlines and events:

  • August 1st, 2024: the AI Act enters into force
  • February 2nd, 2025: the prohibition on AI systems that present unacceptable risks takes effect
  • August 2nd, 2025: deadline for EU Member States to designate authorities to oversee and carry out market surveillance activities
  • August 2nd, 2025: rules for General-Purpose AI models apply
  • August 2nd, 2026: most of the regulations in the AI Act apply
  • August 2nd, 2027: rules for high-risk AI systems embedded in regulated products apply

AI Act timeline (created with Canva by Alvaro Montoro)

As highlighted by the dates, the AI Act provides companies and developers with a two-to-three-year cushion to comply with all the regulations.

The European Commission has launched the AI Pact to help developers adapt to the AI Act. This initiative facilitates the adoption of critical obligations of the AI Act ahead of the legal deadlines.
