EU set to adopt world’s first AI legislation that will ban facial recognition in public places
The EU's new AI Act sets global standards for AI regulation with a risk-based approach that balances innovation and safety.
The European Union (EU) is leading the race to regulate artificial intelligence (AI). Putting an end to three days of negotiations, the European Council and the European Parliament reached a provisional agreement earlier today on what’s set to become the world’s first comprehensive regulation of AI.
Carme Artigas, the Spanish Secretary of State for digitalization and AI, called the agreement a “historical achievement” in a press release. Artigas said that the rules struck an “extremely delicate balance” between encouraging safe and trustworthy AI innovation and adoption across the EU and protecting the “fundamental rights” of citizens.
The draft legislation, the Artificial Intelligence Act, was first proposed by the European Commission in April 2021. The Parliament and EU member states will vote to approve the draft legislation next year, but the rules will not come into effect until 2025.
A risk-based approach to regulating AI
The AI Act is designed around a risk-based approach: the higher the risk an AI system poses, the more stringent the rules. To achieve this, the regulation will classify AI systems by risk level in order to identify those deemed "high-risk."
AI systems deemed non-threatening and low-risk will be subject to "very light transparency obligations." For instance, such systems will be required to disclose that their content is AI-generated, enabling users to make informed decisions.
For high-risk AIs, the legislation will add a number of obligations and requirements, including:
Human Oversight: The act mandates a human-centered approach, emphasizing clear and effective human oversight mechanisms of high-risk AI systems. This means having humans in the loop, actively monitoring and overseeing the AI system’s operation. Their role includes ensuring the system works as intended, identifying and addressing potential harms or unintended consequences, and ultimately holding responsibility for its decisions and actions.
Transparency and Explainability: Demystifying the inner workings of high-risk AI systems is crucial for building trust and ensuring accountability. Developers must provide clear and accessible information about how their systems make decisions. This includes details on the underlying algorithms, training data, and potential biases that may influence the system’s outputs.
Data Governance: The AI Act emphasizes responsible data practices, aiming to prevent discrimination, bias, and privacy violations. Developers must ensure the data used to train and operate high-risk AI systems is accurate, complete, and representative. Data minimization principles also apply: only the information necessary for the system’s function may be collected, reducing the risk of misuse or breaches. Furthermore, individuals must have clear rights to access, rectify, and erase their data used in AI systems, empowering them to control their information and ensure its ethical use.
Risk Management: Proactive risk identification and mitigation will become a key requirement for high-risk AIs. Developers must implement robust risk management frameworks that systematically assess potential harms, vulnerabilities, and unintended consequences of their systems.
Ban on certain AI uses
The regulation will outright ban the use of certain AI systems whose risks are considered to be “unacceptable.” For instance, the use of facial recognition AI in public areas will be banned, with exceptions for use by law enforcement.
The regulation also prohibits AI systems that manipulate human behaviour, implement social scoring, or exploit vulnerable groups. The legislation will additionally ban emotional recognition systems in settings such as schools and workplaces, as well as the scraping of facial images from surveillance footage and the internet.
Penalties and provisions to attract innovation
The AI Act will also penalize companies for violations. Deploying banned AI applications will result in a penalty of 7% of the company’s global revenue, while companies that fail to meet the Act’s obligations and requirements will be fined 3% of their global revenue.
In a bid to boost innovation, the regulation will allow the testing of innovative AI systems in real-world conditions with appropriate safeguards.
While the EU is already ahead in the race, the U.S., U.K., and Japan are also working on their own AI legislation. The EU’s AI Act could serve as a global template for countries seeking to regulate AI.