The European Union (EU) has established itself as a trailblazer in regulating Artificial Intelligence (AI) with the landmark Artificial Intelligence Act (AI Act). This comprehensive legislation, adopted in March 2024, sets a precedent for other regions grappling with the ethical and legal ramifications of this transformative technology.[1] This article explores the EU AI Act in more detail.
About the EU AI Act
The AI Act is the first-ever legal framework for AI adopted by a regional regulatory body, and a welcome step forward in the legal regulation of AI. This ambitious legislation establishes a comprehensive approach to AI design, development and use within the EU. According to the Act, its purpose is to improve the functioning of the EU internal market and promote the uptake of human-centric and trustworthy AI, while ensuring a high level of protection of health, safety and fundamental rights, specifically those enshrined in the EU Charter of Fundamental Rights, including democracy, the rule of law and environmental protection, against the harmful effects of AI systems in the Union, while also supporting innovation.[2] By providing a regulatory framework, the Act sets a region-wide standard for trustworthy AI norms and practices. Its influence is already being felt internationally, with countries such as the USA and Brazil drawing inspiration for their own AI regulatory frameworks.[3]
The Act adopts a risk-based approach to regulation, where risk is defined as the combination of the probability of an occurrence of harm and the severity of that harm. It classifies AI applications into categories based on their potential for harm: unacceptable risk, high risk, and low risk. Regulatory effort is thus adjusted and prioritised according to the risk level attributed to each AI system. This tiered approach ensures that the most rigorous regulations are focused on high-risk applications, such as AI-powered recruitment tools or facial recognition systems, while allowing for innovation in low-risk applications.[4] As such, most of the rules in the Act apply to AI systems categorised as high risk, while low-risk AI systems are subject only to transparency obligations. Some applications, such as social scoring or certain biometric tracking systems, are banned entirely.
A risk-based approach is only one option. A rights-based approach, for example, takes a different route: it prevents AI risks by creating rules that apply to all AI systems regardless of the level of risk they present. Different approaches will, of course, suit different contexts and objectives.
What are the rules?
The Act spans well over 100 pages. In essence, the rules include design requirements for high-risk AI systems; for example, they must be trained on high-quality data sets. The Act also imposes actor-specific obligations on key actors in the life cycle of high-risk AI systems: providers, deployers (users), importers and distributors, except where they act in a personal capacity. Finally, the Act includes general principles and rules promoting transparency and explainability in AI systems, as well as oversight and accountability mechanisms.
Prohibited AI Systems
The AI Act takes a cautious stance, prohibiting certain AI applications deemed unacceptable and detrimental to fundamental rights and societal well-being. It explicitly prohibits the development and use of AI systems that present an "unacceptable risk". These include social scoring systems, deceptive and manipulative AI, untargeted biometric scraping, and emotion recognition in sensitive environments.[5]
Challenges and Opportunities
The EU's AI Act is a landmark achievement, but challenges remain. Implementing and enforcing its provisions across the diverse EU member states will require ongoing collaboration. Additionally, the Act's effectiveness in keeping pace with the rapid evolution of AI technology will need to be evaluated and adjusted over time.[6]
The Act, while trailblazing in its ambition to regulate AI, shows signs of hasty development and political compromise. The rush to establish a global standard is evident in several areas of ambiguity that leave some crucial questions unanswered. The Act has several open questions relating to its implementation and the discharge of obligations under it, and relies on delegated acts and future guidance documents to provide the clarity needed for effective and uniform implementation across EU member states.
As noted earlier, the Act primarily focuses on high-risk AI systems, leaving a significant portion of AI applications outside its scope.

Despite these challenges, the AI Act presents a unique opportunity for the EU to become a leader in the development and deployment of trustworthy AI.
What does this mean for persons in Africa?
The Act's influence is likely to be far-reaching. It is already inspiring other countries and regions to establish their own AI regulatory frameworks, ultimately leading to a more regulated approach to AI development on a global scale.

Companies outside the EU that do business in the EU, or offer AI-powered services to people in the EU, will need to comply with the EU AI Act. This means understanding the Act's requirements and ensuring their AI systems meet its standards for responsible and trustworthy development and use.
The EU AI Act is a significant step towards regulating AI. African countries are developing their own AI policies, and the EU Act could serve as a model or inspiration for these frameworks. The African Union (AU) is currently working on a continental AI strategy that promotes responsible AI adoption while taking account of Africa's specific needs.
The Act officially enters into force 20 days after its publication in the Official Journal of the EU (anticipated in June/July 2024). Different sections have staggered implementation timelines:
1. The prohibitions of AI applications that present an "unacceptable risk", such as social scoring systems, take effect six months after the Act's entry into force. We will update you once the Act is published.

2. AI literacy obligations for companies under Article 4 come into effect six months after entry into force. This means companies using AI tools will need to ensure that their employees, and anyone operating AI systems on their behalf, are equipped with AI literacy: an understanding of how AI works, its potential risks and biases, and how to use it responsibly. Companies will likely develop or outsource training programmes to educate their workforce, and potentially provide basic information to those interacting with AI on their behalf. This AI literacy requirement aims to create a workforce that can leverage AI's capabilities while mitigating risks and ensuring ethical and transparent use.

3. Provisions for high-risk AI systems come into force 24 months after entry into force. AI systems already regulated by other EU laws, such as medical devices, have a longer grace period of 36 months.
This phased approach allows stakeholders time to adapt to the new regulations while ensuring swift action against the highest-risk applications.
In closing
The EU's AI Act is a welcome and necessary step towards ensuring the ethical and responsible design, development and use of AI. By addressing potential risks, promoting transparency, and empowering users, the Act sets a commendable precedent for navigating the complexities of AI. As the world grapples with the transformative power of AI, the EU's leadership in this domain offers hope for a future where this technology serves the greater good.
As experts in the intersection of AI and law, we partner with businesses and organisations to support compliance with AI legal regulations such as the EU AI Act. In addition, we offer customised AI Literacy and Leadership training, designed specifically for your context and for compliance with legal regulation. Ultimately, our aim is to support the effective human oversight of AI systems, towards trustworthy, responsible systems.
[1] European Parliament. (2024, March 13). Artificial Intelligence Act. https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
[2] Article 1 of the EU AI Act.
[3] See fn 1.
[4] European Union Artificial Intelligence Act. (n.d.). The AI Act Explorer. https://artificialintelligenceact.eu/ai-act-explorer/
[5] See fn 2.
[6] Pellegrini, N., & Panetta, L. (2023, September 22). AI Governance: A Consolidated Reference EU AI Act European Commission Draft OECD Recommendations on AI NIST AI Risk Management Framework Regulation Book. OneTrust. https://www.onetrust.com/resources/ai-governance-a-consolidated-reference-eu-ai-act-european-commission-draft-oecd-recommendations-on-ai-nist-ai-risk-management-framework-regulation-book/