The Next Step for AI Regulation: EU publishes first Draft Regulation on Artificial Intelligence.

Robert Hobbs
6 min read · Apr 22, 2021

TLDR: The European Commission has taken the next step toward regulating the use of Artificial Intelligence in all EU markets. The proposed scope is broad and the fines are significant. Starting on the core requirements sooner rather than later lowers costs and risks.

https://digital-strategy.ec.europa.eu/en/library/proposal-regulation-european-approach-artificial-intelligence


Scope of proposed AI Regulation

On April 20, 2021, the European Commission published the first draft regulation on Artificial Intelligence and Machine Learning [1, p. 37]. For organizations that have experienced the challenges of the General Data Protection Regulation (GDPR), the path of the AI regulation will be remarkably familiar. While currently only a proposal, the new AI regulation has broad implications for companies worldwide. The draft applies to any provider of a product with an AI component that is:

1. Put into service in the EU

2. Placed on the market in the EU

3. Used by people in the EU

4. Producing output that affects people in the EU

Matching the broad applicability, the regulation also takes a broad definition of “AI”. The companion Annex I defines “Artificial Intelligence” to include multiple approaches: machine learning, logic-based and knowledge-based systems, inductive reasoning, inference, expert systems, statistical approaches, and optimization methods [2, p. 1]. The inclusion of “logic-based” systems could cover classic if-then logic, although further clarification may narrow the scope of several terms. The boundary between AI, machine learning, and conventional programming needs clarification. Still, the EU definition covers many tools commonly used in applications yet not normally associated with AI or machine learning.

Risk-Based Classification of AI

The AI regulation would also classify AI implementations based on a risk analysis of their potential impact on fundamental rights and safety. The draft states the risk analysis is based on the intended purpose while also stating the risk level depends “on the specific purpose and modalities in which the system is used.” [1, p. 13] The risk analysis classifies AI services as either high-risk or non-high-risk. The inclusion of human rights in the risk analysis aligns with the EU’s human rights positions and significantly shapes the evaluations. The regulation covers services such as critical infrastructure, remote biometric identification (face recognition), and others not normally considered high-risk. Annex III specifically lists high-risk areas driven by human rights concerns, including access to training, employee recruitment, employee retention, and access to public services [2, p. 5].

Systems affecting law enforcement and migration are specific concerns [1, p. 7]. In addition to remote and local biometric applications, the EU would consider high-risk those functions that assess an individual’s likelihood to re-offend, detect deep fakes, evaluate evidence reliability, identify criminal behavior, or integrate data sources to surface relationships in the data. Migration processes pose similar concerns, with immigrant risk evaluations and travel document checks meriting high-risk status. The volume of migration in the EU has driven increased use of automated document screening and, with it, increased risk to human rights.
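As an illustration only, the tiered classification described above can be sketched as a simple lookup. The area names below are hypothetical identifiers paraphrased from Annex III; a real assessment is a legal analysis of the system's intended purpose and modalities, not a string match:

```python
# Toy sketch of the draft's risk tiers. Area names are paraphrased from
# Annex III and are illustrative only, not the regulation's wording.
HIGH_RISK_AREAS = {
    "critical_infrastructure",
    "remote_biometric_identification",
    "education_and_training_access",
    "employment_recruitment_and_retention",
    "access_to_public_services",
    "law_enforcement",
    "migration_and_border_control",
}

def classify_risk(intended_purpose: str) -> str:
    """Return 'high-risk' if the intended purpose falls in an Annex III area."""
    if intended_purpose in HIGH_RISK_AREAS:
        return "high-risk"
    return "non-high-risk"

print(classify_risk("employment_recruitment_and_retention"))  # high-risk
print(classify_risk("spam_filtering"))                        # non-high-risk
```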

Implications for High-Risk AI

Identifying an AI service as high-risk brings requirements for internal and external action. Under the proposed regulation, Article 6(2) requires providers to register their high-risk AI implementations in a new EU database [1, p. 68]. Internally, the provider has multiple obligations, including additional quality management processes, technical documentation, log management, conformity assessment, and CE marking of conformity [1, p. 16]. Providers must demonstrate the conformity of high-risk applications to the national competent authority when requested. The high-risk designation therefore brings additional requirements and costs. The proposal includes compliance-cost data collected during the review period of the Trustworthy AI Guide. Based on that feedback, compliance would add approximately 4% to the cost of an AI system, with an additional 4% for verification [1, p. 8]. Non-high-risk systems would still face additional requirements and costs, although at lower levels than high-risk systems.
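The cost estimate above is simple enough to work through. A minimal sketch, assuming the ~4% compliance and ~4% verification overheads cited from the proposal's impact assessment apply to the system's base cost (the proposal does not specify an exact rate for non-high-risk systems):

```python
# Illustrative arithmetic only, based on the ~4% compliance and ~4%
# verification cost estimates cited in the proposal [1, p. 8].
COMPLIANCE_RATE = 0.04
VERIFICATION_RATE = 0.04

def estimated_total_cost(base_system_cost: float, high_risk: bool) -> float:
    """Add compliance and verification overheads for a high-risk system."""
    if high_risk:
        overhead = base_system_cost * (COMPLIANCE_RATE + VERIFICATION_RATE)
        return base_system_cost + overhead
    # Non-high-risk overhead is lower, but the draft gives no exact rate.
    return base_system_cost

# A 1 million € high-risk system would cost roughly 1.08 million € in total.
print(estimated_total_cost(1_000_000, high_risk=True))
```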

AI Fines: Up to 6% of Total Worldwide Annual Turnover

Similar to the General Data Protection Regulation (GDPR), the AI regulation includes multiple levels of fines to ensure compliance [1], [3]. The Commission intends the fines to be “effective, proportionate, and dissuasive,” with the tiers reflecting the risk of the AI applications [1, p. 82]. Non-compliance with Article 5 or Article 10 could result in fines of up to 30 million € or “6% of total worldwide annual turnover for the preceding financial year, whichever is higher.” [1, p. 82] Article 5 identifies proscribed uses of AI, such as exploiting minors, exploiting individuals with mental disabilities, and prohibited real-time remote biometric identification. Article 10 addresses quality concerns for high-risk implementations, including security design, test planning, and bias analysis. Violations of other articles and of requirements for non-high-risk applications may draw fines of up to 20 million € or 4% of annual turnover. Because the EU is a multi-jurisdictional entity, events affecting multiple national areas may be subject to fines from multiple competent national authorities. While each authority is constrained in the size of its fine, an AI provider may face fines from several national agencies. If GDPR practice holds, multiple fines would be rare, but they remain possible under the current AI proposal.
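The “whichever is higher” rule means the turnover percentage, not the fixed amount, binds for large providers. A minimal sketch of the two tiers as described above (per-authority caps and aggregation across jurisdictions are ignored here):

```python
# Illustrative sketch of the draft's two fine tiers [1, p. 82]. The
# applicable cap is the HIGHER of the fixed amount and the turnover share.
def max_fine(worldwide_annual_turnover: float, article_5_or_10: bool) -> float:
    """Return the maximum fine in € under the draft regulation's tiers."""
    if article_5_or_10:
        # Article 5/10 violations: 30 million € or 6% of turnover
        return max(30_000_000, 0.06 * worldwide_annual_turnover)
    # Other violations: 20 million € or 4% of turnover
    return max(20_000_000, 0.04 * worldwide_annual_turnover)

# A provider with 1 billion € turnover violating Article 5:
print(max_fine(1_000_000_000, article_5_or_10=True))  # 60000000.0
```

For a small provider with, say, 10 million € turnover, the fixed 30 million € floor dominates instead, which is why the tiers are dissuasive at every company size.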

To Minimize Cost, Start Compliance Work Now

The current proposal for AI regulation reflects the evolution of both EU human rights positions and the HLEG Guide to Trustworthy AI. With the increasing capability and adoption of AI tools, the European Union has recognized the potential downsides for individuals and democratic institutions. While not yet an official regulation, the AI proposal shows the direction the EU Commission will take. The proposed regulation and the Trustworthy AI guide give actionable guidance for starting down the path toward compliance with the EU’s future regulations. Companies have long had to make strategic plans against moving regulatory and ethical targets across multiple jurisdictions [4]. While the final version of the EU AI regulation is still evolving, several requirements are clear. Companies planning on European markets or customers must prepare to comply with broad AI requirements, including human rights protections. Beginning the compliance analysis now provides foresight into the effort and cost required so that existing and future AI systems can compete in European markets with lower risk of market loss or fines.

References

[1] “Proposal for a Regulation on a European approach for Artificial Intelligence | Shaping Europe’s digital future,” Apr. 21, 2021. https://digital-strategy.ec.europa.eu/en/library/proposal-regulation-european-approach-artificial-intelligence (accessed Apr. 21, 2021).

[2] “ANNEXES to the Proposal for a Regulation of the European Parliament and of the Council.” Apr. 21, 2021, [Online]. Available: https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=75789.

[3] “General Data Protection Regulation (GDPR) — Official Legal Text,” General Data Protection Regulation (GDPR). https://gdpr-info.eu/ (accessed Jan. 31, 2021).

[4] R. Finn and K. Wadhwa, “The ethics of ‘smart’ advertising and regulatory initiatives in the consumer intelligence industry,” Info J. Policy Regul. Strategy Telecommun. Inf. Media Bradf., vol. 16, no. 3, pp. 22–39, 2014.


Robert Hobbs

ML Researcher and Enterprise Project Manager. Bringing an enterprise perspective to dependable machine learning.