
by Luca Bertuzzi

The upcoming Czech Presidency shared a discussion paper with the other EU governments to gather their views on the definition of AI, high-risk systems, governance and national security.

The paper, obtained by EURACTIV, will be the basis for the discussion in the Telecom Working Party on 5 July, with a view to providing an updated compromise text by 20 July. The member states will then be asked to provide written comments on the new compromise by 2 September.

“The CZ Presidency has identified four high-level outstanding issues which require a more thorough discussion and where receiving directions from the member states would be crucial to moving the negotiations to the next level,” the document reads.

The document is the first from the Czech Presidency, whose term formally starts only in July. The draft indicates continuity with the direction taken by the French Presidency and sets out the main topics the Czechs will focus on.

Definition

The internal document notes that ‘a large number’ of EU countries have questioned the definition of what constitutes an AI-based system, considering the current definition too broad and ambiguous and therefore at risk of also covering simple software.

One related question is the extent to which the Commission should be able to amend Annex I of the regulation – the one defining Artificial Intelligence techniques and approaches – via secondary legislation.

The Czech Presidency offers different alternatives on how to address these concerns.

The most conservative option is to maintain the Commission’s proposal or adopt the wording proposed by the French Presidency, adding clarifying elements such as references to learning, reasoning and modelling.

Under this scenario, the EU executive would either keep its delegated powers, or changes could only occur through the ordinary legislative procedure.

The other possibilities entail a narrower definition, covering AI systems developed either through machine learning techniques only, or through machine learning and knowledge-based approaches.

In this case, Annex I would be removed and the AI techniques moved directly into the text, either in the law’s preamble or in the relevant article. The Commission would only have the power to adopt implementing acts to clarify the existing categories.

High-risk systems

The AI Act’s Annex III lists AI applications considered high-risk for human wellbeing and fundamental rights. However, for some member states, the wording is too broad, and the use cases covered should only be those for which an impact assessment has been carried out.

In this case, the most conservative option is maintaining the text as it stands in the French compromise.

Alternatively, EU countries can argue for deleting or adding certain use cases or making the wording more precise.

The Czech Presidency also proposed adding a layer, namely some high-level criteria for evaluating what is, in practice, a significant risk. Providers would then self-assess whether their system meets such criteria.

Another way to narrow the classification would be to distinguish whether the AI system provides fully automated decision-making, which would be automatically high-risk, or merely informs human decisions.

In the latter case, the system would only be considered high-risk if the AI-generated information were significant to the decision-making. However, what counts as a significant input would have to be further clarified by the Commission via secondary legislation.

The EU countries are asked whether the Commission should maintain the power to add new high-risk use cases to the annex, whether it should also be able to remove use cases under certain conditions, or whether these powers should be deleted altogether.

Governance and enforcement

Several EU countries have raised concerns that the regulation’s “overly decentralised national-level governance framework could pose limitations to effective enforcement,” particularly as they fear they have insufficient capacity and expertise to enforce the AI rules.

At the same time, the Czech Presidency notes that the legislation should provide a “certain level of flexibility for national law and specificities” and that “delegating enforcement powers to a more central level also requires careful practical and budget implications considerations.”

As elaborated under the French Presidency, the current governance framework follows the EU’s Market Surveillance Regulation, with national authorities in the driving seat, an AI Board for coordination and the Commission’s interventions limited to extreme cases.

Another way to go about it would be to further support member states with Union Testing Facilities, a pool of experts and an emergency mechanism to fast-track support.

The AI Board could also be strengthened to assist national authorities, with a more explicit mandate, based on the Medical Devices Regulation, to guide and coordinate market surveillance activities.

Finally, the Commission could be empowered to launch direct investigations under exceptional circumstances, but this “implies considerable practical and financial implications.”