Anita Prinzie
The AI Act is a European regulation promoting the uptake of human-centric and trustworthy AI while ensuring the protection of health, safety, and fundamental rights. Companies can demonstrate conformity with the AI Act by complying with the 10 harmonised standards drafted by CEN-CENELEC. My fellowship contributes to two of these harmonised standards.
Firstly, the EN AI Trustworthiness Framework provides requirements for trustworthy AI systems that align with European regulation, European values, and the expectations of European stakeholders. It enables the design and management of trustworthy AI systems that proactively respect European norms, values, and fundamental rights. It also calls for holistic risk management that takes into account the risks to users and to society: the requirements for logging, transparency, human oversight, accuracy, and robustness address the management of risks both to affected users and to society at large.
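To make these requirements more concrete, the sketch below shows one way the logging and human-oversight requirements might surface in an AI system's code. It is a minimal, hypothetical illustration: the names, the confidence threshold, and the escalation rule are my assumptions, not provisions of the standard.

```python
# Hypothetical sketch: how logging and human-oversight requirements might be
# operationalised. All identifiers and thresholds are illustrative.
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("ai_system")

@dataclass
class Decision:
    input_id: str
    prediction: str
    confidence: float

CONFIDENCE_THRESHOLD = 0.8  # illustrative cut-off for escalation

def decide(d: Decision) -> str:
    # Log every automated decision so it can be audited later (logging).
    log.info("decision input=%s prediction=%s confidence=%.2f",
             d.input_id, d.prediction, d.confidence)
    if d.confidence < CONFIDENCE_THRESHOLD:
        # Route low-confidence cases to a human reviewer (human oversight).
        log.info("escalated input=%s for human review", d.input_id)
        return "human_review"
    return d.prediction

print(decide(Decision("case-42", "approve", 0.65)))  # -> "human_review"
```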
Secondly, the EN AI Risk Management standard enables the control of risks not only at the individual level but also at the societal level (e.g., misinformation and disinformation risks, risks to democratic processes, …). The scope of the standard indicates that the risks covered include both risks to health and safety and risks to fundamental rights arising from AI systems, with impacts on individuals, organisations, the market, and society. The risk policy (section 5.1.2), the risk management plan (section 5.1.4), and the risk evaluation (section 5.2.1.4) specify requirements on consultation with potentially affected stakeholders (or their proxies, including civil society organisations). The implementation and verification of risk control measures (section 5.2.2.2) and the evaluation of residual risk (section 5.2.3) refer to the test of necessity and proportionality in a democratic society for risks pertaining to a potential interference with a fundamental right that permits qualifications.
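As a rough illustration of the process steps named above (stakeholder consultation, risk control measures, residual-risk evaluation), the sketch below models a single risk-register entry. It is an assumption-laden example of mine: the standard prescribes requirements, not data formats, and every field name and acceptance rule here is hypothetical.

```python
# Hypothetical sketch: a risk-register entry covering individual and societal
# risks. Field names and the acceptance rule are illustrative only.
from dataclasses import dataclass, field
from enum import Enum

class RiskLevel(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class RiskEntry:
    description: str
    affected: str                    # e.g. "individuals", "organisations", "market", "society"
    fundamental_rights_impact: bool  # flags a necessity/proportionality assessment
    stakeholders_consulted: list[str] = field(default_factory=list)
    controls: list[str] = field(default_factory=list)
    residual_level: RiskLevel = RiskLevel.HIGH

    def acceptable(self) -> bool:
        # Residual risk is evaluated after controls are implemented and
        # verified; interference with a qualified fundamental right would
        # additionally require a documented necessity-and-proportionality
        # test, which is not modelled here.
        return (self.residual_level == RiskLevel.LOW
                and bool(self.stakeholders_consulted))

risk = RiskEntry(
    description="Recommender amplifies disinformation",
    affected="society",
    fundamental_rights_impact=True,
    stakeholders_consulted=["civil society organisation (proxy)"],
    controls=["content provenance checks", "diversity-aware ranking"],
    residual_level=RiskLevel.LOW,
)
print(risk.acceptable())  # True
```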