Managing the safety, health and fundamental rights risks of AI systems
AI systems can have positive impacts, but they can also pose risks to safety, health and fundamental rights.
Article 9 of the AI Act requires that high-risk AI systems be subject to a risk management system.
The harmonized standard EN AI System Risk Management specifies requirements for risk management of AI systems. It provides clear, actionable guidance on how risks can be addressed and mitigated throughout the entire lifecycle of an AI system. It applies to risk management for a broad range of products and services that use AI technology, with explicit consideration of vulnerable people. The risks covered include both risks to health and safety and risks to fundamental rights arising from AI systems, with impacts on individuals, organisations, markets and society.
A key task in managing risks is defining the acceptable residual risk. For safety and health risks, many established methods exist to define acceptable residual risk. However, there is a lack of methods for defining acceptable residual risks to fundamental rights. For example, when an AI system is used to decide whether a person can enrol in a certain education program, wrongly rejecting a student might infringe their right to education. The infringement of a fundamental right typically cannot be offset by whatever benefits the AI system might bring.
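To make the contrast concrete, a conventional severity-likelihood matrix works for safety and health risks, but a fundamental-rights criterion behaves differently: a non-negligible likelihood of infringement is unacceptable regardless of expected benefit. The following is a minimal sketch of that asymmetry; all category names and thresholds are illustrative assumptions, not taken from the AI Act or the EN standard.

```python
# Illustrative sketch: safety risks trade off severity against likelihood,
# while fundamental-rights risks act as a hard constraint that expected
# benefits cannot offset. All thresholds here are assumed for illustration.

SAFETY_MATRIX = {
    # (severity, likelihood) -> is the residual risk acceptable?
    ("minor", "rare"): True,
    ("minor", "frequent"): True,
    ("serious", "rare"): True,
    ("serious", "frequent"): False,
    ("critical", "rare"): False,
    ("critical", "frequent"): False,
}

def safety_risk_acceptable(severity: str, likelihood: str) -> bool:
    """Classic risk matrix: residual risk may be accepted if low enough."""
    return SAFETY_MATRIX[(severity, likelihood)]

def rights_risk_acceptable(infringement_likelihood: float,
                           expected_benefit: float) -> bool:
    """Fundamental rights: no trade-off against benefits.
    Only a negligible likelihood of infringement is acceptable,
    whatever the expected benefit (hence the ignored parameter)."""
    NEGLIGIBLE = 1e-4  # assumed cut-off; would need normative justification
    return infringement_likelihood < NEGLIGIBLE

# A large expected benefit does not change the rights verdict:
print(safety_risk_acceptable("serious", "rare"))                  # True
print(rights_risk_acceptable(0.05, expected_benefit=1_000_000))   # False
```

The point of the sketch is structural: the second function deliberately ignores `expected_benefit`, mirroring the claim that rights infringements cannot be compensated by benefits elsewhere.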
Could you suggest methods to define acceptable residual risks to fundamental rights?
Defining acceptable residual risks to fundamental rights is complex, as it often involves subjective values and societal norms. The IETF draft "Research Challenges in Coupling Artificial Intelligence and Network Management" explores the challenges of integrating AI into network management and highlights unresolved problems that may benefit from novel AI-driven approaches. Although the draft focuses on technical aspects of AI in network environments, its insights into tackling hard open problems could inform broader risk management discussions, including those concerning fundamental rights.
To address the specific issue of residual risks to fundamental rights, interdisciplinary approaches combining technical, legal and ethical perspectives are essential. These could include risk assessment frameworks that incorporate ethical impact evaluations alongside traditional risk management methods. Transparency and explainability are also critical: AI systems must provide clear, understandable explanations for their decisions to enable oversight and accountability. In addition, involving diverse stakeholders, including those directly affected by AI decisions, can help ensure that the definition of acceptable residual risks aligns with societal values and priorities.
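One way such a combined framework could operate is to run stakeholder-based ethical impact assessments alongside a conventional technical risk score, with any flagged rights infringement escalating to human review rather than being averaged into the score. The sketch below illustrates this idea; the data structures, names and thresholds are hypothetical assumptions, not part of any standard.

```python
# Illustrative sketch of an ethical impact screen used alongside a
# traditional risk score: each potentially affected right is assessed per
# stakeholder group, and a single flagged infringement forces escalation
# instead of being traded against a low technical score. All names and
# thresholds are assumptions for illustration.

from dataclasses import dataclass

@dataclass
class Assessment:
    stakeholder: str           # e.g. "affected students", "legal", "ethics board"
    right: str                 # e.g. "right to education"
    infringement_flagged: bool

def review_outcome(technical_risk_score: float,
                   assessments: list[Assessment],
                   risk_threshold: float = 0.3) -> str:
    """Combine a conventional risk score with stakeholder assessments.
    Any flagged rights infringement escalates to human review, no matter
    how low the technical risk score is."""
    flagged = [a for a in assessments if a.infringement_flagged]
    if flagged:
        rights = sorted({a.right for a in flagged})
        return f"escalate: possible infringement of {', '.join(rights)}"
    if technical_risk_score > risk_threshold:
        return "mitigate: technical risk above threshold"
    return "accept: residual risk acceptable"

outcome = review_outcome(
    technical_risk_score=0.1,
    assessments=[
        Assessment("affected students", "right to education", True),
        Assessment("legal", "right to education", False),
    ],
)
print(outcome)  # escalate: possible infringement of right to education
```

The design choice worth noting is that stakeholder flags are combined with a logical OR rather than a weighted average, so no stakeholder group's concern about a fundamental right can be diluted by the others.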