Managing the risks of AI systems to safety, health and fundamental rights
AI systems can have positive impacts, but they can also pose risks to safety, health and fundamental rights.
Article 9 of the AI Act requires that high-risk AI systems be subject to a risk management system.
The harmonized standard EN AI System Risk Management specifies requirements on risk management for AI systems. It provides clear, actionable guidance on how risks can be addressed and mitigated throughout the entire lifecycle of an AI system. It applies to risk management for a broad range of products and services that use AI technology, including explicit considerations for vulnerable people. The risks covered include both risks to health and safety and risks to fundamental rights arising from AI systems, with impacts on individuals, organisations, the market and society.
A key task in managing risks is to define the acceptable residual risk. For safety and health risks, many established methods exist for defining acceptable residual risk. However, there is a lack of methods for defining acceptable residual risks to fundamental rights. For example, when an AI system is used to decide whether a person can enrol in a certain education program, wrongly rejecting a student might infringe their right to education. The infringement of a fundamental right can typically not be compensated by the potential benefits the AI system might bring.
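To make this distinction concrete, below is a minimal Python sketch of a residual-risk acceptance check that treats the two risk categories differently. Everything in it is a hypothetical illustration, not taken from the AI Act or the EN standard: the class names, the threshold values and the decision rule are all assumptions. The point it demonstrates is structural: health and safety risks can be judged with a conventional probability-times-severity criterion, whereas a fundamental-rights risk is modelled as a hard constraint whose infringement cannot be offset by benefits.

```python
from dataclasses import dataclass
from enum import Enum


class RiskType(Enum):
    HEALTH_SAFETY = "health_safety"
    FUNDAMENTAL_RIGHT = "fundamental_right"


@dataclass
class Risk:
    description: str
    risk_type: RiskType
    residual_probability: float  # estimated probability of harm after mitigation
    # Severity on a 0-1 scale; only meaningful for health/safety risks,
    # where harm can be graded and weighed against benefits.
    severity: float = 1.0


# Hypothetical acceptance thresholds (assumptions for illustration only,
# not values from the AI Act or the EN standard).
HEALTH_SAFETY_THRESHOLD = 1e-4  # acceptable probability * severity
RIGHTS_THRESHOLD = 1e-6         # near-zero tolerance for rights infringements


def residual_risk_acceptable(risk: Risk) -> bool:
    """Return True if the residual risk is deemed acceptable.

    Health/safety risks use a conventional quantitative criterion
    (probability * severity).  Fundamental-rights risks are treated as a
    constraint: an infringement is categorical, so severity grading and
    offsetting benefits do not enter the decision.
    """
    if risk.risk_type is RiskType.HEALTH_SAFETY:
        return risk.residual_probability * risk.severity <= HEALTH_SAFETY_THRESHOLD
    return risk.residual_probability <= RIGHTS_THRESHOLD


# Example: the education-enrolment system from the text.
rejection_risk = Risk(
    description="Wrongly rejecting an applicant (right to education)",
    risk_type=RiskType.FUNDAMENTAL_RIGHT,
    residual_probability=1e-3,  # 0.1% false-rejection rate after mitigation
)
print(residual_risk_acceptable(rejection_risk))  # False: further mitigation needed
```

Under this (assumed) rule, no performance benefit of the enrolment system can make the 0.1% false-rejection rate acceptable; only further reducing the probability of wrongful rejection can. What such a threshold should actually be, and who should set it, is precisely the open question.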
Could you suggest methods to define acceptable residual risks to fundamental rights?