Managing the safety, health and fundamental right risks of AI systems

  • Posted by Anita Prinzie
  • 8 months 3 weeks ago
  • 1 reply


AI systems can have positive impacts, but at the same time they can also pose risks to safety, health and fundamental rights.

Article 9 of the AI Act requires that high-risk AI systems be subject to a risk management system.

The harmonized standard EN AI System Risk Management specifies requirements on risk management for AI systems. It provides clear and actionable guidance on how risk can be addressed and mitigated throughout the entire lifecycle of an AI system. It applies to risk management for a broad range of products and services that use AI technology, including explicit considerations for vulnerable people. The risks covered include both risks to health and safety and risks to fundamental rights arising from AI systems, with impacts on individuals, organisations, the market and society.

A key task in managing risks is to define the acceptable residual risk. For safety and health risks there are many existing methods to define such acceptable residual risks. However, there is a lack of methods to define acceptable residual risks to fundamental rights. For example, when an AI system is used to decide whether or not a person can enrol in a certain education programme, wrongly rejecting a student might infringe his or her right to education. The infringement of a fundamental right typically cannot be offset by the potential benefits the AI system might bring.
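To make the asymmetry concrete, here is a minimal Python sketch of an acceptance rule in which fundamental-rights risks are treated lexicographically: benefits may raise the acceptance threshold for conventional health and safety risks, but play no role for rights infringements. All class names, scales and thresholds are purely illustrative assumptions, not taken from the AI Act or from any harmonized standard.

```python
from dataclasses import dataclass

# Illustrative sketch only: scales and thresholds are hypothetical,
# not drawn from the AI Act or any EN standard.

@dataclass
class Risk:
    description: str
    severity: int        # 1 (negligible) .. 5 (critical)
    likelihood: int      # 1 (rare) .. 5 (frequent)
    affects_fundamental_right: bool = False

def residual_risk_acceptable(risk: Risk, benefit_score: int = 0) -> bool:
    """Decide whether a residual risk can be accepted."""
    if risk.affects_fundamental_right:
        # Veto rule: a rights infringement is acceptable only if the
        # residual risk itself is negligible; benefits cannot offset it.
        return risk.severity * risk.likelihood <= 2
    # Health/safety risks: conventional severity x likelihood matrix,
    # where demonstrated benefits may raise the acceptance threshold.
    return risk.severity * risk.likelihood <= 6 + benefit_score

# The enrolment example from the post: even a large benefit score
# does not make the wrongful rejection acceptable.
wrong_rejection = Risk("student wrongly rejected from programme",
                       severity=4, likelihood=2,
                       affects_fundamental_right=True)
print(residual_risk_acceptable(wrong_rejection, benefit_score=10))  # False
```

The sketch only restates the question in executable form; the open problem remains how to justify the rights-side threshold itself, since "negligible" is exactly what current methods do not define.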

Could you suggest methods to define acceptable residual risks to fundamental rights?

  • Answered by Maria Ines Robles
  • 3 months 3 weeks ago

The challenge of defining acceptable residual risks to fundamental rights is complex, as it often involves subjective values and societal norms. The IETF draft "Research Challenges in Coupling Artificial Intelligence and Network Management" explores challenges in integrating AI into network management and highlights unresolved problems that may benefit from novel AI-driven approaches. While this draft primarily focuses on technical aspects of AI in network environments, its insights into addressing difficult problems could inform broader discussions on risk management, including those related to fundamental rights.

To address the specific issue of residual risks to fundamental rights, interdisciplinary approaches combining technical, legal, and ethical perspectives are essential. These approaches could include developing risk assessment frameworks that incorporate ethical impact evaluations alongside traditional risk management methods. Transparency and explainability are also critical, as AI systems must provide clear and understandable explanations for their decisions to enable oversight and accountability. Additionally, involving diverse stakeholders, including those directly affected by AI decisions, can help ensure that the definition of acceptable residual risks aligns with societal values and priorities.

The StandICT.eu 2026 project is funded by the European Union under grant agreement no. 101091933. The content of this website does not represent the opinion of the European Union, and the European Union is not responsible for any use that might be made of such content.


© Copyright 2024 - StandICT.eu
