Improving the trustworthiness of AI systems with the harmonized standard EN AI Trustworthiness Framework.

As AI is omnipresent and impacts everyone's life, ensuring that AI systems are trustworthy is essential.

The AI Act is a European regulation on artificial intelligence (AI) that promotes the uptake of human-centric and trustworthy AI while ensuring a high level of protection of health, safety, and fundamental rights. One way for companies to demonstrate conformity with the AI Act is to meet the underlying harmonized standards.

The EN AI Trustworthiness Framework is one of these harmonized standards.

It provides a framework for the trustworthiness of AI systems that contains terminology, concepts, high-level horizontal requirements, guidance, and a method to contextualize those to specific stakeholders, domains, or applications. The high-level horizontal requirements address foundational aspects and characteristics of the trustworthiness of AI systems.

The EN AI Trustworthiness Framework standard serves as an entry point to more in-depth harmonized standards on different aspects of trustworthiness:

  • robustness
  • accuracy
  • governance and quality of data
  • transparency and documentation
  • human oversight
  • record keeping through logging
  • cybersecurity

One of the aims is to clarify which requirements must be met by which stakeholder, and at which stage of the AI life cycle.
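As a purely illustrative sketch, such a requirement-to-stakeholder mapping could be captured in a small data structure. The life cycle stage names and the assignments below are assumptions made for illustration; they are not taken from the AI Act or the standard.

```python
# Illustrative sketch only: the stage names and stakeholder assignments
# are assumptions, not definitions from the AI Act or the standard.
REQUIREMENT_MAP = {
    "robustness":                     ("provider", "design & development"),
    "accuracy":                       ("provider", "validation & testing"),
    "governance and quality of data": ("provider", "data preparation"),
    "transparency and documentation": ("provider", "release"),
    "human oversight":                ("deployer", "operation"),
    "record keeping through logging": ("deployer", "operation"),
    "cybersecurity":                  ("provider", "entire life cycle"),
}

# Print the mapping as a simple requirements checklist.
for requirement, (stakeholder, stage) in REQUIREMENT_MAP.items():
    print(f"{requirement}: met by {stakeholder} during {stage}")
```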

A challenge is to map the stakeholders defined by the AI Act (providers, deployers, importers, distributors, product manufacturers, authorized representatives of providers, and affected persons) to the stakeholder roles commonly used in industry AI life cycle models. Furthermore, certain transparency requirements have to be enforced upstream, on the providers of AI systems, to enable human oversight by deployers downstream in the AI life cycle.
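To make the challenge concrete, here is a minimal sketch of such a mapping, assuming a hypothetical project whose requirements document names typical industry roles. All industry-side role names are assumptions; only the keys are AI Act stakeholder terms.

```python
# Illustrative sketch: the industry-side role names are hypothetical
# examples for one project; only the keys are AI Act stakeholder terms.
AI_ACT_TO_INDUSTRY_ROLES = {
    "provider":                  ["product owner", "data scientist", "ML engineer"],
    "deployer":                  ["system integrator", "operations team"],
    "importer":                  ["procurement officer"],
    "distributor":               ["reseller"],
    "product manufacturer":      ["OEM integration team"],
    "authorized representative": ["EU compliance officer"],
    "affected person":           ["end user", "data subject"],
}

# Note the upstream/downstream dependency from the text: the deployer's
# operations team can only exercise human oversight if the provider's
# roles have delivered the required transparency documentation upstream.
for stakeholder, roles in AI_ACT_TO_INDUSTRY_ROLES.items():
    print(f"{stakeholder}: {', '.join(roles)}")
```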

How would you map the AI Act stakeholders to the stakeholders you define as part of the Business Requirement Document for a project that includes an AI system?