Emilia Tantar
I focus on the development of a new standard Work Model type (Technical Specification) that facilitates the consolidation, integration, and implementation of requirements, helping organisations comply with AI laws, regulations, and standards more effectively. The objective is to guide and support organisations in meeting the multiple requirements imposed on AI-based systems by laws, regulations, and standards. The initiative will not create new requirements; rather, it will provide assistance and guidance to organisations on how to consolidate, integrate, implement, and audit the different sources of requirements.
My fellowship addresses three critical gaps in the European AI standardisation landscape.

The first gap concerns the harmonisation of documentation development: there is an urgent need for technical documentation (Annex ZA, HAS checklists) connecting the standards under development with AI Act requirements following the M/593 request. Without this work, standards risk delayed OJEU citation, creating regulatory uncertainty. I have worked on developing preliminary harmonisation documents for JT021008 (Trustworthiness), JT021039 (QMS), and JT021024 (Risk Management).

The second gap relates to cross-standard technical coherence. Because multiple AI standards are being developed simultaneously, there is potential for inconsistencies in terminology, requirements, and implementation approaches. I have created mapping documents highlighting the interconnections between standards, focusing in particular on how QMS requirements interface with the other M/593 standards, to ensure a coherent framework.

The third gap concerns alignment with the articles of the EU AI Act: the technical specifications in draft standards must align precisely with AI Act articles to support regulatory compliance. I have contributed targeted technical refinements to clauses 6.4 (transparency) and 6.5 (human oversight) of the Trustworthiness Framework to strengthen alignment with Articles 13 and 14 of the AI Act.
I addressed priorities and gaps in three specific AI areas, including:
The main priorities of my fellowship are to support the development of two European standards for AI systems, on Risk Management and Cybersecurity, which will enable organisations to manage risks and address cybersecurity concerns in alignment with the AI Act.
The s-X-AIPI project endeavours to research, develop, test, and validate a bespoke suite of trustworthy self-X AI technologies tailored for the process industries. This initiative aims to bridge the gap between AI capabilities and traditional automation processes, ensuring that AI tools are both accessible and effective across various industrial applications.
With the support of this fellowship, I tackle specific bias detection and mitigation requirements, with accompanying illustrative examples, within the CEN/CLC/JTC 21 WG3 standard "Concepts, measures and requirements for managing bias in AI systems", ensuring they are aligned and harmonised with the relevant provisions of the EU AI Act.
This fellowship aims to develop technical specifications and standards for efficiently managing terminology work, ensuring seamless information exchange, minimising misunderstandings, and enhancing both human-human and human-AI interactions.
The work I am leading in European standardisation through CEN-CENELEC JTC 21 WG 2 directly addresses the main operational pillars of the standardisation request received from the European Commission: to provide technical specifications through standards (candidates for harmonisation) in support of the EU AI Act.
In this fellowship, the main priority is helping organisations drive innovation and technological transformation by using the Centre of Excellence (CoE) as the best management mechanism in the context of a shortage of professional profiles with expertise in Artificial Intelligence and other disruptive technologies.
My work to date has sought to promote trustworthiness through fundamental-rights protections in European harmonised technical standards on AI, in particular within JTC 21.
My fellowship focuses on researching the feasibility of developing an international (e.g. ISO) standard for deploying Artificial Intelligence (AI) in climate action, culminating in a Technical Report following consultation with the chairs of relevant ISO technical committees.