Luca Nannini
My fellowship addresses three critical gaps in the European AI standardization landscape.

The first gap concerns the harmonisation of documentation development: there is an urgent need for technical documentation (Annex ZA, HAS checklists) connecting the standards under development with AI Act requirements following the M/593 standardisation request. Without this work, standards risk delayed citation in the OJEU, creating regulatory uncertainty. I have developed preliminary harmonisation documents for JT021008 (Trustworthiness), JT021039 (QMS), and JT021024 (Risk Management).

The second gap concerns cross-standard technical coherence. Because multiple AI standards are being developed simultaneously, there is potential for inconsistencies in terminology, requirements, and implementation approaches. I have created mapping documents that highlight interconnections between standards, focusing in particular on how QMS requirements interface with the other M/593 standards, to ensure a coherent framework.

The third gap concerns alignment with the EU AI Act: technical specifications in draft standards must align precisely with the Act's articles to support regulatory compliance. I have contributed targeted technical refinements to clauses 6.4 (transparency) and 6.5 (human oversight) of the Trustworthiness Framework to strengthen alignment with Articles 13 and 14 of the AI Act.
AI Accuracy and Robustness Standards: As Editor of the EN AI Trustworthiness Framework Part II, I directly support European citizens' right to accurate and robust AI systems. The standard establishes technical requirements ensuring that AI systems deployed across the EU meet rigorous accuracy criteria and maintain performance across operational conditions, protecting citizens from unreliable algorithmic decision-making in high-risk contexts.
SME Innovation Ecosystem: Editorial leadership through N1106 coordination enables European SMEs to compete effectively in AI markets by providing clear compliance pathways in place of costly regulatory uncertainty. This supports innovation while ensuring responsible AI deployment that protects European citizens.
European Leadership in Global AI Governance: The editorial role positions Europe's values-based approach to AI accuracy and robustness for global influence. The framework embeds principles of reliability, trustworthiness, and accountability in technical specifications that shape international AI standardization discussions.
Consumer Protection Framework: Cross-WG coordination through N1106 ensures that AI standards address consumer concerns about system reliability, performance consistency, and safety while remaining technically implementable. This balance protects European consumers while supporting technological advancement and maintaining Europe's competitive position in global AI markets.