Bridging Semantic Interoperability and Trustworthy AI: Towards a Common Framework for High-Risk AI Medical Devices under the EU AI Act and ISO Standards

A critical topic for future discussion is the convergence between clinical information interoperability standards—such as ISO 13606, ISO 13940, and HL7 FHIR—and emerging standards for artificial intelligence under development within ISO/TC 215 Task Force 5 and ISO/IEC JTC 1/SC 42. This alignment is essential for operationalising the EU Artificial Intelligence Act in synergy with the Medical Device Regulation (MDR), particularly for high-risk AI systems deployed as medical technologies. The harmonisation of health information models with AI-related terminologies, ethical frameworks, and interoperability requirements offers a path towards technically robust and regulation-ready AI systems. By encoding patient context, consent, clinical reasoning, and system constraints using established semantic models, these standards can provide the traceability, accountability, and transparency demanded by European regulatory frameworks. Beyond compliance, such convergence would foster greater trust and adoption of AI in clinical settings, reinforcing the safe and meaningful integration of AI into the fabric of health systems.
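
As an illustration of that encoding idea, here is a minimal, hypothetical sketch (the resource contents are invented and do not follow any normative profile) of how patient consent and the provenance of an AI-generated result might be represented as HL7 FHIR resources:

```python
# Hypothetical sketch: representing patient consent and AI-system provenance
# as HL7 FHIR R4 resources (plain dicts serialisable to JSON).
# Contents are illustrative only, not a normative profile.
import json

consent = {
    "resourceType": "Consent",
    "status": "active",
    "patient": {"reference": "Patient/example"},
    "provision": {"type": "permit"},  # the patient permits processing by the AI system
}

provenance = {
    "resourceType": "Provenance",
    "target": [{"reference": "RiskAssessment/ai-output-001"}],  # the AI-generated result
    "recorded": "2025-01-01T10:00:00Z",
    "agent": [{"who": {"display": "Example high-risk AI medical device, v1.2"}}],
}

# Traceability: the consent and the provenance record travel with the
# clinical data in a standard, machine-readable format.
print(json.dumps([consent, provenance], indent=2))
```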

Contribute robustness and accuracy requirements to the CEN/CLC JTC 21 prEN AI Trustworthiness Framework

Are you an expert in AI system accuracy and/or robustness? Then join CEN/CENELEC JTC 21, WG 4 Foundational and Societal Aspects, or WG 3 Engineering Aspects!

The prEN AI Trustworthiness Framework is one of the standards developed by CEN/CENELEC (JTC 21, WG 4 Foundational and Societal Aspects) to support the standardization request from the European Commission (M/593) and to enable companies with high-risk AI systems under Annex III to obtain the presumption of conformity. It addresses the following requests in particular:

  • SR 3: Record-keeping through logging capabilities
  • SR 4: Transparency
  • SR 5: Human oversight
  • SR 6: Accuracy
  • SR 7: Robustness

To meet the European Commission's standardization request for AI system accuracy and robustness, we are urgently looking for experts!

Interested? Contact Enrico Panai or me.

Provide comments on WD prEN AI Trustworthiness Framework

I am proud to have contributed, as a StandICT call 4 fellow, to the working draft of the prEN AI Trustworthiness Framework standard, produced by CEN/CENELEC JTC 21, WG 4 Foundational and Societal Aspects, Task Group 3 (WI=JT021008).

The working draft was circulated to experts of the national standardization bodies on November 12th, 2024 (WD prEN AI Trustworthiness Framework, Doc. N 830).

Please provide your comments as a national expert on the WD prEN AI Trustworthiness Framework standard by December 10th, 2024.

This standard provides high-level horizontal requirements on trustworthiness for AI systems. It relates to other harmonized standards that meet the 10 standardization requests of the European Commission to support the presumption of conformity with the AI Act.

It serves as an entry point to related standards:

- risk management and conformity assessment: prEN AI Systems Risk Management (WI=JT021024), prEN Conformity Assessment (WI=JT021038)

- quality management: prEN ISO/IEC 25059 rev (WI=JT021027), prEN ISO/IEC 42001 (WI=JT021011), prEN XXX Artificial intelligence - Quality management system for EU AI Act regulatory purposes (WI=JT021039)

and other new standards providing more detailed requirements for various aspects of trustworthiness:

- accuracy (prEN ISO/IEC 23282 (WI=JT021012), prEN XXX (WI=JT021025))

- data governance and quality for AI (prEN ISO/IEC 5259 1-4, prEN XXX (WI=JT021037), prEN XXX (WI=JT021036))

- logging (prEN ISO/IEC 24970 (WI=JT021021))

- cybersecurity (prEN XXX (WI=JT021029))


As a StandICT call 5 fellow, I will make sure your comments are duly processed.

Managing the risks of AI systems to safety, health and fundamental rights

AI systems can have positive impacts, but at the same time they can also pose risks to safety, health and fundamental rights.

The AI Act, Article 9, requires that high-risk AI systems be subject to a risk management system.

The harmonized standard EN AI System Risk Management specifies requirements on risk management for AI systems. It provides clear and actionable guidance on how risk can be addressed and mitigated throughout the entire lifecycle of an AI system. It applies to risk management for a broad range of products and services that use AI technology, including explicit considerations for vulnerable people. The risks covered include both risks to health and safety and risks to fundamental rights that can arise from AI systems, with impacts on individuals, organisations, the market and society.

A key task in managing risks is defining the acceptable residual risk. For safety and health risks there are many existing methods for doing so. However, there is a lack of methods to define acceptable residual risks to fundamental rights. For example, when an AI system is used to decide whether or not a person can enrol in a certain education programme, wrongly rejecting a student might infringe his or her right to education. The infringement of a fundamental right can typically not be compensated by the potential benefits the AI system might bring.
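
To make the asymmetry concrete, here is a minimal sketch (assumed logic, not a method from the standard) of a risk-acceptance rule in which health-and-safety risks are compared against a numeric threshold while fundamental-rights risks cannot be offset by expected benefits:

```python
# Illustrative sketch (assumed logic, not taken from the standard): a
# risk-acceptance rule where health-and-safety risks may be reduced to an
# acceptable residual level, but fundamental-rights risks cannot be traded
# off against the system's expected benefits.
from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    category: str          # "health_safety" or "fundamental_rights"
    residual_level: float  # estimated residual risk after mitigation, 0..1

def is_acceptable(risk: Risk, hs_threshold: float = 0.1) -> bool:
    if risk.category == "health_safety":
        # Established practice: compare the residual risk against a defined threshold.
        return risk.residual_level <= hs_threshold
    # Fundamental rights: benefits do not compensate an infringement, so any
    # non-negligible residual risk of infringement is treated as unacceptable.
    return risk.residual_level == 0.0

print(is_acceptable(Risk("wrongful rejection of a student", "fundamental_rights", 0.02)))  # False
```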

Could you suggest methods to define acceptable residual risks to fundamental rights?

A new standard for AI-based Network Applications in beyond 5G

In today's rapidly evolving digital landscape, the integration of Artificial Intelligence (AI) into 5G and beyond networks has reached a critical juncture. While the potential of such integration offers great opportunities for innovation, efficiency, and service enhancement, it is not without its challenges. The primary obstacle lies in the complexity of the underlying network infrastructure, compounded by the lack of standardized guidelines for AI integration. This has resulted in fragmented solutions that hinder interoperability, scalability, and security, ultimately slowing down the deployment of next-generation network applications and limiting their potential impact across various sectors.

The necessity for a standardized approach cannot be overstated. The absence of a unified framework for AI integration in 5G and beyond networks poses a significant barrier to progress. A standard is needed to simplify the complexity of the network infrastructure, ensure interoperability across different systems and devices, accelerate service creation and deployment timelines, and optimize the utilization of network resources. Also, with the exponential increase in digital threats, a standard is critical for enhancing the security and resilience of network applications. It is also important because it would facilitate cost-effective service deployments, unlock innovation potential, and ensure that technological advancements are accessible and beneficial to all stakeholders.

Recognizing the pressing need for a solution, I proposed the development of the new IEEE P1948 Standard for AI-based Network Applications in 5G and beyond. This initiative is aimed at establishing harmonized guidelines and protocols that would address the current gaps in AI integration within network infrastructures. My work involved extensive research to identify the core areas of focus, collaboration with industry experts to gather insights and feedback, and leading discussions within the COM/AccessCore-SC/NAB5G Working Group to draft the initial standard framework.

The PAR (Project Authorization Request) for the development of the standard will be discussed at the next New Standards Committee (NesCom) meeting in May 2024, with the standard expected to be completed and to enter the balloting process in early 2025.

Cybersecurity for AI Systems

According to the AI Standardization Request from the European Commission to CEN/CENELEC, European standards or standardisation deliverables shall provide suitable organisational and technical solutions to ensure that AI systems are resilient against attempts to alter their use, behaviour, or performance or compromise their security properties, by malicious third parties exploiting vulnerabilities of AI systems. Organisational and technical solutions shall include, where appropriate, measures to prevent and control cyberattacks trying to manipulate AI-specific assets, such as training data sets (e.g. data poisoning) or trained models (e.g. adversarial examples), or trying to exploit vulnerabilities in an AI system's digital assets or in the underlying ICT infrastructure. These solutions shall be appropriate to the relevant circumstances and risks. Furthermore, the requested European standards or standardisation deliverables shall take due account of the essential requirements for products with digital elements as listed in Annex I of the EC proposed Regulation on horizontal cybersecurity requirements for products with digital elements (CRA proposal of 15 September 2022).
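
As a hedged illustration of one of the attack classes named above, the following NumPy sketch crafts an FGSM-style adversarial example against a toy logistic-regression model (the weights and input are made up for demonstration):

```python
# Illustrative sketch of an "adversarial example" attack: a fast-gradient-sign
# perturbation against a toy logistic-regression classifier. Model parameters
# and the input are invented for demonstration purposes.
import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(size=4), 0.1   # assumed trained model parameters
x, y = rng.normal(size=4), 1.0   # an input with true label 1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gradient of the cross-entropy loss with respect to the input x:
# dL/dx = (p - y) * w for logistic regression.
grad_x = (sigmoid(w @ x + b) - y) * w

# FGSM step: a small perturbation in the direction that increases the loss most.
eps = 0.25
x_adv = x + eps * np.sign(grad_x)

print("clean score:", sigmoid(w @ x + b))
print("adversarial score:", sigmoid(w @ x_adv + b))
```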

Discussion: identify existing standards and analyse the gaps (new standards) that still need to be developed to respond to the AI Standardization Request from the European Commission.

Standardising Urban Air Mobility

Advanced Air Mobility has become a topic for the EU since the new legislation, Regulation (EU) 2021/664, came into force across member states on 26th January 2023.

Basic terminology:

Urban Air Mobility and Advanced Air Mobility

What is Urban Air Mobility?

Urban Air Mobility (UAM) envisions a safe and efficient aviation transportation system that will use highly automated aircraft to operate and transport passengers or cargo at lower altitudes within urban and suburban areas.

UAM will be composed of an ecosystem that considers the evolution and safety of the aircraft, the framework for operation, access to airspace, infrastructure development, and community engagement.

What is Advanced Air Mobility?

Advanced Air Mobility (AAM) builds upon the UAM concept by incorporating use cases not specific to operations in urban environments, such as:

  • Commercial Inter-city (Longer Range/Thin Haul)
  • Cargo Delivery
  • Public Services
  • Private / Recreational Vehicles

Where Will UAM Aircraft Land?

The initial UAM ecosystem will use existing helicopter infrastructure such as routes, helipads, and Air Traffic Control (ATC) services, where practicable given the aircraft characteristics. Looking toward the future, the CAA is working to identify infrastructure design needs for these aircraft. CAA expects to develop a new vertiport standard in the coming years.

There is a need to standardise UTM/ATM system interoperability to assure common and broad adoption of Unmanned Air Mobility Services.

Human-in-the-loop and OETP

[Images: Human-in-the-loop (HITL), front and back]

Human-in-the-loop (HITL) is a design pattern in AI that leverages both human and machine intelligence to create machine learning models and to bring meaningful automation scenarios into the real world. With this approach, AI systems are designed to augment or enhance human capacity, serving as tools to be exercised through human interaction.

The Open Ethics Transparency Protocol offers the following model for HITL disclosure, allowing each system's HITL properties to be accessed using the oetp:// URI scheme.

HITL Disclosure
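
A speculative sketch of what resolving such a disclosure could look like; the field names, the registry, and the resolution logic below are illustrative assumptions, and the authoritative model is defined by the OETP specification:

```python
# Speculative sketch: resolving an oetp:// URI to a system's HITL disclosure.
# Disclosure fields and the in-memory registry are invented for illustration.
from urllib.parse import urlparse

hitl_registry = {
    "example.ai/loan-scoring": {
        "hitl_pattern": "human-in-the-loop",  # vs. human-on-the-loop / out-of-the-loop
        "oversight_stage": "pre-decision",    # where the human intervenes
        "override_possible": True,
    }
}

def resolve_oetp(uri: str) -> dict:
    """Resolve an oetp:// URI to the referenced system's HITL disclosure properties."""
    parsed = urlparse(uri)
    assert parsed.scheme == "oetp", "expected the oetp:// URI scheme"
    return hitl_registry[parsed.netloc + parsed.path]

print(resolve_oetp("oetp://example.ai/loan-scoring"))
```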

---

"Big Tech" is an important driver of innovation, however, the consequent concentration of power creates Big Risks for the economy, ethical use of technology, and basic human rights (we consider privacy as one of them).

Decentralization of SBOM (Software Bill of Materials) and data-processing disclosures was described earlier as a key requirement of the Open Ethics Transparency Protocol (OETP).

Fulfilling this requirement allows disclosures to be formed and validated by multiple parties and avoids a harmful concentration of power. To allow efficient decentralization of, and access to, the disclosures of autonomous systems, such as AI systems powered by trained machine learning models, the vendor (or developer) MUST send requests to a Disclosure Identity Provider, which in turn processes the structured data of the disclosure with a cryptographic signature generator and then stores the integrity hash in persistent storage, for example using a Federated Identity Provider. This process was described in the Open Ethics Transparency Protocol I-D document; however, the exact way to access disclosures was not described there. The specification for the RI scheme described here closes this gap.
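
A minimal sketch of the hashing step in that flow, under the assumption that a disclosure is canonicalised as JSON before its integrity hash is computed (the real OETP formats are defined in the I-D):

```python
# Minimal sketch of the integrity-hash step described above. The disclosure
# fields are invented; real OETP disclosures and Disclosure Identity Providers
# define their own formats and signing procedures.
import hashlib
import json

disclosure = {
    "vendor": "example-vendor",
    "system": "example-ai-system",
    "data_processing": ["training", "inference"],
}

# Canonical serialisation so that every party computes the same hash.
canonical = json.dumps(disclosure, sort_keys=True, separators=(",", ":")).encode()

# Integrity hash, to be signed via the Disclosure Identity Provider and
# stored in persistent storage (e.g. via a Federated Identity Provider).
integrity_hash = hashlib.sha256(canonical).hexdigest()
print(integrity_hash)
```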

My recent work builds on top of our previous contribution to the IETF and aims to simplify access to AI disclosures and, more generally, to disclosures of autonomous systems.

https://github.com/OpenEthicsAI/OETP-RI-Scheme

Invitation to collaboration in applying AI to smart energy

Dear Colleagues,

If it is of interest to anyone dealing with AI, especially AI applied to smart energy, I am looking for collaborators to work on standards for applying AI to smart energy, and particularly to smart PV systems.

If this theme coincides with your interests or professional activities (and especially if you are engaged in related themes of smart energy and smart grids standardisation in any capacity within SDO/SSO activities), please feel invited to join the EITCI-hosted Smart Energy Standards Group at https://eitci.org/sesg (possibly also in an observer capacity). For ease of communication there is also a dedicated LinkedIn group at https://www.linkedin.com/groups/12498639/

The EITCI SESG group supports international SDOs in the development of standards for AI-assisted PV, as well as for smart energy in general. It brings together academics and practitioners in smart grids, PV and AI to work jointly on technical standards at the overlap of these domains. The initiative aims to support the EU clean energy transition policies with smart energy standards development for digitization and artificial intelligence applications.

I'm looking forward to working together in the future.

Best regards,
Agnieszka

A book, a question, and an answer.

MPAI has published a book entitled: “Towards Pervasive and Trustworthy Artificial Intelligence: How standards can put a great technology at the service of humankind”.

With the printing industry sparing no effort in publishing books on Artificial Intelligence (AI), why should there be another that, in its title and subtitle, combines the overused words "AI" and "trustworthy" with the alien words "standards" and "pervasive"?

The answer is that the book describes a solution that covers all the elements of the title: it effectively combines "AI" and "trustworthy", and it also makes AI pervasive. How? By developing standards for AI-based data coding.

Many industries need standards to run their business and used to have high respect for them. Users benefit from standards: MP3 put users in control of the content they wanted to enjoy, and the television – and now the video – experiences have little to do with how users used to approach audio-visual content 30 years ago.

At that time, the media industry was loath to invest in open standards. The successful MPEG standards development model, however, changed its attitude. Similarly, the AI industry has been slow in developing AI-based data coding standards, making proprietary solutions its preferred route.

MPAI has shown that it can take different types of data, encode them using AI, and develop standards that make the technology, and the benefits it brings, pervasive. At the same time, MPAI standards can take a technology that may well be untrusted and make it trustworthy.

The MPAI book describes how MPAI develops standards and how they can be used, how standards can make AI pervasive, and how MPAI gives users the means to make informed decisions when choosing an implementation with the required level of trustworthiness.

This is the time to join MPAI's unique adventure. MPAI is open to those who want to make its vision real.

More info on MPAI at: https://mpai.community/

MPAI book available at: https://www.amazon.com/dp/B09NS4T6WN/

Why does algorithmic transparency need a protocol?

As (algorithmic) operations become more complex, we realize that we can rely less and less on the methods of the past, where a Privacy Policy or Terms and Conditions served (did they?) to build trust in a business. Moreover, those documents rarely helped any user understand what's going on with their data under the hood. "I agree, I understand, I accept": the big lies we told ourselves when clicking a website's cookie notice or ticking the checkbox of yet another digital platform.

In the age of artificial intelligence, the privacy and cybersecurity risks remain, but now we're observing the risk profile of every service expand to include bias and discrimination issues. What should we do? A typical answer is top-down regulation brought by national and cross-national entities. Countries and trade blocs are now competing to produce AI ethics guidelines and standards. Good. But what if you're building an international business? As a business, you have to comply. Tons of digital paperwork (thanks, at least it's digital now!), and you can get settled in one single economic space. Once you're there, there's a chance you can move to another one by repeating the costly bureaucratic procedure. Unfortunately, this is not scalable. We call this the "cost of compliance", and these costs are high.

There is a possible way of avoiding the compliance scalability issue: disclosing the modus operandi once and matching it with the existing requirements of each market. To make this possible, we need a universally accepted concept of product disclosure.
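
A toy sketch of the "disclose once, match per market" idea; the disclosure fields and the requirement sets are invented for illustration only:

```python
# Toy sketch (invented fields and requirements): one product disclosure is
# checked against each market's requirement set instead of repeating a full
# compliance procedure per market.
disclosure = {"bias_audit": True, "data_retention_days": 30, "human_oversight": True}

market_requirements = {
    "Market A": {"bias_audit": True, "human_oversight": True},
    "Market B": {"bias_audit": True},
}

def complies(disclosure: dict, requirements: dict) -> bool:
    # The single disclosure must satisfy every key the market requires.
    return all(disclosure.get(key) == value for key, value in requirements.items())

for market, reqs in market_requirements.items():
    print(market, "->", complies(disclosure, reqs))
```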

The complete article on Medium is available to learn more about disclosure and the transparency protocol to be used in conjunction with it.

https://lukianets.medium.com/why-algorithmic-transparency-needs-a-protocol-2b6d5098572f

Open Ethics Transparency Protocol

The Open Ethics Transparency Protocol (OETP) describes the creation and exchange of voluntary ethics Disclosures for IT products. It is proposed as a solution to increase transparency in how IT products are built and deployed. The scope of the Protocol covers Disclosures for systems such as Software as a Service (SaaS) applications, software applications, software components, Application Programming Interfaces (APIs), Automated Decision-Making (ADM) systems, and systems using Artificial Intelligence (AI). The IETF I-D document provides details on how disclosures for data collection and data processing practices are formed, stored, validated, and exchanged in a standardized and open format.
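
By way of illustration, a minimal sketch of the "validate" step in that lifecycle; the required section names below follow the prose above and are assumptions, since the exact fields are defined in the I-D:

```python
# Hedged sketch of disclosure validation: the required sections are assumed
# from the prose above ("data collection and data processing"); the I-D
# defines the authoritative field names.
REQUIRED_SECTIONS = {"data_collection", "data_processing"}

def validate_disclosure(disclosure: dict) -> list:
    """Return a list of problems; an empty list means the disclosure passes."""
    problems = [f"missing section: {s}" for s in sorted(REQUIRED_SECTIONS - disclosure.keys())]
    if disclosure.get("type") != "disclosure":
        problems.append("document is not marked as a disclosure")
    return problems

doc = {"type": "disclosure", "data_collection": {}, "data_processing": {}}
print(validate_disclosure(doc) or "valid")
```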

OETP provides facilities for:

  • Informed consumer choices: end-users are able to make informed choices based on their own ethical preferences and the product disclosure.
  • Industrial-scale monitoring: discovery of best and worst practices within market verticals, technology stacks, and product value offerings.
  • Legally-agnostic guidelines: suggestions for developers and product owners, formulated in factual language, which are legally agnostic and could easily be transformed into product requirements and safeguards.
  • Iterative improvement: digital products, specifically those powered by artificial intelligence, could receive near real-time feedback on how their performance and ethical posture could be improved to cover security, privacy, diversity, fairness, power balance, non-discrimination, and other requirements.
  • Labeling and certification: mapping to existing and future regulatory initiatives and standards.

Please feel free to join the discussion here and in the GitHub repository.

IETF datatracker link: https://datatracker.ietf.org/doc/draft-lukianets-open-ethics-transparency-protocol/
