Artificial Intelligence

IETF AI Preferences Working Group


The IETF has a working group (WG) called AI Preferences (aipref), whose charter reads:

"The AI Preferences Working Group will standardize building blocks that allow for the expression of preferences about how content is collected and processed for Artificial Intelligence (AI) model development, deployment, and use.

There are many ways that preferences regarding content might be expressed. The Working Group will focus on attaching preferences to content either by including preferences in content metadata or by signaling preferences using the protocol that delivers the content.

The Working Group will deliver:

A standards-track document covering vocabulary for expressing AI-related preferences, independent of how those preferences are associated with content.

Standards-track document(s) describing means of attaching or associating those preferences with content in IETF-defined protocols and formats, including but not limited to using Well-Known URIs (RFC 8615), such as the Robots Exclusion Protocol (RFC 9309), and HTTP response header fields.

A standard method for reconciling multiple expressions of preferences."

Link to the WG:

https://datatracker.ietf.org/group/aipref/about/

Link to documents in the WG:

https://datatracker.ietf.org/group/aipref/documents/
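
To make the building blocks concrete, here is a minimal Python sketch of what consuming such a preference signal from an HTTP response header could look like. The header field name ("Content-Usage") and the token syntax below are illustrative assumptions only; the WG's standards-track documents define the actual vocabulary and attachment mechanisms.

    # Sketch: read a hypothetical AI-preference header from an HTTP response.
    # "Content-Usage" and the token grammar are assumptions for illustration;
    # the aipref WG drafts define the real field name and vocabulary.
    import urllib.request

    def read_ai_preferences(url: str) -> dict:
        """Return preference tokens parsed from the response header, if any."""
        with urllib.request.urlopen(url) as resp:
            raw = resp.headers.get("Content-Usage")  # hypothetical field name
        if raw is None:
            return {}  # no preference expressed for this resource
        prefs = {}
        for item in raw.split(","):
            token, _, value = item.strip().partition("=")
            prefs[token] = value or "y"
        return prefs

    if __name__ == "__main__":
        # e.g. {"train-ai": "n"} if the server disallowed AI training
        print(read_ai_preferences("https://example.com/"))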

Engineering Reliable Autonomous Systems


The 1st IEEE International Conference on Engineering Reliable Autonomous Systems (ERAS) was held on May 29-30, 2025 in Worcester, Massachusetts, USA. ERAS offered a space for all stakeholders in autonomous-system reliability to come together to discuss the key challenges and present progress made towards solving them. Alongside the presentations of the original, selected, peer-reviewed papers, three invited keynote talks were given, one each from academia, industry, and government. The conference also hosted a tutorial delivered by IBM and a workshop organized by MIT.

The scope of the conference covers all aspects of engineering reliable autonomous systems, ranging from systems and software engineering to their evaluation and verification. The proceedings will soon be available at:
https://ieeexplore.ieee.org

 

Contribute robustness and accuracy requirements to the CEN/CLC JTC 21 prEN AI Trustworthiness Framework


Are you an expert in AI system accuracy and/or robustness? Then join CEN/CENELEC JTC 21, WG 4 Foundational and Societal Aspects or WG 3 Engineering Aspects!

The prEN AI Trustworthiness Framework is one of the standards being developed by CEN/CENELEC (JTC 21, WG 4 Foundational and Societal Aspects) to support the following standardization requests from the European Commission (M/593), enabling companies with high-risk AI systems under Annex III to obtain the presumption of conformity:

  • SR 3: Record-keeping through logging capabilities
  • SR 4: Transparency
  • SR 5: Human oversight
  • SR 6: Accuracy
  • SR 7: Robustness

To meet the European Commission's standardization request for AI system accuracy and robustness, we are urgently looking for experts!

Interested? Contact Enrico Panai or me.


Artificial Intelligence for Network Operations


The IETF draft titled "Artificial Intelligence (AI) for Network Operations" (https://datatracker.ietf.org/doc/draft-king-rokui-ainetops-usecases/) explores how AI and machine learning (ML) can be integrated into network operations—a concept referred to as AINetOps. The primary aim is to automate and optimize network management tasks, thereby improving efficiency, reliability, and scalability. This approach is relevant to both single-layer (IP or Optical) and multi-layer (IP/Optical) networks and is intended to tackle the growing complexity of modern network infrastructures.

AINetOps includes a broad set of use cases such as reactive troubleshooting, proactive assurance, closed-loop optimization, misconfiguration detection, and virtual operator support. By using AI and ML, networks can evolve from static, manually operated systems into dynamic environments capable of real-time adaptation and autonomous decision-making. This transformation enables predictive analytics, helping operators to detect and resolve issues before they affect service quality.

The draft highlights the need for existing IETF protocols and architectures to evolve in support of AINetOps. It outlines the architectural, procedural, and protocol-level changes required to implement AI-powered operations effectively. These include developing standardized interfaces and APIs, integrating AI engines with network components, and creating data models that accurately represent the network’s state and configuration.
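
As a rough illustration of the closed-loop idea (not an API defined in the draft), the sketch below watches a synthetic telemetry stream, flags statistical anomalies as a stand-in for a trained ML detector, and marks the point where a controller would act. The telemetry values and the remediation hook are hypothetical.

    # Sketch of the AINetOps closed loop: watch telemetry, flag anomalies,
    # and mark where a controller would act. The data and remediation hook
    # are hypothetical; the draft does not define this API.
    from statistics import mean, stdev

    def detect_anomalies(samples: list[float], window: int = 20, k: float = 3.0):
        """Yield indices deviating more than k sigma from the trailing window,
        a simple stand-in for a trained ML detector."""
        for i in range(window, len(samples)):
            past = samples[i - window:i]
            mu, sigma = mean(past), stdev(past)
            if sigma and abs(samples[i] - mu) > k * sigma:
                yield i

    latency_ms = [10.0, 10.2, 9.9, 10.1] * 10 + [55.0, 10.0]  # synthetic telemetry
    for idx in detect_anomalies(latency_ms):
        print(f"anomaly at sample {idx}: {latency_ms[idx]} ms")
        # a closed-loop controller would now e.g. reroute traffic or
        # roll back a recent configuration change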

Provide comments on WD prEN AI Trustworthiness Framework


I am proud to have contributed, as a StandICT fellow (call 4), to the working draft of the prEN AI Trustworthiness Framework standard, produced by CEN/CENELEC JTC 21, WG 4 Foundational and Societal Aspects, Task Group 3 (WI=JT021008).

The working draft, WD prEN AI Trustworthiness Framework (Doc. N 830), was circulated to experts of the national standardization bodies on November 12th, 2024.

Please provide your comments as a national expert on the WD prEN AI Trustworthiness Framework standard by December 10th, 2024.

This standard provides high-level horizontal requirements on trustworthiness for AI systems. It relates to other harmonized standards that meet the 10 standardization requests of the European Commission to support the presumption of conformity with the AI Act.

It serves as an entry point to related standards:

- prEN AI Systems Risk Management (WI=JT021024), prEN Conformity Assessment (WI=JT021038), and quality management standards: prEN ISO/IEC 25059 rev (WI=JT021027), prEN ISO/IEC 42001 (WI=JT021011), and prEN XXX Artificial intelligence - Quality management system for EU AI Act regulatory purposes (WI=JT021039)

and other new standards providing more detailed requirements for various aspects of trustworthiness:

- accuracy (prEN ISO/IEC 23282 (WI=JT021012), prEN XXX (WI=JT021025))

- data governance and quality for AI (prEN ISO/IEC 5259 1-4, prEN XXX (WI=JT021037), prEN XXX (WI=JT021036))

- logging (prEN ISO/IEC 24970 (WI=JT021021))

- cybersecurity (prEN XXX (WI=JT021029))

 

Please provide your comments as a national expert by December 10th, 2024.

As a StandICT fellow of call 5, I will make sure your comments are duly processed.


Managing the safety, health and fundamental right risks of AI systems


AI systems can have positive impacts but at the same time they can also bring risks to safety, health and fundamental rights.

The AI Act, art. 9, requires that high-risk AI systems be subject to a risk management system.

The harmonized standard EN AI System Risk Management specifies requirements on risk management for AI systems. It provides clear and actionable guidance on how risk can be addressed and mitigated throughout the entire lifecycle of the AI system. It applies to risk management for a broad range of products and services which use AI technology, including explicit considerations for vulnerable people. Risks covered include both risks to health and safety and risks to fundamental rights which can arise from AI systems, with impact for individuals, organisations, market and society. 

A key task in managing risks is to define the acceptable residual risk. For safety and health risks there are many existing methods to define such acceptable residual risks. However, there is a lack of methods to define acceptable residual risks to fundamental rights. For example, when an AI system is used to decide whether or not a person can enrol in a certain education program, wrongly rejecting a student might infringe his or her right to education. The infringement of a fundamental right can typically not be compensated by the potential benefits the AI system might have.

Could you suggest methods to define acceptable residual risks to fundamental rights?


Improving the trustworthiness of AI systems with a harmonized standard EN AI Trustworthiness Framework


As AI is omnipresent and impacts everyone's life, ensuring that AI systems are trustworthy is essential.

The AI Act is a European regulation promoting the uptake of human-centric and trustworthy artificial intelligence (AI), while ensuring a high level of protection of health, safety, and fundamental rights. One way for companies to demonstrate conformity with the AI Act is to meet the underlying harmonized standards.

The EN AI Trustworthiness Framework is one of these harmonized standards.

It provides a framework for AI systems' trustworthiness which contains terminology, concepts, high-level horizontal requirements, guidance and a method to contextualize those to specific stakeholders, domains or applications. The high-level horizontal requirements address foundational aspects and characteristics of trustworthiness of AI systems.

The EN AI Trustworthiness Framework standard serves as an entry point to more in-depth harmonized standards on different aspects of trustworthiness:

  • robustness
  • accuracy
  • governance and quality of data
  • transparency and documentation
  • human oversight
  • record keeping through logging
  • cybersecurity

One of the aims is to clarify which requirements are to be met by whom, and where in the AI life cycle.

A challenge is to map the stakeholders defined in the AI Act (providers, deployers, importers, distributors, product manufacturers, authorized representatives of providers, and affected persons) to the stakeholders industry knows in the AI life cycle. Furthermore, certain transparency requirements have to be enforced upstream, on the providers of AI systems, to enable human oversight by deployers downstream in the AI life cycle.

How would you map the AI Act stakeholders to the stakeholders you define as part of the Business Requirement Document for a project including an AI system?

A new standard for AI-based Network Applications in beyond 5G


In today's rapidly evolving digital landscape, the integration of Artificial Intelligence (AI) into 5G and beyond networks has reached a critical juncture. While the potential of such integration offers great opportunities for innovation, efficiency, and service enhancement, it is not without its challenges. The primary obstacle lies in the complexity of the underlying network infrastructure, compounded by the lack of standardized guidelines for AI integration. This has resulted in fragmented solutions that hinder interoperability, scalability, and security, ultimately slowing down the deployment of next-generation network applications and limiting their potential impact across various sectors.

The necessity for a standardized approach cannot be overstated. The absence of a unified framework for AI integration in 5G and beyond networks poses a significant barrier to progress. A standard is needed to simplify the network infrastructure complexity, ensure interoperability across different systems and devices, accelerate service creation and deployment timelines, and optimize the utilization of network resources. Also, with the exponential increase in digital threats, a standard is critical for enhancing the security and resilience of network applications. It is also important as it would facilitate cost-effective service deployments, unlocking innovation potential, and ensuring that the technological advancements are accessible and beneficial to all stakeholders.

Recognizing the pressing need for a solution, I proposed the development of the new IEEE P1948 Standard for AI-based Network Applications in 5G and beyond. This initiative is aimed at establishing harmonized guidelines and protocols that would address the current gaps in AI integration within network infrastructures. My work involved extensive research to identify the core areas of focus, collaboration with industry experts to gather insights and feedback, and leading discussions within the COM/AccessCore-SC/NAB5G Working Group to draft the initial standard framework.

The PAR (Project Authorization Request) for the development of the standard will be discussed at the next New Standards Committee (NesCom) meeting in May 2024, with the standard expected to be completed and to go to the balloting process in early 2025.


Addressing clinical information interoperability standards with AI standards to increase their power in health applications


Progress in the harmonization of clinical information interoperability standards such as HL7 FHIR, ISO 13606, and ISO 13940 (as the ISO 24305 project does) should be joined with new AI standards, such as the reference framework (next versions of ISO/IEC 23053) and ISO/IEC AWI TR 18988 "Artificial intelligence - Application of AI technologies in health informatics", as well as with standards for the secure and effective use of language models in the Electronic Health Record.

AI Use in the Standards Making Process


I've been involved in contributing to the development of standards (International, Regional & National) for over ten years.

Recently, it has become evident that Artificial Intelligence (AI) could have an impact on the standards development process and procedures. A list of issues and, where available, the resolution of those issues would be a useful resource within all Standards Development Organisations (SDOs).

If anyone has thoughts on how such a list could be organised and how best to format its contents, do contribute. There are also rapidly developing standards in this area, but I'm not aware of any work developing a standard on 'how best to develop a standard in an AI world'. If there is such a thing, or if anyone knows of ongoing work in this area, please contribute.

My work has been in developing standards by consensus among nominated experts.  Other approaches and SDOs may be presented with different AI challenges, which would be equally valuable.

In the next couple of days, I'll post some ideas on organisation and classification, and also some issues, both 'resolved' (hopefully) and those that require some further thought. The sorts of issues that have raised questions to date exist around the standards creation lifecycle, the publication and use of standards by AI systems, the market perception of standards where AI cites existing standards, the cultural issues raised by AI applying standards outside their 'home' jurisdiction, and more. Specific challenges include management of Intellectual Property (IP), generation of an initial 'Working Draft', comments received during editorial work, finalising the draft standards, declarations of AI content, and auditing of contributed content. There are sure to be others.

What tags should this post use?

Paul    


Recall and Withdrawal of AI systems


Please find below an extract derived from a paper written by AI ethicist Alessio Tartaro.

 

Recall and withdrawal of AI systems

In instances where the risk level associated with an AI system is classified as unacceptable, organizations should initiate procedures for recall or withdrawal, whether temporary or permanent. This course of action is pivotal in preventing the occurrence of adverse effects. These actions should be considered as measures of last resort [1].

The guidance is an adaptation of the general guidelines provided by ISO 10393:2013 - Consumer Product Recall - Guidelines for Suppliers, specifically tailored to address the unique challenges and considerations pertaining to AI systems.

 

1 General recommendations

1.1 General

Organizations should establish a structured recall management system for AI systems, which includes the formation of a recall management team, delineation of processes, and allocation of necessary resources. The system should ensure preparedness for recall events, encompassing the entire lifecycle of AI systems.

1.2 Policy

Each organization should develop and implement a recall policy for AI systems that specifies the conditions under which a recall will be initiated. The policy should define the roles and responsibilities of personnel, expected outcomes, and the framework for risk assessment and communication strategies during a recall event.

1.3 Documentation and record keeping

Organizations should maintain comprehensive documentation that records all aspects of the AI system's design, development, deployment, monitoring, and maintenance activities. Records should include, among others, data on performance metrics, ethical compliance assessments, user feedback, and detailed accounts of any decisions and actions taken in the event of a recall.

1.4 Expertise required to manage a recall

Organizations should ensure that personnel involved in the recall process have the necessary expertise, which includes, among others, knowledge of AI technology, risk management, legal and ethical compliance, and communication. Appropriate training should be provided to ensure that the team can identify potential risks and execute recall measures effectively.

1.5 Authority for key decision

The organization should designate individuals with the authority to make critical decisions during a recall. This includes the initiation of a recall, stakeholder communication, and the termination of the AI system's deployment. The chain of command and decision-making protocols should be clearly established and communicated to all relevant personnel.

1.6 Training and recall simulation

Organizations should conduct regular training and recall simulation exercises to ensure that personnel are prepared to execute the recall policy effectively. Training should cover all aspects of the recall process, from risk identification to communication with stakeholders and the restoration of service post-recall.

 

2 Assessing the need for an AI system recall

2.1 General

Organizations should establish clear guidelines for the initiation of an assessment process to determine the necessity of an AI system recall. These guidelines should outline the triggers for assessment initiation, such as performance anomalies, ethical breaches, user complaints, or regulatory inquiries. The assessment process should be systematic and commence promptly upon the identification of a potential issue that may warrant a recall.

2.2 Incident notification

Organizations should implement an incident notification protocol to promptly inform all relevant parties, including regulatory bodies, stakeholders, and affected users, of a potential issue that could lead to a recall. The notification system should enable rapid dissemination of information and facilitate the immediate commencement of the assessment process.

2.3 Incident investigation

Upon notification of a potential incident, organizations should conduct a thorough investigation to ascertain the nature and severity of the issue. The investigation should involve collecting and analyzing data related to the incident, consulting with experts, and reviewing the AI system's operational history to identify any contributing factors or patterns.

2.4 Assess the risk

Organizations should employ a structured methodology to evaluate the risks associated with the identified issue. This assessment should consider the potential for harm to users, violations of ethical standards, legal non-compliance, and broader societal impacts. The risk assessment should guide the organization in determining the appropriate course of action.

2.5 Traceability

Organizations should maintain a traceability system for all AI systems to facilitate tracking and location throughout their operational lifecycle. Traceability measures should enable the organization to quickly identify all instances of the AI system in use, including deployment locations, responsible parties, and affected users.

2.6 Product recall decision

The organization should establish decision-making criteria for initiating a recall of an AI system. This should include a threshold for action based on the risk assessment, the potential impact of the recall, and the feasibility of corrective measures. The decision-making process should be documented, transparent, and involve key stakeholders to ensure that all relevant factors are considered.
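
To illustrate 2.4-2.6, here is a minimal sketch of a documented, threshold-based recall decision, assuming the risk assessment yields a numeric score. The thresholds, field names, and actions below are hypothetical illustrations, not taken from ISO 10393 or the paper.

    # Sketch of a documented, threshold-based recall decision (see 2.4-2.6).
    # Thresholds, field names, and actions are hypothetical illustrations.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class RecallDecision:
        system_id: str
        risk_score: float                 # output of the structured risk assessment
        corrective_fix_feasible: bool
        decided_at: str = field(
            default_factory=lambda: datetime.now(timezone.utc).isoformat())

        @property
        def action(self) -> str:
            if self.risk_score >= 0.8:    # treated here as unacceptable residual risk
                return "permanent withdrawal"
            if self.risk_score >= 0.5:
                return ("temporary recall" if self.corrective_fix_feasible
                        else "permanent withdrawal")
            return "monitor"

    decision = RecallDecision("chatbot-v2", risk_score=0.62, corrective_fix_feasible=True)
    print(decision.action, "-", decision.decided_at)  # goes into the recall record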

 

3 Implementing an AI system recall

3.1 General

Organizations should develop a comprehensive recall implementation plan for AI systems, detailing the steps from initiation to completion. This plan should be activated upon the decision to recall and should include provisions for resource allocation, stakeholder communication, and measures to mitigate the impact of the recall.

3.2 Initiate the recall action

Once a decision to recall an AI system has been made, the organization should initiate the recall action as per the established plan. The initiation process should include the issuance of formal recall notices and activation of the recall management team. The scope and urgency of the recall should dictate the immediate actions taken.

3.3 Communication

Effective communication strategies should be employed to inform all affected parties, including users, partners, and regulatory bodies, about the recall. Information disseminated should clearly describe the reason for the recall, the actions required by the recipients, and the channels through which they can seek further information or support.

3.4 Implement the recall

The recall should be implemented according to the plan, with actions taken to cease the operation of the recalled AI system, inform affected users, and address the identified issue. If the recall involves a physical product, procedures should include the retrieval of the product from all distribution points. If it is a software-based or cloud-based AI system, the recall may involve remote deactivation, patching, rollback, user access restriction, and similar measures.

3.5 Monitor and report

The organization should continuously monitor the recall process to ensure compliance and effectiveness. Progress reports should be generated and communicated to stakeholders at regular intervals. Monitoring activities should also include tracking the recall's reach and verifying that corrective actions have been implemented.

3.6 Evaluate effectiveness

Post-recall, the organization should evaluate the effectiveness of the recall process. This evaluation should assess whether the recall objectives were met, if the risk was mitigated, and how the recall impacted the stakeholders. The findings should be documented and used to inform future recalls.

3.7 Review and adjust recall strategy

Following the evaluation, the organization should review the recall strategy and make necessary adjustments to improve future responses. This review should consider feedback from stakeholders, the results of the effectiveness evaluation, and any changes in regulatory requirements or organizational policies.

 

4. Continual improvement of recall programme

4.1 General

Organizations should establish a framework for the continuous improvement of the recall program for AI systems. This framework should be based on the principles of iterative learning, feedback incorporation, and process optimization. It should aim to enhance the organization's ability to respond to recall situations effectively and efficiently.

4.2 Reviewing the recall

After the completion of a recall, the organization should conduct a comprehensive review of the recall process. This review should assess how the recall was executed, the efficacy of the communication strategies, the adequacy of the resources allocated, and the overall management of the recall. The review should identify both strengths and areas for improvement.

4.3 Corrective actions to prevent recurrence

Based on the review, the organization should identify and implement corrective actions to address any deficiencies observed during the recall process. These actions should aim to prevent the recurrence of similar issues. The organization should also revise risk assessment and management strategies to incorporate lessons learned from the recall.

 

[1] Tartaro, Alessio (2023). When things go wrong: the recall of AI systems as a last resort for ethical and lawful AI. AI and Ethics. https://doi.org/10.1007/s43681-023-00327-z

Cybersecurity for AI Systems


According to the AI Standardization Request from the European Commission to CEN/CENELEC, European standards or standardisation deliverables shall provide suitable organisational and technical solutions to ensure that AI systems are resilient against attempts to alter their use, behaviour, or performance, or to compromise their security properties, by malicious third parties exploiting vulnerabilities of AI systems. Organisational and technical solutions shall include, where appropriate, measures to prevent and control cyberattacks trying to manipulate AI-specific assets, such as training data sets (e.g. data poisoning) or trained models (e.g. adversarial examples), or trying to exploit vulnerabilities in an AI system’s digital assets or in the underlying ICT infrastructure. These solutions shall be appropriate to the relevant circumstances and risks.

Furthermore, the requested European standards or standardisation deliverables shall take due account of the essential requirements for products with digital elements as listed in Annex I of the EC proposed Regulation on horizontal cybersecurity requirements for products with digital elements (the CRA proposal of 15 September 2022).
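
As a toy illustration of the "data poisoning" attack class mentioned above, the sketch below shows how injecting mislabelled points into a training set degrades a simple nearest-centroid classifier. It is purely didactic and says nothing about the standardisation deliverables themselves; real attacks and defences are far more sophisticated.

    # Toy data-poisoning demo: attacker-injected mislabelled points drag a
    # nearest-centroid classifier's centroid away from its class, so test
    # accuracy collapses. Purely didactic.
    import random

    random.seed(0)

    def make_data(n):  # two 1-D clusters around -1 (label 0) and +1 (label 1)
        return ([(random.gauss(-1, 0.3), 0) for _ in range(n)] +
                [(random.gauss(+1, 0.3), 1) for _ in range(n)])

    def centroids(data):
        return {label: sum(x for x, y in data if y == label) /
                       sum(1 for _, y in data if y == label)
                for label in (0, 1)}

    def accuracy(data, cents):
        return sum(1 for x, y in data
                   if min(cents, key=lambda c: abs(x - cents[c])) == y) / len(data)

    train, test = make_data(200), make_data(200)
    poisoned_train = train + [(5.0, 0)] * 150   # injected points, wrongly labelled 0
    print(f"clean accuracy:    {accuracy(test, centroids(train)):.2f}")
    print(f"poisoned accuracy: {accuracy(test, centroids(poisoned_train)):.2f}")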

 

DIN SPEC 92005 Artificial Intelligence - Uncertainty quantification in machine learning


This DIN SPEC is currently under development, with an expected publication date in September 2023.

For further information see Business plans (din.de) or feel free to reach out to me.


Classification of AI algorithms


What would be the requirements of an AI algorithm service to be made available to the public?

Answer these 3 questions (anonymously) here: https://algorithmclassification.org/

1.- What would make you trust an algorithm applied to your data?

2.- What is your major concern about an algorithm applied to your data?

3.- What would be a possible solution to trust AI algorithms?

Thanks

Human-in-the-loop and OETP



Human-in-the-loop (HITL) is a design pattern in AI that leverages both human and machine intelligence to create machine learning models and to bring meaningful automation scenarios into the real world. With this approach, AI systems are designed to augment or enhance human capacity, serving as tools exercised through human interaction.

The Open Ethics Transparency Protocol offers the following model for HITL disclosure, allowing each system's HITL properties to be accessed using the oetp:// URI scheme:

HITL Disclosure

---

"Big Tech" is an important driver of innovation, however, the consequent concentration of power creates Big Risks for the economy, ethical use of technology, and basic human rights (we consider privacy as one of them).

A decentralization of SBOM (Software Bill of Materials) and data processing disclosures was earlier described as a key requirement for the Open Ethics Transparency Protocol (OETP).

Fulfillment of this requirement allows disclosures to be formed and validated by multiple parties and avoids a harmful concentration of power. To allow efficient decentralization of, and access to, the disclosures of autonomous systems, such as AI systems powered by trained machine learning models, the vendor (or developer) MUST send requests to a Disclosure Identity Provider, which in turn processes the structured data of the disclosure with a cryptographic signature generator and then stores the integrity hash in persistent storage, for example using a Federated Identity Provider. This process was described in the Open Ethics Transparency Protocol I-D document; however, the exact way to access disclosures was not described there. The specification for the RI scheme described here closes this gap.
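
A minimal sketch of the integrity-hash step described above: serialise the structured disclosure canonically, then hash it so a Disclosure Identity Provider could sign and store the result. The field names are hypothetical; the actual disclosure schema is defined in the OETP Internet-Draft.

    # Sketch of the integrity-hash step: canonical JSON of the disclosure,
    # then SHA-256. Field names are hypothetical; the actual disclosure
    # schema is defined in the OETP Internet-Draft.
    import hashlib
    import json

    disclosure = {
        "vendor": "example.org",
        "system": "support-chatbot",
        "training_data": "proprietary",     # hypothetical disclosure fields
        "decision_space": "restricted",
    }

    canonical = json.dumps(disclosure, sort_keys=True, separators=(",", ":"))
    integrity_hash = hashlib.sha256(canonical.encode("utf-8")).hexdigest()
    print(integrity_hash)  # what a Disclosure Identity Provider would sign and store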

My recent work builds on top of our previous contribution to the IETF and aims to simplify access to AI disclosures and, more generally, to disclosures of autonomous systems.

https://github.com/OpenEthicsAI/OETP-RI-scheme


Data Labeling and OETP



 

Disclosure of data labeling plays a key role in the transparency of models trained by subject-matter experts or via crowdsourced data labeling approaches.

Open Ethics brings transparency to the systemic properties of AI models by developing the Open Ethics Data "Passport" (https://openethics.ai/oedp/). The Data Passport aims to depict the origins of training datasets by bringing a standardized approach to conveying information about data annotation processes, data labelers' profiles, and the correct scoping of the labeler's job. The Data Passport is an integral part of the Disclosures and will be accessible along with other information about autonomous systems.


https://github.com/OpenEthicsAI/OETP-RI-scheme

AI Disclosure



https://github.com/OpenEthicsAI/OETP-RI-scheme

Strengthening the ethical dimension in the development of standards on AVs


My project aims to provide an important basis for further work on standards that incorporate ethical considerations in particular. The focus of the project lies on one particular application of AI, namely automated vehicles (AVs). As the acceptance and use of AVs are contingent upon the trust people build in this technology, it is of utmost importance that the technology developed be human-centred and trustworthy. At the same time, this should also be reflected in the development of the standards on AVs. My activities during the fellowship will enable me to showcase what elements constitute a trustworthy AV and how these can be translated into concrete building blocks for the development of a standard for trustworthy AVs.


Can the usage of simulators and digital twins enhance trustworthiness in AI?


AI (and more specifically ML) can produce hard-to-explain and hard-to-understand outputs (e.g., non-linear prediction functions), increasing the "black-box" perception that humans have of these systems. To address this, there is an ongoing discussion on integrating simulators and digital twins to assist the operation of AI systems. The use of simulators is expected to improve the trustworthiness and reliability of AI. However, many questions remain to be addressed and much standardization work remains to be done.

A simulator or a digital twin can be considered a safe environment where ML models can be trained, tested, evaluated, and validated. But the insights obtained in a simulation domain are tied to how realistic the simulations are and how closely their characterizations match real phenomena.
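
This sim-to-real gap can be illustrated in a few lines of Python: a model fitted on simulator data performs well against the simulator but worse against a "real world" whose dynamics differ slightly. The systems and numbers below are synthetic, purely for illustration.

    # Synthetic sim-to-real gap: fit a model on simulator data, then
    # evaluate against a "real world" with slightly different dynamics.
    import random

    random.seed(1)

    def simulator(x):    # idealised dynamics inside the digital twin
        return 2.0 * x + random.gauss(0, 0.1)

    def real_world(x):   # reality deviates from the twin's model
        return 2.3 * x + 0.5 + random.gauss(0, 0.1)

    xs = [i / 10 for i in range(100)]
    sim_y = [simulator(x) for x in xs]

    # closed-form 1-D least-squares fit on the simulator data
    n = len(xs)
    mx, my = sum(xs) / n, sum(sim_y) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, sim_y)) /
             sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx

    def mae(f):  # mean absolute error of the sim-trained model against f
        return sum(abs(f(x) - (slope * x + intercept)) for x in xs) / n

    print(f"error vs simulator:  {mae(simulator):.2f}")
    print(f"error vs real world: {mae(real_world):.2f}")  # the sim-to-real gap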

Highly related to this, at the last ITU-T Study Group 13 meeting (4-15 July 2022, Geneva), a recommendation on an ML sandbox for future networks, entitled "Architectural framework for Machine Learning Sandbox in future networks including IMT-2020", was consented for approval. This is one of the first standards of its kind and opens the door to an exciting new field.

 

 


Revision of the IEEE 1855 standard on the Fuzzy Markup Language


Dear colleagues,

The IEEE standard on the Fuzzy Markup Language is under revision.

Several extensions are under development to cover other systems, e.g., type-2 fuzzy systems and learning.

Abstract of the previous standard:

" A new specification language, named Fuzzy Markup Language (FML), is presented in this standard, exploiting the benefits offered by eXtensible Markup Language (XML) specifications and related tools in order to model a fuzzy logic system in a human-readable and hardware independent way. Therefore, designers of industrial fuzzy systems are provided with a unified and high-level methodology for describing interoperable fuzzy systems. The W3C XML Schema definition language is used by this standard to define the syntax and semantics of the FML programs."

 

Please consider joining the effort: https://sagroups.ieee.org/1855/


New Standard on eXplainable Artificial Intelligence under development at IEEE


Dear colleagues,

The IEEE SA has ongoing work on XAI – eXplainable Artificial Intelligence.

Abstract of the work:

"This standard defines mandatory and optional requirements and constraints that need to be satisfied for an AI method, algorithm, application or system to be recognized as explainable. Both partially explainable and fully or strongly explainable methods, algorithms and systems are defined. XML Schema are also defined."

 

Please consider joining the effort: https://sagroups.ieee.org/2976/

More information at: https://standards.ieee.org/ieee/2976/10522/

The IEEE Standard for Autonomous Robotics Ontology was published


Dear all,

The standard was published on 12 May 2022.

It can be found at:

https://standards.ieee.org/ieee/1872.2/7094/

Abstract:
"This standard extends IEEE Std 1872-2015, IEEE Standard for Ontologies for Robotics and Automation, to represent additional domain-specific concepts, definitions, and axioms commonly used in Autonomous Robotics (AuR). This standard is general and can be used in many ways--for example, to specify the domain knowledge needed to unambiguously describe the design patterns of AuR systems; to represent AuR system architectures in a unified way; or as a guideline to build autonomous systems consisting of robots operating in various environments."

Invitation to collaboration in applying AI to smart energy


Dear Colleagues,

If it is of interest to anyone dealing with AI, especially AI applied to smart energy, I am looking for collaboration on standards for applying AI to smart energy, and particularly to smart PV systems.

If this theme coincides with your interests or professional activities (and especially if you are engaged in related themes of smart energy and smart grids standardisation in any capacity of engagement in SDOs/SSOs activities), please feel invited to join the EITCI hosted Smart Energy Standards Group at https://eitci.org/sesg (possibly also in an observer capacity). For ease of communication there is also a dedicated LinkedIn group at https://www.linkedin.com/groups/12498639/

The EITCI SESG group supports international SDOs in the development of standards for AI-assisted PV, as well as for smart energy in general. It brings together academics and practitioners in smart grids, PV, and AI to work jointly on technical standards at the overlap of these domains. The initiative aims at supporting the EU clean energy transition policies with smart energy standards development for digitization and artificial intelligence applications.

I'm looking forward to working together in the future.

Best regards,
Agnieszka


A book, a question, and an answer.


MPAI has published a book entitled: “Towards Pervasive and Trustworthy Artificial Intelligence: How standards can put a great technology at the service of humankind”.

With the printing industry sparing no effort publishing books on Artificial Intelligence (AI), why should there be another that, in its title and subtitle, combines the overused words "AI" and "trustworthy" with the alien words "standards" and "pervasive"?

The answer is that the book describes a solution that covers all the elements of the title: it effectively combines the words "AI" and "trustworthy", but it also makes AI pervasive. How? By developing standards for AI-based data coding.

Many industries need standards to run their business and used to have high respect for them. Users benefit from standards: MP3 put users in control of the content they wanted to enjoy, and the television – and now the video – experience has little to do with how users approached audio-visual content 30 years ago.

At that time, the media industry was loath to invest in open standards. The successful MPEG standards development model, however, changed its attitude. Similarly, the AI industry has been slow to develop AI-based data coding standards, making proprietary solutions its preferred route.

MPAI has shown that it can take different types of data, encode them using AI, and develop standards that make the technology, and the benefits it brings, pervasive. At the same time, MPAI standards can take a technology that may well be untrusted and make it trustworthy.

The MPAI book describes how MPAI develops standards and how they can be used, how standards can make AI pervasive, and how MPAI gives users the means to make informed decisions about choosing an implementation with the required level of trustworthiness.

This is the time to join the unique MPAI adventure. MPAI is open to those who want to make its vision real.

 

More info on MPAI at: https://mpai.community/

MPAI book available at: https://www.amazon.com/dp/B09NS4T6WN/


Why algorithmic transparency needs a protocol?



 

As (algorithmic) operations become more complex, we realize that we can rely less and less on the methods of the past, where a Privacy Policy or ToC served (did they?) to build trust in the business. Moreover, they rarely helped any user understand what’s going on with their data under the hood. “I agree, I understand, I accept.” — the big lies we told ourselves when clicking on a website’s cookie notice or ticking the checkbox of yet another digital platform.

In the age of artificial intelligence, the privacy and cybersecurity risks remain, but now we’re observing the risk profiles of every service expand to include bias and discrimination issues. What should we do? A typical answer is top-down regulation brought by national and cross-national entities. Countries and trade unions are now competing for AI ethics guidelines and standards. Good. But what if you’re building an international business? As a business, you have to comply. Tons of digital paperwork (thanks, now it’s digital!) — and you could get settled in one single economic space. Once you’re there, there’s a chance you can move to another one by repeating the costly bureaucratic procedure. Unfortunately, this is not scalable. We call this the “cost of compliance”, and these costs are high.

There is a possible way of avoiding the compliance scalability issue: disclosing the modus operandi once and matching it against existing requirements on each market. To make this possible, we need a universally accepted concept of product disclosure.

The complete article on Medium is available for those who want to learn more about disclosure and the transparency protocol to be used in conjunction with it.

https://lukianets.medium.com/why-algorithmic-transparency-needs-a-protocol-2b6d5098572f


MPAI – Moving Picture, Audio and Data Coding by Artificial Intelligence


Use of technologies based on Artificial Intelligence (AI) is extending to more and more applications, yielding one of the fastest-growing markets in the data analysis and service sector.

However, industry must overcome hurdles for stakeholders to fully exploit this historical opportunity: the current framework-based development model that makes application redeployment difficult, and monolithic and opaque AI applications that generate mistrust in users.

MPAI – Moving Picture, Audio and Data Coding by Artificial Intelligence – believes that universally accessible standards can have the same positive effects on AI as digital media standards, and has identified data coding as the area where standards can foster development of AI technologies, promote use of AI applications, and contribute to the solution of existing problems.

MPAI defines data coding as the transformation of data from a given representation to an equivalent one more suited to a specific application. Examples are compression and semantics extraction.

MPAI considers the AI module (AIM) and its interfaces the AI building block. The syntax and semantics of the interfaces determine what AIMs should perform, not how. AIMs can be implemented in hardware or software, with AI or Machine Learning, or with legacy Data Processing.

MPAI’s AI Framework, enabling creation, execution, composition, and update of AIM-based workflows (MPAI-AIF), is the cornerstone of MPAI standardisation, because it enables building high-complexity AI solutions by interconnecting multi-vendor AIMs trained for specific tasks, operating in the standard AI framework, and exchanging data in standard formats.
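
As a loose illustration of the AIM idea (an invented interface, not the MPAI-AIF API), the sketch below composes two "AIMs" that agree only on an interface and a data format, which is exactly what lets multi-vendor modules interconnect:

    # Invented interface for illustration; MPAI-AIF defines the real framework.
    # Two "AIMs" interoperate because they share only an interface and a
    # data format, not an implementation.
    from typing import Protocol

    class AIM(Protocol):
        def process(self, data: dict) -> dict: ...

    class SemanticsExtractor:                       # could be ML inside
        def process(self, data: dict) -> dict:
            data["semantics"] = data["text"].split()[:3]
            return data

    class Compressor:                               # or legacy data processing
        def process(self, data: dict) -> dict:
            data["compressed_size"] = len(data["text"])
            return data

    def run_workflow(aims: list[AIM], data: dict) -> dict:
        for aim in aims:                            # multi-vendor AIMs interconnect
            data = aim.process(data)                # via standard data formats
        return data

    print(run_workflow([SemanticsExtractor(), Compressor()],
                       {"text": "moving picture audio and data"}))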

MPAI standards will address many of the problems mentioned above and benefit various actors:

  • Technology providers will be able to offer their conforming AIMs to an open market
  • Application developers will find on the open market the AIMs their applications need
  • Innovation will be fuelled by the demand for novel and better-performing AIMs
  • Consumers will be offered a wider choice of better AI applications by a competitive market
  • Society will be able to lift the veil of opacity from large, monolithic AI-based applications.

Focusing on AI-based data coding will also allow MPAI to take advantage of the results of emerging and future research in representation learning, transfer learning, edge AI, and reproducibility of performance.

MPAI is mindful of the IPR-related problems which have accompanied high-tech standardisation. Unlike standards developed by other bodies, which are based on vague and contention-prone Fair, Reasonable and Non-Discriminatory (FRAND) declarations, MPAI standards are based on Framework Licences, where IPR holders set out IPR guidelines in advance.

Finally, although it is a technical body, MPAI is aware of the revolutionary impact AI will have on the future of human society. MPAI pledges to address the ethical questions raised by its technical work with the involvement of high-profile external thinkers. The first significant step is to enable understanding of the inner workings of complex AI systems.

MORE INFO at https://mpai.community/

Open Ethics Transparency Protocol


The Open Ethics Transparency Protocol (OETP) describes the creation and exchange of voluntary ethics Disclosures for IT products. It is proposed as a solution to increase the transparency of how IT products are built and deployed. The scope of the Protocol covers Disclosures for systems such as Software as a Service (SaaS) applications, software applications, software components, Application Programming Interfaces (APIs), Automated Decision-Making (ADM) systems, and systems using Artificial Intelligence (AI). The IETF I-D document provides details on how disclosures of data collection and data processing practices are formed, stored, validated, and exchanged in a standardized and open format.

OETP provides facilities for:

  • Informed consumer choices: end-users are able to make informed choices based on their own ethical preferences and the product disclosure.
  • Industrial-scale monitoring: discovery of best and worst practices within market verticals, technology stacks, and product value offerings.
  • Legally-agnostic guidelines: suggestions for developers and product owners, formulated in factual language, which are legally agnostic and can easily be transformed into product requirements and safeguards.
  • Iterative improvement: digital products, specifically those powered by artificial intelligence, can receive nearly real-time feedback on how their performance and ethical posture could be improved to cover security, privacy, diversity, fairness, power balance, non-discrimination, and other requirements.
  • Labeling and certification: mapping to existing and future regulatory initiatives and standards.

Please feel free to join the discussion here and in the GitHub repository.
 

IETF datatracker link: https://datatracker.ietf.org/doc/draft-lukianets-open-ethics-transparency-protocol/


IEEE 7007 Ontologies for Ethically Driven Robotics and Automation


The IEEE 7007 WG created a unique standard that will contribute to the development of new technologies ethically aligned with human values. The IEEE 7007 standard comprises an ontological representation, which facilitates investigation of the domain, and a formal language, which adds precision to the knowledge and data collected during this investigation. The nature of ontologies allows this ontological representation to be used in a wide variety of applications across the entire AIS domain.

The IEEE 7007 WG elaborated a formal representation, using formal logics, for the following domains: Norms and Ethical Principles, Data Privacy and Protection, Transparency and Accountability, and Ethical Violation Management. In addition, during the elaboration of the 7007 standard, the IEEE 7007 WG developed its own methodology to deal with the complexity of the ethics-of-AI domain. It is based on agile methodology and can be used in heterogeneous and spatially distributed groups like the IEEE 7007 WG.

The link to the standard: https://standards.ieee.org/standard/7007-2021.html

AI Landscape Report Published by StandICT.eu EUOS TWG-AI


The EU-funded StandICT.eu 2023 project, the ICT Standardisation Observatory and Support Facility in Europe, has just published the first of a series of Landscape Reports on ICT standards, the Landscape of AI Standards: a palpable, go-to reference providing an overview of the diverse array of global standardisation work underway in Artificial Intelligence and the various organisations behind it. This information will be continuously updated and will evolve via the “EUOS – Observatory For ICT Standardisation”, a database powered by StandICT.eu 2023, to cover the wide range of topics identified in the Rolling Plan for ICT Standardisation, and will ultimately result in the release of a dedicated Gaps Analysis Report for each theme.

The Landscape of Artificial Intelligence Standards is the fruit of the dedicated Technical Working Group (TWG AI), set up to harness expert advice and stimulate discussion among SDOs, public bodies, academic institutions and acclaimed specialists to provide an expert overview of documents and activities relevant to standardisation in this field. Ultimately, the TWG AI will pinpoint gaps where additional activity and investment can strengthen Europe’s position and also help create

“a structured dialogue between the EC, Member States and standardisation organisations to stay at the forefront of artificial intelligence through the twin objectives of Europe in adopting a European approach to excellence in AI and a European approach to trust in AI”, as Kilian Gross, the EC’s Head of Unit for Artificial Intelligence at DG Connect, states in his foreword to the Report.

The Report provides an encompassing compilation of standardisation efforts underway in the framework of European SDOs, such as CEN and ETSI, Government, Public Bodies and Agencies, such as the European Commission, the European Data Portal, European Parliament and the HLEG-AI and JRC, Global SDOs and initiatives, such as IEC, IEEE, ISO/IEC, ITU-T, WEF, W3C, as well as non-EU, country-specific contributions including China, Germany, Japan, UK and USA and relevant contributions from other organisations (BDVA, G20, Khronos, OECD, SAE International).

Since its start, StandICT.eu 2023 has launched 6 other such Technical Working Groups, in Blockchain (TWG BLOCK), Big Data Spaces and Data Interoperability (TWG BDDI), Cybersecurity (TWG Cyber), Smart Cities (TWG CITIES), Trusted Information (TWG TRUSTI) and Standards Education (TWG ACADEMY), where further Landscape and Gap Analysis Reports will be published as part of the series.

In parallel, StandICT.eu 2023 is providing EUR 3M to fund European ICT experts, through a series of 10 Open Calls, to participate in the working groups of international Standards Developing Organisations across the wide range of topics identified in the EC Rolling Plan for ICT Standardisation.

From the Authors:

“This overview provides an easy ‘look up’ regarding what AI standardisation is happening in various organisations and brief information on the organisations themselves. There is no attempt to say here which documents are more fit-for-purpose than others: that will be considered in the next step, in a Gaps Analysis Report. This document is just one possible output from a multi-dimensional database; we look forward to extending, filtering, discussing, mindmapping and collaborating to benefit standards experts and users globally.” Lindsay Frost, Chief Standardisation Engineer, NEC, Editor, TWG AI Chair

“The ICT Standardization landscape is an ever-changing, living and dynamic ecosystem of ecosystems where new technologies, tools, techniques, components, products and services are being innovated and disrupted on a constant basis. To remain current, regular standardization landscape research exercises like this AI Landscape Report are crucial to capture the latest state-of-the-art.” Ray Walshe, StandICT.eu 2023 EAG Chair & EUOS Director, Series Editor

“Having such a thorough and detailed reference document that provides this bird’s-eye view on all AI standardisation will allow us to provide our Standards Developing Organisations (SDOs) and funding agencies with the mechanisms to help intertwine efforts collectively, which is the purpose of the StandICT.eu ecosystem.” Silvana Muscella, CEO Trust-IT and StandICT.eu 2023 Project Coordinator, Series Editor

Free online webinar AI-SPRINT: An EU Perspective on the Future of AI and Edge Computing 30.03 at 10:00 CEST


Edge AI is rapidly gaining momentum with growing investments and awareness of its benefits, such as reducing costs and latency times for improved user experience and increased levels of security in terms of data privacy through local processing. 


Yet its full potential has yet to be realised. One way to achieve this is by combining various execution platforms for ubiquitous and seamless execution computing environments for a complete cloud continuum. As a result, application developers will have greater control over computing, network and data infrastructures and services and end-users will benefit from seamless access to continuous service environments. 


AI-SPRINT is a newly funded initiative under Europe’s Horizon 2020 programme aiming to drive innovations in AI and edge computing.

On the 30.03.2021 at 10:00 CEST the initiative is organising the first in a series of AI-SPRINT webinars, AI-SPRINT: An EU Perspective on the Future of AI and Edge Computing,  analysing challenges and needs from an AI and Edge computing perspective and giving practical solutions that we’re developing in the context of the AI-SPRINT as we embark on our R&I journey.

Lightning talks from key partners will give examples of new AI applications in edge and cloud environments, top challenges that need addressing, and real-world scenarios aimed at proving competitive edge and replicability.


The panel discussion brings together members of the AI-SPRINT Alliance and project experts for a deep-dive into the challenges, needs and future trends of AI and edge computing from various viewpoints. Participants will also get a chance to learn about the Alliance designed for small SW houses and EU cloud providers to support the AI and edge computing ecosystem while the interactive polls will capture viewpoints from the audience. 

Full programme, speakers and registration are available at the following link: https://ai-sprint-project.eu/events/ai-sprint-eu-perspective-future-ai-and-edge-computing

We look forward to meeting you online on 30.03.2021 at 10:00 CEST.


DIN - GERMAN STANDARDIZATION ROADMAP ON ARTIFICIAL INTELLIGENCE


In a joint project with the Federal Ministry for Economic Affairs and Energy (BMWi), DIN and DKE, together with experts from industry, science, the public sector and civil society, developed a roadmap for standards and specifications in the field of artificial intelligence. The aim was the early development of a framework for action in the field of standardization that will strengthen the global competitiveness of German industry and make European values the global benchmark. With this step, DIN is implementing the AI strategy of the German Federal Government: field of action 10, as outlined in the strategy, explicitly deals with the topic “Setting standards”.

The EU Observatory For ICT Standards WG-AI



A great effort over the last few months by the EUOS WG on Artificial Intelligence, chaired by Lindsay Frost (NEC, ETSI), has already produced a draft Technology Landscape Research Report for the European Commission's DG JRC (Joint Research Centre) on standardization related to AI risk.

This collaboration with Stefano Nativi of DG JRC, who coordinates the European Commission's AI Watch initiative, has been very successful, and the EUOS WG-AI is looking forward to contributing to the next iteration of this report.

The WG-AI membership is Lindsay Frost (Chair; ETSI, NEC), Fergal Finn (StandICT.eu EAG, NSAI), Karl Grun (StandICT.eu EAG, AS), Jens Gayko (StandICT.eu EAG, VDE) and Ray Walshe (StandICT EUOS, DCU).


On performing A Standardization Landscape research


The ICT standardization landscape is an ever-changing, living and dynamic ecosystem of ecosystems, where new technologies, tools, techniques, components, products and services are being innovated and disrupted on a constant basis. To remain current, regular standardization landscape research exercises need to take place to capture the latest state of the art. What is needed is to find out which technology reports, specifications and standards 1) have been published, 2) are in the process of being published, or 3) are being investigated/studied for potential standardization.

In StandICT.eu we are lucky to be able to draw on a large ecosystem of internationally recognised experts across the complete ICT standardization super-ecosystem, covering major standards development initiatives such as ISO, IEC, IETF, ITU, IEEE, ETSI, CEN-CENELEC, OPC, IMG, OASIS and W3C, to name a few. These experts have decades of experience in horizontal technologies like Cloud, Big Data, AI, IoT, 5G and Cybersecurity, and are at the leading edge of Blockchain, Digital Twin and Quantum Computing standardization efforts.

Leveraging this expertise through the EU Observatory for ICT Standards (EUOS) has enabled us to establish multiple working groups, drawing together the necessary expertise and access to the contributing SDOs to provide the most up-to-date research into the current status of international ICT standardization in the area covered by each working group.

The Chair of the WG and the WG experts then generate a Technology Landscape Research Report detailing the current standardization status in that technology area, and the report is then approved for publication.

Usually the next step is an evaluation and analysis of the published report to identify any standardization gaps, needs, challenges or opportunities for the European Digital Single Market.
