Artificial Intelligence

IETF AI Preferences Working Group

The IETF has a working group (WG) called AI Preferences (aipref), whose charter reads:

"The AI Preferences Working Group will standardize building blocks that allow for the expression of preferences about how content is collected and processed for Artificial Intelligence (AI) model development, deployment, and use.

There are many ways that preferences regarding content might be expressed. The Working Group will focus on attaching preferences to content either by including preferences in content metadata or by signaling preferences using the protocol that delivers the content.

The Working Group will deliver:

A standards-track document covering vocabulary for expressing AI-related preferences, independent of how those preferences are associated with content.

Standards-track document(s) describing means of attaching or associating those preferences with content in IETF-defined protocols and formats, including but not limited to using Well-Known URIs (RFC 8615), such as the Robots Exclusion Protocol (RFC 9309), and HTTP response header fields.

A standard method for reconciling multiple expressions of preferences."

Link to the WG:

https://datatracker.ietf.org/group/aipref/about/

Link to documents in the WG:

https://datatracker.ietf.org/group/aipref/documents/
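
Since both the vocabulary and the attachment mechanisms are still being standardized, any concrete syntax today is speculative. As a minimal Python sketch of the direction of the second deliverable, the snippet below checks two of the carriers named in the charter; the header name "AI-Usage-Preference" is a hypothetical placeholder, not something the Working Group has defined.

```python
# Illustrative only: the aipref vocabulary and header name are still being
# standardized; "AI-Usage-Preference" below is a HYPOTHETICAL header name.
from urllib.request import Request, urlopen


def fetch_ai_preferences(url: str) -> dict:
    """Collect possible AI-preference signals for a resource.

    Checks two attachment mechanisms named in the charter: an HTTP
    response header on the resource itself, and the site's robots.txt
    (RFC 9309), which the WG lists as a candidate carrier.
    """
    signals = {}

    # 1. Hypothetical response header delivered with the content.
    with urlopen(Request(url, method="HEAD")) as resp:
        header = resp.headers.get("AI-Usage-Preference")  # hypothetical name
        if header:
            signals["header"] = header

    # 2. Robots Exclusion Protocol file at the well-known location.
    origin = "/".join(url.split("/", 3)[:3])  # scheme://host
    try:
        with urlopen(origin + "/robots.txt") as resp:
            signals["robots_txt"] = resp.read().decode("utf-8", "replace")
    except OSError:
        pass  # no robots.txt published

    return signals
```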

Engineering Reliable Autonomous Systems

The 1st IEEE International Conference on Engineering Reliable Autonomous Systems (ERAS) was held on May 29-30, 2025 in Worcester, Massachusetts, USA. ERAS offered a space for stakeholders from across the field of autonomous system reliability to come together to discuss the key challenges and present progress made towards solving them. Alongside the presentations of the original, selected, peer-reviewed papers, three invited keynote talks were given, one each from academia, industry, and government. The conference also hosted a tutorial delivered by IBM and a workshop organized by MIT.

The scope of the conference covers all aspects of engineering reliable autonomous systems, ranging from systems and software engineering to their evaluation and verification. The proceedings will soon be available at:
https://ieeexplore.ieee.org

 

Contribute robustness and accuracy requirements to the CEN/CLC JTC 21 prEN AI Trustworthiness Framework

Are you an expert in AI system accuracy and/or robustness? Then join CEN/CENELEC JTC 21, WG 4 (Foundational and Societal Aspects) or WG 3 (Engineering Aspects)!

The prEN AI Trustworthiness Framework is one of the standards being developed by CEN/CENELEC JTC 21 (WG 4, Foundational and Societal Aspects) to support standardization request M/593 from the European Commission, enabling companies with high-risk AI systems under Annex III to obtain a presumption of conformity. It addresses the following standardization requests (SRs):

  • SR 3: Record-keeping through logging capabilities
  • SR 4: Transparency
  • SR 5: Human oversight
  • SR 6: Accuracy
  • SR 7: Robustness

To meet the European Commission's standardization request for AI system accuracy and robustness, we are urgently looking for experts!

Interested? Contact Enrico Panai or me.

Artificial Intelligence for Network Operations

The IETF draft titled "Artificial Intelligence (AI) for Network Operations" (https://datatracker.ietf.org/doc/draft-king-rokui-ainetops-usecases/) explores how AI and machine learning (ML) can be integrated into network operations—a concept referred to as AINetOps. The primary aim is to automate and optimize network management tasks, thereby improving efficiency, reliability, and scalability. This approach is relevant to both single-layer (IP or Optical) and multi-layer (IP/Optical) networks and is intended to tackle the growing complexity of modern network infrastructures.

AINetOps includes a broad set of use cases such as reactive troubleshooting, proactive assurance, closed-loop optimization, misconfiguration detection, and virtual operator support. By using AI and ML, networks can evolve from static, manually operated systems into dynamic environments capable of real-time adaptation and autonomous decision-making. This transformation enables predictive analytics, helping operators to detect and resolve issues before they affect service quality.

The draft highlights the need for existing IETF protocols and architectures to evolve in support of AINetOps. It outlines the architectural, procedural, and protocol-level changes required to implement AI-powered operations effectively. These include developing standardized interfaces and APIs, integrating AI engines with network components, and creating data models that accurately represent the network’s state and configuration.
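
For illustration only, the sketch below shows the skeleton of one such closed-loop use case in Python. The telemetry values, anomaly threshold, and remediation hook are invented placeholders, not interfaces defined by the draft.

```python
# A minimal closed-loop AINetOps sketch: observe telemetry, detect an
# anomaly, trigger remediation. All names and thresholds are illustrative.
from statistics import mean, stdev


def detect_anomaly(history: list[float], latest: float, k: float = 3.0) -> bool:
    """Flag the latest measurement if it deviates k sigma from history."""
    if len(history) < 2:
        return False
    mu, sigma = mean(history), stdev(history)
    return sigma > 0 and abs(latest - mu) > k * sigma


def control_loop(samples: list[float]) -> None:
    """Observe -> detect -> act: the basic closed-loop optimization cycle."""
    history: list[float] = []
    for latency_ms in samples:
        if detect_anomaly(history, latency_ms):
            # Placeholder for a remediation API call, e.g. rerouting traffic.
            print(f"anomaly at {latency_ms} ms: trigger remediation")
        history.append(latency_ms)


control_loop([10.1, 10.3, 9.9, 10.2, 10.0, 48.7])
```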

Provide comments on WD prEN AI Trustworthiness Framework

I am proud to have contributed, as a StandICT fellow of call 4, to the working draft of the prEN AI Trustworthiness Framework standard, produced by CEN/CENELEC JTC 21, WG 4 (Foundational and Societal Aspects), Task Group 3 (WI=JT021008).

The working draft was circulated to the experts of the national standardization bodies on November 12th, 2024 (WD prEN AI Trustworthiness Framework, Doc. N 830).

Please provide your comments as a national expert on the WD prEN AI Trustworthiness Framework standard by December 10th, 2024.

This standard provides high-level horizontal requirements on trustworthiness for AI systems. It relates to other harmonized standards that meet the 10 standardization requests of the European Commission to support the presumption of conformity with the AI Act.

It serves as an entry point to related standards:

- prEN AI Systems Risk Management (WI=JT021024) and prEN Conformity Assessment (WI=JT021038)

- quality management standards: prEN ISO/IEC 25059 rev (WI=JT021027), prEN ISO/IEC 42001 (WI=JT021011), and prEN XXX Artificial intelligence - Quality management system for EU AI Act regulatory purposes (WI=JT021039)

and other new standards providing more detailed requirements for various aspects of trustworthiness:

- accuracy (prEN ISO/IEC 23282 (WI=JT021012), prEN XXX (WI=JT021025))

- data governance and quality for AI (prEN ISO/IEC 5259 1-4, prEN XXX (WI=JT021037), prEN XXX (WI=JT021036))

- logging (prEN ISO/IEC 24970 (WI=JT021021))

- cybersecurity (prEN XXX (WI=JT021029))

 


As a StandICT fellow of call 5, I will make sure your comments are duly processed.

Managing the safety, health and fundamental right risks of AI systems

AI systems can have positive impacts, but at the same time they can also bring risks to safety, health and fundamental rights.

Article 9 of the AI Act requires that high-risk AI systems be subject to a risk management system.

The harmonized standard EN AI System Risk Management specifies requirements on risk management for AI systems. It provides clear and actionable guidance on how risk can be addressed and mitigated throughout the entire lifecycle of an AI system. It applies to risk management for a broad range of products and services which use AI technology, including explicit considerations for vulnerable people. The risks covered include both risks to health and safety and risks to fundamental rights arising from AI systems, with impacts on individuals, organisations, the market and society.

A key task in managing risks is to define the acceptable residual risk. For safety and health risks there are many existing methods to define such acceptable residual risks. However, there is a lack of methods to define acceptable residual risks to fundamental rights. For example, when an AI system is used to decide whether or not a person can enrol in a certain education program, wrongly rejecting a student might infringe his or her right to education. The infringement of a fundamental right can typically not be compensated for by whatever benefits the AI system might bring.
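
To make that asymmetry concrete, here is a toy acceptance check in Python. The severity and likelihood scales and the thresholds are invented purely for illustration; the standard does not prescribe this, or any other specific, method.

```python
# Illustrative sketch: fundamental-rights risks get a near-zero tolerance
# instead of a benefit-weighted risk matrix. All values are invented.
from dataclasses import dataclass


@dataclass
class ResidualRisk:
    description: str
    severity: int            # 1 (negligible) .. 5 (critical), assumed scale
    likelihood: float        # estimated probability of occurrence
    fundamental_right: bool  # does the harm infringe a fundamental right?


def acceptable(risk: ResidualRisk) -> bool:
    """Safety risks may be weighed on a matrix, but an infringement of a
    fundamental right cannot be offset by benefits, so it gets a strict
    near-zero bound instead."""
    if risk.fundamental_right:
        return risk.likelihood < 1e-4  # illustrative strict bound
    return risk.severity * risk.likelihood < 0.5  # illustrative matrix rule


print(acceptable(ResidualRisk("wrongful enrolment rejection", 3, 0.01, True)))
```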

Could you suggest methods to define acceptable residual risks to fundamental rights?


Improving the trustworthiness of AI systems with a harmonized standard EN AI Trustworthiness Framework

As AI is omnipresent and affects everyone's life, ensuring that AI systems are trustworthy is essential.

The AI Act is a European regulation on artificial intelligence (AI) that promotes the uptake of human-centric and trustworthy AI while ensuring a high level of protection of health, safety and fundamental rights. One way for companies to demonstrate conformity with the AI Act is to meet the underlying harmonized standards.

The EN AI Trustworthiness Framework is one of these harmonized standards.

It provides a framework for AI systems' trustworthiness which contains terminology, concepts, high-level horizontal requirements, guidance and a method to contextualize those to specific stakeholders, domains or applications. The high-level horizontal requirements address foundational aspects and characteristics of trustworthiness of AI systems.

The EN AI Trustworthiness Framework standard serves as an entry point to more in-depth harmonized standards on different aspects of trustworthiness:

  • robustness
  • accuracy
  • governance and quality of data
  • transparency and documentation
  • human oversight
  • record keeping through logging
  • cybersecurity

One of the aims is to clarify which requirements are to be met by whom, and where in the AI life cycle.

A challenge is to map the stakeholders defined by the AI Act (providers, deployers, importers, distributors, product manufacturers, authorized representatives of providers, and affected persons) to the stakeholder roles commonly used in industry across the AI life cycle. Furthermore, certain transparency requirements have to be enforced upstream, on the providers of AI systems, to enable human oversight by deployers downstream in the AI life cycle.

How would you map the AI Act stakeholders to the stakeholders you define as part of the Business Requirement Document for a project including an AI system?
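
Purely as an illustrative starting point, one could begin with a simple lookup table like the Python sketch below; the industry role names on the right-hand side are my assumptions, not terms taken from the AI Act or the standard.

```python
# Illustrative mapping of AI Act stakeholders to assumed industry roles
# as they might appear in a Business Requirement Document.
AI_ACT_TO_INDUSTRY_ROLE = {
    "provider": "AI product owner / model development team",
    "deployer": "operating business unit running the system",
    "importer": "procurement / vendor management",
    "distributor": "sales channel / reseller",
    "product manufacturer": "OEM integrating the AI component",
    "authorized representative": "EU regulatory affairs contact",
    "affected person": "end user / data subject",
}

for act_role, industry_role in AI_ACT_TO_INDUSTRY_ROLE.items():
    print(f"{act_role:>25} -> {industry_role}")
```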

A new standard for AI-based Network Applications in beyond 5G

In today's rapidly evolving digital landscape, the integration of Artificial Intelligence (AI) into 5G and beyond networks has reached a critical juncture. While the potential of such integration offers great opportunities for innovation, efficiency, and service enhancement, it is not without its challenges. The primary obstacle lies in the complexity of the underlying network infrastructure, compounded by the lack of standardized guidelines for AI integration. This has resulted in fragmented solutions that hinder interoperability, scalability, and security, ultimately slowing down the deployment of next-generation network applications and limiting their potential impact across various sectors.

The necessity for a standardized approach cannot be overstated. The absence of a unified framework for AI integration in 5G and beyond networks poses a significant barrier to progress. A standard is needed to simplify the complexity of the network infrastructure, ensure interoperability across different systems and devices, accelerate service creation and deployment timelines, and optimize the utilization of network resources. Also, with the exponential increase in digital threats, a standard is critical for enhancing the security and resilience of network applications. It is also important because it would facilitate cost-effective service deployments, unlock innovation potential, and ensure that technological advancements are accessible and beneficial to all stakeholders.

Recognizing the pressing need for a solution, I proposed the development of the new IEEE P1948 Standard for AI-based Network Applications in 5G and beyond. This initiative is aimed at establishing harmonized guidelines and protocols that would address the current gaps in AI integration within network infrastructures. My work involved extensive research to identify the core areas of focus, collaboration with industry experts to gather insights and feedback, and leading discussions within the COM/AccessCore-SC/NAB5G Working Group to draft the initial standard framework.

The PAR (Project Authorization Request) for the development of the standard will be discussed at the next New Standards Committee (NesCom) meeting in May 2024, with the standard expected to be completed and to go to the balloting process in early 2025.

Addressing clinical information interoperability standards with AI standards to increase their power in health applications

Progress in the harmonization of clinical information interoperability standards such as HL7 FHIR, ISO 13606, and ISO 13940 (as pursued by the ISO 24305 project) should be combined with new AI standards such as the reference framework (future versions of ISO/IEC 23053) and ISO/IEC AWI TR 18988, "Artificial intelligence - Application of AI technologies in health informatics," as well as with standards for the secure and effective use of language models in the Electronic Health Record.
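
As a minimal sketch of that combination, the Python snippet below pulls a structured clinical resource over the standard FHIR REST interface (GET [base]/Patient/[id]) and flattens it for a downstream AI component; the server URL and patient identifier are hypothetical.

```python
# Sketch: retrieve a FHIR Patient resource and summarize it as text that
# a language-model component could consume. The base URL is hypothetical;
# the endpoint shape (GET [base]/Patient/[id]) is standard FHIR.
import json
from urllib.request import Request, urlopen

FHIR_BASE = "https://fhir.example.org/baseR4"  # hypothetical server


def fetch_patient(patient_id: str) -> dict:
    req = Request(
        f"{FHIR_BASE}/Patient/{patient_id}",
        headers={"Accept": "application/fhir+json"},
    )
    with urlopen(req) as resp:
        return json.load(resp)


def summarize_for_model(patient: dict) -> str:
    """Flatten the FHIR resource into a plain-text summary."""
    name = patient.get("name", [{}])[0]
    given = " ".join(name.get("given", []))
    return (f"Patient {given} {name.get('family', '')}, "
            f"born {patient.get('birthDate', 'unknown')}")
```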

AI Use in the Standards Making Process

I've been involved in contributing to the development of standards (International, Regional & National) for over ten years.

Recently, it has become evident that Artificial Intelligence (AI) could have an impact on the standards development process and procedures. A list of issues and, where available, the resolutions of those issues would be a useful resource within all Standards Development Organisations (SDOs).

If anyone has thoughts on how such a list could be organised and how best to format its contents, do contribute. There are also rapidly developing standards in this area, but I'm not aware of any work developing a standard on 'How best to develop a Standard in an AI world'. If there is such a thing, or if anyone knows of ongoing work in this area, please contribute.

My work has been in developing standards by consensus among nominated experts.  Other approaches and SDOs may be presented with different AI challenges, which would be equally valuable.

In the next couple of days, I'll post some ideas on organisation and classification, along with some issues, both 'resolved' (hopefully) and those that require further thought. The sort of issues that have raised questions to date revolve around the standards creation lifecycle, the publication and use of standards by AI systems, the market perception of standards where AI cites existing standards, the cultural issues raised by AI applying standards outside their 'home' jurisdiction, and more. Specific challenges include the management of Intellectual Property (IP), generation of an initial 'Working Draft', handling of comments received during editorial work, finalising the draft standards, declarations of AI content, and auditing of contributed content. There are sure to be others.

What tags should this post use?

Paul    

Recall and Withdrawal of AI systems

Please find below an extract derived from a paper written by the AI ethicist Alessio Tartaro.

 

Recall and withdrawal of AI systems

In instances where the risk level associated with an AI system is classified as unacceptable, organizations should initiate procedures for recall or withdrawal, whether temporary or permanent. This course of action is pivotal in preventing the occurrence of adverse effects. These actions should be considered as measures of last resort [1].

The guidance is an adaptation of the general guidelines provided by ISO 10393:2013 - Consumer Product Recall - Guidelines for Suppliers, specifically tailored to address the unique challenges and considerations pertaining to AI systems.

 

1 General recommendations

1.1 General

Organizations should establish a structured recall management system for AI systems, which includes the formation of a recall management team, delineation of processes, and allocation of necessary resources. The system should ensure preparedness for recall events, encompassing the entire lifecycle of AI systems.

1.2 Policy

Each organization should develop and implement a recall policy for AI systems that specifies the conditions under which a recall will be initiated. The policy should define the roles and responsibilities of personnel, expected outcomes, and the framework for risk assessment and communication strategies during a recall event.

1.3 Documentation and record keeping

Organizations should maintain comprehensive documentation that records all aspects of the AI system's design, development, deployment, monitoring, and maintenance activities. Records should include, among others, data on performance metrics, ethical compliance assessments, user feedback, and detailed accounts of any decisions and actions taken in the event of a recall.

1.4 Expertise required to manage a recall

Organizations should ensure that personnel involved in the recall process have the necessary expertise, which includes, among others, knowledge of AI technology, risk management, legal and ethical compliance, and communication. Appropriate training should be provided to ensure that the team can identify potential risks and execute recall measures effectively.

1.5 Authority for key decisions

The organization should designate individuals with the authority to make critical decisions during a recall. This includes the initiation of a recall, stakeholder communication, and the termination of the AI system's deployment. The chain of command and decision-making protocols should be clearly established and communicated to all relevant personnel.

1.6 Training and recall simulation

Organizations should conduct regular training and recall simulation exercises to ensure that personnel are prepared to execute the recall policy effectively. Training should cover all aspects of the recall process, from risk identification to communication with stakeholders and the restoration of service post-recall.

 

2 Assessing the need for an AI system recall

2.1 General

Organizations should establish clear guidelines for the initiation of an assessment process to determine the necessity of an AI system recall. These guidelines should outline the triggers for assessment initiation, such as performance anomalies, ethical breaches, user complaints, or regulatory inquiries. The assessment process should be systematic and commence promptly upon the identification of a potential issue that may warrant a recall.

2.2 Incident notification

Organizations should implement an incident notification protocol to promptly inform all relevant parties, including regulatory bodies, stakeholders, and affected users, of a potential issue that could lead to a recall. The notification system should enable rapid dissemination of information and facilitate the immediate commencement of the assessment process.

2.3 Incident investigation

Upon notification of a potential incident, organizations should conduct a thorough investigation to ascertain the nature and severity of the issue. The investigation should involve collecting and analyzing data related to the incident, consulting with experts, and reviewing the AI system's operational history to identify any contributing factors or patterns.

2.4 Assess the risk

Organizations should employ a structured methodology to evaluate the risks associated with the identified issue. This assessment should consider the potential for harm to users, violations of ethical standards, legal non-compliance, and broader societal impacts. The risk assessment should guide the organization in determining the appropriate course of action.

2.5 Traceability

Organizations should maintain a traceability system for all AI systems to facilitate tracking and location throughout their operational lifecycle. Traceability measures should enable the organization to quickly identify all instances of the AI system in use, including deployment locations, responsible parties, and affected users.
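
A minimal sketch of such a traceability registry follows, with illustrative field names that are not taken from ISO 10393 or the extract above.

```python
# Toy traceability registry: answers "where is this AI system running,
# and who is responsible?" during a recall. Field names are illustrative.
from dataclasses import dataclass, field


@dataclass
class Deployment:
    system_id: str    # which AI system is deployed
    version: str      # deployed version
    location: str     # site, region, or tenant
    responsible: str  # accountable party for this instance


@dataclass
class TraceabilityRegistry:
    deployments: list[Deployment] = field(default_factory=list)

    def register(self, d: Deployment) -> None:
        self.deployments.append(d)

    def affected_by(self, system_id: str, version: str) -> list[Deployment]:
        """All instances a recall of (system_id, version) must reach."""
        return [d for d in self.deployments
                if d.system_id == system_id and d.version == version]
```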

2.6 Product recall decision

The organization should establish decision-making criteria for initiating a recall of an AI system. This should include a threshold for action based on the risk assessment, the potential impact of the recall, and the feasibility of corrective measures. The decision-making process should be documented, transparent, and involve key stakeholders to ensure that all relevant factors are considered.

 

3 Implementing an AI system recall

3.1 General

Organizations should develop a comprehensive recall implementation plan for AI systems, detailing the steps from initiation to completion. This plan should be activated upon the decision to recall and should include provisions for resource allocation, stakeholder communication, and measures to mitigate the impact of the recall.

3.2 Initiate the recall action

Once a decision to recall an AI system has been made, the organization should initiate the recall action as per the established plan. The initiation process should include the issuance of formal recall notices and activation of the recall management team. The scope and urgency of the recall should dictate the immediate actions taken.

3.3 Communication

Effective communication strategies should be employed to inform all affected parties, including users, partners, and regulatory bodies, about the recall. Information disseminated should clearly describe the reason for the recall, the actions required by the recipients, and the channels through which they can seek further information or support.

3.4 Implement the recall

The recall should be implemented according to the plan, with actions taken to cease the operation of the recalled AI system, inform affected users, and address the identified issue. If the recall involves a physical product, procedures should include the retrieval of the product from all distribution points. If it is a software-based or cloud-based AI system, the recall may involve remote deactivation, patching, rollback, user access restriction, and similar measures.
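
As an illustration of the software-based measures just listed, the sketch below shows a toy recall flag that remotely deactivates a recalled model version and rolls requests back to the last known-good version; the flag store and model names are invented.

```python
# Toy recall flag for a served model: recalled versions are rolled back
# to the last known-good version. All names and versions are illustrative.
RECALLED_VERSIONS = {"risk-scorer:2.3"}   # populated by the recall action
FALLBACK_VERSION = "risk-scorer:2.2"      # last known-good version


def serve(model: str, version: str, request: dict) -> dict:
    tag = f"{model}:{version}"
    if tag in RECALLED_VERSIONS:
        # Rollback path: route to the previous version instead of refusing,
        # one of the options (deactivation, patching, rollback) named above.
        return serve(model, FALLBACK_VERSION.split(":")[1], request)
    return {"model": tag, "answer": f"scored {request}"}


print(serve("risk-scorer", "2.3", {"applicant": 123}))
```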

3.5 Monitor and report

The organization should continuously monitor the recall process to ensure compliance and effectiveness. Progress reports should be generated and communicated to stakeholders at regular intervals. Monitoring activities should also include tracking the recall's reach and verifying that corrective actions have been implemented.

3.6 Evaluate effectiveness

Post-recall, the organization should evaluate the effectiveness of the recall process. This evaluation should assess whether the recall objectives were met, if the risk was mitigated, and how the recall impacted the stakeholders. The findings should be documented and used to inform future recalls.

3.7 Review and adjust recall strategy

Following the evaluation, the organization should review the recall strategy and make necessary adjustments to improve future responses. This review should consider feedback from stakeholders, the results of the effectiveness evaluation, and any changes in regulatory requirements or organizational policies.

 

4 Continual improvement of recall programme

4.1 General

Organizations should establish a framework for the continuous improvement of the recall program for AI systems. This framework should be based on the principles of iterative learning, feedback incorporation, and process optimization. It should aim to enhance the organization's ability to respond to recall situations effectively and efficiently.

4.2 Reviewing the recall

After the completion of a recall, the organization should conduct a comprehensive review of the recall process. This review should assess how the recall was executed, the efficacy of the communication strategies, the adequacy of the resources allocated, and the overall management of the recall. The review should identify both strengths and areas for improvement.

4.3 Corrective actions to prevent recurrence

Based on the review, the organization should identify and implement corrective actions to address any deficiencies observed during the recall process. These actions should aim to prevent the recurrence of similar issues. The organization should also revise risk assessment and management strategies to incorporate lessons learned from the recall.

 

[1] Tartaro, Alessio (2023). When things go wrong: the recall of AI systems as a last resort for ethical and lawful AI. AI and Ethics. https://doi.org/10.1007/s43681-023-00327-z