Artificial Intelligence

Cybersecurity for AI Systems


According to the AI Standardization Request from the European Commission to CEN/CENELEC, European standards or standardisation deliverables shall provide suitable organisational and technical solutions to ensure that AI systems are resilient against attempts by malicious third parties, exploiting vulnerabilities of AI systems, to alter their use, behaviour, or performance, or to compromise their security properties.

Organisational and technical solutions shall include, where appropriate, measures to prevent and control cyberattacks that try to manipulate AI-specific assets, such as training data sets (e.g. data poisoning) or trained models (e.g. adversarial examples), or that try to exploit vulnerabilities in an AI system's digital assets or in the underlying ICT infrastructure. These solutions shall be appropriate to the relevant circumstances and risks. Furthermore, the requested European standards or standardisation deliverables shall take due account of the essential requirements for products with digital elements listed in Annex I of the proposed Regulation on horizontal cybersecurity requirements for products with digital elements (the CRA proposal of 15 September 2022).
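To make one of the threats named above concrete, here is a minimal sketch of an adversarial example crafted with the fast gradient sign method (FGSM). The linear model, its weights, and the epsilon budget are toy assumptions for illustration; nothing here is prescribed by the Standardization Request or the CRA proposal.

```python
# Minimal sketch of an adversarial-example attack (FGSM) on a linear
# classifier, using only NumPy. Model, weights, and epsilon are
# illustrative assumptions, not taken from any cited standard.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy "trained model": logistic regression with fixed weights.
w = np.array([1.5, -2.0, 0.5])
b = 0.1

def predict(x):
    return sigmoid(x @ w + b)

# Fast Gradient Sign Method: perturb the input in the direction that
# increases the loss, bounded by epsilon in the L-infinity norm.
def fgsm(x, y_true, epsilon=0.1):
    p = predict(x)
    # Gradient of binary cross-entropy wrt the input x is (p - y) * w.
    grad_x = (p - y_true) * w
    return x + epsilon * np.sign(grad_x)

x = np.array([0.2, 0.4, -0.1])
print("clean prediction:", predict(x))
print("adversarial prediction:", predict(fgsm(x, y_true=1.0)))
```

Even this tiny, imperceptible perturbation measurably shifts the classifier's output, which is why the Request singles out trained models as assets to protect.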

 

DIN SPEC 92005 Artificial Intelligence - Uncertainty quantification in machine learning


This DIN SPEC is currently under development, with an expected publication date of September 2023.

For further information, see Business plans (din.de), or feel free to reach out to me.
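Since the SPEC itself was unpublished at the time of writing, the following is only a generic sketch of one widely used uncertainty-quantification technique, a bootstrap ensemble whose prediction spread serves as the uncertainty estimate. All data and models below are toy assumptions, not content of DIN SPEC 92005.

```python
# Ensemble-based uncertainty quantification on a toy regression task:
# the spread of predictions across bootstrap-trained models estimates
# how uncertain the ensemble is at each query point.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = sin(x) + noise.
x_train = rng.uniform(-3, 3, size=50)
y_train = np.sin(x_train) + rng.normal(0, 0.1, size=50)

# Train an ensemble of polynomial regressors on bootstrap resamples.
ensemble = []
for _ in range(20):
    idx = rng.integers(0, len(x_train), size=len(x_train))
    ensemble.append(np.polyfit(x_train[idx], y_train[idx], deg=5))

# Predictive mean and standard deviation at query points; the standard
# deviation across ensemble members is the uncertainty estimate.
x_query = np.array([0.0, 2.5, 5.0])  # 5.0 lies outside the training range
preds = np.array([np.polyval(c, x_query) for c in ensemble])
print("mean:", preds.mean(axis=0))
print("std (uncertainty):", preds.std(axis=0))  # largest at x = 5.0
```

As expected, the uncertainty estimate grows sharply for the out-of-distribution query, which is exactly the behaviour a quantification method should surface.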


Classification of AI algorithms


What would be the requirements for an AI algorithm service to be made available to the public?

Answer these three questions (anonymously) here: https://algorithmclassification.org/

1. What would make you trust an algorithm applied to your data?

2. What is your major concern about an algorithm applied to your data?

3. What would be a possible solution for trusting AI algorithms?

Thanks

Human-in-the-loop and OETP


[Image: Human-in-the-loop (HITL) card, front and back]

Human-in-the-loop (HITL) is a design pattern in AI that leverages both human and machine intelligence to create machine-learning models and to bring meaningful automation scenarios into the real world. With this approach, AI systems are designed to augment or enhance human capacity, serving as tools exercised through human interaction.

The Open Ethics Transparency Protocol (OETP) offers the following model for HITL disclosure, which then allows each system's HITL properties to be accessed using the oetp:// URI scheme.

HITL Disclosure
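As a rough illustration of what resolving such a reference might look like, the sketch below parses a hypothetical oetp:// URI. The URI layout and the section names are my own assumptions; the authoritative syntax is defined in the OETP-RI-Scheme specification linked below.

```python
# Hypothetical sketch of resolving HITL properties via an oetp:// URI.
# The assumed layout, oetp://<disclosure-authority>/<system-id>/<section>,
# and the "hitl" section name are illustrative assumptions only.
from urllib.parse import urlparse

def parse_oetp_uri(uri: str) -> dict:
    parts = urlparse(uri)
    if parts.scheme != "oetp":
        raise ValueError("not an oetp URI")
    authority = parts.netloc
    system_id, _, section = parts.path.lstrip("/").partition("/")
    return {"authority": authority, "system_id": system_id, "section": section}

ref = parse_oetp_uri("oetp://disclosure.example.org/chatbot-42/hitl")
print(ref)  # which authority to query for the 'hitl' section of chatbot-42
```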

---

"Big Tech" is an important driver of innovation, however, the consequent concentration of power creates Big Risks for the economy, ethical use of technology, and basic human rights (we consider privacy as one of them).

The decentralization of SBOM (Software Bill of Materials) and data-processing disclosures was earlier described as a key requirement for the Open Ethics Transparency Protocol (OETP).

Fulfilling this requirement allows disclosures to be formed and validated by multiple parties and avoids a harmful concentration of power. To allow efficient decentralization of, and access to, the disclosures of autonomous systems, such as AI systems powered by trained machine-learning models, the vendor (or developer) MUST send requests to a Disclosure Identity Provider, which in turn processes the structured data of the disclosure with a cryptographic signature generator and then stores the integrity hash in persistent storage, for example using a Federated Identity Provider. This process was described in the Open Ethics Transparency Protocol I-D document; however, the exact way to access disclosures was not described there. The specification for the RI scheme described here closes this gap.
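For intuition, here is a minimal sketch of the hash-then-sign step in that flow, assuming canonical JSON as the canonicalization and, purely for brevity, an HMAC as a stand-in for the provider's signature generator. A real Disclosure Identity Provider would use asymmetric keys, and all field names below are illustrative.

```python
# Canonicalize a structured disclosure, hash it, and sign it.
# Key handling and canonicalization are assumptions of this sketch;
# the OETP I-D is the authoritative description of the flow.
import hashlib, hmac, json

disclosure = {
    "system": "chatbot-42",          # hypothetical system identifier
    "sbom": ["libfoo 1.2"],          # hypothetical SBOM entry
    "data_processing": "on-device",
}

# Canonical JSON (sorted keys, no whitespace) so every party hashes
# identical bytes for identical content.
canonical = json.dumps(disclosure, sort_keys=True, separators=(",", ":")).encode()
integrity_hash = hashlib.sha256(canonical).hexdigest()

# Symmetric stand-in for the provider's signature generator; a real
# deployment would use an asymmetric signature scheme.
signature = hmac.new(b"provider-secret-key", canonical, hashlib.sha256).hexdigest()

print("integrity hash:", integrity_hash)
print("signature:", signature)
```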

My recent work builds on top of our previous contribution to the IETF and aims to simplify access to AI disclosures and, more generally, to disclosures of autonomous systems.

https://github.com/OpenEthicsAI/OETP-RI-scheme


Data Labeling and OETP


[Image: Data Labelling card, front and back]

Disclosure of data labeling plays a key role in the transparency of models trained by subject-matter experts or through crowdsourced data-labeling approaches.

We bring transparency to the systemic properties of AI models by developing an Open Ethics Data "Passport" (https://openethics.ai/oedp/). The Data Passport aims to depict the origins of training datasets by bringing a standardized approach to conveying information about data-annotation processes, data labelers' profiles, and the correct scoping of the labeler's job. The Data Passport is an integral part of the disclosures and will be accessible along with other information about autonomous systems.
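To make this concrete, here is a sketch of what a machine-readable Data Passport entry might contain. Every field name below is an illustrative assumption of mine, not the published OEDP schema.

```python
# Hypothetical Data Passport entry: provenance of the training data
# plus how, and by whom, it was annotated. Field names are assumptions.
import json

data_passport = {
    "dataset": "support-tickets-2022",
    "origin": "crowdsourced",
    "annotation_process": {
        "task": "intent labeling",
        "guidelines_version": "1.3",
        "quality_control": "majority vote of 3 labelers",
    },
    "labeler_profile": {
        "count": 120,
        "expertise": "subject-matter experts",
        "languages": ["en", "de"],
    },
}

# Such a record would travel inside the broader disclosure document.
print(json.dumps(data_passport, indent=2))
```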


AI Disclosure


[Image: AI Disclosure card, front and back]

The motivation and the disclosure flow are the same as described in "Human-in-the-loop and OETP" above; the RI scheme specification is available at:

https://github.com/OpenEthicsAI/OETP-RI-scheme

Strengthening the ethical dimension in the development of standards on AVs


My project's goal is to provide an important basis for further work on standards that incorporate ethical considerations in particular. The project focuses on one specific application of AI, namely automated vehicles (AVs). As the acceptance and use of AVs are contingent on the trust people build in this technology, it is of utmost importance that the technology developed be human-centred and trustworthy. At the same time, this should also be reflected in the development of standards on AVs. My activities during the fellowship will make it possible to showcase which elements constitute a trustworthy AV and how these can be translated into concrete building blocks for the development of a standard for trustworthy AVs.


Can the usage of simulators and digital twins enhance trustworthiness in AI?


AI (and more specifically ML) can produce hard-to-explain outputs (e.g., non-linear prediction functions), thus reinforcing the "black-box" perception that humans have of these systems. To address this, there is an ongoing discussion on integrating simulators and digital twins to assist the operation of AI systems. The use of simulators is expected to improve the trustworthiness and reliability of AI. However, many questions remain open and a lot of standardization work remains to be done.

A simulator or a digital twin can be considered a safe environment in which ML models can be trained, tested, evaluated, and validated. But the insights obtained in a simulation domain are tied to how realistic the simulations are and how closely their characterizations match real phenomena.
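A toy sketch of that caveat, assuming an idealized simulator and a slightly different "reality": a model fitted purely in simulation looks flawless there, and the sim-to-real gap only becomes visible when the model is validated against the real phenomenon.

```python
# Fit a model entirely in simulation, then measure how the sim-to-real
# gap degrades it. The "simulator" and "reality" are toy assumptions.
import numpy as np

rng = np.random.default_rng(1)

def simulator(x):          # idealized physics used for training
    return 2.0 * x

def reality(x):            # real phenomenon the simulator approximates
    return 2.0 * x + 0.3 * np.sin(5 * x)

# Train (least squares) only on simulated data.
x = rng.uniform(0, 1, 200)
slope = np.sum(x * simulator(x)) / np.sum(x * x)

# Validate against both domains: the gap quantifies how much the
# simulator's fidelity limits the trust we can place in the model.
x_test = rng.uniform(0, 1, 200)
sim_err = np.mean((slope * x_test - simulator(x_test)) ** 2)
real_err = np.mean((slope * x_test - reality(x_test)) ** 2)
print(f"error in simulation: {sim_err:.4f}, error in reality: {real_err:.4f}")
```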

Highly related to this, at the last ITU-T Study Group 13 meeting (4-15 July 2022, Geneva), a recommendation on an ML sandbox for future networks, entitled "Architectural framework for Machine Learning Sandbox in future networks including IMT-2020", was consented for approval (more details can be found here). This is one of the first standards of its kind and opens the door to an exciting new field.

 

 


Revision of the IEEE 1855 standard on the Fuzzy Markup Language


Dear colleagues,

The IEEE standard on the Fuzzy Markup Language (IEEE 1855) is under revision.

Several extensions are under development to cover additional systems, e.g., type-2 fuzzy systems and learning capabilities.

Abstract of the previous standard:

" A new specification language, named Fuzzy Markup Language (FML), is presented in this standard, exploiting the benefits offered by eXtensible Markup Language (XML) specifications and related tools in order to model a fuzzy logic system in a human-readable and hardware independent way. Therefore, designers of industrial fuzzy systems are provided with a unified and high-level methodology for describing interoperable fuzzy systems. The W3C XML Schema definition language is used by this standard to define the syntax and semantics of the FML programs."

 

Please consider joining the effort: https://sagroups.ieee.org/1855/

More information on the previous standard: https://standards.ieee.org/ieee/2976/10522/

New Standard on eXplainable Artificial Intelligence under development at IEEE


Dear colleagues,

The IEEE SA has ongoing work on XAI (eXplainable Artificial Intelligence).

Abstract of the work:

"This standard defines mandatory and optional requirements and constraints that need to be satisfied for an AI method, algorithm, application or system to be recognized as explainable. Both partially explainable and fully or strongly explainable methods, algorithms and systems are defined. XML Schema are also defined."

 

Please consider joining the effort: https://sagroups.ieee.org/2976/

More information at: https://standards.ieee.org/ieee/2976/10522/