TWG AI Artificial Intelligence

Human-in-the-loop and OETP

Body

Human-in-the-loop (HITL) - front

Human-in-the-loop (HITL) - back

Human-in-the-loop (HITL) is a design pattern in AI that leverages both human and machine intelligence to create machine learning models and to bring meaningful automation scenarios into the real world. With this approach, AI systems are designed to augment or enhance human capacity, serving as tools to be exercised through human interaction.

The Open Ethics Transparency Protocol (OETP) offers the following model for HITL disclosure, which then allows each system's HITL properties to be accessed via the oetp:// URI scheme.

HITL Disclosure
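As an illustration only (the normative OETP schema is not reproduced here), a hypothetical HITL disclosure entry might look like the Python sketch below; every field name and value is an assumption made for this example.

    # Hypothetical HITL disclosure entry. All field names and values are
    # illustrative assumptions, not the normative OETP schema.
    EXAMPLE_HITL_DISCLOSURE = {
        "system": "example-loan-scoring-service",      # hypothetical system identifier
        "hitl": {
            "level": "human-on-the-loop",               # e.g. in-the-loop / on-the-loop / in-command
            "intervention_points": ["pre-decision review", "appeal handling"],
            "human_override": True,                     # can a human reverse the automated outcome?
        },
        "disclosure_uri": "oetp://example.org/systems/loan-scoring/hitl",  # hypothetical oetp:// identifier
    }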

---

"Big Tech" is an important driver of innovation, however, the consequent concentration of power creates Big Risks for the economy, ethical use of technology, and basic human rights (we consider privacy as one of them).

Decentralization of SBOM (Software Bill of Materials) and data-processing disclosures was earlier described as a key requirement for the Open Ethics Transparency Protocol (OETP).

Fulfillment of this requirement allows disclosures to be formed and validated by multiple parties and avoids a harmful concentration of power. To enable efficient decentralization of, and access to, the disclosures of autonomous systems, such as AI systems powered by trained machine learning models, the vendor (or developer) MUST send requests to a Disclosure Identity Provider, which in turn processes the structured disclosure data with a cryptographic signature generator and then stores the integrity hash in persistent storage, for example using a Federated Identity Provider. This process is described in the Open Ethics Transparency Protocol I-D document; however, that document does not specify exactly how disclosures are accessed. The specification for the RI scheme described here closes this gap.
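A minimal Python sketch of the hashing and storage step described above, assuming SHA-256 for the integrity hash and an in-memory dictionary as a stand-in for federated persistent storage; the function names are illustrative, and the asymmetric signing performed by the cryptographic signature generator is deliberately omitted:

    import hashlib
    import json

    PERSISTENT_STORE: dict[str, bytes] = {}  # stand-in for federated persistent storage

    def canonicalise(disclosure: dict) -> bytes:
        # Deterministic serialisation so the same disclosure always hashes identically.
        return json.dumps(disclosure, sort_keys=True, separators=(",", ":")).encode("utf-8")

    def register_disclosure(disclosure: dict) -> str:
        # A real Disclosure Identity Provider would also sign the digest with its
        # private key (the "cryptographic signature generator"); omitted here.
        payload = canonicalise(disclosure)
        digest = hashlib.sha256(payload).hexdigest()
        PERSISTENT_STORE[digest] = payload
        return digest

    def verify_disclosure(disclosure: dict, expected_digest: str) -> bool:
        # Validation by any party: recompute the hash and compare.
        return hashlib.sha256(canonicalise(disclosure)).hexdigest() == expected_digest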

My recent work builds on our previous contribution to the IETF and aims to simplify access to AI disclosures, and more generally to disclosures of autonomous systems.

https://github.com/OpenEthicsAI/OETP-RI-Scheme
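As a sketch of how a client could dereference an oetp:// identifier, the snippet below maps it to an HTTPS endpoint serving the disclosure as JSON; this mapping, the host name, and the path layout are assumptions for illustration, not necessarily what the OETP-RI-Scheme repository specifies:

    import json
    from urllib.parse import urlparse
    from urllib.request import urlopen

    def resolve_oetp(uri: str) -> dict:
        # Hypothetical resolution rule: oetp://<authority>/<path> is served
        # over HTTPS by the same authority at the same path.
        parts = urlparse(uri)
        if parts.scheme != "oetp":
            raise ValueError(f"expected an oetp:// URI, got {uri!r}")
        https_url = f"https://{parts.netloc}{parts.path or '/'}"
        with urlopen(https_url) as response:
            return json.load(response)

    # Usage (hypothetical host):
    #   disclosure = resolve_oetp("oetp://example.org/systems/loan-scoring/hitl")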

Tags

Public resources (databases) of AI specs and documents should be reviewed

Body

ETSI Wiki on Cybersec and (some) AI = https://cyberpublicwiki.etsi.org/index.php?title=ETSI

GAIA-X Whitepaper on Inventorying Cloud/Data/AI = https://www.data-infrastructure.eu/GAIAX/Redaktion/EN/Publications/gaia-x-policy-rules-and-architecture-of-standards.pdf?__blob=publicationFile&v=5

IEC has new tool for landscaping standards, just starting = https://mapping.iec.ch/#/maps

ITU-T has a (draft, not public) landscaping of AI standards, with an update due soon = https://www.itu.int/md/T17-SG13-200720-TD-WP2-0608 (2020-07-28)

(Further resources to be added as discovered).

Comments

EUOS WG-AI and JRC Update

Body

Thanks to the hard work of the EUOS Technical Working Group on AI (TWG-AI) Chair Lindsay Frost (ETSI, NEC) and the executive committee members Fergal Finn, Karl Grün, Jens Gayko, Sebastien Hallensleben, Stefan Weisgerber and Stefano Nativi, the EUOS produced its first deliverable in the form of a draft AI Risk Landscape document, which Lindsay submitted to the EC DG Joint Research Centre (JRC). A joint EUOS and DG JRC initiative will complete a second draft in March for the European Commission's AI Watch and, in parallel, complete the Global AI Standards Landscape Assessment, which is under way. Contact Lindsay Frost <lindsay.frost@neclab.eu> for further information.