Guideline for the development of deep learning image recognition systems
Procedure for data collection, structuring of data for learning AI image recognition, process structure of learning experiments and quality assurance
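The process structure sketched above separates data collection, learning experiments and quality assurance. A minimal illustration of one such step is the partitioning of a labelled image dataset into training, validation and test sets; the 70/15/15 ratio, the file names and the use of Python's standard library are assumptions for illustration, not requirements of the guideline.

```python
import random

# Illustrative labelled-image file names (hypothetical data).
samples = [f"img_{i:04d}.png" for i in range(1000)]
random.Random(42).shuffle(samples)  # fixed seed for a reproducible experiment

n = len(samples)
train = samples[: int(0.70 * n)]            # used to fit the model
val   = samples[int(0.70 * n): int(0.85 * n)]  # used for model selection
test  = samples[int(0.85 * n):]             # held out for quality assurance

# The held-out test partition must not influence training or model
# selection, so the three partitions have to be disjoint and complete.
assert len(train) + len(val) + len(test) == n
assert not (set(train) & set(val)) and not (set(val) & set(test))
```

Keeping the test partition untouched until final evaluation is what allows the quality-assurance step to give an unbiased estimate of recognition performance.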
This document describes the framework of the big data reference architecture and the process by which a user of the document can apply it to their particular problem domain.
The purpose of the standard is to extend the CORA ontology to represent more specific concepts and axioms that are commonly used in Autonomous Robotics. The extended ontology specifies the domain knowledge needed to build autonomous systems composed of robots that can operate in all classes of unstructured environments. The standard provides a unified way of representing Autonomous Robotics system architectures across different R&A domains, including, but not limited to, aerial, ground, surface, underwater, and space robots. This allows unambiguous identification of the basic hardware and software components necessary to provide a robot, or a group of robots, with autonomy (i.e. endowing robots with the ability to perform desired tasks in unstructured environments without continuous explicit human guidance).
This standard is a logical extension to IEEE 1872-2015 Standard for Ontologies for Robotics and Automation. The standard extends the CORA ontology by defining additional ontologies appropriate for Autonomous Robotics (AuR) relating to: 1) The core design patterns specific to AuR in common R&A sub-domains; 2) General ontological concepts and domain-specific axioms for AuR; and 3) General use cases and/or case studies for AuR.
The present document specifies methods for testing whether TETRA Voice plus Data (V+D) Base Station (BS) and Mobile Station (MS) equipment and TETRA Direct Mode Operation (DMO) equipment achieve the performance specified in ETSI EN 300 392-2 [1]. Specific test methods for DMO equipment are recommended in annex F of the present document. The purpose of these specifications is to provide sufficient quality of radio transmission and reception for equipment operating in a TETRA system and to minimize harmful interference to other equipment. The present document is applicable to TETRA systems operating at radio frequencies in the range of 137 MHz to 1 GHz. Versions V3.3.1 [i.5] and earlier of the present document specified the methods used for type testing. The minimum technical characteristics of TETRA V+D BS and MS equipment and TETRA DMO equipment, and the radio test methods to be used for providing a presumption of conformity, are now specified in ETSI EN 303 758.
The present document summarizes and analyses existing and potential mitigations against threats to AI-based systems as discussed in ETSI GR SAI 004 [i.1]. The goal is to provide a technical survey of methods for mitigating threats introduced by adopting AI into systems. The survey sheds light on available methods of securing AI-based systems against known or potential security threats. It also addresses the security capabilities, challenges, and limitations of adopting these mitigations for AI-based systems in certain potential use cases.
The present document describes the problem of securing AI-based systems and solutions, with a focus on machine learning, and the challenges relating to confidentiality, integrity and availability at each stage of the machine learning lifecycle. It also describes some of the broader challenges of AI systems including bias, ethics and explainability. A number of different attack vectors are described, as well as several real-world use cases and attacks.
AI-specific requirements with regard to robustness, especially adversarial robustness and corruption robustness
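The distinction drawn here can be made concrete with a small sketch: corruption robustness concerns accuracy under random input noise, while adversarial robustness concerns accuracy under worst-case perturbations of the same magnitude. The toy linear classifier, the FGSM-style perturbation and all numeric values below are illustrative assumptions, not prescribed by the requirement.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear classifier sign(w . x) on 2-D points (illustrative only).
w = np.array([1.0, -1.0])
X = rng.normal(size=(200, 2))
y = np.sign(X @ w)  # ground-truth labels defined by the same rule

def accuracy(X_eval, y_true):
    return float(np.mean(np.sign(X_eval @ w) == y_true))

acc_clean = accuracy(X, y)  # 1.0 by construction

# Corruption robustness: accuracy under random Gaussian input noise.
X_corrupt = X + rng.normal(scale=0.3, size=X.shape)
acc_corrupt = accuracy(X_corrupt, y)

# Adversarial robustness: accuracy under an FGSM-style worst-case step
# of the same magnitude, moving each input against its own label.
eps = 0.3
X_adv = X - eps * y[:, None] * np.sign(w)[None, :]
acc_adv = accuracy(X_adv, y)
```

For a fixed perturbation budget, the worst-case (adversarial) accuracy is at most the random-noise (corruption) accuracy, which is why the two robustness notions impose different requirements.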
Software quality: Quality assessment for AI-based systems (see also 4.1.1 and 4.3.1.4)
Assessment of AI systems: Metrics for the performance capability of AI
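Performance metrics of the kind such an assessment relies on can be illustrated with a minimal sketch computing accuracy, precision, recall and F1 score from a binary confusion matrix; the label vectors are hypothetical and the metric set is an assumption, since the entry above does not list specific metrics.

```python
# Hypothetical predictions vs. ground truth for a binary classifier
# (1 = positive class, 0 = negative class); values are illustrative only.
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 0, 1, 0]

# Confusion-matrix counts.
tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)

accuracy  = (tp + tn) / len(y_true)               # overall hit rate
precision = tp / (tp + fp)                        # reliability of positive calls
recall    = tp / (tp + fn)                        # coverage of actual positives
f1        = 2 * precision * recall / (precision + recall)  # harmonic mean
```

Reporting precision and recall alongside plain accuracy matters because, on imbalanced data, accuracy alone can mask poor performance on the minority class.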