ETSI - GR ENI 003 - Experiential Networked Intelligence (ENI) - Context-Aware Policy Management Gap Analysis
GR ENI 001 Experiential Networked Intelligence (ENI); ENI use cases
GS ENI 002 Experiential Networked Intelligence (ENI); ENI requirements
SG20 develops international standards to enable the coordinated development of IoT technologies, including machine-to-machine communications and ubiquitous sensor networks. A central part of this study is the standardization of end-to-end architectures for IoT, and mechanisms for the interoperability of IoT applications and datasets employed by various vertically oriented industry sectors.
The ITU-T Focus Group on Artificial Intelligence for Health (AI4H) was established by ITU-T Study Group 16 at its meeting in Ljubljana, Slovenia, 9-20 July 2018. The Focus Group works in partnership with the World Health Organization (WHO) to establish a standardized assessment framework for the evaluation of AI-based methods for health, such as diagnosis, triage or treatment decisions. Participation in the FG-AI4H is free of charge and open to all.
The FG-AI4H will pursue the following broad sets of goals:
1. To be a platform to facilitate a global dialogue for AI for health.
2. To collaborate with WHO in developing appropriate national guidance documents for establishing a policy-enabled environment to ensure the safe and appropriate use of AI in health.
3. To identify standardization opportunities for a benchmarking framework that will enable broad use of AI for health.
4. To create a technical framework and standardization approach of AI for health algorithm assessment and validation.
5. To develop open benchmarks, targeted to become international standards, and serve as guidance for the assessment of new AI for health algorithms.
6. To develop, together with WHO, an assessment framework for an evaluation and validation process of AI for health.
7. To collaborate with stakeholders to monitor and collect feedback from the use of AI algorithms in healthcare delivery environments, and to feed that experience back into the development of improved international standards.
8. To generate transparent documentation by creating reports and specifications that enable external assessment of the benchmarking framework and the benchmarked AI-for-health methods.
The Focus Group on Cybersecurity (CSCG) will support CEN and CENELEC in exploring ways and means of supporting the growth of the Digital Single Market. To this end, the CSCG will analyse technology developments and develop a set of recommendations to its parent bodies for international standards setting, ensuring a proper level playing field for businesses and public authorities.
The Group will prepare a European roadmap on cybersecurity standardization and will actively support global initiatives on cybersecurity standards that comply with EU requirements, with a view to developing trustworthy ICT products, systems and services.
In 2016, the Focus Group looked into the different usages and meanings of the term 'cybersecurity' among various stakeholders and standards, and finalized a document, Definition of Cybersecurity, providing an overview of the overlaps and gaps among those definitions with a view to moving towards a common understanding of the cybersecurity domain.
TC CYBER is recognized as a major trusted centre of expertise offering market-driven cybersecurity standardization solutions, advice and guidance to users, manufacturers, network, infrastructure and service operators, and regulators. ETSI TC CYBER works closely with stakeholders to develop standards that increase privacy and security for organizations and citizens across Europe and worldwide. We provide standards that are applicable across different domains, covering the security of infrastructures, devices, services and protocols, as well as security tools and techniques.
Some of our latest standards have been in network security (implementing the NIS Directive TR 103 456, the Middlebox Security Protocol TS 103 523 series, a survey of network gateways TR 103 421), cryptography for access control and personally identifying information (Attribute-Based Encryption TS 103 458 and TS 103 532), Critical Security Controls (the TR 103 305 series), protecting PII in line with GDPR (TR 103 370), Quantum-Safe Key Exchanges (TR 103 570), and more. You can see a full list on our standards page.
In addition to TC CYBER, other ETSI groups also work on standards for cross-domain cybersecurity; the security of infrastructures, devices, services and protocols; and security tools and techniques. More information on the areas they address can be found in the related technology pages.
Engineers, technologists and other project stakeholders need a methodology for identifying, analysing and reconciling ethical concerns of end users at the beginning of systems and software life cycles.
The purpose of this standard is to enable the pragmatic application of a Value-Based System Design methodology, demonstrating that conceptual analysis of values and an extensive feasibility analysis can help refine ethical system requirements in systems and software life cycles.
This standard will provide engineers and technologists with an implementable process aligning innovation management processes, IS system design approaches and software engineering methods to minimize ethical risk for their organizations, stakeholders and end users.
A key concern over autonomous systems (AS) is that their operation must be transparent to a wide range of stakeholders, for different reasons:
1. For users, transparency is important because it builds trust in the system by providing a simple way to understand what the system is doing and why. Taking a care robot as an example, transparency means the user can quickly understand what the robot might do in different circumstances; if the robot does anything unexpected, the user should be able to ask it 'why did you just do that?'.
2. For validation and certification of an AS, transparency is important because it exposes the system's processes to scrutiny.
3. If accidents occur, the AS will need to be transparent to an accident investigator; the internal processes that led to the accident need to be traceable.
4. Following an accident, lawyers or other expert witnesses who may be required to give evidence require transparency to inform their evidence.
5. For disruptive technologies, such as driverless cars, a certain level of transparency to wider society is needed in order to build public confidence in the technology.
For designers, the standard will provide a guide for self-assessing transparency during development and suggest mechanisms for improving it (for instance, the need for secure storage of sensor and internal state data, comparable to a flight data recorder or 'black box').
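The "black box" mechanism mentioned above can be illustrated with a minimal, hypothetical sketch (the class and field names are our own illustration, not part of the standard): chaining each log entry to the previous one with a cryptographic hash makes after-the-fact tampering with recorded sensor and internal-state data detectable by an accident investigator.

```python
import hashlib
import json
import time


class BlackBoxLog:
    """Append-only, tamper-evident log of sensor and internal-state data.

    Each entry carries the SHA-256 hash of the previous entry, so any
    later modification of a stored record breaks the chain and is
    detectable on audit. (Illustrative sketch only.)
    """

    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []
        self._prev_hash = self.GENESIS

    def record(self, sensor_data, internal_state, timestamp=None):
        """Append one snapshot and return its hash."""
        entry = {
            "t": time.time() if timestamp is None else timestamp,
            "sensors": sensor_data,
            "state": internal_state,
            "prev": self._prev_hash,
        }
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = digest
        self._prev_hash = digest
        self.entries.append(entry)
        return digest

    def verify(self):
        """Recompute the hash chain; True iff no entry was altered."""
        prev = self.GENESIS
        for e in self.entries:
            if e["prev"] != prev:
                return False
            body = {k: e[k] for k in ("t", "sensors", "state", "prev")}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if digest != e["hash"]:
                return False
            prev = digest
        return True
```

In a real deployment the log would also need secure, write-once storage; the hash chain only guarantees that tampering is detectable, not that records cannot be destroyed.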
The purpose of this standard is to have one overall methodological approach that specifies practices to manage privacy issues within the systems/software engineering life cycle processes.
This standard is designed to provide individuals or organizations creating algorithms, largely with regard to autonomous or intelligent systems, with certification-oriented methodologies that provide clearly articulated accountability and clarity around how algorithms target, assess and influence their users and stakeholders. Certification under this standard will allow algorithm creators to communicate to users and regulatory authorities that up-to-date best practices were used in the design, testing and evaluation of the algorithm to avoid unjustified differential impact on users.
This standard is designed to provide organizations handling child and student data with governance-oriented processes and certifications guaranteeing the transparency and accountability of their actions as they relate to the safety and wellbeing of children, their parents, the educational institutions where they are enrolled, and the communities and societies where they spend their time, both on and offline. It is also designed to help parents and educators, recognizing that most individuals may not be tech-savvy enough to understand the underlying issues of data usage, but must still be properly informed about the safety of their children's data and provided with tools and services offering proper opportunities for content-based, pre-informed choice regarding their family's data.