Artificial Intelligence




IEEE - P7009 - Standard for Fail-Safe Design of Autonomous and Semi-Autonomous Systems

This standard establishes a practical, technical baseline of specific methodologies and tools for the development, implementation, and use of effective fail-safe mechanisms in autonomous and semi-autonomous systems. The standard includes (but is not limited to): clear procedures for measuring, testing, and certifying a system's ability to fail safely on a scale from weak to strong, and instructions for improvement in the case of unsatisfactory performance. The standard serves as the basis for developers, as well as users and regulators, to design fail-safe mechanisms in a robust, transparent, and accountable manner.

IEEE - ASV WG_P7001 - Autonomous Systems Validation Working Group_P7001

A key concern over autonomous systems (AS) is that their operation must be transparent to a wide range of stakeholders, for different reasons. (i) For users, transparency is important because it builds trust in the system by providing a simple way for the user to understand what the system is doing and why. Taking a care robot as an example, transparency means the user can quickly understand what the robot might do in different circumstances; if the robot does anything unexpected, the user should be able to ask the robot 'why did you just do that?'. (ii) For validation and certification of an AS, transparency is important because it exposes the system's processes to scrutiny. (iii) If an accident occurs, the AS will need to be transparent to an accident investigator; the internal processes that led to the accident need to be traceable. (iv) Following an accident, lawyers or other expert witnesses who may be required to give evidence need transparency to inform that evidence. And (v) for disruptive technologies such as driverless cars, a certain level of transparency to wider society is needed in order to build public confidence in the technology. For designers, the standard will provide a guide for self-assessing transparency during development and suggest mechanisms for improving it (for instance, secure storage of sensor and internal state data, comparable to a flight data recorder or black box).

IEEE - WG-CSDG - Working Group for Child and Student Data Governance

This standard is designed to provide organizations handling child and student data with governance-oriented processes and certifications that guarantee the transparency and accountability of their actions as they relate to the safety and wellbeing of children, their parents, the educational institutions where they are enrolled, and the communities and societies where they spend their time, both on- and offline. It is also designed to help parents and educators, with the understanding that most individuals may not be tech-savvy enough to grasp the underlying issues of data usage but must still be properly informed about the safety of their children's data and provided with tools and services that offer genuine opportunities for content-based, pre-informed choice regarding their family's data.

IEEE - P7013 - Inclusion and Application Standards for Automated Facial Analysis Technology

The standard provides phenotypic and demographic definitions that technologists and auditors can use to assess the diversity of face data used for training and benchmarking algorithmic performance, establishes accuracy reporting and data diversity protocols/rubrics for automated facial analysis, and outlines a rating system to determine contexts in which automated facial analysis technology should not be used.
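The accuracy reporting described above rests on disaggregating performance by demographic group rather than reporting a single aggregate figure. A minimal sketch of that idea, with invented group labels and toy data (the standard itself defines the actual phenotypic and demographic categories and reporting rubrics):

```python
# Illustrative sketch only: compute per-group accuracy so that
# performance disparities between demographic groups become visible.
# Group names and records here are hypothetical toy data.
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, predicted, actual) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

results = accuracy_by_group([
    ("group_a", 1, 1), ("group_a", 0, 1),
    ("group_b", 1, 1), ("group_b", 0, 0),
])
# results: {"group_a": 0.5, "group_b": 1.0}
```

An aggregate accuracy of 0.75 would hide the gap between the two groups; the disaggregated report exposes it, which is precisely what a diversity/accuracy rubric needs as input.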

Akoma Ntoso Version 1.0

The Akoma Ntoso standard distinguishes between concepts regarding the description and identification of legal documents, their content, and the context in which they are used. Names are used to associate document representations with concepts so that documents can be “read/understood” by a machine, allowing sophisticated services that are impossible to attain with documents containing only typographical information, such as documents created in word-processing applications. To make documents machine-readable, every part with a relevant meaning and role must have a “name” (or “tag”) that machines can read. The content is marked up as precisely as possible according to the legal analysis of the text. This requires precisely identifying the boundaries of the different text segments, providing an element name that best describes the text in each situation, and providing a correct identifier for each labelled fragment.
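The markup scheme described above can be sketched as follows: each meaningful segment of a legal text gets a named element and an identifier so a machine can locate and interpret it. The element and attribute names below loosely follow Akoma Ntoso conventions (`section`, `num`, `heading`, `eId`), but the fragment is illustrative, not a schema-valid Akoma Ntoso document:

```python
# Sketch: give each text segment a "name" (tag) and an identifier,
# so the structure carries legal meaning rather than just typography.
import xml.etree.ElementTree as ET

section = ET.Element("section", {"eId": "sec_1"})
ET.SubElement(section, "num").text = "1."
ET.SubElement(section, "heading").text = "Definitions"
content = ET.SubElement(section, "content")
ET.SubElement(content, "p").text = "In this Act, a term has the meaning given here."

print(ET.tostring(section, encoding="unicode"))
```

A word-processor file would encode only the bold heading and the paragraph break; here the machine can instead ask "what is section sec_1's heading?" and get a reliable answer.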

IEEE - ALGB-WG - Algorithmic Bias Working Group

This standard is designed to provide individuals or organizations creating algorithms, largely with regard to autonomous or intelligent systems, with certification-oriented methodologies that provide clearly articulated accountability and clarity around how algorithms target, assess, and influence the users and stakeholders of those algorithms. Certification under this standard will allow algorithm creators to communicate to users and regulatory authorities that up-to-date best practices were used in the design, testing, and evaluation of the algorithm to avoid unjustified differential impact on users.

Emilia Tantar

Impact on SMEs (7th Open Call)
A clear, actionable EN AI Conformity Assessment standard makes compliance with the EU AI Act far easier and less costly for smaller companies. With a coordinated set of standards instead of a fragmented landscape, SMEs save time, reduce legal uncertainty, and avoid investing in multiple overlapping compliance tools. This streamlined approach supports faster product deployment, lowers administrative burden, and enables SMEs to build trustworthy AI solutions that meet European requirements from day one.
Impact on society (7th Open Call)
A unified set of AI conformity standards strengthens public trust in how AI systems are developed, assessed, and deployed. By making risk management transparent and consistent, these standards help ensure that AI used in critical domains is safe, fair, and reliable. A coordinated framework also enables early detection and mitigation of societal risks, fostering a resilient AI ecosystem where innovation happens responsibly and benefits reach citizens, public services, and the broader European economy.
Organization
Luxembourg House of Cybersecurity
Proposal Title (7th Open Call)
Progress and lead deliver to enquiry of EN AI Conformity assessment and supporting standards
StandICT.eu Year
2026

Luis Moran Abad

Description of Activities

I focus on the development of a new standard Work Model type (Technical Specification) that facilitates the consolidation, integration, and implementation of requirements, helping organisations comply with AI laws, regulations, and standards more effectively. The objective is to guide and support organisations in meeting the multiple requirements imposed by laws, regulations, and standards on AI-based systems. The initiative will not create new requirements but will provide assistance and guidance to organisations on how to consolidate, integrate, implement, and audit different sources of requirements.

Impact on SMEs (7th Open Call)
The AI-Compliance initiative aims to develop a new standard to help European organisations comply with complex AI-related laws, regulations, and standards. The standard will be especially valuable for small and medium-sized enterprises (SMEs) because these organisations often lack the internal resources, specialised staff, and structured processes needed to navigate complex regulatory environments.
SMEs frequently struggle to interpret legal and technical requirements, allocate time for implementation, and ensure ongoing adherence. A practical standard would provide a clear framework for implementation, reducing the cost and effort of compliance.
Impact on society (7th Open Call)
With a new standard to support regulatory compliance, the European Union can promote its values and ethics in AI without fear of crippling economic development. For the EU, it is primarily about finding ways to seize the opportunities offered by AI in a way that is human-centred, ethical, safe, and consistent with core European values.
Proposal Title (7th Open Call)
AI-Compliance: Artificial Intelligence Compliance Enabler new standard Guidelines and Work Model
StandICT.eu Year
2026