Artificial Intelligence


IEEE - P7000 - Model Process for Addressing Ethical Concerns During System Design

The standard establishes a process model by which engineers and technologists can address ethical considerations throughout the various stages of system initiation, analysis, and design. Expected process requirements include management and engineering views of new IT product development, computer ethics and IT system design, value-sensitive design, and stakeholder involvement in ethical IT system design.

IEEE - P7009 - Standard for Fail-Safe Design of Autonomous and Semi-Autonomous Systems

This standard establishes a practical, technical baseline of specific methodologies and tools for the development, implementation, and use of effective fail-safe mechanisms in autonomous and semi-autonomous systems. The standard includes (but is not limited to): clear procedures for measuring, testing, and certifying a system's ability to fail safely on a scale from weak to strong, and instructions for improvement in the case of unsatisfactory performance. The standard serves as the basis for developers, as well as users and regulators, to design fail-safe mechanisms in a robust, transparent, and accountable manner.

IEEE - ASV WG_P7001 - Autonomous Systems Validation Working Group_P7001

A key concern over autonomous systems (AS) is that their operation must be transparent to a wide range of stakeholders, for different reasons. (i) For users, transparency is important because it builds trust in the system by providing a simple way for the user to understand what the system is doing and why. If we take a care robot as an example, transparency means the user can quickly understand what the robot might do in different circumstances; and if the robot does anything unexpected, the user should be able to ask the robot 'why did you just do that?'. (ii) For validation and certification of an AS, transparency is important because it exposes the system's processes for scrutiny. (iii) If accidents occur, the AS will need to be transparent to an accident investigator; the internal processes that led to the accident need to be traceable. (iv) Following an accident, lawyers or other expert witnesses, who may be required to give evidence, require transparency to inform their evidence. And (v) for disruptive technologies, such as driverless cars, a certain level of transparency to wider society is needed in order to build public confidence in the technology. For designers, the standard will provide a guide for self-assessing transparency during development and suggest mechanisms for improving transparency (for instance, the need for secure storage of sensor and internal state data, comparable to a flight data recorder or black box).

IEEE - WG-CSDG - Working Group for Child and Student Data Governance

This standard is designed to provide organizations handling child and student data with governance-oriented processes and certifications guaranteeing the transparency and accountability of their actions as they relate to the safety and wellbeing of children, their parents, the educational institutions where they are enrolled, and the communities and societies where they spend their time, both on and offline. It is also designed to help parents and educators, with the understanding that most individuals may not be tech-savvy enough to grasp the underlying issues of data usage, but must still be properly informed about the safety of their children's data and provided with tools and services that offer proper opportunities for content-based, pre-informed choice regarding their family's data.

IEEE - P7013 - Inclusion and Application Standards for Automated Facial Analysis Technology

The standard provides phenotypic and demographic definitions that technologists and auditors can use to assess the diversity of face data used for training and benchmarking algorithmic performance, establishes accuracy reporting and data diversity protocols/rubrics for automated facial analysis, and outlines a rating system to determine contexts in which automated facial analysis technology should not be used.

Akoma Ntoso Version 1.0

The Akoma Ntoso standard distinguishes between concepts regarding the description and identification of legal documents, their content, and the context in which they are used. Names are used to associate document representations with concepts so that documents can be “read/understood” by a machine, thus allowing sophisticated services that are impossible to attain with documents containing only typographical information, such as documents created in word-processing applications. To make documents machine-readable, every part with a relevant meaning and role must have a “name” (or “tag”) that machines can read. The content is marked up as precisely as possible according to the legal analysis of the text. This requires precisely identifying the boundaries of the different text segments, providing an element name that best describes the text in each situation, and providing a correct identifier for each labelled fragment.
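As a rough illustration of why named, identified fragments matter, consider the minimal sketch below. The fragment is a simplified, hypothetical example in the spirit of Akoma Ntoso element naming (section, num, heading, p) rather than a complete conforming document, and the identifier attribute and values shown are assumptions for the sake of the example. Once every text segment carries a name and an identifier, a machine can address a specific provision directly instead of seeing only flat typography:

```python
import xml.etree.ElementTree as ET

# Simplified, hypothetical Akoma Ntoso-style fragment: each meaningful
# text segment has an element name and an identifier (here "eId").
fragment = """
<act>
  <body>
    <section eId="sec_1">
      <num>1</num>
      <heading>Definitions</heading>
      <content>
        <p eId="sec_1__p_1">In this Act, "vehicle" means a conveyance.</p>
      </content>
    </section>
  </body>
</act>
"""

root = ET.fromstring(fragment)

# Because each fragment is named and identified, a machine can locate
# a specific labelled segment directly by its identifier.
para = root.find(".//p[@eId='sec_1__p_1']")
print(para.text)

# The same structure lets services answer questions such as
# "what is the heading of section 1?" without any layout guessing.
heading = root.find(".//section[@eId='sec_1']/heading")
print(heading.text)
```

A word-processing document would carry only fonts and spacing here; the markup is what makes the section boundary, its heading, and the quoted definition individually retrievable.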

IEEE - ALGB-WG - Algorithmic Bias Working Group

This standard is designed to provide individuals or organizations creating algorithms, largely for autonomous or intelligent systems, with certification-oriented methodologies that offer clearly articulated accountability and clarity around how algorithms target, assess, and influence the users and stakeholders of said algorithms. Certification under this standard will allow algorithm creators to communicate to users and regulatory authorities that up-to-date best practices were used in the design, testing, and evaluation of the algorithm to avoid unjustified differential impact on users.