This recommended practice specifies governance criteria (such as safety, transparency, accountability, responsibility, and minimizing bias) and process steps for effective implementation, performance auditing, training, and compliance in the development or use of artificial intelligence within organizations.
This recommended practice establishes an evaluation framework for the capabilities of artificial intelligence dialogue systems such as chatbots, consulting terminals, and operation interfaces. The recommended practice defines and classifies the types and levels of intelligence capabilities according to a checklist of criteria. The checklist tables describe the criteria used to determine the level that a dialogue system achieves, based on an analysis of its behavior and performance.
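Checklist-driven level assignment of the kind described above can be sketched as follows. The criteria names, the three levels, and the "all criteria at a level must be met" rule are illustrative assumptions for demonstration only; the standard defines its own checklist tables.

```python
# Illustrative sketch of checklist-based capability grading for a dialogue
# system. The criteria, levels, and grading rule below are hypothetical
# assumptions, not taken from the standard.

# Hypothetical checklist: each level lists criteria that must all be observed.
CHECKLIST = {
    1: ["responds to direct questions"],
    2: ["responds to direct questions",
        "maintains context across turns"],
    3: ["responds to direct questions",
        "maintains context across turns",
        "handles ambiguous or out-of-domain input"],
}

def achieved_level(observed_behaviors: set[str]) -> int:
    """Return the highest level whose criteria are all observed (0 if none)."""
    level = 0
    for lvl in sorted(CHECKLIST):
        if all(c in observed_behaviors for c in CHECKLIST[lvl]):
            level = lvl
    return level

print(achieved_level({"responds to direct questions",
                      "maintains context across turns"}))  # 2
```

A real assessment would replace the behavior strings with the standard's own criteria and derive the observed set from test-session transcripts.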
Test specifications with a set of indicators for common corruptions and adversarial attacks, which can be used to evaluate the robustness of artificial intelligence-based image recognition services, are provided in this standard. Robustness attack threats are also specified, and an assessment framework is established to evaluate the robustness of artificial intelligence-based image recognition services under various settings.
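A common-corruption robustness indicator of this general shape can be sketched as below: apply a corruption to the inputs and compare accuracy before and after. The toy classifier, the Gaussian-noise corruption, and the dataset are all hypothetical assumptions; the standard's own indicators and threat settings will differ.

```python
# Illustrative sketch of a robustness indicator: accuracy drop of an image
# classifier under a common corruption (Gaussian noise). The toy model and
# data are hypothetical stand-ins, not part of the standard.
import numpy as np

rng = np.random.default_rng(0)

def toy_classifier(image: np.ndarray) -> int:
    """Stand-in model: predicts 1 if mean brightness exceeds 0.5, else 0."""
    return int(image.mean() > 0.5)

def gaussian_corruption(image: np.ndarray, severity: float) -> np.ndarray:
    """Add zero-mean Gaussian noise and clip back to the valid [0, 1] range."""
    noisy = image + rng.normal(0.0, severity, image.shape)
    return np.clip(noisy, 0.0, 1.0)

def accuracy(images, labels, corrupt=None) -> float:
    preds = [toy_classifier(corrupt(im) if corrupt else im) for im in images]
    return float(np.mean([p == y for p, y in zip(preds, labels)]))

# Toy dataset: bright images labeled 1, dark images labeled 0.
images = [np.full((8, 8), v) for v in (0.9, 0.8, 0.2, 0.1)]
labels = [1, 1, 0, 0]

clean = accuracy(images, labels)
corrupted = accuracy(images, labels, lambda im: gaussian_corruption(im, 0.3))
print(f"clean={clean:.2f} corrupted={corrupted:.2f} drop={clean - corrupted:.2f}")
```

In an assessment framework, the accuracy drop would typically be reported per corruption type and severity level rather than as a single number.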
This recommended practice provides recommendations for next steps in the application of IEEE Std 7010 to meeting Environmental, Social, and Governance (ESG) and Sustainable Development Goal (SDG) initiatives and targets. It provides action steps and map elements to review and address when applying IEEE Std 7010. This recommended practice serves to enhance the quality of the published standard by validating its design outcomes through expanded use. It provides recommendations that help multiple users align processes, collect data, develop policies and practices, and measure activities against their impact on corporate goals and the resulting stakeholder outcomes. This recommended practice does not set metrics for measurement and/or reporting; rather, it identifies well-recognized indicators to consider in the assessment and measurement of progress.
Specific methodologies to assist employers in accessing, collecting, storing, utilizing, sharing, and destroying employee data are described in this standard. Specific metrics and conformance criteria regarding these types of uses from trusted global partners, and how third parties and employers can meet them, are provided in this standard. Certification processes, success criteria, and execution procedures are not within the scope of this standard.
ISO/IEC/IEEE 12207:2017 also provides processes that can be employed for defining, controlling, and improving software life cycle processes within an organization or a project.
The processes, activities, and tasks of this document can also be applied during the acquisition of a system that contains software, either alone or in conjunction with ISO/IEC/IEEE 15288:2015, Systems and software engineering–System life cycle processes.
In the context of this document and ISO/IEC/IEEE 15288, there is a continuum of human-made systems from those that use little or no software to those in which software is the primary interest. It is rare to encounter a complex system without software, and all software systems require physical system components (hardware) to operate, either as part of the software system-of-interest or as an enabling system or infrastructure. Thus, the choice of whether to apply this document for the software life cycle processes, or ISO/IEC/IEEE 15288:2015, Systems and software engineering–System life cycle processes, depends on the system-of-interest. Processes in both documents have the same process purpose and process outcomes, but differ in activities and tasks to perform software engineering or systems engineering, respectively.
A set of ontologies with different abstraction levels, containing concepts, definitions, axioms, and use cases that assist in the development of ethically driven methodologies for the design of robots and automation systems, is established by this standard. It focuses on the robotics and automation domain without considering any particular application and can be used in multiple ways: for instance, as a guideline during the development of robotics and automation systems, or as a reference "taxonomy" to enable clear and precise communication among members of different communities, including robotics and automation, ethics, and related areas. Users of this standard need minimal knowledge of formal logic to understand the axiomatization expressed in the Common Logic Interchange Format.
This standard describes specific methodologies to help users certify how they worked to address and eliminate issues of negative bias in the creation of their algorithms. Here, negative bias refers to the use of overly subjective or uninformed data sets, or of information known to be inconsistent with legislation concerning certain protected characteristics (such as race, gender, and sexuality); it also covers bias against groups not explicitly protected by legislation but for which such bias diminishes stakeholder or user well-being and for which there are good reasons to consider it inappropriate. Possible elements include (but are not limited to): benchmarking procedures and criteria for the selection of validation data sets for bias quality control; guidelines on establishing and communicating the application boundaries for which the algorithm has been designed and validated, to guard against unintended consequences arising from out-of-bounds application of algorithms; and suggestions for managing user expectations to mitigate bias due to incorrect interpretation of system outputs by users (e.g., correlation vs. causation).
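One simple indicator that bias quality-control benchmarking of the kind described above might employ is the demographic parity difference: the gap in favorable-outcome rates between groups. The toy decisions, the two groups, and the 0.1 tolerance below are hypothetical assumptions for illustration; the standard's benchmarking procedures and criteria may be entirely different.

```python
# Illustrative sketch of one possible bias quality-control indicator:
# demographic parity difference. The data, the group split, and the 0.1
# tolerance are hypothetical assumptions, not requirements of the standard.

def positive_rate(outcomes: list[int]) -> float:
    """Fraction of favorable (1) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(group_a: list[int], group_b: list[int]) -> float:
    """Absolute gap between the groups' favorable-outcome rates."""
    return abs(positive_rate(group_a) - positive_rate(group_b))

# Toy model decisions (1 = favorable) for two demographic groups.
group_a = [1, 1, 0, 1, 1]   # 80% favorable
group_b = [1, 0, 0, 1, 0]   # 40% favorable

gap = demographic_parity_difference(group_a, group_b)
print(f"parity gap = {gap:.2f}")                              # parity gap = 0.40
print("within tolerance" if gap <= 0.1 else "flag for review")
```

A validation data set selected for bias quality control would run such indicators across all relevant group pairings rather than a single comparison.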
To coordinate global data and AI literacy building efforts, this standard establishes an operational framework and associated capabilities for designing policy interventions, tracking their progress, and empirically evaluating their outcomes. The standard includes a common set of definitions, language, and understanding of data and AI literacy, skills, and readiness.
This standard describes recognizable audio and visual marks to assist with the identification of communicating entities as human or machine intelligence to facilitate transparency, understanding, and trust during online, telephone, or other electronic interactions. Interventions to discern whether an interaction is with a machine or not (such as a Turing Test) are not within the scope of this standard. This standard is concerned only about the declaration of the nature of the agency influencing an interaction.
This recommended practice describes ethical considerations and recommended best practices in the design of artificial intelligence as used by adaptive instructional systems. It is directly related to IEEE P2247.1, Standard for the Classification of Adaptive Instructional Systems; IEEE P2247.2, Interoperability Standards for Adaptive Instructional Systems (AISs); and IEEE P2247.3, Recommended Practices for Evaluation of Adaptive Instructional Systems.