Recall and Withdrawal of AI Systems
Please find below an extract derived from a paper by AI ethicist Alessio Tartaro.
Recall and withdrawal of AI systems
In instances where the risk level associated with an AI system is classified as unacceptable, organizations should initiate procedures for recall or withdrawal, whether temporary or permanent. This course of action is pivotal in preventing adverse effects and should be treated as a measure of last resort [1].
This guidance is an adaptation of the general guidelines provided by ISO 10393:2013 (Consumer product recall - Guidelines for suppliers), specifically tailored to address the unique challenges and considerations pertaining to AI systems.
1 General recommendations
1.1 General
Organizations should establish a structured recall management system for AI systems, which includes the formation of a recall management team, delineation of processes, and allocation of necessary resources. The system should ensure preparedness for recall events, encompassing the entire lifecycle of AI systems.
1.2 Policy
Each organization should develop and implement a recall policy for AI systems that specifies the conditions under which a recall will be initiated. The policy should define the roles and responsibilities of personnel, expected outcomes, and the framework for risk assessment and communication strategies during a recall event.
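As an illustration only, such a policy could be captured in machine-readable form so that triggers, roles, and expected outcomes are unambiguous. The Python sketch below uses invented field names and values; it is not a structure prescribed by this guidance or by ISO 10393:2013.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative sketch only: field names and values are assumptions,
# not terms defined in the guidance or in ISO 10393:2013.

@dataclass
class RecallPolicy:
    system_name: str
    recall_triggers: List[str]        # conditions under which a recall is initiated
    decision_authority: str           # role authorized to initiate a recall
    responsible_roles: List[str]      # personnel involved in executing the recall
    expected_outcomes: List[str]      # e.g. "deployment suspended", "risk mitigated"
    communication_channels: List[str] = field(default_factory=list)

policy = RecallPolicy(
    system_name="credit-scoring-model-v3",
    recall_triggers=["unacceptable risk classification", "regulatory order"],
    decision_authority="Chief Risk Officer",
    responsible_roles=["recall management team", "legal counsel"],
    expected_outcomes=["deployment suspended", "affected users notified"],
    communication_channels=["user email", "regulator portal"],
)
```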
1.3 Documentation and record keeping
Organizations should maintain comprehensive documentation that records all aspects of the AI system's design, development, deployment, monitoring, and maintenance activities. Records should include, among others, data on performance metrics, ethical compliance assessments, user feedback, and detailed accounts of any decisions and actions taken in the event of a recall.
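For illustration, the sketch below shows one hypothetical way such lifecycle records might be structured; the schema, field names, and example values are assumptions rather than requirements of the guidance.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Dict, List

# Hypothetical record structure; the guidance does not prescribe a schema.

@dataclass
class LifecycleRecord:
    system_id: str
    timestamp: datetime
    phase: str                         # "design", "deployment", "monitoring", "recall", ...
    performance_metrics: Dict[str, float]
    ethical_assessments: List[str]
    user_feedback: List[str]
    decisions_and_actions: List[str]

record = LifecycleRecord(
    system_id="credit-scoring-model-v3",
    timestamp=datetime(2024, 3, 1, 9, 30),
    phase="monitoring",
    performance_metrics={"accuracy": 0.91, "false_positive_rate": 0.07},
    ethical_assessments=["quarterly bias audit passed"],
    user_feedback=["two complaints about unexplained rejections"],
    decisions_and_actions=["opened incident investigation"],
)
```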
1.4 Expertise required to manage a recall
Organizations should ensure that personnel involved in the recall process have the necessary expertise, which includes, among others, knowledge of AI technology, risk management, legal and ethical compliance, and communication. Appropriate training should be provided to ensure that the team can identify potential risks and execute recall measures effectively.
1.5 Authority for key decisions
The organization should designate individuals with the authority to make critical decisions during a recall. This includes the initiation of a recall, stakeholder communication, and the termination of the AI system's deployment. The chain of command and decision-making protocols should be clearly established and communicated to all relevant personnel.
1.6 Training and recall simulation
Organizations should conduct regular training and recall simulation exercises to ensure that personnel are prepared to execute the recall policy effectively. Training should cover all aspects of the recall process, from risk identification to communication with stakeholders and the restoration of service post-recall.
2 Assessing the need for an AI system recall
2.1 General
Organizations should establish clear guidelines for the initiation of an assessment process to determine the necessity of an AI system recall. These guidelines should outline the triggers for assessment initiation, such as performance anomalies, ethical breaches, user complaints, or regulatory inquiries. The assessment process should be systematic and commence promptly upon the identification of a potential issue that may warrant a recall.
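The sketch below illustrates how such triggers might be expressed as simple, checkable conditions; the trigger names and thresholds are illustrative assumptions only.

```python
# Minimal sketch of trigger-based assessment initiation. The trigger names
# and thresholds are illustrative assumptions, not values from the guidance.

ASSESSMENT_TRIGGERS = {
    "performance_anomaly": lambda e: e.get("metric_drop", 0.0) > 0.10,
    "ethical_breach": lambda e: e.get("ethics_flag", False),
    "user_complaints": lambda e: e.get("complaint_count", 0) >= 5,
    "regulatory_inquiry": lambda e: e.get("regulator_contact", False),
}

def triggered_assessments(event: dict) -> list:
    """Return the names of all triggers satisfied by an observed event."""
    return [name for name, check in ASSESSMENT_TRIGGERS.items() if check(event)]

print(triggered_assessments({"metric_drop": 0.15, "complaint_count": 7}))
# -> ['performance_anomaly', 'user_complaints']
```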
2.2 Incident notification
Organizations should implement an incident notification protocol to promptly inform all relevant parties, including regulatory bodies, stakeholders, and affected users, of a potential issue that could lead to a recall. The notification system should enable rapid dissemination of information and facilitate the immediate commencement of the assessment process.
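A minimal sketch of such a notification fan-out is given below; the party names, message fields, and transport (printing to the console) are placeholders for whatever channels the organization actually uses.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import List

# Sketch of a notification fan-out; parties and fields are placeholders, since
# the guidance only requires prompt, wide dissemination of the information.

@dataclass
class IncidentNotice:
    system_id: str
    summary: str
    severity: str
    issued_at: str

def notify_parties(notice: IncidentNotice, parties: List[str]) -> None:
    for party in parties:
        # In practice this would go out via email, ticketing, or regulator portals.
        print(f"[{notice.issued_at}] to {party}: "
              f"{notice.system_id} - {notice.summary} ({notice.severity})")

notice = IncidentNotice(
    system_id="credit-scoring-model-v3",
    summary="Suspected discriminatory outputs under investigation",
    severity="high",
    issued_at=datetime.now(timezone.utc).isoformat(timespec="seconds"),
)
notify_parties(notice, ["regulator", "internal stakeholders", "affected users"])
```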
2.3 Incident investigation
Upon notification of a potential incident, organizations should conduct a thorough investigation to ascertain the nature and severity of the issue. The investigation should involve collecting and analyzing data related to the incident, consulting with experts, and reviewing the AI system's operational history to identify any contributing factors or patterns.
2.4 Assess the risk
Organizations should employ a structured methodology to evaluate the risks associated with the identified issue. This assessment should consider the potential for harm to users, violations of ethical standards, legal non-compliance, and broader societal impacts. The risk assessment should guide the organization in determining the appropriate course of action.
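One hypothetical way to structure such an assessment is a weighted score over the dimensions named above; the weights and the 1-to-5 scale in the sketch below are assumptions, not values from the guidance.

```python
# Illustrative risk-scoring sketch. The dimensions mirror those named in the
# text (harm to users, ethical violations, legal non-compliance, societal
# impact); the weights and the 1-5 rating scale are assumptions.

RISK_WEIGHTS = {
    "harm_to_users": 0.40,
    "ethical_violation": 0.25,
    "legal_noncompliance": 0.25,
    "societal_impact": 0.10,
}

def risk_score(ratings: dict) -> float:
    """Weighted score from per-dimension ratings on a 1 (low) to 5 (severe) scale."""
    return sum(RISK_WEIGHTS[d] * ratings.get(d, 1) for d in RISK_WEIGHTS)

ratings = {"harm_to_users": 4, "ethical_violation": 3,
           "legal_noncompliance": 2, "societal_impact": 2}
print(round(risk_score(ratings), 2))  # -> 3.05
```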
2.5 Traceability
Organizations should maintain a traceability system for all AI systems to facilitate tracking and location throughout their operational lifecycle. Traceability measures should enable the organization to quickly identify all instances of the AI system in use, including deployment locations, responsible parties, and affected users.
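The sketch below illustrates a minimal deployment registry that would support this kind of lookup; the identifiers, fields, and example entries are invented for illustration.

```python
from dataclasses import dataclass
from typing import List

# Minimal deployment-registry sketch; identifiers and fields are assumptions.

@dataclass
class Deployment:
    system_id: str
    version: str
    location: str            # e.g. region, tenant, or on-premise site
    responsible_party: str
    affected_user_count: int

REGISTRY: List[Deployment] = [
    Deployment("credit-scoring-model-v3", "3.2.1", "eu-west", "Bank A", 12000),
    Deployment("credit-scoring-model-v3", "3.1.0", "us-east", "Bank B", 8000),
    Deployment("chatbot-support", "1.4.0", "eu-west", "Retailer C", 30000),
]

def locate(system_id: str) -> List[Deployment]:
    """Return every known deployment of the given AI system."""
    return [d for d in REGISTRY if d.system_id == system_id]

for d in locate("credit-scoring-model-v3"):
    print(d.version, d.location, d.responsible_party, d.affected_user_count)
```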
2.6 Product recall decision
The organization should establish decision-making criteria for initiating a recall of an AI system. This should include a threshold for action based on the risk assessment, the potential impact of the recall, and the feasibility of corrective measures. The decision-making process should be documented, transparent, and involve key stakeholders to ensure that all relevant factors are considered.
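As an example of how a documented, threshold-based decision might look, the sketch below combines a risk score, the feasibility of corrective measures, and the expected impact of the recall; the threshold value and factor names are placeholders for organization-specific criteria.

```python
# Sketch of a documented, threshold-based recall decision. The 3.0 threshold
# and the factor names are placeholders for organization-specific criteria.

def recall_decision(risk_score: float, corrective_feasible: bool,
                    recall_impact: str, threshold: float = 3.0) -> dict:
    """Return a recorded decision combining risk, feasibility, and impact."""
    initiate = risk_score >= threshold or not corrective_feasible
    return {
        "initiate_recall": initiate,
        "risk_score": risk_score,
        "corrective_measures_feasible": corrective_feasible,
        "expected_recall_impact": recall_impact,
        "rationale": ("risk above threshold" if risk_score >= threshold
                      else "no feasible corrective measure" if not corrective_feasible
                      else "risk tolerable; corrective measures preferred"),
    }

print(recall_decision(risk_score=3.05, corrective_feasible=True, recall_impact="moderate"))
```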
3 Implementing an AI system recall
3.1 General
Organizations should develop a comprehensive recall implementation plan for AI systems, detailing the steps from initiation to completion. This plan should be activated upon the decision to recall and should include provisions for resource allocation, stakeholder communication, and measures to mitigate the impact of the recall.
3.2 Initiate the recall action
Once a decision to recall an AI system has been made, the organization should initiate the recall action as per the established plan. The initiation process should include the issuance of formal recall notices and activation of the recall management team. The scope and urgency of the recall should dictate the immediate actions taken.
3.3 Communication
Effective communication strategies should be employed to inform all affected parties, including users, partners, and regulatory bodies, about the recall. Information disseminated should clearly describe the reason for the recall, the actions required by the recipients, and the channels through which they can seek further information or support.
3.4 Implement the recall
The recall should be implemented according to the plan, with actions taken to cease the operation of the recalled AI system, inform affected users, and address the identified issue. If the recall involves a physical product, procedures should include the retrieval of the product from all distribution points. If it is a software-based or cloud-based AI system, the recall may involve remote deactivation, patching, rollback, user access restriction, and similar measures.
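The sketch below illustrates the software-side actions listed above (remote deactivation, rollback, access restriction) in schematic form; the function names are placeholders for the organization's actual deployment tooling, not a prescribed interface.

```python
from typing import List, Optional

# Schematic sketch of software-side recall actions. The functions are
# placeholders for the organization's deployment tooling (feature flags,
# release management, identity and access management), not a real API.

def deactivate(system_id: str) -> None:
    # e.g. flip a serving feature flag or disable the endpoint
    print(f"{system_id}: serving disabled")

def rollback(system_id: str, to_version: str) -> None:
    print(f"{system_id}: rolled back to {to_version}")

def restrict_access(system_id: str, allowed_roles: List[str]) -> None:
    print(f"{system_id}: access restricted to {allowed_roles}")

def execute_software_recall(system_id: str, last_safe_version: Optional[str]) -> None:
    deactivate(system_id)
    if last_safe_version:
        rollback(system_id, last_safe_version)
    restrict_access(system_id, ["recall management team"])

execute_software_recall("credit-scoring-model-v3", last_safe_version="3.1.0")
```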
3.5 Monitor and report
The organization should continuously monitor the recall process to ensure compliance and effectiveness. Progress reports should be generated and communicated to stakeholders at regular intervals. Monitoring activities should also include tracking the recall's reach and verifying that corrective actions have been implemented.
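For illustration, the recall's reach could be reported as the share of known deployments confirmed deactivated, as in the hypothetical sketch below; this metric and its fields are assumptions rather than requirements of the guidance.

```python
# Simple progress-tracking sketch; "reach" is expressed here as the share of
# known deployments confirmed deactivated, which is an assumed metric.

def recall_progress(total_deployments: int, confirmed_deactivated: int,
                    corrective_actions_verified: int) -> dict:
    reach = confirmed_deactivated / total_deployments if total_deployments else 0.0
    return {
        "reach_pct": round(100 * reach, 1),
        "deployments_outstanding": total_deployments - confirmed_deactivated,
        "corrective_actions_verified": corrective_actions_verified,
    }

print(recall_progress(total_deployments=2, confirmed_deactivated=1,
                      corrective_actions_verified=1))
# -> {'reach_pct': 50.0, 'deployments_outstanding': 1, 'corrective_actions_verified': 1}
```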
3.6 Evaluate effectiveness
Post-recall, the organization should evaluate the effectiveness of the recall process. This evaluation should assess whether the recall objectives were met, if the risk was mitigated, and how the recall impacted the stakeholders. The findings should be documented and used to inform future recalls.
3.7 Review and adjust recall strategy
Following the evaluation, the organization should review the recall strategy and make necessary adjustments to improve future responses. This review should consider feedback from stakeholders, the results of the effectiveness evaluation, and any changes in regulatory requirements or organizational policies.
4 Continual improvement of the recall programme
4.1 General
Organizations should establish a framework for the continuous improvement of the recall programme for AI systems. This framework should be based on the principles of iterative learning, feedback incorporation, and process optimization. It should aim to enhance the organization's ability to respond to recall situations effectively and efficiently.
4.2 Reviewing the recall
After the completion of a recall, the organization should conduct a comprehensive review of the recall process. This review should assess how the recall was executed, the efficacy of the communication strategies, the adequacy of the resources allocated, and the overall management of the recall. The review should identify both strengths and areas for improvement.
4.3 Corrective actions to prevent recurrence
Based on the review, the organization should identify and implement corrective actions to address any deficiencies observed during the recall process. These actions should aim to prevent the recurrence of similar issues. The organization should also revise risk assessment and management strategies to incorporate lessons learned from the recall.
[1] Tartaro, Alessio (2023). When things go wrong: the recall of AI systems as a last resort for ethical and lawful AI. AI and Ethics. https://doi.org/10.1007/s43681-023-00327-z