Cybersecurity for AI Systems

According to the AI Standardization Request from the European Commission to CEN/CENELEC, European standards or standardisation deliverables shall provide suitable organisational and technical solutions to ensure that AI systems are resilient against attempts by malicious third parties, exploiting vulnerabilities of AI systems, to alter their use, behaviour or performance or to compromise their security properties. Organisational and technical solutions shall include, where appropriate, measures to prevent and control cyberattacks that try to manipulate AI-specific assets, such as training data sets (e.g. data poisoning) or trained models (e.g. adversarial examples), or that try to exploit vulnerabilities in an AI system's digital assets or in the underlying ICT infrastructure. These solutions shall be appropriate to the relevant circumstances and risks. Furthermore, the requested European standards or standardisation deliverables shall take due account of the essential requirements for products with digital elements listed in Annex I of the Regulation on horizontal cybersecurity requirements for products with digital elements proposed by the EC (the Cyber Resilience Act (CRA) proposal of 15 September 2022).
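For illustration only (this is not part of the Standardization Request text), the minimal Python sketch below shows one of the AI-specific attacks named above: an adversarial-example attack, crafted with the Fast Gradient Sign Method against a toy logistic-regression classifier. All data, names and parameter values are assumptions made for the example; data poisoning, the other attack named, would analogously corrupt the training set rather than the inputs presented at inference time.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class data set: two Gaussian blobs in a 2-D feature space.
n = 200
X = np.vstack([rng.normal(-1.0, 0.7, size=(n, 2)),
               rng.normal(+1.0, 0.7, size=(n, 2))])
y = np.concatenate([np.zeros(n), np.ones(n)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Train a plain logistic-regression classifier by full-batch gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = sigmoid(X @ w + b)
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

def accuracy(X_eval, y_eval):
    return np.mean((sigmoid(X_eval @ w + b) > 0.5) == y_eval)

# Fast Gradient Sign Method: perturb each input in the direction that
# increases its own loss, x_adv = x + eps * sign(dL/dx).  For logistic
# regression the input gradient is simply dL/dx = (p - y) * w.
eps = 0.5   # perturbation budget; sizeable here because the toy problem has only two features
p = sigmoid(X @ w + b)
X_adv = X + eps * np.sign((p - y)[:, None] * w[None, :])

print(f"accuracy on clean inputs:       {accuracy(X, y):.2f}")
print(f"accuracy on adversarial inputs: {accuracy(X_adv, y):.2f}")
```

Only NumPy is used so that the gradient exploited by the attack stays explicit and the sketch remains self-contained; the same idea scales to the high-dimensional inputs of deployed models, where much smaller perturbations suffice.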

Discussion: identify existing standards and analyse the gaps, i.e. the new standards that still need to be developed, in order to respond to the AI Standardization Request from the European Commission.