Contribute to a Security Standard for AI Systems

An ISO/IEC proposal has been approved to develop standardized guidelines for reducing failures in Artificial Intelligence (AI) systems caused by cyber-attacks. Contributors are now being sought to assist in the development of this standard.

AI systems are increasingly being deployed as organizations pursue their digital transformation, which in turn increases the likelihood of security attacks on those systems. Securing AI systems against such attacks requires awareness and understanding of this new threat landscape.

Organizations are generally not well prepared to protect their AI systems from attacks. Compared to conventional systems, AI systems have additional vulnerabilities because of the way they are developed and their strong dependence on data. Attacks on AI systems have already been documented (for example, an evasion attack on an email security system). Because AI systems are in disparate and widespread use, including in security-critical contexts, the consequences of a successful attack can be severe; in some cases they can even put people at risk.
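The evasion attack mentioned above can be illustrated with a toy example. The filter, the trigger words, and the perturbed message below are all hypothetical; real email security systems and real attacks are far more sophisticated. The principle, however, is the same: the attacker perturbs the input so that it keeps its meaning for a human reader while no longer matching what the detection logic (here, a trivial keyword rule) looks for.

```python
import re

# Hypothetical toy spam filter: flags a message if it contains a known
# trigger word. This stands in for a learned classifier only to
# illustrate the idea of an evasion attack.
TRIGGER_WORDS = {"winner", "prize", "free"}

def is_flagged(message: str) -> bool:
    # Tokenize into lowercase alphabetic words and check for triggers.
    tokens = re.findall(r"[a-z]+", message.lower())
    return any(token in TRIGGER_WORDS for token in tokens)

original = "You are a winner! Claim your free prize now."
# Attacker-perturbed input: digits replace letters, so the tokens no
# longer match the trigger words, yet a human reads the same message.
evasive = "You are a w1nner! Claim your fr3e pr1ze now."

print(is_flagged(original))  # True: trigger words detected
print(is_flagged(evasive))   # False: the perturbed message evades the filter
```

A robust defence cannot rely on exact matching alone, which is one reason the proposed standard emphasizes understanding AI-specific threats rather than reusing conventional controls unchanged.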

Intent of the standard

Addressing security threats to AI systems in a timely manner helps to improve their reliability and the trust placed in them. The proposed standard is intended for organizations that develop or use AI systems. It aims to identify the measures needed to secure such systems. To do this, it first identifies the threats to AI systems, as well as the errors that enable those threats, and then provides insight into the related security issues.

Benefits of the standard

The proposed standard covers a number of important elements. It should be noted that existing security measures used to secure conventional software and information systems are also applicable to AI systems. The standard will:

  • provide guidance to organizations that choose to develop or deploy AI systems;
  • create a better awareness of the types of security risks that such systems may face;
  • help organizations understand the implications of such threats;
  • provide techniques for detecting threats.

Further standardization

The document will act as a precursor to further standardization of security controls, in which existing security controls are modified or extended to address attacks on AI systems. This could include the management controls in ISO/IEC 27002 and ISO/IEC 42001. New security controls can also be defined specifically to address attacks on AI systems.

Source: NEN