A Taxonomy of Security Threats, Vulnerabilities, and Controls of AI Systems

Yusuke Kawamoto
AIST

Ethical AI systems must achieve various kinds of quality, often with trade-offs among them. Managing these qualities can be complicated by the presence of many stakeholders, any of whom may act as an attacker. To provide a clear view of the whole landscape, we systematize knowledge of the security threats, vulnerabilities, and controls of machine-learning-based (ML-based) systems. We first classify the damage caused by attacks on ML-based systems, define ML-specific security, and discuss its characteristics. Next, we enumerate the relevant assets and stakeholders and provide a general taxonomy of ML-specific threats. Then, we collect a wide range of security controls against ML-specific threats through an extensive review of recent literature. Finally, we classify the vulnerabilities and controls of an ML-based system with respect to each vulnerable asset across the system’s entire lifecycle.
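To make the asset-centric classification concrete, the following is a minimal, hypothetical sketch of how such a taxonomy might be represented as a data structure. The class names, lifecycle stages, and the example entry (data poisoning mitigated by data sanitization) are illustrative assumptions for this talk summary, not definitions taken from [1] or [2].

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    """A vulnerable asset in an ML-based system (e.g., training data, model)."""
    name: str
    lifecycle_stage: str  # e.g., "data collection", "training", "operation"
    threats: list[str] = field(default_factory=list)
    controls: list[str] = field(default_factory=list)

# Illustrative entry: the training dataset as a vulnerable asset.
training_data = Asset(
    name="training dataset",
    lifecycle_stage="data collection",
    threats=["data poisoning"],
    controls=["data sanitization", "provenance tracking"],
)

# Group assets by lifecycle stage to view threats and controls per stage.
taxonomy: dict[str, list[Asset]] = {}
for asset in [training_data]:
    taxonomy.setdefault(asset.lifecycle_stage, []).append(asset)

for stage, assets in taxonomy.items():
    for a in assets:
        print(f"[{stage}] {a.name}: threats={a.threats}, controls={a.controls}")
```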

This talk is based on the arXiv paper [1] and on the guideline [2] that our institute has been developing.

[1] Yusuke Kawamoto, Kazumasa Miyake, Koichi Konishi, and Yutaka Oiwa. Threats, Vulnerabilities, and Controls of Machine Learning Based Systems: A Survey and Taxonomy. https://arxiv.org/pdf/2301.07474.pdf

[2] National Institute of Advanced Industrial Science and Technology (AIST). Machine Learning Quality Management Guideline, 3rd English Edition. https://www.digiarc.aist.go.jp/en/publication/aiqm/guideline-rev3.html