Paper Title: Adversarial AI/ML Attacks: How to Protect Your Applications
Paper Abstract: Artificial Intelligence and deep learning solutions have the ability to improve many areas of life and to analyze massive amounts of information in real time. The data processed by these solutions is often received through thousands of sensors, probes, video feeds, application data, security logs, media, GPS data, and many other kinds of sources. One of the big challenges is preserving data integrity as the data is generated, ingested, organized and labeled for training, and used for inference. The threat of adversarial AI/ML attacks is very real, but this important topic has not received much coverage in the literature or at conferences. It is important for AI/ML researchers, data scientists, security teams, infrastructure teams, and others involved in the AI/ML data pipeline to know how these attacks are typically carried out and how to protect against them. This talk will examine the types of adversarial AI/ML attacks, with a high-level explanation of how they are carried out, and methods to optimize data integrity in order to preserve the correct operation of the models used for inference, so that AI applications will produce the correct output.
Paper Author: Dejan Kocic, Sr Systems Engineer, NetApp
Author Bio: Long history of storage and security projects, with a focus on AI over the last five years