Why security shouldn’t be overlooked when implementing AI applications
A growing body of research shows that AI applications can be easily fooled. Attackers can manipulate them into making wrong decisions and failing at their tasks in ways that benefit the attacker. A range of attacks has been proposed against AI that compromise the confidentiality, integrity and availability of the systems deploying these solutions.
These attacks against AI are fundamentally different from traditional cyberattacks, because the underlying algorithms used to implement AI systems are inherently vulnerable: they cannot easily be patched or replaced the way a software bug is fixed or code is hardened against traditional cyberattacks. Furthermore, while compliance programs in many industries protect against traditional cyberattacks, there are no clear, standardized guidelines yet to help organisations implement AI solutions that are protected from possible attacks on their AI systems.
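To give a flavour of why these attacks are so hard to patch away, here is a minimal toy sketch of an evasion (adversarial example) attack, one of the attack families referred to above. The model, weights, and perturbation budget are all hypothetical illustrations, not taken from the article: a simple linear classifier is flipped to the wrong decision by a perturbation so small it would be imperceptible in an image.

```python
import numpy as np

# Hypothetical "trained" linear classifier: predicts sign(w . x).
rng = np.random.default_rng(0)
w = rng.normal(size=100)              # fixed model weights (100 features)

# An input the model confidently classifies as the positive class.
x = 0.05 * w / np.linalg.norm(w)
print(np.sign(w @ x))                 # correctly classified as +1

# FGSM-style evasion: step each feature by a tiny amount eps against
# the gradient of the model's score (for a linear model, sign(w)).
eps = 0.02                            # per-feature perturbation budget
x_adv = x - eps * np.sign(w)

print(np.sign(w @ x_adv))             # prediction flips to -1
print(np.max(np.abs(x_adv - x)))      # yet no feature moved more than eps
```

The point of the sketch is that the misbehaviour is not a bug in any line of code: every instruction executes exactly as written, so there is nothing to "patch". The vulnerability lives in the learned decision boundary itself, which is why defending AI systems requires different measures than traditional secure coding.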
Why this publication?
Inspired by these facts, this article focuses on some of the attacks proposed against AI applications. We also provide insight into best practices that businesses implementing AI solutions can adopt to defend against bad actors.
About the author
Samraa Alzubi is a Cyber Security Consultant at Approach. Samraa holds a master's degree in cybersecurity from ULB; her master's thesis, completed last year, researched attacks against machine learning and proposed a new black-box adversarial reprogramming attack against image classifiers.
Want to stay up to date with the latest threats? Then subscribe to our SOC newsletter.