Oct 22, 2020

Cyberattacks against machine learning systems are more common than you think

Posted in categories: business, cybercrime/malcode, finance, robotics/AI

Machine learning (ML) is driving incredible transformations in critical areas such as finance, healthcare, and defense, impacting nearly every aspect of our lives. Yet many businesses, eager to capitalize on advancements in ML, have not scrutinized the security of their ML systems. Today, Microsoft, together with MITRE and contributions from 11 organizations including IBM, NVIDIA, and Bosch, is releasing the Adversarial ML Threat Matrix, an industry-focused open framework that empowers security analysts to detect, respond to, and remediate threats against ML systems.

During the last four years, Microsoft has seen a notable increase in attacks on commercial ML systems. Market reports are also bringing attention to this problem: Gartner’s Top 10 Strategic Technology Trends for 2020, published in October 2019, predicts that “Through 2022, 30% of all AI cyberattacks will leverage training-data poisoning, AI model theft, or adversarial samples to attack AI-powered systems.” Despite these compelling reasons to secure ML systems, Microsoft’s survey spanning 28 businesses found that most industry practitioners have yet to come to terms with adversarial machine learning. Twenty-five out of the 28 businesses indicated that they don’t have the right tools in place to secure their ML systems, and they are explicitly looking for guidance. We found that this lack of preparation is not limited to smaller organizations: we spoke to Fortune 500 companies, governments, non-profits, and small and mid-sized organizations alike.
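To make the training-data poisoning attack mentioned in the Gartner prediction concrete, here is a minimal, hypothetical sketch (not from the article or the Threat Matrix) of label-flipping poisoning against a simple classifier; the dataset, model, and poisoning rate are illustrative stand-ins.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical illustration: an attacker who can tamper with training labels
# can silently degrade the deployed model's accuracy.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def flip_labels(labels: np.ndarray, rate: float, seed: int = 0) -> np.ndarray:
    """Flip a fraction of binary labels to simulate a poisoned training set."""
    rng = np.random.default_rng(seed)
    poisoned = labels.copy()
    idx = rng.choice(len(labels), size=int(rate * len(labels)), replace=False)
    poisoned[idx] = 1 - poisoned[idx]
    return poisoned

clean_acc = LogisticRegression(max_iter=1000).fit(X_train, y_train).score(X_test, y_test)
poisoned_acc = LogisticRegression(max_iter=1000).fit(
    X_train, flip_labels(y_train, rate=0.3)).score(X_test, y_test)
print(f"clean accuracy:    {clean_acc:.3f}")
print(f"poisoned accuracy: {poisoned_acc:.3f}")
```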

Our survey pointed to a marked cognitive dissonance, especially among security analysts, who generally believe that risk to ML systems is a futuristic concern. This is a problem because cyberattacks on ML systems are now on the uptick. For instance, in 2020 we saw the first CVE for an ML component in a commercial system, and SEI/CERT issued the first vulnerability note calling attention to how many current ML systems can be subjected to arbitrary misclassification attacks that compromise their confidentiality, integrity, and availability. The academic community has been sounding the alarm since 2004 and has routinely shown that ML systems, if not mindfully secured, can be compromised.
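As a rough illustration of the misclassification (evasion) attacks referenced above, the following sketch uses the well-known fast gradient sign method to perturb an input so a model is more likely to mislabel it. This is a minimal example under assumed inputs (the toy model, image, and epsilon are placeholders), not part of the Adversarial ML Threat Matrix itself.

```python
import torch
import torch.nn as nn

def fgsm_perturb(model: nn.Module, image: torch.Tensor, label: torch.Tensor,
                 epsilon: float = 0.03) -> torch.Tensor:
    """Return a copy of `image` nudged in the direction that increases the loss,
    encouraging the model to misclassify it (fast gradient sign method)."""
    image = image.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # Take a small step along the sign of the gradient, then clamp to valid pixels.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    # Placeholder classifier and input; a real attack would target a deployed model.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    x = torch.rand(1, 1, 28, 28)   # stand-in "image"
    y = torch.tensor([3])          # stand-in true label
    x_adv = fgsm_perturb(model, x, y)
    print("max pixel change:", (x_adv - x).abs().max().item())
```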