
Countering the Rise of Adversarial Machine Learning

The security community has found an important application for machine learning (ML) in its ongoing fight against cybercriminals. Many of us are turning to ML-powered security solutions like NSX Network Detection and Response that analyze network traffic for anomalous and suspicious activity. In turn, these ML solutions can defend us from threats more effectively than other solutions by drawing on their evolving knowledge of what a network attack looks like.

Attackers are well aware that security solutions rely on AI and ML. They also know that there are certain limitations when it comes to applying artificial intelligence to computer security. This explains why cybercriminals are turning those limitations to their advantage in something known as adversarial machine learning.

In this post I’ll explain just what adversarial machine learning is and what it is not. To start, the label itself can be a bit misleading. It sounds like criminals are actually using ML as part of their attack. But that is not the case. The simple explanation is that they’re using more conventional methods to understand how security solutions are using ML so that they can then figure out how to either pollute the dataset used to bootstrap the learning process or bypass the ML-based detection altogether. 

What Is Adversarial Machine Learning? 

In the context of cybersecurity, adversarial machine learning boils down to any attack that assumes, and accounts for, an AI-based solution on the defensive side. Attackers attempt to undermine that solution's ability to distinguish what is good from what is bad. To do so, they usually just need a very good program, not necessarily an AI-based one, to learn the inner workings of a chosen ML-based solution and then bypass the tool.

This type of attack involves two possible approaches. In the first, bad actors examine the tool's learning process to glean more about the solution's data domain, what models it uses, and what specifically governs that data. They then try to influence that learning process, relying on the fact that the ML solution learns from a large pool of data.

As an example, let's assume that attackers have broken into an ML-based system or were able to purchase an ML-powered solution. They can then use that access to study the solution's decision process and learn what types of things the solution is looking for, how its thresholds are set, and what associations it draws based on the ML algorithms it is using. At that point, they can follow up by injecting instances of bad things into that pool so that the solution learns to treat those things as normal. This is called dataset pollution, or poisoning, because the attacker is effectively trying to confuse the solution's learning models so that they can conceal their attack campaigns. The toy sketch below illustrates the idea.
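
To make the idea concrete, here is a minimal, hypothetical sketch of dataset pollution using a toy traffic classifier. The two features, the sample counts, and the scikit-learn model are all illustrative assumptions, not a description of how any real detector is trained; the point is only that benign-labeled samples planted near the attacker's traffic can shift what the model considers normal.

```python
# A toy sketch of dataset pollution ("poisoning"). The two flow features,
# the sample counts, and the logistic-regression model are all hypothetical;
# they stand in for whatever a real ML-based detector might learn from.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical feature space: [bytes_per_flow, connections_per_minute] (scaled).
benign = rng.normal(loc=[1.0, 1.0], scale=0.3, size=(200, 2))
malicious = rng.normal(loc=[4.0, 4.0], scale=0.3, size=(200, 2))

X_clean = np.vstack([benign, malicious])
y_clean = np.array([0] * 200 + [1] * 200)  # 0 = benign, 1 = malicious
clean_model = LogisticRegression().fit(X_clean, y_clean)

# Pollution step: the attacker plants benign-labeled samples that resemble
# their own traffic (e.g., by generating innocuous-seeming flows while the
# system is learning its baseline).
poison = rng.normal(loc=[4.0, 4.0], scale=0.3, size=(400, 2))
X_poisoned = np.vstack([X_clean, poison])
y_poisoned = np.concatenate([y_clean, np.zeros(400, dtype=int)])
poisoned_model = LogisticRegression().fit(X_poisoned, y_poisoned)

# The same attack traffic is flagged by the clean model but, with enough
# planted samples, is typically treated as normal by the polluted one.
attack = np.array([[4.0, 4.0]])
print("clean model flags attack:   ", bool(clean_model.predict(attack)[0]))
print("polluted model flags attack:", bool(poisoned_model.predict(attack)[0]))
```

With enough planted samples, the retrained model typically stops flagging traffic at the attacker's operating point, which is exactly the outcome dataset pollution aims for.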

In the second type of attack, bad actors don't pollute any data. They simply obtain or infer the ML models and use them as a starting point to morph their attacks so that they can evade detection. This type of attack can work even when the solution was trained on a sound learning set and the attackers don't know exactly what it is looking for. In that scenario, they attempt to learn the ML tool's classifier, for example by repeatedly probing it and observing its verdicts, so that they can evade its algorithms going forward. The sketch below shows the basic mechanics.
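
The following sketch, again with invented features and an off-the-shelf scikit-learn model standing in for the real detector, shows the basic mechanics of an evasion attack: the attacker queries the classifier as a black box and gradually morphs a malicious sample toward benign-looking feature values until it is no longer flagged.

```python
# A toy sketch of an evasion attack. The detector is a stand-in model trained
# on invented features; the attacker only queries it as a black box and morphs
# a malicious sample toward benign-looking values until it is no longer flagged.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

# Hypothetical features: [payload_entropy, connection_interval_regularity].
benign = rng.normal(loc=[0.3, 0.8], scale=0.1, size=(300, 2))
malicious = rng.normal(loc=[0.9, 0.1], scale=0.1, size=(300, 2))
X = np.vstack([benign, malicious])
y = np.array([0] * 300 + [1] * 300)
detector = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Attacker's starting point: a clearly malicious sample, plus a guess at what
# "normal" traffic looks like (gleaned by observing benign flows).
sample = np.array([0.9, 0.1])
benign_guess = np.array([0.3, 0.8])

# Morph the sample step by step, querying the detector after each change and
# stopping as soon as the verdict flips to benign.
for step in range(1, 21):
    candidate = sample + (benign_guess - sample) * (step / 20)
    if detector.predict(candidate.reshape(1, -1))[0] == 0:
        print(f"evaded after {step} morphing steps at features {candidate.round(2)}")
        break
```

The real constraint, of course, is that the morphed traffic must still accomplish the attacker's goal, which is one reason evasion is harder against detectors that look at many independent aspects of the traffic.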

What’s Behind the Rise of Adversarial Machine Learning? 

Attackers aren't pursuing adversarial ML for any special reason other than that they have to if they want their campaigns to succeed. New weapons breed new anti-weapons, after all. It's the same arms race that has been driving the security community to create more sophisticated tools, and malware authors to integrate new evasive techniques into their payloads and mount increasingly sophisticated attacks.

For now, though, today's digital criminals are doing just fine with basic ransomware. In other words, it is not that bad actors are using their own AI to attack organizations. Instead, they're using their own, typically conventional, tools to bypass organizations' AI-based security systems. Organizations are increasingly deploying these types of solutions, and attackers need to figure out a way to evade them if they hope to accomplish their nefarious ends.

But it won't stay this way for long. In the future, the security community will undoubtedly see more AI-based attacks. Bad actors could theoretically use AI to increase their attacks' rate of effectiveness by examining their campaigns, analyzing the profiles of affected users, and tweaking their campaigns accordingly, all with the aid of ML that automatically separates successful efforts from those that were defeated (one could even imagine this as an application of reinforcement learning).

How Can Organizations Defend Against Adversarial Machine Learning? 

Given the challenges discussed above, organizations need to have plenty of "belts and suspenders" in place if they hope to thwart attackers' attempts to bypass these ML tools. What these organizations need is machine learning in depth, where several mechanisms are configured to look at the same data. This gives them a broad security perspective and saves them from the dangers of hitching their defenses to a single ML implementation.

This is not to say organizations need multiple products. But they should look to a product that uses multiple ML algorithms along with other levels of detection, such as model-based analysis and heuristics. Say one algorithm looks at "X," for example. If an attacker figures out how to fool that first algorithm, organizations will be in a weaker position to detect incoming threats. That's why organizations need algorithms that can detect "Y" and "Z," as these processes can use other data to spot anomalies even if attackers learn how to bypass the first ML-based detection, as the sketch below illustrates.
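
As a rough illustration of this machine-learning-in-depth idea, the sketch below runs the same flow features through several independent detectors, two anomaly-detection models and one plain heuristic, and raises an alert if any of them fires. The features, thresholds, and models are hypothetical; they simply stand in for the independent "X," "Y," and "Z" perspectives described above.

```python
# A toy sketch of "machine learning in depth": the same flow features are
# examined by two anomaly detectors and one plain heuristic, and an alert is
# raised if any layer fires. Features, thresholds, and models are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(2)

# Baseline of normal traffic: [flows_per_minute, avg_payload_bytes].
normal_traffic = rng.normal(loc=[50, 500], scale=[5, 50], size=(1000, 2))

# Two independent ML detectors trained on the same baseline.
iso_forest = IsolationForest(random_state=0).fit(normal_traffic)
oc_svm = OneClassSVM(nu=0.05).fit(normal_traffic)

def heuristic(flow):
    # Non-ML rule: flag very chatty, small-payload flows (beacon-like behavior).
    return flow[0] > 80 and flow[1] < 200

def detect(flow):
    """Raise an alert if ANY layer of the defense flags the flow."""
    flow = np.asarray(flow, dtype=float)
    votes = {
        "isolation_forest": bool(iso_forest.predict(flow.reshape(1, -1))[0] == -1),
        "one_class_svm": bool(oc_svm.predict(flow.reshape(1, -1))[0] == -1),
        "heuristic": bool(heuristic(flow)),
    }
    return any(votes.values()), votes

# Traffic crafted to slip past one detector may still trip another layer.
alert, votes = detect([95, 150])
print(f"alert={alert}, votes={votes}")
```

An attacker who has learned to fool one of these layers still has to contend with the others, which is the practical payoff of not hitching detection to a single ML implementation.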

Learn why NSX Network Detection and Response solutions are resilient to adversarial machine learning and how these tools' network-centric viewpoint can help keep your systems safe against attackers.