In security, the battle between good and evil is a perpetually swinging pendulum. Traditionally, the potency of an attack has depended on the skill of the attacker and the sophistication of their arsenal. The same holds on the protection side of the equation: over $200B is invested year after year to strengthen cybersecurity and train personnel.

It is fair to say that Generative AI has turned this paradigm on its head. Now, an unskilled hacker with little sophistication can leverage Gen-AI “crowdsourced” constructs to become significantly more destructive with little to no investment or training. This explodes the threat surface significantly.

Consider a recent example that one of VMware’s security technologists shared, using the generally available ChatGPT. When he asked ChatGPT to create exploit code for a vulnerability, the request was appropriately denied.

[Screenshot: ChatGPT denies the request to write exploit code]

Note that the model understands the malicious nature of the request and invokes its ethical guardrails to justify the denial.

But what if you slightly shift the question’s tone and frame it as seeking “knowledge” instead?

[Screenshot: the same request, reframed as seeking knowledge]

What was previously denied is now easily granted with just a few keystrokes, and the exploit code is dished up.

[Screenshot: ChatGPT produces the exploit code]

Admittedly, you could dismiss this example as search on steroids. It is basic but powerful, and it grows more so with each passing day. Variations of ChatGPT continue to evolve, and these examples show how deadly the combination can be when bad intent meets borrowed sophistication. The attacker is now subsidized in both time and resources, amplifying their threat potential.

The example above is quite basic, but it demonstrates the explosion of the attack surface I described earlier.

Enterprise executives have recognized this problem; it has been top of mind in several conversations we’ve had with customers. They are educating themselves, and some are even experimenting with the new technology in innovation labs. And, despite all the “AI-washing” underway, they are doing their best to improve the signal-to-noise ratio. Gen-AI is no longer just hype for CISOs: it has many applications within the enterprise, with cybersecurity being just one focal area.

On the vendor side, several initiatives are underway to leverage Gen-AI for promising use cases, some introduced as “co-pilots.” Since this is an emerging area still surrounded by hype, vendors have to place deliberate bets, based largely on active, consultative engagement with customers.

For instance, consider a problem that Security Operations Centers (SOCs) experience. Despite all the instrumentation, a SOC can be a very noisy environment, especially in large enterprises. There are too many alerts from too many sources and tools, and, invariably, a lot of false positives, along with false negatives that slip through.

I liken this to the security screener at the airport. Despite the X-ray machines and metal detectors, quite a lot of harmful things do get through. It could be due to operator fatigue, objects that look different from what the screener has been trained to spot, or simply too much clutter; you name it.


Many alerts also lack context: they may not fit a known anomaly pattern, or they may mimic regular behavior. These are very hard to understand and detect, especially when the sample space is small. A large sample space presents an entirely different problem: it usually comes with swaths of alerts and red notifications that operators learn to tune out.

So how do you tackle this? VMware is actively investing and innovating in this space, and introduced Generative AI-based security co-pilot functionality at VMware Explore this year.

[Screenshot: VMware’s Gen-AI security co-pilot]

In this instance, the security co-pilot supports rapid triage without compromising accuracy. It can automatically layer in contextual information to prioritize and correlate alerts, and it can significantly reduce false positives, freeing human time and effort for the work that needs it. Further, Gen-AI-based rapid triage and correlation allow the root cause to be discerned accurately.
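
To make that concrete, here’s a minimal sketch of what LLM-assisted triage could look like. It assumes an OpenAI-style chat API and a model that can return JSON; it is purely illustrative and not how VMware’s co-pilot is actually built.

```python
# Illustrative sketch of LLM-assisted alert triage; not VMware's actual
# implementation. Assumes the openai Python client and a JSON-capable model.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def triage_alert(alert: dict, recent_alerts: list[dict]) -> dict:
    """Ask the model to prioritize one alert against recent, possibly
    related alerts, returning a structured triage verdict."""
    prompt = (
        "You are a SOC triage assistant. Given the alert and the recent "
        "alerts below, return JSON with fields: severity (1-5), "
        "likely_false_positive (bool), related_alert_ids (list of strings), "
        "and rationale (one sentence).\n\n"
        f"Alert: {json.dumps(alert)}\n"
        f"Recent alerts: {json.dumps(recent_alerts)}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        response_format={"type": "json_object"},  # force parseable JSON
        messages=[{"role": "user", "content": prompt}],
    )
    return json.loads(resp.choices[0].message.content)
```

The point isn’t the specific model or prompt: it’s that context the operator would have to assemble by hand gets layered in automatically, and the verdict comes back in a form other tooling can act on.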

Gen-AI can also help model low-threshold anomalies where no signature patterns are available: deviations that fall below normal detection thresholds but become meaningful once additional context is applied.
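
As an illustration of that idea, the sketch below scores a “below-threshold” event with an unsupervised model instead of a signature. The feature columns and numbers are invented for the example:

```python
# Illustrative signature-free anomaly scoring; the feature columns
# (logins/hour, bytes sent, distinct hosts touched) are assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Baseline of "normal" activity to learn from.
normal = rng.normal(loc=[5, 2_000, 3], scale=[2, 500, 1], size=(1_000, 3))
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A low-threshold deviation: mildly elevated on every axis but extreme on
# none, exactly the kind of event a single-signal threshold misses.
event = np.array([[9, 3_500, 6]])
print(model.score_samples(event))  # lower score = more anomalous
print(model.predict(event))        # -1 marks an outlier, +1 an inlier
```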

Once the alerts are correlated and the root cause is triaged, these co-pilot offerings can help with remediation by recommending security policies specific to that alert or incident. This significantly reduces time to response, and it will only get faster as the AI engine learns, over time, to discern the right policy application more rapidly.

The recommendations should certainly be vetted by qualified operators before they are applied. Based on the severity of the alert, the response policy application could also be automated.
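
One way to picture that gate is sketched below; the 1–5 severity scale and the helper functions are hypothetical, not a description of any specific product:

```python
# Hypothetical human-in-the-loop gate for policy recommendations; the
# severity scale and helper functions are assumptions for the example.
AUTO_APPLY_THRESHOLD = 5  # fully automated only at the highest severity

def apply_policy(incident_id: str, policy: dict) -> None:
    print(f"[{incident_id}] applying: {policy['name']}")

def queue_for_review(incident_id: str, policy: dict) -> None:
    print(f"[{incident_id}] queued for analyst review: {policy['name']}")

def handle_recommendation(incident_id: str, policy: dict, severity: int) -> str:
    if severity >= AUTO_APPLY_THRESHOLD:
        apply_policy(incident_id, policy)   # e.g., quarantine a host
        return "auto-applied"
    queue_for_review(incident_id, policy)   # an operator vets it first
    return "queued"

handle_recommendation("INC-1042", {"name": "isolate-host"}, severity=5)
handle_recommendation("INC-1043", {"name": "rotate-keys"}, severity=3)
```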

Generative AI could also help model cause-and-effect scenarios more rapidly, iterating to predict the impact of a response. If the modeling shows the desired outcome won’t be achieved, the policy application or incident response can be quickly modified. This is particularly useful when multiple recommendations are made and the operator has to choose the deployment they deem most pertinent. The model can also surface the path of least resistance: the change that achieves the outcome while being the most innocuous.
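
A toy version of that selection step might look like the following, where the containment and disruption scores are stand-ins for what a Gen-AI model would predict:

```python
# Choosing the "path of least resistance" among candidate responses.
# Containment/disruption scores stand in for model-predicted impact.
candidates = [
    {"policy": "isolate host",        "containment": 0.95, "disruption": 0.80},
    {"policy": "block outbound port", "containment": 0.85, "disruption": 0.30},
    {"policy": "force re-auth",       "containment": 0.70, "disruption": 0.10},
]

def least_resistance(options, min_containment=0.8):
    """Least-disruptive option that still clears the containment bar."""
    viable = [c for c in options if c["containment"] >= min_containment]
    return min(viable, key=lambda c: c["disruption"]) if viable else None

print(least_resistance(candidates))  # -> "block outbound port"
```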

The power of Gen-AI tools is unlocked not just when there are large swaths of data, but also when data is minimal and AI can step in to detect patterns and surface sophisticated correlations that aren’t easily apparent. This is useful for zero-day attacks as well, where pattern matching and large-scale data may not be readily available.

These examples barely scratch the surface. I’m quite excited about the potential Gen AI holds. For those in positions of leadership and influence, it’s important to strike a balance: leveraging these powerful constructs without sidelining ethics. Whether the tool will become more powerful than the craftsman, only time will tell. Regardless, we can say with confidence that the next decade will belong to Gen AI.