
Shifting Security Left with AI — Is It Truly AI-Assisted Security, or an Infinite Loop?

A recent survey of more than 23,000 developers found that nearly half (49%) are now using AI regularly for coding and other development-related tasks. Among those developers, 73% report saving up to four hours per week through AI assistance.

But a key question remains: what are developers actually doing with that extra time?

While large language models (LLMs) have proven remarkably effective as coding assistants—especially because developer frameworks are so well-documented—they still have blind spots. LLMs don’t inherently understand an organization’s existing applications, data models, or infrastructure. As a result, the time savings from AI-assisted coding often get redirected elsewhere in the software development lifecycle (SDLC).

According to Atlassian’s State of Developer Experience Survey 2025, most developers are reinvesting their AI-driven time savings into improving code quality. That shift makes sense. As AI accelerates code generation, the sheer volume of new code has been increasing—bringing with it a higher need for review, testing, and debugging.

Research from Apiiro reinforces this point: vulnerabilities introduced by AI coding assistants require significant human oversight. The trade-off is clear: 4x faster code generation can come with 10x greater risk if not properly managed.

The AI Security Paradox: Are We Achieving AI-Assisted Security, or Simply Creating Nested Dependencies?

For developers, AI is being used not just for faster coding, but also for debugging and vulnerability scanning. When the same AI serves as both coding assistant and debugger, it is important not to create a recursive loop: AI writing code that is then reviewed and fixed by the same AI. While efficient in theory, this can compound assumptions and errors, much like a game of telephone.

In the rush to automate threat detection, code reviews, and policy enforcement, security teams are increasingly deploying LLM-based agents to detect threats like prompt injection, data exfiltration attempts, or unauthorized queries. But the same sophistication that makes these models capable of identifying nuanced patterns also makes them vulnerable to the very tactics they’re trained to catch.

For example, the AI system designed to detect prompt injection can itself be manipulated through prompt injection. A malicious actor doesn’t need to breach infrastructure or exploit a buffer overflow; they can simply convince the AI to overlook, reinterpret, or “approve” something harmful.

How the Paradox Unfolds

Let’s walk through a common sequence of events in this new security landscape:

  1. AI flags suspicious input.  An LLM integrated into a developer workflow detects an unusual instruction in a user prompt. It classifies the content as potentially malicious—a clever attempt at data leakage, for example.
  2. Developer asks the AI to explain its reasoning. The AI’s flag seems overcautious, so a developer asks it to elaborate. Why was this prompt suspicious? The model begins to reason through its decision, generating a natural-language explanation.
  3. An attacker exploits the explanation loop. The attacker crafts a secondary prompt designed to embed a hidden payload within the AI’s reasoning process. The model, attempting to be helpful, may interpret this input as part of its “analysis” and inadvertently override its own guardrails.
  4. AI explains away the suspicion. In the worst case, the model justifies the malicious input as safe, allowing it to pass through internal checks. The AI has, in essence, talked itself out of being secure.

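To make the loop concrete, here is a minimal Java sketch of steps 1 through 3, assuming a hypothetical LlmClient interface rather than any real model API. The weakness is that the flagged input is fed back into the model verbatim during the explanation step, which gives an attacker a second injection point inside the model’s own reasoning prompt.

```java
// Minimal sketch of the explanation loop above. LlmClient is a hypothetical
// interface standing in for whatever model API is actually in use.
interface LlmClient {
    String complete(String prompt);
}

class NaiveReviewLoop {
    private final LlmClient model; // the same model both detects and explains

    NaiveReviewLoop(LlmClient model) {
        this.model = model;
    }

    // Step 1: the model classifies the raw input.
    boolean isSuspicious(String userInput) {
        String verdict = model.complete(
            "Classify the following input as SAFE or MALICIOUS:\n" + userInput);
        return verdict.contains("MALICIOUS");
    }

    // Steps 2-3: the flagged input is fed back in verbatim, so any instructions
    // embedded in it are interpreted as part of the model's own analysis prompt
    // and can steer the explanation toward "approving" the input.
    String explainVerdict(String userInput) {
        return model.complete(
            "You flagged this input as suspicious. Explain your reasoning:\n"
                + userInput);
    }
}
```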
This recursive vulnerability, where AI systems manipulate or are manipulated through dialogue, creates an “infinite loop” of trust and deception. At its core, however, this is not a failure of technology; it is a failure of boundary definition.

AI is not a security silver bullet, but it can be useful if you “break the loop”

AI systems are conversational by design. They interpret, reason, and generate based on context. But when the boundaries between analysis and action are blurred, a model can inadvertently become part of the attack surface.  Security logic becomes entangled with natural-language logic. And that’s the danger.

Despite the sophistication of today’s models, they are still pattern-matchers, not true arbiters of nuance. They can be tricked, confused, or persuaded, sometimes spectacularly.

This means that relying solely on LLMs for threat detection, vulnerability analysis, or automated code approval introduces a new layer of systemic risk.  For example, model drift can weaken security judgments over time, content poisoning can alter how a model perceives safe or unsafe behavior, and adversarial prompts can reverse engineer filters and cause data leakage. 

However, if you still want to use LLMs for security, you need to ensure you are breaking the loop. Enterprises must, at a minimum, adopt multi-model security reviews, or better yet multi-layered LLM-driven security reviews, to avoid the recursive trap. In addition, the chain of testing and debugging needs a non-AI enforcement mechanism.
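As one way to break the loop, the hedged Java sketch below shows a multi-model review in which two independently configured models must agree before a change proceeds, and any disagreement escalates to a human rather than back to either model. ReviewModel and Verdict are illustrative types for this sketch, not a real API.

```java
// Hedged sketch of a multi-model review. In practice each ReviewModel would
// wrap a model from a different provider, or at least use independent prompts
// and configuration, so neither reviewer shares the writer's assumptions.
enum Verdict { APPROVE, REJECT }

interface ReviewModel {
    Verdict review(String codeDiff);
}

class MultiModelReview {
    private final ReviewModel primaryReviewer;     // model family A
    private final ReviewModel independentReviewer; // model family B

    MultiModelReview(ReviewModel primary, ReviewModel independent) {
        this.primaryReviewer = primary;
        this.independentReviewer = independent;
    }

    // A change proceeds only when both models independently approve it.
    boolean approve(String codeDiff) {
        Verdict first = primaryReviewer.review(codeDiff);
        Verdict second = independentReviewer.review(codeDiff);
        if (first != second) {
            // Breaking the loop: disagreement is never resolved by asking
            // either model to explain itself; it goes to a human reviewer.
            escalateToHuman(codeDiff, first, second);
            return false;
        }
        return first == Verdict.APPROVE;
    }

    private void escalateToHuman(String codeDiff, Verdict a, Verdict b) {
        System.out.printf("Escalating to human review (%s vs %s)%n", a, b);
    }
}
```

The key design choice is that the escalation path is deterministic and human-facing, so neither model can argue the other out of a rejection.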

Here are some practical best practices to apply:

  • Separation of concerns: AI models that detect should not be the same models that build, explain or enforce.
  • Immutable policies: Use hard-coded rule sets or non-AI validators for final approval of critical operations.
  • Observability and audit trails: Every model decision—flagged, approved, or overridden—should be logged and reviewed by a human.
  • Prompt provenance tracking: Maintain lineage of how each input, intermediate response, and output was generated and modified over time.

This structure helps ensure that AI remains an intelligent assistant, not the sole authority in the security chain.
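As a minimal illustration of the immutable-policy and audit-trail practices above, the Java sketch below applies a hard-coded, non-AI rule set as the final gate after any AI review and logs every decision. The blocked patterns and class names are assumptions chosen for illustration, not a standard rule set.

```java
import java.time.Instant;
import java.util.List;
import java.util.regex.Pattern;

// Sketch of a non-AI final gate with an audit trail.
class FinalApprovalGate {

    // Hard-coded, deterministic rules: an LLM cannot talk its way past them.
    private static final List<Pattern> BLOCKED = List.of(
        Pattern.compile("(?i)drop\\s+table"),
        Pattern.compile("(?i)aws_secret_access_key"),
        Pattern.compile("(?i)curl\\s+[^|]*\\|\\s*sh"));

    // Runs after any AI review; the AI verdict is recorded but never final.
    boolean approve(String change, String aiVerdict) {
        boolean allowed = BLOCKED.stream()
            .noneMatch(pattern -> pattern.matcher(change).find());
        audit(change, aiVerdict, allowed);
        return allowed;
    }

    private void audit(String change, String aiVerdict, boolean allowed) {
        // Every decision is logged so a human can review flagged, approved,
        // and overridden outcomes.
        System.out.printf("%s | aiVerdict=%s | finalApproval=%s | changeHash=%d%n",
            Instant.now(), aiVerdict, allowed, change.hashCode());
    }
}
```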

From AI Loops to Enterprise-Ready Application Security

While this paradox seems unique to AI, it mirrors challenges developers have faced for decades—particularly in the Java and Spring Framework ecosystems.

In traditional applications, developers have long relied on layered security: web filters, interceptors, controllers, service-level validations and access controls to guard against injection, spoofing, and session hijacking. AI introduces new versions of these same problems—only now they exist in the semantic layer instead of the code layer.

Furthermore, AI-assisted coding has dramatically increased the volume of code commits. Enterprise security teams, already stretched thin, need additional support to manage this surge, and leveraging AI for security can help absorb the increased code volume. Yet as the distinction between code logic and conversational logic blurs, security teams will still face considerable challenges. AI-assisted coding underscores the need for security models to evolve and shift left.

For developers, frameworks like Spring Security can play a crucial role in bridging AI trust boundaries. Spring Security provides comprehensive and extensible support for both authentication and authorization, with protection against attacks such as session fixation, clickjacking, cross-site request forgery, and more. Alongside AI-assisted testing and debugging, implementing an application platform like Tanzu Platform is highly recommended. Tanzu Platform is based on Cloud Foundry, which has a long history of security and compliance capabilities. Such platforms can help organizations proactively manage the influx of AI-generated code while keeping risk under control.
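As a minimal sketch, the configuration below uses the Spring Security 6 lambda DSL to require authentication for application endpoints. CSRF protection, session-fixation protection, and clickjacking defenses (X-Frame-Options) are enabled by default in Spring Security; the session and header settings here simply make those defaults explicit. The endpoint paths are placeholders.

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.Customizer;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.web.SecurityFilterChain;

@Configuration
@EnableWebSecurity
public class SecurityConfig {

    @Bean
    SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
        http
            // Require authentication for everything except public endpoints.
            .authorizeHttpRequests(auth -> auth
                .requestMatchers("/public/**").permitAll()
                .anyRequest().authenticated())
            // Create a fresh session on login to block session fixation.
            .sessionManagement(session -> session
                .sessionFixation(fixation -> fixation.migrateSession()))
            // Send X-Frame-Options: DENY to block clickjacking via framing.
            .headers(headers -> headers
                .frameOptions(frame -> frame.deny()))
            // Default form login; CSRF protection remains enabled by default.
            .formLogin(Customizer.withDefaults());
        return http.build();
    }
}
```

In this setup, the security filter chain plays the deterministic enforcement role described earlier: it sits in front of whatever code, AI-generated or not, handles the request.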

To learn more about AI security trends, watch this video on “What Platform Engineers Need to Know About GenAI Security and Compliance”.