Tanzu Platform AI api GenAI

The Security Gap in AI Applications: Rethinking API Protection for a New Era

Integrating AI into applications has significantly increased the complexity of application security. While AI applications share fundamental vulnerabilities with traditional applications and can benefit from existing cybersecurity tools, these tools are often insufficient for AI applications.

Modern AI security necessitates specialized tools and practices. These include AI model firewalls that detect prompt injection and exfiltration, model vulnerability scanners, and attack simulators. It also requires data and model lineage tracking for provenance and explainability, as well as observability platforms for AI behavior and drift. Furthermore, specialized tools include AI governance frameworks, such as NIST AI RMF and ISO/IEC 42001, and continuous validation pipelines for retraining and adversarial testing.

Whether AI is used within applications or for application development—or both—it is important to update security plans to address new AI-driven attack vectors. A recent report highlights that 75% of organizations understand that rapid growth of AI in their enterprise exposes the limitations of legacy governance processes. The rapid adoption of AI in production applications indicates these new risks are being overlooked. Further, the APIs themselves have become more risky in the age of AI. A recent study highlighted this risk, revealing that 57% of AI-powered APIs were externally accessible, and 89% used insecure authentication methods like static keys. These novel attack vectors require organizations to rethink their application security posture when it comes to AI, and to adopt new solutions. 

The risk of API sprawl and AI

For generative AI (GenAI) apps to deliver real business value, they need access to your company’s proprietary data. Without it, models default to the public data they were trained on—meaning you get the same generic ideas as your competitors. If everyone is starting with the same new ideas, competitive advantage disappears. This is exactly like hiring an outside consultant and refusing to show them how your business actually operates.

APIs are the predominant way to give GenAI apps access to this crucial data. This is leading to a massive proliferation of custom endpoints that are skipping rigorous security checks, governance, and cost analysis. This phenomenon is not new. In fact, it’s the classic enterprise adoption pattern whereby teams prioritize features and speed over safety and oversight. This AI sprawl is introducing immediate and serious security problems. Thankfully, those problems can be solved with a proven approach and existing tools. But first, we must acknowledge the scale of the risk they pose.

This vulnerability explosion is now staggering: A recent study revealed that 439 AI-related CVEs surfaced in 2024—a 1,025% year-over-year increase from 2023. Nearly all of these new vulnerabilities (98.9%) were linked directly to APIs. This underscores the rapid, unsecured expansion of AI, making robust API security a non-negotiable requirement.

For example, many teams are currently using the Model Context Protocol (MCP) to act as this kind of API for LLMs. MCP itself has very little security built in, let alone the complex access and content controls needed for enterprise data. This is an excellent example of where having an API gateway brings in all the governance, enforcement, and content control you need immediately. 

The vital role of API gateways for AI security

To ensure you are managing security vulnerabilities, you can either reauthor your APIs to follow a consistent set of rules or use a platform. If you choose to rearchitect APIs, it’s important to make clear separations of components (nouns) and actions that can be performed (verbs). This will help greatly with agentic apps that may perform autonomous problem-solving tasks. Agents can only act to solve problems, but they need to be able to interact with the API in an expected way that outlines what tools and actions are available to the agent. Expected behavior also improves security and enables teams to address security consistently and as prescribed.

But rearchitecting still might not be enough. For AI app security, enterprise organizations should still have an AI app platform with an API gateway. Gateways ensure secure access along with common access control patterns that are an effective security layer. At the foundational level, gateways can enable you to add quota and token limits for AI apps to avoid blowing out budgets by apps ending up in an endless loop of agentic speculation. In an API gateway, an agent should be able to see the registered endpoints for the agents along with the context and skills of those endpoints. Platforms can also utilize deterministic workflows for specific situations and use cases for agents.  

Tanzu Platform enables safer AI applications

VMware Tanzu Platform, built on Cloud Foundry, benefits from a strong foundation of existing security and compliance features. Enhancing this security, Tanzu Platform incorporates an enterprise-grade API gateway based on Spring. This gateway offers robust traffic filtering and routing, along with essential enterprise capabilities, such as integrated security (OAuth2/JWT, SSO, HMAC), advanced traffic control (multi-factor rate limiting and traffic replay), extensions for API standards (OpenAPI, gRPC, GraphQL), and request/response content transformation and removal.

We recently launched Tanzu Platform 10.3, which offers enhanced AI observability and control. By using the AI services tile within Tanzu Platform to run your AI models or coding assistants, you gain crucial capabilities like access control, rate limiting, and quota management. These features help you ensure that only authorized personnel have access and help prevent resource overspending.

In addition, platform engineers can centralize security intelligence and gain a detailed view of risk exposure through the newly introducedVulnerability Insights Dashboard in Tanzu Hub. This dashboard provides vital information, including CVE criticality, patch levels, and environmental exposure for all Tanzu Platform components, enabling teams to rapidly triage, prioritize, and remediate security risks. The platform also allows teams to download the software bill of materials (SBOM) for all deployed Tanzu Platform components.

To learn more about the VMware Tanzu approach to security in the age of AI, watch this video: