Tanzu Platform AI

AI App Delivery with Tanzu Platform: Empowering Platform Teams with Enhanced Integrations and Enterprise Governance

The initial barrier companies face when adopting a new technology is the set of capabilities we often lump together as “enterprise ready”: security and compliance, governance, integrations, and cost management. In the AI space, this list also includes AI model governance.

In our continued commitment to empowering organizations to build and operate enterprise-ready AI applications safely, the Tanzu Platform team is introducing additional enterprise-readiness features at VMware Explore Las Vegas 2025, including AI tool integrations and quota capabilities for AI models. These features are being engineered to give enterprises more control and safety across the AI model lifecycle.

Increased interoperability with AI tools and safety integrations (tech preview)

Building on our legacy of enabling customers to better meet their security and governance obligations, we are excited to announce a new architectural feature designed to enhance the flexibility and security of AI inference calls on Tanzu Platform: inbound and outbound webhooks for inference calls. This new capability can provide a more seamless mechanism for integrating external tools with AI workflows on Tanzu Platform. Here are some of the features included in this tech preview:

Figure 1: Inbound/Outbound Webhooks Architecture with Tanzu Platform

  • Inbound webhooks can allow external applications to be triggered for every incoming inference call, making it possible to inject content review before the request is passed to the backing model (see the sketch after this list).
  • Outbound webhooks can facilitate the review of responses from LLMs before they are sent back to the user, enabling organizations to inject post-processing such as response validation or sensitive data scrubbing.
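
To make the inbound flow concrete, the sketch below shows what a content-review webhook might look like as a small Flask service. The endpoint path, payload fields, and allow/block response contract are illustrative assumptions for this tech preview, not the platform’s documented interface.

```python
# Minimal sketch of an inbound webhook receiver. The /inbound-review path,
# the request payload shape, and the allow/block response contract are
# illustrative assumptions; consult the Tanzu Platform documentation for
# the actual webhook contract.
from flask import Flask, jsonify, request

app = Flask(__name__)

BLOCKED_TERMS = {"internal-project-codename", "confidential"}  # example policy list

@app.post("/inbound-review")
def review_inference_request():
    payload = request.get_json(force=True)
    prompt = payload.get("prompt", "")

    # Simple content review: block prompts containing disallowed terms.
    if any(term in prompt.lower() for term in BLOCKED_TERMS):
        return jsonify({"action": "block", "reason": "prompt violates content policy"}), 200

    # Otherwise allow the (optionally rewritten) prompt to reach the backing model.
    return jsonify({"action": "allow", "prompt": prompt}), 200

if __name__ == "__main__":
    app.run(port=8080)
```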

These webhooks can significantly improve the extensibility and automation possibilities for organizations leveraging AI on the Tanzu Platform, enabling complex AI-driven processes to be orchestrated with greater ease and efficiency. 

For example, these new inbound/outbound webhooks can be used to apply AI content safety tools and, in particular, to help address a paramount concern for organizations using AI: the protection of sensitive information. With this extensible architectural feature, organizations running AI applications on Tanzu Platform will be able to plug in their preferred personally identifiable information (PII) scrubbing and redaction services to automatically identify and remove or mask sensitive data points, such as names, addresses, Social Security numbers, and financial details, before they are processed by AI models or exposed to system users. Because the architecture is extensible, organizations can integrate whichever content safety solutions fit their specific governance and security requirements, helping them comply with privacy regulations and internal policies throughout the AI lifecycle. Beyond content safety tools, this feature also offers a powerful way to apply any pre- or post-inference processing an organization wants to standardize across all consuming applications.
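
As a rough illustration of the post-inference side, the following sketch shows a hypothetical outbound webhook that masks common PII patterns in a model response before it is returned to the caller. The route, payload shape, and regex rules are assumptions for illustration only; in practice, an organization would typically call its preferred redaction or PII-detection service here instead.

```python
# Minimal sketch of an outbound webhook that redacts PII from model responses
# before they reach the consuming application. The /outbound-scrub path and
# the payload/response shapes are illustrative assumptions, not the platform's
# documented contract.
import re
from flask import Flask, jsonify, request

app = Flask(__name__)

PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

@app.post("/outbound-scrub")
def scrub_model_response():
    payload = request.get_json(force=True)
    text = payload.get("response", "")

    # Mask each detected PII span with a labeled placeholder.
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label.upper()}]", text)

    return jsonify({"response": text}), 200

if __name__ == "__main__":
    app.run(port=8081)
```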

Improved cost control for AI models with new quota capabilities 

Platform teams face the critical challenge of managing AI model consumption and associated expenditure. A plan-level AI model quota feature coming to Tanzu Platform can enable more precise cost control per use case, thereby helping to prevent overconsumption and unexpected costs. This feature can benefit organizations in a few ways:

Figure 2: Tanzu Platform enables platform teams to define request and token quotas at plan level

  • Granular consumption control – This feature can empower administrators to define and enforce consumption quotas across various teams and individual consumers on a per-plan basis. This can improve resource allocation by aligning it with business priorities and prevent overuse of AI model capacity.
  • Support for diverse metrics – The platform is designed to support both request-based and token-based quotas. This flexibility lets platform teams choose the most appropriate consumption metric for each AI model and its usage patterns: a model with high computational demands might be better managed with token limits, while a simpler API might be governed with request limits (see the sketch after this list).
  • Synergy with rate limits – Plan-level quotas work in conjunction with existing rate-limiting mechanisms. While rate limits control the immediate flow of requests to prevent model overload, quotas provide a longer-term, cumulative control over resource consumption within a defined plan. This multilayered approach enables better cost control.
  • Universal applicability – The quota system is implemented consistently across all AI models and all consumers within a given plan. This can provide a unified and predictable framework for resource management, simplifying administration, aligning consumption with budget cycles, and improving fairness in resource distribution.
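
To illustrate how request-based and token-based quotas compose, here is a small sketch of plan-level quota accounting. The class and field names are hypothetical; Tanzu Platform performs this enforcement itself, so the snippet only models the behavior described above, with rate limits still governing the per-second flow of requests separately.

```python
# Minimal sketch of plan-level quota accounting with both request- and
# token-based limits. All names are illustrative assumptions; this is a
# behavioral model, not Tanzu Platform's implementation.
from dataclasses import dataclass


@dataclass
class PlanQuota:
    max_requests: int      # cumulative requests allowed for the plan period
    max_tokens: int        # cumulative tokens allowed for the plan period
    used_requests: int = 0
    used_tokens: int = 0

    def try_consume(self, tokens: int) -> bool:
        """Record one inference call; return False if either quota is exhausted."""
        if self.used_requests + 1 > self.max_requests:
            return False
        if self.used_tokens + tokens > self.max_tokens:
            return False
        self.used_requests += 1
        self.used_tokens += tokens
        return True


# Example: a hypothetical plan capped at 10,000 requests and 2M tokens per cycle.
plan = PlanQuota(max_requests=10_000, max_tokens=2_000_000)
if plan.try_consume(tokens=750):
    print("call admitted; remaining tokens:", plan.max_tokens - plan.used_tokens)
else:
    print("quota exhausted; call rejected before reaching the model")
```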

With plan-level quotas, platform teams can optimize AI model usage, balance consumption across teams, and maintain tighter control over the costs associated with AI services. This proactive approach to consumption management is crucial for scalable and sustainable AI adoption within any organization.

Confidently pursue AI-native application transformation with Tanzu Platform

The Tanzu Platform team is enhancing its enterprise governance and secure-extensibility features to help customers better meet the increasing demand for secure, compliant, and scalable AI application development in complex enterprise settings.

These new features aim to reduce the operational complexity and risk associated with enterprise AI adoption, empowering organizations to confidently leverage AI for innovation, improved operational efficiency, and competitive advantage while striving toward the highest standards of security and regulatory compliance for AI. With these new features, Tanzu Platform is an attractive option for enterprise innovation with AI.

Get started building AI apps today

Current Tanzu Platform customers can begin building AI applications with AI Starter Kit for Tanzu Platform. AI Starter Kit is an AI adoption path curated by the Tanzu team that includes installation scripts along with how-to guidance to help you build, operate, and optimize production-ready AI applications. Download now!