Since its inception as Pivotal Cloud Foundry, Tanzu Platform's mission of helping enterprise customers build and deliver great software quickly has not changed. Through the years, great software has included everything from cloud-native and 12-factor apps to microservices and functions to mobile apps, moduliths, and software running at the edge. Today the list includes GenAI and agentic AI.
To quote a famous Pivotal Cloud Foundry Haiku written by Onsi Fakhouri:
Here is my source code
Run it in the cloud for me
I do not care how
A core tenet of Tanzu is that your underlying platform should treat all applications the same – a workload is a workload after all. This underscores Tanzu’s value and helps our customers release apps more quickly and securely so they can:
- adjust to dynamic market conditions quickly,
- deliver new, revenue-generating features regularly,
- save money by improving application and operations efficiency, and
- meet security and compliance standards at scale while reducing the time it takes to recover from CVEs and failures.
Tanzu is the best place to run AI Apps
Today that Tanzu Platform Haiku could read like this:
Here is my source code
Connect an AI model
I do not care how
While some industry research claims that two-thirds of enterprises are still struggling to move beyond experimentation, Tanzu customers are running AI-enabled applications in production. Some are releasing net-new applications, others are adding AI models as they refactor existing applications, and some are adding autonomy to new and existing applications with agentic AI features. In every case, while other enterprises are still stuck in exploration, Tanzu customers are pushing enterprise-ready agentic AI and GenAI apps to production in a matter of weeks.
AI-enabled applications can make highly paid knowledge workers and engineers more productive. One large Tanzu manufacturing customer built an internal agent registry from which data science and development teams can access the right services for their work, whether they are building in Python, Node.js, or Java.
Evolving from Speed to Velocity
Speed is a cliché when it comes to apps and new innovations, but it has been one of the key measures of software development success for our customers for many years. The faster you can get new features into production, the sooner and more frequently you can experiment and discover how to improve the way your business functions. Great apps rely on a fast build-operate-optimize loop; they also have direction, in that they are oriented toward a specific goal, whether it's improving engagement, increasing accuracy, or just generating revenue. Speed is great, but velocity gets you somewhere.
Nonetheless, right now this need for speed is especially important for AI-driven applications. We're all still discovering how to use AI to improve how business works, and in that phase, experimentation is vital.
That’s why we think an AI platform is critical to every organization’s AI strategy. Platforms focus on removing toil from the build-operate-optimize cycle to speed up app delivery. And, just as with apps, speeding up that build-operate-optimize loop will help you discover how to use AI to run your business better. Platforms not only help you go fast; they help you do so consistently and at scale.
Here are some examples:
1. One organization added AI capabilities to its test analysis tool suite to speed up the loop data scientists went through to study and then make design recommendations for their products.
2. A maintenance team at an organization used AI to find inconsistencies in repair logs to quickly diagnose product failures.
3. Operating globally means keeping track of and complying with numerous, evolving regulations in each region. Analysis of how those regulations affect product enhancements can slow down the release cycle. So one organization added AI to scan these regulations and match them up to product features.
4. In a similar case, one organization that sells products in different countries used AI to make sense of regional sales practices and pricing on their products, matching regional purchasing behavior to products that would help increase sales and margin.
What’s common in each of these cases is how quickly platform engineers were able to deliver services, models, and APIs to developers safely and at scale. That, in turn, let developers add AI capabilities to their application architectures with little friction in the app dev and delivery process.
Any App, Anywhere
Tanzu Platform is a proven and trusted, AI-ready private PaaS where engineers, data scientists, and developers alike can easily push their AI-embedded workloads like any other app. That's because Tanzu is designed as a modular, app-aware platform, pre-engineered to make it easy for developers to build, deploy, run, and secure GenAI applications as well as microservices, business apps, and any other workload.
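To make that developer experience concrete, here is a minimal sketch of what an AI-embedded workload can look like. It assumes a Spring Boot application using Spring AI, with a chat model made available to the app through a platform-provided service binding; the endpoint path and class names are illustrative, not taken from a specific customer.

```java
// Minimal Spring Boot + Spring AI app: the code only asks for "a chat model";
// where that model runs and how credentials reach the app is left to the platform.
// Assumes the Spring AI starter auto-configures a ChatClient.Builder from a bound model (illustrative).
package com.example.demo;

import org.springframework.ai.chat.client.ChatClient;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
public class DemoApplication {
    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }
}

@RestController
class ChatController {

    private final ChatClient chatClient;

    // The builder is wired to whichever model the platform has bound to the app;
    // the application code does not care which provider backs it.
    ChatController(ChatClient.Builder builder) {
        this.chatClient = builder.build();
    }

    // GET /chat?q=... returns the model's answer as plain text.
    @GetMapping("/chat")
    String chat(@RequestParam String q) {
        return chatClient.prompt()
                .user(q)
                .call()
                .content();
    }
}
```

From there, deploying it is the same push-your-source-code workflow as any other app: the model credentials arrive via the service binding instead of being baked into the code.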
To be sure, our customers are achieving great results building and delivering innovative AI applications that help them improve and grow their businesses. Just as we began this article with Tanzu's mission, we'll close by reiterating that Tanzu is great not just for AI applications but for any workload running on any cloud, private or public, as highlighted in this case study featuring Tanzu customer DATAEV.
Build your AI Engineering Skills
Tanzu customers looking to sharpen their AI skills should consider attending Cloud Foundry Day on May 14th and registering for the Cloud Foundry Day Platform Engineering Skills for GenAI and Agentic Training Workshop being held the day before. You can also check out Cloud Foundry Weekly's multiple shows dedicated to GenAI.