
Cloud Native Runtimes for VMware Tanzu Is Now GA, Plus an Integration with TriggerMesh

Back in March, during our Cloud Transformation event, we released the public beta of Cloud Native Runtimes for VMware Tanzu, which is based on Knative serving and eventing technology. Today, we have a couple of new, exciting announcements to make about Cloud Native Runtimes. 

The first is that Cloud Native Runtimes’ serving capabilities are now generally available for all VMware Tanzu Advanced edition customers. If you have purchased Tanzu Advanced, you can go to Tanzu Network and download Cloud Native Runtimes to get started with serverless capabilities for Kubernetes today. The second announcement is that we have integrated event sources created by TriggerMesh into Cloud Native Runtimes to address one of the largest challenges that organizations face in this increasingly hybrid and multi-cloud world: how to connect modern, event-driven applications that span multiple infrastructure technologies, software services, data centers, and clouds.

The addition of Cloud Native Runtimes to Tanzu Advanced edition is a major milestone in our journey to deliver the best application team experience on top of Kubernetes. We’ve been working alongside our customers as they undergo profound operational and cultural changes in the name of creating better digital experiences and business outcomes through modern applications. Indeed, at Team Tanzu, everything we do is geared toward enabling our customers to achieve those outcomes faster, more efficiently, and more securely than they ever thought possible. When you combine Cloud Native Runtimes with the other capabilities in Tanzu Advanced edition, a cohesive application platform built around those outcomes and superior application team experiences begins to emerge.

Reduce Kubernetes complexity with Knative serving 

Today, Tanzu Advanced gives you a great foundation on which to build and deploy container-based applications and to construct robust DevSecOps pipelines that shepherd software all the way from ideation to production. Its value for application teams has always been that it expedites the path to production; it also streamlines security and compliance—whether that be through an automated container build and management system, a cross-cluster and cross-cloud control plane, or a trusted catalog of open source app components. Even with these capabilities, however, many developers and operations teams still face a steep learning curve when it comes to building container-based software for Kubernetes and efficiently managing that software in various environments.

Cloud Native Runtimes’ serving capability addresses some of their core challenges. It not only simplifies the way that developers interact with Kubernetes to continuously test and improve their applications, but also abstracts away much of the complexity so that operators can deploy and manage applications in the most efficient and resilient way possible.

Developers can get a URL to test their app in seconds

Developers continuously test their applications as they’re iterating on them; they need to see how new code behaves in the context of both a container and an environment like the one it will be deployed in when changes go into production. But getting a URL to test your application on Kubernetes can be complicated, as there are multiple steps to take and decisions to make, each of which can alter the way your application behaves. And while you might grasp the basics of Kubernetes services, ingress, and networking, you can run into trouble if you are configuring those things differently than they’ll be configured in production environments. 

If, however, your team uses Cloud Native Runtimes, the process of getting a URL consists of executing one command that deploys your application container along with all the routing and services you need to access it. That’s all there is to it. Moreover, if the container will ultimately run on Cloud Native Runtimes in production, inconsistencies between the two environments are far less likely—even if your development environment is a cluster running in kind or minikube on your laptop. Deploying subsequent iterations of your app is about as easy as it was to deploy it the first time. And if you need to troubleshoot your app or even just better understand what’s going on, the underlying Kubernetes infrastructure is all still at your fingertips; the abstraction does not hide or restrict access to any of it.
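To make that concrete, the single deploy command (for example, `kn service create`) is roughly equivalent to applying a short Knative Service manifest like the illustrative sketch below. The app name, namespace, and image are placeholders, not part of the product documentation:

```yaml
# Illustrative Knative Service manifest; names and image are placeholders.
# Applying this gives you a deployed container plus a routable URL,
# with revisions, routing, and services created for you.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello-app          # hypothetical app name
  namespace: dev           # hypothetical namespace
spec:
  template:
    spec:
      containers:
        - image: registry.example.com/team/hello-app:latest  # placeholder image
          ports:
            - containerPort: 8080
```

With the Knative CLI, `kn service create hello-app --image registry.example.com/team/hello-app:latest` achieves the same result and prints the URL when the service is ready.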

Operators get simpler yet more powerful app management capabilities

Cloud Native Runtimes’ serverless capabilities not only drastically simplify the process for developers to run applications in a Kubernetes development environment, they also drastically simplify the process for operators to deploy and manage applications in production. Building and running applications in stateless containers and scheduling them with Kubernetes opens up a whole new world of possibilities when it comes to scaling, upgrading, and rolling them back. But to realize those gains, you need to navigate a lot of complexity—even if you have advanced Kubernetes skills. Cloud Native Runtimes’ Knative serving technology eliminates much of that complexity to make any ops team capable of advanced application management on Kubernetes. Teams that leverage Cloud Native Runtimes can automatically scale their workloads in and out based on how many requests they are receiving, including down to zero when there is no traffic. 
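The request-based autoscaling described above, including scale-to-zero, is configured declaratively through annotations on the service’s revision template. This is a hedged sketch—the annotation keys come from the Knative autoscaler, but the service name, image, and values are examples only:

```yaml
# Sketch of Knative autoscaling settings on a Service's revision template.
# Service name and image are placeholders; values are examples.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: orders-api   # hypothetical service
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/minScale: "0"    # allow scale-to-zero when idle
        autoscaling.knative.dev/maxScale: "20"   # cap the number of replicas
        autoscaling.knative.dev/target: "50"     # ~50 concurrent requests per pod
    spec:
      containers:
        - image: registry.example.com/team/orders-api:latest  # placeholder image
```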

Operators have a huge variety of options for simplifying application management with advanced routing that enables, among other things, upgrade orchestration patterns like blue-green and canary. Traffic policies can be applied to workload revisions to direct a percentage of incoming requests to them, and custom tags can be applied to make revisions invocable via subroutes instead of the main route. While you could manipulate Kubernetes directly to achieve similar outcomes, doing so would be far more difficult and costly to maintain. Given the ever-increasing number of applications and microservices to manage and the rapid pace of development, operations teams welcome the combination of simplicity and power they get from Cloud Native Runtimes.
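A canary rollout of the kind described above can be expressed directly in the Knative Service’s traffic block. This is an illustrative sketch—Knative generates revision names on deploy, so the names, image, and percentages here are placeholders:

```yaml
# Hedged sketch of a canary rollout using Knative traffic splitting.
# Revision names and image are placeholders.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: orders-api
spec:
  template:
    spec:
      containers:
        - image: registry.example.com/team/orders-api:v2  # placeholder image
  traffic:
    - revisionName: orders-api-00001   # current stable revision
      percent: 90
    - revisionName: orders-api-00002   # new revision under test
      percent: 10
      tag: canary   # also directly reachable via a canary- subroute
```

Shifting the percentages over time completes the rollout; setting the new revision to 100 and the old to 0 (while keeping it listed) gives a blue-green cutover with an instant rollback path.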

While Cloud Native Runtimes delivers a lot of value on its own, in the context of Tanzu Advanced edition, it really shines. That’s because when combined with the innovative capabilities included in Tanzu Advanced, a seamless platform experience for Kubernetes begins to emerge. A good example is the combination of VMware Tanzu Build Service with Cloud Native Runtimes. From a developer perspective, this combination further simplifies the process of testing applications because Tanzu Build Service automates the process of building and updating containers whenever a change is detected in their code. These updated containers can then be quickly deployed for testing. 

The benefit of using Tanzu Build Service together with Cloud Native Runtimes for applications already running in production is twofold. First, Tanzu Build Service continuously updates containers based not only on code changes, but on dependency and operating system updates as well. Second, when you couple that with Cloud Native Runtimes’ simplified yet advanced upgrades, you can easily create workflows that constantly deploy the most up-to-date and securely patched containers possible. In case you missed our demo showing how that works, you can watch it in our Cloud Native Runtimes beta announcement from March.

Using Tanzu Build Service and Cloud Native Runtimes together is just one of many examples of how this new serverless capability works with the entirety of Tanzu Advanced to deliver a great application experience on Kubernetes. Keep checking our blog for more demos over the next few months!

Build multi-environment, event-driven apps with Cloud Native Runtimes and TriggerMesh

Cloud Native Runtimes simplifies building event-driven applications on Kubernetes with Knative eventing. Yet it remains difficult to consume events from disparate sources and environments. What is needed is a single API through which events can be consumed in an automated way, regardless of the event source. To that end, we are excited about a new integration between Cloud Native Runtimes and TriggerMesh that makes it easy for Knative eventing resources to consume external events. TriggerMesh is a Technology Alliance Partner with VMware.

Built on Kubernetes and Knative, the TriggerMesh integration platform connects different types of applications with each other, regardless of infrastructure, leveraging the industry-standard CNCF CloudEvents spec. Users compose TriggerMesh Bridges out of one or more Sources, a Broker that provides event routing, filtering, transformation, and splitting, and one or more Targets. By combining TriggerMesh and Cloud Native Runtimes, you will not only be able to easily create event-driven applications on Kubernetes but will also have a smooth path to integrating those applications with external and legacy apps and services. 

As our initial use case for Cloud Native Runtimes and TriggerMesh working together, we have developed an integration that connects workloads and data located in Amazon Web Services (AWS) with event-driven Kubernetes applications running on VMware Tanzu Kubernetes Grid. Any AWS service can generate events, but consuming those events with applications in different environments can require writing and maintaining a lot of AWS-specific code. TriggerMesh gives you a standardized mechanism for integrating AWS (and other cloud) events into your application through Cloud Native Runtimes Knative eventing, without the need to alter your application or write specialized code. 
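The wiring described above might look like the following sketch: a TriggerMesh S3 source that emits CloudEvents into a Knative eventing broker, and a Trigger that routes matching events to an application. The kinds and fields follow TriggerMesh’s published CRDs but may differ between versions; the ARN, event type string, and service names are placeholders, and AWS credential configuration is omitted:

```yaml
# Illustrative sketch of a TriggerMesh S3 source feeding Knative eventing.
# Field names may vary by TriggerMesh version; all names are placeholders.
apiVersion: sources.triggermesh.io/v1alpha1
kind: AWSS3Source
metadata:
  name: plate-uploads
spec:
  arn: arn:aws:s3:::plate-photos          # placeholder bucket ARN
  eventTypes:
    - s3:ObjectCreated:*                  # emit a CloudEvent per upload
  sink:
    ref:
      apiVersion: eventing.knative.dev/v1
      kind: Broker
      name: default
  # AWS credential configuration omitted; see the TriggerMesh docs
---
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: on-upload
spec:
  broker: default
  filter:
    attributes:
      type: com.amazon.s3.objectcreated   # illustrative CloudEvents type
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: plate-ocr                     # hypothetical app consuming the events
```

Because everything arrives as a CloudEvent, the consuming application needs no AWS SDK or AWS-specific code; it simply handles HTTP-delivered events.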

To demonstrate a real-world example of how a Tanzu Advanced customer might leverage these technologies together, VMware Staff Technical Marketing Architect Myles Gray and TriggerMesh Engineer Jeff Neff have created a demo video showing how TriggerMesh can connect data upload events from Amazon S3 with a machine learning (ML) app built using Cloud Native Runtimes and TensorFlow Serving from the VMware Tanzu Application Catalog. In this hypothetical use case, an industrial organization photographs vehicle license plates as they enter a facility. Every time a vehicle enters, its license plate photo is uploaded by the camera to an S3 bucket. Each upload triggers an event that is routed through TriggerMesh to the TensorFlow application running on Kubernetes, where the photographic data is converted to text. The license plate number is then recorded in a Google Sheet along with some other details, like the time of capture. This data can then be easily compared with, for example, an access list to automate and accelerate physical security operations.

This use case, with its time-of-day traffic peaks and multi-cloud application flow, demonstrates the power of using TriggerMesh and Cloud Native Runtimes together. As more S3 upload events are seamlessly funneled to the ML application running on-prem, Cloud Native Runtimes automatically scales the application out to quickly process the new data in S3. As soon as those processes are complete and the data is recorded, the application is scaled back down to zero. Everything in this video could be accomplished using components sourced from Tanzu Advanced combined with TriggerMesh and Amazon S3.

The Knative eventing functionality in Cloud Native Runtimes is still in beta, so even if you are not already a Tanzu Advanced edition customer, we encourage you to install an evaluation copy of Cloud Native Runtimes, which comes with TriggerMesh out of the box.

Learn more about Cloud Native Runtimes

If you are a Tanzu Advanced edition customer, you are now entitled to supported serverless capabilities on Kubernetes through Cloud Native Runtimes. If you are not yet a Tanzu Advanced customer, check it out on our website and contact your VMware sales rep for more information. And once you’re ready to use Cloud Native Runtimes, the best place to get started is our comprehensive documentation.