
PCF 2.5 Strengthens Istio and Envoy Integration, Brings Weighted Routing and Multi-Port Support

The routing layer in Pivotal Application Service (PAS) is an underappreciated piece of the platform. After all, it’s responsible for all the traffic to and from your applications. At first, the routing tier did the basics: matching headers and passing along requests. As time went on, it improved. The router was rewritten in Go, becoming the "Gorouter", largely for performance gains. A TCP router was added for non-HTTP(S) traffic. The team built WebSocket support. Piece by piece, routing in Cloud Foundry became more secure and highly scalable.

Lately, though, routing has experienced a revival, something more than just a steady flow of enhancements.

This reawakening is thanks to deeper integration with Istio and Envoy. As my colleague Jared Ruckle described, we laid out plans for four major enhancements to the Cloud Foundry routing tier:

  1. Mutual TLS between the Gorouter and application instances

  2. Enhanced ingress routing

  3. Enhanced app-to-app routing and load balancing

  4. Deeper application security policies

PAS 2.1 brought us mutual TLS between the router and application instances, marking PAS’s first integration with Envoy. This improved security by verifying each application's identity and encrypting communication between the router and the application. PAS 2.3 and 2.4 made this the default behavior, emphasizing Pivotal's commitment to a platform that's secure by default.

With mTLS checked off the to-do list, PAS 2.5 brings us to item number two: enhanced ingress routing. By this, we mean how your traffic gets routed through the layers of Cloud Foundry and on to your application. While the guts of the router have improved over the years, developers couldn’t necessarily benefit from this work directly. That changes in PAS 2.5.

Now developers can enjoy weighted routing (a beta feature) and multi-port support for their apps.

 

Better Control Over Your Traffic With Weighted Routing

There’s more to deploying software than just pushing code and calling it a day. Blue/Green deployments, A/B testing, and gradual rollouts are increasingly popular, and for good reason.

No one likes it when deployments go wrong. But we’re human, so mistakes happen, especially when you ship code daily. The question is: what are your options when something does go wrong? Can we easily get back to a version that we know works? PAS has always had an answer for this scenario. But with weighted routing, these techniques become much easier to implement.

Previously, if you wanted to route traffic to two versions of an application, the solution was to map the same route to each. If you wanted to split traffic unevenly, say 90% to version 1 and 10% to version 2, you would need to manually adjust the number of app instances: for every instance of version 2, you would need nine instances of version 1 running. In some cases, that meant running more instances than you actually needed. In PAS 2.5, each route mapping has its own weight. When you map a route to an application, the platform tells it how much traffic it should receive relative to the other apps mapped to that route. This is possible largely thanks to Istio and the real-time flexibility it provides over routing traffic. For now, the Istio-backed routing tier runs alongside the current routing tier and is offered as a separate domain inside of PAS. Over time, though, the two will merge and become one.
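For reference, mapping the same route to two applications is just two map-route calls. Here's a minimal sketch; the app names and domain are placeholders, and in PAS 2.5 the route would live on the Istio-backed tier’s separate domain:

cf map-route my-app-1 example.com --hostname my-app

cf map-route my-app-2 example.com --hostname my-app

The new part in PAS 2.5 is that the weight on each of those mappings can then be adjusted independently.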

Consider a more concrete example. Let’s say we want to deploy a new version of our backend service, version 2, with some new features. This service handles a large number of requests, and as such, we want to make sure it remains online and performant during the deployment. As usual, we'll deploy it independently of version 1, the current version running in production. (This ensures that v2 is up and running as expected.) Before, we would either cut over to version 2 completely, all at once, or map our production route to both versions through several manual steps. Now, with weighted routes, we have a new option: we can gradually shift traffic from version 1 to version 2 by adjusting the weights on their route mappings.

We can add some numbers to this scenario to illustrate the concept. We’ve already deployed two versions of our application, my-app-1 and my-app-2, and mapped both to the same route, my-app.example.com. We'll start by giving the mapping for version 1 a weight of 9 and version 2 a weight of 1. Now, 90% of the traffic to this service goes to version 1, and 10% to version 2. We can specify this weighting using the PAS API directly.

cf curl /v3/route_mappings/$(cf curl /v3/apps/$(cf app my-app-1 --guid)/route_mappings | jq -r '.resources[1].guid') -X PATCH -d '{"weight": 9}'

There are a few things going on with this command. Inline with the cf curl call, we’re pulling the GUID of the route mapping for v1 of our application. Specifically, we’re interested in the route mapping itself, the relationship between application and route. We then update the configuration of that mapping, setting its “weight” value to 9. Now we can observe, measure, and collect metrics from version 2, ensuring it's behaving as expected, without sending too much traffic to the new version just yet.
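If the nested one-liner is hard to parse, the same call can be broken into two steps. This sketch simply captures the mapping GUID in a shell variable first (same endpoints and placeholder app name as above):

MAPPING_GUID=$(cf curl /v3/apps/$(cf app my-app-1 --guid)/route_mappings | jq -r '.resources[1].guid')

cf curl /v3/route_mappings/$MAPPING_GUID -X PATCH -d '{"weight": 9}'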

We start by sending a small amount of traffic to version two.

If everything goes well, we can proceed to give both routes a weight of 5 and split traffic evenly, observing how our new version performs under greater load. From there, we can continue to shift the weights to give version 2 the majority of traffic. If there’s a hiccup or something doesn’t go as expected, we can swap the weights back to instantly revert the change.
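For instance, the even split is the same PATCH with matching weights (same placeholder app names as above):

cf curl /v3/route_mappings/$(cf curl /v3/apps/$(cf app my-app-1 --guid)/route_mappings | jq -r '.resources[1].guid') -X PATCH -d '{"weight": 5}'

cf curl /v3/route_mappings/$(cf curl /v3/apps/$(cf app my-app-2 --guid)/route_mappings | jq -r '.resources[1].guid') -X PATCH -d '{"weight": 5}'

And handing version 2 the majority of the traffic is just a matter of swapping the original weights: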

cf curl /v3/route_mappings/$(cf curl /v3/apps/$(cf app my-app-2 --guid)/route_mappings | jq -r '.resources[1].guid') -X PATCH -d '{"weight": 9}'

cf curl /v3/route_mappings/$(cf curl /v3/apps/$(cf app my-app-1 --guid)/route_mappings | jq -r '.resources[1].guid') -X PATCH -d '{"weight": 1}'

We can gradually shift weights on route mappings, slowly introducing more and more traffic to v2 of our application.

Finally, once we’re happy with how v2 is performing, we can stop v1. Now v2 is handling 100% of the traffic. You can see how this opens up a more natural path to advanced deployment techniques. Blue/green deployments, canary deployments, and A/B testing are all much easier to achieve with this new feature.
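In cf CLI terms, retiring version 1 could look something like the following sketch (same placeholder names as above); unmapping the route before stopping the app keeps requests from hitting a stopped instance:

cf unmap-route my-app-1 example.com --hostname my-app

cf stop my-app-1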

 

More Ports? No Problem. Introducing Multi-Port Support!

Most applications listen on a single port. But what about the case where an application needs to listen on multiple ports? What if we have a management interface or a metrics endpoint that we don't want users reaching at all? That brings us to the other big routing feature of PAS 2.5: multi-port support!

Let's use the most popular case: Spring Boot Actuator. This project gives developers and operators wonderful insight into running Spring applications. From health checks to metrics to thread dumps, it's an awesome way to keep tabs on your application's health. By default, Actuator runs on the same port as your application, with its endpoints prefixed by the path '/actuator'. Features can be turned on and off as needed, but even then, this is a lot to expose on a public application. Lucky for us, Spring Boot lets us configure Actuator to run on a separate port by adding a single line to our application’s configuration.

management.server.port = 8081
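With that one property in place (this is the standard Spring Boot 2.x setting), Actuator answers on port 8081 while the application keeps serving on its normal port. Running locally, you could sanity-check it with something like:

curl http://localhost:8081/actuator/health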

We can deploy our application as normal; it will start as expected, with PAS providing it a port to listen on. Our app will serve traffic as usual, using the route we gave it when we pushed it. So far, so good!

From here, we need to tell PAS that our application has additional ports that it’s listening on. We can use a cf curl command.

cf curl /v2/apps/$(cf app my-app --guid) -X PUT -d '{"ports": [8080, 8081]}'

NOTE: We’ve embedded a cf CLI command inside our cf curl command by including cf app my-app --guid. But all this is doing is returning the unique identifier for our application named “my-app”, which we need to make the API call.
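If you want to confirm the change took, you can read the same object back; a quick check (the jq filter is optional and just trims the output down to the ports array):

cf curl /v2/apps/$(cf app my-app --guid) | jq '.entity.ports'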

Finally, we have to create a second route for the additional port and then bind it to the app. We’ll need to specify which port it should send traffic to.

cf create-route dev apps.example.com --hostname actuator

cf curl /v2/route_mappings -X POST -d "{\"app_guid\": \"$(cf app my-app --guid)\", \"route_guid\": \"$(cf curl '/v2/routes?q=host:actuator' | jq -r '.resources[0].metadata.guid')\", \"app_port\": 8081}"

Again, we’ve embedded quite a few subcommands in the cf curl command to make it a one-liner. Let’s see what it looks like once the various identifiers have been resolved:

cf curl /v2/route_mappings -X POST -d '{"app_guid": "00000-00000-00000-00000-00000", "route_guid": "11111-11111-11111-11111-11111", "app_port": 8081}'

That’s easier to read. We looked up the GUIDs for our application and for our newly created route, then created a new mapping between the two, specifying the port. Despite being served by the same application, regular traffic now arrives at my-app.apps.example.com while Actuator traffic arrives at actuator.apps.example.com. This setup gives us much finer control over how each kind of traffic is handled.

Our application serving traffic on two ports.
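To see the split for yourself, you might hit each hostname; these are the example routes from above, so swap in your own domain:

curl https://my-app.apps.example.com/

curl https://actuator.apps.example.com/actuator/health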

 

Try Pivotal Cloud Foundry 2.5 Now!

As these features mature, we’ll see them integrated into the cf CLI. More important, though, is how the new Cloud Foundry routing tier continues to make your life easier with Istio and Envoy. There’s plenty more already planned to further this integration. Follow @Pivotal and keep an eye out for these features in future releases!

Of course, there’s a lot more packed into the 2.5 release of PAS and Pivotal Cloud Foundry. Make sure to check out the 2.5 release notes for a full list of features. Even better, why not kick the tires a bit? Sign up for a free Pivotal Web Services trial and try PCF yourself!