
Pivotal Cloud Cache 1.9 Eases Data Sharing Across Teams, Adds TLS Encryption Across Sites

Caching is a great way to improve the performance of your microservices, and with Pivotal Cloud Cache 1.9, now GA, we offer new capabilities to make your caching layer more accessible and more secure. We’ve also made it easier to get started. Here’s a quick look at the highlights:

  • Service instance sharing. Multiple microservices in different Pivotal Application Service spaces can share the same instance of Cloud Cache.

  • Transport Layer Security (TLS) over wide area networks (WANs). Mutual TLS encrypts only the data replication traffic flowing between sites, rather than all traffic between foundations, which improves data transfer rates over WANs.

  • Spring Initializr support for Apache Geode™. Adding a cache to your app just got easier. Developers can use the familiar Spring Initializr site to bootstrap caching for an application. This workflow will be familiar to Spring developers; the site generates a project that contains all the dependencies so you can get started quickly.

Service Instance Sharing Simplifies Access to Your Cache in Key Scenarios

With this enhancement, multiple microservices in different Pivotal Application Service (PAS) spaces can share the same instance of Cloud Cache. 

Let’s be clear about what we mean by “data sharing”: more than one development team can access the same Cloud Cache cluster directly. This is distinct from sharing data through a messaging layer or an API. The idea of service instance sharing is used throughout Pivotal Platform.

Now, let’s dive into when you’ll want to use this new capability.

Use Cases for Service Instance Sharing

Hold on, you might be saying. Isn’t sharing data through a shared datastore a flagrant violation of a hallmark characteristic of microservices, the isolation of resources? After all, isolation is a best practice; it leads to independent and autonomous services.

You’re right: resource isolation is a solid architectural principle to follow in most cases. That’s why we recommend direct data sharing only in a few narrow, but common, cases.

The direct sharing of data can simplify the architecture in a few key scenarios. These cases warrant an exception to the isolation principle for the sake of simplicity.

Here’s a look at two cases where you’ll appreciate service instance sharing.

Cross-Cutting Data

Customer profile data is a good example. For an e-commerce site, each of the following microservices will need access to the user profile: authentication, order entry, billing, website preferences, and customer support. Another example is inventory data. If we split this data across multiple microservices and multiple Cloud Cache clusters, then lookups will span several services, and developers will need to assemble the results in their code. Sure, there are Domain-Driven Design concepts like Aggregates that can help. But for simple cases, a single shared Cloud Cache cluster can do the job.
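
To make this concrete, here is a minimal sketch using Spring Data for Apache Geode. The entity, region, and field names are illustrative, and the region is assumed to already exist on the shared Cloud Cache cluster; the point is that each microservice maps the same region and queries it directly, with no cross-service API call.

```java
import org.springframework.data.annotation.Id;
import org.springframework.data.gemfire.mapping.annotation.Region;
import org.springframework.data.repository.CrudRepository;

// The same entity definition can live in every microservice that binds to the
// shared Cloud Cache instance; all of them read the one "CustomerProfile" region.
@Region("CustomerProfile")
class CustomerProfile {

    @Id
    String customerId;
    String displayName;
    String preferredLocale;
}

// Authentication, order entry, billing, and customer support can each declare a
// repository like this and look up profiles straight from the shared cache.
interface CustomerProfileRepository extends CrudRepository<CustomerProfile, String> {
}
```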

Transactions That Span Multiple Microservices

Strict isolation can sometimes result in too many transactions that span multiple microservices, forcing needless API calls just to complete a single unit of work. Let’s consider that e-commerce example. A place-order transaction may have to hit the order entry microservice. It may also have to verify the customer’s credit limit in the customer microservice. If we implement the same scenario with a shared cache, the interaction between services is cleaner. You’ll see better performance as well, because the request doesn’t have to step through multiple microservices.
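
Here’s a hedged sketch of that shared-cache version of the flow. The region names and payload shapes are assumptions for illustration, and the Region beans are presumed to be configured elsewhere (for example, by Spring Data for Apache Geode); what matters is that the credit check becomes a cache read rather than a call to the customer microservice.

```java
import java.math.BigDecimal;

import org.apache.geode.cache.Region;
import org.springframework.stereotype.Service;

@Service
public class PlaceOrderService {

    // Both regions live on the shared Cloud Cache cluster.
    private final Region<String, BigDecimal> creditLimits;
    private final Region<String, BigDecimal> orders;

    public PlaceOrderService(Region<String, BigDecimal> creditLimits,
                             Region<String, BigDecimal> orders) {
        this.creditLimits = creditLimits;
        this.orders = orders;
    }

    public boolean placeOrder(String orderId, String customerId, BigDecimal orderTotal) {
        // The credit check is a cache read, not an HTTP call to the customer microservice.
        BigDecimal limit = creditLimits.get(customerId);
        if (limit == null || orderTotal.compareTo(limit) > 0) {
            return false;
        }
        // Simplified order payload, just to illustrate the write path.
        orders.put(orderId, orderTotal);
        return true;
    }
}
```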

Yes, Cloud Cache can replicate this common data across clusters. But you’d have to contend with other complexities related to data consistency, plus the extra expense of storing copies of the data. A single shared Cloud Cache cluster is a simple, elegant way to serve this data. An added bonus: you have a single source of truth. And your operations staff only needs to tend to one shared instance of Cloud Cache.

One final point: keep the bigger picture in mind. 

In many cases, a monolithic application is being incrementally decomposed into microservices. My colleague Greg Green describes the process of decomposing a monolith and provides a roadmap for this journey in a recent blog post. Even if the desired end state is the strict isolation of resources, arriving there is a long and winding journey. Various points along the way will still involve a shared data layer across microservices.

A careful examination of what data your microservices need to share, and how often it is updated, is essential to making this decision. Either way, Cloud Cache has you covered!

Sounds good, you say. But what about security? How do you control access responsibly? We’ve added controls to make direct sharing of data safe and secure. Read on!

Share Data Safely

A Cloud Cache cluster can be shared across different spaces in PAS. Only one space is the owner; the additional spaces operate in read-only mode. This avoids update conflicts. (Quick aside: Cloud Cache can handle update conflicts, but the single owner approach simplifies your architecture.)

By default, service instance sharing is turned off. Your service teams have to explicitly opt in.

Service teams may also treat bindings that come from outside spaces differently from those in their “home” space. For example, operators can grant different permissions for the same data in each space.

Speaking of security, it’s time to talk about the second big feature in Cloud Cache 1.9.

Secure, Efficient Data Replication Across Clouds

So it’s a multi-cloud world. A particularly thorny aspect of multi-cloud is how the data is handled.

The good news here is that a caching layer like Cloud Cache can help solve this problem. Cloud Cache doesn’t stop there, though. It handles other important needs, like processing high-volume concurrent workloads with high throughput and delivering high availability. Pivotal Platform helps here too: it abstracts away the differences in the infrastructure APIs. (Want to learn more? Check out this whitepaper.)


Cloud Cache Handles Multi-Site Traffic Securely

Historically, Cloud Cache has encrypted multi-site traffic using the IPSec module. With Cloud Cache v1.9, we’ve added a new kind of encryption: mutual Transport Layer Security (mTLS). Cloud Cache encrypts data in motion, and it leans on PAS to manage the certificates.

With mTLS, both sites (that is, both PAS foundations) have to “trust” each other before data transfer can start. In contrast, IPSec encrypted all the communication between the foundations, so the performance “hit” for encryption was high. Now, with mTLS, only the data flowing between the foundations is encrypted. This boosts transfer rates quite a bit!
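
For a rough sense of what this means at the Apache Geode level, the sketch below shows the kind of SSL properties involved in securing only the gateway (WAN) traffic. It is purely illustrative: Cloud Cache and PAS manage the equivalent settings and certificates for you, so treat the property values and paths here as placeholders rather than something you would configure by hand on the platform.

```java
import java.util.Properties;

public class GatewayTlsProperties {

    // Illustrative only: on Cloud Cache, the platform manages TLS settings and
    // certificates; application developers do not set these by hand.
    public static Properties gatewayMutualTls() {
        Properties props = new Properties();
        // Encrypt only the WAN gateway traffic between sites, not every connection.
        props.setProperty("ssl-enabled-components", "gateway");
        // Require certificates from both sides (mutual TLS).
        props.setProperty("ssl-require-authentication", "true");
        // Placeholder key material; on Cloud Cache this comes from platform-managed certs.
        props.setProperty("ssl-keystore", "/path/to/keystore.jks");
        props.setProperty("ssl-keystore-password", "changeit");
        props.setProperty("ssl-truststore", "/path/to/truststore.jks");
        props.setProperty("ssl-truststore-password", "changeit");
        return props;
    }
}
```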

Let’s finish up with how we’ve made it easier to get up and running with caching in your Spring apps!

Start Fast and Spring Into Action

Cloud Cache is based on Apache Geode™, which gives Cloud Cache users access to a wealth of ecosystem tools. To wit: Geode is part of the Spring Initializr site.

Developers use this site to get new Spring projects started quickly. The site bootstraps an application by generating a project that contains all the dependencies. You can focus on the application logic, rather than all the configuration toil!

 

Spring Initializr brings the power of Spring Boot for Apache Geode to Cloud Cache. You can easily toggle between standalone Geode and Cloud Cache. You don’t have to fiddle with any code or configuration changes either!
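
As a rough sketch of what that looks like, here is a minimal Spring Boot for Apache Geode application; the region, class, and method names are illustrative, and it assumes the spring-geode-starter dependency generated by Spring Initializr is on the classpath. With @EnableClusterAware, the same code runs against a local, standalone Geode setup during development and against a bound Cloud Cache service instance when pushed to the platform.

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.data.gemfire.config.annotation.EnableCachingDefinedRegions;
import org.springframework.geode.config.annotation.EnableClusterAware;
import org.springframework.stereotype.Service;

@SpringBootApplication
@EnableClusterAware          // Switches between local Geode and Cloud Cache automatically.
@EnableCachingDefinedRegions // Creates a "Quotes" region to back the @Cacheable method.
public class CachingApplication {

    public static void main(String[] args) {
        SpringApplication.run(CachingApplication.class, args);
    }

    @Service
    static class QuoteService {

        // Results are cached in the "Quotes" region; repeat calls skip the slow lookup.
        @Cacheable("Quotes")
        public String quoteFor(String symbol) {
            try {
                Thread.sleep(2000); // Simulate an expensive lookup.
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            return symbol + " quoted at " + System.currentTimeMillis();
        }
    }
}
```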

Your Microservices Need a Cache. So Where Will You Start?

Ready to learn more? Find out more about Cloud Cache here, and read the documentation here.

For a more immersive experience, register for SpringOne Platform, happening October 7-10 in Austin! The conference is a great way to explore new use cases and best practices. Whether you’re just getting started or a seasoned hand, there’s something for you!

Our fourth Apache® Geode™ Summit is being held in conjunction with SpringOne Platform. You can register now and receive a $200 discount by using the following code: S1P200_JMirani.

Can’t make it in person? Recordings will be posted in the weeks following the event.