
Merlin Glynn, Senior Product Line Manager, Cloud-Native Apps, VMware

 

We are excited to announce that Pivotal Container Service (PKS) 1.2 will be generally available at the end of September. Introducing new capabilities for today’s multi-cloud enterprises, PKS 1.2 will deliver features targeted at improving multi-cloud operations, networking and security, management and operations, and developer productivity. These capabilities will be delivered in a turnkey solution ready to help customers take their Kubernetes workloads into production.

 

Multi-Cloud Operations

 

Amazon EC2 Support

Kubernetes enables developers to deploy and manage containerized apps at scale on any infrastructure, on-premises or in the cloud. According to the Cloud Native Computing Foundation, Amazon EC2 is the leading container deployment environment*. With this latest release, PKS will enable a consistent operating model and Kubernetes user experience across on-premises data centers and Amazon EC2.

 

 “Running containers in the AWS Cloud allows customers to build robust, scalable applications and services by leveraging the benefits of the AWS Cloud such as elasticity, availability, security, and economies of scale,” said Deepak Singh, Director Compute Services, AWS. “With PKS support for AWS, customers can deploy a consistent infrastructure for their containerized workloads across on-premises VMware environments and the AWS cloud, and benefit from access to native AWS cloud services and features. We look forward to collaborating with VMware to help meet the vast needs of both of our customers.”

 

A key benefit of PKS is self-service provisioning of Kubernetes across multiple supported IaaS providers through a common user interface. PKS also optimizes each Kubernetes cluster for the IaaS it is provisioned into. This makes workloads and operational tasks portable across any supported cloud while giving enterprises operational efficiencies in provisioning and day-2 operations.

 

PKS on AWS EC2

 

Networking and Security

 

Integration with NSX-T for Enhanced Scale and Security

PKS 1.2 integrates with NSX-T for production-grade container networking and security. A new capability introduced in NSX-T 2.2 allows workload SSL termination to be performed by NSX-T Load Balancing services. PKS will leverage this capability to provide better security and workload protection. The NSX-T integration uses native Kubernetes objects, such as Secrets and Ingress resources, to manage SSL termination, securing requests to workloads deployed on Kubernetes in a cloud-native way. PKS with NSX-T 2.2 also significantly increases the scalability of the platform in terms of the number of Kubernetes clusters, services exposed via Kubernetes LoadBalancer and Ingress resources, and network traffic performance. The integration will also provide an automated installation and a simplified user experience for implementing Kubernetes with NSX-T.
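To make the mechanism concrete: with an Ingress-based integration like this, TLS termination for a workload is typically configured with a standard Kubernetes TLS Secret that an Ingress resource references. The sketch below is illustrative only and is not PKS- or NSX-T-specific configuration; the secret, host, and Service names are placeholders.

```yaml
# Illustrative sketch: a standard Kubernetes TLS Secret plus an Ingress that references it.
apiVersion: v1
kind: Secret
metadata:
  name: shop-tls              # hypothetical Secret name
  namespace: default
type: kubernetes.io/tls
data:
  tls.crt: ""                 # base64-encoded certificate goes here
  tls.key: ""                 # base64-encoded private key goes here
---
apiVersion: extensions/v1beta1   # the Ingress API group used by Kubernetes 1.11
kind: Ingress
metadata:
  name: shop-ingress          # hypothetical Ingress name
  namespace: default
spec:
  tls:
  - hosts:
    - shop.example.com        # placeholder hostname
    secretName: shop-tls      # the load balancer terminates TLS using this Secret
  rules:
  - host: shop.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: shop-frontend   # hypothetical backend Service
          servicePort: 8080
```

With the NSX-T integration described above, declaring TLS this way is what lets the NSX-T Load Balancing services terminate SSL on behalf of the workload rather than inside the Pods.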

 

NSX-T 2.2 and Kubernetes SSL Integration

 

Network Profiles for Per-Cluster Customization and Choice of Load Balancer Size

As enterprises begin to scale out their workloads on Kubernetes, it’s critical for their networking solution to offer security and scalability across many Kubernetes clusters. NSX-T is an integral part of PKS, providing integration with Kubernetes security policy controls and the scalability required for large-scale workloads. PKS 1.2 introduces a new feature called Network Profiles, which gives operators the flexibility to choose different sizes of NSX-T Load Balancing services to better meet the security and performance characteristics required for each cluster. This flexibility allows users to optimize the Load Balancing services for Kubernetes deployments of various sizes.
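As a rough illustration of the idea, a network profile could be a small definition that selects a load balancer size and is then referenced when a cluster is created. The schema, values, and CLI usage below are assumptions for illustration; consult the PKS 1.2 documentation for the authoritative format.

```json
{
  "name": "lb-medium",
  "description": "Medium NSX-T load balancer for mid-size clusters",
  "parameters": {
    "lb_size": "medium"
  }
}
```

The profile would then be supplied at cluster creation time (for example, through a network-profile option on the PKS CLI, which is an assumption here) so that each cluster gets a load balancer sized for its expected traffic.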

 

Native Kubernetes RBAC with Enterprise LDAP and AD

PKS 1.2 will introduce a centralized authentication mechanism that allows customers to assign Kubernetes role-based access control (RBAC) bindings to LDAP users and groups. Kubernetes RBAC enables fine-grained control when a cluster serves many teams. This capability will let PKS users support many single-tenant clusters, fewer multi-tenant clusters, or a mix of both, with a unified and auditable authentication framework (User Account and Authentication, or UAA).
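For instance, once LDAP users and groups are surfaced to the cluster through UAA, granting a team read-only access to a namespace is a standard RBAC binding. The group and namespace names below are hypothetical; "view" is a built-in Kubernetes ClusterRole.

```yaml
# Grants the LDAP group "payments-devs" read-only access to the "payments" namespace.
# Group and namespace names are hypothetical examples.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: payments-devs-view
  namespace: payments
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view                    # built-in read-only role
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: payments-devs           # LDAP group name as presented through UAA
```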

 

PKS LDAP and Role Binding 

 

Developer Productivity

 

Kubernetes 1.11

With constant compatibility with GKE, PKS allows customers to stay current with the latest stable Kubernetes release as operated by Google Kubernetes Engine. PKS 1.2 will deliver a native open-source experience with Kubernetes 1.11, so workloads and CI/CD processes can benefit from the latest upstream Kubernetes features as well as easy portability across other native Kubernetes platforms. This release of Kubernetes has been validated for enterprise readiness in PKS 1.2 and fully passes the Kubernetes conformance tests defined by the Cloud Native Computing Foundation (CNCF), enabling workload compatibility and portability.

 

Self-Service of Pod/Workload Log Sinks

A key goal of PKS is to deliver self-service to teams consuming Kubernetes and the other services required to run workloads in production. One critical prerequisite for self-service is workload observability and logging. PKS 1.2 will add a new logging feature alongside its existing Syslog and vRealize Log Insight integrations. The Sink Resources feature lets a cluster admin or development team specify a syslog endpoint to which all workload (Pod) stdout and stderr output is shipped, making it fast and efficient to get the right level of observability to the right teams for Kubernetes workloads. Sink Resources are implemented as Kubernetes Custom Resource Definitions (CRDs) that can be governed by Kubernetes RBAC, allowing teams to make individual logging choices at the namespace or cluster level.
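Because sinks are ordinary Kubernetes resources, a team could declare its own log destination with a short manifest along these lines. The apiVersion, kind, and field names shown are illustrative assumptions about the CRD mechanism described above; refer to the PKS 1.2 documentation for the exact schema.

```yaml
# Illustrative sketch of a namespace-scoped log sink resource.
# apiVersion, kind, and field names are assumptions, not the confirmed PKS schema.
apiVersion: apps.pivotal.io/v1beta1
kind: Sink
metadata:
  name: team-a-syslog
  namespace: team-a             # hypothetical team namespace
spec:
  type: syslog
  host: logs.example.com        # syslog endpoint receiving Pod stdout/stderr
  port: 514
```

Because the resource lives in the team’s namespace, ordinary Kubernetes RBAC rules decide who may create or change it, which is what enables the per-namespace logging choices described above.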

 

PKS Logging Sink

 

Management and Operations

 

Highly Available Kubernetes Control Plane

PKS 1.2 will provide a highly available Kubernetes control plane, giving operators the confidence to deploy Kubernetes services and workloads in production. This is accomplished with an optimized BOSH release of etcd, the key-value store that holds Kubernetes state. BOSH keeps the Kubernetes control plane healthy and easily upgrades it when required. Additionally, the tight integration with VMware NSX-T allows dynamic load balancing across multiple Kubernetes master node instances to meet Kubernetes API access and uptime goals. Together, these enhancements help keep the Kubernetes control plane available and ready for production.

 

Comprehensive Kubernetes Solution with Integration with VMware Products

PKS provides a turnkey solution for users to run production workloads on Kubernetes by integrating with VMware NSX-T, Project Harbor, vRealize Log Insight, vRealize Automation, Wavefront by VMware, and other VMware products. The latest vRealize Automation 7.5 release adds integration with PKS for Kubernetes cluster management. By integrating vRealize Automation and PKS, customers can manage Kubernetes clusters from within vRealize Automation, the same interface where they deploy infrastructure.

 

vRealize Automation and PKS Integration

 

To learn more about PKS, please visit our website at https://cloud.vmware.com/pivotal-container-service. You can also try PKS on your own at http://labs.hol.vmware.com/HOL/catalogs/lab/4249. If you are at SpringOne Platform, visit the VMware booth (#6) to see the PKS demos. To read the announcement blog from Pivotal, please visit https://content.pivotal.io/blog/pks-1-2.

 

* CNCF Survey: Use of Cloud Native Technologies in Production Has Grown Over 200%

https://www.cncf.io/blog/2018/08/29/cncf-survey-use-of-cloud-native-technologies-in-production-has-grown-over-200-percent/