posted

Authored by Kit Colbert, CTO, Cloud Platform BU


Another great DockerCon has just wrapped up.  The conference continues to grow, with 5,500 attendees this year.  The conference itself also feels more mature.  Customers are doing more interesting things with containers, the expo is filled with a broader variety of vendors, and there’s just a ton of content.  Yet DockerCon is able to maintain its nerdy-fun hacker ethos.  And, of course, the quality of the food continues to beat out all other conferences!

DockerCon had a simple message this year: containers are going mainstream.  They focused on two areas to back that up: first is the staggering growth and increasing maturity of the Docker open source projects and second is that containers are ready for the enterprise.  But are containers really mainstream?  Let us see…

Starting with the open source Docker project, the hero numbers are striking:

  • 14M Docker hosts
  • 900K Docker apps
  • 12B image pulls 
  • 3,300 project contributors

Open source Docker is used across every industry and has a thriving ecosystem.  Docker Engine is getting important usability improvements, such as multi-stage builds.  Docker Swarm continues to mature, getting more features around security and orchestration.  Certainly many of these data points indicate containers are very much going mainstream!

But in order for containers to be mainstream, they need to be adopted by enterprises.  A key element of enterprise adoption is the requirement that containers support all applications, not just “cloud-native apps,” that is, modern, distributed, and usually greenfield applications.  While companies are building new cloud-native apps, the reality is that businesses have tons of existing traditional, monolithically architected applications.  By containerizing those applications, businesses can reap some of the benefits of containers, such as better CI/CD tooling and automation and greater runtime efficiency, without the expensive and resource-intensive task of rewriting, rearchitecting, or in general changing any code in the application.  Certainly this is mainstream activity!

Indeed, many customers, including Visa, MetLife, Cornell, Northern Trust, Microsoft IT, Société Générale, and PayPal, presented both on the keynote stage and in breakouts about how they were containerizing traditional applications.  There were discussions about which applications make good candidates for containerization, intros to Image2Docker, and best practices for making the transition.

While “day 1” containerization and provisioning activities were covered, I didn’t see much about “day 2” activities, specifically how folks actually operated those containerized applications in production.  For instance, how does one deal with availability issues for a monolithic app?  E.g. what if the host has to go down for maintenance?  Can the app be restarted on another host?  How long does the app take to restart?  What if the app takes dozens of minutes or hours to get back up to full speed?  What about performance monitoring?  Or logging?  Or compliance assessments?  What about backup?  DR?  These are the questions that didn’t seem to be addressed.

Quite the opposite: in one session, MetLife even explicitly ruled these items out of scope.


To some degree, this is a maturity question.  You’ve got to get the basics of packaging and provisioning solved before you can move on to the meatier topics of day 2 operations.  And the reality is that many of those day 2 operations capabilities aren’t well solved yet in container environments.  One customer I talked to mentioned that he is running a couple of containerized traditional apps in production but quickly admitted that they’re “not the important ones.”  Indeed, he did not have solutions for monitoring, backup, or any of the other items I mentioned above.  Even Northern Trust, which gave a fairly impassioned talk in support of containerizing traditional apps, mentioned at the end of the session that it had containerized only one Tomcat app and a single WebLogic cluster.

This is exactly where we at VMware want to help!  With vSphere Integrated Containers (VIC), we can leverage the high availability features of vSphere as well as its robust ecosystem to solve some of the shortcomings in the container space today.  VIC allows you to run a containerized traditional app in production because it leverages all the production capabilities of vSphere.  This means that businesses don’t need to reinvent the wheel for monitoring, compliance, DR, etc. – they can leverage the solutions they already have for vSphere!  In addition, we’re working closely with Docker, Inc. on many different projects, from containerd to LinuxKit to Docker Enterprise Edition.  So if you haven’t checked out what we’re doing with VIC, you definitely should!

In the end, have containers gone mainstream?  I think we’re oh-so-close, on the precipice.  To date, the industry has lacked mature solutions in the container space for all the operational requirements businesses have for any application they want to run in production.  But those solutions are coming fast, from the improvements in the open source Docker projects to the growing container ecosystem to what we’re doing with VIC.  So if containers aren’t mainstream now, they will be very soon!  What do you think – are we there yet?

You can follow Kit on Twitter: @kitcolbert

posted

Authored by Nathan Ness, Senior Technical Marketing Engineer, Cloud Native Applications.

In this post I will discuss how a developer or infrastructure administrator can use Terraform to automate infrastructure provisioning. I will demonstrate how you can use Terraform with Photon Platform to deploy and scale Docker Datacenter. Terraform is a tool for building, changing, and versioning infrastructure. Terraform uses configuration files to describe the infrastructure you wish to provision.

I have created two configuration files for this deployment. The first one deploys the Docker Datacenter manager (manager.tf), and the second one deploys the workers (worker.tf) and automatically adds them to the Swarm cluster. Photon Platform is used for multi-tenancy and to control how much infrastructure you are able to deploy.


Let’s take a look at manager.tf. The purpose of this file is to deploy a VM and install Docker UCP on top of it. For that, we use the Photon Platform provider to deploy the VM and Terraform to issue the remote execution commands that deploy the Docker Datacenter manager.

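A minimal sketch of what manager.tf can look like. The photon_virtual_machine resource and its attributes (including ip_address) are illustrative stand-ins for the Photon Platform provider’s actual schema, and the variables are assumed to be defined elsewhere:

# manager.tf (sketch -- resource and attribute names are illustrative)
resource "photon_virtual_machine" "ddc_manager" {
  name   = "ddc-manager"
  flavor = "cluster-master-vm"       # assumed flavor name
  image  = "${var.docker_image_id}"  # assumed variable

  # Remote execution: install Docker UCP on the freshly provisioned VM
  provisioner "remote-exec" {
    inline = [
      "docker run --rm -v /var/run/docker.sock:/var/run/docker.sock docker/ucp install --host-address ${self.ip_address} --admin-username ${var.ucp_user} --admin-password ${var.ucp_password}",
    ]

    connection {
      type     = "ssh"
      user     = "root"                 # assumed guest credentials
      password = "${var.vm_password}"
    }
  }
}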

Now that the manager is deployed, we can start deploying Swarm workers and adding them to the cluster. That is where worker.tf comes in.

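A matching sketch for worker.tf, under the same assumptions. The COUNT variable drives how many workers are created, and each one joins the Swarm managed by the manager above; the join token is assumed to be supplied as a variable:

# worker.tf (sketch -- resource and attribute names are illustrative)
variable "COUNT" {
  description = "Number of Swarm worker nodes to deploy"
}

resource "photon_virtual_machine" "ddc_worker" {
  count  = "${var.COUNT}"
  name   = "ddc-worker-${count.index}"
  flavor = "cluster-worker-vm"
  image  = "${var.docker_image_id}"

  # Join the Swarm cluster run by the manager deployed in manager.tf
  provisioner "remote-exec" {
    inline = [
      "docker swarm join --token ${var.swarm_worker_token} ${photon_virtual_machine.ddc_manager.ip_address}:2377",
    ]

    connection {
      type     = "ssh"
      user     = "root"
      password = "${var.vm_password}"
    }
  }
}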

Now that we have the configuration files for the manager and workers, let’s deploy them. In our terminal we issue terraform apply, and it asks for the number of worker nodes you want to deploy. In the Photon Platform UI you can see the available infrastructure resources assigned to your project.

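Because COUNT has no default value, Terraform prompts for it at apply time; the start of the run looks something like this:

$ terraform apply
var.COUNT
  Number of Swarm worker nodes to deploy

  Enter a value: 5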

After you execute terraform apply it will start deploying the infrastructure described in the configuration files. The first thing it deploys is the Docker Datacenter Manager so that workers can join the cluster.


After the manager is deployed, it will spin up the number of workers you specified for the “COUNT” variable.


The last part of a Terraform template is the “outputs” section. I want to know the endpoint for my Docker Datacenter VM, so I have defined that as an output. Terraform also tells you what it added, changed, and removed whenever you issue terraform plan or terraform apply.
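A sketch of such an output block, reusing the illustrative attribute name from above:

output "ddc_manager_endpoint" {
  value = "https://${photon_virtual_machine.ddc_manager.ip_address}"
}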


Success! We have Docker Datacenter up and running with 5 worker nodes. Now you can start deploying your containerized workloads with the Docker API.


Now let’s say we need to scale up the number of worker nodes. Simply run terraform apply and specify the new number of worker nodes you want. Terraform will examine the current state of the infrastructure, deploy 5 more VMs on top of Photon Platform, and add them to the Swarm cluster.


This is a simple example of how you can use “Infrastructure as Code” with Terraform and Photon Platform. You can store your configuration files in version control to track changes and to help you roll back if something goes wrong.

Thank you!

posted

Authored by Wendy Cartee, Sr. Director of Cloud-Native Marketing

We are excited to announce the release of VMware Photon™ Platform 1.2 today!

Photon Platform is a container-optimized cloud infrastructure solution for deploying and operating cloud-native applications and microservices. It offers highly secure, fully integrated virtual compute, networking, and storage to simplify and secure cloud-native applications. The 1.2 release adds enhancements across Kubernetes, compute, networking, security, and storage to deliver enterprise-ready capabilities needed to deploy and operationalize Kubernetes clusters.

What’s New

Support for Kubernetes 1.6

Photon Platform 1.2 now includes support for the latest release of Kubernetes. Announced during KubeCon Berlin, Kubernetes 1.6 enhances scale and automation to deploy multiple workloads to multiple users on a cluster. The key functions introduced were:

  • Dynamic storage provisioning (moved to stable state)
  • Role-based access control (RBAC), now in beta
  • Automation and controlled scheduling enhancements

Simpler Cluster Management

Photon Platform 1.2 simplifies lifecycle management of Kubernetes clusters. Users can customize cluster sizing via flavors and quotas, enabling sizing up and down without tickets. 1.2 also adds the ability for users to upgrade Kubernetes with a few clicks and to choose a desired Kubernetes version when spinning up a cluster. This enables a smooth pipeline from development to staging to production and eases upgrades of Kubernetes clusters.

Static and Dynamic Persistent Volumes

Photon Platform 1.2 is fully integrated with VMware vSAN™ enabling users to leverage VMware’s production-grade virtual storage platform. Users are able to spin up both static and dynamic persistent volumes on the platform, enabling applications running in Kubernetes to maintain state without any additional work for the developer. This support enables high availability (HA) for stateful applications, delivering the resiliency and availability characteristics found in enterprise-class shared storage for cloud-native apps.

Master and Worker Node High Availability

In addition to HA for stateful applications, the 1.2 release also introduces rolling upgrade capabilities for master and worker nodes. Upgrades and downgrades frequently lead to downtime for master and worker nodes. In order to maximize uptime during upgrades and downgrades, we added new automation processes that update the software versions running on Kubernetes master and worker nodes with minimal downtime.

This rolling upgrade process is part of our new multi-master Kubernetes cluster deployment, which leverages a load balancer front-ending the Kubernetes master nodes. This allows for critical Kubernetes components to be upgraded one at a time without impacting users in the process of consuming a Kubernetes cluster.

Pod Networking and Enhancements

With this release, Photon Platform now integrates VMware NSX® with Kubernetes out of the box, providing pod-level networking for Kubernetes clusters. This networking function, currently in beta, gives developers their own segmented, distinct virtual networks, offering data isolation and easing the operationalization of containerized applications.

The deeper NSX integration also enables users to leverage additional enhancements such as embedded DHCP services, overlapping IP addresses across subnets, floating IP addresses, and creation of multiple routers in a project. These new networking features allow developers to be specific about the addressing of their workloads, along with the ability to create CI/CD pipelines that deterministically consume a known address space during repeated tests.

AD/LDAP and Security

Photon Platform 1.2 adds several new security enhancements for enterprise environments.

  • Photon Platform now integrates Lightwave 1.2, VMware’s open source Active Directory and LDAP authentication system, providing in-depth role-based access control.
  • The release also provides OpenID Connect (OIDC) for authentication with Kubernetes through the standards-based OIDC protocol. The net result is that Kubernetes API requests are authenticated via a highly-available and scalable authentication cluster and kept secure for enterprises.
  • Project users can now upload images with more control to ensure other projects don’t inadvertently impact their images. For example, a project user can add controls to prevent other project users from accidentally deleting their images. When uploading images, project users can limit access to those images and control who can use or delete them.

Quota Based Dynamic Resource Allocation

This release replaces resource tickets with quotas. Quotas, unlike resource tickets, can be resized by a system administrator, which simplifies resource management in the cluster. With 1.2, a tenant can now increase or decrease resource allocation with a quota setting. This provides faster resource allocation and a more accurate picture of the resources consumed and available in the cluster.

SDK and API

Photon Platform now publishes an OpenAPI 2.0 specification for our APIs that is simpler to use and operationalize. New APIs were also added to manage resource quotas on a per-tenant and per-project basis. We are also announcing the availability of a Go SDK that reflects these new API changes, which simplifies the development of plugins and drivers that enable Photon Platform to be consumed “as code” by DevOps and SRE teams.

Availability

Photon Platform 1.2 is available today. Please contact your VMware representative to find out more.

Photon Platform Product Information

For more information about VMware Photon Platform, please check out the Photon Platform product page on the VMware website at http://www.vmware.com/products/photon-platform.html and follow us on @cloudnativeapps.

DockerCon 2017

VMware is a gold sponsor of DockerCon 2017 and we will highlight Photon Platform, VMware vSphere® Integrated Containers™, NSX, vSAN, and many other new demos in our booth. Come by to visit us, meet our developers, and pick up cool giveaways.

VMware, Photon, vSAN, NSX, vSphere, and vSphere Integrated Containers are registered trademarks or trademarks of VMware, Inc. in the United States and other jurisdictions.

posted

Authored by Karthik Narayan, Senior Product Manager, Cloud Native Applications

Today, we are pleased to announce the release of vSphere® Integrated Containers™ 1.1!

vSphere Integrated Containers was released as part of vSphere 6.5, and the new 1.1 release delivers significant user experience improvements, including a new user interface (UI).

What is vSphere Integrated Containers?

vSphere Integrated Containers is designed to solve many of the challenges associated with developing and running containerized applications in enterprise environments. It directly uses the clustering, dynamic scheduling, and virtualized infrastructure of vSphere to create Virtual Container Hosts – providing significant security and operational benefits as compared to standard container hosts.

With vSphere Integrated Containers, developers can use the Docker Client and API to quickly and easily develop and run containerized applications on vSphere while VI admins can benefit from the security, visibility, and operational efficiency normally associated with VMs. vSphere Integrated Containers allows containerized applications to run alongside VM-based applications, leveraging the same resources and tooling. Minimally requiring just vSphere to begin running containers, vSphere Integrated Containers can also leverage the advanced functionality of VMware NSX® for container networking and security as well as VMware vSAN™ to extend its persistent storage capabilities to containers.
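For example, once a VI admin hands a developer the address of a Virtual Container Host, the developer can target it with an ordinary Docker client; the address and port below are illustrative:

docker -H vch01.example.com:2376 --tls info
docker -H vch01.example.com:2376 --tls run -d -p 8080:80 nginx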

vSphere Integrated Containers Architecture

What’s New

Unified UI for Developers and DevOps

The primary users of the vSphere Integrated Containers management portal and registry are developers, cloud admins, and DevOps team members. To improve their user experience, we have unified the user interfaces of both components. Designed using VMware’s open source design system, Project Clarity, the unified UI gives customers access to advanced functionality through a more efficient and intuitive experience.

The management portal provides automated deployment and lifecycle management of containers along with enterprise grade security and identity management. It includes the following key attributes:

  • Application and container lifecycle management – Provision, monitor and manage applications that comprise one or more container images.
  • Container infrastructure management – Cloud administrators and DevOps teams can monitor and manage the infrastructure including compute resources, networks and volumes within the bounds defined by the vSphere administrator.
  • Efficient multi-container template management – Define, build and manage multi-container application templates to stand up complex applications quickly.
  • Live state updates – Get live information on the performance and resource consumption of your applications.

The management portal is also available as a component of VMware’s industry-leading vRealize® Suite Cloud Management Platform, providing seamless management, orchestration, and operations for both traditional and modern application environments.

Unified Management and Registry Portal

The registry stores and distributes Docker images behind the company’s firewall. It extends the open source Docker Distribution by adding the functionalities usually required by an enterprise, such as security, identity and management. In addition, the registry includes the following key attributes:

  • Role-based access control (RBAC) – Users and Docker repositories are organized via “projects”. A user can have different permissions for images under a namespace.
  • Image replication – Images can be replicated (synchronized) between multiple registry instances for load balancing, high availability, hybrid and multi-cloud scenarios.
  • Active Directory/Lightweight Directory Access Protocol (AD/LDAP) – Integrates with existing enterprise AD/LDAP for user authentication and management.
  • Auditing – All the operations to the repositories are tracked to assist with auditing.
Service Registry

Updated Installer

The 1.1 release provides an updated installer that packages all the components of vSphere Integrated Containers into a single OVA. This allows vSphere administrators to easily deploy, maintain, and upgrade all aspects of the deployment. The new installer also provides a simple upgrade path for customers who deployed vSphere Integrated Containers 1.0. In addition to the management portal and the registry, a file server hosts the vic-machine binary and the vSphere plugins.

OVA Installer

vSphere 6.5 HTML5 Integration

With the 1.1 release, the vSphere Integrated Containers UI plugin works with the HTML5-based vSphere UI. Once installed, the vSphere UI features a section dedicated to vSphere Integrated Containers. In addition, VI administrators will find two new HTML5 portlets: one that displays information about the Virtual Container Host and another that displays information about the Container-VM.

HTML5 User Interface

Demo

For a demo of vSphere Integrated Containers, please click here.

Availability

vSphere Integrated Containers is available with vSphere 6.0 and 6.5, Enterprise Plus edition. You can download it on myvmware.com. Please contact your VMware representative if you would like to schedule a technical deep dive session.

Product Information

For more information about vSphere Integrated Containers, please check out the vSphere product page on the VMware website and follow us on @cloudnativeapps.

DockerCon 2017

VMware is a gold sponsor of DockerCon 2017 and we will highlight vSphere Integrated Containers, NSX, vSAN, VMware Photon™ Platform, and many other container solutions in our booth. Come by booth G9 to visit us, meet our developers, and pick up cool giveaways. We will also present a session on self-service provisioning of Docker Datacenter on vSphere. Come see us!

VMware, vSphere, vSphere Integrated Containers, NSX, vSAN, vRealize, and Photon are registered trademarks or trademarks of VMware, Inc. in the United States and other jurisdictions.

posted

Authored by Debra Robertson, Product Marketing Manager, Cloud Native Applications

DockerCon 2017 is a three-day, Docker-centric conference organized by Docker, Inc. that takes place from April 17-20, 2017 in Austin, Texas. As a gold sponsor of the event, VMware will be there to support the Docker community and participate in sessions, share demos, and promote the event.

If you’re heading to DockerCon 2017 next week in Austin, connect with us to learn how VMware’s Cloud Native Apps portfolio and open source technologies enable developers and IT from dev-test to production. VMware subject matter experts will be onsite to walk you through demos and use cases for securely developing, deploying and managing container-based applications.

Attend the Self-Service Provisioning of Docker Datacenter on VMware vSphere Session

Join VMware in a session discussing how new self-service provisioning capabilities in vSphere will allow developers and cloud admins to deploy and operate Docker Datacenter in a ticketless manner. Software Engineers Ivan Porto Carrero and Benjamin Corrie will give a sneak preview of current development work aimed at giving developers access to container frameworks on demand – while at the same time leveraging the advanced performance, availability, and security capabilities of vSphere.

WHEN: Tuesday, Apr 18, 4:15 PM – 4:35 PM
WHAT: Self-Service Provisioning of Docker Datacenter on VMware vSphere
WHERE: Ballroom C

This session will highlight how features like DRS, SDRS, HA, and NSX micro-segmentation can be used to make your container deployments more highly available, secure, performant and maintainable.

Join the Conversation at Booth G9

While at DockerCon, you can connect with VMware’s cloud-native team, and experience our solutions through our many booth demos:

  • vSphere Integrated Containers: Deploy enterprise-grade container infrastructure designed specifically for running traditional apps in containers, alongside VM-based workloads on vSphere.
  • Docker Datacenter on vSphere: Rapid, self-service provisioning of Docker Datacenter on vSphere. Empower developers and cloud admins to deploy secure, highly available and performant container frameworks on demand.
  • NSX for Docker Networking: Deploys micro-segmentation to secure and network Docker containers by leveraging advanced NSX CNM/libnetwork capabilities.
  • vSphere Docker Volume Services: Enables high availability for stateful apps with Docker-certified vSphere Docker Volume Storage.
  • Project Admiral – Container Management Platform: Operationalizes Docker with enterprise container management services including full life cycle management.
  • Project Harbor – Enterprise Service Registry: Project Harbor provides an open source, secure, private container registry for enterprises. Features LDAP/AD integration, policy-based replication, and advanced audit and logging functionality.
  • Photon Platform and Docker Swarm: Provisions containers on Photon Platform through Docker Swarm with easy integration and compatibility.

Pick Up Your Access Pass for the Cloud-Native Fiesta

Join the VMware Cloud-Native team for an exclusive rooftop party with tacos and margaritas during DockerCon in Austin on Wednesday, April 19th at 5:30 PM. At this event, you will have the opportunity to kick up your heels and network with other container community members. Stop by the VMware booth #G9 to pick up your access pass; space is limited.


Navigate DockerCon Like a Pro

Last year the lines were extremely long when the floor opened, so check in early if you can!

  • Registration: Bring your ID – you’ll need it to check in and for the DockerCon party. Registration is open on Monday, April 17 from 10:00am – 7:30pm.

Additionally, here are some tools provided by DockerCon to help you navigate the show and network with the community:

  • Mobile App: Download the official DockerCon mobile app to stay informed with the most up-to-date news and information.
  • DockerCon Slack: Download DockerCon Slack – this is the conference’s preferred communication channel.
  • Moby Mingle: Log into your account and set up Offers, Requests, and/or Group Chats!

Stay Connected

Stay connected with VMware Cloud Native by following us on Twitter at @cloudnativeapps.
We hope to see you there!

posted

This blog was written by Abrar Shivani, a software engineer on the Cloud Native Applications Storage team.

This blog post provides an overview of vSphere Cloud Provider, which exposes persistent storage for containers orchestrated by Kubernetes on top of vSphere. Using a sample WordPress-MySQL application, it gives a step-by-step guide on how administrators can use vSphere Cloud Provider to make application data highly available.

 

vSphere Cloud Provider

Cloud Provider is a module in Kubernetes that provides an interface for managing nodes, volumes, and network routes. VMware contributes to both the vSphere and Photon cloud providers.


 

Containers launched using Kubernetes can be resurrected, yet the data stored by an application running inside a container is lost once the container goes down. With vSphere Cloud Provider, the data can be stored in a vSphere persistent volume, and after the pod is rescheduled, its containers get the data back wherever they land. vSphere Cloud Provider enables access to vSphere-managed storage (vSAN, VMFS, NFS) for applications deployed in Kubernetes by supporting the persistent volume and storageclass primitives. It interacts with vCenter to support operations such as creating and deleting volumes and attaching and detaching volumes to application pods and nodes. vSphere Cloud Provider creates persistent volumes backed by VMDKs and mounts them on the node where the pod is scheduled, making them available for the pod to use. Later, when the pod fails and is rescheduled, vSphere Cloud Provider automatically detaches the volume from the old node and attaches it to the node where the new pod is scheduled. The volume is mounted at the same location as before, and the pod gets its data back. Thus, storage failover is completely transparent to Kubernetes pods.

 

Let’s briefly go over the storage primitives in Kubernetes that will help us understand this blog much better.

  • StorageClass – Describes custom parameters that are passed to vSphere Cloud Provider when creating a vmdk (example: diskformat).
  • PersistentVolume – A Kubernetes API object associated with a volume. It is created automatically if the volume is provisioned dynamically.
  • PersistentVolumeClaim – Describes the user’s requirements for storage (example: volume size).
  • Service – An abstraction that defines a set of pods and a policy for accessing them.
  • Deployment – Makes sure that a pod is running and provides declarative updates for pods and replica sets.

Deploying WordPress-MySQL Application with local storage

Let’s deploy the WordPress application using kubectl. For this demo, you will need a Kubernetes cluster configured with vSphere Cloud Provider to create and access vSphere volumes. Such a cluster can be launched with Kubernetes-Anywhere.

The proposed deployment for the application contains two pods, each with one container: one for MySQL and the other for WordPress.


Deploy MySQL

# mysql-deployment.yaml
1.  apiVersion: v1
2.  kind: Service
3.  metadata:
4.    name: wordpress-mysql
5.    labels:
6.      app: wordpress
7.  spec:
8.    ports:
9.    - port: 3306
10.   selector:
11.     app: wordpress
12.     tier: mysql
13.   clusterIP: None
14. ---
15. apiVersion: extensions/v1beta1
16. kind: Deployment
17. metadata:
18.   name: wordpress-mysql
19.   labels:
20.     app: wordpress
21. spec:
22.   strategy:
23.     type: Recreate
24.   template:
25.     metadata:
26.       labels:
27.         app: wordpress
28.         tier: mysql
29.     spec:
30.       containers:
31.       - image: mysql:5.6
32.         name: mysql
33.         env:
34.         - name: MYSQL_ROOT_PASSWORD
35.           value: mysqlpassword
36.         ports:
37.         - containerPort: 3306
38.           name: mysql

Let’s go over mysql-deployment.yaml. Lines 1 – 13 describe a Kubernetes Service object. This Service selects pods with the labels `app: wordpress` and `tier: mysql` (lines 10 – 12), and network requests are forwarded to one of those pods on port 3306 (line 9). Lines 15 – 38 describe a Kubernetes Deployment object. This Deployment creates a pod with a single container running the `mysql:5.6` image (line 31). The MySQL credentials are passed to the container as environment variables, and the MySQL container exposes port 3306, where it accepts network requests.

 

Let’s deploy the MySQL pod in Kubernetes. Run the following command to launch it.

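Assuming the manifest is saved as mysql-deployment.yaml:

kubectl create -f mysql-deployment.yaml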

Once we execute the above command, let’s verify that the MySQL pod, deployment, and service are up.

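One way to check all three at once:

kubectl get pods,deployments,services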

Now that we have the MySQL container running, we will deploy WordPress.

Deploying WordPress

We will use the following yaml for deploying WordPress.

# wordpress-deployment.yaml

1.  apiVersion: v1
2.  kind: Service
3.  metadata:
4.    name: wordpress
5.    labels:
6.      app: wordpress
7.  spec:
8.    ports:
9.    - port: 80
10.     nodePort: 30080
11.   selector:
12.     app: wordpress
13.     tier: frontend
14.   type: NodePort
15. ---
16. apiVersion: extensions/v1beta1
17. kind: Deployment
18. metadata:
19.   name: wordpress
20.   labels:
21.     app: wordpress
22. spec:
23.   strategy:
24.     type: Recreate
25.   template:
26.     metadata:
27.       labels:
28.         app: wordpress
29.         tier: frontend
30.     spec:
31.       containers:
32.       - image: wordpress:4.6.1-apache
33.         name: wordpress
34.         env:
35.         - name: WORDPRESS_DB_HOST
36.           value: wordpress-mysql
37.         - name: WORDPRESS_DB_PASSWORD
38.           value: mysqlpassword
39.         ports:
40.         - containerPort: 80
41.           name: wordpress

Let’s go over wordpress-deployment.yaml. Lines 1 – 14 describe a Kubernetes Service object. This Service selects pods with the labels `app: wordpress` and `tier: frontend` (lines 11 – 13). Incoming network requests on port 30080 of any node in the Kubernetes cluster are forwarded to one of those pods on port 80 (lines 9 – 10). Lines 16 – 41 describe a Kubernetes Deployment object. This Deployment creates a pod with a single container running the `wordpress:4.6.1-apache` image (line 32). The MySQL host and credentials are passed to the container as environment variables, and the WordPress container exposes port 80, where it accepts network requests.

 

Let’s deploy the WordPress pod in Kubernetes. Run the following command to launch it.

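Assuming the manifest is saved as wordpress-deployment.yaml:

kubectl create -f wordpress-deployment.yaml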

Once we execute the above command, let’s verify that the WordPress pod is up and running.

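For example:

kubectl get pods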

Let’s find the IP address and port we can use to access WordPress.

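The port is the service’s NodePort (30080 in the yaml above); the IP is the address of any node, which kubectl can show:

kubectl get services wordpress
kubectl get nodes -o wide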

Visit your brand new WordPress blog at http://10.160.241.61:30080

You’ll see the familiar WordPress start page.


Select your language and click Continue to configure your website.

 

Now that we have WordPress up, let’s see whether the application data is accessible when the MySQL pod goes down and is rescheduled. Let’s try to kill the MySQL pod.

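One way to do this is to delete the pod by the labels from the manifest (the generated pod name will differ per cluster):

kubectl delete pod -l app=wordpress,tier=mysql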

Another MySQL pod will be scheduled on an available node.

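Watching the pods shows the replacement being created:

kubectl get pods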

Visit the same WordPress URL now, and you will find that WordPress is unable to establish a connection to the database.


Before we proceed to the next section to see how to persist data in vmdks using vSphere Cloud Provider in Kubernetes, let’s clean up our setup by destroying all the Kubernetes objects we created.

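Deleting with the same manifests removes the services, deployments, and pods for both tiers:

kubectl delete -f mysql-deployment.yaml -f wordpress-deployment.yaml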

Deploying WordPress-MySQL Application with vSphere Persistent Storage

 

In this section, we will look at how we can persist data using vSphere Cloud Provider. First, we will define a storage class that uses the vsphere-volume provisioner. Later, we will use this storage class to claim volumes using a persistent volume claim. Once the claim is bound, we will use the volume inside pods to store data.

 

You will need Kubernetes configured with vSphere Cloud Provider to create and access vSphere volumes. One way to set up a Kubernetes cluster with the cloud provider configured is Kubernetes-Anywhere.

 

First, we need to define the disk format for the vmdks that will be created to persist the MySQL and WordPress state. We can do this by creating a storage class. Let’s create the storage class using ‘vsphere-storage-class.yaml’.

# vsphere-storage-class.yaml

1.  kind: StorageClass
2.  apiVersion: storage.k8s.io/v1beta1
3.  metadata:
4.    name: fast
5.  provisioner: kubernetes.io/vsphere-volume
6.  parameters:
7.    diskformat: zeroedthick

This yaml specifies that we will be using the vsphere-volume provisioner (line 5) to provision disks with ‘zeroedthick’ as the disk format (line 7).

 

Run the following command to create the storage class.

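Using the file name from the yaml header above:

kubectl create -f vsphere-storage-class.yaml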

Let’s validate that the storage class was created.

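For example:

kubectl get storageclass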

Deploy MySQL

Let’s deploy MySQL using mysql-deployment-vsphere.yaml

# mysql-deployment-vsphere.yaml

1.  apiVersion: v1
2.  kind: Service
3.  metadata:
4.    name: wordpress-mysql
5.    labels:
6.      app: wordpress
7.  spec:
8.    ports:
9.    - port: 3306
10.   selector:
11.     app: wordpress
12.     tier: mysql
13.   clusterIP: None
14. ---
15. apiVersion: v1
16. kind: PersistentVolumeClaim
17. metadata:
18.   name: mysql-pv-claim
19.   annotations:
20.     volume.beta.kubernetes.io/storage-class: fast
21.   labels:
22.     app: wordpress
23. spec:
24.   accessModes:
25.   - ReadWriteOnce
26.   resources:
27.     requests:
28.       storage: 20Gi
29. ---
30. apiVersion: extensions/v1beta1
31. kind: Deployment
32. metadata:
33.   name: wordpress-mysql
34.   labels:
35.     app: wordpress
36. spec:
37.   strategy:
38.     type: Recreate
39.   template:
40.     metadata:
41.       labels:
42.         app: wordpress
43.         tier: mysql
44.     spec:
45.       containers:
46.       - image: mysql:5.6
47.         name: mysql
48.         env:
49.         - name: MYSQL_ROOT_PASSWORD
50.           value: mysqlpassword
51.         ports:
52.         - containerPort: 3306
53.           name: mysql
54.         volumeMounts:
55.         - name: mysql-persistent-storage
56.           mountPath: /var/lib/mysql
57.       volumes:
58.       - name: mysql-persistent-storage
59.         persistentVolumeClaim:
60.           claimName: mysql-pv-claim

In the above yaml, we declare the Service (lines 1 – 13), PersistentVolumeClaim (lines 15 – 28), and Deployment (lines 30 – 60) objects of Kubernetes. The Service description is the same as before. The persistent volume claim uses the storage class we just created (line 20) and requests 20Gi for the volume (line 28). Once this claim is created, it provisions a 20Gi vmdk with the zeroedthick disk format. Now, let’s look at the Deployment description. It is the same as before, with additional volume information: we use the volume bound to the claim `mysql-pv-claim` (lines 59 – 60) and mount it at the path ‘/var/lib/mysql’ (line 56). Once the MySQL container is launched, the vSphere persistent volume is attached to the node on which it runs, and MySQL writes its data to the vmdk.

 

Run the following command to deploy MySQL.

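Using the file name from the yaml header above:

kubectl create -f mysql-deployment-vsphere.yaml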

Let’s validate that MySQL is deployed.

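Checking both the pod and the persistent volume claim (the claim should show as Bound):

kubectl get pods
kubectl get pvc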

Deploying WordPress

We will use the following yaml for deploying WordPress.

# wordpress-deployment-vsphere.yaml

1.  apiVersion: v1
2.  kind: Service
3.  metadata:
4.    name: wordpress
5.    labels:
6.      app: wordpress
7.  spec:
8.    ports:
9.    - port: 80
10.     nodePort: 30080
11.   selector:
12.     app: wordpress
13.     tier: frontend
14.   type: NodePort
15. ---
16. apiVersion: v1
17. kind: PersistentVolumeClaim
18. metadata:
19.   name: wp-pv-claim
20.   annotations:
21.     volume.beta.kubernetes.io/storage-class: fast
22.   labels:
23.     app: wordpress
24. spec:
25.   accessModes:
26.   - ReadWriteOnce
27.   resources:
28.     requests:
29.       storage: 20Gi
30. ---
31. apiVersion: extensions/v1beta1
32. kind: Deployment
33. metadata:
34.   name: wordpress
35.   labels:
36.     app: wordpress
37. spec:
38.   strategy:
39.     type: Recreate
40.   template:
41.     metadata:
42.       labels:
43.         app: wordpress
44.         tier: frontend
45.     spec:
46.       containers:
47.       - image: wordpress:4.6.1-apache
48.         name: wordpress
49.         env:
50.         - name: WORDPRESS_DB_HOST
51.           value: wordpress-mysql
52.         - name: WORDPRESS_DB_PASSWORD
53.           value: mysqlpassword
54.         ports:
55.         - containerPort: 80
56.           name: wordpress
57.         volumeMounts:
58.         - name: wordpress-persistent-storage
59.           mountPath: /var/www/html
60.       volumes:
61.       - name: wordpress-persistent-storage
62.         persistentVolumeClaim:
63.           claimName: wp-pv-claim

Similarly, this yaml creates the Service, PersistentVolumeClaim, and Deployment objects in Kubernetes for WordPress.

 

Run the following command to deploy WordPress.

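Using the file name from the yaml header above:

kubectl create -f wordpress-deployment-vsphere.yaml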

Let’s validate that WordPress is deployed.

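For example:

kubectl get pods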

Now that we have WordPress and MySQL up and running, let’s visit our new WordPress blog.

Enter the following commands to get the IP address and port used to access the blog.

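As before, the NodePort is 30080 and the IP is the address of any node:

kubectl get services wordpress
kubectl get nodes -o wide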

Just enter ‘10.160.241.61:30080’ in the browser to visit the WordPress blog.

Once WordPress is configured, let’s take down the WordPress pod.

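Again deleting by the labels from the manifest:

kubectl delete pod -l app=wordpress,tier=frontend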

Visit the same WordPress URL, and you will see that the WordPress app is accessible with the same configuration.

Now let’s take down the MySQL pod.

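Same approach as for WordPress, using the manifest labels:

kubectl delete pod -l app=wordpress,tier=mysql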

Another MySQL pod will be scheduled on an available node.

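Watching the pods shows the replacement being created:

kubectl get pods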

Once the pod is restarted by Kubernetes, it picks up the same volume, and its state is intact.

 

As you can see, we can easily persist the state of containers using vSphere Cloud Provider!

 

We would love to hear your feedback! We will be at DockerCon. Please drop by booth G9 to learn more about what we have to offer.

 

 

posted

Authored by Massimo Re Ferre, Technical Product Manager for Cloud Native Applications

KubeCon 2017 offered plenty of presentations that moved the needle further up the steep learning curve of Kubernetes. Listening to the advanced experiences and the enthusiasm of presenters gives you the sense that Kubernetes is here to stay, and that it will be a key driving force in the future of cloud computing.

The technology is evolving quickly. Its implementation is bringing success to startups and small organizations, as well as to pockets of enterprises. And in the cases where it has been deployed in pockets of enterprises, the teams that own the deployment are starting to seek help from IT to run Kubernetes for them. Multitenancy and security are beginning to become concerns.

Meanwhile, at the expo, the dominant areas of the Kubernetes ecosystem on display were setup, maintenance, networking, and monitoring. There were, in particular, many interesting offerings and solutions in the area of monitoring.

During the keynote, areas of improvement and the newer features of Kubernetes were at the heart of the presentation by Aparna Sinha of Google’s Kubernetes product team. Improvements include support for 5,000 hosts, RBAC, and dynamic storage provisioning. One of the newer features in the scheduler allows for “taints” and “tolerations,” which may be useful to segment specific worker nodes for different namespaces.
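A sketch of how that pairing works: taint a node so that nothing schedules there by default, then add a matching toleration to the pods that are allowed to land on it (the node and key names are illustrative):

kubectl taint nodes node1 dedicated=team-a:NoSchedule

# Pod spec fragment: only pods carrying this toleration can be scheduled on node1
tolerations:
- key: "dedicated"
  operator: "Equal"
  value: "team-a"
  effect: "NoSchedule"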

Etcd version 3 got a mention as playing quite a big role in the scalability enhancements to Kubernetes, but the new version seemed to trigger concern among some participants about how to safely migrate from etcd version 2 to etcd version 3.

Aparna also talked about disks. She suggests leveraging claims to decouple the K8s admin role (infrastructure aware) from the K8s user role (infrastructure agnostic).

Dynamic storage provisioning is available out of the box, and it supports a set of back-end infrastructure (GCE, AWS, Azure, vSphere, Cinder).

For the next version of Kubernetes, Aparna alluded to some network policies being cooked up.

Next, Clayton Coleman of Red Hat talked about K8s security. When he asked how many people set up and consume their own Kubernetes cluster, the vast majority of the audience raised their hands; very few, it seems, are running centralized Kubernetes instances that users access in multitenant mode, an understandable state of affairs given that RBAC has only just made it into the platform.

Clayton went on to mention that security in these “personal” environments isn’t as important as it will be when K8s starts to be deployed and managed by a central organization expressly for users to consume it. At that stage, a clear definition of roles and proper access control will be paramount. As a side note, with 1.6, cluster-up doesn’t enable RBAC by default, but kubeadm does.

On Thursday, Kelsey Hightower talked about cluster federation–that is, federating different K8s clusters. The federation API control plane is a special K8s client that coordinates dealing with multiple clusters.

Many of the breakout sessions were completely full. The containerd session presented by Docker highlighted that containerd was born in 2015 to control and manage runC. Its K8s integration will look like this:

kubelet -> CRI shim -> containerd -> containers

Keep in mind, though, that there is no opinionated networking support, no volumes support, no build support, no logging management support, etc.

Containerd uses gRPC and exposes gRPC APIs. The expectation is that you interact with containerd through the gRPC APIs, typically via a platform. There is a containerd CLI, but it is not expected to be a viable way for a standard user to deal with containerd. In other words, containerd will not have a fully featured, supported CLI. It is, instead, code that is to be used with or integrated into higher-level code, such as Kubernetes or Docker.

gRPC and container metrics are exposed via a Prometheus endpoint. Full Windows support is in the plan but not yet in the repo.

One speaker, Justin Cormack, mentioned that VMware has an implementation that can replace containerd with a different runtime: the vSphere Integrated Containers engine. For more on containerd, see one of my previous blog posts, Docker Containerd Explained in Plain Words (http://www.it20.info/2017/03/docker-containerd-explained-in-plain-words/).

Another interesting breakout session was on cluster operations. Presented by Brandon Philips, the CoreOS CTO, the session covered some best practices to manage Kubernetes clusters. What stood out was the mechanism that Tectonic uses to manage the deployment. Fundamentally, CoreOS deploys Kubernetes components as containers and lets Kubernetes manage those containers (basically letting Kubernetes manage itself). This way Tectonic can take advantage of Kubernetes’s own features, such as keeping the control plane up and running and doing rolling upgrades of the API and scheduler.

Another session covered Helm, a package manager for Kubernetes. Helm Charts are logical units of K8s resources plus variables. The aim of the session was to present new use cases for Helm that aspire to go beyond the mere packaging and interactive setup of a multi-container app.
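For example, installing a packaged application from the public charts repository is a single command (chart name shown for illustration):

helm install stable/wordpress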

All in all, KubeCon exposed a lot of people’s experiences with Kubernetes to help developers and operators learn about the system and its related projects, adapt the system to their needs, and deploy it successfully.

posted

 

Authored by Nathan Ness, Senior Technical Marketing Engineer, Cloud Native Applications

Last week in Berlin, VMware joined the Kubernetes community to support the Cloud Native Computing Foundation and participate in sessions, share demos, and promote the event.  Did you miss our booth demos or are you looking for a refresher? Here’s what we showcased:

Photon Platform + Kubernetes

Kubernetes is a container orchestration platform that provides developers agility, high availability, and scheduling for deploying container workloads. This is accomplished by submitting API calls to a Kubernetes master within a cluster. The Kubernetes master schedules Deployments, Pods, and other objects onto available worker nodes within the cluster. Photon Platform can provide Kubernetes clusters on demand with a single API call. Either the infrastructure “provider” can deploy the cluster and hand the Kubernetes API endpoint off to the “consumer” developer team(s), or the developers can deploy and manage the Kubernetes cluster through Photon Platform in a self-service manner.
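A sketch of that single call using the photon CLI; the exact command and flag names may differ from the shipping CLI:

photon cluster create -n demo-k8s -k KUBERNETES --worker_count 3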


Need to increase the size of the Kubernetes cluster? You shouldn’t have to rely on manual processes to manage the deployment and lifecycle of a Kubernetes cluster.  You can resize the cluster from the Photon Platform UI or, again, from the API.  If you have the capacity available, you can easily increase the number of Kubernetes nodes, and Photon will automatically provision them and add them to the Kubernetes cluster.
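Resizing is similarly a one-liner; again, the command shape is illustrative:

photon cluster resize <cluster-id> 10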


Photon Platform manages your available cloud resources for you with quotas. These quotas can be divided among tenants and assigned to projects for consumption. This ensures that development teams have boundaries for the amount of infrastructure they consume.


Lastly, Photon Platform monitors the health of the cluster. If a worker dies for any reason, Photon Platform will automatically spin up a new worker and keep the worker node count at the deployed number.

VMware NSX + Kubernetes

Last week, the NSX-T with K8s demonstration showed how NSX uses its Container Networking Interface (CNI) integration to provide enterprise networking to containers.  Benefits of using the NSX-T network virtualization and security platform for container networking include automating the creation of network topology as well as enhancing security with per-Pod (group of containers) logical ports and micro-segmentation between Pods.

Interested in learning more about NSX and container networking? Check out this technical blog!

Stay Connected

Stay connected with Cloud Native by following us on Twitter at @cloudnativeapps.