By Ben Corrie, Senior Staff Engineer, VMware

Introduction 

Do you ever find yourself actually installing an operating system these days? If so, stop right now. Ever found yourself manually deploying and configuring a database? Manually setting up and configuring language frameworks for an application? Making the same configuration change to more than one computer manually? We should talk.

Containers have fundamentally changed how we consume software. They’ve changed how we develop and package software. Containers have also changed how we interact with infrastructure.

It has never been easier to deploy a service, an application, a database or a cluster than with today’s container technologies. Consider the following:

  • The immutability and portability of container images means that workloads deploy predictably in multiple locations and scale up and down with ease.
  • Increasingly sophisticated container registries incorporate security capabilities and access control that simply weren’t available previously.
  • The way that containers force developers to think about how state is managed, in terms of persistence, scope, and integrity, is leading to more flexible application architectures.

vSphere Integrated Containers (VIC) has brought all of these benefits directly to vSphere by giving you the ability to manage and consume vSphere infrastructure using an opinionated container consumption model.

What’s even better about VIC is that it goes where many containers fear to tread. Take a large database, for example. Many people would tell you not to run a database in a container. Why? Because you need data integrity, strong runtime isolation, and good network throughput. With a regular Linux container, that would mean deploying a dedicated Linux host, securing and patching it, configuring it with a volume plugin that works with a storage LUN of some kind, ensuring that no other containers run on that host, and making sure to configure the container with host networking. VIC requires no such configuration. It will deploy a MySQL or MSSQL container image out of the box, directly to vSphere, as a strongly isolated virtual machine (VM) with encrypted, replicated persistent storage on vSAN, its own vNIC, and a direct connection to an NSX logical switch.

Deploying workloads to vSphere has never been so easy! And with VIC 1.2, we’ve added many of the security features Cloud Admins have come to expect, such as vulnerability scanning, SSO and image signing.

Jenkins is a great example of a long-running stateful application that plays to VIC’s strengths. In this post, we will share the best practices around deploying Jenkins using VIC.

We will discuss how to maintain the persistent state of Jenkins, how to configure security around image management and access control, how to ensure that it has the resources it needs and how vSphere High Availability (HA) can make the master node highly available.

Few would argue with the contention that containers make software provisioning easier. As such, with VIC, it’s never been easier to provision software to vSphere. This is just as true of Jenkins as of any other application. However, with power comes responsibility, and if we want to deploy Jenkins, there are a few critical factors we need to consider:

  • What are the software artifacts in the Jenkins image we want to deploy?
  • Do we trust the provenance of those artifacts?
  • Do those software artifacts contain known vulnerabilities?
  • What data needs to be persisted and what data should be ephemeral?
  • What resource limits should we put around the container?
  • Do we want the container to be highly available and if so, how?

Security of Software Artifacts

Deploying an image from a public registry without knowing its contents or provenance is risky. This is why VIC comes with its own registry, which in VIC 1.2 is designed specifically to help you address these concerns.

As a Cloud Admin, you can choose to roll your own container images using your own Dockerfiles, or you can start from a public image and further modify it to your needs. As you’ll see from DockerHub (https://hub.docker.com/r/jenkins/jenkins/tags), you can choose from a Debian base or an Alpine base. The Alpine base is half the download size and has fewer than half the packages, which makes it attractive, although there may be compliance considerations involved in the decision.

As an example, let’s start with the jenkins/jenkins:lts-alpine image from DockerHub, push it to a registry and scan it for vulnerabilities.

> docker pull jenkins/jenkins:lts-alpine

> docker tag jenkins/jenkins:lts-alpine vicregistry.myfirm.com/myproject/jenkins:lts-alpine

> docker push vicregistry.myfirm.com/myproject/jenkins:lts-alpine

Once the push completes, the VIC registry scans the image and identifies that the zlib package contains two high-severity vulnerabilities. It provides a link to the CVE database, which gives more details about each specific issue. Helpfully, it also shows that there are updated versions of zlib in which the issues are fixed, so we can use that information to create a Dockerfile that defines a new image with updated packages.

> cat Dockerfile

FROM jenkins/jenkins:lts-alpine

USER root

RUN apk update && apk upgrade

USER jenkins

> docker build -t vicregistry.myfirm.com/myproject/jenkins:lts-alpine-upgrade .

> docker push vicregistry.myfirm.com/myproject/jenkins:lts-alpine-upgrade

Once the image is pushed, VIC registry shows that it is 100% green – no vulnerabilities.

Note that, as a Cloud Admin, you can set a maximum vulnerability severity above which images cannot be deployed. You can also restrict the images deployed to an endpoint to only those that have been signed by a service such as Notary.
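On the client side, standard Docker Content Trust complements this: setting the DOCKER_CONTENT_TRUST environment variable makes the Docker client refuse to pull or run image tags that haven’t been signed via Notary. A minimal sketch (the registry address below is this post’s example, not a real endpoint):

```shell
# Enable Docker Content Trust for this shell session: the Docker client
# will now refuse to pull, run, or build from unsigned image tags.
export DOCKER_CONTENT_TRUST=1
echo "DOCKER_CONTENT_TRUST=$DOCKER_CONTENT_TRUST"

# With trust enabled, pulling an unsigned tag would fail, e.g.:
#   docker pull vicregistry.myfirm.com/myproject/jenkins:lts-alpine-upgrade
```

Unset the variable (or set it to an empty value) to return to the default, unenforced behavior.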

Data Persistence

The Jenkins container is configured in such a way that all of the persistent state is stored in one location – /var/jenkins_home. This means that you can safely start, stop or even upgrade the Jenkins master container and it will always come back up with all of the previous data, assuming you specified a named volume.

This data includes job definitions, credentials, logs, and plugins: very important data that should not only be persistent but should also have high integrity. This is not data you want to have to recreate! VIC makes it really easy to store persistent data with these characteristics by mapping a container volume to a persistent disk on a vSphere datastore. The volume can then benefit from the security capabilities of the datastore, such as encryption and replication on vSphere vSAN.

Resource Limits and HA

VIC makes it very easy to control how much resource a container consumes by allowing you to specify the number of vCPUs and a memory limit when it is deployed.

A VIC container has exclusive access to its own guest buffer cache, so there’s no resource competition from other containers.

If the vSphere cluster has HA enabled and an ESXi host goes down, the endpoint VM and/or the containers will be automatically restarted on other hosts.

Deploying Jenkins and Access Control

Now that we’ve thought about all of the implications, we can go ahead and deploy Jenkins. This can be done either with the VIC Management UI or using a vanilla Docker command-line client.

Regardless of how it’s done, it should only be possible to authenticate with the VIC endpoint with the appropriate credentials. As a Cloud Admin, the Management UI gives you control over who has access to those credentials, thereby limiting who has access to certain deployment endpoints and ensuring that credentials are not leaked.

In addition to this, VIC 1.2 now has integration with the vSphere Platform Services Controller which means that identities in the VIC Management UI can be vSphere identities, just with additional roles and responsibilities.

The Docker command line used to deploy Jenkins might look like this:

> docker volume create --opt VolumeStore=encrypted --opt Capacity=5G my-named-volume

> docker run -d --name jenkins-master --cpuset-cpus 2 -m 4g -p 8888:8080 -e TINI_SUBREAPER= -v my-named-volume:/var/jenkins_home vicregistry.myfirm.com/myproject/jenkins:lts-alpine-upgrade

> docker logs jenkins-master

Note that a named volume on the desired datastore should be created first and then mapped to the appropriate mount point. Setting the TINI_SUBREAPER environment variable to null ensures that the Tini init process functions correctly, given that it won’t run as PID 1 in a VIC container. When Jenkins starts, it generates an initial admin password and writes it to the logs, so the docker logs command will show you that password.
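To pull just the password out of that output, you can filter for the line Jenkins prints between its banner text. A hedged sketch using a simulated log excerpt (in practice, you would pipe the real output of docker logs jenkins-master; the password shown here is made up):

```shell
# Simulated excerpt of Jenkins startup logs; in practice, pipe the real
# output of: docker logs jenkins-master
logs='Jenkins initial setup is required. An admin user has been created.
Please use the following password to proceed to installation:

2f9c8a1b3d4e5f60718293a4b5c6d7e8

This may also be found at: /var/jenkins_home/secrets/initialAdminPassword'

# The password is the only line consisting of exactly 32 hex characters.
password=$(printf '%s\n' "$logs" | grep -E '^[0-9a-f]{32}$')
echo "initial admin password: $password"
```

The same password can also be read directly from /var/jenkins_home/secrets/initialAdminPassword on the named volume, as the log message itself notes.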

If you deploy the same container via the Management UI, you can create a template that persists the configuration so that you can re-use it for future deployments. Viewing the container once it’s deployed allows you to see statistics and logs for the container.

Once Jenkins is up and running, you can access it at http://vch-endpoint-address:8888. It will ask for the password from the logs and then prompt you to create an admin user. You can accept the default plugins, and once they’ve finished installing, you should see the Jenkins dashboard, ready for configuration!

Going to DockerCon Europe?

At DockerCon Europe, I’ll be speaking about how to use vSphere Integrated Containers for production-grade container deployments on Wednesday, Oct 18 from 1:30 PM to 1:50 PM in Auditorium 11. If you wish to learn more, please bookmark my session!

You can also visit VMware’s booth at DockerCon to see the latest demos of Pivotal Container Service and vSphere Integrated Containers.