Posted by Jonathan Cham, Staff Systems Engineer

You have participated in one of the most transformational shifts in the IT industry. Fifteen years ago, could you imagine being able to move a running application from one part of the datacenter to another, with no user impact, at the click of a button? VMware has enabled an industry-wide shift from hardware to software, from resource silos to pooled capacity, and from manually driven processes to automation. Now with containers, an older technology that Docker and CoreOS have made simple to use, we can run multiple applications on a shared kernel almost instantaneously. Developers have quickly adopted containers as an easy way to run distributed, complex, portable application architectures on their personal desktops. That portability, flexibility, and ease of collaboration make containers a great choice for new applications. With this great power comes great responsibility, and that responsibility now lies in management, operations, and visibility into this new paradigm of application development. Let’s take a look at common misconceptions about containers before embarking on your container journey.

1. Containers sound amazing, time to migrate my VMs to containers
While there are certain scenarios where this might make sense, containers have a completely different operating model from VMs, which makes migrating a VM to containers tremendously complex and difficult. Containers behave like processes and share the kernel, meaning that your application essentially shares the OS with every other container on the host. Because a container is anchored to a specific process, when that process dies or stops, the entire container stops. If a container crashes the kernel, every container on that host crashes with it. You certainly don’t expect these behaviors from your VMs, so don’t attempt a migration unless you understand all of the implications.
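
If you have Docker installed, a quick experiment makes the process model concrete (the container name demo and the alpine image are used purely for illustration):
docker run --name demo alpine sleep 5
docker ps -a --filter name=demo
docker rm demo

Run the second command a few seconds after the first and the container shows an Exited status; nothing keeps it alive once sleep finishes.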

Containers also share the host’s hardware directly, whereas each VM has its own virtualized hardware stack that can be configured independently. That hardware layer provides an isolation boundary that security organizations have relied on. With containers, this hardware-layer isolation does not exist, so additional security mechanisms such as SELinux and AppArmor are required, and a vulnerability in the Linux kernel can compromise every container running on it. Migrations are possible, but don’t mistake a container for a lightweight VM.
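
A simple way to see the shared kernel for yourself, again assuming Docker and the alpine image are available:
uname -r
docker run --rm alpine uname -r

Both commands print the same kernel version, because the container has no kernel of its own.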

2. I’ll tell my developers to re-write our applications with containers
That’s great! This is the perfect use case. However, applications built with containers follow a different application development paradigm: this is not just rewriting an application, it is re-architecting it. Best practices for container applications call for an immutable, stateless, highly available, dynamic, shared-nothing architecture. Imagine for a second that your VM could suddenly reappear on another host, with a different IP and storage backend. How would you handle that? In the container world, where microservices are very popular, new datacenter components such as service registry and discovery, image repositories, API gateways, container clustering, orchestration, and scheduling are required to address the dynamic nature of containers. Rewriting an application to use containers requires re-architecting the application as well; the sketch below shows one small piece of that shift.
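
As a small, hypothetical sketch of what that re-architecture looks like in practice (my-service:1.0 is a placeholder image that exposes a port, and the variable names are made up), the same immutable image can be started anywhere, with configuration injected at run time and its port assigned dynamically rather than hard-coded:
docker run -d --name svc1 -P -e DB_HOST=db.internal -e DB_PORT=5432 my-service:1.0
docker port svc1

Because the container keeps no local state and discovers its configuration and port at startup, an orchestrator is free to kill it and restart it on any other host.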

3. Containers will help my application deploy faster
Both operators and developers can benefit greatly from containers, because the abstraction between the two layers facilitates CI/CD pipelines that deploy applications quickly. It is a clean separation of duties: developers manage the containers, and operations manages everything underneath. However, when you look at an application deployment timeline, 95% of that time is people and process. Unless your people and processes are structured around a DevOps mindset, you will gain very little benefit from using containers. To this day, there are large enterprises that require months of lead time just to get compute, storage, and network resources. Do your change requests take two weeks? Do you have three layers of approvals? Containers won’t remove your people and process barriers, but they can be a fantastic platform for starting to break down the barriers that prevent agility.

4. Other customers run containers in production, I can too
Think for a second about all the IT systems and components required to run your datacenter today: monitoring tools (network, compute, storage, application), change management, compliance, configuration management, authentication, security, backup, deployment tools, and so on. Some large organizations have hundreds to thousands of tools, both off the shelf and custom built, that they use to run their datacenters. Unfortunately, very few of those tools, if any, translate well to a container world. That is why you see a large, fragmented set of open source tools addressing these gaps one by one, each adding another operational layer to deploying containers. Many organizations end up building their own tools to support their container deployments. Customers who start in dev/test can gain experience running containers, but they will need yet another operational layer to migrate the application back into VMs for production deployment. Yes, you can run your containers in production — it will just require an entirely new operational paradigm and a toolset that is still maturing.

Hopefully, these misconceptions don’t deter you from experimenting with containers. At VMware, we’re making some big bets to help customers who want to dive into containers transition to a containerized environment. We have built an entire stack of container-related technology (and open sourced much of it!), along with container identity and access management, to help our customers adopt containers with an enterprise-grade approach. Our vSphere Integrated Containers technology launches a 25 MB Photon kernel (a pico version of Photon OS) and runs containers as if they were VMs. The ultimate benefit of a VM-backed container is being able to leverage all of your enterprise investments in security, management, operations, and compliance. From a developer’s point of view, it is as fast and as easy as a container; on the back end, we get the manageability and security of a VM. IT is happy. Developers are happy.

You’ve built a successful business, and while you want to experiment with new technologies, you also want to minimize the risk in doing so. We talk to many customers, and we have committed to helping them bring agility to their business no matter what the technology. We understand the challenges and are in the best position to help you adopt new technologies without sacrificing enterprise-grade capabilities. Are you ready for containers? You can be with VMware.

Posted by Chris Adams

One of the many great things about Photon OS, VMware’s Cloud Native Apps container host, is that it is open source. The project is hosted in VMware’s GitHub repository, so you can download the latest code and build it from source. Several builds are possible depending on your needs: ISOs, Azure images, Amazon AMI images, Google Compute Engine images, and more. Of course, you can also download pre-built images from the Photon OS download page on the Photon OS website if you are in a rush or want a more official release.

Downloading an official release is the way to go if you want something pre-built at a defined release level. I sometimes want to try a version with the most up-to-date commits, or to pick up a new feature that hasn’t made it into the latest stable release yet. Since Photon is open source, you may also want to change something in the codebase yourself.

The easiest way to evaluate the many options is to build everything at once. The following directions walk you through the steps necessary to build everything on an Ubuntu-based Linux system (VM or physical host). I used Linux Mint 17.2, but Ubuntu (14.x or 15.x) or any Ubuntu derivative should work fine. All commands run for the Photon build are shown in Courier font.

First, we need to ensure some required software is installed:
sudo apt-get install bison gawk g++ createrepo python-aptdaemon genisoimage texinfo python-requests git vim

Now install Docker:
wget -qO- https://get.docker.com/ | sh

Allow your user to run docker (change NAME to your username):
sudo usermod -aG docker NAME
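
Group membership changes only take effect in a new login session, so log out and back in (or start a new shell with newgrp docker). You can then confirm Docker works without sudo:
newgrp docker
docker run hello-world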

By default on Linux Mint (and most other Ubuntu-based distributions), /bin/sh is a symbolic link to /bin/dash. We want /bin/sh to resolve to /bin/bash instead of /bin/dash, so update the link to point to bash. This can be reverted when you are done building Photon by linking /bin/sh back to /bin/dash.
sudo ln -sf /bin/bash /bin/sh
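
You can confirm the change with ls -l /bin/sh. When you are finished building Photon, restore the default link:
sudo ln -sf /bin/dash /bin/sh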

Make a directory for the repo and change into it:
mkdir repos
cd repos

Clone the Photon Git repo from GitHub:
git clone https://github.com/vmware/photon.git
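
If you want to record exactly which commit you are building from (handy when you are chasing a feature that has not shipped in a release yet), you can note it now:
git -C photon log --oneline -1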

Change directory into the newly created photon directory:
cd photon

Begin the build process:
sudo make all
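
The build takes a long time, so you may prefer to capture its output for later review rather than watching it scroll by; one option is to pipe it through tee:
sudo make all 2>&1 | tee build.log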

This process will take a while to complete. You might want to start it before going to bed or run it in the background while doing other things throughout the day. The build process stores your freshly built material in the repos/photon/stage directory. You will see ISO files as well as several other directories. Explore the contents and enjoy your freshly built Photon binaries, including the ISOs you can use to build Photon VMs.
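
When the build finishes, you can list the artifacts from inside the photon directory:
ls -lh stage/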

Happy building! Have questions? Check out our community.