By Eduardo Meirelles, Consulting Architect, VMware

The image below shows a high-level view of the networks that vSphere Integrated Containers (VIC) uses and how they connect to your vSphere environment, to the Registry and Management Portal, and to public registries such as Docker Hub.

[Figure: High-level view of VIC networks]

As you can see from the picture above, a Virtual Container Host (VCH) allows you to easily segregate not only management traffic from data traffic, but also Docker client traffic from intra-container traffic. Moreover, since containers in VIC are deployed as virtual machines (VMs), vSphere administrators can make vSphere networks directly available to containers.

The VIC network overview lightboard details the networking concepts for vSphere Integrated Containers, while the recently updated documentation comes in handy to further explain these options (a combined vic-machine example follows the list):

  • Client Network: The Client Network is the network on which a VCH exposes the Docker API service; it is where developers point their Docker clients to manage and run containers.
  • Public Network: The Public Network is used by a VCH to pull images from registries. The most common use case is pulling images from the public Docker Hub. You can also create your own private, secure local registry by using the VIC Registry (based on Project Harbor).
  • Management Network: The Management Network is used by a VCH to securely communicate with vCenter and ESXi hosts.
  • Bridge Network: The Bridge Network is a private network for container communication. External access is granted by exposing container ports and routing the traffic through the VCH endpoint VM. With no extra configuration, VIC provides service discovery, while a built-in IPAM server assigns the containerVMs private IP addresses from the subnet of the bridge network.
  • Container Network: A Container Network is a user-defined network that can be used to connect containerVMs directly to a routable network. Container networks allow vSphere administrators to make vSphere networks directly available to containers. Container networks are specific to VIC and have no equivalent in Docker.
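
To make these roles concrete, here is a sketch of a vic-machine create command that assigns each role to its own vSphere network. The port group names (vch1-bridge, clients-pg, public-pg, mgmt-pg, routable-pg), the VCH name, and the vCenter address are hypothetical placeholders; substitute the port groups that exist in your environment:

vic-machine create \
  --name vch1 \
  --target 'vcenter.example.com/Datacenter' \
  --bridge-network vch1-bridge \
  --client-network clients-pg \
  --public-network public-pg \
  --management-network mgmt-pg \
  --container-network routable-pg:routable

Per the VIC documentation, the bridge network should be a port group dedicated to that VCH and not shared with other VCHs or VMs.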

For developers, one of the standout features is the ability of VIC to expose containers directly on a network through the container network option, vic-machine create --container-network. You can connect the containerVMs to any distributed port group or NSX logical switch, giving them a dedicated connection to the network.
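
As a quick sketch (the port group name routable-pg and the friendly name routable are placeholders), you map a port group to a named container network when you create the VCH, and developers then attach containers to it like any other Docker network:

vic-machine create --container-network routable-pg:routable

docker network ls
docker run -d --net routable nginx

The part after the colon is the network name that appears in docker network ls; a container attached with --net gets its own IP address on that port group instead of being NATed through the VCH endpoint VM.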

This feature allows containerized applications to get their own routable IP and become first class citizens of your data center, providing the following benefits:

  • No single point of failure: Now every container has its own dedicated network connection, so even if the VCH endpoint VM fails, there’s no outage for your applications.
  • No network bandwidth sharing: Every container gets its own network interface, and all the bandwidth that interface provides is available to the application. Traffic does not route through the VCH endpoint VM via network address translation (NAT), and containers do not share the public IP of the VCH.
  • No NAT conflicts: There’s no need for port mapping anymore. Every container gets its own IP address. The container services are directly exposed on the network without NAT, so applications that once could not run on containers can now run by using VIC.
  • No port conflicts: Since every container gets its own IP, you can run multiple application containers that require the same exclusive port on the same VCH. This provides better utilization of your resources.

All of this is possible through the use of the Container Network option.
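
For example, two web servers that both need port 80 can run side by side on the same VCH, something a traditional single-IP container host cannot do without remapping ports. A sketch, reusing the hypothetical routable container network from above (the -p 80 option publishes the port, which matters for the firewall default discussed below):

docker run -d --net routable -p 80 --name web1 nginx
docker run -d --net routable -p 80 --name web2 nginx

Each nginx instance is reachable on port 80 at its own routable IP, with no clash and no NAT rule to manage.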

But wait, there’s more! To give vSphere administrators even better management and control over the traffic that flows on container networks, VIC introduced the new container network firewall in version 1.2.

[Figure: Container network firewall trust levels]

The container network firewall provides five distinct trust levels:

  1. Closed: no traffic can come in or out of the container interface.
  2. Open: all traffic is permitted.
  3. Outbound: only outbound connections are permitted, which works well for containers that consume but do not provide services.
  4. Published: only connections to published ports are permitted. When you run a container, you must explicitly publish the permitted ports with the -p option. (Default)
  5. Peers: only containers on the same peer interface are permitted to communicate with each other. To establish peers, you need to provide an IP address range to the container network with the vic-machine create --container-network-ip-range option when you create a VCH.

The container network firewall trust level is set when you create a VCH:

vic-machine create --container-network-firewall "PortGroup":[closed | open | outbound | published | peers]

In VIC version 1.2, the default trust level is set to Published. This means that you now have to explicitly identify which ports will be exposed with the -p option.

e.g. docker run -d -p 80 nginx

Running a container with the -P option (e.g. docker run -d -P nginx) will not expose any service declared in the Dockerfile to the network, and your application will be unreachable from the outside.

Specifying the exposed port improves security and gives you more awareness of your environment and applications.

Now, if you still want to use the -P option (e.g. docker run -d -P nginx), you need to change the container network firewall trust level to Open:

vic-machine create --container-network "PortGroup" --container-network-firewall "PortGroup":open

As you can see, as a vSphere administrator you get a lot of power and flexibility when configuring VCHs for your developers.

You can configure VCHs where no network traffic can come in or out of the containers, no matter what the developers try to do:

vic-machine create --container-network "PortGroup" --container-network-firewall "PortGroup":closed

Or, you can configure VCHs where all traffic is permitted and you let the developer decide at the application level which ports are exposed and which are not:

vic-machine create --container-network "PortGroup" --container-network-firewall "PortGroup":open

Or, you can configure VCHs where only outbound connections are permitted. This works well if you plan to host applications that consume but do not provide services:

vic-machine create --container-network "PortGroup" --container-network-firewall "PortGroup":outbound

You can configure VCHs where only connections to published ports are permitted, letting developers or DevOps teams control which ports are open, even for applications where you can't change the Dockerfile. Think of all the new COTS applications delivered as Docker images:

vic-machine create --container-network "PortGroup" --container-network-firewall "PortGroup":published

You can also configure VCHs where the containers can only communicate with each other. This is ideal for a set of microservices that need to talk with each other, but not with the external world. For example, a set of Spark jobs that compute some data and save the result to disk:

vic-machine create --container-network "PortGroup" --container-network-firewall "PortGroup":peers
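
As noted in the trust level list above, peers also requires an IP range on the container network so that the VCH can identify which addresses count as peers. A sketch with a hypothetical 192.168.100.0/24 range:

vic-machine create --container-network "PortGroup" --container-network-ip-range "PortGroup":192.168.100.0/24 --container-network-firewall "PortGroup":peers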

You should now have a better understanding of the benefits that the different networking options of VMware vSphere Integrated Containers, together with the newly introduced container network firewall, provide over traditional container host implementations, and how they make deploying containers on VIC even more secure. You should also know how to segregate different types of network traffic, how to make containers routable by exposing them directly on a network, and how to secure network connections by using the five distinct trust levels of the container network firewall.

If you’re interested in getting more hands-on knowledge around VIC, download the latest 1.2.1 release.

Check back on the Cloud-Native Applications blog for the latest around VIC, and be sure to follow us on Twitter (@cloudnativeapps).