
Authored by Nathan Ness, Senior Technical Marketing Engineer, Cloud Native Applications.

In this post I will discuss how a developer or infrastructure administrator can use Terraform to automate infrastructure provisioning. I will demonstrate how you can use Terraform with Photon Platform to deploy and scale Docker Datacenter. Terraform is a tool for building, changing, and versioning infrastructure. Terraform uses configuration files to describe the infrastructure you wish to provision.

I have created two configuration files for this deployment. The first one deploys the Docker Datacenter manager (manager.tf), and the second deploys the workers (worker.tf) and automatically adds them to the Swarm cluster. Photon Platform is used for multi-tenancy and to control how much infrastructure you are able to deploy.

[Image: Terraform working directory containing manager.tf and worker.tf]

Let’s take a look at manager.tf. The purpose of this file is to deploy a VM and install Docker UCP (Universal Control Plane) on top of it. For that we use the Photon Platform provider to deploy the VM and Terraform remote execution to issue the commands that deploy the Docker Datacenter manager.

[Code listing: manager.tf]

Now that the manager is deployed, we can start deploying Swarm workers and adding them to the cluster. That is where worker.tf comes in.

[Code listing: worker.tf]

Now that we have the configuration files for the manager and workers, let’s deploy them. In our terminal we issue terraform apply, and it will ask for the number of worker nodes you want to deploy. On the right you can see Photon Platform and the available infrastructure resources assigned to your project.
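As a rough sketch (assuming the worker count is exposed as a Terraform variable named count, matching the “COUNT” variable mentioned below), that interaction looks like this:

> terraform apply
var.count
  Enter a value: 5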

[Screenshot: terraform apply prompting for the worker count, with the Photon Platform UI on the right]

After you execute terraform apply it will start deploying the infrastructure described in the configuration files. The first thing it deploys is the Docker Datacenter Manager so that workers can join the cluster.

[Screenshot: the Docker Datacenter manager VM deployed in Photon Platform]

After the manager is deployed, it will spin up the number of workers you specified for the “COUNT” variable.

[Screenshot: the worker VMs deployed in Photon Platform]

The last part of a Terraform template is the “outputs” section. I want to know the endpoint for my Docker Datacenter VM, so I have defined that as an output. Terraform also tells you what it added/changed/removed whenever you issue a terraform plan or terraform apply.

[Screenshot: terraform apply output showing the resource summary and outputs]

Success! We have Docker Datacenter up and running with 5 worker nodes. Now you can start deploying your containerized workloads with the Docker API.

[Screenshot: the Docker Datacenter UI showing the manager and 5 worker nodes]

Now let’s say we need to scale up the number of worker nodes. Simply run terraform apply and specify the new number of worker nodes you want. Terraform will examine the current state of the infrastructure, deploy 5 more VMs on top of Photon Platform, and add them to the Swarm cluster.
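Assuming the same count variable, the scale-out can also be run non-interactively:

> terraform apply -var count=10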

[Screenshot: scaling out to 10 worker nodes from the CLI]

This is a simple example of how you can use “Infrastructure as Code” with Terraform and Photon Platform. You can store your configuration files in version control to track changes and to help you roll back if something goes wrong.

Thank you!


This blog is written by Abrar Shivani, a software engineer on the Cloud Native Applications Storage team.

This blog post will provide an overview of vSphere Cloud Provider which exposes persistent storage for containers orchestrated by Kubernetes on top of vSphere. By using a sample WordPress-MySQL application, this blog will provide a step-by-step guide on how administrators can use vSphere Cloud Provider to make application data highly available.

 

vSphere Cloud Provider

Cloud Provider is a module in Kubernetes that provides an interface for managing nodes, volumes, and networking routes. VMware contributes to both the vSphere and Photon Cloud Providers.

[Diagram: the Cloud Provider module in Kubernetes]

 

Containers launched using Kubernetes can be resurrected, yet the data stored by an application running inside a container is lost once the container goes down. With vSphere Cloud Provider, the data can be stored in a vSphere persistent volume, and after a pod is rescheduled, its containers get the data back wherever they are scheduled. vSphere Cloud Provider enables access to vSphere-managed storage (vSAN, VMFS, NFS) for applications deployed in Kubernetes. This is achieved by supporting the PersistentVolume and StorageClass primitives in Kubernetes. It interacts with vCenter to support operations such as creating and deleting volumes and attaching and detaching volumes to application pods and nodes. vSphere Cloud Provider creates persistent volumes backed by VMDKs and mounts them on the node where the pod is scheduled, making them available for the pod to use. Later, when the pod fails and is rescheduled, vSphere Cloud Provider automatically detaches the volume from the old node and attaches it to the node where the new pod is scheduled. It is mounted at the same location as before, and the pod gets its data back. Thus, storage failover is completely transparent to Kubernetes pods.

 

Let’s briefly go over storage primitives in Kubernetes that will help us understand this blog much better.

  • StorageClass – describes custom parameters that are passed to vSphere Cloud Provider for creating VMDKs (for example, diskformat).
  • PersistentVolume – a Kubernetes API object associated with a volume. It is created automatically if the volume is provisioned dynamically.
  • PersistentVolumeClaim – describes the user’s requirements for storage (for example, volume size).
  • Service – an abstraction that defines a set of pods and a policy to access them.
  • Deployment – makes sure that pods are running and provides declarative updates for pods and replica sets.

Deploying WordPress-MySQL Application with local storage

Let’s deploy the WordPress application using kubectl. For this demo, you will need a Kubernetes cluster configured with vSphere Cloud Provider to create and access vSphere volumes. A Kubernetes cluster can be launched with Kubernetes-Anywhere.

As shown below, the proposed deployment for the application contains two pods, each with a single container: one for MySQL and one for WordPress.

[Diagram: the WordPress and MySQL pods]

Deploy MySQL

# mysql-deployment.yaml

1.   apiVersion: v1
2.   kind: Service
3.   metadata:
4.     name: wordpress-mysql
5.     labels:
6.       app: wordpress
7.   spec:
8.     ports:
9.     - port: 3306
10.    selector:
11.      app: wordpress
12.      tier: mysql
13.    clusterIP: None
14.  ---
15.  apiVersion: extensions/v1beta1
16.  kind: Deployment
17.  metadata:
18.    name: wordpress-mysql
19.    labels:
20.      app: wordpress
21.  spec:
22.    strategy:
23.      type: Recreate
24.    template:
25.      metadata:
26.        labels:
27.          app: wordpress
28.          tier: mysql
29.      spec:
30.        containers:
31.        - image: mysql:5.6
32.          name: mysql
33.          env:
34.          - name: MYSQL_ROOT_PASSWORD
35.            value: mysqlpassword
36.          ports:
37.          - containerPort: 3306
38.            name: mysql

 

 

Let’s go over mysql-deployment.yaml. Lines 1–13 describe a Kubernetes Service object. This Service selects pods with the labels `app: wordpress` and `tier: mysql` (lines 10–12), and network requests are forwarded to one of those pods on port 3306 (line 9). Lines 15–38 describe a Kubernetes Deployment object. This Deployment creates a pod with a single container running the `mysql:5.6` image (line 31). The MySQL credentials are passed to the container as environment variables, and the MySQL container exposes port 3306, where it accepts network requests.

 

Let’s deploy the MySQL pod in Kubernetes. Run the following command to launch it:

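Assuming the manifest above is saved as mysql-deployment.yaml, the command is:

kubectl create -f mysql-deployment.yaml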

Once we execute the above command, let’s verify that the MySQL pod, deployment, and service are up:

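Standard kubectl checks cover this; for example:

kubectl get pods
kubectl get deployments
kubectl get services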

Now that we have the MySQL container running, we will deploy WordPress.

Deploying WordPress

We will use the following yaml for deploying WordPress.

# wordpress-deployment.yaml

1.   apiVersion: v1
2.   kind: Service
3.   metadata:
4.     name: wordpress
5.     labels:
6.       app: wordpress
7.   spec:
8.     ports:
9.     - port: 80
10.      nodePort: 30080
11.    selector:
12.      app: wordpress
13.      tier: frontend
14.    type: NodePort
15.  ---
16.  apiVersion: extensions/v1beta1
17.  kind: Deployment
18.  metadata:
19.    name: wordpress
20.    labels:
21.      app: wordpress
22.  spec:
23.    strategy:
24.      type: Recreate
25.    template:
26.      metadata:
27.        labels:
28.          app: wordpress
29.          tier: frontend
30.      spec:
31.        containers:
32.        - image: wordpress:4.6.1-apache
33.          name: wordpress
34.          env:
35.          - name: WORDPRESS_DB_HOST
36.            value: wordpress-mysql
37.          - name: WORDPRESS_DB_PASSWORD
38.            value: mysqlpassword
39.          ports:
40.          - containerPort: 80
41.            name: wordpress

 

 

Let’s go over wordpress-deployment.yaml. Lines 1–14 describe a Kubernetes Service object. This Service selects pods with the labels `app: wordpress` and `tier: frontend` (lines 11–13). Incoming network requests on port 30080 of any node in the Kubernetes cluster are forwarded to one of those pods on port 80 (lines 9–10). Lines 16–41 describe a Kubernetes Deployment object. This Deployment creates a pod with a single container running the `wordpress:4.6.1-apache` image (line 32). The MySQL host and credentials are passed to the container as environment variables, and the WordPress container exposes port 80, where it accepts network requests.

 

Let’s deploy the WordPress pod in Kubernetes. Run the following command to launch it:

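Assuming the manifest is saved as wordpress-deployment.yaml, the command is:

kubectl create -f wordpress-deployment.yaml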

Once we execute the above command, let’s verify that the WordPress pod is up and running using the following command:

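For example:

kubectl get pods -l app=wordpress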

Let’s find the IP address and port we can use to access WordPress:

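Since the Service is of type NodePort publishing port 30080, the IP of any cluster node will do; for example:

kubectl get nodes -o wide
kubectl describe service wordpress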

Visit your brand new WordPress blog at http://10.160.241.61:30080

You’ll see the familiar WordPress start page:

[Screenshot: the WordPress installation start page]

Select your language and click Continue to configure your website.

 

Now that we have WordPress up, let’s see whether the application data is still accessible when the MySQL pod goes down and is rescheduled. Let’s try to kill the MySQL pod.

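The pod name carries a generated suffix, so selecting by label is the easiest way to kill it; for example:

kubectl delete pod -l app=wordpress,tier=mysql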

Another MySQL pod will be scheduled on an available node.


Visit the same WordPress URL now, and you will find that WordPress is unable to establish a connection to the database.

[Screenshot: WordPress “Error establishing a database connection” page]

Before we proceed to the next section, where we will see how to persist data in VMDKs using vSphere Cloud Provider in Kubernetes, let’s clean up our setup by destroying all the Kubernetes objects:

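The exact commands in the screenshot may differ, but a label-based cleanup like this removes the Deployments and Services created above:

kubectl delete deployment,service -l app=wordpress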

Deploying WordPress-MySQL Application with vSphere Persistent Storage

 

In this section, we will look at how we can persist data using vSphere Cloud Provider. First, we will provision disks using a StorageClass and the vsphere-volume provisioner. Then we will claim volumes against this StorageClass using a PersistentVolumeClaim. Once a claim is bound, we will use its volume inside pods to store data.

 

You will need Kubernetes configured with vSphere Cloud Provider to create and access vSphere volumes. One way to get a Kubernetes cluster set up with the Cloud Provider configured is Kubernetes-Anywhere.

 

First, we need to define the disk format for the VMDKs that will be created to persist the MySQL and WordPress state. We can do this by creating a storage class.

Let’s create the storage class using ‘vsphere-storage-class.yaml’.

# vsphere-storage-class.yaml

1.   kind: StorageClass
2.   apiVersion: storage.k8s.io/v1beta1
3.   metadata:
4.     name: fast
5.   provisioner: kubernetes.io/vsphere-volume
6.   parameters:
7.     diskformat: zeroedthick

This yaml states that we will use the vsphere-volume provisioner (line 5) to provision disks with the ‘zeroedthick’ diskformat (line 7).

 

Run the following command to create storage class.

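Assuming the yaml above is saved as vsphere-storage-class.yaml, the command is:

kubectl create -f vsphere-storage-class.yaml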

Let’s validate that the storage class was created:

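For example:

kubectl get storageclass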

Deploy MySQL

Let’s deploy MySQL using mysql-deployment-vsphere.yaml

# mysql-deployment-vsphere.yaml

1.   apiVersion: v1
2.   kind: Service
3.   metadata:
4.     name: wordpress-mysql
5.     labels:
6.       app: wordpress
7.   spec:
8.     ports:
9.     - port: 3306
10.    selector:
11.      app: wordpress
12.      tier: mysql
13.    clusterIP: None
14.  ---
15.  apiVersion: v1
16.  kind: PersistentVolumeClaim
17.  metadata:
18.    name: mysql-pv-claim
19.    annotations:
20.      volume.beta.kubernetes.io/storage-class: fast
21.    labels:
22.      app: wordpress
23.  spec:
24.    accessModes:
25.    - ReadWriteOnce
26.    resources:
27.      requests:
28.        storage: 20Gi
29.  ---
30.  apiVersion: extensions/v1beta1
31.  kind: Deployment
32.  metadata:
33.    name: wordpress-mysql
34.    labels:
35.      app: wordpress
36.  spec:
37.    strategy:
38.      type: Recreate
39.    template:
40.      metadata:
41.        labels:
42.          app: wordpress
43.          tier: mysql
44.      spec:
45.        containers:
46.        - image: mysql:5.6
47.          name: mysql
48.          env:
49.          - name: MYSQL_ROOT_PASSWORD
50.            value: mysqlpassword
51.          ports:
52.          - containerPort: 3306
53.            name: mysql
54.          volumeMounts:
55.          - name: mysql-persistent-storage
56.            mountPath: /var/lib/mysql
57.        volumes:
58.        - name: mysql-persistent-storage
59.          persistentVolumeClaim:
60.            claimName: mysql-pv-claim

 

In the above yaml, we declare Service (lines 1–13), PersistentVolumeClaim (lines 15–28), and Deployment (lines 30–60) objects of Kubernetes.

The Service description is the same as before. The PersistentVolumeClaim uses the storage class we just defined (line 20), and it requests 20Gi for the volume (line 28). Once this claim is created, it provisions a 20GB VMDK with the zeroedthick disk format. Now, let’s look at the Deployment description. It is the same as before, with additional volume information: we use the volume bound to the claim `mysql-pv-claim` (lines 59–60) and mount it at the path ‘/var/lib/mysql’ (line 56). Once the MySQL container is launched, the vSphere persistent volume is attached to the node on which it runs, and MySQL writes its data to the VMDK.

 

Run the following command to deploy MySQL:

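Assuming the manifest is saved as mysql-deployment-vsphere.yaml, the command is:

kubectl create -f mysql-deployment-vsphere.yaml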

Let’s validate whether MySQL is deployed:

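Besides the pod itself, it is worth checking that the claim is bound; for example:

kubectl get pvc mysql-pv-claim
kubectl get pods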

Deploying WordPress

We will use the following yaml for deploying WordPress.

# wordpress-deployment-vsphere.yaml

1.   apiVersion: v1
2.   kind: Service
3.   metadata:
4.     name: wordpress
5.     labels:
6.       app: wordpress
7.   spec:
8.     ports:
9.     - port: 80
10.      nodePort: 30080
11.    selector:
12.      app: wordpress
13.      tier: frontend
14.    type: NodePort
15.  ---
16.  apiVersion: v1
17.  kind: PersistentVolumeClaim
18.  metadata:
19.    name: wp-pv-claim
20.    annotations:
21.      volume.beta.kubernetes.io/storage-class: fast
22.    labels:
23.      app: wordpress
24.  spec:
25.    accessModes:
26.    - ReadWriteOnce
27.    resources:
28.      requests:
29.        storage: 20Gi
30.  ---
31.  apiVersion: extensions/v1beta1
32.  kind: Deployment
33.  metadata:
34.    name: wordpress
35.    labels:
36.      app: wordpress
37.  spec:
38.    strategy:
39.      type: Recreate
40.    template:
41.      metadata:
42.        labels:
43.          app: wordpress
44.          tier: frontend
45.      spec:
46.        containers:
47.        - image: wordpress:4.6.1-apache
48.          name: wordpress
49.          env:
50.          - name: WORDPRESS_DB_HOST
51.            value: wordpress-mysql
52.          - name: WORDPRESS_DB_PASSWORD
53.            value: mysqlpassword
54.          ports:
55.          - containerPort: 80
56.            name: wordpress
57.          volumeMounts:
58.          - name: wordpress-persistent-storage
59.            mountPath: /var/www/html
60.        volumes:
61.        - name: wordpress-persistent-storage
62.          persistentVolumeClaim:
63.            claimName: wp-pv-claim

 

Similarly, this yaml creates the Service, PersistentVolumeClaim, and Deployment objects in Kubernetes for WordPress.

 

Run the following command to deploy WordPress:

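Assuming the manifest is saved as wordpress-deployment-vsphere.yaml, the command is:

kubectl create -f wordpress-deployment-vsphere.yaml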

Let’s validate whether WordPress is deployed:

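For example:

kubectl get pods -l app=wordpress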

Now that we have WordPress and MySQL up and running, let’s visit our new WordPress blog.

Enter the following command to get the IP address and port used to access the blog:

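As before, any node IP plus the 30080 NodePort works; for example:

kubectl get nodes -o wide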

Just enter this ‘10.160.241.61:30080’ in the browser to visit the WordPress blog.

Once WordPress is configured, let’s take down the WordPress pod.

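Again selecting by label; for example:

kubectl delete pod -l app=wordpress,tier=frontend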

Visit the same WordPress URL and see that the WordPress app is accessible with the same configuration.

Now let’s take down the MySQL pod.


Another MySQL pod will be scheduled on an available node.


Once the pod is restarted by Kubernetes, it will pick up the same volume, and its state is intact.

 

As you can see, we can easily persist the state of containers using vSphere Cloud Provider!

 

We would love to hear your feedback! We will be at DockerCon. Please drop by at booth G9 to learn more about what we have to offer.

 

 


Authored by Emad Benjamin, Principal Architect, Global Services Advanced Architecture

The room for this session was packed in Las Vegas, and boy did people come armed with their questions. It was great to see attendees from multiple companies who are paying attention to the Cloud Native Apps (CNA) space. Now, we promised that what was discussed in Vegas would stay in Vegas, but if we can offer a glimpse for our European attendees, then we are sure you would appreciate this minor break from tradition.

Speaking of breaking away from tradition, well, “Hello, CNA!” – what a way to begin the session: just what is CNA, and how does one distinguish a cloud native app from a monolithic one? But wait a minute!? What is monolithic? Draw it for me, please! And this is how the conversation began; we defined what we see as a monolithic app, as opposed to the highly scaled-out, microservices-like architectures often found in CNA. There is great flexibility offered on Day 1, and we talked about the benefits, but what happens on Day 2 (security, manageability, scalability)? Well, we discussed the answers to that too, and we won’t spoil the surprise, but suffice it to say that if you come to the session we will do our best to answer any and all questions about this, IMHO, rapidly forming new and highly opinionated space. Come join us and listen to a few of our technical services experts on how their customers are tackling CNA.


But wait…you didn’t think that was it…here read more…

In this group discussion we will have an interactive session on what cloud native is, what scale it addresses, who some of the adopters are, and which direction this trend will force the market over the next few years. It is an opportunity for you to ask everything from the simplest of questions to the most complex ones; sometimes a simple question such as “what is cloud native” can quickly turn into a complicated answer, and hence the opportunity to discuss the wide variety of opinion that surrounds this.

In this talk we will highlight the elements of this rapidly moving phenomenon sweeping through our industry, a phenomenon of building platforms: not just business logic software but infrastructure as software. We humbly believe that the drive towards these platform solutions is due to the following fact: approximately half of new applications fail to meet their performance objectives, and almost all of them have 2x more cloud capacity provisioned than is actually needed. As developers/DevOps engineers we live with this fact every day, always chasing performance and feasible scalability, but never actually cementing it into a scientific equation where it is predictable; rather, it has always been trial-based and heavily prone to error. As a result we find ourselves dealing with some interesting platforming patterns of this decade, and unfortunately we are led to believe that such patterns as microservices, 3rd platforms, cloud native, and 12-factor are mainly a change in coding patterns. However, contrary to this popular belief, these patterns represent a major change in “deployment” approach: a change in how we deploy and structure code artifacts within application runtimes, and how those application runtimes can leverage the underlying cloud capacity. These patterns are not code design patterns, but rather platform engineering patterns, with a drive to using APIs/software to define application platform policies that manage scalability, availability, and performance in a predictable manner.

 


Authored by Mark Peek, Principal Engineer, Cloud-Native Applications

Technologies such as PaaS and containers are making developers increasingly more efficient at delivering their code into production. The tooling around continuous integration and continuous deployment is reducing the time it takes to safely push code through the delivery pipeline. Earlier this year we announced the Pivotal-VMware Cloud Native Stack, which delivered the power of Pivotal Cloud Foundry on top of Photon Platform. And at VMworld US 2016 we hinted at more to come on top of Photon Platform.


Next week at VMworld Europe 2016 in Barcelona, Jared Rosoff (CTO, Cloud Native Applications) will be delivering a spotlight session on Delivering Containers as a Service with Photon Platform [CNA12273]. In this session he will talk about how containers are becoming an increasingly popular way to deliver software from development out into production. Kubernetes integration with Photon Platform can address the challenges of running an enterprise container infrastructure. Jared will discuss capabilities such as self-service Kubernetes clusters on demand, multi-tenant operation, and much more. Come join us in Barcelona to hear about our Photon Platform offerings.


Authored by Alka Gupta, Senior Global Technical Alliance Manager



You have heard about Pivotal Cloud Foundry. You have also heard about VMware’s brand new product, Photon Platform. Do you want to learn more about each of them, and how the two work together to deliver an optimized cloud native experience to both operators and developers? Where does each sit in the stack, and what use cases does a PCF + Photon Platform solution address? When should I run PCF on vSphere, and when on Photon Platform?

These are exactly the questions we will address in this session: Architecting Cloud-Native Systems with Photon and Pivotal Cloud Foundry [CNA7813-QT]

We will share a real world case study on deploying PCF on Photon Platform, lessons learned and some best practices. You will be able to walk away with an understanding of Photon Platform architecture, why it is best suited to run Pivotal Cloud Foundry, architecture components of each and how they integrate together.


Authored by Alka Gupta, Senior Global Technical Alliance Manager


The digital era is upon us. Every business is challenged by new innovations, whether it’s new products like Tesla, new business models like Venmo, or new user experiences like Uber. Customers and end users expect businesses to provide experiences that are personalized, localized, mobilized, and responsive to their demands in cycles nearing real time. And I can guarantee you that your company is impacted by these trends as well! Achieving state-of-the-art application development and delivery lies at the heart of this transformation and accelerates your time-to-market.

You are likely to have questions around how you can extend your current investments in the VMware SDDC towards enabling your developers to build these next-gen apps. In session CNA-7813, learn how VMware and Pivotal have partnered to deliver best-in-class integrated solutions in this space, targeting both operators and developers.

In addition, you will become familiar with Pivotal Cloud Foundry and its core tenets. You will also learn about the operational, reporting and monitoring capabilities available for PCF from VMware vRealize suite of products.  You will get the best practices around securing PCF with NSX today, and what’s on the horizon. For those interested in carving out separate greenfield stacks for cloud native workloads, you will see how to run Pivotal Cloud Foundry on our newly announced Photon Platform.

From this session, you will walk away with a good understanding of standing up a Pivotal Cloud Foundry environment in your data center, operationalizing it, and rolling it into production. You will be able to offer your developers a turnkey cloud native app-dev platform to build and run their apps with agility, with operational control via your trusted VMware SDDC.


Authored by Ryan Kelly, Staff Systems Engineer, Cloud Management

In this guide I will walk you through a simple setup of Admiral using Photon OS as the container host. Admiral™ is a highly scalable and very lightweight container management platform for deploying and managing container-based applications. It is designed to have a small footprint and boot extremely quickly. Admiral™ is intended to provide automated deployment and lifecycle management of containers.

Key Features:

  • Rule-based resource management – Setup your deployment preferences to let Admiral™ manage container placement.
  • Live state updates – Provides a live view of your system.
  • Efficient multi-container template management – Enables logical multi-container application deployments.

Pre-Reqs

  • One Photon OS VM to install the Admiral Container Service
  • Two Photon OS VMs with the Docker Remote API enabled to use as container hosts: see my guide here
  • Internet access from all of the above Photon OS VMs
  • A quiet place where you will not be interrupted. See my guide here.

Log in to one of your Photon OS VMs, type the following, and press Enter:

docker run -d -p 8282:8282 --name admiral vmware/admiral

After a few minutes (while the image is pulled) the Admiral container should be up and running; you can confirm with docker ps.

Open a browser to the IP address of your Photon OS VM on port 8282 (http://ipaddress:8282), then click on Add Host.

Enter the IP and host name of one of your other Photon OS VMs

Note: The Photon OS host you’re adding needs the Docker Remote API enabled, see my guide here.

Next, click login credentials, New Credentials and enter the following information

Next, select the default-resource-pool

Now click verify to make sure it connected correctly

Now click Add

You should now see this screen with your new host, now click on Templates

In the search box enter vmtocloud and press enter, then click to provision the vmtocloud/myblog template

Watch the progress screen on the right; after several minutes it should show finished. Now click the Containers tab.

Notice that all the templates are being pulled from Docker Hub. In a later post I will show you how to use VMware Harbor Registry locally.

Click the My Blog Container

Notice all the information you get about the running container. Now click the second port link to go to the WordPress Site

Notice you now have a container running WordPress

Now let’s add a second host. Back in the container service screen click on the hosts tab

Now click add host

Enter the same information as before and click verify

Remember, the Photon host needs to have the Docker Remote API enabled or the verify will fail with a connection error. See my guide here.

Now click add

You should now see two hosts available for Container provisioning

You should now be well on your way to using Admiral, see the user guide here to explore more features.


Authored by Ryan Kelly, Staff Systems Engineer, Cloud Management

So you want to connect to the Docker instance on Photon OS remotely from another Docker client? In this guide I will walk you through a few short steps to configure Photon OS to enable the Docker Remote API. NOTE: This is not considered a secure method. If you want to use encryption and secure connections, I will have a follow-up post on that soon.

Log in to your Photon OS using SSH or open the console, type the following, and press Enter:

systemctl stop docker
vi /etc/default/docker

Press i on the keyboard, then enter the following. When done, press the Esc key, then hold Shift and press the Z key twice to save and exit:

DOCKER_OPTS="-H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock"

Since Photon OS uses iptables, we need to open that port. Type the following and press Enter:

iptables -A INPUT -p tcp --dport 2375 -j ACCEPT

Now start Docker with the following command and press Enter:

systemctl start docker

To test that it worked, open a web browser to the Photon OS host at http://ipaddress:2375/info and you should see Docker’s engine information returned as JSON.
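You can also hit the same API from the command line with curl, or point any Docker client at the host (replace ipaddress with your Photon OS VM’s IP):

curl http://ipaddress:2375/version
docker -H tcp://ipaddress:2375 info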

Enjoy!


Authored by Ryan Kelly, Staff Systems Engineer, Cloud Management

We are back from another successful VMworld and a lot of folks are asking for the slides from this session. While the official slides are being posted on VMworld.com, I want to follow up with a blog post on this for anyone that was unable to attend in person. As you may or may not know, VMware recently announced Photon Platform. In my initial conversations with customers, I came to the conclusion that there is some confusion between vSphere and what Photon Platform is designed for. That was the basis for my session at VMworld this year.

So, what is Photon Platform?

Purpose built, multi-tenant, scale-out infrastructure for running containers on proven VMware technology you can trust!

A closer look under the covers.

Photon Controller is the scheduler and control plane that provides the constructs to combine ESXi, vSAN and NSX into a container cloud.

Ok, but how difficult is it to install?

Easy as one, two, three.

You’re just a few clicks away from access to an industry-standard API and command line.

Photon Platform has role-based access focused on ensuring developers retain their preferred tools and workflows.

What are the use cases for Photon Platform?

But we already have vSphere?

As stated, Photon Platform has a heavy focus on containers. That’s not to say it’s your only option; if you are already running vSphere, you have a huge head start on containers. The question we get a lot is…

Also, containers as a service with vRealize Automation

So which one do I choose?

vSphere Integrated Containers:

  • Already invested in and standardized around vSphere
  • You need a quick and easy solution for your developers today
  • Plans to run containers in production
  • Requirement for Policy, governance and metered self service – vRealize Automation
  • Lack of resources or commitment to adopt/learn/train on a new technology

Photon Platform:

  • Lower-cost IaaS layer for Pivotal Cloud Foundry (the PCF + Photon bundle!)
  • Very mature agile development processes in place that need to scale beyond vSphere maximums (> 35,000 VMs)
  • Currently building, or planning to build, large distributed microservice architectures
  • Alternative to other programmable infrastructure stacks
  • Large-scale, high-churn environments (spin up and tear down thousands of servers/containers per day)

Sometimes both:

  • Already using containers on vSphere and need to deploy at a larger scale and faster pace
  • Old hardware lying around that you want to use to give developers a sandbox environment and relieve some of the load on your vSphere environment
  • Internal mandate to move off of Public Cloud Service
  • Innovation projects:
    • New Mobile App Development
    • Life Science research projects
    • Application Re-Architecture Projects
    • Internet of things projects
    • Distributed computing


As many of you know, docker-machine is the client-side tool that allows an individual on his/her own workstation to fire up Docker hosts, either locally or remotely.

Docker-machine supports a variety of “drivers” to accomplish this. Some of these drivers deploy locally (e.g. VirtualBox, VMware Fusion), some deploy inside the data center (e.g. OpenStack, VMware vSphere), and others can deploy in public clouds (e.g. AWS, VMware vCloud Air).

As I was experimenting with the vSphere driver for some tests I was doing with Docker Swarm, I found that the number of options available on the vSphere driver, and the flexibility it provides, could make it challenging to use without proper examples to kick off the scripting.

For this reason, I am sharing some of the scripts I have used for my experiments with the ultimate goal of providing solid practical examples of how to use those vSphere parameters.

For your convenience, I am also attaching the variable configuration examples in this post.

This is how you’d configure the variables or corresponding options if you were to deploy to a vCenter server:

VSPHERE_VCENTER=192.168.1.12                               # vCenter IP/FQDN
VSPHERE_USERNAME='administrator@vsphere.local'             # vCenter user
VSPHERE_PASSWORD='***********'                             # vCenter user password
VSPHERE_NETWORK='VM Network'                               # PortGroup
VSPHERE_DATASTORE='datastore1'                             # Datastore
VSPHERE_DATACENTER='Home'                                  # Datacenter name
VSPHERE_HOSTSYSTEM='Cluster1/*'                            # Cluster name
#VSPHERE_POOL='/Home/host/Cluster1/Resources/SwarmTeam13'  # *optional* Resource Pool name

This is how you’d configure the variables or corresponding options if you were to deploy to a standalone ESXi host:

VSPHERE_VCENTER=192.168.209.11                  # ESXi IP/FQDN
VSPHERE_USERNAME='root'                         # ESXi user
VSPHERE_PASSWORD='***********'                  # ESXi user password
VSPHERE_NETWORK='VM Network'                    # PortGroup
VSPHERE_DATASTORE='datastore1'                  # Datastore
#VSPHERE_POOL='/*/host/*/Resources/SwarmTeam9'  # *optional* Resource Pool name
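For reference, here is a sketch of how these variables map onto the driver’s command-line options in a single docker-machine create invocation (the machine name swarm-node1 is just an example):

> docker-machine create -d vmwarevsphere \
    --vmwarevsphere-vcenter "$VSPHERE_VCENTER" \
    --vmwarevsphere-username "$VSPHERE_USERNAME" \
    --vmwarevsphere-password "$VSPHERE_PASSWORD" \
    --vmwarevsphere-network "$VSPHERE_NETWORK" \
    --vmwarevsphere-datastore "$VSPHERE_DATASTORE" \
    --vmwarevsphere-datacenter "$VSPHERE_DATACENTER" \
    --vmwarevsphere-hostsystem "$VSPHERE_HOSTSYSTEM" \
    --vmwarevsphere-pool "$VSPHERE_POOL" \
    swarm-node1

For the standalone ESXi case, the datacenter and hostsystem options are simply omitted, as the second variable block above suggests.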

In particular, the syntax to use for the VSPHERE_POOL variable (or corresponding options) requires a bit of attention.

Let’s say, for example, that you want to deploy a 5-node Swarm cluster in a vCenter Resource Pool called “SwarmTeam13”, inside a cluster called “Cluster1”, in a data center called “Home”.

To do so, you will use the first syntax above inside the swarmcluster_consul.sh script and then you will run it on your workstation using the following parameters:

> ./swarmcluster_consul.sh 5 vmwarevsphere vcenter

Note: the VSPHERE_POOL variable has to be set if you want to deploy inside a Resource Pool (and not in the root of the cluster) so you need to remove the comment preceding the variable.

This is what you will see in the vCenter UI once the script has completed:

[Screenshot: the Swarm cluster VMs in the vCenter UI]

You can set the proper environmental variables to access the Swarm cluster by running the following command at the prompt:

> eval $(docker-machine env --swarm swarm-node1-master)

If you want to play with the scripts and deploy / destroy an entire multi-node Swarm cluster on vSphere, just grab them here.

Enjoy!

@mreferre