Going to the bank or to the restaurant?

A few years ago, there was a common idea that the public cloud would "eat it all", much like the banking industry did with the management of personal finances. How many of us keep 99.9% of our finances in a bank? Are the internal IT workloads of companies across the world going to make the same migration because the public cloud is so attractive?

Well, that common idea is now more nuanced. It looks like we are instead moving towards something closer to the "cooking" model. We all have kitchens to cook our food, but we also go to restaurants from time to time. Cooking at home or going to the restaurant provides the same service in the end (food), but with different characteristics that give us the freedom to use one or the other depending on time, quality, price, and so on.

Back in IT terms, companies today want to have the choice between private and public cloud. They want to build this hybrid cloud model. They want to provide a common framework for their end users to consume workloads from the private cloud, which has its unique characteristics, as well as from the public clouds, which have theirs.

One important prerequisite to achieve this is to have a powerful Cloud Management Platform. This CMP needs to provide a number of OOTB features as well as great extensibility to fit all the use cases that companies will face in answering their customers' requests.

A common need is to provide a portal to deploy new workloads in the public and private clouds, but it is just as important to import the existing workloads that shadow IT is already using. You will find many blogs focusing on how to deploy new workloads to the public cloud. Here we are focusing on importing the existing ones.

Why is it important to import existing public cloud workloads?

Well, there’s a number of reasons.

For IT, it’s about providing additional services for the workloads that went into the Public Cloud. Services such as governance, 24/7 performance and availability monitoring, cost management, etc…

It’s also about getting control back on the shadow IT.

For developers, it’s about having the support from their internal IT while keeping the public cloud benefits.

What we found out is that a number of users started using the public cloud because it initially took less time to deploy their workloads there.

Now, they are stuck with the job of monitoring and managing these workloads, and they could use IT's help to support them and provide additional governance, such as setting up the policies that specify who can manage these resources and what they are allowed to do with them.

Additionally, developers will actually be quite happy to see how fast IT's private cloud can deploy workloads with the same management interface as their public cloud workloads.

That’s what vRealize Automation, the VMware Cloud Management Platform, is about.

[Screenshot: CMP]

 

By using vRealize Automation you are able to import existing AWS instances, such as the ones you see in this screenshot…

… Into the vRealize Automation portal.

[Screenshot: AWS imported example]

In this first blog post we are going to focus on how to make this happen. In a second blog post, we will look at how to add actions (start, stop, terminate, etc.) to manage these imported workloads.

[Screenshot: import AWS into vRA]

OK, great. How does it work?

To show the use case of importing existing AWS instances we are going to use the following services:

  • vRA 6.2 Self Service portal to provide a common user interface for the private and public cloud
  • Policy Based Governance
  • vRealize Orchestrator to access AWS workflows

More specifically, we are going to use a vRA 6.2 feature named Custom Resources. This new feature gives us the ability to surface all kinds of items in the vRA portal (as custom resources) and manage them.

This, in essence, provides pure oxygen for the most asphyxiated cloud admin, since you can import and manage different things in your cloud portal. :-)

In our use case, we are going to show how to import AWS instances, but it works the same for NSX security groups, AWS security groups, AWS VPCs, and so on.

What we are going to do is use the vRealize Orchestrator AWS plug-in to leverage the AWS instances, show them in the vRA portal, and manage them with the vRealize Orchestrator (vRO) AWS workflows.

 

What are the Pros and cons of this approach?

Pros:

  • It’s easy to setup
  • You can take advantage of the following vRA services:
    • Day 2 actions
    • entitlement mechanism,
    • approval mechanism for actions
    • vRA portal
  • It’s very flexible and you can do a high level of customization
  • You can automate it to import a large number of instances.
  • It works for all kind of items (AWS vPC, other cloud objects, etc..) as long as you have them in vRO!
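To illustrate the automation point, here is a minimal vRO scriptable-task sketch (JavaScript) that enumerates the EC2 instance objects the AWS plug-in already exposes in the vRO inventory. It assumes the plug-in registers them under the AWS:EC2instance type (the same type name we map to a Custom Resource below); it is only a starting point for bulk imports, not a complete import script.

    // Minimal sketch: list every EC2 instance object present in the vRO inventory.
    // Assumption: the AWS plug-in registers its instances under the "AWS:EC2instance" type
    // (the same type name used in the Custom Resource step later in this post).
    var instances = Server.findAllForType("AWS:EC2instance", null);
    System.log("Found " + instances.length + " EC2 instance objects in the vRO inventory");
    for (var i = 0; i < instances.length; i++) {
        // Each element is a plug-in inventory object; logging it prints its string representation.
        System.log("  " + instances[i]);
    }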

 

Cons:

  • It’s not integrated with the IaaS module:
  • No OOTB IaaS lifecycle mgmt (No reclamation process, no lease, etc..)
  • No IaaS custom prop
  • No costs displayed in the item section
  • Basically, it’s a new operational model which differs from what is managed by the IaaS engine
  • It’s not OOTB

 

OK, how does it REALLY work?

The high-level view of the initial setup is the following:

  1. Install and set up the vRO AWS plug-in.
  2. Import the vRO AWS instance object type in the ASD module from the vRA portal
  3. Create a very simple vRO workflow to import existing AWS instances
  4. Import the "import workflow" in ASD as a Service Blueprint
  5. Publish and entitle the Service Blueprint
  6. Import the Day 2 actions (start, stop, terminate, etc.) in ASD as resource actions
  7. Publish and entitle these resource actions

That’s it.

Later on, if you need to add new Day 2 actions, you just need to go through steps 6 and 7.

Steps 6 and 7 will be covered in a future blog post.

Let’s go through this

Step 1: Install and set up the vRO AWS plug-in.

It’s very easy, so we won’t go through this. The plugin is available here with the appropriate documentation.

Step 2: Import the vRO AWS instance type in the ASD module from the vRA portal

In vRealize Automation, go to Advanced Services/Custom Resources. Click on Add.

[Screenshot 1]

In the Orchestrator type field, type AWS:EC2instance, then type the name for the custom resource ("AWS:EC2instance", for example). Click Next.

[Screenshot 2]

Here, you can customize the form that will appear when selecting the AWS instance object. For now, we leave it as it is. Click Add.

[Screenshot 3]

Step 3: Create a very simple vRealize Orchestrator workflow to import existing AWS instances

This Workflow has just one purpose: browse the AWS cloud and select the instance we want to import.

Therefore, it’s going to be a very simple workflow which we are going to create and then make available as a Service in the vRA portal.

So to summarize:

3.1: Create a very simple workflow taking an EC2 instance object as an input and making it an output

3.2 Add this workflow as a Service Blueprint and publish it

3.3 As for any type of service, we will entitle it

That’s it for the setup.

3.4 We will then run it to import our first AWS instance

 

 

3.1 Create a very simple workflow taking an EC2 instance object as an input and making it an output

Log into vRO

[Screenshot 4]

Create your workflow and add an input parameter of type AWS:EC2instance

[Screenshot 4-2]

Add an output parameter of type AWS:EC2instance

[Screenshot 6]

Drag and drop a scriptable task and write the following code inside it:

[Screenshot 7]
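The code in the screenshot is not reproduced here, but as a rough sketch, assuming the input parameter is bound as ec2Instance and the output parameter as ec2InstanceOut (your names may differ), the body of the scriptable task simply passes the selected instance through:

    // Hand the instance the user selected back to the workflow as an output,
    // so that vRA can register it as a Custom Resource item once the request completes.
    // Assumption: "ec2Instance" (IN) and "ec2InstanceOut" (OUT) are the parameter names
    // used in the visual binding shown in the next screenshot; yours may differ.
    System.log("Importing AWS instance: " + ec2Instance);
    ec2InstanceOut = ec2Instance;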

Set the visual binding as follows:

[Screenshot 8]

3.2 Add this workflow as a Service Blueprint and publish it

In the vRA portal, go to Advanced Services, Service Blueprints, and click Add

[Screenshot 9]

Select the Workflow you created, click Next

[Screenshot 10]

Tick “Hide Catalog request information page”, click Next

[Screenshot 11]

Enter the text you want to display and click Next

[Screenshot 12]

For the output, select as shown below. Click Update.

[Screenshot 13]

 

Select the Service Blueprint and publish it

[Screenshot 14]

3.3 As for any type of service, we will entitle it

Go to Administration, Catalog Management.

[Screenshot 15]

Click on Services, select the service category you want to use (here we use the AWS category we created beforehand), and click "Manage Catalog Items"

[Screenshot 16]

Click “+”

[Screenshot 17]

Select your Service Blueprint, click Add

[Screenshot 18]

That’s it for the setup.

3.4 We will then run our import Workflow to import our first AWS instance

Go to Catalog, select your service category (here, AWS), and click Request on the import service

[Screenshot 19]

(Side note: you should see the service with a different icon. It doesn't matter at this point.)

Click Add

[Screenshot 20]

You can then see all the AWS datacenters

[Screenshot 21]

Select the one hosting your instances, select your instance as shown below, and click Submit

[Screenshot 22]

That’s it. Your instance will be available shortly under the Service category AWS

[Screenshot 23]

 

With a few more instances:

[Screenshot 24]

In a future blog post, we will see how to add Day 2 actions to manage these instances.

Conclusion

Several things were shown in this article:

  • How to set up vRA to handle AWS instances
  • How powerful the extensibility module of vRA is

This extensibility module gives you a great toolbox to achieve your goals, even those the solution can't deliver OOTB. The Orchestrator engine offers a breadth of powerful workflows that you can expose to your end users in an intelligent and efficient manner, saving you time and proving to your customers your ability to bridge the private cloud with the public cloud.

Finally, managing your public workloads from the same portal as your private ones is not the end of the journey. You will soon face important requests from your customers to monitor the performance and availability of their applications, as well as to help them manage the financial side of the cloud.

vRealize Operations and vRealize Business will be the solutions that solve these challenges for you. In short, you will (v)Realize the Hybrid Cloud :-)