
Category Archives: Cloud Computing

Hybrid Cloud Manager Deployment Considerations

By Michael Francis

VMware Hybrid Cloud Manager™ is VMware’s management extension for VMware vSphere® and VMware vCloud® Air™. Hybrid Cloud Manager aims to simplify the implementation of a true hybrid cloud.

My Definition of Hybrid Cloud

What is hybrid cloud? In my mind, hybrid cloud means extending my on-premises estate into a data center facility owned and provided by a third party. The key to this definition is in the word “extension.” A true extension means I can retain my existing operating model, security model, and provisioning systems and seamlessly migrate applications from my on-premises environment to my provider’s platform, just as I do within my on-premises environment.


Hybrid Cloud and Hybrid Cloud Manager

By Michael Francis

Disclaimer: This blog is not a technical deep dive on Hybrid Cloud Manager; it discusses the product’s components and the design decisions around them. It assumes the reader has knowledge of the product and its architecture.

Recently, I have been involved with the design and deployment of Hybrid Cloud Manager for some customers. It has been a very interesting exercise to work through the design and the broader implications.

Let’s start with a brief overview of Hybrid Cloud Manager. Hybrid Cloud Manager is composed of a set of virtual appliances that reside both on-premises and in vCloud Air. The product is divided into a management plane, control plane, and data plane.

  • The management plane is instantiated by a plugin in the vSphere Web Client.
  • The control plane is instantiated by the Hybrid Cloud Manager virtual appliance.
  • The data plane is instantiated by a number of virtual appliances – the Cloud Gateway, the Layer 2 Concentrator, and the WAN Optimization appliance.

The diagram below illustrates these components and their relationships to each other on-premises and the components in vCloud Air.


Figure 1 – Logical Architecture Hybrid Cloud Manager

The Hybrid Cloud Manager provides virtual machine migration capability, which is built on two functions: virtual machine replication[1] and Layer 2 network extension. The combination of these functions provides an organization with the ability to migrate workloads without the logistical and technical issues traditionally associated with migrations to a public cloud; specifically, the outage time to copy on-premises virtual machines to a public cloud, and virtual machine re-addressing.

During a recent engagement that involved the use of Hybrid Cloud Manager, it became very obvious that even though this functionality simplifies the migration, it does not diminish the importance of the planning and design effort prior to any migration exercises. Let me explain.

Importance of Plan and Design

When discussing a plan, I am really discussing the importance of a discovery exercise that deeply analyses on-premises virtual workloads. This is critical: because Hybrid Cloud Manager creates such a seamless extension of the on-premises environment, we need to understand:

  • Which workloads will be migrated
  • Which networks the workloads reside on
  • What compute isolation requirements exist
  • How and where network access control is instantiated on-premises

Modifying a virtual network topology in the public cloud can be a disruptive operation, just as it is in the data center. Stretching Layer 2 network segments into the public cloud and migrating workloads out of the data center increase the number of networks, and the complexity of their topology, in the public cloud. The more planning that is done early, the less likely it is that services will need to be disrupted later.

One of the constraints in the solution revolves around stretching Layer 2 network segments. A Layer 2 network segment located on-premises can be ‘stretched’ to only one virtual data center in vCloud Air. This has implications for which workloads exist on a given network segment and which vCloud Air virtual data center will host the workloads on that segment. It obviously influences the creation of virtual data centers in vCloud Air, and the principles defined in the design that determine when an additional virtual data center is stood up rather than growing an existing one.

Ideally, an assessment of on-premises workloads would be performed prior to any hybrid cloud design effort. This assessment would be used to size the vCloud Air virtual data centers, and it would uncover the resource isolation requirements that drive workload separation into multiple virtual data centers. The requirement to separate test/development workloads from production workloads with a ‘hard’ partition is one example of a requirement that would drive the virtual data center design.

During this discovery we would also identify which workloads reside on which networks, and which networks require ‘stretching’ into vCloud Air. This would surface any issues we may face due to the constraint that we can only stretch a Layer 2 segment into one virtual data center.[2] This assessment really forms the ‘planning’ effort in this discussion.

Design Effort

The design effort involves designs for vCloud Air and Hybrid Cloud Manager. I believe the network design of vCloud Air is a critical element. We need to determine:

  • Whether to use dynamic or static routing
  • The subnet design and its relationship to route summarization
  • The routing paths to the Internet
  • The estimated throughput required for any virtual routing devices
  • Which other virtual network services are required
  • Whether to use the egress optimization functionality of Hybrid Cloud Manager
  • And finally, where network security access points are required

The other aspect is the design of the virtual compute containers – the virtual data centers in vCloud Air. The design for vCloud Air should define the expected virtual data center design over the lifecycle of the solution: the compute resources assigned to each virtual data center initially, and over time as anticipated growth is factored in. As use grows, the throughput requirements on the networking components in vCloud Air will increase, so the design should provide guidance on when the virtual routing devices will need to be resized.

The vCloud Air platform is an extension of the on-premises infrastructure. It is a fundamental expectation that operations teams have visibility into the health of the infrastructure, and that capacity planning of infrastructure is taking place. Similarly, there is a requirement to ensure that the vCloud Air platform and associated services are healthy and capacity managed. We should be able to answer the question, “Are my virtual data center routing devices of the right size, and is their throughput sufficient for the needs of the workloads hosted in vCloud Air?” Ideally we should have a management platform that treats vCloud Air as an extension to our on-premises infrastructure.

This topic could go much deeper, and there are many other considerations as well, such as, “Should I place some management components in vCloud Air?” or, “Should I have a virtual data center in vCloud Air specifically assigned to host these management components?”

I believe many people today take an Agile approach to their deployment of public cloud services, such as networking and virtual compute containers. But if you are implementing a hybrid interface such as the one offered by Hybrid Cloud Manager, there is real benefit in taking a longer-term view of the design of vCloud Air services, to minimise the risk of painting ourselves into a corner in the future.

Some Thoughts on Hybrid Cloud Manager Best Practices

Before wrapping up this blog, I wanted to provide some thoughts on some of the design decisions regarding Hybrid Cloud Manager.

In a recent engagement we considered best practices for placement of appliances, and we came up with the following design decisions.

Design Decision 1

Design Decision 2

Design Decision 3

Key Takeaways

The following are the key takeaways from this discussion:

  • Because Hybrid Cloud Manager provides a much more seamless extension of the on-premises data center, deeper thought and consideration need to be put into the design of the vCloud Air public cloud services.
  • Effectively designing vCloud Air services for hybrid cloud requires a deep understanding of the on-premises workloads and how they will leverage the hybrid cloud extension.
  • Network design, and the operational changes required for ongoing network access control, need to be considered.
  • Management and monitoring of the vCloud Air services, as an extension of the data center, need to be included in the scope of a hybrid cloud solution.

[1] This leverages the underlying functionality of vSphere Replication, but is not a full vSphere Replication architecture.

[2] This constraint could be overcome; however, the solution would require configurations that would make other elements of the design sub-optimal; for example, disabling the use of egress optimization.


Michael Francis is a Principal Systems Engineer at VMware, based in Brisbane.

VMware App Volumes™ with F5’s Local Traffic Manager

By Dale Carter, Senior Solutions Architect, End User Computing & Justin Venezia, Senior Solutions Architect, F5 Networks

App Volumes™—a result of VMware’s recent acquisition of Cloud Volumes—provides an alternative, just-in-time method for integrating and delivering applications to virtualized desktop- and Remote Desktop Services (RDS)-based computing environments. With this real-time application delivery system, applications are delivered by attaching virtual disks (VMDKs) to the virtual machine (VM) without modifying the VM – or the applications themselves. Applications can be scaled out with superior performance, at lower costs, and without compromising the end-user experience.

For this blog post, I have collaborated with Justin Venezia – one of my good friends and a former colleague now working at F5 Networks. Justin and I will discuss ways to build resiliency and scalability within the App Volumes architecture using F5’s Local Traffic Manager (LTM).

App Volumes Nitty-Gritty

Let’s start out with the basics. Harry Labana’s blog post gives a great overview of how App Volumes works and what it does. The following picture depicts a common App Volumes conceptual architecture:

App Volumes Conceptual Architecture

Basically, App Volumes does a “real time” attachment of applications (read-only and writable) to virtual desktops and RDS hosts using VMDKs. When the App Volumes Agent checks in with the manager, the App Volumes Manager (the brains of App Volumes) will attach the necessary VMDKs to the virtual machines through a connection with a paired vCenter. The App Volumes Agent manages the redirection of file system calls to AppStacks (read-only VMDKs of applications) or Writeable Volumes (a user-specific writeable VMDK). Through the Web-based App Volumes Manager console, IT administrators can dynamically provision, manage, or revoke application access. Applications can even be dynamically delivered while users are logged into the RDS session or virtual desktop.

The App Volumes Manager is a critical component for administration and Agent communications. By using F5’s LTM capabilities, we can intelligently monitor the health of each App Volumes Manager server, balance and optimize the communications for the App Volume Agents, and build a level of resiliency for maximum system uptime.

Who is Talking with What?

As with any application, there’s always some back-and-forth chatter on the network. Besides administrator-initiated actions on the App Volumes Manager using a web browser, there are four other events that will generate traffic through the F5 BIG-IP; these four events are very short, quick communications. There aren’t any persistent or long-term connections kept between the App Volumes Agent and Manager.

When an IT administrator assigns an application to a desktop/user that is already powered on and logged in, the App Volumes Manager talks directly with vCenter and attaches the VMDK. The Agent then handles the rest of the integration of the VMDK into the virtual machine. In this scenario, the Agent never communicates with the App Volumes Manager.

Configuring Load Balancing with App Volume Managers

Setting up the load balancing for App Volumes Manager servers is pretty straightforward. Before we walk through the load-balancing configuration, we’ll assume your F5 is already set up on your internal network and has the proper licensing for LTM.

Also, it’s important to ensure the App Volume agents will be able to communicate with the BIG-IP’s virtual IP address/FQDN assigned to App Volumes Manager; take the time to check routing and access to/from the agents and BIG-IP.

Since the App Volumes Manager works with both HTTP and HTTPS, we’ll show you how to load balance App Volumes using SSL termination. We’ll be doing SSL bridging: traffic is encrypted from the client to the F5, decrypted there, then re-encrypted and sent on to the App Volumes Manager server. This method allows the F5 to use advanced features—such as iRules and OneConnect—while maintaining a secure, end-to-end connection.

Click here to get a step-by-step guide on integrating App Volumes Manager servers with F5’s LTM. Here are some prerequisites you’ll need to consider before you start:

  • Determine what the FQDN will be and what virtual IP address will be used.
  • Add the FQDN and virtual IP into your company’s DNS.
  • Create and/or import the certificate that will be used; this blog post does not cover creating, importing, and chaining certificates.

The certificate should contain the FQDN that we will use for load balancing. We can actually leave the default certificates on the App Volumes Manager servers. BIG-IP will handle all the SSL translations, even with self-signed certificates created on the App Volumes servers. A standard 2,048-bit web server certificate (with private key) will work well with the BIG-IP; just make sure you import and chain the Root and Intermediate Certificates with the Web Server Certificate.

Once you’re done running through the instructions, you’ll have some load-balanced App Volumes Manager servers!

Again, BIG thanks to Justin Venezia from the F5 team – you can read more about Justin Venezia and his work here.


Dale is a Senior Solutions Architect and a member of the CTO Ambassadors. He focuses on the End User Computing space, where he has become a subject matter expert in a number of VMware products. Dale has more than 20 years’ experience working in IT, having started his career in Northern England before moving to Spain and finally the USA. He currently holds a number of certifications including VCP-DV, VCP-DT, VCAP-DTD and VCAP-DTA.

For updates, you can follow Dale on Twitter @vDelboy.

Justin Venezia is a Senior Solutions Architect for F5 Networks

MomentumSI Brings New DevOps and Cloud Professional Services to VMware

By now, it is common knowledge that VMware has evolved beyond server virtualization and is a leading Private Cloud, Hybrid Cloud, and End-User Computing provider. To enable the transformational business outcomes that these technologies support, we have continued to invest in building the best Professional Services team in the industry.

I am excited to share that in Q4 2014, VMware acquired MomentumSI, a leading IT consultancy that expands our capabilities to help our customers transform their IT processes and infrastructures into strategic advantage.

MomentumSI is a pure-play Professional Services business that served many of the same Fortune 500 companies that VMware does today. The company focused on four key solution areas:

  • Building DevOps capabilities for customers, leveraging technologies such as Docker, Puppet, Chef, Jenkins, Salt and Ansible
  • Architecting and implementing OpenStack Private Clouds
  • Enabling Hybrid Cloud solutions, with an emphasis on AWS and vCloud Air
  • Modernizing applications for cloud environments

The MomentumSI team has joined the Americas Professional Services Organization (PSO).  Together, the combined practice will assist our clients in achieving business results through IT transformation.

So with that, we welcome the MomentumSI team to the VMware family and look forward to expanding the value that we can deliver to our customers.

For more information on the services MomentumSI is bringing to VMware, please visit http://page.momentumsi.com/vmware.

Bret

Begin Your Journey to vRealize Operations Manager

By Brent Douglas

In early December, VMware launched an exciting new array of updates to its products. For some products, this update was a refinement of already widely used functionality and capabilities. For other products, the December release marked a new direction and new path forward. One such product is vRealize Operations Manager.

With its acquisition of Integrien’s patented real-time performance analytics solution in August 2010, VMware added a powerful tool to its arsenal of virtualization management solutions. This tool, vCenter Operations Manager, enabled customers to begin managing beyond “what my environment is doing now” and into “what my environment will be doing in 30 minutes—and beyond?” In essence, with vCenter Operations Manager, customers gained a tool that could predict―and ultimately prevent―the phone from ringing.

Since August 2010, vCenter Operations Manager received bug fixes, regular updates, and new features and capabilities. Even with those, the VMware product designers and engineers knew they could produce a new version of the product that captured and extended the capabilities of vCenter Operations Manager. On December 9, VMware released that tool—vRealize Operations Manager.

In many respects, vRealize Operations Manager is a new product from the ground up. Due to the differences between vCenter Operations Manager v5.x and vRealize Operations Manager v6.x, current users of vCenter Operations Manager cannot simply apply a v6.x update to existing environments. For customers with little historical data or default policies, the best course forward may be to just install and begin using vRealize Operations Manager. For other customers, with deep historical data and advanced configurations/policies, the best path forward is likely a migration of existing data and configuration information from their vCenter Operations Manager v5.x instance.

A full discussion of migration planning and procedures is available in the vRealize Operations Manager Customization and Administration Guide. This guide also outlines many common vCenter Operations Manager scenarios and suggests migration paths to vRealize Operations Manager.

Important note: In order to migrate data and/or configuration information from an existing vCenter Operations Manager instance, the instance must be at v5.8.1 at a minimum, and preferably v5.8.3 or higher.

Question 1: Should any portion of my existing vCenter Operations Manager instance(s) be migrated?

VMware believes you are a candidate for a full migration (data and configuration information) if you can answer “yes” to any one of the following:

  • Have you operationalized capacity planning in vCenter Operations Manager 5.8.x?
    • Actively reclaiming waste
    • Reallocating resources
  • Have you operationalized vCenter Operations Manager for performance and health monitoring?
  • Do you act upon the performance alerts that are generated by vCenter Operations Manager?
  • Is any aspect of data in vCenter Operations Manager feeding another production system?
    • Raw metrics, alerts, reports, emails, etc
  • Do you have a company policy to retain monitoring data?
    • Does your current vCenter Operations Manager instance fall into this category (e.g., it’s running in TEST)?

VMware believes you are a candidate for a configuration-only migration if you answer “yes” to any one of the following:

  • Are you happy with your current configuration?
    • Dashboards
    • Policies
    • Users
    • Super Metrics

— AND —

  • You do not need to save the data you have collected
    • Running in a test environment or proof-of-concept you have refined and find useful
    • Not really using the data yet

If you answered “no” to these questions, you should install and try vRealize Operations Manager today. You are ready to go with a fresh install without migrating any existing data or configuration information.

Question 2: If some portion of an existing vCenter Operations Manager instance is to be migrated, who should perform the migration?

vRealize Operations Manager is capable of migrating existing data and configuration information from an existing vCenter Operations Manager instance. However, complicating factors may require an in-depth look by a VMware services professional to ensure a successful migration. The following table outlines some of the complicating factors and suggests paths forward.

Complicating Factors and Suggested Paths Forward

That’s it! With a bit of upfront planning you can be well on your journey to vRealize Operations Manager! The information above addresses the “big hitters” for planning a migration to vRealize Operations Manager from vCenter Operations Manager. As mentioned, a full discussion of migration planning and procedures is available in the vRealize Operations Manager Customization and Administration Guide.

On a personal note, I am excited about vRealize Operations Manager. Although vCenter Operations Manager served VMware and its customers well for many years, it is time for something new and exciting. I encourage you to try vRealize Operations Manager today. This post represents information produced in collaboration with David Moore, VMware Professional Services, and Dave Overbeek, VMware Technical Marketing team. I thank them for their contributions and continued focus on VMware and its customers.


Brent Douglas is a VMware Cloud Technical Solutions Architect

Have a Chat with Your SDDC (Automating the Management of the Management Portal, Part 2)

By Andrea Siviero, VMware Senior Solutions Architect

In my recent post “Look Mom, no Mouse!” I introduced a new way to interact with your SDDC without a mouse. Now, using a command line with simple mnemonic instructions, you can “talk” with your SDDC to automate the management of the management portal.

VMware has just announced the general availability (GA) of vRealize CloudClient 3.0 (http://developercenter.vmware.com/web/dp/tool/cloudclient/3.0.0).

So now that it’s GA, I’m excited to explore with you more deeply how to use CloudClient, and also to share its benefits.

What commands do I want to show you today?

  • Create a brand new tenant and service catalog and entitle them to administrators
  • Import an existing blueprint into the brand new CloudClient-made tenant
  • Deploy blueprints from the catalog of services

So wake up your SDDC — it’s time for a lovely chat. 🙂

Log in and create a tenant
CloudClient allows you to log in in an interactive way:

CloudClient> vra login userpass --server vcac-l-01a.corp.local --tenant pse --user siviero@gts.local --password ****** --iaasUser corp\\Administrator --iaasPassword ******

Alternatively, you can fill in all the details in the CloudClient.properties file; just type this command to create an empty configuration file:

CloudClient> login autologinfile

NOTE: IaaS credentials need to be passed with a double backslash, i.e., corp\\Administrator


Figure 1: Login

Create a new tenant, identity-store and admins
When you are logged in as administrator@vsphere.local, creating a tenant is just three commands away. They set the name of the tenant, how users will be authenticated (AD or LDAP), and who the administrators will be:

CloudClient> vra tenant add --name "PSE" --url "PSE"
CloudClient> vra tenant identitystore add --name "PSE AD" --type AD --url ldap://controlcenter.corp.local --userdn "CN=Administrator,CN=Users,DC=corp,DC=local" --password "****" --groupbasedn "OU=Nephosoft QE,DC=corp,DC=local" --tenantname "PSE" --alias "gts.local" --domain "corp.local"
CloudClient> vra tenant admin update --tenantname "PSE" --addtenantadmins siviero@gts.local --addiaasadmins admin1@gts.local,admin2@gts.local

Figure 2: Create Tenant

Create a fabric group and business group and assign resources
Now let’s note the returned IDs so they can be used in further commands. (They can also be captured by a script; a minimal sketch of this follows Figure 5.)

CloudClient> vra fabricgroup add --name "GTS Fabric Group" --admins "admin1@gts.local,admin2@gts.local"

Figure 3: Create Fabric Group

Search for the suitable compute resources. We will select the “Cluster Site A”:

CloudClient> vra computeresource list

Figure 4: Compute Resources

Let’s finalize the “trivial” steps of assigning the compute resources to the fabric group and creating a business group with a pre-determined machine prefix.

CloudClient> vra fabricgroup update --id f8bbfcd5-79c0-43db-a382-2473b91862e6 --addcomputeresource c47e3332-bdef-4391-9f93-269dcf14f2c5
CloudClient> vra machineprefix add --prefix gts- --numberOfDigits 3 --nextNumber 001
CloudClient> vra businessgroup add --name "GTS Business Group" --admins "admin1@gts.local,admin2@gts.local" --adContainer "cn=computers" --email admin2@gts.local --description "GTS Group" --machinePrefixId 1c1d20c3-ba91-443e-beb0-b9b0728ee29c

Figure 5: Assign Resources
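As noted above, the IDs returned by these commands can be captured by a script instead of being copied from the screen. Below is a minimal sketch that drives CloudClient non-interactively from Python. The install path, the availability of --format JSON on the list command, and the exact JSON field names are assumptions for illustration, so adjust them to match what your CloudClient version actually accepts and returns.

# Minimal sketch: capture the fabric group ID from CloudClient output and reuse
# it in the next command, instead of copying it by hand.
# Assumptions (hypothetical): CloudClient lives at CLOUDCLIENT, it is already
# authenticated (e.g., via the CloudClient.properties autologin file), the list
# command accepts --format JSON, and the JSON objects expose "id" and "name".
import json
import subprocess

CLOUDCLIENT = "/opt/cloudclient/bin/cloudclient.sh"   # hypothetical install path

def run_cloudclient(*args):
    """Run a single CloudClient command non-interactively and return its stdout."""
    result = subprocess.run([CLOUDCLIENT, *args],
                            capture_output=True, text=True, check=True)
    return result.stdout

# Look up the fabric group created earlier and reuse its ID in the update call.
groups_json = run_cloudclient("vra", "fabricgroup", "list", "--format", "JSON")
fabric_group_id = next(g["id"] for g in json.loads(groups_json)
                       if g["name"] == "GTS Fabric Group")

run_cloudclient("vra", "fabricgroup", "update",
                "--id", fabric_group_id,
                "--addcomputeresource", "c47e3332-bdef-4391-9f93-269dcf14f2c5")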

Here comes the fun: import/export blueprints
Until now, the CloudClient commands we have used merely reproduce what normally happens in the GUI.

Here is where its real power shows. Let’s assume you have already created a good blueprint in a tenant, with a blueprint profile, and you just want to “copy and paste” it to another tenant. You cannot do this in the GUI — you would need to recreate it manually — but here comes the CloudClient magic: log in to the source tenant and export the blueprint in JSON format:

CloudClient> vra iaas blueprint list

CloudClient> vra iaas blueprint detail --id 697b8302-b5a9-4fbf-8544-2f19d4e8a220 --format JSON --export CentOS63.json

Figure 6: Export Blueprint to JSON file

Now log back in to the brand new PSE tenant and import the blueprint like this:

CloudClient> vra iaas blueprint add vsphere --inputfile CentOS63.json --name "CentOS 6.3 x64 Base" --cpu 1 --memory 512

Figure 7: Import Blueprint from JSON

Request the blueprint from the catalog
The remaining steps are as trivial as before: create a service, an entitlement, and actions, and assign the blueprint to the catalog. Reading the documentation will help you get familiar with these.

Note: “Reservations” verbs are not yet implemented, so at some point you need to use the GUI to complete the process.

So please let me fast forward to the final moment when you can successfully deploy a blueprint and see it live. 🙂

CloudClient> vra catalog list

Figure 8: Listing the Catalog

Using the ID returned from catalog list, make the request:

CloudClient> vra catalog request submit --id c8a850d2-a089-4afb-b5d8-b298580cf9f9 --groupid 2c220523-60bb-419e-80c8-c5bfd81aa805 --reason fun

Figure 9: Checking the Requests

And here it is, our little VM, happy and running. 🙂


Figure 10: Happy and Running

The Occam’s Razor principle: “Entities must not be multiplied beyond necessity.”

In my humble opinion: please don’t waste a lot of time doing everything (coffee/tea?) from a command line. vRealize Automation 6.1 has a nicely improved UI and is very intuitive to work with.

Keep the solutions as simple as possible and use vRealize CloudClient when some real “black magic” is needed.


Andrea Siviero is an eight-year veteran of VMware and a senior solutions architect in Professional Services Engineering (PSE) for the Software-Defined Datacenter (SDDC), a part of the Global Technical Solutions (GTS) team. Prior to PSE, Andrea spent three years as a pre-sales systems engineer and three years as a post-sales consultant architect for cloud computing and desktop virtualization solutions, focusing on very large and complex deployments, especially for service providers in the finance and telco sectors.

vCAC 6 Custom Properties, Build Profiles and Property Dictionary Simplified

By Eiad Al-Aqqad

This post originally appeared on Eiad’s Virtualization Team blog.

vCloud Automation Center offers a lot of built-in extensibility features to help you achieve your desired result while minimizing the amount of coding required. Using vCAC custom properties, build profiles, and the property dictionary is just one example of how you can customize the product, minimize coding, and customize the input form. As the property dictionary seems to be the most missed or misunderstood feature of vCAC, followed by build profiles and custom properties, I will try to simplify the explanation of these great features as much as possible. At the end of the article, I will point out more resources for in-depth information on each of these features.

vCAC Custom Properties
Custom properties are the building blocks for build profiles and the property dictionary. VMware documentation defines custom properties as:

“VMware vCloud Automation Center™ custom properties allow you to add attributes of the machines your site provisions, or to override their standard attributes.”

What that means is that vCloud Automation Center uses particular variables (custom properties) that contain values vCAC needs during machine provisioning (such as the machine name, machine IP address, port group to use, and so on). vCAC exposes this information as custom properties that you can query or edit to overwrite the default values with a specific value or with user input. This is a very powerful tool, as you can shape the request form to ask the user for input (not required by the default request form) and act upon it without doing any coding. You can also create your own custom properties to use with your own custom workflows.

Let’s look at a quick example of using vCAC custom properties. The image below shows the default blueprint/VM request form in vCAC:

Default Blueprint Request Form

As you can see, the default VM request form does not ask for a machine hostname or IP address. What if you wanted to allow the user to choose the VM hostname or IP address? You can do that using custom properties, and your request form will look like the screen below:

VCAC Custom Properties

In the above screenshot, I have used the Hostname and VirtualMachine.NetworkN.Address custom properties to allow the user to provide the desired VM hostname and IP address that vCAC will use when creating the VM. I did this by going to Infrastructure => Blueprints => Properties, then adding the two custom properties as shown in the image below.

VCAC Custom Hostname Property

While the above is using existing vCAC custom properties that vCAC uses when deploying a VM, you can always create your own custom properties to pass to your own workflow or just to track information within the request. For a list of custom properties available in vCAC 6, see: vCloud Automation Center 6 Custom Property Reference.

vCAC Build Profiles
A build profile is simply a collection of custom properties under a single title. Imagine you have 20 different custom properties that you need to include with every Windows blueprint. It would be nice to bundle them all in a build profile, then go to these blueprints and assign a single build profile instead of assigning 20 different custom properties to every Windows blueprint. This saves work and provides better consistency. You can create a build profile by going to Infrastructure => Blueprints => Build Profiles => New Build Profile, then adding the desired custom properties to that build profile as shown in the image below.

Creating a Build Profile

The next step is to add that build profile to your blueprint as per the image below.

Add Build Profile to Blueprint

vCAC Property Dictionary
I am not sure why the property dictionary seems to be the most misunderstood or missed feature of vCAC. It is quite simple to use and can unleash a lot of power. Allowing users to provide values for custom properties, as shown in the previous examples, is quite useful, but most of the time you want to limit the user’s choices using drop-down menus or check boxes. The property dictionary is all about enabling you to do just that.

The vCAC property dictionary lets you define characteristics of custom properties that tailor how they are displayed in the user interface, as in the following examples:

  • Associate a property name with a user control, such as a check box or drop-down menu.
  • Specify constraints such as minimum and maximum values or validation against a regular expression.
  • Provide descriptive display names for properties or add label text.
  • Group sets of property controls together and specify the order in which they appear.
  • Create a relationship between different controls, where, for example, a location drop-down menu can update the storage and network drop-down menus to show only values that are valid for that location.

To see how useful the property dictionary can be, let’s take an example where we want to create the drop-down menus illustrated in the diagram below:

Drop Down Menu Sample

The goal of this exercise is to create three drop-down menus that ask the user for the location, storage path, and network path to use. Let’s ignore the relationship between the different menus for now and focus on just creating them. To create the required property dictionary entries, go to: Infrastructure => Blueprints => Property Dictionary.

For each drop-down menu you want to create, repeat the steps below. In this example, I will create the location drop-down menu:

  1. Click New Property Definition, then fill in the information as shown in the screenshot below. Note that the name must match the custom property name you want to use.

Location Property Definition

  2. Click the green check mark to save your property definition.
  3. Under Property Attributes, click Edit.
  4. Click New Property Attributes, and then fill in the Property Attributes as shown in the image below.

Property Attribute Drop Down

  5. Repeat the above steps for storage and network as shown in the images below.

Property Definitions

Network Property

Storage Property Attribute

  6. Now that you have all the required property definitions and property attributes created, let’s create a property layout, which organizes how these drop-down menus are ordered when shown to the user. I wanted them ordered as follows: Location, Storage, Network. To do this, click New Property Layout and fill in the information as shown in the screenshot below:

New Property Dictionary Layout

  7. Under Property Layout > Property Instances, click Edit, and organize your property instances as shown in the image below.

Organize Property Instances

  8. Let’s create a build profile that includes all the custom properties involved in our property dictionary example, as shown in the image below.

Build Profile Property Dictionary Sample

  9. All that is left is to add this build profile to your blueprint, as shown below.

Adding the Property Dictionary Build Profile to the Blueprint

  10. Now let’s check how the input form of our blueprint looks:

vCAC Property Dictionary in Action

Notice that in the above example, the three drop-down menus created for location, storage, and network operate independently. There is no relationship between them; in other words, choosing a particular location does not filter which options you have for storage or network. The capability to do such filtering is part of property dictionary relationships, which I cover in two follow-up posts.


Eiad Al-Aqqad is a consulting architect within the SDDC Professional Services practice. He has been an active consultant using VMware technologies since 2006. He is a VMware Certified Design Expert (VCDX#89), as well as an expert in VMware vCloud, vSphere, and SRM. Read more from Eiad at his blog, Virtualization Team, and follow him on Twitter @VirtualizationT.

Working with VMware Just Gets Better

By Ford Donald, Principal Architect, GTS PSE, VMware

Imagine someone gives you and a group of friends a box of nuts and bolts and a few pieces of metal and tells you to build a model skyscraper. You might start putting the pieces together and end up with a beautiful model, but it probably won’t be the exact result that any of you imagined at the beginning. Now imagine if someone hands you that same box, along with a blueprint and an illustration of the finished product. In this scenario, you all work together to a prescribed end goal, with few questions or disagreements along the way. Think about this in the context of a large technical engagement, for example a software-defined data center (SDDC) implementation. Is it preferable to make it up as you go along, or to start with a vision for success and achieve it through a systematic approach?

Here at VMware, we’re enhancing the way we engage with customers by providing prescriptive guidance, a foundation for success, and a predictable outcome through the SDDC Assess, Design and Deploy Service. As our product line has matured, our consulting approach is maturing along with it. In the past, we have excelled at the “discovery” approach, where we uncover the solution through discussion, and every customized outcome meets a unique customer need. We’ve built thousands of strong skyscrapers that way, and the skill for discovering the right solution remains critical within every customer engagement. Today we bring a common starting point that can be scaled to any size of organization and adapted up the stack or with snap-ins according to customer preference or need. A core implementation brings a number of benefits to the process, and to the end result.

A modular technical solution

Think of the starting point as a blueprint for the well-done data center. With our approach, the core elements of SDDC come standard, including vSphere, vCenter Operations, vCenter Orchestrator, and software-defined networking through vCNS. This is the clockwork by which the SDDC from VMware is best established, and it lays the foundation for further maturity evolutions to Infrastructure Service and Application Service. The core “SDDC Ready” layer is the default, providing everything you need to be successful in the data center, regardless of whether you adopt the other layers. Beyond that, to meet the unique needs of customers, we developed “snap-ins” as enhancements or upgrades to the core model, which include many of our desirable, but not necessarily included-by-default, assets such as VSAN and NSX.

The Infrastructure Service layer builds on the SDDC by establishing cloud-based metaphors via vCloud Automation Center and other requirements for cloud readiness, including a service portal, catalog-based consumption, and reduction of administrative overhead. The Application Service layer includes vCloud Application Director and elevates the Infrastructure layer with application deployment, blueprinting and standardization.

From our experience, customers demand flexibility and customization. In order to meet that need, we built a full menu of Snap-ins. These snap-ins allow customers to choose any number of options from software-defined storage, NSX, compliance, business continuity & disaster recovery (BCDR), hybrid cloud capabilities and financial/cost management. Snap-ins are elemental to the solution, and can be added as needed according to the customer’s desired end result.

Operational Transformation Support

Once you’ve adopted a cloud computing model, you may want to consider organizational enhancements that take advantage of the efficiency gained by an SDDC architecture. As we work with our customers in designing the technical elements, we also consult with our customers on the operational processes. Changing from high administrative overhead to low overhead, introducing new roles, defining what type of consumer model you want to implement – our consultants help you plan and design your optimal organization to support the cloud model.

The beauty of this approach shines in its ability to serve both green field and brown field projects. In the green field approach, where a customer wants the consultants to take the reins and implement top to bottom, the approach serves as a blueprint. In a brown field model, where the customer has input and opinions and desires integration and customization, the approach can be adapted to the customer’s environment, relative to the original blueprint.

So whether you’re building your skyscraper from the ground up, or remodeling an existing tower, the new SDDC Assess, Design and Deploy Service provides an adaptable model, with a great starting point that will help you get the best out of your investment.

Stay tuned for an upcoming post that gives you a look under the hood of the work stream process for implementing the technical solution.


Ford Donald is a Principal Architect, a member of Professional Services Engineering (PSE) within the Global Technical Solutions (GTS) team, and a seven-year veteran of VMware. Prior to PSE, Ford spent three years as a pre-sales cloud computing specialist focusing on very large/complex virtualization deployments, including the VMware sales cloud known as vSEL. Ford also served on the core team for VMworld Labs and as a field SE.

 

Quick Tip: Change the Password on the vCNS Edge

By Martijn Baecke, VMware Senior Consultant

Deploying and managing a vCNS Edge device with vCloud Director is a pretty easy task. You just spin up the appliance, integrate it with vCenter and then hook it up to vCloud Director. Piece of vCAC!

I was trying to dig deeper into the structure of how vCNS Edge devices work and wanted to log in to the Edge device itself. The only problem was that I couldn’t log in to the console of the Edge appliance that was deployed by vCNS Manager on my virtual infrastructure. Thankfully, the vCNS Manager interface provides you with the ability to reset the password.

To reset the password and be able to log into the vCNS Edge device:

1. Log into the vCNS web interface.
2. At “View:” in the left corner, select Edges.
3. Select the Edge Gateway you want to log into.
4. Click Actions and select Change CLI Credentials.

This allows you to set the password for the “admin” account. With these credentials you can log in to the vCNS Edge device.


Martijn Baecke is a Senior Consultant for VMware Professional Services in Northern EMEA. He has 10+ years’ experience in advising and consulting with large enterprise companies on IT infrastructure. He is a VMware Certified Design eXpert (VCDX #103) and you can find more insights on his personal blog, Think©Loud.

Cloud Automation Requirements from the Field

By Jung Hwang, Enterprise Solutions Architect, VMware

IT organizations adopt private cloud solutions for two main reasons: to gain agility and to improve the efficiency of the services they offer. VMware’s vCloud Automation Center (vCAC) solution offers workload lifecycle capabilities that help IT organizations automate and centrally manage IT tasks that were traditionally done manually. Although vCAC has robust out-of-the-box (OOTB) capabilities that address many of these manual processes, enabling business and IT logic on top of the OOTB capabilities has helped many of our customers reach their goals and realize the true value of automation. Below we’ll explore three requirements we have seen enabled on top of the vCAC OOTB capabilities.

Generate Custom Host Names
Although this seems to be a straightforward process, maintaining consistent host names can be challenging, especially in the private cloud environment where the virtual machine provisioning is automated without any IT staff’s involvement.

Within vCAC, administrators have some ability to add a prefix and a suffix to host names, but many customers need more custom fields, such as the environment (Prod/Dev/QA), type (Application/Web/DB), location (NA/EMEA), and incremental numbers (00X). (For example, a host name could be PROD-SQL-NA-001.) Every customer has a unique naming standard – because of this, VM host name assignment should be automated in vCAC to further minimize the manual intervention.
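To make this concrete, here is a minimal sketch of how such a naming standard could be generated programmatically. It is illustrative only — in practice this logic would typically live in a vCenter Orchestrator workflow or other extensibility hook invoked by vCAC during provisioning — and the allowed values and in-memory counter are assumptions for the example.

# Minimal sketch of a custom host name generator for a standard such as
# PROD-SQL-NA-001. Illustrative only: the allowed values are assumptions, and
# the in-memory counter would be replaced by a persistent sequence in practice.
from itertools import count

ENVIRONMENTS = {"PROD", "DEV", "QA"}
TYPES = {"APP", "WEB", "SQL"}
LOCATIONS = {"NA", "EMEA"}

_counters = {}  # (environment, vm_type, location) -> iterator of integers

def next_host_name(environment, vm_type, location):
    """Return the next host name for the given environment, type, and location."""
    key = (environment.upper(), vm_type.upper(), location.upper())
    if key[0] not in ENVIRONMENTS or key[1] not in TYPES or key[2] not in LOCATIONS:
        raise ValueError("unknown environment, type, or location")
    counter = _counters.setdefault(key, count(1))
    return "{}-{}-{}-{:03d}".format(key[0], key[1], key[2], next(counter))

print(next_host_name("Prod", "SQL", "NA"))   # PROD-SQL-NA-001
print(next_host_name("Prod", "SQL", "NA"))   # PROD-SQL-NA-002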

Active Directory Organization Unit (OU) Placement
Related to the host names issue, vCAC can integrate with Active Directory and will place VMs in the default computer object container within Active Directory. Our customers often have complex Active Directory Organizational Unit (OU) structures and, based on the host name assigned by vCAC, want to place the VM in a specific Active Directory OU. This minimizes the extra steps otherwise required to handle VMs that vCAC provisions automatically. Moving a VM from the default computer object container to another container can be as easy as a drag-and-drop operation, but when tens or even hundreds of VMs are provisioned via a self-service portal, placing each VM in the right OU based on its host name becomes an important task.
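As an illustration of how that placement could be automated, the sketch below moves a newly provisioned computer object out of the default Computers container into an OU derived from its host name. It uses the Python ldap3 library; the domain, DNs, credentials, and the prefix-to-OU mapping are hypothetical, and in a real deployment this step would normally run inside a vCO workflow triggered by vCAC.

# Minimal sketch: move a computer object into an OU chosen from its host name.
# The domain, DNs, credentials, and OU mapping below are hypothetical.
from ldap3 import Connection, Server, NTLM

BASE_DN = "DC=corp,DC=local"
OU_BY_ENVIRONMENT = {
    "PROD": "OU=Production Servers," + BASE_DN,
    "DEV": "OU=Development Servers," + BASE_DN,
    "QA": "OU=QA Servers," + BASE_DN,
}

def move_computer_to_ou(host_name):
    """Re-parent the computer object for host_name into the OU that matches
    the environment prefix of the host name (e.g., PROD-SQL-NA-001 -> PROD)."""
    environment = host_name.split("-")[0].upper()
    target_ou = OU_BY_ENVIRONMENT[environment]
    server = Server("dc01.corp.local", use_ssl=True)
    conn = Connection(server, user="CORP\\svc-vcac", password="********",
                      authentication=NTLM, auto_bind=True)
    conn.modify_dn("CN={},CN=Computers,{}".format(host_name, BASE_DN),
                   "CN={}".format(host_name),    # keep the same RDN...
                   new_superior=target_ou)       # ...but re-parent it into the OU
    conn.unbind()

move_computer_to_ou("PROD-SQL-NA-001")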

Configuration Management Database (CMDB) Integration and Configuration Item (CI) Management
Another common requirement is integrating vCAC with a CMDB. Traditionally, updating and maintaining CIs were manual tasks, but they would be extremely difficult to do manually in a private cloud environment where VMs are provisioned and decommissioned based on policy. The consumer of the vCAC solution will also be able to make changes to VM specifications, so integration with the CMDB is another important area. Since the VMs will be requested via vCAC, vCAC can capture the VM specifications to create and update CIs in the CMDB. The integration and automation can be enabled during provisioning (when VMs are initially deployed), management (when VM specifications are changed by the owner), and decommissioning (when VMs are deleted).
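As a sketch of what such an integration could look like, the example below pushes the VM specification captured from a vCAC request to a generic REST-based CMDB at each lifecycle stage. The endpoint, payload fields, and credentials are hypothetical placeholders; a real integration would target the specific API of your CMDB product.

# Minimal sketch: create or update a CI in a REST-based CMDB from a vCAC request.
# The CMDB URL, credentials, and payload schema are hypothetical placeholders.
import requests

CMDB_URL = "https://cmdb.corp.local/api/configuration-items"   # hypothetical
AUTH = ("svc-vcac", "********")

def upsert_ci(vm_spec, lifecycle_state):
    """Record the VM's specification and lifecycle state as a CI in the CMDB."""
    payload = {
        "name": vm_spec["hostname"],
        "ip_address": vm_spec["ip_address"],
        "cpu_count": vm_spec["cpu"],
        "memory_mb": vm_spec["memory_mb"],
        "owner": vm_spec["owner"],
        "state": lifecycle_state,   # e.g., provisioned / changed / decommissioned
    }
    response = requests.put("{}/{}".format(CMDB_URL, payload["name"]),
                            json=payload, auth=AUTH, timeout=30)
    response.raise_for_status()

# Example: record a newly provisioned VM captured from the vCAC request.
upsert_ci({"hostname": "PROD-SQL-NA-001", "ip_address": "10.10.20.31",
           "cpu": 2, "memory_mb": 8192, "owner": "jhwang"}, "provisioned")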

The key to success and further identifying automation opportunities is understanding the customer’s end-to-end processes and translating them to new, private cloud processes. As we listen to our customers we can bring them more of what they need.


Jung I. Hwang is an Enterprise Solutions Architect and a member of VMware’s Services organization. Jung is responsible for creating solution roadmaps and execution plans with VMware’s products and services portfolio to solve customers’ business and technology challenges and initiatives.