Home > Blogs > VMware Consulting Blog

App Volumes AppStack Creation

Dale-Carter-150x150

By Dale Carter, Senior Solutions Architect, End-User Computing

VMware App Volumes provides just-in-time application delivery to virtualized desktop environments. With this real-time application delivery system, applications are delivered to virtual desktops through VMDK virtual disks, without modifying the VM or applications themselves. Applications can be scaled out with superior performance, at lower costs, and without compromising end-user experience.

In this blog post, I will show you how easy it is to create a VMware App Volumes AppStack, and how that AppStack can then be deployed to hundreds of users.

When configuring App Volumes with VMware Horizon View, an App Volumes AppStack is a read-only VMDK file that is added to a user’s virtual machine; the App Volumes Agent then merges the two or more VMDK files so the Microsoft Windows operating system sees them as a single drive. This way, applications appear to the Windows OS as if they were natively installed rather than residing on a separate disk.

To create an App Volumes AppStack, follow these simple steps.

  1. Log in to the App Volumes Manager Web interface.
  2. Click Volumes.
    DCarter Volumes
  3. Click Create AppStack.
    DCarter AppStack
  4. Give the AppStack a name. Choose the storage location and give it a description (optional). Then click Create.
    DCarter Create AppStack
  5. Choose to either Perform in the background or Wait for completion and click Create.
    DCarter Create
  6. vCenter will now create a new VMDK for the AppStack to use.
  7. Once vCenter finishes creating the VMDK, the AppStack will show up as Un-provisioned. Click the + sign.
    DCarter
  8. Click Provision.
    DCarter Provision
  9. Search for the desktop that will be used to install the software. Select the Desktop and click Provision.
    DCarter Provision AppStack
  10. Click Start Provisioning.
    DCarter Start Provisioning
  11.  vCenter will now attach the VMDK to the desktop.
  12. Open the desktop that will be used for provisioning the new software. You will see the following message: DO NOT click OK. You will click OK after the install of the software.
    DCarter Provisioning Mode
  13. Install the software on the desktop. This can be just one application or a number of applications. If reboots are required between installs, that is OK; App Volumes will remember where you are after the reboot.
  14. Once all of the software has been installed, click OK.
    DCarter Install
  15. Click Yes to confirm and reboot.
    DCarter Reboot
  16. Click OK.
    DCarter 2
  17. The desktop will now reboot. After the reboot you must log back in to the desktop.
  18. After you log in, you must click OK. This reconfigures the VMDK on the desktop.
    DCarter Provisioning Successful
  19. You can now connect to the App Volumes Manager Web interface and see that the AppStack is ready to be assigned.
    DCarter App Volumes Manager

Once you have created the AppStack, you can assign it to an Active Directory object: a user, computer or user group.

To assign an AppStack to a user, computer or user group, follow these simple steps.

  1. Log in to the App Volumes Manager Web interface.
  2. Click Volumes.
    DCarter Volumes Dashboard
  3. Click the + sign by the AppStack you want to assign.
  4. Click Assign.
    DCarter Assign
  5. Search for the Active Directory object. Select the user, computer, OU or user group to assign the AppStack to. Click Assign.
    DCarter Assign Dashboard
  6. Choose either to assign the AppStack at the next login or immediately, and click Assign.
    DCarter Active Director
  7. The users will now have the AppStack assigned to them and will be able to launch the applications as they would any normal application.
    DCarter AppStack Assign

By following these simple steps, you will be able to quickly create an AppStack and easily deploy it to your users.


Dale Carter, a VMware Senior Solutions Architect specializing in the EUC space, has worked in IT for more than 20 years. He is also a VCP4-DT, VCP5-DT, VCAP-DTD, and VCAP-DTA.

“Gotchas” and Lessons Learned When Using Virtual SAN

jonathanm-profile

By Jonathan McDonald

There are certainly a number of blogs on the Web that talk about software-defined storage, and in particular Virtual SAN. But as someone who has worked at VMware for nine years, my goal is not to rehash the same information, but to provide insights from my experiences.

At VMware, much of my time was spent working for Global Support Services; however, over the last year-and-a-half, I have been working as a member of the Professional Services Engineering team.

As a part of this team, my focus is now on core virtualization elements, including vSphere, Virtual SAN, and Health Check Services. Most recently I was challenged with getting up to speed on Virtual SAN and developing an architecture design for it. At first this seemed pretty intimidating, since I had only heard the marketing details prior to this; however, Virtual SAN truly did live up to all the hype about being “radically simple”. What I found is that the more I worked with Virtual SAN, the less concerned I became with the underlying storage. After having used Virtual SAN and tested it in customer environments, I can honestly say my mind is very much changed because of the absolute power it gives an administrator.

To help simplify the design process, I broke it out into the following workflow, not only for myself, but for anyone else who is unaware of the different design decisions required to successfully implement Virtual SAN.

Workflow for a Virtual SAN Design_JMcDonald

Workflow for a Virtual SAN Design

When working with a Virtual SAN design, this workflow can be quite helpful. To further simplify it, I break it down into four key areas:

  1. Hardware selection – In absolutely every environment I have worked in, there has always been a challenge in selecting the hardware. I would guess that 75 percent of the problems I have seen in implementing Virtual SAN have been a result of hardware selection or configuration. This includes things such as non-supported devices or incorrect firmware/drivers. Note: VMware does not provide support for devices that are not on the Virtual SAN Compatibility List. Be sure that the hardware you select is on the list!
  2. Software configuration – The configuration is simple—rarely have I seen questions on actually turning it on. You merely click a check box, and it will configure itself (assuming of course that the underlying configuration is correct). If it is not, the result can be mixed, such as if the networking is not configured correctly, or if the disks have not been presented properly.
  3. Storage policy – The storage policy is at first a huge decision point. This is what gives Virtual SAN its power: the ability to configure the performance and availability characteristics of each virtual machine.
  4. Monitoring/performance testing/failure testing – This is the final area, and it covers how to monitor and test the configuration.

All of these things should be taken into account in any design for Virtual SAN, or the design is not really complete. Now, I could talk through a lot of this for hours. Rather than doing that I thought it would be better to post my top “gotcha” moments, along with the lessons learned from the projects I have been involved with.

Common “Gotchas”

Inevitably, “gotcha” moments will happen when implementing Virtual SAN. Here are the top moments I have run into:

  1. Network configuration – No matter what the networking team says, always validate the configuration. The “Misconfiguration detected” error is by far the most common thing I have seen. Normally this means that either the port group has not been successfully configured for Virtual SAN or the multicast has not been set up properly. If I were to guess, most of the issues I have seen are a result of multicast setup. On Cisco switches, unless an IGMP snooping querier has been configured OR IGMP snooping has been explicitly disabled on the ports used for Virtual SAN, configuration will generally fail. In the default configuration it is simply not set up, so even if the network admin says it is configured properly, it may not be; double-check it to avoid any pain.
    Network Configuration_JMcDonald
  2. Network speed – Although 1 GbE networking is supported, and I have seen it operate effectively for small environments, 10 GbE networking is highly recommended for most configurations. I don’t just say this because the documentation says so. From experience, what it really comes down to is not the regular everyday usage of Virtual SAN; where people run into problems is when an issue occurs, such as during failures or periods of heavy virtual machine creation. Replication traffic during these periods can be substantial and cause huge performance degradation. The only way to know is to test what happens during a failure or peak provisioning cycle. This testing is critical, as it tells you what the expected performance will be. When in doubt, always use 10 GbE networking.
  3. Storage adapter choice – Although seemingly simple, the queue depth of the controller should be greater than 256 to ensure the best performance. This is not as much of an issue now as it was several months ago, because the VMware Virtual SAN compatibility list should no longer include any cards with a queue depth under 256. Be sure to verify, though. As an example, one card, when first released, artificially limited its queue depth in the driver software; performance was dramatically impacted until an updated driver was released.

Lessons Learned

There are always lessons to be learned when using new software, and ours came at the price of a half or full day’s work troubleshooting issues. Here’s what we figured out:

  1. Always verify firmware/driver versions – This one always seems to be overlooked, but I am stating it because of experiences onsite with customers. One example that comes to mind is where we had three identical servers, bought and shipped in the same order, that we were using to configure Virtual SAN. Two of them worked fine; the third just wouldn’t cooperate, no matter what we did. After investigating for several hours we found that not only would Virtual SAN not configure, but all drives attached to that host were read-only. Looking at the utility provided with the card itself showed that the card was a revision behind on the firmware. As soon as we upgraded the firmware, it came online and everything worked brilliantly.
  2. Pass-through/RAID0 controller configuration – It is almost always recommended to use a pass-through controller with Virtual SAN, so that Virtual SAN owns the drives and has full control of them. In many cases, however, the controller offers only a RAID0 mode. Proper configuration of this is required to avoid any problems and to maximize performance for Virtual SAN. First, ensure any controller caching is set to 100% Read Cache. Second, configure each drive as its own “array” and not a giant array of disks. This will ensure it is set up properly. As an example of incorrect configuration that can cause unnecessary overhead, several times I have seen all disks configured as a single RAID volume on the controller. This shows up as a single disk to the operating system (ESXi in this case), which is not desired for Virtual SAN. To fix this you have to go into the controller and configure it correctly, by configuring each disk individually. You also have to ensure the partition table (if previously created) is removed, which can, in many cases, involve zeroing out the drive if there is no option to remove the header.
  3. Performance testing – The lesson learned here is that you can do an infinite amount of testing; where do you start and stop? Wade Holmes from the Virtual SAN technical marketing team at VMware has an amazing blog series on this that I highly recommend reviewing for guidance. His methodology allows for both basic and more in-depth testing of your Virtual SAN configuration.

I hope these pointers help in your evaluation and implementation of Virtual SAN. Before diving headfirst into anything, I always like to make sure I am informed about the subject matter, and Virtual SAN is no different. To be successful, you need to make sure you have genuine subject matter expertise for the design, whether it be in-house or by contacting a professional services organization. Remember, VMware is happy to be your trusted advisor if you need assistance with Virtual SAN or any of our other products!


Jonathan McDonald is a Technical Solutions Architect for the Professional Services Engineering team. He currently specializes in developing architecture designs for core virtualization and software-defined storage, as well as providing best practices for upgrading and health checks for vSphere environments.

Celebrating Eight Years at VMware

Andrea Siviero

By Andrea Siviero, VMware Senior Solutions Architect

How fascinating!

When you are having fun, you don’t realize how fast time passes by. This has never been truer than for the eight years I have spent at VMware. On the personal side, I have gained two children, changed houses a couple of times, lost 20 kg and found a new passion for running. On the professional side, I’ve changed roles, from a pre-sales system engineer of the “Virtualization 1.0 Era” to an architect of “What’s Next.”

VMware acknowledges every four years of service with an award. When I celebrated four years, the award was a VASA sculpture comprising these three cubes, recalling the old-style VMware logo:

ASiviero1

VMware 4 Years Award

(To read more about the VASA sculptures and how Diane Greene got the idea, click here.)

At eight years, it was a brand new kind of VASA sculpture. There are no cubes anymore, but the design still recalls them in colors and shapes taken from different perspectives. Moreover, there are actually eight small squares inside the sculpture, matching the number of years of service. An incrementally evolved idea, isn’t it? After all, that’s the essence of VMware.

ASiviero2

VMware Eight Years Award: I was so pleased! 

Then: the Virtualization 1.0 Era and the “Compute Plant”

Of course, more has changed over the past eight years at VMware than just the awards. Eight years ago—in the “Virtualization 1.0 Era”—one of the biggest customer challenges was data center resource optimization and cost savings, in the face of an increasing number of separate components needed for evolved application architectures (i.e., Service-Oriented Architecture) and x86 power unrelentingly following Moore’s law.

ASiviero3

VMware, with x86 virtualization, began to solve the problem by decoupling the hardware from the operating system and applications in a simple and disruptive approach that promised to deliver immense benefits.

ASiviero4

Historical picture from 2007 EMEA TSX

There were three basic ways customers approached virtualization at this time, which led to vastly different outcomes:

-        Reluctant to change: These customers were informed about new IT trends but, not considering virtualization a serious alternative for production environments, continued to allocate dedicated hardware for each new project, with IT budget demands increasing year over year without real business benefits.

-        Taking a tactical approach: These customers invested in virtualization using a project-specific approach to virtual infrastructure, creating different non-standardized silos with virtual machine sprawl.

-        Making strategic moves to a shared virtual infrastructure: These customers took a big-picture view, aggregating budgets from multiple projects to build a shared virtual infrastructure that allowed easy redistribution of compute resources while maintaining high levels of governance, increasing availability and agility, and lowering costs.

Slide1

2008 Customer Virtualization adoption strategies

Over the years, VMware introduced new approaches to managing virtual infrastructure, transforming it into a “Compute Plant” where customers could dynamically manage resources. This introduced agility, automation and governance.

ASiviero7

2008 VMware historical picture: vSphere as a “Compute Plant”

Now: Transforming the Ways IT Provides Services

Now, in the mobile/cloud era, VMware has continued to be the catalyst for the evolution of IT, building disruptive advantages for managing, automating and orchestrating computing, networking, storage and security. This has transformed IT into a provider of services that can be delivered on-premise, off-premise and in a hybrid combination of the two.

ASiviero8

VMware vRealize Suite

What about customer approaches of today? IT goals haven’t changed much over the years, and neither have the three types of organizational approaches to new technologies:

-        Reactive – With IT exhausting resources to maintain existing systems, they’re challenged to support future business results. The need for rapid innovation has driven users outside of traditional IT channels. As a result, cloud has entered the business opportunistically, threatening to create silos of activities that cannot satisfy mandates for security, risk management and compliance.

-        Proactive – IT has moved to embrace cloud as a model for achieving innovation through increased efficiency, reliability and agility. Shifts in processes and organizational responsibilities attempt to bring structure to cloud decisions and directions. More importantly, IT has embraced a new role: that of a service broker. IT is now able to leverage external providers to deliver rapid innovation within the governance structure of IT, balancing costs, risks and services levels.

-        Innovative – IT has fully implemented cloud computing as the model for producing and consuming computing, shifting legacy systems to a more flexible infrastructure. They’ve invested in automation and policy-based management for greater efficiency and reliability, enabling a broad range of stakeholders to consume IT services via self-service. They’ve also detailed measurement capabilities that quantify the financial impact of sourcing decisions, allowing them to redirect resources and drive new services and capabilities that advance business goals.

Moving Beyond a Reactive State of IT

At every stage of the virtualization evolution, there have been strategic early adopters and those who take a “wait and see” attitude. But as workloads and end users become more demanding, even the most reticent IT departments will need to shift away from a reactive environment, taking steps to redefine the way they operate and the technology they leverage as their foundation. I believe that in the near future, enterprise customers wanting to move beyond a “reactive state” will have to:

  • Continue to invest in private cloud to build the foundation for an efficient, agile, reliable infrastructure
  • Identify processes that can be automated, involving our technology consulting services to create, expand or optimize their environments while gaining hands-on knowledge for their teams
  • Establish a self-service environment to deliver IT services to stakeholders on demand across every business unit.
  • Begin to identify the true costs of IT services.
  • Embrace third-party providers as a source of innovation.

Get ready for more bumps and fun

“It is not the strongest or the most intelligent who will survive but those who can best manage change.” C. Darwin

Evolution of any kind doesn’t happen without bumps and fun. We live and work in a constantly changing landscape, and with VMware we have opportunities every day to influence and be part of the exciting changes that are taking place today and shaping the IT of tomorrow.

Which is what makes it all so fascinating.

See more at: http://www.vmware.com/products/vrealize-business/


Andrea Siviero is an eight-year veteran of VMware and a senior solutions architect member of Professional Services Engineering (PSE) for the Software-Defined Datacenter (SDDC), a part of the Global Technical Solutions (GTS) team. Prior to PSE, Andrea spent three years as pre-sales system engineer and three years as a post-sales consultant architect for cloud computing and desktop virtualization solutions focusing on very large and complex deployments, especially for service providers in the finance and telco sectors.

VDI Current Capacity Details

Anand Vaneswaran

By Anand Vaneswaran

In my previous post, I provided instructions on constructing a high-level “at-a-glance” VDI dashboard in vRealize Operations for Horizon, one that would aid in troubleshooting scenarios. In the second of this three-part blog series, I will talk about constructing a custom dashboard that takes a holistic view of the vSphere HA clusters that run my VDI workloads, in an effort to understand current capacity. The ultimate objective is not only to understand current capacity, but also to identify trends that help forecast future capacity. In this example, I’m going to try to gain information on the following:

  • Total number of running hosts
  • Total number of running VMs
  • VM-LUN densities
  • Usable RAM capacity (in an N+1 cluster configuration)
  • vCPU to pCPU density (in an N+1 cluster configuration)
  • Total disk space used, as a percentage

You can either follow my lead and recreate this dashboard step by step, or simply use this as a guide and create a dashboard of your own for the capacity metrics you care about most. In my environment, I have five (5) clusters comprising full-clone VDI machines and three (3) clusters comprising linked-clone VDI machines. I have decided to incorporate eight (8) “Generic Scoreboard” widgets in a two-column custom dashboard. I’m going to populate each of these “Generic Scoreboard” widgets with the relevant stats described above.

anand_vdi_1

Once my widgets have been imported, I will rearrange my dashboard so that the left side of the screen shows the full-clone clusters and the right side shows the linked-clone clusters. Now, as part of this exercise I determined that I needed to create super metrics to calculate the following metrics:

  • VM-LUN densities
  • Usable RAM capacity (in an N+1 cluster configuration)
  • vCPU to pCPU density (in an N+1 cluster configuration)
  • Total disk space used, as a percentage

With that being said, let’s begin! The first super metric I will create will be called SM – Cluster LUN Density. I’m going to design my super metric with the following formula:

sum(This Resource:Deployed|Count Distinct VM)/sum(This Resource:Summary|Total Number of Datastores)

anand_vdi_2

In this super metric I will attempt to find out how many VMs reside in my datastores on average. The objective is to make sure I’m abiding by the recommended configuration maximum for the number of virtual machines residing on a VMFS volume.
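To make the arithmetic concrete, here is a minimal Python sketch of the same density check. The VM count, datastore count, and the 64-VM threshold are all hypothetical placeholders, so substitute the configuration maximum that applies to your own environment.

# A minimal sketch of the cluster LUN density check (all values hypothetical)
running_vms = 480            # Deployed|Count Distinct VM for the cluster (assumed)
datastore_count = 10         # Summary|Total Number of Datastores (assumed)
max_vms_per_datastore = 64   # illustrative threshold; use your own configuration maximum

density = running_vms / datastore_count
print(f"Average VMs per datastore: {density:.1f}")
if density > max_vms_per_datastore:
    print("Warning: VM-per-LUN density exceeds the chosen threshold")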

The next super metric I will create is called SM – Cluster N+1 RAM Usable. I want to calculate the usable RAM in a cluster in an N+1 configuration. The formula is as follows:

((sum(This Resource:Memory|Usable Memory (KB))/sum(This Resource:Summary|Number of Running Hosts))*.80)*(sum(This Resource:Summary|Number of Running Hosts)-1)/1048576

anand_vdi_3

Okay, so clearly there is a lot going on in this formula. Allow me to try to break it down and explain what is happening under the hood. I’m calculating this stat for an entire cluster. So what I will do is take the usable memory metric (installed) under the Cluster Compute Resource Kind. Then I will divide that number by the total number of running hosts to give me the average usable memory per host. But hang on, there are two caveats here that I need to take into consideration if I want an accurate representation of the true overall usage in my environment:

1) I don’t want my hosts running at more than 80 percent capacity when it comes to RAM utilization; I always want to leave a little buffer. So my utilization factor will be 80 percent, or .8.

2) I always want to account for the failure of a single host (in some environments, you might want to factor in the failure of two hosts) in my cluster design, so that compute capacity for running VMs is not compromised in the event of a host failure. I’ll want to incorporate this N+1 cluster configuration design in my formula.

So, I will take my overall usable, or installed, memory (in KB) for the cluster, divide that by the number of running hosts on said cluster, then multiply that result by the .8 utilization factor to arrive at a number, let’s call it x; this is the real usable memory per host. Next, I’m going to take x and multiply it by the total number of hosts minus 1, which will give me y. This takes into account my N+1 configuration. Finally, I’m going to take y, still in KB, and divide it by 1,048,576 (1024×1024) to convert it to GB and get my final result, z.
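As a sanity check on that walkthrough, here is a minimal Python sketch that reproduces the same x, y and z steps, assuming a hypothetical four-host cluster with 256 GB of usable memory per host:

# Reproduces the N+1 usable RAM walkthrough with hypothetical inputs
usable_memory_kb = 4 * 256 * 1024 * 1024   # cluster Usable Memory (KB): 4 hosts x 256 GB each
running_hosts = 4                          # Summary|Number of Running Hosts (assumed)
utilization_factor = 0.8                   # leave a 20 percent buffer per host

per_host_kb = usable_memory_kb / running_hosts     # average usable memory per host
x = per_host_kb * utilization_factor               # real usable memory per host
y = x * (running_hosts - 1)                        # N+1: survive one host failure
z = y / (1024 * 1024)                              # convert KB to GB
print(f"Usable RAM with N+1 and the 80% buffer: {z:.0f} GB")   # prints 614 GB here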

The next super metric I will create is called SM – Cluster N+1 vCPU to Core Ratio. The formula is as follows:

sum(This Resource:Summary|Number of vCPUs on Powered On VMs)/((sum(This Resource:CPU Usage|Provisioned CPU Cores)/sum(This Resource:Summary|Total Number of Hosts))*(sum(This Resource:Summary|Total Number of Hosts)-1))

anand_vdi_4

anand_vdi_5

The final super metric calculates total disk space used as a percentage, and its formula is fairly self-explanatory: I’m taking the total space used for the datastore cluster and dividing that by the total capacity of that datastore cluster. This gives me a number greater than 0 and less than 1, so I multiply it by 100 to produce a percentage output.
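For completeness, here is a similar minimal Python sketch of the N+1 vCPU-to-core ratio and the disk-space percentage, again with purely hypothetical input values:

# SM - Cluster N+1 vCPU to Core Ratio, with hypothetical inputs
vcpus_powered_on = 960       # Summary|Number of vCPUs on Powered On VMs (assumed)
provisioned_cores = 128      # CPU Usage|Provisioned CPU Cores for the cluster (assumed)
total_hosts = 4              # Summary|Total Number of Hosts (assumed)

cores_per_host = provisioned_cores / total_hosts
n_plus_1_cores = cores_per_host * (total_hosts - 1)    # cores left after one host failure
print(f"vCPU:pCPU ratio (N+1): {vcpus_powered_on / n_plus_1_cores:.1f}:1")   # 10.0:1

# Total disk space used as a percentage of datastore cluster capacity
space_used_gb = 14_500       # assumed
capacity_gb = 20_000         # assumed
print(f"Disk space used: {space_used_gb / capacity_gb * 100:.1f}%")          # 72.5%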

Once I have created the super metrics I want, I attach them to a package called SM – Cluster SuperMetrics.

anand_vdi_6

The next step would be to tie this package to current Cluster resources as well as Cluster resources that will be discovered in the future. Navigate to Environment > Environment Overview > Resource Kinds > Cluster Compute Resource. Shift-select the resources you want to edit, and click on Edit Resource.

anand_vdi_7

Click the checkbox to enable “Super Metric Package,” and from the drop-down select SM – Cluster SuperMetrics.

anand_vdi_8

To ensure that this SuperMetric package is automatically attached to future Clusters that are discovered, navigate to Environment > Configuration > Resource Kind Defaults. Click on Cluster Compute Resource, and on the right pane select SM – Cluster SuperMetrics as the Super Metric Package.

anand_vdi_9

Now that we have created our super metrics and attached the super metric package to the appropriate resources, we are ready to begin editing our “Generic Scoreboard” widgets. I will show you how to edit two widgets (one for a full-clone cluster and one for a linked-clone cluster) with the appropriate data and show the output. We will then want to replicate the same procedure to ensure that we are hitting every unique full-clone and linked-clone cluster. Here is an example of what the widget for a full-clone cluster should look like:

anand_vdi_10

And here’s an example of what a widget for a linked-clone cluster should look like:

anand_vdi_11

Once we replicate the same process and account for all of our clusters, our end-state dashboard should resemble something like this:

anand_vdi_12

And we are done. A few takeaways from this lesson:

  • We delved into the concept of super metrics in this tutorial. Super metrics are awesome resources that give you the ability to manipulate metrics and display just the data you want. In our examples we created some fairly involved formulas, but a very simple example of why a super metric can be particularly useful is memory: vRealize Operations Manager displays memory metrics in KB, but how do we get it to display in GB? Super metrics are your solution here.
  • Obviously, every environment is configured differently and therefore behaves differently, so you will want to tailor the dashboards and widgets according to your environment needs, but at the very least the above examples can be a good starting point to build your own widgets/dashboards.

In my next tutorial, I will walk through the steps for creating a high-level “at-a-glance” VDI dashboard that your operations command center team can monitor. In most organizations, IT issues are categorized by severity and then assigned to the appropriate parties by a central team that runs point on issue resolution by coordinating with different departments. What happens if a Severity 1 issue afflicts your VDI environment? How are these folks supposed to know what to look for before placing that phone call to you? This upcoming dashboard will make it very easy. Stay tuned!


Anand Vaneswaran is a Senior Technology Consultant with the End User Computing group at VMware. He is an expert in VMware Horizon (with View), VMware ThinApp, VMware vCenter Operations Manager, VMware vCenter Operations Manager for Horizon, and VMware Horizon Workspace. Outside of technology, his hobbies include filmmaking, sports, and traveling.

Overcoming Design Challenges with an Enterprise-wide Syslog Solution

MHosken

By Martin Hosken

I’ve spent a lot of time helping my customers build a proper foundation for a successful implementation of vRealize Log Insight, and I’ve published a white paper that highlights key design challenges and how to overcome them. I’d like to share a brief overview with you here.

VMware vRealize Log Insight gives administrators the ability to consolidate logs, monitor and troubleshoot vSphere, and perform security auditing and compliance testing.

This white paper addresses the design challenges and key design decisions that arise when architecting an enterprise-wide syslog solution with vRealize Log Insight. It focuses on the design aspects of syslog in a vSphere environment and provides sample reference architectures to aid your design work and provide ideas about strategies for your own projects.

With every ESXi host in the data center generating approximately 250 MB of log file data a day, the need to centrally manage this data for proactive health monitoring, troubleshooting issues and performing security audits is something that many organizations continue to face every day.
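To put that figure in perspective, a rough back-of-the-envelope estimate can be useful. The Python sketch below assumes a hypothetical 100-host environment with 30 days of retention and ignores compression and indexing overhead, so treat it only as a starting point for sizing discussions:

# Rough raw syslog volume estimate; ignores compression and indexing overhead
hosts = 100                  # hypothetical number of ESXi hosts
mb_per_host_per_day = 250    # approximate per-host figure cited above
retention_days = 30          # hypothetical retention requirement

daily_gb = hosts * mb_per_host_per_day / 1024
print(f"~{daily_gb:.1f} GB of raw log data per day")                        # ~24.4 GB
print(f"~{daily_gb * retention_days:.0f} GB over {retention_days} days")    # ~732 GB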

mhosken 1
Note: A symlink is a type of file that contains a reference to another file in the form of an absolute or relative path.

VMware vRealize Log Insight is a scalable and secure solution that includes a syslog server, a log consolidation tool and a log analysis tool that work with any type of device that can send syslog data, not only the vSphere infrastructure.

As with any successful implementation project, planning and designing a solution that meets all the requirements set out by the business is key to ensuring success, and developing a design that is scalable, resilient and secure is fundamental to achieving this. That includes keeping in mind the requirements of your business leaders, system administrators and security auditors as well.

To read the entire whitepaper, click HERE.


Martin Hosken is a Senior Consultant, VMware Professional Services EMEA

Have a Chat with Your SDDC (Automating the Management of the Management Portal, Part 2)

By Andrea Siviero, VMware Senior Solutions Architect

Andrea Siviero

In my recent post “Look Mom, no Mouse!” I introduced an amazing new way to interact with your SDDC without a mouse. Now, using a command line with simple mnemonic instructions, you can “talk” with your SDDC to “Automate the Management of the Management Portal.”

VMware has just announced that vRealize CloudClient 3.0 has been released for general availability (GA) (http://developercenter.vmware.com/web/dp/tool/cloudclient/3.0.0).

So now that it’s GA, I’m excited to explore with you more deeply how to use CloudClient, and also to share its benefits.

What commands do I want to show you today?

-        Create a brand new tenant and service catalog and entitle them to administrators

-        Import an existing blueprint into the brand new CloudClient-made tenant

-        Deploy blueprints from the catalog of services

So wake up your SDDC — it’s time for a lovely chat. :-)

Log in and create a tenant
CloudClient allows you to log in in an interactive way:

CloudClient> vra login userpass --server vcac-l-01a.corp.local --tenant pse --user siviero@gts.local --password ****** --iaasUser corp\\Administrator --iaasPassword ******

Or to edit the CloudClient.properties file to fill in all the details, just type this command to create an empty configuration:

CloudClient> login autologinfile

NOTE: IaaS credentials need to be passed with a double backslash, i.e., corp\\Administrator

Login Screen

Figure 1: Login

Create a new tenant, identity-store and admins
When you are logged in as administrator@vsphere.local, creating a tenant is just three commands away: set the name of the tenant, how users will be authenticated (AD or LDAP), and who the administrators will be:

CloudClient> vra tenant add --name "PSE" --url "PSE"
CloudClient> vra tenant identitystore add --name "PSE AD" --type AD --url ldap://controlcenter.corp.local --userdn "CN=Administrator,CN=Users,DC=corp,DC=local" --password "****" --groupbasedn "OU=Nephosoft QE,DC=corp,DC=local" --tenantname "PSE" --alias "gts.local" --domain "corp.local"
CloudClient> vra tenant admin update --tenantname "PSE" --addtenantadmins siviero@gts.local --addiaasadmins admin1@gts.local,admin2@gts.local
Create Tenant

Figure 2: Create Tenant

Create a fabric group and business group and assign resources
Now let’s annotate the returned IDs so they can be used in further commands. (They can be scripted using variables.)

CloudClient> vra fabricgroup add --name "GTS Fabric Group" --admins "admin1@gts.local,admin2@gts.local"
Create Fabric Group

Figure 3: Create Fabric Group

Search for the suitable compute resources. We will select the “Cluster Site A”:

CloudClient> vra computeresource list
Compute Resources

Figure 4: Compute Resources

Let’s finalize the “trivial” steps of assigning the compute resources to the fabric group and creating a business group with a pre-determined machine prefix.

CloudClient> vra fabricgroup update --id f8bbfcd5-79c0-43db-a382-2473b91862e6 --addcomputeresource c47e3332-bdef-4391-9f93-269dcf14f2c5
CloudClient> vra machineprefix add --prefix gts- --numberOfDigits 3 --nextNumber 001
CloudClient> vra businessgroup add --name "GTS Business Group" --admins "admin1@gts.local,admin2@gts.local" --adContainer "cn=computers" --email admin2@gts.local --description "GTS Group" --machinePrefixId 1c1d20c3-ba91-443e-beb0-b9b0728ee29c
Assign Resources

Figure 5: Assign Resources

Here comes the fun: import/export blueprints
Until now, the CloudClient commands we have used merely reproduce what normally happens in the GUI.

Let me show you where its real power comes out. Let’s assume you already created a good blueprint in a tenant, with a blueprint profile, and you just want to “copy and paste” it to another tenant. You cannot do this in the GUI; you would need to manually recreate it. But here comes the CloudClient magic: log in to the source tenant and export the blueprint in JSON format:

CloudClient> vra iaas blueprint list

CloudClient> vra iaas blueprint detail --id 697b8302-b5a9-4fbf-8544-2f19d4e8a220 --format JSON --export CentOS63.json
Export Blueprint to JSON file

Figure 6: Export Blueprint to JSON file

Now log back in to the brand new PSE tenant and import the blueprint like this:

CloudClient> vra iaas blueprint add vsphere --inputfile CentOS63.json --name "CentOS 6.3 x64 Base" --cpu 1 --memory 512
Import Blueprint from JSON

Figure 7: Import Blueprint from JSON

Request the blueprint from the catalog
The remaining steps are as trivial as before: create a service, an entitlement, and actions, and assign the blueprint to the catalog. Reading the documentation will help you get familiar with them.

Note: “Reservations” verbs are not yet implemented, so at some point you need to use the GUI to complete the process.

So please let me fast forward to the final moment when you can successfully deploy a blueprint and see it live. :-)

CloudClient> vra catalog list
Listing the Catalog

Figure 8: Listing the Catalog

Using the ID returned from catalog list, make the request:

CloudClient> vra catalog request submit --id c8a850d2-a089-4afb-b5d8-b298580cf9f9 --groupid 2c220523-60bb-419e-80c8-c5bfd81aa805 --reason fun
Checking the Requests

Figure 9: Checking the Requests

And here it is, our little VM, happy and running. :-)

Happy and Running

Figure 10: Happy and Running

The Occam’s Razor principle: “Entities must not be multiplied beyond necessity.”

In my humble opinion: please don’t waste a lot of time doing everything (coffee/tea?) from the command line. vRealize Automation 6.1 has a nicely improved UI and is very intuitive to work with.

Keep the solutions as simple as possible and use vRealize CloudClient when some real “black magic” is needed.


Andrea Siviero is an eight-year veteran of VMware and a senior solutions architect member of Professional Services Engineering (PSE) for the Software-Defined Datacenter (SDDC), a part of the Global Technical Solutions (GTS) team. Prior to PSE, Andrea spent three years as pre-sales system engineer and three years as a post-sales consultant architect for cloud computing and desktop virtualization solutions focusing on very large and complex deployments, especially for service providers in the finance and telco sectors. 

Analyzing Virtual Desktop Login Time

By Gourav Bhardwaj with Matt Larson

Gourav
Matt Larson

Often when performing health checks, a discussion arises about login time and what constitutes it. This article covers some of the common ways to look at login time and its underlying components. You can look at login time using vCOps for View or a third-party user experience monitoring solution. In this example, login time is demonstrated using Stratusphere™ UX. Experienced system administrators can also use this process to troubleshoot slow login times.

 

 

Review Virtual Desktop login times using Stratusphere UX™

  1. First, ensure you are in the Stratusphere UX Interface.
    Stratusphere UX screen 1
  2. On the Inspector tab, choose Machine Diagnostic Summary, and then click Go.
    Stratusphere UX screen 2
  3. In the Date Range drop-down menu, select Last 24 Hours.
    Stratusphere UX screen 3
  4. In the results list, sort by Login Delay.
    Stratusphere UX screen 4
  5. Click the down-arrow next to the name of the machine. Click Drill-down to see machine inspection history.
    Stratusphere UX screen 5
  6. Select the down-arrow next to the hour that contains the slow login time. Click Drill-down to see inspection report details.
    Stratusphere UX screen 6

A lot of information will be provided, including the username of the user experiencing the issues, as well as information regarding processes. One important piece of information used to find what may be causing the slow logins is the CPU System Time(s) field. The graphic below shows VMWVvpsvc running long. This metric indicates some login slowness resulting from the profile being copied from the profile location using VMware’s persona management. This may be the result of a file server being in a location local to the user, but not local to the View environment.

Stratusphere UX screen 7

This information is helpful, as it tells us that VMWVvpsvc was running for 94 seconds. We can assume this is mostly during login, but that only accounts for 94 seconds of a 351-second login delay. Clearly, more information is necessary. While turning to logs can be helpful (such as persona management, the system event log, the application event log, and various View and PCoIP logs), they can be time-consuming to review, and often the information these logs provide is insufficient.

Using the Windows Performance Toolkit
The Windows Performance Toolkit is a set of tools provided in the Windows SDKs for both Windows 7 and Windows 8. It consists of two high-level toolsets: one to gather information, and one to analyze it. Once users and systems have been found to have slow login times, the toolsets provided with the Windows Performance Toolkit can be employed to further ascertain what exactly is causing the slow logins.

Installation
This section details the installation process to get the tools on the system that is experiencing slow login times. This process assumes the use of the Windows 7 SDK. Below are the steps:

  1. Remove Visual C# 2010 – this may or may not be necessary. If the C# version of the vSphere Client is installed on the workstation, then that existing installation of Visual C# 2010 will need to be removed. Not to worry, the SDK puts C# back on there, and there is no impact to the vSphere client or other applications that may use Visual C# 2010.
  2. Install the Windows 7 SDK – this can be done HERE. Launch the winsdk_web.exe file and ensure that at least the Windows Performance Toolkit is selected, and then click Next. Once the installation has completed, move on to the next step.
    Windows SDK screen
    Note: In order to analyze Windows crash dumps (AKA BSODs), I keep the Debugging Tools for Windows installed as well.
  3. Install .NET 4.0 – this can be done from HERE. Again, this depends upon whether or not it is installed on the workstation in question.

This completes the installation. The installation can be verified by confirming that the program group exists on the Start Menu, or by navigating to the installation directory, which defaults to C:\Program Files\Microsoft Windows Performance Toolkit, and confirming the existence of xbootmgr.exe and xperf.exe, as seen in the images below.

Windows Screen 2Windows Screen 3

Using XPERF
The process to use XPERF to gather information regarding slow logins is as follows:

  1. Enable fast user switching in the registry or GPO.
  2. Create a local user account named Test, and add it to the local administrators group. (Using an administrative user that is not the problematic user will also work.)
  3. From the console of the problematic workstation, log in as the user with administrative privileges.
  4. Launch a command line with elevated privileges, and navigate to C:\Program Files\Microsoft Windows Performance Toolkit.
  5. Launch the XPERF command:
    1. XPERF Command: xperf -on base+latency+dispatcher+NetworkTrace+Registry+FileIO -stackWalk CSwitch+ReadyThread+ThreadCreate+Profile -BufferSize 128 -start UserTrace -on "Microsoft-Windows-Shell-Core+Microsoft-Windows-Wininit+Microsoft-Windows-Folder Redirection+Microsoft-Windows-User Profiles Service+Microsoft-Windows-GroupPolicy+Microsoft-Windows-Winlogon+Microsoft-Windows-Security-Kerberos+Microsoft-Windows-User Profiles General+e5ba83f6-07d0-46b1-8bc7-7e669a1d31dc+63b530f8-29c9-4880-a5b4-b8179096e7b8+2f07e2ee-15db-40f1-90ef-9d7ba282188a" -BufferSize 1024 -MinBuffers 64 -MaxBuffers 128 -MaxFile 1024
  6. Using fast user switching, switch users, and login as the problematic user.
    1. Once the login has completed, stop the trace using the following command:
      xperf -stop UserTrace -d merged.etl
  7. Gather the merged.etl trace file for analysis.

Using XBOOTMGR
In some cases, it may not be possible to switch users using fast user switching. In many cases, it may be easier to have the user run XBOOTMGR. This tool, when run, reboots the system and tracks both the startup time and the login time. The analysis ends after a set period of time. Gather an XBOOTMGR analysis by performing the following:

  1. Launch a command line with elevated privileges, and navigate to C:\Program Files\Microsoft Windows Performance Toolkit.
  2. Run the following command:
    1. XBOOTMGR Command: xbootmgr -trace boot -traceflags base+latency+dispatcher -stackwalk profile+cswitch+readythread -notraceflagsinfilename -postbootdelay 120
  3. The system will prompt that it is being rebooted. Allow the reboot to occur.
  4. When the VM is started, have the user connect to the View desktop using the View client.
  5. When the user logs in, XBOOTMGR will present the user with a countdown of 120 seconds. Allow XBOOTMGR to collect data.
  6. Once complete, gather the *.etl trace file for analysis. It may take some time to merge the file.

Analysis
The trace file has been created, and now it is time to analyze the results. The analysis toolset available in the Windows 7 Performance Toolkit is slightly different from what is available in the Windows 8 Performance Toolkit.

Performance Analyzer from Windows 7 Performance Toolkit

Open with Performance Analyzer (From the Windows 7 Performance Toolkit)
Windows Performance Analyzer
The graph below shows the processes occurring during the Winlogon Init process. It is easy to see that VMWVvpsvc is running for approximately two minutes.
Windows Performance Analyzer Screen 1

By right-clicking on the graph, one can overlay graphs from other categories. This graph shows the Winlogon process, as well as the overlay graphs for Boot Phases and CPU Usage. This can be helpful for seeing in which boot phase the processes are running. Additionally, the CPU graph will show whether a process is running long because it has maxed out the available CPU capacity.
Windows Performance Analyzer Screen 2

These overlays can be tweaked by selecting the CheckPoints box in the top right corner of the graph.

CheckPoints Dialog
Windows Performance Analyzer from Windows 8 Performance Toolkit

Open with Performance Analyzer (From the Windows 8 Performance Toolkit).  The icon is shown below:

Windows8

Windows Screen

When looking at the same trace file as before, the graphs show that VMWVvpsvc was running for over 2 minutes. Moving the user files closer (from a network perspective) to the View desktop will help reduce the login time.

References
http://social.technet.microsoft.com/wiki/contents/articles/10128.tools-for-troubleshooting-slow-boots-and-slow-logons-sbsl.aspx

http://www.liquidwarelabs.com/products/stratusphere-ux


Gourav Bhardwaj is a VMware consulting architect who has created virtualized infrastructure designs across various verticals. He has assisted IT organizations of various Fortune 500 and Fortune 1000 companies, by creating designs and providing implementation oversight. His experience includes system architecture, analysis, solution design and implementation.

Matt Larson is an experienced, independent VMware consultant working in design, implementation and operation of VMware technologies. His interests lie in enterprise architecture related to datacenter and end user computing.

EUC Datacenter Design Series — EVO:RAIL VDI Scalability Reference

By TJ Vatsa with Fred Schimscheimer and Todd Dayton

End User Computing (EUC) has come of age and is continuing to mature by leaps and bounds. Customers are no longer considering virtual desktop infrastructure (VDI) as a tactical project but are looking at EUC holistically as an enterprise solution that accelerates EUC transformation. You can refer to the EUC Design 101 series here (Part 1, Part 2, and Part 3) or a consolidated perspective here (EUC Enterprise Solution). Having collaborated with my colleagues Fred Schimscheimer and Todd Dayton (bios below) during the last few weeks, I intend to share the game-changing revolution that VMware’s hyper-converged infrastructure solution is bringing to the EUC domain.

The Challenge
People familiar with VDI are well aware that a scalable production deployment requires systematic and thorough planning of the infrastructure, namely compute, storage and networking. This can be a daunting task for customers that are either chasing tight deadlines or do not have the necessary infrastructure or people resources. We have noticed this to be a perpetual challenge for many of our customers across different industry domains, including healthcare, financial and insurance services, manufacturing and others.

The Panacea
During the last few years, hyper-converged appliances have been taking the industry by storm. By design these systems follow a modular, building block approach that scales out horizontally and is very quick to deploy. From the EUC infrastructure perspective, it has become necessary to acknowledge the efficiency of hyper-converged appliances. While there are vendors that have hyper-converged infrastructure that runs on VMware’s vSphere hypervisor, VMware’s foray into this domain, EVO:RAIL, was released for general availability during VMworld 2014 in San Francisco in September.

EVO:RAIL has been optimized for VMware’s vSphere and Virtual SAN technology with compute, storage and networking resources in a simple, integrated deployment, configuration, and management solution. EVO:RAIL is the next generation EUC building block for a Software Defined Data Center (SDDC).

Numbers Don’t Lie
During the last few months, our teams have been diligently testing and scaling EVO:RAIL for a variety of use cases such as EUC, Business Continuity and Disaster Recovery (BCDR) and X-in-a-box. The next few paragraphs will focus on our findings for Horizon 6 View desktops scalability.

You may have lots of questions by now, so let’s take them one by one!

Q: What did the hardware configuration look like?
A: The test bed hardware infrastructure configuration was as follows:

EVO:RAIL Appliance

  • 4 x nodes
  • Each node
    • 2 x Intel E5-2620 @ 2.1 GHz
    • 192GB memory (12 x 16GB)
    • 3 x Hitachi SAS 10K 1.2TB MD
    • 1 x 400GB Intel S3700 SSD

Q: What did the software configuration look like?
A: The test bed View software configuration was as follows:

  • vSphere 5.5 + VSAN
  • Horizon View 6.0 (H6)

Table 1: Horizon 6 Configuration

Horizon 6 Configuration Table
Note: vCSA = vCenter Server Appliance

Q: What did the VDI image configuration look like?
A: The test bed image configuration was as follows:

Table 2: Desktop Image Configuration

Desktop Image Configuration Table

Q: What types of View desktops did we test?
A: Horizon View 6, linked clone virtual desktops with floating assignments.

Q: What Horizon 6 configurations did we test?
A: The following configurations were tested using Reference Architecture Workload Code (RAWC):

Table 3: Load Test Configurations

Load Test Configurations

These configurations are pictorially represented in the following schematics:

Management Cluster and Desktop Cluster

 

Figure 1: Configurations #1a/#1b

The figure above represents EVO:RAIL appliances with separate Horizon 6 Management and Desktop clusters.

VDI-in-a-Box

Figure 2: Configuration #2

The figure above represents the EVO:RAIL appliance with both Horizon 6 Management and Desktop clusters in the same appliance. It also illustrates an N+1 configuration to support one node failure within the EVO:RAIL appliance.

Q: What did the results look like?
A: The following results were obtained after the configurations were stress tested using RAWC.

Test Category | RAWC | Virtual SAN Observer
Config #1a | Configuration 1a - RAWC | Configuration 1a - VSAN
Config #1b | Configuration 1b - RAWC | Configuration 1b - VSAN
Config #2 | Configuration 2 - RAWC | Configuration 2 - VSAN

 

Note: Click the thumbnail images above to drill down into graph details.

Results Summary
The table below summarizes the different test configurations and the tested consolidation ratios of virtual desktops per EVO:RAIL appliance.

Table 4: Test Configuration Findings

Test Configuration Findings

We hope you will find this information useful and motivating. We look forward to you adopting and implementing a VDI-in-a-box solution using VMware’s EVO:RAIL hyper-converged appliance in your Software-Defined Data Center (SDDC).

Until next time, Go VMware!


Author

TJ Vatsa

TJ Vatsa is a Principal Architect and CTO Ambassador at VMware, representing the Professional Services organization. TJ has been working at VMware since 2010 and has over 20 years of experience in the IT industry. At VMware, TJ has focused on enterprise architecture and applied his extensive experience to cloud computing, virtual desktop infrastructure, SOA planning and implementation, functional/solution architecture, enterprise data services and technical project management. Catch TJ on Twitter, Facebook or LinkedIn.

Contributors

Fred Schimscheimer

Fred Schimscheimer has worked at VMware since 2007 and is currently a Staff Engineer in the EUC Office of the CTO. In his role, he helps out with prototyping, validating advanced development projects as well as doing product evaluations for potential acquisitions. He is the architect and author of RAWC – VMware’s first Reference Architecture Workload Simulator.

 

Todd Dayton

Todd Dayton joined VMware in 2005 as the first field “Desktop Specialist” working on ACE (precursor to VDI). In his current role as a Principal Systems Engineer and CTO Ambassador, he continues to evangelize End User Computing (EUC) initiatives and opportunities for VMware’s customers.

vCAC 6 Custom Properties, Build Profiles and Property Dictionary Simplified

By Eiad Al-Aqqad

Eiad Al-Aqqad

This post originally appeared on Eiad’s Virtualization Team blog.

vCloud Automation Center offers a lot of built-in extensibility features to help you achieve your desired result while minimizing the amount of coding required. Using vCAC custom properties, build profiles, and the property dictionary is just one example of how you can customize the product, minimize coding, and customize the input form. As the property dictionary seems to be the most missed or misunderstood feature of vCAC, followed by build profiles and custom properties, I will try to simplify the explanation of these great features as much as possible. At the end of the article, I will point out more resources for in-depth information on each of these features.

vCAC Custom Properties
Custom properties are the building blocks for build profiles and the property dictionary. The VMware documentation defines custom properties as follows:

“VMware vCloud Automation Center™ custom properties allow you to add attributes of the machines your site provisions, or to override their standard attributes.”

What that means is that vCloud Automation Center utilizes particular variables (custom properties) that contain values that vCAC uses during machine provisioning (such as machine name, machine IP address, port group to use, and so on). vCAC exposes this information as custom properties that you can query or edit to overwrite the default values with a specific value or with user input. This is a very powerful tool, as you can shape the request form to ask the user for input (not requested by the default request form) and act on it without requiring you to do any coding. You can also create your own custom properties to use with your own custom workflows.

Let’s look at a quick example of using vCAC custom properties. The image below shows the default blueprint/VM request form in vCAC:

Default Blueprint Request Form

As you can see, the default VM request form does not ask for a machine hostname or IP address. What if you wanted to allow the user to choose the VM hostname or IP address? You can do that using custom properties, and your request form will look like the screen below:

VCAC Custom Properties

In the above screenshot, I have used the Hostname and VirtualMachine.NetworkN.Address custom properties (where N is the index of the network interface, for example VirtualMachine.Network0.Address) to allow the user to provide the desired VM hostname and IP address that vCAC will use when creating the VM. I did this by going to Infrastructure ==> Blueprint ==> Properties, then adding the two custom properties as shown in the image below.

VCAC Custom Hostname Property

While the above is using existing vCAC custom properties that vCAC uses when deploying a VM, you can always create your own custom properties to pass to your own workflow or just to track information within the request. For a list of custom properties available in vCAC 6, see: vCloud Automation Center 6 Custom Property Reference.

vCAC Build Profiles
A build profile is simply a collection of custom properties under a single title. Imagine you have 20 different custom properties that you need to include with every Windows blueprint. It would be nice to bundle them all in a build profile, then go to these blueprints and assign a single build profile instead of assigning 20 different custom properties to each one. This saves work and provides better consistency. You can create a build profile by going to Infrastructure => Blueprints => Build Profiles => New Build Profile, then adding the desired custom properties to that build profile as shown in the image below.

Creating a Build Profile

The next step is to add that build profile to your blueprint as per the image below.

Add Build Profile to Blueprint
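Conceptually, the gain is the same as replacing many repeated key/value assignments with one named bundle. The sketch below is only an illustration, not a vCAC API or file format; the property names are real vCAC properties used as examples, and the blueprint names are made up:

```python
# Conceptual illustration only, not a vCAC API or file format.
# A build profile is a named bundle of custom properties; each blueprint
# references the bundle instead of repeating every property.
build_profiles = {
    "Windows-Standard": {
        "Hostname": "",                          # empty here; the user fills it in at request time
        "VirtualMachine.Network0.Address": "",   # likewise collected on the request form
        # ...plus the other properties every Windows blueprint needs
    },
}

blueprint_profiles = {
    "Win2008R2-Web": ["Windows-Standard"],   # one profile assignment per blueprint
    "Win2012-App": ["Windows-Standard"],     # instead of 20 individual properties each
}
```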

vCAC Property Dictionary
I am not sure why the property dictionary seems to be the most misunderstood or missed feature of vCAC; it is quite simple to use and can unleash a lot of power. Allowing users to provide values for custom properties, as shown in the previous examples, is quite useful, but most of the time you want to limit the user’s choices using drop-down menus or check boxes. The property dictionary is all about enabling you to do just that.

The vCAC property dictionary lets you define characteristics of custom properties that tailor how they are displayed in the user interface. For example, you can:

  • Associate a property name with a user control, such as a check box or drop-down menu.
  • Specify constraints such as minimum and maximum values or validation against a regular expression.
  • Provide descriptive display names for properties or add label text.
  • Group sets of property controls together and specify the order in which they appear.
  • Create relationships between controls, so that, for example, a location drop-down menu updates the storage and network drop-down menus to show only the values that are valid for that location.

To see how useful the property dictionary can be, let’s take an example where we want to create the drop-down menus illustrated in the diagram below:

Drop Down Menu Sample

The goal of this exercise is to create three drop-down menus that ask the user which location, storage path, and network path to use. Let’s ignore the relationships between the drop-down menus for now and focus on just creating them. To create the property dictionary entries required for these drop-down menus, go to Infrastructure => Blueprints => Property Dictionary.
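Conceptually, each drop-down boils down to a property definition tied to a custom property name, plus a ValueList attribute holding the allowed choices. The sketch below is only an illustration, not a vCAC data structure; the property names and choice values are examples, and the control type and attribute type names are as I recall them from vCAC 6, so verify them against your version:

```python
# Conceptual sketch only. Each entry pairs a custom property (fed by the drop-down)
# with a DropDownList control and a ValueList attribute; in vCAC the ValueList value
# is a comma-separated list of the allowed choices.
property_dictionary = [
    {
        "name": "Custom.Location",                # hypothetical property driving the location menu
        "control_type": "DropDownList",
        "attributes": [{"type": "ValueList", "value": "London,New York,Singapore"}],
    },
    {
        "name": "VirtualMachine.Storage.Name",    # example: constrain the storage path choice
        "control_type": "DropDownList",
        "attributes": [{"type": "ValueList", "value": "Datastore-A,Datastore-B"}],
    },
    {
        "name": "VirtualMachine.Network0.Name",   # example: constrain the network choice
        "control_type": "DropDownList",
        "attributes": [{"type": "ValueList", "value": "VLAN-10,VLAN-20"}],
    },
]
```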

For each drop down menu you want to create, repeat the steps below. In this example I will create the location drop down menu:

  1. Click New Property Definition, then fill in the information as shown in the screenshot below. Note that the name must match the custom property name you want to use.

Location Property Definition

  2. Click the green check mark to save your property definition.
  3. Under Property Attributes, click Edit.
  4. Click New Property Attributes, and then fill in the Property Attributes as shown in the image below.

Property Attribute Drop Down

  5. Repeat the above steps for storage and network, as shown in the images below.

Property Definitions

Network Property

Storage Property Attribute

  6. Now that all the required property definitions and property attributes are created, let’s create a property layout, which organizes the order in which these drop-down boxes are shown to the user. I wanted the drop-downs ordered as follows: Location, Storage, Network. To do this, click New Property Layout and fill in the information as shown in the screenshot below:

New Property Dictionary Layout

  7. Under Property Layout > Property Instances, click Edit, and organize your property instances as shown in the image below.

Organize Property Instances

  8. Let’s create a build profile that includes all the custom properties involved in our property dictionary example, as shown in the image below.

Build Profile Property Dictionary Sample

  9. All that is left now is adding this build profile to your blueprint, as shown below.

Adding the Property Dictionary Build Profile to the Blueprint

  10. Now let’s check how the request form for our blueprint looks:

vCAC Property Dictionary in Action

Notice that in the above example the three drop-down menus created for location, storage, and network operate independently; there is no relationship between them. In other words, choosing a particular location does not filter the options available for storage or network. That kind of filtering is handled by property dictionary relationships, which I cover in the following two posts:


Eiad Al-Aqqad is a consulting architect within the SDDC Professional Services practice. He has been an active consultant using VMware technologies since 2006. He is a VMware Certified Design Expert (VCDX#89), as well as an expert in VMware vCloud, vSphere, and SRM. Read more from Eiad at his blog, Virtualization Team, and follow him on Twitter @VirtualizationT.

Look Mom, No Mouse! (Automating the Management of the Management Portal)

By Andrea Siviero, VMware Senior Solutions Architect

Andrea Siviero

The concept of a Software-Defined Data Center (SDDC) has impressed me since the first time I deployed it.

vRealize Automation’s purpose-built infrastructure and application service delivery capabilities, combined with its Advanced Service Designer and library of vCenter Orchestrator plug-ins and workflows, make automating almost anything as a service relatively easy.

During my work consulting for enterprise-level customers, I’m frequently exposed to new challenges. One customer engagement inspired this fantasy: how to automate the management of the management portal. It sounds like a tongue-twister, but it is actually an interesting question.

SDDC Service Catalog

As soon as you start exploring this idea, you find yourself with a REST client open, talking to your SDDC through its APIs, and you can do almost anything!

REST Client
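To give a flavor of that API conversation, here is a minimal sketch that requests an authentication token and then lists catalog items. It assumes the vRealize Automation identity and catalog-service endpoints from the 6.x documentation (paths and response fields may differ in your version), and the hostname, tenant, and credentials are placeholders:

```python
import requests

VRA = "https://vra.example.com"   # placeholder appliance hostname
TENANT = "mytenant"               # placeholder tenant name

# Ask the identity service for a bearer token (endpoint path assumed from the vRA 6.x API docs).
token = requests.post(
    f"{VRA}/identity/api/tokens",
    json={"username": "user@example.com", "password": "secret", "tenant": TENANT},
    verify=False,  # lab only: skip certificate validation
).json()["id"]

# List the catalog items the user can see (catalog-service endpoint and field names assumed).
items = requests.get(
    f"{VRA}/catalog-service/api/consumer/catalogItems",
    headers={"Authorization": f"Bearer {token}", "Accept": "application/json"},
    verify=False,
).json()

for item in items.get("content", []):
    print(item.get("name"))
```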

However, there is a downside to this approach, which I can sum up in one simple phrase: IT admins don’t “naturally” talk API. :-)

Not long ago, I was sitting in a VMware CTO Ambassador session, and suddenly a bright light appeared in front of my eyes: The CloudClient.

Cloud Client

CloudClient is a plug-in-based command-line interface for traditional provisioning and day-two operations. It eliminates the challenges of dealing with the SSO / CAFE APIs directly, and there is no need to speak JSON (unless you want to).

Providing higher-level “verbs” instead of a myriad of JSON payloads and URIs makes my job supporting customers a little easier, and it gives me a central point from which to talk not only to vRealize Automation but also to other SDDC components such as vCenter Orchestrator, Site Recovery Manager, and Application Director.

Moreover, CloudClient provides a Java SDK, so it can easily be integrated into a third-party solution without slowing down SDDC adoption in the stellar complexity of an enterprise customer’s environment.

For instance, you can browse catalog items, as in the picture below, by simply saying “vcac catalog list,” and then request them. More interestingly, with the admin account you can create a new tenant, and adding items to the catalog is as easy as chatting with your SDDC.

Cloud Client Catalog View
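If you prefer scripting CloudClient rather than typing into its interactive shell, a sketch like the one below works, assuming the bundled cloudclient.sh launcher is on your PATH and that your CloudClient build accepts a command as arguments; both are assumptions, so check the documentation for your version. The command itself is the “vcac catalog list” quoted above:

```python
import subprocess

# Run a single CloudClient command non-interactively and capture its output.
# Assumptions: cloudclient.sh (the launcher shipped in the CloudClient download)
# is on the PATH, and this build accepts the command as arguments.
result = subprocess.run(
    ["cloudclient.sh", "vcac", "catalog", "list"],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout)
```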

A Fool With a Tool is Still a Fool

Getting a tool for doing a project is the beginning, not the end, of your journey. Any time a discussion goes toward tools, any tools really, it’s a good idea to challenge the tool itself.

What I mean is that solutions, not tools, help you achieve your business needs. It’s important to have the right team in place to develop those solutions, which will ensure you implement the right tools for your needs.


Andrea Siviero is an eight-year veteran of VMware and a senior solutions architect in Professional Services Engineering (PSE) for the Software-Defined Datacenter (SDDC), part of the Global Technical Solutions (GTS) team. Prior to PSE, Andrea spent three years as a pre-sales system engineer and three years as a post-sales consultant architect for cloud computing and desktop virtualization solutions, focusing on very large and complex deployments, especially for service providers in the finance and telco sectors.