
Monthly Archives: December 2014

SDDC is the Future


By Michael Francis

 

VMware’s Transformative Growth

Over the last eight years at VMware I have observed so much change, and in my mind it has been transformative change. When I think about my 20 years in IT and the changes I have seen, I feel the emergence of x86 virtualization will be looked upon as one of the most important catalysts for change in information technology history. It has changed the speed of service delivery and the cost of that delivery, and subsequently has enabled innovative business models for computing, such as cloud computing.

I have been part of the transformation of our company in these eight years; we’ve grown from being a single-product infrastructure company to what we are today – an application platform company. Virtualization of compute is now mainstream. We have broadened virtualization to storage and networking, bringing the benefits realized for compute to these new areas. I don’t believe this is incremental or merely evolutionary value. I think this broader virtualization―coupled with intelligent, business policy-aware management systems―will be so disruptive to the industry that it will potentially be considered a separate milestone, on par with x86 virtualization.

Where We Are Now

Here is why I think the SDDC is significant:

  • The software-defined data center (SDDC) brings balance back to the ongoing discussion between the use of public and private computing.
  • It enables the attributes of agility, reduced operational and capital costs, lower security risk, and a new level of stack management visibility.
  • SDDC not only modifies the operational and consumption model for computing infrastructure, but it also modifies the way computing infrastructure is designed and built.
  • Infrastructure is now a combination of software and configuration. It can be programmatically generated based on a specification; hyper-converged infrastructure is one example of this.

As a principal architect on the VMware team responsible for generating tools and intellectual property that help our Professional Services organization and Partners deliver VMware SDDC solutions, I find the last point especially interesting, and it is the one I want to spend some time on.

How We Started

As an infrastructure-focused project resource and lead over the past two decades, I have become very familiar with developing design documents and ‘as-built’ documentation. I remember rolling out Microsoft Windows NT 4.0 from CDs in 1996. There was a guide that showed me what to click and in what order to perform certain steps. There was a lot of manual effort, opportunity for human error, inconsistency between builds, and a lot of potential for the built system to vary significantly from the design specification.

Later, in 2000, I was a technical lead for a systems integrator; we had standard design document templates and ‘as-built’ document templates, and consistency and standardization had become very important. A few of us worked heavily with VBScript, and we started scripting the creation of Active Directory configurations such as Sites and Services definitions, OU structures and the like. We dreamed of the day when we could do a design diagram, click ‘build’, and have scripts build what was in the specification. But we couldn’t get there. The amount of work to develop the scripts, maintain them, and modify them as elements changed was too great. That was when we focused on the operating stack and a single vendor’s back office suite; imagine trying to automate a heterogeneous infrastructure platform.

It’s All About Automated Design

Today we have the ability to leverage the SDDC as an application programming interface (API) that not only abstracts the hardware elements below and can automate the application stack above, but can also abstract the APIs of ecosystem partners.

This means I can write to one API to instantiate a system of elements from many vendors at all different layers of the stack, all based on a design specification.

Our dream in the year 2000 is something customers can achieve in their data centers with SDDC today. To be clear – I am not referring to just configuring the services offered by the SDDC to support an application, but also to standing up the SDDC itself. The reality is, we can now have a hyper-converged deployment experience where the playbook of the deployment is driven by a consultant-developed design specification.
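As a rough illustration of what "programmatically generated from a design specification" can mean in practice, here is a minimal Python sketch that expands a declarative spec into an ordered build plan. Everything here (the layer names, the spec format, the `build_plan` function) is a hypothetical illustration, not the API of any VMware tool:

```python
# Illustrative sketch: turning a declarative design specification into an
# ordered provisioning plan, the way a deployment tool might. All names
# and the layer ordering are hypothetical.

LAYER_ORDER = ["hardware", "compute", "storage", "network", "management"]

def build_plan(spec):
    """Expand a design spec (dict of layer -> list of components) into an
    ordered list of provisioning steps, lowest layer first."""
    steps = []
    for layer in LAYER_ORDER:
        for component in spec.get(layer, []):
            steps.append((layer, component))
    return steps

# Example design specification for a small SDDC build.
spec = {
    "compute": ["esxi-host-01", "esxi-host-02"],
    "storage": ["vsan-cluster"],
    "network": ["dvswitch", "nsx-manager"],
    "management": ["vcenter"],
}

for layer, component in build_plan(spec):
    print(f"provision {layer}: {component}")
```

The point of the sketch is the shape of the idea: the specification is data, and the deployment order falls out of it mechanically rather than from a human following a run book.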

For instance, our partners and our Professional Services organization have access to what we refer to as the SDDC Deployment Tool (an imaginative name, I know), or SDT for short. This tool can automate the deployment and configuration of all the components that make up the software-defined data center. The following screenshot illustrates this:

MFrancis1

 

Today this tool deploys the SDDC elements in a single use case configuration.

In VMware’s Professional Services Engineering group we have created a design specification for an SDDC platform. It is modular and completely instantiated in software. Our Professional Services Consultants and Partners can use this intellectual property to design and build the SDDC.

What Comes Next?

I believe our next step is to architect our solution design artifacts so the SDDC itself can be described in a format that allows software―like SDT―to automatically provision and configure the hardware platform, the SDDC software fabric, and the services of the SDDC to the point where it is ready for consumption.

A consultant could design the specification of the SDDC infrastructure layer and have that design deployed in a similar way to hyper-converged infrastructure―but allowing the customer to choose the hardware platform.

As I mentioned at the beginning, the SDDC is not just about technology, consumption and operations: it provides the basis for a transformation in delivery. To me a good analogy right now is the 3D printer. The SDDC itself is like the plastic that can be molded into anything; the 3D printer is the SDDC deployment tool, and our service kits would represent the electronic blueprint the printer reads to then build up the layers of the SDDC solution for delivery.

This will create better and more predictable outcomes and also greater efficiency in delivering the SDDC solutions to our customers as we treat our design artifacts as part of the SDDC code.


Michael Francis is a Principal Systems Engineer at VMware, based in Brisbane.

App Volumes AppStacks vs. Writable Volumes

By Dale Carter, Senior Solutions Architect, End-User Computing

With the release of VMware App Volumes I wanted to take the time to explain the difference between AppStacks and Writable Volumes, and how the two need to be designed as you start to deploy App Volumes.

The graphic below shows the traditional way to manage your Windows desktop, as well as the way things have changed with App Volumes and the introduction of “Just-in-time” apps.

DCarter AppVolumes v Writable Volumes 1

 

So what are the differences between AppStacks and Writable Volumes?

AppStacks

An AppStack is a virtual disk that contains one or more applications that can be assigned to a user as a read-only disk. A user can have one or many AppStacks assigned to them depending on how the IT administrator manages the applications.

When designing for AppStacks it should be noted that an AppStack is deployed in a one-to-many configuration. This means that at any one time an AppStack could be connected to one or hundreds of users.

DCarter AppVolumes v Writable Volumes 2

 

When designing storage for an AppStack it should also be noted that App Volumes does not change the IOPS required by an application; it does, however, consolidate those IOPS onto a single virtual disk. So, like any other virtual desktop technology, it is critical to know your applications and their requirements, and it is recommended to do an application assessment before moving to a large-scale deployment. Lakeside Software and Liquidware Labs both publish software for doing application assessments.

For example, if you know that on average the applications being moved to an AppStack use 10 IOPS, and that the AppStack has 100 users connected to it, you will require 1,000 IOPS on average (IOPS per user x number of users) to support that AppStack. You can see why it is key to design your storage correctly for AppStacks.
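The sizing arithmetic above can be captured in a couple of lines (a sketch; the 10 IOPS and 100-user figures are just the example numbers from the text):

```python
def appstack_iops(iops_per_user, users):
    """Average IOPS an AppStack must sustain: per-user IOPS x connected users."""
    return iops_per_user * users

# The example from the text: 10 IOPS per user, 100 users on one AppStack.
required = appstack_iops(10, 100)
print(required)  # 1000
```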

In large-scale deployments it may be advisable to create copies of AppStacks, place them across storage LUNs, and assign a subset of users to each copy for best performance.
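A minimal sketch of that copy-and-spread approach, assuming a hypothetical per-LUN IOPS budget; the function names and numbers are illustrative only:

```python
import math

def appstack_copies(total_users, iops_per_user, lun_iops_budget):
    """Copies of an AppStack (one per LUN) needed so no LUN exceeds its
    IOPS budget. The budget figure is a per-environment assumption."""
    total_iops = total_users * iops_per_user
    return math.ceil(total_iops / lun_iops_budget)

def assign_users(users, copies):
    """Round-robin users across AppStack copies for an even spread."""
    groups = [[] for _ in range(copies)]
    for i, user in enumerate(users):
        groups[i % copies].append(user)
    return groups

# 300 users at 10 IOPS each against LUNs budgeted at 1,000 IOPS.
copies = appstack_copies(total_users=300, iops_per_user=10, lun_iops_budget=1000)
print(copies)  # 3
```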

DCarter AppVolumes v Writable Volumes 3

 

Writable Volumes

Like AppStacks, a Writable Volume is also a virtual disk, but unlike an AppStack, a Writable Volume is configured in a one-to-one configuration; each user has their own assigned Writable Volume.

DCarter AppVolumes v Writable Volumes 4

 

When an IT administrator assigns a Writable Volume to a user, the first thing the administrator needs to decide is what type of data the user will be able to store in the Writable Volume. There are three choices:

  • User Profile Data Only
  • User Installed Applications Only
  • Both Profile Data and User Installed Applications

It should be noted that App Volumes is not a profile management tool, but it can be used alongside any user-environment management tool currently in use.

When designing for Writable Volumes, the storage requirements differ from those for AppStacks. Where an AppStack requires only read I/O, a Writable Volume requires both read and write I/O. The IOPS for a Writable Volume will also vary per user, depending on how the individual user works with their data and on the type of data the IT administrator allows the user to store in their Writable Volume.

IT administrators should monitor their users and how they access their Writable Volume; this will help them manage how many Writable Volumes can be configured on a single storage LUN.
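One way to turn that monitoring data into a placement rule is a simple per-LUN ceiling. This is an illustrative sketch only; both the LUN IOPS budget and the per-user average are assumed numbers you would derive from your own monitoring:

```python
import math

def writable_volumes_per_lun(lun_iops_budget, avg_user_iops):
    """Rough ceiling on how many Writable Volumes fit on one LUN, given a
    measured average read+write IOPS per user. Both inputs should come from
    monitoring real users; the figures below are illustrative."""
    return math.floor(lun_iops_budget / avg_user_iops)

# Hypothetical example: a LUN budgeted at 2,000 IOPS, users averaging 25 IOPS.
print(writable_volumes_per_lun(lun_iops_budget=2000, avg_user_iops=25))  # 80
```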

Hopefully this blog helps describe the differences between AppStacks and Writable Volumes, and the differences that should be taken into consideration when designing for each.

I would like to thank Stephane Asselin for his input on this blog.


Dale is a Senior Solutions Architect and member of the CTO Ambassadors. Dale focuses on the End-User Computing space, where he has become a subject matter expert in a number of VMware products. Dale has more than 20 years’ experience in IT, having started his career in Northern England before moving to Spain and finally the USA. He currently holds a number of certifications, including VCP-DV, VCP-DT, VCAP-DTD and VCAP-DTA.

For updates you can follow Dale on Twitter @vDelboy.

App Volumes AppStack Creation

By Dale Carter, Senior Solutions Architect, End-User Computing

VMware App Volumes provides just-in-time application delivery to virtualized desktop environments. With this real-time application delivery system, applications are delivered to virtual desktops through VMDK virtual disks, without modifying the VMs or the applications themselves. Applications can be scaled out with superior performance, at lower cost, and without compromising the end-user experience.

In this blog post I will show you how easy it is to create a VMware App Volumes AppStack, and how that AppStack can then be deployed to hundreds of users.

When App Volumes is configured with VMware Horizon View, an AppStack is a read-only VMDK file that is attached to a user’s virtual machine; the App Volumes Agent then merges the two or more VMDK files so the Microsoft Windows operating system sees them as a single drive. This way applications appear to the Windows OS as if they were natively installed rather than residing on a separate disk.

To create an App Volumes AppStack follow these simple steps.

  1. Log in to the App Volumes Manager Web interface.
  2. Click Volumes.
    DCarter Volumes
  3. Click Create AppStack.
    DCarter AppStack
  4. Give the AppStack a name. Choose the storage location and give it a description (optional). Then click Create.
    DCarter Create AppStack
  5. Choose to either Perform in the background or Wait for completion and click Create.
    DCarter Create
  6. vCenter will now create a new VMDK for the AppStack to use.
  7. Once vCenter finishes creating the VMDK the AppStack will show up as Un-provisioned. Click the + sign.
    DCarter
  8. Click Provision.
    DCarter Provision
  9. Search for the desktop that will be used to install the software. Select the Desktop and click Provision.
    DCarter Provision AppStack
  10. Click Start Provisioning.
    DCarter Start Provisioning
  11. vCenter will now attach the VMDK to the desktop.
  12. Open the desktop that will be used for provisioning the new software. You will see the following message. DO NOT click OK yet; you will click OK only after the software has been installed.
    DCarter Provisioning Mode
  13. Install the software on the desktop. This can be just one application or a number of applications. If reboots are required between installs that is OK. App Volumes will remember where you are after the install.
  14. Once all of the software has been installed click OK.
    DCarter Install
  15. Click Yes to confirm and reboot.
    DCarter Reboot
  16. Click OK.
    DCarter 2
  17. The desktop will now reboot. After the reboot you must log back in to the desktop.
  18. After you log in you must click OK. This will reconfigure the VMDK on the desktop.
    DCarter Provisioning Successful
  19. You can now connect to the App Volumes Manager Web interface and see that the AppStack is ready to be assigned.
    DCarter App Volumes Manager

Once you have created the AppStack you can assign the AppStack to an Active Directory object. This could be a user, computer or user group.

To assign an AppStack to a user, computer or user group, follow these simple steps.

  1. Log in to the App Volumes Manager Web interface.
  2. Click Volumes.
    DCarter Volumes Dashboard
  3. Click the + sign by the AppStack you want to assign.
  4. Click Assign.
    DCarter Assign
  5. Search for the Active Directory object. Select the user, computer, OU or user group to assign the AppStack to, and click Assign.
    DCarter Assign Dashboard
  6. Choose either to assign the AppStack at the next login or immediately, and click Assign.
    DCarter Active Director
  7. The users will now have the AppStack assigned to them and will be able to launch the applications as they would any normal application.
    DCarter AppStack Assign

By following these simple steps you will be able to quickly create an AppStack and simply deploy that AppStack to your users.



“Gotchas” and Lessons Learned When Using Virtual SAN

By Jonathan McDonald

There are certainly a number of blogs on the Web that talk about software-defined storage, and in particular Virtual SAN. But as someone who has worked at VMware for nine years, my goal is not to rehash the same information, but to provide insights from my experiences.

At VMware, much of my time was spent working for Global Support Services; however, over the last year-and-a-half, I have been working as a member of the Professional Services Engineering team.

As a part of this team, my focus is now on core virtualization elements, including vSphere, Virtual SAN, and Health Check Services. Most recently I was challenged with getting up to speed on Virtual SAN and developing an architecture design for it. At first this seemed pretty intimidating, since I had only heard the marketing details prior to this; however, Virtual SAN truly did live up to the hype about being “radically simple”. I found that the more I worked with Virtual SAN, the less concerned I became with the underlying storage. After having used Virtual SAN and tested it in customer environments, I can honestly say my mind is very much changed because of the absolute power it gives an administrator.

To help simplify the design process I broke it out into the following design workflow, not only to simplify it for myself, but also to help anyone else who is unfamiliar with the design decisions required to successfully implement Virtual SAN.

Workflow for a Virtual SAN Design_JMcDonald

Workflow for a Virtual SAN Design

When working with a Virtual SAN design, this workflow can be quite helpful. To further simplify it, I break it down into four key areas:

  1. Hardware selection – In absolutely every environment I have worked in, hardware selection has been a challenge. I would guess that 75 percent of the problems I have seen in implementing Virtual SAN have been the result of hardware selection or configuration, including things such as unsupported devices or incorrect firmware/drivers. Note: VMware does not provide support for devices that are not on the Virtual SAN Compatibility List. Be sure the hardware you select is on the list!
  2. Software configuration – The configuration itself is simple—rarely have I seen questions about actually turning it on. You merely click a check box, and it configures itself (assuming, of course, that the underlying configuration is correct). If it is not, the result can be mixed, such as when the networking is not configured correctly or the disks have not been presented properly.
  3. Storage policy – The storage policy is at first a huge decision point. This is what gives Virtual SAN its power: the ability to configure the performance and availability characteristics of each virtual machine.
  4. Monitoring/performance testing/failure testing – This final area covers how to monitor the environment and test the configuration.

All of these things should be taken into account in any design for Virtual SAN; otherwise the design is not really complete. Now, I could talk through a lot of this for hours. Rather than doing that, I thought it would be better to post my top “gotcha” moments, along with the lessons learned from the projects I have been involved with.

Common “Gotchas”

Inevitably, “gotcha” moments will happen when implementing Virtual SAN. Here are the top moments I have run into:

  1. Network configuration – No matter what the networking team says, always validate the configuration. The “Misconfiguration detected” error is by far the most common issue I have seen. Normally this means that either the port group has not been correctly configured for Virtual SAN or multicast has not been set up properly. If I were to guess, most of the issues I have seen are a result of multicast setup. On Cisco switches, unless an IGMP snooping querier has been configured OR IGMP snooping has been explicitly disabled on the ports used for Virtual SAN, configuration will generally fail. In the default configuration it is simply not set up, so even if the network admin says it is configured properly, it may not be configured at all. Double-check it to avoid any pain.
    Network Configuration_JMcDonald
  2. Network speed – Although 1 Gb networking is supported, and I have seen it operate effectively for small environments, 10 Gb networking is highly recommended for most configurations. I don’t say this just because the documentation says so. From experience, what it really comes down to is not the regular everyday usage of Virtual SAN; where people run into problems is when an issue occurs, such as during failures or periods of heavy virtual machine creation. Replication traffic during these periods can be substantial and cause huge performance degradation while it occurs. The only way to know is to test what happens during a failure or a peak provisioning cycle. This testing is critical, as it tells you what the expected performance will be. When in doubt, always use 10 Gb networking.
  3. Storage adapter choice – Although seemingly simple, the queue depth of the controller should be greater than 256 to ensure the best performance. This is not as much of an issue now as it was several months ago, because the VMware Virtual SAN Compatibility List should no longer include any cards with a queue depth under 256. Be sure to verify, though. As an example, one card, when first released, artificially limited its queue depth in the driver software; performance was dramatically impacted until an updated driver was released.
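One practical way to "always validate the configuration" is to compare every host's Virtual SAN network settings side by side before escalating to the networking team. The sketch below is illustrative: the configuration dicts are stand-ins for data you would pull from each host (for example via esxcli or the vSphere API), and the key names are assumptions:

```python
def find_mismatches(host_configs, keys=("vlan", "multicast_group", "mtu")):
    """Return the settings on which hosts disagree, with each host's value.
    A clean cluster returns an empty dict."""
    mismatches = {}
    for key in keys:
        values = {host: cfg.get(key) for host, cfg in host_configs.items()}
        if len(set(values.values())) > 1:
            mismatches[key] = values
    return mismatches

# Illustrative per-host settings: one host on the wrong group and MTU.
configs = {
    "esxi-01": {"vlan": 200, "multicast_group": "224.1.2.3", "mtu": 9000},
    "esxi-02": {"vlan": 200, "multicast_group": "224.1.2.3", "mtu": 9000},
    "esxi-03": {"vlan": 200, "multicast_group": "224.2.3.4", "mtu": 1500},
}
print(find_mismatches(configs))
```

A check like this will not tell you whether an IGMP snooping querier exists on the switch, but it quickly rules out (or pins down) host-side disagreement.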

Lessons Learned

There are always lessons to be learned when using new software, and ours came at the price of a half or full day’s work troubleshooting issues. Here’s what we figured out:

  1. Always verify firmware/driver versions – This one always seems to be overlooked, but I am stating it because of experiences onsite with customers. One example that comes to mind is three identical servers, bought and shipped in the same order, that we were using to configure Virtual SAN. Two of them worked fine; the third just wouldn’t cooperate, no matter what we did. After investigating for several hours we found that not only would Virtual SAN not configure, but all drives attached to that host were read-only. The utility provided with the card itself showed that the card was a revision behind on the firmware. As soon as we upgraded the firmware, the host came online and everything worked brilliantly.
  2. Pass-through/RAID0 controller configuration – It is almost always recommended to use a pass-through controller with Virtual SAN, so that Virtual SAN owns the drives and has full control of them. In many cases, however, the controller only offers a RAID0 mode. Proper configuration of this mode is required to avoid problems and to maximize performance for Virtual SAN. First, ensure any controller caching is set to 100% read cache. Second, configure each drive as its own “array” rather than one giant array of disks. This will ensure it is set up properly. As an example of an incorrect configuration that can cause unnecessary overhead, several times I have seen all disks configured as a single RAID volume on the controller. This shows up as a single disk to the operating system (ESXi in this case), which is not what Virtual SAN wants. To fix this you have to go into the controller and configure each disk individually. You also have to ensure the partition table (if previously created) is removed, which can in many cases involve zeroing out the drive if there is no option to remove the header.
  3. Performance testing – The lesson learned here is that you can do an infinite amount of testing; the hard part is deciding where to start and stop. Wade Holmes from the Virtual SAN technical marketing team at VMware has an amazing blog series on this topic that I highly recommend reviewing for guidance. His methodology allows both basic and more in-depth testing of your Virtual SAN configuration.
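The firmware lesson above lends itself to a trivial automated check: compare each card's installed firmware against the minimum revision you have validated. A minimal sketch, using a naive dotted-version comparison (real firmware version strings may need vendor-specific parsing, and the host names and versions here are hypothetical):

```python
def firmware_outdated(installed, required):
    """True if the installed dotted version is numerically older than required.
    Assumes purely numeric dotted versions, e.g. "4.2.1"."""
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(installed) < as_tuple(required)

# Hypothetical inventory: controller firmware found on each host vs. a baseline.
baseline = "4.3.0"
hosts = {"esxi-01": "4.3.0", "esxi-02": "4.3.0", "esxi-03": "4.2.1"}
stale = [host for host, version in hosts.items()
         if firmware_outdated(version, baseline)]
print(stale)  # ['esxi-03']
```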

I hope these pointers help in your evaluation and implementation of Virtual SAN. Before diving head first into anything, I always like to make sure I am informed about the subject matter. Virtual SAN is no different. To be successful you need to make sure you have genuine subject matter expertise behind the design, whether in-house or from a professional services organization. Remember, VMware is happy to be your trusted advisor if you need assistance with Virtual SAN or any of our other products!


Jonathan McDonald is a Technical Solutions Architect on the Professional Services Engineering team. He currently specializes in developing architecture designs for core virtualization and software-defined storage, as well as providing best practices for upgrades and health checks in vSphere environments.