
Monthly Archives: April 2012

McAfee MOVE Antivirus joins the vShield Endpoint Family

We have seen a tremendous amount of customer interest in optimizing endpoint security in VMware vSphere and VMware View environments.  As server consolidation ratios rise and as large-scale virtual desktop infrastructure (VDI) environments roll out, it is important to take a fresh look at endpoint security.  While the tried-and-true practice of installing a thick security agent per virtual machine is certainly viable, there is a lot to be gained from taking a new approach that is optimized for the virtual environment.

VMware vShield Endpoint offloads antivirus and anti-malware agent processing to a dedicated, secure virtual appliance delivered by VMware partners.  Our offload approach dramatically increases consolidation ratios and performance by eliminating anti-virus “storms”, streamlines deployment, and satisfies compliance requirements.  These capabilities, combined with a choice of industry-leading endpoint security solutions, are fundamental to your journey to the cloud.

VMware is proud to announce that McAfee is now shipping the McAfee MOVE Antivirus solution that integrates with VMware vShield Endpoint.  McAfee MOVE Antivirus provides powerful, comprehensive, and consistent protection, and is managed and reported through the McAfee ePolicy Orchestrator platform.

With the addition of McAfee, we now have four actively shipping solutions that integrate with vShield Endpoint:

Bitdefender Security for Virtualized Environments
http://www.bitdefender.com/sve

Kaspersky Security for Virtualization
http://www.kaspersky.com/products/business/applications/security-virtualization

McAfee MOVE Antivirus
www.mcafee.com/us/products/move-anti-virus.aspx

Trend Micro Deep Security
http://www.trendmicro.com/us/enterprise/cloud-solutions/deep-security/index.html

 

Technical Marketing Update 2012 – Week 17

By Duncan Epping, Principal Architect.


Great white paper by Cormac Hogan on storage protocols. I know many of you have asked for this in the past, so I am sure you will appreciate this paper, which explains the various protocols and how they interoperate with VMware. Excellent work, Cormac!

Blog posts: 

  • vSphere Security Hardening Report Script for vSphere 5 (William Lam) http://bit.ly/Ju91uU 
  • Cool tool update: RVTools 3.3 released! (Duncan Epping) bit.ly/INHvor
  • Removing Previous Local Datastore Label for Reinstall in ESXi 5 (William Lam) http://bit.ly/IfhmSN 
  • Does VMware Support Shared/Switched SAS? (Cormac Hogan) bit.ly/JD50lT
  • What is das.maskCleanShutdownEnabled about? (Duncan Epping) http://bit.ly/JnkKKd 
  • Demystifying Configuration Maximums for VSS and VDS (Venky Deshpande) bit.ly/JzZ7YC
  • VAAI Thin Provisioning Block Reclaim/UNMAP In Action (Cormac Hogan) bit.ly/JpTkB2
  • Aggregating datastores from multiple storage arrays into one Storage DRS datastore cluster (Frank Denneman) bit.ly/I3Bx2z
  • Preparing the hosts in Provider VDCs with PowerCLI (Alan Renouf) bit.ly/IhJ00B 
  • SRM 5.0.1 Upgrade with vSphere Replication (Ken Werneburg) bit.ly/I8yIgD
  • Using the vSphere ESXi Image Builder CLI (Kyle Gleed) bit.ly/IhAznV
  • Retrieving Information from VMware VDS + Cisco Nexus 1000v (William Lam) bit.ly/Ixeaxu 

 

Retrieving Information from VMware VDS + Cisco Nexus 1000v

By William Lam, Sr. Technical Marketing Engineer

Recently, we have been receiving numerous questions about extracting information from a VMware vSphere Distributed Virtual Switch (VDS) and whether it is possible to do the same with third-party distributed virtual switches that integrate with the vSphere platform, such as Cisco's Nexus 1000v. The answer is yes, and we can easily do so with help from the vSphere API.
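As a quick illustration (this is not the code from the full post), here is a minimal PowerCLI sketch that assumes an existing Connect-VIServer session. It pulls a few summary properties from every distributed switch the vCenter Server knows about; because the Nexus 1000v is exposed through the same DistributedVirtualSwitch API type as the VMware VDS, one query covers both:

# Minimal sketch: list every distributed switch (VMware VDS or Cisco Nexus 1000v)
# along with its vendor, version, and port count, straight from the vSphere API.
$switches = Get-View -ViewType DistributedVirtualSwitch -Property Name, Summary
foreach ($dvs in $switches) {
    $product = $dvs.Summary.ProductInfo
    Write-Host ("{0}: vendor={1} version={2} ports={3}" -f `
        $dvs.Name, $product.Vendor, $product.Version, $dvs.Summary.NumPorts)
}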

Continue reading

SRM 5.0.1 Upgrade with vSphere Replication


Posted by
Ken Werneburg
Sr Tech Marketing Manager

Upgrading SRM from 5.0 to 5.0.1 is a very simple process that doesn't require a lot of attention, but there's one quick caveat regarding the vSphere Replication virtual appliances that should be noted.

Because of this, I figured it would be worthwhile to walk you through a couple of different ways to make sure your VR appliances are up to date along with the latest SRM code.

As always, I follow an upgrade process like this:

A) Protected site VC first.  Why? You can still do a recovery on the other site if things go sour for you for any reason!  These are all in-place upgrades and should take minimal time and effort.

1) vCenter Server

2) vSphere Client

3) Web Client Server... oh wait, nope, not for 5.0.1; there's no update.  Be careful you don't just blindly "next-next-next" your way through this or it'll uninstall your Web Client Server.

4) VUM.  Very important, make sure this is up to date!

B) Protected site SRM

1) In-place upgrade of SRM to 5.0.1.


C) Recovery Site vCenter Server (same steps as above)

D) Recovery Site SRM Server (same steps as above)

That was easy.  Now you should be back to a good state with everything protected and running.  All you need to do is log into your vCenter and check that SRM is still functional.  

Don't forget you'll need to update the SRM plugin as well!  This upgrade requires that the vpxclient gets bounced, so do that and make sure SRM is working.  

Ah but wait, SRM now has the vSphere Replication pieces if you've installed them, and we didn't upgrade those as part of the SRM server upgrade.  So how do we do this?  There are a couple of ways.

E) Upgrade vSphere Replication

One way is to upgrade the appliances themselves by logging into them and using the built-in update tools.  The other is to use vSphere Update Manager.

I like logging into the appliances and running the update.  The problem is that it presumes you have an internet connection available to the appliances, and it also assumes your proxy settings are correct, so please double check the appliance configuration in order to do this!  I've done this through a proxy server and it worked like a charm for me.  

For those of you who want to try it via the web interface of the appliance, it's quite straightforward.  Log onto the appliance through the web interface, click on the "update" tab at the top, click on the "Check Updates" action button on the right, and if updates are available, click on the "Install Updates" action. That's it!



But this is far too manual overall, requires internet and proxy access for your appliances, and is not necessarily the most verbose about what's going on behind the scenes. Let's use VUM instead.

The great news is that VUM has built-in "VA Upgrade" baselines that include upgrading the VR component appliances to the latest available build.  You can build your own baselines for the virtual appliances if you want, but in this case there's a predefined baseline that you can use.  


We can simply attach the appropriate baseline to the VR appliances, or to a folder containing them, and remediate.  It'll go through a bunch of actions that you can follow via events in the Tasks and Events tab.
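If you would rather script those two steps, here is a minimal sketch using the Update Manager PowerCLI cmdlets (it assumes the Update Manager PowerCLI snap-in is installed and a Connect-VIServer session is active; the baseline and folder names are illustrative placeholders, not values from this post):

# Illustrative names: substitute the VA upgrade baseline and the folder
# that actually contains your vSphere Replication appliances.
$baseline = Get-Baseline -Name "VA Upgrade to Latest"
$vrFolder = Get-Folder -Name "vSphere Replication Appliances"

# Attach the baseline to the folder and remediate; progress appears under
# Tasks and Events just as it does for a remediation started from the client.
Attach-Baseline -Baseline $baseline -Entity $vrFolder
Remediate-Inventory -Entity $vrFolder -Baseline $baseline -Confirm:$false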


Voila.  They are now upgraded to the latest and greatest.


So I recommend doing all of this through VUM.  It's quick, easy, doesn't require any manual process, and means we don't need to worry about network connectivity for the appliances to get to the outside world, so it's also more secure.  You can also reuse the baseline pretty easily the next time you need to upgrade.  Lastly, VUM has the great ability to do snapshots and rollbacks in case of problems, so it's nicely fixable if things go wrong!

That's it for now – basically, when upgrading SRM from now on, make sure you also remember to upgrade your vSphere Replication components, and my advice to you is to use VUM to do so.

-Ken

 

Using the vSphere ESXi Image Builder CLI

Kyle Gleed, Sr. Technical Marketing Architect, VMware

I’ve had several requests for a brief tutorial on using the vSphere ESXi Image Builder CLI. I hope this post will help people better understand the power of the Image Builder CLI and how easy it is to create and maintain custom ESXi installation images.

Before I get into using the Image Builder CLI, let's review some basic terminology (a short PowerCLI sketch follows the list):

  • vSphere Installation Bundle (VIB): VIBs are the building blocks of the ESXi image. A VIB is akin to a tarball or ZIP archive in that it is a collection of files packaged into a single archive. A detailed description of a VIB can be found here.
  • Software Depot: Software depots are used to package and distribute VIBs. A Software Depot (sometimes referred to as a Software Bundle) is a collection of VIBs specially packaged for distribution. There are two types of depots – online and offline. An online software depot is accessed remotely using HTTP. An offline software depot is downloaded and accessed locally.
  • Image Profile: An Image Profile is the logical collection of VMware and third-party VIBs needed to install an ESXi host. Image profiles created with the Image Builder CLI can be saved as ZIP archives or ISO files.
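
To make the terminology concrete, here is a minimal Image Builder sketch run from a PowerCLI session. The depot path, profile names, and package name are illustrative placeholders rather than values from this post:

# Point Image Builder at an offline software depot (a downloaded ZIP bundle).
Add-EsxSoftwareDepot C:\depots\ESXi500-offline-bundle.zip

# List the image profiles the depot contains.
Get-EsxImageProfile

# Clone a stock profile, add a (hypothetical) extra VIB, and export the result
# as an installable ISO; -ExportToBundle produces a ZIP depot instead.
New-EsxImageProfile -CloneProfile "ESXi-5.0.0-standard" -Name "ESXi50-custom" -Vendor "example"
Add-EsxSoftwarePackage -ImageProfile "ESXi50-custom" -SoftwarePackage "example-driver"
Export-EsxImageProfile -ImageProfile "ESXi50-custom" -ExportToIso -FilePath C:\depots\ESXi50-custom.iso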

Continue reading

Storage Protocol Comparison (A vSphere Perspective) White Paper now available

A number of months ago I published a blog article that compared the different storage protocols found in a vSphere environment. On the back of that posting, a number of folks reached out to me to ask if there was a PDF version of the storage comparison available. Well, now there is. You can pick it up from the VMware Technical Resource Center, which has a great repository of VMware white papers.

The Storage Protocol Comparison white paper can be downloaded by clicking on this link.

Get notified of these blog postings and more VMware storage information by following me on Twitter: @VMwareStorage

 

Demystifying Configuration Maximums for VSS and VDS

In this blog entry, I will spend some time discussing the configuration maximums related to the vSphere standard switch (VSS) and the vSphere distributed switch (VDS). I always get this question: what happens when you cross those configuration maximum limits? It comes up especially with the vSphere Distributed Switch configuration maximums, where there are vCenter Server-level limits as well as host-level limits. I would like to clarify some of the things regarding these limits in this post. Here are the configuration maximums for vSphere 5.0 as they pertain to hosts, VSS, and VDS.

Host Maximums (these apply to both VSS and VDS):

  • Total virtual network switch ports per host: 4096
  • Maximum active ports per host: 1016

VSS Maximums:

  • Port groups per standard switch: 256

VDS Maximums (these are all vCenter Server maximums, as vCenter Server controls the configuration of the VDS):

  • Hosts per VDS: 350
  • Total distributed virtual network switch ports: 30,000
  • Total number of static distributed port groups: 5000
  • Total number of ephemeral port groups: 256

After taking a look at the limits, let's focus our attention on the VSS deployments first. In such deployments you have to configure a VSS on each host, and in some cases there might be multiple VSSs on the same host. When you create a VSS, you have the option to define the number of virtual ports on that specific virtual switch (the default is 128). The next step is to create port groups. VSS supports only ephemeral binding and allocates zero virtual ports when a port group is created. A virtual port is allocated only when a virtual machine or vmknic is connected to the port group. In this deployment, by keeping count of the number of VMs and vmknics on a host, you can tell how many virtual ports are used. You can then compare that number against the host limits.

The host limits are hard limits. A hard limit means that the host enforces the limit: you will not be allowed to create more than 4096 virtual ports or have more than 1016 active virtual ports. If you have multiple VSSs on the host, these port maximums don't change. You might have some VSSs with more VMs connected and some with fewer. As long as the total number of VMs and vmknics on the VSSs is within the maximum range, you are fine. Also, in my opinion the host maximums provide enough virtual ports, and you should not have any problems scaling your environment and achieving higher consolidation ratios.
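As a rough way of making that comparison, here is a minimal PowerCLI sketch (it assumes an existing Connect-VIServer session; the host name is an illustrative placeholder) that counts the VM network adapters and vmknics on one host against the 1016 active-port limit:

# Illustrative host name: substitute one of your own hosts.
$vmhost = Get-VMHost -Name "esx01.example.com"

# Every connected VM network adapter and every vmknic consumes a virtual port.
$vmNics  = ($vmhost | Get-VM | Get-NetworkAdapter).Count
$vmkNics = ($vmhost | Get-VMHostNetworkAdapter -VMKernel).Count

$used = $vmNics + $vmkNics
Write-Host "Approximate virtual ports in use on $($vmhost.Name): $used (active-port limit: 1016)"

This is only an approximation (it counts adapters on powered-off VMs as well), but it gives a quick feel for how far a host is from the hard limits.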

Now let’s look at the VDS deployments, where there are additional limits placed by vCenter Server. Before we dive into the limits discussion on VDS, I would like to point out one main difference between port group configuration on a VSS and distributed port group configuration on a VDS.  A VSS port group supports only ephemeral port binding, while a VDS distributed port group lets you choose from the following three port binding types:

1)    Static binding: Assigns a distributed port when a virtual machine is connected to the distributed port group.

2)    Dynamic binding: Assigns a distributed port when a powered-on virtual machine is connected to the distributed port group. This option is deprecated and won’t be available in future vSphere releases.

3)    Ephemeral binding: There is no port binding with this choice. When you choose this option, the behavior is similar to that of a standard virtual switch (VSS). The number of ports is automatically set to 0, and the port group allocates one port for each connected virtual machine, up to the maximum number of ports available on that port group.

The choice of port binding type on a distributed port group determines how the distributed virtual ports are allocated.

For example, if you choose static port binding for a distributed port group, vCenter Server allocates 128 virtual ports by default. As you can see, this is different from the VSS deployment, where no virtual ports are allocated when a port group is created. Some customers are concerned that they will run out of virtual ports as they create a large number of distributed port groups, or that they will have to manually manage the number of virtual ports per distributed port group to stay within the limits.

To illustrate with an example, if you want to create 400 distributed port groups with the default number of virtual ports, you would need 51,200 virtual ports (400 x 128). This number is above the vCenter Server limit of 30,000 virtual ports. Even though the number of virtual ports is higher than the limit, vCenter Server will still allow you to create the 400 distributed port groups, because vCenter Server limits are soft limits. A soft limit is not enforced, so you can create more distributed port groups or virtual ports than the specified limits allow.
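If you want to see where your own environment stands against that 30,000 soft limit, here is a minimal PowerCLI sketch (again assuming an existing Connect-VIServer session) that totals the ports defined across all distributed port groups known to the vCenter Server:

# Total the ports allocated to every distributed port group (uplink port
# groups are included in this count) and compare against the soft limit.
$portGroups = Get-View -ViewType DistributedVirtualPortgroup -Property Name, Config
$totalPorts = 0
foreach ($pg in $portGroups) { $totalPorts += $pg.Config.NumPorts }
Write-Host "Distributed virtual ports defined: $totalPorts (vCenter Server soft limit: 30,000)"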

However, it is important to note that VMware has tested these maximum limits. If you go beyond those limits, things should still work, but you might encounter other challenges in such big environments, mostly related to manageability and performance of the management system. We are trying to simplify the workflow for customers so that they don't have to manually manage the number of ports available on a distributed port group or worry about the limits. To that end, the Auto Expand feature that is available in vSphere 5.0 grows the number of virtual ports on a distributed port group automatically. For more details on how to configure this feature, please take a look at the blog entry by William Lam here.

Finally, I just want to reiterate that the vCenter Server limits are soft limits and don't stop you from going beyond the tested values, while the host limits are the ones that are enforced. Given the 1016 active-port limit per host, I am sure there is enough capacity to grow as far as consolidation ratios go. I would love to hear your comments on this topic. In the next post I will talk more about the advantages of static port binding and the Auto Expand capability.

 

VAAI Thin Provisioning Block Reclaim/UNMAP In Action

Posted by Cormac Hogan
Technical Marketing Architect (Storage)

I have done a number of blog posts in the recent past related to our newest VAAI primitive, UNMAP. For those who do not know, VAAI UNMAP was introduced in vSphere 5.0 to allow the ESXi host to inform the storage array that files or VMs had been moved or deleted from a Thin Provisioned VMFS datastore. This allowed the array to reclaim the freed blocks. We had no way of doing this previously, so many customers ended up with a considerable amount of stranded space on their Thin Provisioned VMFS datastores.

Now, there were some issues with using this primitive, which meant we had to disable it for a while. Fortunately, 5.0 U1 brought forward some enhancements that allow us to use this feature once again.

Over the past couple of days, my good friend Paudie O'Riordan from GSS has been doing some testing with the VAAI UNMAP primitive against our NetApp array. He kindly shared the results with me so that I can share them with you. The posting is rather long, but the information it contains will be quite useful if you are considering implementing dead space reclamation.

Continue reading

Does VMware Support Shared/Switched SAS?

An interesting observation was made on a previous blog posting of mine which compared different storage protocols. The commenter asked why I didn't include Shared SAS in the comparison (SAS is Serial-Attached SCSI). I personally have not seen a lot of shared SAS configurations, so I decided to have a look at what is on our HCL for Shared or Switched SAS.

I had a bit of bother locating the supported models, however. On the VMware HCL (Hardware Compatibility List), I first selected Storage/SAN, which only lists FC, iSCSI & NAS for the Array Type. At first glance, you might think we do not support SAS storage arrays.

After some guidance from a few folks internally, I have since learnt that there are a lot of SAS arrays on our HCL. To find a list of both Direct-Attached and Switched arrays that have been certified, you need to browse the Array Test Configuration window in the 'Additional Criteria' section and select either SAS Direct Attach or SAS Switched.

An updated search with SAS Switched selected in the Array Test Configuration returned 30 arrays on our HCL, from partners that included DELL, Dot Hill, NetApp & Oracle, among others. Note that 'Test' here simply refers to the fact that the arrays went through a test certification process; it has nothing to do with arrays running in a 'test' environment. VMware fully supports Switched SAS in production environments, as long as the storage array is on our HCL.

We do understand that this is not the easiest information to find, and we are working on a mechanism that will make it a little easier going forward. Bottom line: VMware most certainly supports shared/switched SAS.

Get notified of these blog postings and more VMware storage information by following me on Twitter: @VMwareStorage

Did you know that you can now prioritize I/O Paths in the event of a failover?

Posted by Cormac Hogan
Technical Marketing Architect (Storage)

Historically, when a path failure occurred, we never had a way of selecting which path to fail over to. Rather, the VMW_PSP_MRU path selection policy randomly selected a different active path to the target from all available paths. In ESX 4.1, VMware introduced a feature that allows admins to prioritize/rank different paths, and the one with the highest priority was chosen in the event of a failure on the active path.

 In ESXi 5, this functionality is now merged into the standard VMW_PSP_MRU. An admin can now assign ranks to individual paths. Ranking goes through the pathgroup states in the following order:

  • ACTIVE
  • ACTIVE_UO (Active Unoptimized – an ALUA state)
  • STANDBY

The PSP will then pick a path that has the highest rank for I/O. As long as there are paths in the ACTIVE pathgroup, these will be given preference over paths in the ACTIVE_UO and STANDBY pathgroups, even if the rank of a path in the ACTIVE group is less than that of a path in ACTIVE_UO or STANDBY pathgroup.

When all paths have the same rank, the behaviour is just like normal VMW_PSP_MRU path failover.

Users will be able to get/set path rank using esxcli commands as shown here:

# esxcli nmp psp getconfig --path <vmhba pathname>
# esxcli nmp psp setconfig --config "rank=<number>" --path <vmhba pathname>

# esxcli nmp psp getconfig --device <naa id>
# esxcli nmp psp setconfig --config "rank=<number>" --device <naa id>

The path rank will be persistent across reboots because "esxcli" is used to set it. Whenever a path rank is set, an entry is made in esx.conf, which ensures persistence. A sample esx.conf entry would look like this:

/storage/plugin/NMP/device[naa.60060160f1c014002ca7d9e74df2dd11]/psp = "VMW_PSP_MRU_RANKED"
/storage/plugin/NMP/path[fc.2000001b32100b3d:2100001b32100b3d-fc.50060160b0600c6b:5006016030600c6b-naa.60060160f1c014002ca7d9e74df2dd11]/pspConfig = "rank=2"

As long as there are paths in the ACTIVE pathgroup, the path that has the highest rank in the ACTIVE pathgroup will be selected. If there are no paths in the ACTIVE pathgroup, then the algorithm searches the ACTIVE_UO and STANDBY pathgroups, in that order. During failover, when there are no ACTIVE paths available, ranking will try to activate the path that has the highest rank in the ACTIVE_UO state. If no paths are available in the ACTIVE_UO state, then ranking will pick the STANDBY path that has the highest rank for activation.

Ranking will fail back to a better-ranked path, or to a path with a better state, when such a path becomes available. Note that this is safe with respect to path thrashing, because VMW_PSP_MRU will never fail back to a path which requires activation (e.g. ACTIVE to STANDBY or ACTIVE to ACTIVE_UO).

Get notified of these blog postings and more VMware storage information by following me on Twitter: @VMwareStorage