Category Archives: 2 Storage Management

VMware Virtual SAN Alarms for vCenter Server with PowerCLI

I was recently involved in a couple of customer conversations where the main topics were monitoring and troubleshooting events in vCenter, particularly for Virtual SAN.

I know this particular topic has been covered a few times in the past, not only on the VMware corporate storage blog but also on other community blogs. To be more specific, one of the VSAN Champions, William Lam, has covered it extensively on his personal blog.

The work we have done on vCenter Server alarms and Virtual SAN stems from the findings in two articles published by William. For more information on which vCenter Server alarms are recommended for Virtual SAN and how to add and configure them, take a look at those articles.

With vSphere 6.0 and Virtual SAN 6.0 nearing general availability, this script can make things a lot easier for all Virtual SAN customers and provide a simplified way to get all of the available vCenter Server alarms for Virtual SAN added and configured within seconds.

I got a chance to work on this little nugget with one of the baddest PowerCLI gurus on the planet, Alan Renouf, another VSAN Champion, as well as William Lam, both members of the VMware virtualization team codenamed #TheWreckingCrew. Here is PowerCLI sample code that can be used to add and configure all of the vCenter Server alarms for Virtual SAN. These alarms are applicable to both Virtual SAN 5.5 and 6.0. Continue reading

What’s All the Buzz About Software-Defined Storage?

By now, you’ve more than likely heard something about Software-Defined Storage. With every mention of the term, you may be wondering, “What does it mean for me?”

Wonder no longer!

The VMware Software-Defined Storage approach enables a fundamentally more efficient operational model, driving transformation through the hypervisor and bringing to storage the same operational efficiency that server virtualization brought to compute. Software-Defined Storage will enable you to better handle some of the most pressing challenges storage systems face today.

During this webcast, Mauricio Barra, Senior Product Marketing Manager at VMware, will discuss the VMware Software-Defined Storage vision, the role of the hypervisor in transforming storage, as well as key architectural components of VMware Software-Defined Storage.

If you are looking to understand how Software-Defined Storage, along with the enhanced VMware Virtual SAN 6 and new VMware vSphere Virtual Volumes, can benefit your organization, now is your chance.

Register today and take the next step toward making Software-Defined Storage a reality.

Be sure to subscribe to the Virtual SAN blog or follow our social channels at @vmwarevsan and Facebook.com/vmwarevsan for the latest updates.

For more information about VMware Virtual SAN, visit http://www.vmware.com/products/virtual-san.

vSphere Virtual Volumes Interoperability: VAAI APIs vs VVOLs

With the vSphere 4.1 release, VMware introduced block-based VAAI APIs. These APIs helped improve VMFS performance by offloading some of the heavier operations to the storage array. In subsequent releases, VMware added VAAI APIs for NAS, thin provisioning, and T10 command support for the block VAAI APIs.

Now with Virtual Volumes (VVOLs) VMware is introducing a new virtual machine management and integration framework that exposes virtual disks as the primary unit of data management for storage arrays. This new framework enables array-based operations at the virtual disk level that can be precisely aligned to application boundaries with the capability of providing a policy-based management approach per virtual machine.

The question now is what happens to the VAAI APIs (NAS and Block) and how Virtual Volumes will co-exist with them. With Virtual Volumes, aside from the data path, the ESXi hosts also control the connection path to the storage arrays; the Vendor Provider typically arranges that path. In this sense, Virtual Volumes can be considered a richer extension of the VAAI NAS APIs. In July of last year I published an article, “Virtual Volumes (VVols) vSphere APIs & Cloning Operation Scenarios”, in which I discussed the interoperability between VAAI and VVOLs during cloning operations in different scenarios; consider having another look. Now let’s go over a set of interaction scenarios between the VAAI APIs and Virtual Volumes.


VAAI Block and VVOLs:

VAAI Block defines basic SCSI primitives, which allow vSphere (primarily VMFS) to offload pieces of its operations to the array. There is still a heavy dependency on VMFS playing the role of orchestrator and sending individual VAAI Block commands to the storage array.

With VVOLs, the storage arrays are aware of each virtual machine's disks and can therefore efficiently perform operations such as snapshots, clones, and zeroing directly on those disks. Still, the VAAI Block and thin-provisioning primitives co-exist with VVOLs:

  • ATS – All config VVOL objects stored in a block VVOLs datastore are formatted with VMFS and hence require ATS command support. This support is detected based on the ATS support of the Protocol Endpoint (PE) LUN to which the VVOLs are bound.
  • XCOPY – With VVOLs, ESXi will always try to use the array-based VVOL copy mechanisms defined by the copyDiffsToVirtualVolume or cloneVirtualVolume primitives. If these are not supported, it falls back to software copy. Since software copy involves copying data between Protocol Endpoint (PE) LUNs and VMFS LUNs, there is still potential to use the XCOPY command during the software data copy: when falling back to software copy, vSphere will use XCOPY when moving a virtual machine from a VMFS datastore to a VVOLs datastore or between two VVOLs datastores. In the first release, vSphere will not try to use XCOPY if the virtual machine is moving from a VVOLs datastore to a VMFS datastore. vSphere detects XCOPY support for an individual VVOL based on the VAAI XCOPY support of the PE LUN to which it is bound.
  • Block Zeroing – The main purpose of this primitive is to initialize thick disks provisioned on VMFS datastores, so it is not used for VVOLs. With VVOLs, the VM provisioning type is specified as part of the profile information passed during VVOL creation, and config VVOLs, which are formatted with VMFS, are "thin" by definition. A config VVOL is also very small (4GB by default) and contains only small files such as disk descriptors, VM configuration files, stats, and log data. So the Block Zeroing primitive is not used for VVOLs by vSphere. (A quick way to check which of these primitives a given device reports as supported is shown right after this list.)
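To see which of these block primitives a given device actually reports as supported, the VAAI status can be queried per device through ESXCLI. Here is a minimal PowerCLI sketch; it assumes an existing Connect-VIServer session, a hypothetical host name, and the Get-EsxCli -V2 interface available in more recent PowerCLI releases:

# Query VAAI primitive support (ATS, Clone/XCOPY, Zero, Delete/UNMAP) for every
# device seen by the host; equivalent to "esxcli storage core device vaai status get".
$esxcli = Get-EsxCli -VMHost "esxi-host-01.lab.local" -V2
$esxcli.storage.core.device.vaai.status.get.Invoke()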

VAAI NAS and VVOLs:

Unlike SCSI, NFSv3 is a frozen protocol, which means all VAAI NAS features came via private RPCs issued by vendor plugins. VVOLs extends this model of communicating outside the basic protocol: VVOLs defines a rich set of VASA APIs to allow offload of most vSphere operations. With vSphere 6.0, existing VAAI NAS will continue to work, but VVOL datastores will offer a richer and faster experience than VAAI NAS. Also, VVOLs doesn't need any vendor-specific plugin installation. Another noteworthy point regarding NAS VAAI and Storage vMotion is that NAS VAAI snapshots cannot be migrated: when an attempt is made to migrate a virtual machine with NAS VAAI snapshots, the snapshot hierarchy is collapsed and all snapshot history is lost. This is not the case with VVOLs; furthermore, snapshot hierarchies can be translated between NFS (non-VAAI), VMFS, VSAN, and VVOL (any source-to-target combination of the four).

VAAI Thin-Provisioning and VVOLs:

  • Soft Threshold Warnings – Similar to a VMFS datastore with thin-provisioning support, a soft threshold warning for any VVOL virtual machine's I/O will be surfaced in vCenter, and the corresponding container gets flagged appropriately (the container receives the yellow warning icon when a soft threshold warning is issued). This can potentially be confusing for the vSphere admin, because the warning is virtual machine specific yet the message does not indicate which virtual machine has the problem. This will be corrected in a future update.
  • Hard Threshold Warnings – Hard threshold warning behavior is similar to that on a VMFS datastore. When a VVOL virtual machine's I/O hits a hard threshold, the corresponding virtual machine is stunned. The administrator can resume the virtual machine after provisioning more space, or stop it completely.
  • UNMAP – Since there are no disks managed by VMFS, vSphere will not actively use the UNMAP primitive itself, although it will pass UNMAP through to the backing VVOLs when the guest issues it. As with XCOPY and ATS, vSphere detects UNMAP support for individual VVOLs based on the VAAI UNMAP support of the Protocol Endpoint LUN to which each is bound. vSphere also will not enforce any alignment criteria when UNMAP is issued by the guest; this behavior is very similar to that of an RDM LUN. With VVOLs, UNMAP commands go from the guest directly to the storage array the same way all other I/O is sent, so the array finally sees the individual UNMAP commands that guest operating systems issue. For example, a Windows Server 2012 guest immediately becomes a source of UNMAP commands. On Linux, on the other hand, the filesystem checks the SCSI version supported by the virtual device and won't issue UNMAP at the current level of SCSI support presented (SCSI-2); that will be addressed in a future release. (A quick in-guest check for Windows is shown right after this list.)
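As a side note, whether a given Windows guest will actually emit those UNMAP commands can be verified from inside the guest itself. The sketch below is a standard Windows check (run from an elevated PowerShell prompt in the guest); it is not a VVOL-specific command:

# DisableDeleteNotify = 0 means the guest OS sends TRIM/UNMAP for deleted blocks;
# DisableDeleteNotify = 1 means delete notifications (and therefore UNMAP) are suppressed.
fsutil behavior query DisableDeleteNotify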

Now let’s identify the supported operations and behavior for the different primitives.

Primitives Supported Operations and Behavior

Powered On Storage vMotion without snapshots

For a powered on VM without snapshots, the Storage vMotion driver coordinates the copy. The Storage vMotion driver will use the data mover to move sections of the current running point. The data mover will employ “host orchestrated hardware offloads” (XCOPY, etc) when possible.

Block VAAI & Block VVOLs:
– Bitmap APIs will be used to determine only the relevant blocks to migrate (Space efficiency optimization)
– XCOPY will be used to migrate (host orchestrated offload)

NAS VAAI
– No optimizations

NAS VVOL
– Bitmap APIs will be used to determine only the relevant blocks to migrate (Space efficiency optimization)

Powered On Storage vMotion with snapshots

For a powered on VM with snapshots, the migration of snapshots is done first, then the Storage vMotion driver will use the data mover to move the current running point.

Block VAAI
– Bitmap APIs will be used to determine only the relevant blocks to migrate (Space efficiency optimization)
– XCOPY will be used to migrate snapshots + current running point

Block VVOL
– Bitmap APIs will be used to determine only the relevant blocks to migrate (Space efficiency optimization)
– The cloneVirtualVolume and copyDiffsToVirtualVolume VASA APIs will be used to migrate all snapshots (Full hardware offload)
– XCOPY will be used to migrate the current running point (host orchestrated offload)

NAS VAAI
– NAS VAAI cannot migrate snapshots
– No further optimization

NAS VVOL
– Bitmap APIs will be used to determine only the relevant blocks to migrate (Space efficiency optimization)
– The cloneVirtualVolume and copyDiffsToVirtualVolume VASA APIs will be used to migrate all snapshots (Full hardware offload)

Powered Off Storage vMotion without snapshots
For a powered off VM, the Storage vMotion driver is not in the picture. So, effectively a Storage vMotion of a powered off VM is a logical move (Clone + Delete Source).

Block VAAI
– Bitmap APIs will be used to determine only the relevant blocks to migrate (Space efficiency optimization)
– XCOPY will be used to migrate current running point

Block VVOL
– Bitmap APIs will be used to determine only the relevant blocks to migrate (Space efficiency optimization)
– The cloneVirtualVolume VASA API will be used to migrate the current running point (Full hardware offload)
– The copyDiffsToVirtualVolume VASA APIs will be used to migrate all snapshots (Full hardware offload)

NAS VAAI
– NAS VAAI clone offload will be employed to migrate the current running point

NAS VVOL (Same as block VVOL)
– Bitmap APIs will be used to determine only the relevant blocks to migrate (Space efficiency optimization)
– The cloneVirtualVolume VASA API will be used to migrate the current running point (Full hardware offload)
– The copyDiffsToVirtualVolume VASA APIs will be used to migrate all snapshots (Full hardware offload)

Powered off Storage vMotion with snapshots
– Same general idea as above, just with snapshots too…

Block VAAI
– Bitmap APIs will be used to determine only the relevant blocks to migrate (Space efficiency optimization)
– XCOPY will be used to migrate current running point + snapshots

Block VVOL
– Bitmap APIs will be used to determine only the relevant blocks to migrate (Space efficiency optimization)
– The cloneVirtualVolume VASA API will be used to migrate the current running point + snapshots (Full hardware offload)

NAS VAAI
– NAS VAAI cannot migrate snapshots
– NAS VAAI clone offload will be employed to migrate the current running point

NAS VVol (Same as block VVOL)
– Bitmap APIs will be used to determine only the relevant blocks to migrate (Space efficiency optimization)
– The cloneVirtualVolume VASA API will be used to migrate the current running point + snapshots (Full hardware offload)
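All of the scenarios above are exercised through an ordinary Storage vMotion; no special invocation is needed to benefit from the offloads. A minimal PowerCLI sketch, assuming an existing Connect-VIServer session and hypothetical VM and datastore names:

# Relocate a VM's storage; vSphere picks the most efficient path available
# (cloneVirtualVolume/copyDiffsToVirtualVolume, XCOPY, or software copy)
# based on what the source and target sides support.
$vm = Get-VM -Name "app-vm-01"
$destination = Get-Datastore -Name "VVOL-Datastore-01"
Move-VM -VM $vm -Datastore $destination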

– Enjoy

For future updates on Virtual SAN (VSAN), vSphere Virtual Volumes (VVOLs) and other Software-defined Storage technologies as well as vSphere + OpenStack be sure to follow me on Twitter: @PunchingClouds

VMware Virtual SAN: All-Flash Configuration

The cat is officially out of the bag, as they say! Everyone should now be aware that VMware Virtual SAN 6.0 supports an all-flash architecture, so I think it's time to discuss a couple of items with regard to the new architecture.

The Virtual SAN 6.0 all-flash architecture uses flash-based devices for both caching and persistent storage. In this architecture, the flash cache is used entirely as a write buffer. The all-flash architecture introduces a two-tier model of flash devices:

  • a write-intensive, high-endurance caching tier for writes
  • a read-intensive, cost-effective flash device tier for data persistence


The new device tiering model not only delivers impressive performance, but it can also introduce cost savings for the Virtual SAN 6.0 all-flash architecture, depending on the design and hardware configuration of the solution.

Virtual SAN Configuration Requirements

In order to configure Virtual SAN 6.0 for the all-flash architecture, the flash devices need to be appropriately identified within the system. By default, Virtual SAN identifies and categorizes flash devices for the caching tier. To successfully enable the all-flash configuration, we need to manually flag the flash devices that will be used for data persistence, or capacity. This configuration is performed via one of the supported command-line tools, RVC or ESXCLI.

RVC handles the configuration of the devices at the cluster level, flagging capacity devices across all of the hosts in a cluster in a single operation.


ESXCLI handles it at the per-host level, tagging individual flash devices on each host.

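Here is a minimal PowerCLI sketch of that per-host tagging, driven through Get-EsxCli. It assumes the vSAN 6.0 "esxcli vsan storage tag add" namespace, an existing Connect-VIServer session, the -V2 interface available in more recent PowerCLI releases, and hypothetical host and device names:

# Equivalent to running "esxcli vsan storage tag add -d <device> -t capacityFlash"
# in the ESXi shell.
$esxcli = Get-EsxCli -VMHost "esxi-host-01.lab.local" -V2
$arguments = $esxcli.vsan.storage.tag.add.CreateArgs()
$arguments.disk = "naa.55cd2e404b66aaaa"   # flash device to flag for the capacity tier
$arguments.tag = "capacityFlash"
$esxcli.vsan.storage.tag.add.Invoke($arguments)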

Another command-line utility worth knowing is the VSAN Disk Query tool (vdq). This utility allows users to identify whether flash devices are configured for use in the capacity tier, as well as whether they are eligible to be used by Virtual SAN.
Whenever vdq is used to query the flash devices on a host, the output displays a new property called "IsCapacityFlash", which specifies whether a flash device will be utilized for the capacity tier instead of the caching tier.


For more in-depth information on the use of vdq, please take a look at a post by one of VMware’s elite engineers and VSAN Champion William Lam.

It's important to highlight that flagging flash devices to be used for capacity cannot be performed from the vSphere Web Client UI; it has to be performed via the CLI (wait for it…. wait for it).

Once the flash devices have been flagged for capacity, they will be displayed as magnetic devices (HDD) in the disk management section of the Virtual SAN management tab.

That's about it: after the flash devices have been properly tagged, the rest of the Virtual SAN configuration procedure is as easy as it was in the previous version.

So, in the spirit of making things easy and removing any friction around getting into the CLI and manually flagging every disk, I've designed a tool along with my good pal and now VSAN Champion Brian Graf that should take care of the disk tagging process for just about everyone.

Here is a demo of how simple it is to configure a Virtual SAN 6.0 all-flash cluster, with a teaser of the Virtual SAN All-Flash Configuration Utility. Oh yeah, I almost forgot to mention… it's a 64-node all-flash cluster (The BigDaddy).

– Enjoy

For future updates on Virtual SAN (VSAN), vSphere Virtual Volumes (VVOLs) and other Software-defined Storage technologies as well as vSphere + OpenStack be sure to follow me on Twitter: @PunchingClouds

What’s New with vSphere Data Protection 6.0 and vSphere Replication 6.0

There are many interesting items coming out of VMware’s 28 Days of February where customers can learn more about “One Cloud, Any Application, Any Device”. A couple of the biggest items are the announcements of vSphere 6.0 and Virtual SAN 6.0. In this article, we will look at what is new with two of the more popular vSphere features: vSphere Data Protection and vSphere Replication. Perhaps the biggest news with these two features is around vSphere Data Protection. Before vSphere 6.0 and vSphere Data Protection 6.0, there were two editions of vSphere Data Protection: vSphere Data Protection, included with vSphere, and vSphere Data Protection Advanced, which was sold separately. With the release of vSphere Data Protection 6.0, all vSphere Data Protection Advanced functionality has been consolidated into vSphere Data Protection 6.0 and included with vSphere 6.0 Essentials Plus Kit and higher editions. Keep reading to learn more about the advanced functionality now included as part of vSphere Data Protection 6.0.

Continue reading

vSphere APIs for IO Filtering

I've been fortunate to have one of our super sharp product line managers, Alex Jauch (Twitter @ajauch), spend some time explaining to me one of the new enabling technologies of vSphere 6.0: VAIO. Let's take a look at this really powerful capability, see what types of things it can enable, and get an overview of how it works.

VAIO stands for “vSphere APIs for IO Filtering”

This had for a time colloquially been known as “IO Filters”. Fundamentally, it is a means by which a VM can have its IO safely and securely filtered in accordance with a policy.

VAIO offers partners the ability to put their technology directly into the IO stream of a VM through a filter that intercepts data before it is committed to disk.

Why would I want to do that? What kinds of things can you do with an IO filter?

Well that’s up to our customers and our partners. VAIO is a filtering framework that will initially allow vendors to present capabilities for caching and replication to individual VMs. This will expand over time as partners come on board to write filters for the framework, so you can imagine where this can go for topics such as security, antivirus, encryption and other areas, as the framework matures. VAIO gives us the ability to do stuff to an IO stream in a safe and certified fashion, and manage the whole thing through profiles to ensure we get a view into the IO stream’s compliance with policy!

The VAIO program itself is for partners – the benefit is for consumers who want to do policy based management of their environment and pull in the value of our partner solutions directly into per-VM and indeed per-virtual disk storage management.

When partners create their solutions their data services are surfaced through the Storage Policy Based Management control plane, just like all the rest of our policy-driven storage offerings like Virtual SAN or Virtual Volumes.

Beyond that, because the data services operate at the VM virtual device level, they can also work with just about any type of storage device, again furthering the value of VSAN and VVOLs, and extending the use of these offerings through these additional data services.

How does it work?

The capabilities of a partner filter solution are registered with the VAIO framework, and are surfaced for user interaction in the SPBM Continue reading

vRealize Operations Management Pack for Virtual SAN Beta – Early Sign-up

If you have already heard the exciting news about VMware's new offerings – vSphere 6, Virtual SAN 6 and vSphere Virtual Volumes – and thought it couldn't get any better, we have a small surprise for you. Virtual SAN 6.0 includes a host of new features, including high-performance snapshots and clones, all-flash Virtual SAN with an intelligent two-tier model, failure domains, and more. One of the most requested features among Virtual SAN 5.5 customers is greater visibility into what happens "under the hood". As part of the 6.0 release, the Virtual SAN team developed a new health dashboard that will help customers tackle underlying hardware issues, and in addition the vRealize Operations team developed an advanced set of dashboards aimed at making Virtual SAN users' lives much easier.

The Virtual SAN team, along with the vRealize Operations team, is thrilled to offer you a unique opportunity to beta test the new vRealize Operations Management Pack for Storage Devices. The management pack will provide deep insight into Virtual SAN through advanced analytics to enable rapid troubleshooting and cluster optimization.


 

We will share more details closer to the start of the beta program which will kick off in Q1 2015. If you would like to get more information and be on the list of people who get an invitation to participate in the beta, please sign up here – www.vmware.com/go/vrops4vsan-beta

Check out the vROps MPSD blog post for more details – http://blogs.vmware.com/management/2015/02/vsan-simplifying-sddc-storage-operations-with-vrealize-operations-management-pack-for-storage-devices.html

Operationalizing VMware Virtual SAN: Automating vCenter Alarm Configuration Using PowerCLI

powercli 5.8 icon

Welcome to the next installment in our Operationalizing VMware Virtual SAN series. In our previous article we detailed “How to configure vCenter alarms for Virtual SAN”. In today’s article we will demonstrate how to automate that configuration workflow leveraging PowerCLI.

(Many thanks to VMware genius Alan Renouf (@alanrenouf) for his contributions to this topic) [Joe Cook: @CloudAnimal]

The PowerCLI code required to automate the configuration of vCenter alarms for Virtual SAN is fairly straightforward.

1. Connect to vCenter

Connect-VIServer -Server 192.168.100.1 -User Administrator@vsphere.local -Password vmware

2. Define the Virtual SAN cluster where you would like the alarms to be created

$Cluster = "Cluster Site A"

3. Next we create a hash table with the desired VMware ESXi Observation IDs (VOB IDs) for Virtual SAN and include a description for each VOB ID.

If you are not used to programming, the concept of arrays and hash tables may be a bit confusing. Using variables is generally much easier to understand. One way of understanding variables is to think of them simply as a short amount of text used to represent a larger amount of text in your program or script ($x=”larger amount of text”). Instead of typing “larger amount of text” continually, you can simply type $x and the language interpreter (in our case PowerShell), will substitute the string “larger amount of text” wherever it finds $x in your script. Variables can be used to greatly reduce the amount of code you have to type, make your scripts much easier to read, and have many other uses as well.

If we think of variables as ways to store one value to reference, we can think of arrays as a way to store multiple values to reference. In our example today, we would have to create at least 32 variables to perform the same work that we can with one hash table.

A hash table is a type of array that is also known as a dictionary. It is a collection of name-value pairs (e.g. "name"="value") that can be looked up by name. Here we have an example of a basic hash table:

$HashTableName = @{
VOB_ID_A="VOB Description";
VOB_ID_B="VOB Description";
VOB_ID_C="VOB Description";
}

In the table below we have a breakdown of the components of the code used to create a hash table:

Syntax Component Description
$HashTableName = Replace “HashTableName” with the text you wish to use to reference this list of key-values pairs.
@{ Indicates the start of the hash table or array
VOB_ID_A=”VOB Description”; Key-Value pair to store within the hash table. VOB_ID_A will be the VOB ID from the VMware ESXi Observation Log (VOBD) (e.g. “esx.audit.vsan.clustering.enabled”). “VOB Description” will be the description of the associated “VOB ID” (e.g. “Virtual SAN clustering service had been enabled”). Make sure to use quotation marks whenever spaces are used and to separate each key-value pair with a semicolon (;). Examine /var/log/vobd.log on your vSphere host to obtain possible VOB IDs. See here for a list of VMware ESXi Observation IDs for Virtual SAN.
} Indicates the end of the hash table or array

Here is an example of a hash table with a single key-value pair representing a single vCenter Alarm for Virtual SAN:

$VSANAlerts = @{
"esx.audit.vsan.clustering.enabled" = "Virtual SAN clustering service had been enabled";
}

Below is the actual hash table that we will use in our example Virtual SAN Alarm Configuration script. It is fully populated with all of the recommended VOB IDs for Virtual SAN along with the description for each. We have labeled this hash table as “$VSANAlerts”. You will see $VSANAlerts referenced further along in the script as we reference the items within our hash table.

$VSANAlerts = @{
 "esx.audit.vsan.clustering.enabled" = "Virtual SAN clustering service had been enabled";
 "esx.clear.vob.vsan.pdl.online" = "Virtual SAN device has come online.";
 "esx.clear.vsan.clustering.enabled" = "Virtual SAN clustering services have now been enabled.";
 "esx.clear.vsan.vsan.network.available" = "Virtual SAN now has at least one active network configuration.";
 "esx.clear.vsan.vsan.vmknic.ready" = "A previously reported vmknic now has a valid IP.";
 "esx.problem.vob.vsan.lsom.componentthreshold" = "Virtual SAN Node: Near node component count limit.";
 "esx.problem.vob.vsan.lsom.diskerror" = "Virtual SAN device is under permanent error.";
 "esx.problem.vob.vsan.lsom.diskgrouplimit" = "Failed to create a new disk group.";
 "esx.problem.vob.vsan.lsom.disklimit" = "Failed to add disk to disk group.";
 "esx.problem.vob.vsan.pdl.offline" = "Virtual SAN device has gone offline.";
 "esx.problem.vsan.clustering.disabled" = "Virtual SAN clustering services have been disabled.";
 "esx.problem.vsan.lsom.congestionthreshold" = "Virtual SAN device Memory/SSD congestion has changed.";
 "esx.problem.vsan.net.not.ready" = "A vmknic added to Virtual SAN network config doesn't have valid IP.";
 "esx.problem.vsan.net.redundancy.lost" = "Virtual SAN doesn't haven any redundancy in its network configuration.";
 "esx.problem.vsan.net.redundancy.reduced" = "Virtual SAN is operating on reduced network redundancy.";
 "esx.problem.vsan.no.network.connectivity" = "Virtual SAN doesn't have any networking configuration for use."
 }

(For more information on working with PowerShell hash tables, see this handy Microsoft TechNet article)
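Before moving on, it helps to see how a value is pulled back out of the hash table, since the script below does exactly that for every key. A quick sketch using the $VSANAlerts table defined above:

# Look up the description stored under a single VOB ID key.
$VSANAlerts["esx.audit.vsan.clustering.enabled"]
# Returns: Virtual SAN clustering service had been enabled

# The same lookup using the Get_Item method syntax that the script below relies on.
$VSANAlerts.Get_Item("esx.audit.vsan.clustering.enabled")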

4. Next we use the Get-View cmdlet to retrieve the vCenter Alarm Manager and the target cluster object, and then loop through each VOB ID listed in step 3.

The Get-View cmdlet returns the vSphere inventory objects (VIObject) that correspond to the specified search criteria.

# Get the vCenter Alarm Manager and the target cluster object, then loop through each VOB ID
$alarmMgr = Get-View AlarmManager
 $entity = Get-Cluster $Cluster | Get-View
 $VSANAlerts.Keys | Foreach {
 $Name = $VSANAlerts.Get_Item($_)   # description, used as the alarm name
 $Value = $_                        # VOB ID, used as the event type ID

5. Create the vCenter Alarm specification object

 # Build the alarm definition: trigger on the EventEx event whose ID matches the VOB ID
 $alarm = New-Object VMware.Vim.AlarmSpec
 $alarm.Name = $Name
 $alarm.Description = $Name
 $alarm.Enabled = $TRUE
 $expression = New-Object VMware.Vim.EventAlarmExpression
 $expression.EventType = "EventEx"
 $expression.eventTypeId = $Value
 $expression.objectType = "HostSystem"
 $expression.status = "red"
 $alarm.expression = New-Object VMware.Vim.OrAlarmExpression
 $alarm.expression.expression += $expression
 $alarm.setting = New-Object VMware.Vim.AlarmSetting
 $alarm.setting.reportingFrequency = 0
 $alarm.setting.toleranceRange = 0

6. Create the vCenter Alarm in vCenter

 Write-Host "Creating Alarm on $Cluster for $Name"
 $CreatedAlarm = $alarmMgr.CreateAlarm($entity.MoRef, $alarm)
 }
 Write-Host "All Alarms Added to $Cluster"

As you can see, the steps to create vCenter alarms for Virtual SAN are actually pretty straightforward. If you have not yet begun monitoring your Virtual SAN environment, these steps can accelerate the process considerably, and you do not have to be an expert in PowerCLI to do so.

VMware Hands on Labs

Here is a great tip brought to you by our friends at the VMware Hands on Labs. If you would like an excellent shortcut to getting “hands on” creating vCenter Alarms for Virtual SAN, using PowerCLI cmdlets, try out the lab below:

HOL-SDC-1427 – VMware Software Defined Storage: Module 5: Advanced Software Defined Storage With SPBM and PowerCLI (30 minutes)

 

We have many more articles on their way, so share, re-tweet, or use whatever your favorite social media method is. You will not want to miss these!

(Thanks to @millardjk for his keen eye)



Gartner Predictions: Storage Integration Leading the Way in 2015

Rowing crews move in sync to keep their craft moving at a steady pace – but that synchronous movement would not exist without an investment in the right hardware and the right training. For businesses, that means not only investing in the right rowers, but also in the right tools to enable athletes. And for a long time those tools, namely storage for midmarket organizations, have either been too expensive, too resource heavy or too complex.

It doesn’t have to be that way. Midmarket teams should be able to afford the right equipment without worry so they can focus on more important tasks. That’s why we’re thrilled to discover Gartner Research included VMware Virtual SAN in their recent report, “Predicts 2015: Midmarket CIOs Must Shed IT Debt to Invest in Strategic IT Initiatives.”

The report investigates how CIOs can best invest resources to give IT teams the simplified tools they need while staying on budget. Often, this excludes “best of breed” solutions. Gartner suggests midsize businesses seek out integrated systems that combine server, storage and network components in a package suitable for their needs instead.

For many, Virtual SAN, VMware's policy-driven storage product designed for vSphere environments, is that solution. Its ease of use, performance, scalability and low total cost of ownership help to avoid significant upfront investments. And, with its VM-level storage policies, Virtual SAN automatically and dynamically matches requirements with underlying storage resources, meaning less time spent manually managing storage and more time focused on important tasks.

According to Gartner's predictions, roughly 40% of midsize enterprises will replace all data center services and storage with integrated systems by 2018. We certainly would like to see, and be at the forefront of, that transition.

VMware Virtual SAN stands apart from the competition not just because of its ability to deliver simple software defined shared storage, but also because of its integrated partner ecosystem. More than 40 Virtual SAN Ready Nodes can be purchased from our system vendor partners.

We’re thrilled to have been a part of software defined storage in 2014, and we can’t wait to push the envelope further in 2015.

Be sure to subscribe to the Virtual SAN blog or follow our social channels at @vmwarevsan and Facebook.com/vmwarevsan for the latest updates.

For more information about VMware Virtual SAN, visit http://www.vmware.com/products/virtual-san.

Tips for a Successful VMware Virtual SAN Evaluation

One of the biggest advantages of Virtual SAN is that it is so easy to set up and use.  Out of all the evaluation options available to them, many customers have realized that trying it out in their own environment is entirely feasible.

We’ve been keeping track of the many Virtual SAN evaluations to date, and have created a quick guide that should help anyone evaluating Virtual SAN in their environment.  In it, you’ll find a checklist on configurations, verifying compatibility, testing the network, expected behaviors for failure testing, and tips on testing performance.  It’s an essential guide for anyone working with Virtual SAN.

We hope you find this document useful!

Download: Tips for a Successful VMware Virtual SAN Evaluation