
Author Archives: Rawlinson Rivera


About Rawlinson Rivera

Rawlinson is a Principal Architect working in the Office of the CTO for the Storage and Availability Business Unit at VMware. He focuses on defining and communicating VMware's product vision and strategy and is an active advisor for VMware's product roadmap and portfolio. His responsibilities revolve around connecting VMware's R&D organizations with customers and partners in the field. He specializes in enterprise architectures (private and public clouds), hyper-converged infrastructures, and business continuity / disaster recovery technologies and solutions, including Virtual SAN and vSphere Virtual Volumes, as well as other storage technologies and solutions for OpenStack and Cloud-Native Applications. Rawlinson is a VMware Certified Design Expert (VCDX#86) and the main author of the blog punchingclouds.com.

vSphere Virtual Volumes (VVols) Interoperability Matrix


Since the official release of vSphere 6.0, Virtual Volumes (VVols) has generated a great deal of interest among customers, field consultants, and the VMware community. Now that VVols is available, customers can begin testing its functionality and capabilities. There have been many questions about which VMware products and vSphere features are compatible and currently interoperate with VVols.

Because VMware's product portfolio continues to expand exponentially, identifying all of the new products and features that interoperate with VVols can be a tedious and potentially time-consuming task. In the interest of time and efficiency, the need for a centralized Virtual Volumes interoperability guide is evident, so here is one.

Below is a list of VMware products and vSphere 6.0 features that, as of today, March 30th, 2015, are supported and interoperate with VVols. Please keep in mind that the interoperability and supportability of any of these products and features can change with a future patch or product release. It is highly recommended to check the VMware compatibility matrix guide for the official and up-to-date list of products and features that are interoperable with VVols.

Continue reading

VMware Virtual SAN 6.0: Data Encryption with Hytrust DataControl


Customers from different industries and institutions are very interested in Virtual SAN as a storage solution not just because of the technological value it delivers today, but because of the product’s undeniable value around operational efficiency, ease of management, and flexibility.

Some of these customers are from financial, healthcare, and government institutions, and conduct their business in areas that are governed by regulatory compliance laws such as HIPAA, PCI-DSS, FedRAMP, Sarbanes-Oxley, etc. These laws demand compliance with numerous security measures, one of them being the ability to guarantee data integrity by securing data with some form of encryption.

Today Virtual SAN does not include encryption as one of its data services; this feature is currently under development for a future release. When considering Virtual SAN as a potential solution where data encryption is a requirement based on regulatory compliance laws, it's important to know what options are currently available.

In Virtual SAN, encryption data services are offloaded to hardware-based offerings available through Virtual SAN Ready Nodes. Data encryption is exclusively supported on Virtual SAN Ready Node appliances built from certified and compatible hardware devices that provide encryption capabilities, such as self-encrypting drives and/or storage controllers. Virtual SAN Ready Node appliances are offered by just about all of the OEM hardware vendors that are part of VMware's ecosystem.

An alternative to the Virtual SAN Ready Nodes is a software-based solution developed and offered by a company called Hytrust. Hytrust is a member of VMware's partner ecosystem whose business is focused on delivering data security services for private and public cloud infrastructures. The solution I want to highlight in particular is called Hytrust DataControl.

Hytrust DataControl is a software-based solution designed to protect virtual machines and their data throughout their entire lifecycle (from creation to decommissioning). Hytrust DataControl delivers both encryption and key management services.

This solution is built specifically to address the unique requirements of private, hybrid, and public clouds, combining robust security, easy deployment, exceptional performance, infrastructure independence, and operational transparency. Hytrust DataControl's ease of deployment and management aligns with one of the main principles of Virtual SAN: simplicity and ease of management.

The Hytrust DataControl virtual machine edition is based on a software agent that encrypts data from within the Windows or Linux operating system of a virtual machine, ensuring protection and multi-tenancy of data in any infrastructure. DataControl also allows you to transfer files between VMs, so you can securely migrate stored data from your private cloud to the public cloud.

The deployment of the Hytrust DataControl solution and installation and configuration of the software is done in a couple of easy steps which take just a few minutes. Once the software is resident, any data written to storage by an application will be encrypted both in motion, as it travels securely through the hypervisor and network, and also at rest on the Virtual SAN datastore.

[Image: Hytrust DataControl deployment steps]

Continue reading

Upgrading to VMware Virtual SAN 6.0

Virtual SAN 6.0 introduces changes to the structural components of its architecture. One of those changes is a new on-disk format, which delivers better performance and capability enhancements. One of those new capabilities allows vSphere admins to perform in-place rolling upgrades from Virtual SAN 5.5 to Virtual SAN 6.0 without introducing any application downtime.

Upgrading an existing Virtual SAN 5.5 cluster to Virtual SAN 6.0 is performed in multiple phases, and it requires the reformatting of all of the magnetic disks that are being used in the Virtual SAN cluster. The upgrade is a one-time procedure that is performed from the RVC command-line utility with a single command.

Upgrade Phase I: vSphere Infrastructure Upgrade

In this phase of the upgrade, all components are upgraded to the vSphere 6.0 release. All vCenter Servers, ESXi hosts, and related infrastructure components need to be upgraded to their respective 6.0 software releases. Any of the vSphere-supported upgrade procedures for the individual components can be used.

  • Upgrade vCenter Server 5.5 to 6.0 first (Windows or Linux based)
  • Upgrade ESXi hosts from 5.5 to 6.0 (Interactive, Update Manager, Re-install, Scripted Updates, etc)
  • Use maintenance mode ("Ensure accessibility" – recommended for reduced upgrade times; "Full data migration" – not recommended unless necessary); see the PowerCLI sketch after this list
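For illustration only (this is not part of the official upgrade guidance), here is a minimal PowerCLI sketch of placing a host into maintenance mode with the "Ensure accessibility" Virtual SAN data migration option; the server and host names are placeholders.

  # Hedged example: enter maintenance mode with the "EnsureAccessibility"
  # Virtual SAN data migration mode (names below are placeholders).
  Connect-VIServer -Server vcenter.example.com
  Get-VMHost -Name "esxi01.example.com" |
      Set-VMHost -State Maintenance -VsanDataMigrationMode EnsureAccessibility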

Upgrade Phase II: Virtual SAN 6.0 Disk Format Conversion (DFC)

This phase is where the previous on-disk format (VMFS-L) is replaced on all of the magnetic disk devices with the new on-disk format (VSAN FS). The disk format conversion procedure reformats the disk groups and upgrades all of the objects to the new version 2 format. Virtual SAN 6.0 provides support for both the previous on-disk format of Virtual SAN 5.5 (VMFS-L) and its new native on-disk format (VSAN FS).

While both on-disk formats are supported, it is highly recommended to upgrade the Virtual SAN cluster to the new on-disk format in order to take advantage of the performance improvements and new features. The disk format conversion is performed sequentially across the Virtual SAN cluster, one disk group per host at a time. The workflow illustrated below is repeated for all disk groups on each host before the process moves on to another host that is a member of the cluster.

[Image: Disk Format Conversion (DFC) workflow]
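As a hedged illustration (not reproduced from the original post), the conversion is driven from an RVC session connected to vCenter Server and pointed at the cluster object; the exact command name varies between builds, so verify it with RVC's built-in help before running it.

  # Hedged example: run the on-disk format conversion against a cluster from RVC.
  # The cluster path is a placeholder; some builds name this command vsan.v2_ondisk_upgrade.
  vsan.ondisk_upgrade /vcenter.example.com/Datacenter/computers/VSAN-Cluster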

Continue reading

VMware Virtual SAN Alarms for vCenter Server with PowerCLI

I was recently involved in a couple of customer conversations where the main topics were focused on monitoring and troubleshooting events in vCenter Server, particularly for Virtual SAN.

I know that particular topic has been covered a few times in the past, not only on the VMware corporate storage blog but also by other community blogs. To be more specific, one of the VSAN Champions, William Lam, has covered this topic extensively on his personal blog.

The work that we have done on the topic of vCenter Server alarms and Virtual SAN stems from the findings identified in two articles published by William. For more information on the recommended vCenter Server alarms for Virtual SAN and how to add and configure them, take a look at the articles listed below:

With vSphere 6.0 and Virtual SAN 6.0 becoming generally available very soon, this script can make things a lot easier for all Virtual SAN customers and provide a simplified way to get all the available vCenter Server alarms for Virtual SAN added and configured within seconds.

I got a chance to work on this little nugget with one of the world's baddest PowerCLI gurus on the planet and fellow VSAN Champion Alan Renouf, as well as William Lam, both of whom are members of the VMware virtualization team codenamed #TheWreckingCrew. Here is PowerCLI sample code that can be utilized to add and configure all of the vCenter Server alarms for Virtual SAN. These alarms are applicable to both Virtual SAN 5.5 and 6.0.

Continue reading
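For readers who just want the shape of the approach, here is a minimal, hedged sketch of creating a single event-based vCenter Server alarm from PowerCLI; it is not the full script, and the alarm name and VOB event ID shown are illustrative examples only.

  # Hedged sketch: define one event-based alarm at the vCenter root folder.
  # Assumes an existing Connect-VIServer session; names and IDs are examples.
  $si       = Get-View ServiceInstance
  $alarmMgr = Get-View $si.Content.AlarmManager
  $root     = Get-Folder -NoRecursion

  $expr = New-Object VMware.Vim.EventAlarmExpression
  $expr.EventType   = "EventEx"
  $expr.EventTypeId = "esx.problem.vob.vsan.lsom.diskerror"   # example VSAN VOB event
  $expr.ObjectType  = "HostSystem"
  $expr.Status      = "red"

  $spec = New-Object VMware.Vim.AlarmSpec
  $spec.Name        = "Example - Virtual SAN device error"
  $spec.Description = "Illustrative alarm definition"
  $spec.Enabled     = $true
  $spec.Expression  = New-Object VMware.Vim.OrAlarmExpression
  $spec.Expression.Expression += $expr
  $spec.Setting     = New-Object VMware.Vim.AlarmSetting
  $spec.Setting.ReportingFrequency = 0
  $spec.Setting.ToleranceRange     = 0

  # Create the alarm so it applies to everything under the root folder
  $alarmMgr.CreateAlarm($root.ExtensionData.MoRef, $spec)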

VMware Virtual SAN 6.0: Bootstorm Demonstration

Since the official announcement of the VMware Virtual SAN All-Flash architecture, most of the conversations have focused on the solution's incredible performance capabilities and attributes with regard to IOPS, predictable performance, and sub-millisecond latencies. All of those attributes are great and part of the reason why Virtual SAN 6.0 as a storage platform, and its use cases, have been expanded to also focus on business-critical applications and large enterprise environments.

I want to turn the spotlight onto one of the many supported use cases for Virtual SAN 6.0 and highlight one of the invaluable capabilities of the new platform with regards to Virtual Desktop Infrastructures (VDI).

Some of the functional requirements for large enterprise infrastructure designs for VDI include the characterization of boot, refresh, and provision times for standard operations and worst case scenarios.

I have seen my fair share of VDI designs and demonstrations of different platforms showcasing bootstorm, refresh, and rebuild times, and they all do a pretty good job. With that said, I wanted to take the opportunity to showcase the powerful capabilities of Virtual SAN 6.0 by demonstrating a bootstorm at the maximum supported scale of the platform. This bootstorm demonstration consists of 6,401 desktops on a 64-node Virtual SAN 6.0 All-Flash cluster (BigDaddy).
The key and impressive items showcased as part of the demonstration are the following:

  • BigDaddy – 64 Node All-Flash Virtual SAN Cluster
  • Desktops – booting all 6401 desktops in the cluster at once (in batches of 1024 at a time)
  • Boot Time – about 24 minutes to boot all desktops, plus about 19 minutes for IP address allocation, for a total of about 40 minutes

This demonstration does not contain tampered or custom configurations of any of the Virtual SAN settings; this is what we generally call an out-of-the-box experience. Another important thing to point out here is my definition of completed boot time. What I mean by a complete boot is not just when the desktop is powered on, but when all the desktops have successfully acquired an IP address and are really up and running and ready to be used.

In the interest of time, the demonstration has been sped up from its original length to about 5 minutes. Feel free to pay attention to the timestamp displayed in the command-line interface to validate the accuracy of the boot time.

This demonstration successfully highlights one of the many powerful capabilities available in VMware Virtual SAN 6.0.

 

– Enjoy

For future updates on Virtual SAN (VSAN), vSphere Virtual Volumes (VVOLs) and other Software-defined Storage technologies as well as vSphere + OpenStack be sure to follow me on Twitter: @PunchingClouds

vSphere Virtual Volumes Interoperability: VAAI APIs vs VVOLs

VMware introduced block-based VAAI APIs as part of the vSphere 4.1 release. These APIs helped improve the performance of VMFS by offloading some of the heavy operations to the storage array. In subsequent releases, VMware added VAAI APIs for NAS, thin provisioning, and T10 command support for the block VAAI APIs.

Now with Virtual Volumes (VVOLs) VMware is introducing a new virtual machine management and integration framework that exposes virtual disks as the primary unit of data management for storage arrays. This new framework enables array-based operations at the virtual disk level that can be precisely aligned to application boundaries with the capability of providing a policy-based management approach per virtual machine.

The question now is: what happens to the VAAI APIs (NAS and block), and how will Virtual Volumes co-exist with them? With Virtual Volumes, aside from the data path, the ESXi hosts also control the connection path to the storage arrays; the Vendor Provider typically arranges the path to the storage arrays. In this case, Virtual Volumes can be considered a richer extension of the VAAI NAS APIs. In July of last year I published an article, "Virtual Volumes (VVols) vSphere APIs & Cloning Operation Scenarios", in which I discussed the interoperability between VAAI and VVOLs during cloning operations in different scenarios; consider having another look. Now let's go over a set of interaction scenarios between the VAAI APIs and Virtual Volumes.

[Image: VAAI vs. VVOLs overview]

VAAI Block and VVOLs:

VAAI Block defines basic SCSI primitives that allow vSphere (primarily VMFS) to offload pieces of its operations to the array. There is still a heavy dependency on VMFS playing the role of an orchestrator and sending individual VAAI block commands to the storage array.

With VVOLs, the storage array systems are aware of a virtual machine's disks, and hence they can efficiently perform operations such as snapshots, clones, and zeroing on those disks. Still, the VAAI block and thin-provisioning primitives co-exist with VVOLs.

  • ATS – All config VVOL objects stored in a block VVOLs datastore are formatted with VMFS and hence require support for ATS commands. This support is detected based on ATS support for the PE LUN to which the VVOLs are bound.
  • XCOPY – With VVOLs, ESXi will always try to use the array-based VVOL copy mechanisms defined by the copyDiffsToVirtualVolume or cloneVirtualVolume primitives. If these are not supported, it falls back to software copy. Since software copy involves copying data between Protocol Endpoint (PE) LUNs and VMFS LUNs, there is still potential to use the XCOPY command during the software data copy. When falling back to software copy, vSphere will use the XCOPY command when moving a virtual machine from a VMFS datastore to a VVOLs datastore or between two VVOLs datastores. In the first release, vSphere will not try to use XCOPY if the virtual machine is moving from a VVOLs datastore to a VMFS datastore. vSphere detects XCOPY support for individual VVOLs based on the support of VAAI XCOPY on the PE LUN to which the VVOL is bound.
  • Block Zeroing – The main purpose of this primitive is to initialize thick disks provisioned on VMFS datastores. With VVOLs, the provisioning type is specified as part of the profile information passed during VVOL creation, and config VVOLs, which are formatted with VMFS, are "thin" by definition. A config VVOL is also very small (4 GB by default) and contains only small files such as disk descriptors, VM config files, stats, and log data. So the Block Zeroing primitive is not used for VVOLs by vSphere.

VAAI NAS and VVOLs:

Unlike SCSI, NFSv3 is a frozen protocol, which means all of the VAAI NAS features came via private RPCs issued by vendor plugins. VVOLs extends this model of communicating outside the basic protocol: VVOLs defines a rich set of VASA APIs to allow offload of most of the vSphere operations. With vSphere 6.0, existing VAAI NAS will continue to work, but VVOLs datastores will offer a richer and faster experience than VAAI NAS. Also, VVOLs doesn't need any vendor-specific plugin installation. Another noteworthy point regarding NAS VAAI and Storage vMotion is that NAS VAAI snapshots cannot be migrated: when an attempt is made to migrate a virtual machine with NAS VAAI "snapshots", the snapshot hierarchy is collapsed and all snapshot history is lost. This is not the case with VVOLs; furthermore, snapshot hierarchies can be translated between NFS (non-VAAI), VMFS, VSAN, and VVOLs (any source-to-target combination of the four).

VAAI Thin-Provisioning and VVOLs:

  • Soft Threshold Warnings – Similar to a VMFS datastore with TP support, a soft threshold warning for any VVOL virtual machine's I/O will be seen in vCenter, and the corresponding container gets flagged appropriately; the container gets the yellow warning icon when a soft threshold warning is issued. This can potentially be confusing for the vSphere admin, because the warning is virtual machine specific but the warning message doesn't provide details on which virtual machine has the problem. This will be corrected in a future update.
  • Hard Threshold Warnings – Hard threshold warning behavior is similar to that on a VMFS datastore. When a VVOL virtual machine's I/O gets a hard threshold warning, the corresponding virtual machine is stunned. The administrator can resume the virtual machine after provisioning more space, or can completely stop the virtual machine.
  • UNMAP – Since there are no disks managed by VMFS, vSphere will not actively use the UNMAP primitive, but it will pass UNMAP through to the backing VVOLs when the guest issues it. As with XCOPY and ATS, vSphere detects UNMAP support for individual VVOLs based on the support of VAAI UNMAP on the Protocol Endpoint LUN to which the VVOL is bound. Note that vSphere will not enforce any alignment criteria when UNMAP is issued by the guest; this behavior is very similar to that of an RDM LUN. With VVOLs, UNMAP commands go from the guest directly to the storage array the same way all other I/O is sent, so the array will now finally see all the individual UNMAP commands that guest operating systems issue. For example, a Windows Server 2012 guest will immediately become a source of UNMAP commands (see the sketch after this list). For Linux, on the other hand, the filesystem checks the SCSI version supported by the virtual device and won't issue UNMAP at the current level of SCSI support presented (SCSI-2); that is something that will be addressed in a future release.
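As an illustrative aside (not from the original post), a Windows Server 2012 guest can be made to issue UNMAP on demand using the in-box Storage PowerShell module; the drive letter below is a placeholder.

  # Hedged example: trigger TRIM/UNMAP from inside a Windows Server 2012 guest.
  # Run in the guest operating system; the drive letter is a placeholder.
  Optimize-Volume -DriveLetter C -ReTrim -Verbose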

Now let’s identify the supported operations and behavior for the different primitives.

Primitives Supported Operations and Behavior

Powered On Storage vMotion without snapshots

For a powered on VM without snapshots, the Storage vMotion driver coordinates the copy. The Storage vMotion driver will use the data mover to move sections of the current running point. The data mover will employ “host orchestrated hardware offloads” (XCOPY, etc) when possible.

Block VAAI & Block VVOLs:
– Bitmap APIs will be used to determine only the relevant blocks to migrate (Space efficiency optimization)
– XCOPY will be used to migrate (host orchestrated offload)

NAS VAAI
– No optimizations

NAS VVOL
– Bitmap APIs will be used to determine only the relevant blocks to migrate (Space efficiency optimization)

Powered On Storage vMotion with snapshots

For a powered on VM with snapshots, the migration of snapshots is done first, then the Storage vMotion driver will use the data mover to move the current running point.

Block VAAI
– Bitmap APIs will be used to determine only the relevant blocks to migrate (Space efficiency optimization)
– XCOPY will be used to migrate snapshots + current running point

Block VVOL
– Bitmap APIs will be used to determine only the relevant blocks to migrate (Space efficiency optimization)
– The cloneVirtualVolume and copyDiffsToVirtualVolume VASA APIs will be used to migrate all snapshots (Full hardware offload)
– XCOPY will be used to migrate the current running point (host orchestrated offload)

NAS VAAI
– NAS VAAI cannot migrate snapshots
– No further optimization

NAS VVOL
– Bitmap APIs will be used to determine only the relevant blocks to migrate (Space efficiency optimization)
– The cloneVirtualVolume and copyDiffsToVirtualVolume VASA APIs will be used to migrate all snapshots (Full hardware offload)

Powered Off Storage vMotion without snapshots
For a powered off VM, the Storage vMotion driver is not in the picture. So, effectively a Storage vMotion of a powered off VM is a logical move (Clone + Delete Source).

Block VAAI
– Bitmap APIs will be used to determine only the relevant blocks to migrate (Space efficiency optimization)
– XCOPY will be used to migrate current running point

Block VVOL
– Bitmap APIs will be used to determine only the relevant blocks to migrate (Space efficiency optimization)
– The cloneVirtualVolume VASA API will be used to migrate the current running point (Full hardware offload)
– The copyDiffsToVirtualVolume VASA APIs will be used to migrate all snapshots (Full hardware offload)

NAS VAAI
– NAS VAAI clone offload will be employed to migrate the current running point

NAS VVOL (Same as block VVOL)
– Bitmap APIs will be used to determine only the relevant blocks to migrate (Space efficiency optimization)
– The cloneVirtualVolume VASA API will be used to migrate the current running point (Full hardware offload)
– The copyDiffsToVirtualVolume VASA APIs will be used to migrate all snapshots (Full hardware offload)

Powered off Storage vMotion with snapshots
– Same general idea as above, just with snapshots too…

Block VAAI
– Bitmap APIs will be used to determine only the relevant blocks to migrate (Space efficiency optimization)
– XCOPY will be used to migrate current running point + snapshots

Block VVOL
– Bitmap APIs will be used to determine only the relevant blocks to migrate (Space efficiency optimization)
– The cloneVirtualVolume VASA API will be used to migrate the current running point + snapshots (Full hardware offload)

NAS VAAI
– NAS VAAI cannot migrate snapshots
– NAS VAAI clone offload will be employed to migrate the current running point

NAS VVol (Same as block VVOL)
– Bitmap APIs will be used to determine only the relevant blocks to migrate (Space efficiency optimization)
– The cloneVirtualVolume VASA API will be used to migrate the current running point + snapshots (Full hardware offload)

– Enjoy

For future updates on Virtual SAN (VSAN), vSphere Virtual Volumes (VVOLs) and other Software-defined Storage technologies as well as vSphere + OpenStack be sure to follow me on Twitter: @PunchingClouds

VMware Virtual SAN: All-Flash Configuration

The cat is officially out of the bag, as they say! Everyone in the world should now be aware of the fact that VMware Virtual SAN 6.0 supports an all-flash architecture. I think it's time to discuss a couple of items with regard to the new architecture.

The Virtual SAN 6.0 All-Flash architecture uses flash-based devices for both caching and persistent storage. In this architecture, the flash cache is used entirely as a write buffer. The all-flash architecture introduces a two-tier model of flash devices:

  • a write-intensive, high-endurance caching tier for writes
  • a read-intensive, cost-effective flash device tier for data persistence

[Image: Virtual SAN 6.0 all-flash two-tier architecture]

The new device tiering model not only delivers incredible performance results, but it can also potentially introduce cost savings for the Virtual SAN 6.0 all-flash architecture, depending on the design and hardware configuration of the solution.

Virtual SAN Configuration Requirements

In order to configure Virtual SAN 6.0 for the all-flash architecture, the flash devices need to be appropriately identified within the system. In Virtual SAN, flash devices are identified and categorized for the caching tier by default. In order to successfully enable the all-flash configuration, we need to manually flag the flash devices that will be utilized for data persistence (capacity). This configuration is performed via one of the supported command-line interface tools, such as RVC or ESXCLI.

RVC handles the configuration of the devices at the cluster level. Below you'll find an image that illustrates the usable syntax for flagging the flash devices with RVC.

[Image: RVC syntax for flagging capacity flash devices]

ESXCLI handles it at the per-host level. Below you'll find an image that illustrates the usable syntax for flagging the flash devices with esxcli.

[Image: esxcli syntax for flagging capacity flash devices]
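As a hedged illustration of that esxcli form (the device identifier is a placeholder; verify the syntax against your ESXi 6.0 build), tagging and untagging a capacity device looks roughly like this:

  # Hedged example: tag a flash device for the capacity tier on an ESXi 6.0 host
  esxcli vsan storage tag add -d naa.5000000000000001 -t capacityFlash
  # To undo the tagging:
  # esxcli vsan storage tag remove -d naa.5000000000000001 -t capacityFlash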

Another command-line utility that is worth knowing about is the VSAN Disk Query utility (vdq). It allows users to identify whether the flash devices in a host are configured for use in the capacity tier, as well as whether they are eligible to be used by Virtual SAN.
Whenever vdq is used to query the flash devices on a host, as illustrated below, the output displays a new property called "IsCapacityFlash". This property specifies whether a flash device will be utilized for the capacity tier instead of the caching tier.

[Image: vdq output showing the IsCapacityFlash property]
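As a rough illustration (run from the ESXi Shell; device details will differ per host), the query is simply:

  # Hedged example: list disks and their Virtual SAN eligibility on an ESXi host
  vdq -q
  # Look for the IsCapacityFlash property in the output; devices tagged for the
  # capacity tier report it as 1.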

For more in-depth information on the use of vdq, please take a look at a post by one of VMware’s elite engineers and VSAN Champion William Lam.

It's important to highlight that flagging flash devices to be used for capacity cannot be performed from the vSphere Web Client UI; it has to be performed via the CLI. (wait for it…. wait for it)

Once the flash devices have been flagged for capacity, they will be displayed as magnetic devices (HDD) in the disk management section of the Virtual SAN management tab.

That's about it. After the flash devices have been properly tagged, the rest of the Virtual SAN configuration procedure is as easy as it was in the previous version.

So, in the spirit of making things easy and reducing the friction of getting into the CLI and manually flagging every disk, I've designed a tool along with my good pal and now fellow VSAN Champion Brian Graf that should take care of the disk tagging process for just about everyone.

Here is a demo of how simple it is to configure a Virtual SAN 6.0 all-flash cluster, with a teaser of the Virtual SAN All-Flash Configuration Utility. Oh yeah, I almost forgot to mention… it's a 64-node all-flash cluster (the BigDaddy).

– Enjoy

For future updates on Virtual SAN (VSAN), vSphere Virtual Volumes (VVOLs) and other Software-defined Storage technologies as well as vSphere + OpenStack be sure to follow me on Twitter: @PunchingClouds

VMware Virtual SAN 6.0

It is with great pleasure and joy that I'd like to announce the official launch of VMware Virtual SAN 6.0, one of VMware's most innovative software-defined storage products and the best hypervisor-converged storage platform for virtual machines. Virtual SAN 6.0 delivers a vast variety of enhancements and new features, as well as performance and scalability improvements.

Continue reading

vSphere Virtual Volumes

Today VMware announced the release of vSphere 6.0, and with this announcement comes the official release of vSphere Virtual Volumes. vSphere Virtual Volumes (VVOLs) is VMware's new management and integration framework designed to deliver a more efficient operational model for external storage.

vSphere Virtual Volumes implements the core tenets of the VMware SDS vision to enable a fundamentally more efficient operational model for external storage in virtualized environments, centering on the application instead of the physical infrastructure.

Continue reading

VMware Virtual SAN: File Services with NexentaConnect


NexentaConnect for Virtual SAN is a software-defined storage solution designed specifically to deliver file services on top of Virtual SAN. This solution complements Virtual SAN by delivering software-based NAS capabilities that add enterprise-class Windows and UNIX-based file services, without the need for any additional hardware purchase, to augment the virtual machine storage provided by VMware Virtual SAN.

NexentaConnect is designed to deliver high-performance NFS and SMB file services to the full datacenter by leveraging Virtual SAN features and capabilities. The management and configuration of the solution adopts Virtual SAN's simplified management approach, delivering an agile and efficient operational model. The solution delivers NFSv3, NFSv4, and SMB 2.1 file share services for hybrid networks, as well as a set of capabilities related to performance, availability, and storage services, such as:

  • Localized server-side read cache
  • Compression and deduplication

NexentaConnect for Virtual SAN maintains data integrity under high workloads and during network, disk, and host recovery, and brings various recovery functions, such as snapshots and remote file replication, designed specifically for this solution. All of this is configured and managed through the vCenter console and the vSphere Web Client.

Continue reading