Home > Blogs > VMware vSphere Blog

Managing Virtual SAN with RVC: Part 2 – Navigating your vSphere and Virtual SAN infrastructure with RVC

In the first article in this series, we looked at the history, features, and setup of the Ruby vSphere Console. Built upon the Ruby interface to the vSphere API (RbVmomi), the Ruby vSphere Console is a powerful management utility for the vSphere infrastructure, as well as an efficient integration option for third-party applications and CLI-based automation.

In today’s article, we will begin digging further into the features and usage of the Ruby vSphere Console by leveraging it to explore the vSphere and Virtual SAN infrastructure. Within RVC, the vSphere infrastructure is presented to the user as a virtual file system. This allows us to navigate its managed entities and execute commands directly against them.
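To give a feel for the virtual file system model (the datacenter and cluster names below are hypothetical, and the output is abbreviated for illustration), navigating the inventory in RVC looks much like walking a filesystem:

```
> cd /localhost/MyDatacenter/computers
/localhost/MyDatacenter/computers> ls
0 VSAN-Cluster (cluster): cpu 134 GHz, memory 512 GB
/localhost/MyDatacenter/computers> cd VSAN-Cluster
/localhost/MyDatacenter/computers/VSAN-Cluster> ls
0 hosts/
1 resourcePool [Resources]
```

The familiar `cd` and `ls` semantics mean that anyone comfortable at a shell prompt can immediately begin exploring their inventory.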

Continue reading

VMware Virtual SAN with Cisco UCS Reference Architecture

Co-author: Bhumik Patel, VMware Partner Architect

As customers look to implement Virtual SAN in their environments, it is critical that the underlying platform offers matching attributes of rapid provisioning, unified management, linear scalability, and operational simplicity, so that customers can fully leverage the capabilities of a scale-out, hypervisor-converged Virtual SAN solution. With Cisco UCS as a platform for Virtual SAN, customers can obtain these attributes through Virtual SAN and UCS working together, each contributing the following capabilities in a complementary manner.

  • Rapid Provisioning: Virtual SAN nodes can be rapidly provisioned on Cisco UCS by leveraging the service profile construct within UCS that decouples all the attributes of a physical server into a template. These templates can be applied to bring new servers online during initial provisioning or to extend your existing Virtual SAN cluster.
  • Unified management: The UCS C240-M3 is a rack-mount server that is the baseline for a number of Virtual SAN Ready Nodes. UCS Manager provides centralized, policy-driven management of rack-mount servers, similar to blade management within the UCS domain. In addition, C240-M3 servers now connect directly to the Fabric Interconnects, without the intermediate Nexus 2232 fabric extenders that were previously required.
  • Linear Scalability: Virtual SAN scales out linearly from 3 nodes up to 32 nodes in a cluster. The rapid provisioning capabilities of the UCS platform, enabled by UCS service profiles, make the process of scaling out Virtual SAN clusters through node addition seamless.

To demonstrate these synergistic capabilities, VMware has partnered with the Cisco Server Access and Virtualization Technology Group to produce a joint Reference Architecture document detailing VMware Virtual SAN on Cisco Unified Computing System C-Series rack servers.  Details covered in the paper include the following:

  • Virtual SAN on Cisco UCS Architecture
  • VMware Virtual SAN Availability and Manageability
  • Configuring Virtual SAN on the Cisco UCS 240 M3
  • Benchmarking VMware Virtual SAN on Cisco UCS

Results from our benchmark testing include the following. We conducted two types of IO benchmark tests on 4-node and 8-node Virtual SAN clusters to provide guidance on IOPS capacity, as shown by the graphs below:


Benchmarking was performed on Virtual SAN 5.5 with the following UCS configuration. Default Virtual SAN storage policies were used.

In addition to scalability and management benefits, performing ongoing operations with ease and achieving the highest levels of availability are equally critical. In the white paper, we perform failure simulations of different components (HDD, SSD, network, and host failures) and showcase the resiliency of a Virtual SAN on Cisco UCS environment.

For details on Virtual SAN on the Cisco UCS C-series configuration and testing results, download the VMware Virtual SAN with Cisco UCS Reference Architecture today.


About co-author Bhumik Patel:

Bhumik Patel is a Partner Architect in VMware’s Technical Alliances team focusing on driving joint solutions with strategic partners. Bhumik has over 10 years of experience designing and implementing virtualization solutions for customers globally as a solutions architect and driving integrated solutions with key partners. Bhumik has presented at many leading industry conferences and partner events. Bhumik holds a Bachelors and a Masters degree in Computer Science. Follow @bhumikp


Managing Virtual SAN with RVC: Part 3 – RVC Usage and Command Syntax

In today’s article, we will take a deeper look into the features of the Ruby vSphere Console (RVC) by examining its command structure and syntax. Because RVC is written in Ruby and built upon the Ruby interface to the vSphere API (RbVmomi), it offers considerable strengths that we can leverage to expedite operations in our vSphere infrastructures. RVC began its life as a VMware Labs Fling, a Ruby-based CLI for the vSphere infrastructure. VMware Labs “Flings” are really interesting engineering side projects. As a Fling, RVC became such a valuable tool for VMware Engineering, Support, and others that it was extended to include support for Virtual SAN environments. RVC has now become a robust CLI for managing vSphere and Virtual SAN infrastructures.
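As a brief preview of the syntax covered in this article, RVC commands take the form namespace.command followed by an inventory path, and marks can be used to shorten long paths. A quick sketch (the cluster path here is hypothetical):

```
> mark cl /localhost/MyDatacenter/computers/VSAN-Cluster
> vsan.cluster_info ~cl
```

Here `mark` saves the cluster path under the name cl, and `~cl` expands to that path when passed to the `vsan.cluster_info` command.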

First though, if you need assistance with recommended practices for RVC deployment, or how to login and navigate your vSphere and Virtual SAN infrastructure, please take a look at our first two blog articles from this series.

Managing Virtual SAN with RVC Series:
Part 1 – Introduction to the Ruby vSphere Console
Part 2 – Navigating your vSphere and Virtual SAN infrastructure with RVC

Continue reading

Don’t miss what your peers are saying about building a Software-Defined Data Center at VMworld 2014!

Ever wonder how you can:

  • Forecast storage needs for the next 1-3 years
  • Deploy a cost-effective storage tier for VDI environments
  • Monitor compute and reduce mean time to resolution
  • Virtualize and provide Disaster Recovery for Business Critical Applications, like SAP / Oracle RAC Database and ERP
  • Automate server and application provisioning, with a true service catalog

Join us in a roundtable discussion on architecting a Software-Defined Data Center, design trade-off decisions, and time to value. John Gilmartin, General Manager of the SDDC Suites Business Unit, will moderate the discussion among:

  • Ricky Caldwell, MCSE, CCIA, CCE-V – Director, Server Infrastructure & Architecture for Cornerstone Home Lending
  • Andy Lubel – Manager, Technical Infrastructure for Exostar
  • Suzan Pickett – Senior Manager, Global Infrastructure Services for Columbia Sportswear
  • Sunyo Suhaimi – IT Cloud Transformation Director for VMware

Register now for SDDC2556-S – Customer Panel: Journey to Software-Defined Data Center. Attend and you just might be 1 of 5 lucky session attendees who will take home an iPad Mini or a Patagonia jacket. Because it never hurts to be prepared –  “The coldest winter I ever spent was a summer in San Francisco.” See you there.

Discover VMware Virtual SAN at VMworld!

We invite you to learn more about VMware Virtual SAN — our software-defined storage solution designed for vSphere environments — at VMworld 2014.

The sessions, like the rest of VMworld, will take place at the Moscone Center in San Francisco, California. Depending on your needs, there will be several important VMware Virtual SAN sessions you’ll want to attend.

If your organization is concerned about enterprise-level storage and availability, please consider these VMware Virtual SAN sessions:

  • From Clouds to Bits: Exploring the Software Defined Storage Lifecycle
  • Software-Defined Storage: The Next Phase In The Evolution of Enterprise Storage
  • Software-Defined Storage – The VCDX Way Part II: The Empire Strikes Back
  • Virtual SAN 101 & Building a Business Case
  • Virtual SAN Ready Node and Hardware Guidance for Hypervisor Converged Infrastructure
  • Virtual SAN – Customer Panel
  • Virtual SAN Architecture Deep Dive
  • Virtual SAN Best Practices for Monitoring and Troubleshooting
  • Virtual SAN Best Practices and Use Cases
  • Performance Best Practices for Virtual SAN
  • Virtual Volumes Overview
  • Virtual Volumes Technical Deep Dive
  • Massively Scaling vSAN Implementations
  • Northrim Bank and USX
  • Virtual SAN Hosted on Cisco UCS as the Virtual Desktop Architecture at a Major Call Center
  • Software-Defined Everything. The Next Big Thing.
  • SanDisk Experience on Virtual SAN for VDI and Enterprise Workload
  • Unleashing the Power of VMware’s Virtual SAN on the Latest Industry Standard Performance NVMe/PCIe SSDs

We look forward to seeing you at VMworld 2014! For more updates on VMware Virtual SAN and Software-Defined Storage, be sure to follow us on Twitter at @VMwareVSAN!

Profiling OLTP performance on Virtualized SQL 2014 with All Flash Arrays

TPC-C is an on-line transaction processing (OLTP) benchmark (see the TPC-C main site). TPC-C uses a mix of five concurrent transactions of different types and complexity. The database comprises nine types of tables with a wide range of record and population sizes. TPC-C performance is measured in transactions per minute (TPM).

The goal of this exercise was to see whether 1 million TPM could be achieved on virtualized SQL 2014 backed by an all-flash storage array in a TPC-C-like test. The TPC-C results were compared between two VM sizes (within NUMA and exceeding NUMA boundaries).

Continue reading

VMware Virtual SAN and Block Alignment

Recently a question came up around Virtual SAN and block alignment that I want to address. Traditionally, there have been two potential block-alignment issues in vSphere environments:

1) VMFS block alignment with respect to underlying array chunks. Alignment at the VMFS layer has long been addressed: VMFS-3 (used in vSphere 3 and 4) is aligned at Logical Block Address (LBA) 128, and VMFS-5 (used in vSphere 5.x) is aligned at LBA 2048 by default. However, this issue is not relevant for Virtual SAN, because Virtual SAN does not utilize VMFS; it uses a native object store, and there is no underlying array format.

2) Alignment of guest operating system blocks with VMFS blocks. Older guest operating systems have a block-alignment issue that can cause split IO. This occurs when the guest filesystem partition starts at an unaligned LBA, and as a result guest IOs may cross block boundaries in the underlying VMFS volume or Virtual SAN datastore. Newer operating systems (e.g., Windows 7 and newer) do not have this issue because they start the partition at an aligned 1 MB LBA within a VMDK. For more background information on guest alignment, see our previous blog post.
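The arithmetic behind these alignment checks is simple. A minimal sketch (the sector size and LBA values are illustrative, not Virtual SAN code) that tests whether a partition's starting LBA falls on a 1 MB boundary:

```python
SECTOR_SIZE = 512  # bytes per logical sector (typical for these disks)

def is_aligned(start_lba, alignment=1024 * 1024):
    """Return True if the partition's starting byte offset is on the boundary."""
    return (start_lba * SECTOR_SIZE) % alignment == 0

# LBA 63: the classic misaligned partition start used by older guest OSes
print(is_aligned(63))    # False
# LBA 2048: 2048 * 512 bytes = 1 MB, the aligned start used by Windows 7+
print(is_aligned(2048))  # True
```

Any IO issued on a multiple of the guest block size from an aligned start stays within block boundaries below it; a start like LBA 63 shifts every IO, which is what produces split IO.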

The need for guest alignment in older operating systems is still applicable with Virtual SAN. However, the performance impact of split IO caused by guest OS misalignment is less noticeable to guests residing on Virtual SAN, compared to guests on traditional storage. This is because of the following facts.

a) All Virtual SAN writes go to the flash acceleration layer and are coalesced before they are de-staged to HDD.

b) Read performance will not be highly impacted by split IOs that span cache lines due to guest OS misalignment, as the Virtual SAN flash acceleration layer can serve much higher levels of IOPS than spinning disk.

Because all Virtual SAN writes and the vast majority of reads are served from the flash acceleration layer, the impact of guest OS misalignment is lessened in a Virtual SAN environment when compared to misaligned guests residing on traditional storage.
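To see why coalescing helps, here is a toy simulation (not Virtual SAN code; offsets and lengths are illustrative) of how adjacent buffered writes can be merged into fewer, larger operations before they are de-staged to HDD:

```python
def coalesce(writes):
    """Merge overlapping or adjacent (offset, length) writes into larger extents."""
    merged = []
    for off, length in sorted(writes):
        if merged and off <= merged[-1][0] + merged[-1][1]:
            # Extend the previous extent instead of issuing a separate write
            prev_off, prev_len = merged[-1]
            merged[-1] = (prev_off, max(prev_len, off + length - prev_off))
        else:
            merged.append((off, length))
    return merged

# Two halves of a split IO plus a neighboring write collapse into one extent
print(coalesce([(0, 4096), (4096, 4096), (8192, 4096)]))  # [(0, 12288)]
```

In this model, the two halves of a misaligned (split) write land adjacently in the buffer and are de-staged as a single sequential operation, which is why the penalty is muted on Virtual SAN.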

So, in summary, we still recommend that you align older guest operating systems for optimal performance. If they are not aligned, there will be a performance penalty, but generally (depending on workload characteristics) the penalty is less pronounced on Virtual SAN, due to the use of our flash acceleration layer for write buffering and read caching.

Dell, VMware Virtual SAN, and Horizon Whitepaper

VMware Virtual SAN Logo

The Dell Wyse Solutions Engineering group has partnered with VMware’s Software-Defined Storage team to produce an extensive whitepaper detailing the performance of Virtual SAN running Horizon with View on specific Dell platforms. Virtual SAN configurations using two different hardware platforms are documented, with performance results presented for multiple configurations of SSDs and disk groups. The paper details results from the following platforms.

 Virtual SAN on the Dell PowerEdge R720

  • Standard: Each host with one SanDisk 400 GB SLC SSD in one disk group with six HDDs.
  • Value: Each host with up to three 200 GB Value MLC SATA SSDs (Intel S3700), in up to three disk groups with 12 HDDs.
  • Login VSI was used to test performance for each of these configurations, with workload and operations performance results provided for each configuration.

Virtual SAN on the Dell PowerEdge C6220 II

  • High density platform that encompasses four nodes in a 2U enclosure.
  • VMware View Planner was used to validate performance of this platform up to 100 desktops per node, for 400 desktops in a 2U high density enclosure.

For more details on the testing and documented results, download the Dell VMware Virtual SAN for ESXi 5.5 with VMware Horizon View paper today.

SSH keys when using Lockdown Mode – A 5.x Hardening Guide update


I was informed today that a behavior in the 5.1 through 5.5 Update 1 Hardening Guides is incorrectly documented.

The two affected guidelines are:

  • ESXi.enable-lockdown-mode
  • ESXi.remove-authorized-keys

Continue reading

vSphere Data Protection (VDP) – SSO server could not be found

I ran into the following error today while working with VDP: The SSO server could not be found. Please make sure the SSO configuration on the VDP appliance is correct. There is a KB article on this: http://kb.vmware.com/kb/2072033. After reading through the article and checking those items, the issue was still not resolved.

Some background info: this is my lab environment, which has a couple of DNS sources. I have vCenter Server running on Windows. The Windows server name was vc01.vmware.local, part of an Active Directory domain named vmware.local. The lab environment “outside” of my vmware.local domain is named pml.local. VDP is using a pml.local DNS server as its primary DNS server and a vmware.local DNS server as its secondary DNS server. I know – kind of crazy, and no wonder I was having this issue. I tried a variety of combinations with my vmware.local DNS and the hosts file on the VDP appliance, with no luck. I even renamed my vCenter Server to the host name found in the pml.local DNS and re-registered the VDP appliance to vCenter Server using the new host name. The re-registration went fine, but when I tried to connect using the vSphere Web Client – still no luck (same error).

The fix:

I logged onto the VDP appliance and ran this: tail -f /usr/local/avamar/var/vdr/server_logs/vdr-server.log

I then tried connecting to the VDP appliance using the vSphere Web Client. I observed the VDP appliance attempting to connect to SSO using the URL https://vc01.vmware.local:7444/sso-adminserver/sdk/vsphere.local . I added an entry in the hosts file on the VDP appliance for vc01.vmware.local and I was able to connect. Even though I renamed my vCenter Server, this did not change the URL for connecting to SSO. I verified this in the Advanced Settings for vCenter Server.
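For reference, the workaround boiled down to two commands on the VDP appliance (the IP address below is a placeholder; substitute your vCenter Server's actual address):

```
# Watch the VDP server log while attempting to connect from the Web Client
tail -f /usr/local/avamar/var/vdr/server_logs/vdr-server.log

# Map the SSO URL's host name locally in the appliance's hosts file
echo "192.168.10.50  vc01.vmware.local vc01" >> /etc/hosts
```

Watching the log while reproducing the error is what exposed the stale SSO URL in the first place, so it is worth doing before touching the hosts file.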

Lessons learned and reinforced:

  • DNS must be rock solid in a VMware environment (this has always been the case) – especially with VDP in the mix. You should be able to resolve host names across the entire environment by short name, fully qualified domain name (FQDN), forward lookup, and reverse lookup.
  • Always use fully qualified domain names when configuring VDP.
  • Time must also be in sync. Not the problem in my case, but just making sure everyone is aware.
  • Renaming vCenter Server (the host name) does not change URLs for connecting to SSO, the VIM API, etc.
  • It is possible to populate the /etc/hosts file in the VDP appliance to work around many name resolution issues, but this is not a recommended practice (see the next bullet point).
  • Keep things simple. In my case, it can’t be helped, but having a single source for name resolution is best.
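As a spot-check for the first bullet, a small Python sketch (illustrative, not a VDP tool) that performs a forward lookup of a host name and then a reverse lookup of the returned address; in a healthy environment, both should succeed for every component. It is shown here against localhost as a stand-in for a real host name such as vc01.vmware.local:

```python
import socket

def check_resolution(name):
    """Forward-resolve a host name, then reverse-resolve the returned address."""
    try:
        addr = socket.gethostbyname(name)      # forward lookup
        rev = socket.gethostbyaddr(addr)[0]    # reverse lookup
        return addr, rev
    except OSError as exc:
        return None, str(exc)

print(check_resolution("localhost"))
```

Running this from the VDP appliance and from vCenter Server, for both short names and FQDNs, quickly reveals the kind of split-DNS inconsistency described above.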