Back in 2014, Rawlinson Rivera made the case for how Virtual SAN takes advantage of data locality, both temporal and spatial. He explained how intelligent prefetch algorithms save disk I/O and reduce latency. The Virtual SAN caching white paper goes into more detail on this.
A key point in that paper was that Virtual SAN does not migrate persistent data to local flash. That style of data locality means that every vMotion or HA event in your environment triggers a massive flood of network traffic that can and will significantly impact latency. The supposed benefit is reduced storage latency or reduced network throughput consumption. Rawlinson clearly outlines in the paper that the latency of modern flash devices at queue depth is high enough to make the added network hop a moot point. Requiring customers to disable or scale back the Distributed Resource Scheduler (DRS) to improve storage performance shows the limitations of this design, and many competitors who rely on data locality are quietly telling customers to turn off DRS as they scale. Our own lab testing has revealed this to be a choice between poor consolidation ratios (DRS off) and highly inconsistent latency (leaving it on and accepting the consequences of data locality). Discussions with backup vendors point to data locality significantly impacting backup and recovery performance at both small and large scale; significant compromises are often required to keep full backups from crippling storage latency on these competing solutions.
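The flash-versus-network-hop argument can be illustrated with back-of-envelope arithmetic. The figures below are illustrative assumptions, not measurements from any specific device or fabric:

```python
# Back-of-envelope sketch: how much does one network hop add to a read
# that is already bounded by flash latency at high queue depth?
# Both figures are assumed values for illustration only.

flash_latency_us = 500.0   # assumed flash device latency at high queue depth (microseconds)
network_hop_us = 50.0      # assumed cost of a 10Gbps switch hop plus stack overhead

remote_read_us = flash_latency_us + network_hop_us
added_fraction = network_hop_us / remote_read_us

print(f"local read:  {flash_latency_us:.0f} us")
print(f"remote read: {remote_read_us:.0f} us ({added_fraction:.0%} added by the hop)")
```

Under these assumptions the hop adds well under a tenth to total read latency, which is the shape of the trade-off Rawlinson describes: the flash device, not the wire, dominates.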
Higher latency increases processing time for workloads and degrades the end-user experience. Inconsistent latency is even worse: users find seemingly random, sudden drops in application performance jarring and disorienting, and latency-sensitive applications may experience outages. It should also be noted that these migrations do not improve latency for writes, as data must always be written to a remote system.
The argument for optimizing network throughput has changed considerably as network capabilities and costs have evolved. A decade ago this was a real concern: 1Gbps networking was the norm, and 10Gbps ports were expensive. Today, onboard 10Gbps ports, 10GBase-T networking, and switch ports below $200 are common. Using the vSphere Distributed Switch (vDS), Virtual SAN can intelligently share ports with other traffic using Network IO Control (NIOC), and it can leverage multiple links using the link aggregation technologies available on modern switches.
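NIOC's sharing model can be sketched as simple proportional allocation: under contention, each traffic type is guaranteed a slice of the link in proportion to its shares. The share values below are illustrative examples, not recommended settings:

```python
# Minimal sketch of NIOC-style proportional bandwidth allocation.
# Under full contention each traffic type gets link bandwidth in
# proportion to its shares; when the link is idle, any type may burst
# beyond its floor. Share values here are purely illustrative.

link_gbps = 10
shares = {"Virtual SAN": 100, "vMotion": 50, "VM traffic": 100, "Management": 20}

total_shares = sum(shares.values())
floors = {name: link_gbps * s / total_shares for name, s in shares.items()}

for name, gbps in floors.items():
    print(f"{name}: {gbps:.2f} Gbps floor under full contention")
```

With these example shares, Virtual SAN traffic is guaranteed roughly 3.7Gbps of a 10Gbps link even when every other traffic type is saturating it, which is why a dedicated storage network is no longer a prerequisite.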
With 25Gbps poised to become the onboard standard next year, and 100Gbps multi-lane ports on the horizon, the immediate future does not suggest that improvements in read throughput are worth sacrificing latency consistency. At the application layer, modern applications scale out horizontally rather than vertically, further diversifying the traffic profile of IO access. I will admit there are certain niche high-throughput applications, but these are generally fringe workloads that are primarily deployed on legacy modular storage for cost or highly asymmetric scaling reasons.
Fundamentally, one problem many storage platforms share is that they were designed 2-4 years before they publicly shipped. Many of the initial bottlenecks or design concerns are less relevant by the time the product is released. Early movers often find themselves, 4-5 years in, realizing that their decade-old design has limitations that fail to address the modern needs of the market. How do you solve this problem?
- First, it requires the flexibility that comes with taking a longer-term view of the market rather than shortcuts to relevance. Companies that took design shortcuts to address the cost models or challenges of 7-10 years ago will struggle today to quickly change those “features,” which have become technical debt that must be paid.
- Second, it requires vision: constant re-review of the market and its direction, and an understanding of what customers are doing in their data centers (from business-critical applications, to end-user computing, to cloud native applications). Merely talking about what customers are deploying is not enough. Coming up with custom solutions (like VMFork for Instant Clones, and Photon for container optimization) is key to staying relevant.
- Third, it requires hard work. A brilliant development staff has to extend the platform to address the coming challenges while still recognizing what customers need today. Thankfully, we have these resources committed to making Virtual SAN better.
In part two of this series we will review use cases and extensions where network latency does matter, and how Virtual SAN has been extended to handle those requirements.