First I’m going to ask you to go check out the following KB and take 2-3 minutes to read it: https://kb.vmware.com/s/article/90343
Pay extra attention to the table in the document it links to.
Also go read Pete’s new blog explaining read intensive drive support.
So what does this KB Mean in practice?
You can start with the smallest ReadyNode (this includes the new ESA-AF-0 ReadyNode profile) and add capacity, drives, or bigger NICs, making changes based on the KB.
Should I change it?
The biggest thing to watch for is adding TONS of capacity without increasing NIC sizes, which could result in longer than expected rebuilds. Putting 300TB into a host with 2 x 10Gbps NICs is probably not the greatest idea, while adding extra RAM or cores (or changing the CPU frequency 5%) is unlikely to yield any unexpected behaviors. In general, balanced designs are preferred (that’s why the ReadyNode profiles exist as templates), but we do understand that sometimes customers need some flexibility, and that is why the KB above was created.
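To see why NIC size matters as capacity grows, here is a back-of-the-envelope sketch of rebuild time. This is not a VMware sizing formula; the 50% effective-throughput factor and the capacities are illustrative assumptions only.

```python
# Rough rebuild-time estimate: time to resynchronize a host's worth of data
# over the vSAN network. The 50% efficiency factor is a hypothetical
# assumption, not measured behavior.

def rebuild_hours(capacity_tb: float, nic_gbps: float, efficiency: float = 0.5) -> float:
    """Hours to move capacity_tb at nic_gbps aggregate line rate * efficiency."""
    bits = capacity_tb * 1e12 * 8                      # decimal TB -> bits
    seconds = bits / (nic_gbps * 1e9 * efficiency)
    return seconds / 3600

# 300 TB host: 2 x 10 Gbps (20 Gbps aggregate) vs 2 x 100 Gbps (200 Gbps):
print(f"{rebuild_hours(300, 20):.1f} h")   # -> 66.7 h
print(f"{rebuild_hours(300, 200):.1f} h")  # -> 6.7 h
```

Even with generous assumptions, a full-host rebuild at 20Gbps is measured in days of exposure, which is the intuition behind the "don't stuff 300TB behind 2 x 10Gbps" advice above.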
What can I change?
I’ve taken the original list, converted it to text, and added (in italics) some commentary on what the impact of various changes may be.
CPU
- Same or higher core count with similar or higher base clock speed is recommended. Purchasing hosts with 200MHz more or less than the reference node is unlikely to produce significant performance changes, and buying a host with more cores will potentially increase storage performance with vSAN ESA.
- If you want to scale storage performance with additional drives, consider more cores. While vSAN OSA was more sensitive to clock speed for scaling aggregate performance, vSAN ESA’s additional threading makes more cores particularly useful for scaling performance.
- As of the time of this writing the minimum number of cores is 32. Please check the vSAN ESA VCG profile page for updates to see if smaller nodes have been certified.
Memory
Each vSAN ESA ReadyNode™ is certified against a prescriptive BOM.
- Adding more memory than what is listed is supported by vSAN, provided vSphere supports it. Please maintain a balanced memory population configuration when possible (server OEM SEs can guide you on proper memory channel usage). In general, adding more RAM to a host is not going to cause any odd imbalances in vSAN or storage behavior.
Storage Devices (NVMe drives today)
- Device needs to be same or higher performance/endurance class. Do note that “Read Intensive” drives are now available.
- Storage device models can be changed with vSAN ESA certified disks. Please confirm storage device support on the server with the server vendor.
- We recommend balancing drive types and sizes (homogeneous configurations) across nodes in a cluster.
- We allow changing the number of drives, and drives at different capacity points (the change should be contained within the same cluster), as long as the configuration meets the capacity requirement of the profile selected and does not exceed the max drives certified for the ReadyNode™. Please note that performance is dependent on the quantity of drives.
- Mixed Use NVMe (typically 3DWPD) endurance drives are best for large block, steady-state workloads. Lower endurance “Read Intensive” drives that are certified for vSAN ESA may make more sense for read heavy, shorter duty cycle, storage dense, cost conscious designs.
- 1DWPD ~15TB “Read Intensive” drives are NOW on the vSAN ESA VCG; for non-sustained, large block write workloads they offer great value for storage dense requirements.
- Consider rebuild times, and also consider upgrading the number of NICs used for vSAN, or the NIC interfaces to 100Gbps, when adding significant amounts of capacity to a node.
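The DWPD trade-off in the bullets above comes down to simple arithmetic: total writable data over the warranty period is capacity x DWPD x warranty days. A quick sketch, assuming a 5-year warranty and hypothetical drive capacities (not vendor-published figures):

```python
# Hypothetical endurance math: lifetime writes = capacity * DWPD * warranty days.
# Capacities and warranty length are illustrative assumptions, not vendor specs.

def lifetime_writes_pb(capacity_tb: float, dwpd: float, years: float = 5) -> float:
    """Total drive writes over the warranty period, in PB."""
    return capacity_tb * dwpd * years * 365 / 1000

# ~15.36 TB read-intensive (1 DWPD) vs 6.4 TB mixed-use (3 DWPD):
print(f"{lifetime_writes_pb(15.36, 1):.1f} PB")  # -> 28.0 PB
print(f"{lifetime_writes_pb(6.4, 3):.1f} PB")    # -> 35.0 PB
```

Note that a big 1DWPD drive can absorb nearly as many total writes as a smaller 3DWPD drive; it is the sustained write rate per TB, not total endurance, where the lower class gives ground.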
NIC
- NICs certified in the IOVP can be leveraged for vSAN ESA ReadyNode™.
- NIC should be same or higher speed.
- We allow adding additional NICs as needed.
- When using ESA-AF-0 ReadyNodes it is advised to still consider 25Gbps NICs, as they can operate at 10Gbps and support future switching upgrades (SFP28 interfaces are backwards compatible with SFP+ cables/transceivers). In general, many of the cheaper 10Gbps NICs out there may be missing key offloads, lack RDMA support, or have lower packets per second (PPS) capability than newer/smarter 25Gbps ASICs.
Boot Devices
- Boot device needs to be same or higher performance endurance class.
- Boot device needs to be in the same drive family.
TPM
Please purchase a TPM. It is critically important for vSAN encryption key protection, securing the ESXi configuration, host attestation, and other functions. A TPM costs $50 up front, but skipping it costs hours of annoying maintenance to install one after the fact.