Several years ago, as a VMware administrator for a government agency, I was tasked with replacing an aging hardware infrastructure that supported a mission-critical e-mail messaging environment for 10,000 users. The messaging infrastructure was managed internally and was referred to as our Private Cloud Messaging Environment.
I was responsible for the overall architecture of the solution. In this post I want to detail some of the challenges we faced in designing, building, and testing the Private Cloud Messaging Environment, and how EVO SDDC would help resolve those challenges today.
It’s no simple task to design a hardware infrastructure solution for a 10,000-user enterprise messaging environment. My team spent several weeks researching individual hardware components such as servers, RAID controllers, NICs, HBAs, switches, storage solutions, and hard drives. We met with various vendors who detailed their hardware features and capabilities and offered product demonstrations. Based on the feedback we collected, we selected components that we expected would be compatible.
After creating a hardware bill of materials, we had hoped to find existing white papers or other documentation demonstrating the performance and scalability of the components that comprised our solution. Unfortunately, results from benchmarking tools such as Microsoft Exchange Jetstress and Microsoft Exchange Load Generator (LoadGen) weren’t available for our specific configuration. We couldn’t risk undersizing the solution, so we sized it based on our existing server count.
Ordering equipment proved to be one of our greatest challenges. Our initial bill of materials included more than 100 sub-components from 9 different vendors. This required multiple purchase orders, lengthening the approval process.
How does EVO SDDC Help Today?
VMware EVO SDDC can be ordered with a single SKU and is built on pre-validated hardware. Customers can select an EVO SDDC solution from multiple hardware vendors and can choose from a variety of CPU, memory, and storage options. All components are carefully selected, and rigorously tested to ensure optimal performance and compatibility.
EVO SDDC product demonstrations are currently available. VMware has recently opened a Solution Center in Palo Alto, CA. This facility will enable potential customers to obtain hands-on experience and conduct workload performance and scalability testing.
VMware has also tested several enterprise applications on EVO SDDC and can help customers right-size their environment. This enables customers to buy exactly what they need today and grow their solution in increments as small as a single server. This level of scalability supports future growth based on customer needs.
Over the following weeks, many boxes of equipment arrived. With little storage space in the data center, boxes were stored in various locations, which made inventory tracking complicated. Once we were fairly confident that all of the equipment had arrived, we began assembly. After the racks were in place, we quickly realized that several components were missing or incorrectly ordered. The vendor we had purchased the servers from shipped the wrong rack-mount server rail kits; as a result, we now had 24 unboxed servers that we couldn’t install in the rack. We also noticed that the power distribution unit (PDU) output connection did not match the receptacle provided in our data center. Getting approval for purchase orders to replace missing equipment, and return merchandise authorizations (RMAs) to return incorrectly ordered equipment, extended the build process by several weeks.
How does EVO SDDC Help Today?
VMware EVO SDDC arrives onsite fully assembled, within 45 days, in a single box. Customers no longer have to worry about missing or incompatible components. Customers simply plug the PDUs into their data center power source, connect the top-of-rack switches in the first rack to their network, and perform the initial bring-up.
Prior to shipment, VMware conducts a site readiness assessment. Details including power, network, safety, accessibility, and shipping requirements are reviewed in advance to ensure seamless delivery of an EVO SDDC solution. Customers can feel confident that their EVO SDDC solution will be operational on Day 1.
Once the equipment was finally assembled, we were anxious to power it on and begin testing. We quickly ran into two showstoppers. Some of the RAID controller cards were not identifying the installed disk drives; after several days of troubleshooting, the problem was resolved with a disk firmware upgrade. The second issue was more challenging to resolve. One of the two HBAs installed in each server was not detected. There was a lot of finger-pointing between the server vendor and the HBA vendor, and no one would assume responsibility for support. After removing the HBAs and installing them one at a time, we learned that the voltage required for two HBAs and one RAID controller exceeded the installed power supplies’ capacity. Larger-capacity power supplies had to be ordered, which delayed testing further.
After spending weeks troubleshooting and resolving hardware-related issues, we finally had an opportunity to benchmark the solution. During benchmark testing we learned that our decision to size based on existing server count had caused us to grossly oversize the environment; the excess capacity would likely remain unused.
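The gap between the two sizing approaches can be sketched with a back-of-the-envelope calculation. This is a minimal, hypothetical illustration: the per-user IOPS figure, per-host IOPS capacity, and headroom percentage below are assumptions for the example, not measured values from our environment.

```python
# Hypothetical right-sizing sketch. All capacity figures are illustrative
# assumptions, not measurements from the environment described in this post.
import math

def hosts_needed(users, iops_per_user, host_iops, headroom=0.30):
    """Hosts required to serve a user population, with growth headroom."""
    required_iops = users * iops_per_user * (1 + headroom)
    return math.ceil(required_iops / host_iops)

# Sizing by measured workload (assumed: 1 IOPS per mailbox, 4,000 IOPS
# of usable capacity per host, 30% headroom for growth).
workload_sized = hosts_needed(10_000, iops_per_user=1.0, host_iops=4000)

# Sizing by existing server count simply carries the old footprint forward.
legacy_sized = 24  # the server count we actually ordered

print(workload_sized, legacy_sized)
```

Even with generous assumptions, sizing from the workload yields a far smaller host count than carrying the legacy server count forward, which is exactly the oversizing the benchmark testing revealed.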
How does EVO SDDC Help Today?
VMware EVO SDDC is built upon pre-validated hardware, right down to the firmware and BIOS setting level. Customers no longer have to worry about troubleshooting incompatibility issues.
VMware also eliminates the finger-pointing typically associated with multi-vendor solutions. EVO SDDC customers have a single number to dial for support.
EVO SDDC allows customers to create several workload domains. Each workload domain supports elasticity and can quickly scale based upon workload requirements. When no longer needed, a workload domain can be deleted, returning its resources to EVO SDDC. Each workload domain supports Role-Based Access Control (RBAC) and can be independently managed, allowing multiple applications to be separated, or multiple departments to have their own workload domain(s). In our case, these capabilities would have allowed us to create a single workload domain for the enterprise messaging application, while the remaining resources would be available to create additional workload domains for other departments or applications.