New Updates and Advisory – March 14th, 2018:
On Tuesday, March 13th, 2018, Microsoft released a number of Security Updates for Windows Operating Systems. Two of these updates are now confirmed to be problematic: when applied to virtualized Windows Server 2008/R2 or Windows 7 instances, these patches replace the existing virtual network card (vNIC) with a new one, hide the pre-existing vNIC, and do not persist or transfer the existing IP address configuration to the new vNIC.
We are updating this post (rather than creating a new one) because the issues are similar and well-known at this time. We expect that Microsoft will release an advisory or updates in due course.
ACTION: If you have been impacted, please note that the following manual fixes have been known to resolve the issue (after the fact):
- Note the name of the new network adapter
- Open “Device Manager” -> “Show Hidden Devices” and delete ALL hidden/phantom network adapters from the list (be sure to NOT delete the drivers)
- Edit the properties of the new NIC and add the applicable IP address configuration. No reboot is required.
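Where several VMs must be fixed, the last step can be partially scripted. The sketch below is a hedged illustration, not an official remediation: it only builds the netsh commands that would restore a static IPv4 configuration on the new adapter, and the adapter name, addresses, and DNS servers shown are hypothetical placeholders for the values you noted from the affected VM.

```python
# Generate (but do not execute) the netsh commands that reapply a
# static IPv4 configuration to the new vNIC. All values passed in
# below are hypothetical examples - substitute your own.

def build_netsh_commands(adapter, ip, mask, gateway, dns_servers):
    """Return the netsh commands that restore a static IPv4 config."""
    cmds = [
        f'netsh interface ipv4 set address name="{adapter}" '
        f'static {ip} {mask} {gateway}'
    ]
    for i, dns in enumerate(dns_servers, start=1):
        if i == 1:
            # First DNS server replaces the DHCP-assigned list.
            cmds.append(
                f'netsh interface ipv4 set dnsservers name="{adapter}" '
                f'static {dns} primary'
            )
        else:
            # Additional DNS servers are appended at the next index.
            cmds.append(
                f'netsh interface ipv4 add dnsservers name="{adapter}" '
                f'{dns} index={i}'
            )
    return cmds

for cmd in build_netsh_commands(
        "Local Area Connection 2",       # name of the NEW adapter
        "10.0.0.25", "255.255.255.0",    # example address and mask
        "10.0.0.1",                      # example default gateway
        ["10.0.0.10", "10.0.0.11"]):     # example DNS servers
    print(cmd)
```

Review the generated commands before running them in an elevated prompt inside the guest.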
The following video has been created to provide a visual cue for this resolution.
Microsoft has updated the “Known Issues in this Update” section of the applicable KB.
VMware has received confirmation that Microsoft has determined that the issue reported in this post is a Windows-specific issue and unrelated to VMware or vSphere. Microsoft is encouraging customers to follow the directions provided in Microsoft KB3125574 for the recommended resolution. All further updates will be provided directly by Microsoft through the referenced KB. This marks the end of further updates to this blog post.
- Please see Microsoft’s updated guidance and recommended work-around regarding this issue
- I am removing reference to “VMware VMXNet3” in the title of this post to reflect Microsoft’s latest updates to their KB. Yes, the issue still exists when using VMXNet3 for VMs, but it no longer appears that this issue is specific to “VMXNet3” virtual network adapters.
- We are still working with Microsoft to conduct a comprehensive Root-Cause Analysis, and we will provide further updates as new information (or a resolution) becomes available.
As enterprise IT teams, leadership, and business owners continuously drive toward service improvements, they invariably look at public cloud infrastructure as a possible target for their mission-critical applications. While virtualization is now generally accepted as the default platform for enterprise-grade applications, businesses looking to leverage the public cloud for most of these applications are still constrained in their ability to do so.
These constraints can be directly attributed to the following (among others):
- Performance Concerns – is the target public cloud robust enough to meet the application’s scale and performance requirements?
- Vendor Support – is the target cloud platform certified for the application? Will the vendor provide the necessary technical support and assistance when (not if) the enterprise requires it?
- Level of Effort – mission-critical applications demand considerable attention to configuration and other considerations beyond those required for lower-tiered applications, and moving from one hosting platform to another may not be a simple or quick undertaking.
This article will address two of these constraints in relation to enterprises’ desire to operate their Microsoft Exchange Server workloads on the VMware Cloud on AWS platform – Performance and Support. Part II of this article will address the “Level of Effort” aspect – we feel that it deserves a stand-alone article of its own.
Microsoft Exchange Server is one of the most prevalent Messaging and Collaboration applications in enterprises today. Microsoft officially supports virtualizing Microsoft Exchange Server (hereafter simply referred to as “Exchange” or “Exchange Server”) on the VMware vSphere virtualization platform. Because VMware has been supporting (and providing guidance for) the virtualization of Exchange Server for more than 10 years (even before official Microsoft support), virtualizing Exchange Server on the vSphere platform has become quite mainstream. Continue reading
We are pleased to announce that it is now supported to deploy SAP HANA Scale-Out systems with up to 16 virtualized SAP HANA VMs of 2 TB each. If the SAP sizing permits, an SAP HANA Business Warehouse system RAM size of 32 TB* is now possible.
At the end of January, SAP granted VMware vSphere 6.5 support for up to 8 SAP HANA Scale-Out nodes/VMs. Our technology partner Fujitsu re-ran the benchmark with 8 SAP HANA VMs, installed on four Fujitsu PRIMEQUEST 2800B3 systems configured with 2 TB RAM and eight Intel Broadwell CPUs.
The SAP BW edition for SAP HANA Standard Application Benchmark Version 2 was performed according to the SAP benchmark rules, which require a minimum memory utilization of 65%. In our case, this was 8 datasets with 10,400,000,000 initial data records. CPU utilization was between 80% and 98% on the seven worker-node VMs during the query throughput phase. Details can be found in benchmark certificate 2018007. Continue reading
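As a back-of-the-envelope illustration of the 65% rule mentioned above, the snippet below checks whether a dataset footprint satisfies the minimum memory utilization for an 8 × 2 TB Scale-Out configuration. The dataset size used here is a hypothetical figure for illustration, not a number taken from the certified benchmark.

```python
# Check the benchmark sizing rule: the dataset must occupy at least
# 65% of the configured memory across the Scale-Out system.

def memory_utilization(total_ram_tb, dataset_tb):
    """Fraction of configured RAM occupied by the benchmark dataset."""
    return dataset_tb / total_ram_tb

vms, ram_per_vm_tb = 8, 2            # from the benchmark configuration
total_ram_tb = vms * ram_per_vm_tb   # 16 TB across the system

dataset_tb = 11.2                    # hypothetical dataset footprint
util = memory_utilization(total_ram_tb, dataset_tb)
print(f"utilization: {util:.0%}, rule met: {util >= 0.65}")
```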
“Around the World in Eighty Days” – A classic adventure novel and one of Jules Verne’s most acclaimed works which describes how Phileas Fogg and his valet Passepartout attempt to circumnavigate the world in 80 days on a wager.
In the world of Business Critical Applications, especially IO-intensive Oracle workloads, there is always a need for storage migration driven by ever-increasing workload demands. For example:
- Migrate storage from one Tier to another Tier within a storage array
- Migrate storage from one array to another array (within and between datacenters)
With the recent launch of VMware Cloud on AWS from VMware, many Business Critical Application (BCA) workloads that were previously difficult to deploy in the cloud no longer require significant platform modifications. VMware Cloud on AWS, powered by VMware Cloud Foundation, integrates VMware flagship compute, storage, and network virtualization products—VMware vSphere, VMware vSAN, and VMware NSX—along with VMware vCenter Server management. It optimizes them to run on elastic, bare-metal AWS infrastructure.
VMware and AWS presented a Better Together demonstration at VMworld 2017 using an Oracle RAC Database for high-availability zero-downtime client connection failover, supporting a Django-Python application running in a Native AWS Elastic Beanstalk environment. This illustrates the further value you can take advantage of by choosing VMware Cloud on AWS as the public cloud infrastructure for your Oracle RAC implementations.
Key Points to take away from this blog
Oracle licensing does not change whether you run Oracle workloads in a classic vSphere environment, on a Hyper-Converged Infrastructure solution like vSAN, or on VMware Cloud on AWS.
At the end of last year, VMware assisted our technology partner Fujitsu with a SAP Scale-Out BWH benchmark (SAP BW edition for SAP HANA Standard Application Benchmark Version 2). The benchmark was run on Fujitsu PRIMEQUEST 2800B3, 1 TB RAM configured systems, with four Intel Broadwell CPUs (Intel Xeon E7-8880 v4). As a result of the performance demonstrated by this test, SAP provided support for SAP Scale-Out deployments for up to 8 active nodes (7 Worker + 1 Master) with vSphere 6.5.
Why is this important? Previously, this deployment option was supported only on older CPU generations and vSphere 5.5, which reaches end of support in September 2018. If you have deployed SAP HANA Scale-Out on vSphere 5.5, please consider upgrading to vSphere 6.5 as soon as possible.
We performed the benchmark with 3.9 billion initial records on a 4-node Scale-Out configuration deployed both natively and virtualized. The native and virtual SAP HANA tests ran on the same hardware with the same OS and HANA system configuration, making the native and virtual results directly comparable. Continue reading
This is part 1 of 2 blogs covering how hyper-threading impacts virtual SAP sizing and performance. Many virtual SAP deployments leverage Intel's hyper-threading (HT) technology. For each processor core that is physically present, the hypervisor sees two logical processors and shares the workload between them when possible. A vCPU can be scheduled on a logical processor of a core while the other logical processor of that core is idle; in this blog this is referred to as one vCPU scheduled per core. Two vCPUs can also be scheduled on the two logical processors of the same core; this is referred to as two vCPUs scheduled per core. For more background on vSphere scheduling functionality, please see the whitepaper The CPU Scheduler in VMware vSphere.
I will show three different sizing scenarios.
The first scenario shows:
- 14 physical cores with HT enabled (28 logical CPUs).
- A virtual machine (VM) with 14 vCPUs.
- vSphere will schedule each vCPU on a logical CPU on a separate dedicated physical core (default behavior). The scheduler prefers a whole idle core, where both logical CPUs of the core are idle, over a partial idle core, where one logical CPU is idle while the other is busy.
- There is spare capacity for more performance as not all the logical CPUs are utilized.
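The scheduling arithmetic in this scenario can be sketched as follows. This is a simplified model of the behavior described above, not the actual vSphere scheduler, and the helper function is purely illustrative.

```python
# Simplified model: with hyper-threading, each physical core exposes
# two logical CPUs, and the scheduler prefers to place one vCPU per
# whole idle core when enough cores are available.

def placement(physical_cores, vcpus, ht_enabled=True):
    """Return (logical_cpus, vcpus_per_core, spare_logical_cpus)."""
    logical = physical_cores * (2 if ht_enabled else 1)
    if vcpus > logical:
        raise ValueError("more vCPUs than logical CPUs")
    # One vCPU per core if there are enough whole cores;
    # otherwise two vCPUs must share a core's logical CPUs.
    per_core = 1 if vcpus <= physical_cores else 2
    return logical, per_core, logical - vcpus

# Scenario 1: 14 cores with HT, a 14-vCPU VM -> one vCPU per core,
# with 14 logical CPUs left as spare capacity.
print(placement(14, 14))
```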
Key Trends in Big Data Infrastructure:
Some of the key trends in big data infrastructure over the past couple of years are:
• Decoupling of Compute and Storage Clusters
• Separate compute virtual machines from storage VMs
• Data is processed and scaled independently of compute
• Dynamic Scaling of compute nodes used for analysis from dozens to hundreds
• Spark and other newer Big Data platforms can work with regular filesystems
• Newer platforms store and process data in memory
• New platforms can leverage Distributed Filesystems that can use local or shared storage
• Need for High Availability & Fault Tolerance for master components
Storage – the final frontier. These are the voyages of any Business Critical Oracle database, its endless mission: to meet the business SLA, to sustain increasing workload demands and seek out new challenges, to boldly go where no database has gone before.
Storage is one of the most important aspects of any IO-intensive workload, and Oracle workloads typically fit this bill. We all know how misconfigured storage or incorrect tuning often leads to database performance issues, irrespective of the architecture the database is hosted on.
As part of my pre-sales Oracle Specialist role, where I talk to customers, partners, and the VMware field, I always bring up the fact that we can procure the biggest and baddest piece of infrastructure on the face of the earth, and all it takes is one incorrect setting or misconfiguration for everything to go to “Hell in a Handbasket”.
The crux of this blog’s discussion is “How to stop hoarding much-needed infrastructure resources and live wisely ever after by scaling up effectively as needed”.
Typically, Oracle workloads running on bare-metal environments (or, for that matter, any environment) are sized very conservatively, given the nature of the workload, on the premise that in the event of a workload spike, the abundant resources thrown at the workload will sustain it. In reality, we need to ask ourselves these questions:
- How much resource is actually allocated to the workload?
- How much of that allocated resource is actually consumed by that workload?
- How often does the workload experience spikes?
- If spikes are happening regularly then, has proper capacity planning and forecasting been done for this workload?
Proper planning and design, along with capacity planning and forecasting, are the key to managing any Business Critical Application (BCA) workload, and there is no shortcut around this.
Unfortunately, in a physical environment this means, for example, static allocation of resources to a BCA workload whose CPU utilization has been flat at 30-40% for 11 months of the year, rising to 55-60% only in the last month.
Pre-allocating resources to a workload in anticipation of a peak lasting, say, one month per year leaves those resources underutilized for the rest of the year and starves other workloads of much-needed capacity. This ineffective resource allocation leads to a larger server footprint and, in turn, increased CAPEX and OPEX.
Enter “Hot Plug” – “Hot Plug CPU and Hot Plug Memory” on the vSphere platform: resource allocation on demand, resulting in effective and elastic resource management on the principle of “Ask and thou shalt receive”.
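As a rough sketch of this sizing argument, the snippet below compares sizing statically for the annual peak against allocating for steady state and hot-adding vCPUs only for the peak month. The monthly utilization figures mirror the example in the text (30-40% for 11 months, 55-60% in the last month); the vCPU counts and the target utilization ceiling are hypothetical values chosen for illustration.

```python
import math

# Statically sized for the annual peak (hypothetical figure).
PEAK_SIZED_VCPUS = 16

# Fraction of the peak-sized allocation that is actually busy each
# month: flat ~35% for 11 months, ~60% in the peak month.
monthly_util = [0.35] * 11 + [0.60]

# Static allocation: all 16 vCPUs held for all 12 months.
static_vcpu_months = PEAK_SIZED_VCPUS * 12

# Hot-plug approach: allocate just enough vCPUs to keep utilization
# under a target ceiling, hot-adding capacity for the peak month.
TARGET_UTIL = 0.70
demand = [u * PEAK_SIZED_VCPUS for u in monthly_util]   # busy vCPUs
hotplug = [math.ceil(d / TARGET_UTIL) for d in demand]  # allocated

print(f"static: {static_vcpu_months} vCPU-months, "
      f"hot-plug: {sum(hotplug)} vCPU-months")
```

Even in this toy example, allocating on demand roughly halves the vCPU-months held by the workload, freeing capacity for other workloads for 11 months of the year.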
With the recent launch of the VMware Cloud on AWS Software Defined Data Center (SDDC) from VMware, many Business Critical Application (BCA) workloads that were previously difficult to deploy in the cloud no longer require significant platform modifications.
This post describes a Better Together demonstration VMware and AWS presented at VMworld 2017 using an Oracle RAC Database for high-availability zero-downtime client connection failover, supporting a Django-Python application running in a Native AWS Elastic Beanstalk environment.
Oracle RAC presents two requirements that are difficult to meet on AWS infrastructure:
- Shared Storage
- Multicast Layer 2 Networking.
VMware vSAN and NSX deployed into the VMware SDDC cluster meet those requirements directly.
The Django-Python application layer’s end-to-end provisioning is fully automated with AWS Elastic Beanstalk, which creates one or more environments containing the necessary Elastic Load Balancer, Auto-Scaling Group, Security Group, and EC2 Instances, each complete with all of the Python prerequisites needed to dynamically scale based on demand. From a zip file containing your application code, a running environment can be launched with a single command. Continue reading