Who loves virtual Performance? Who wants to learn more about it?
Everybody of course!
I'm very excited about this year's Extreme Performance Series mini-track being hosted at VMworld San Francisco and Barcelona. These sessions are created and presented by VMware's best and most distinguished performance engineers, architects, and gurus. I've tried to provide my personal thoughts on each session, but these few words will never do them justice. Hope to see you all there!
Are you experiencing challenges with your current vSphere storage environment (e.g., performance, capacity constraints, complexity, expensive renewals), or just not sure if VMware Virtual SAN (VSAN) would be a good fit?
Now VMware partners, SEs, or reps can help you with a free VSAN Assessment.
The mClock scheduler was introduced with vSphere 5.5 Storage I/O Control (SIOC) and laid the foundation for new storage resource scheduling capabilities. vSphere 6.0 expands upon these capabilities and adds the ability to reserve IOPS, providing even more flexibility and control when delivering storage services to virtual machines. However, this new capability introduces new questions about how resources are managed and allocated during periods of storage contention.
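To build intuition for how reservations, limits, and shares interact under contention, here is a toy Python sketch of the allocation semantics: each VM is first guaranteed its reservation, and the remaining IOPS are divided by shares, capped at each VM's limit. This is a simplified model for illustration only, not vSphere's actual mClock implementation (which schedules individual I/O requests via tagging); the function name and VM values are hypothetical.

```python
def allocate_iops(capacity, vms):
    """Divide total IOPS capacity among VMs.

    vms maps a VM name to (reservation, limit, shares).
    Reservations are satisfied first; the remainder is distributed
    proportionally by shares, water-filling up to each VM's limit.
    """
    # Step 1: every VM gets its reservation (never above its limit).
    alloc = {name: min(res, limit) for name, (res, limit, _) in vms.items()}
    remaining = capacity - sum(alloc.values())

    # Step 2: repeatedly split what's left by shares among VMs
    # that have not yet hit their limit.
    while remaining > 1e-9:
        active = [n for n in vms if alloc[n] < vms[n][1]]
        if not active:
            break  # everyone is at their limit; capacity goes unused
        total_shares = sum(vms[n][2] for n in active)
        round_pool = remaining
        for n in active:
            grant = min(round_pool * vms[n][2] / total_shares,
                        vms[n][1] - alloc[n])
            alloc[n] += grant
            remaining -= grant
    return alloc

# Example: 1000 IOPS of capacity under contention.
# A has twice B's shares but a 500 IOPS limit, so its surplus
# spills over to B.
vms = {"A": (200, 500, 2), "B": (100, 1000, 1)}
result = allocate_iops(1000, vms)
```

In this example A's shares entitle it to more than its limit allows, so the excess "water-fills" to B and both end up at 500 IOPS, which is exactly the kind of contention-time behavior the session digs into.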
Some vSphere administrators utilize a storage feature called "raw device mapping" or RDM. There are two types of RDM: virtual RDM and physical RDM. For more information on RDM, please see the vSphere 6.0 Documentation Center. In general, I recommend using VMDK files or Virtual Volumes, but RDM does offer certain benefits.
"Does vSphere Replication support the replication of RDMs?"
The answer is yes, but only virtual RDMs. vSphere Replication does not support physical RDMs. The next question I get is "How is the virtual RDM restored when recovered by vSphere Replication?" The answer is actually quite simple: It is recovered as a VMDK file at the target location. If you would like to see more details, keep reading...
A new customer technical case study on Skyscape's use of vSphere as their platform for deploying Hadoop in the cloud was published recently. Skyscape, based in the UK, deploys Hadoop clusters on demand for their UK Government customers from the company's public cloud infrastructure. These government departments leverage Hadoop to gather and analyze citizen services data using the analysis tools provided.
The newly provisioned Hadoop clusters are based on the Hortonworks HDP platform today, with plans in the works to support other Hadoop distributions in the future. The Skyscape engineers innovated in an impressive way on the Big Data Extensions (BDE) platform. The system provides the end user not only with a Hadoop cluster but also with an Ambari Server of their own to manage and monitor that cluster. This is all done on x86 servers with direct-attached storage. Skyscape also made use of the BDE REST APIs to achieve their goal. Shortly after releasing the Hadoop service to their community, they had five separate end-user customer groups signed up.
Two other very interesting and useful blogs on virtualization of big data appeared recently: one on Using Big Data Extensions 2.2 written by Julie Roman, a Technical Account Manager at VMware who has worked on big data projects, and another (from LinkedIn) on Big Data as a Service by George Trujillo, who is a VP at a Financial Services company. Both are very useful reads in their respective areas!
A customer recently asked me "How do I replace the 'external' SSL certificate of vCenter but still use VMCA in default mode?" Ever curious, I asked "Why?" His security team required that any externally facing management web pages have a custom certificate that chained up to the corporate PKI. But behind that, they were totally cool with using VMCA in default mode (with the self-generated root certificate) for things like ESXi servers and solution users.
A new section of the public-facing Business Critical Applications Homepage was introduced last week, called "Applications-as-a-Service". The section will aggregate collateral from all applications that are considered mission critical but do not necessarily fit within the more established and well-known application and database categories such as Oracle, SAP, and Microsoft. We chose the title "apps-as-a-service" because so many of these applications on the periphery of the mission critical space depend heavily on instant provisioning and subsequent reclaiming of resources. vSphere's virtualized infrastructure is a perfect fit for the flexibility required by High Performance Computing, Critical Big Data, and Database-as-a-Service architectures. Please stop by and read about how we are extending the classic definition and realm of BCA to include these modern applications that are so well suited for virtualized infrastructure. http://www.vmware.com/business-critical-apps/applications-as-a-service/index.html
Our next webcast in the vSphere 6 webcast series is all about increased efficiency of running your data center via automation. Brian Graf, VMware's PowerCLI guru, will discuss what's new in vSphere 6 for PowerCLI as well as show off some tips and tricks that will wow you.
This webcast takes place July 7 at 9 a.m. Pacific Time. Register for the webcast today!