You read it right. As of vSphere 6.0 Update 1, vSphere Update Manager (VUM) now has its interface fully integrated into the vSphere Web Client! What does this mean for you? You now truly have no excuse not to ditch the C# client and move directly to the Web Client!
VMworld 2015 Session Recap
I’m almost fully recovered from VMworld, which was probably one of the busiest and most enjoyable VMworld conferences I’ve had in my six-plus years at VMware, thanks to the interaction with attendees, customers, and partners. I’ll be doing a series of post-VMworld blogs focused on my SAP HANA Software-Defined Data Center sessions, but my first blog will cover the misconceptions associated with sizing SAP HANA databases on vSphere. There are many good reasons to upgrade to vSphere 6.0; going beyond the 1TB monster virtual machine limit of vSphere 5.5 when deploying SAP HANA databases is not necessarily one of them.
SAP HANA is no longer just an in-memory database; it is now a data management platform. It is NOT confined by the size of available memory, since SAP HANA warm data can be stored on disk in a columnar format and accessed transparently by applications.
What this means is that the 1TB monster virtual machine maximum in vSphere 5.5 is an artificial barrier. Multi-terabyte SAP HANA databases can be easily virtualized with vSphere 5.5 using Dynamic Tiering, Near-Line Storage, and other memory management techniques SAP has introduced to the SAP HANA Platform to optimize and reduce HANA’s in-memory footprint.
SAP HANA Dynamic Tiering (DT)
SAP HANA Dynamic Tiering was introduced last year in Support Package Stack (SPS) 09 for use with BW. Dynamic Tiering allows customers to seamlessly manage their disk-based SAP HANA “warm data” on an Extended Storage (ES) host, essentially placing data that does not need to be in-memory on disk. SAP’s guidance for the Dynamic Tiering option is that in SPS 09 up to 20% of the data can reside on the ES host, in SPS 10 up to 40%, and in the future up to 70%. So, in the future, the majority of SAP HANA data that was once in-memory can reside on disk.
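To put those thresholds in perspective, here is a small illustrative calculation. This is a minimal sketch: the 3 TB dataset is a made-up example, and the percentages are read as the fraction of total data allowed on the ES host, per the guidance above.

```python
def in_memory_footprint(total_tb, warm_fraction):
    """Return the in-memory portion of a HANA dataset when a given
    fraction of it is held as warm data on the Extended Storage host."""
    return total_tb * (1 - warm_fraction)

# Hypothetical 3 TB dataset under the DT guidance for each release level:
for release, warm in [("SPS 09", 0.20), ("SPS 10", 0.40), ("future", 0.70)]:
    print(f"{release}: {in_memory_footprint(3.0, warm):.1f} TB in memory")
```

Under this reading, a dataset that would have needed 3 TB of RAM all in-memory needs roughly 0.9 TB at the future 70% threshold, comfortably inside the vSphere 5.5 1TB virtual machine maximum.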
Near-Line Storage (NLS)
In addition to the reduction of the SAP HANA in-memory footprint that DT affords customers, Near-Line Storage should be considered as well. With NLS, data is moved out of the SAP HANA database proper onto disk and classified as “cold” because it is infrequently accessed; it is then available read-only. SAP provides examples showing NLS can reduce HANA’s in-memory requirements by several terabytes (link below).
It is also important to note that neither the DT Extended Storage host nor the NLS solution requires certified servers or storage. So not only has SAP given customers the ability to run SAP HANA in a reduced memory footprint, customers can run on standard x86 hardware as well.
There is a white paper authored by Priti Mishra, Staff Engineer, Performance Engineering, VMware, which is an excellent read for anyone considering the DT or NLS options: “Distributed Query Processing in SAP IQ on VMware vSphere and Virtual SAN.”
Importance of the VMware Software Defined Data Center
To its credit, SAP has taken a leadership role with HANA’s in-memory columnar database computing capabilities, and as HANA has evolved, the sizing and hardware requirements have evolved as well. Rapid change and evolving requirements are givens in technology; the VMware Software-Defined Data Center provides a flexible and agile architecture to react effectively to change by recasting compute, network, and storage resources in a centrally managed manner.
As a concrete example of the flexibility VMware’s platform provides, Figure 1 illustrates the evolution of SAP HANA from SPS 07 to SPS 09. Customers who would like to take advantage of SAP HANA’s multi-temperature data management techniques but initially deployed SAP HANA on SPS 07 (all in-memory) can, through virtualization, reclaim and recast memory, storage, and network resources in their virtual HANA landscape to reflect the latest architectural advances and memory management techniques in SPS 10.
Figure 1. SAP HANA Platform: Evolving Hardware Requirements
Since SAP HANA can now run in a reduced memory footprint, customers who licensed HANA to be all in-memory can use virtualization to reclaim memory and deploy additional virtual databases and make HANA pervasive in their landscapes.
As a general rule, in any rapidly changing environment the VMware Software-Defined Data Center provides an agile platform that can accommodate change and also guard against capital hardware investments that may not be necessary in the future (certified vs. standard x86 hardware). For that matter, the cloud is a good option for deploying any rapidly changing application or database, in places like VMware vCloud Air, Virtustream, or Secure-24, just to mention a few.
Virtual SAP HANA: Back on Track
After speaking with session attendees, customers, and partners at VMworld about SAP HANA’s multi-temperature management capabilities, I was happy to hear they will not be delaying their virtual HANA deployments because of the vSphere 6.0 certification roadmap timeline. As I said earlier, the 1TB monster virtual machine maximum in vSphere 5.5 is an artificial barrier. It really is a worthwhile exercise to take a closer look at the temperature of your data, the age of your data, and your access requirements in order to take full advantage of all the tools and features SAP provides its customers.
I was also encouraged to hear from many session attendees that my presentation at VMworld brought the SDDC from concept closer to reality by demonstrating actual mission-critical database/application use cases. My future post-VMworld blogs will focus on how I deconstructed the SAP HANA network requirements document and transformed it into a virtual network design using VMware NSX from my desktop. I’ll also cover Software-Defined Storage, essentially translating SAP’s multi-temperature storage options into VMware Virtual Volumes and storage containers.
“SAP HANA SPS 10 – SAP HANA Dynamic Tiering”; SAP Product Management
“Distributed Query Processing in SAP IQ on VMware vSphere and Virtual SAN”; Priti Mishra, Performance Engineering, VMware
Blog: “SAP HANA Dynamic Tiering and the VMware Software Defined Data Center”; Bob Goldsand
For those of you who attended my VMworld sessions with Salil Suri, we dropped a hint that there are things happening with open-vm-tools (OVT). We at VMware know that vSphere lifecycle management is a difficult task to take on and that updating VMware Tools across hundreds or thousands of virtual machines is an ever-increasing burden. There have been initiatives inside VMware to help mitigate the amount of work needed to orchestrate this task, which I think you will find very interesting and exciting.
An announcement we made during our repeat session “INF5123 – Managing vSphere Deployments and Upgrades – Part 2” at VMworld last week was that VMware Tools 10.0.0 has been released and is available for download in MyVMware.
Project Photon OS, the small-footprint container runtime from VMware that was first announced back in April, is making great progress. Several new enhancements to this open source initiative are especially interesting to vSphere administrators and those responsible for deployment and administration.
PXE Boot and Network Installation
Operating system ISO images may be the lingua franca of install media due to portability and ease of use across a wide range of environments, but a proper PXE boot infrastructure can be a very valuable enhancement to both lab test beds and production environments. Those who have invested the effort in PXE will be pleased to know that Photon OS TP2 can be easily booted from the network for quick installation. And by quick, we mean really quick! Photon OS is purpose-built for containers and does not include the extraneous packages found in general-purpose distributions. Administrators can expect an interactive installation to take less than a minute, and the majority of that time will likely be spent keying in a complex root password twice.
The source of the network installation is also flexible, ranging from an internal HTTP server to a public Internet-based repository for environments that prefer to keep on-site infrastructure minimal.
Manually installing guests in vSphere is fine for one-off efforts, troubleshooting, or other experiments, but to really operationalize any process, automation is necessary. Photon OS TP2 now supports scripted installation, which can be used with either the network or ISO installation options.
While it accomplishes the same goal as traditional kickstart, the Photon OS scripted install differs somewhat in implementation. The first and most obvious difference is the configuration file format. Instead of a plain text file with simple directives, Photon OS leverages JSON format. This is easy enough to edit by hand but also opens up the possibilities for programmatic manipulation, if desired. Another major difference is the range of directives – Photon OS is streamlined by nature and does not offer infinite control over aspects such as disk partition layout. There is, however, a means of running an arbitrary script at the end of the installation that should satisfy a great majority of customization requirements.
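As a sketch of that programmatic manipulation, the snippet below stamps out per-host install configurations from one JSON template. The key names used here (`hostname`, `password`, `postinstall`) are illustrative assumptions for the example, not the authoritative Photon OS schema; consult the project documentation for the exact directives.

```python
import json

# Illustrative scripted-install template. The keys below are assumptions
# made for this example, not the official Photon OS directive names.
template = {
    "hostname": "photon-node-00",
    "password": "ChangeMe!",  # replace per environment before writing out
    "postinstall": [
        "#!/bin/sh",
        "echo 'provisioned by scripted install' > /etc/motd",
    ],
}

# Programmatic manipulation: generate one config file per host from the
# single template, varying only the hostname.
for i in range(1, 4):
    host_cfg = dict(template, hostname=f"photon-node-{i:02d}")
    with open(f"ks-node-{i:02d}.json", "w") as f:
        json.dump(host_cfg, f, indent=2)
```

Because the format is plain JSON, the same loop could just as easily inject per-host network settings or extra postinstall steps, which is exactly the advantage over hand-editing a plain-text kickstart file.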
Guest OS Customization
In a vSphere environment, automated installation is great but it is typical to deploy new VMs from a template or Content Library – one of the new features of vSphere 6. Photon OS TP2 now has the necessary internals to support the guest OS customization that must occur after making a clone of a VM template. This is the procedure by which unique settings such as the hostname and network configuration are properly assigned. In TP2, all of the typical naming and addressing options are supported.
A new approach to OS deployment known as RPM-OSTree debuts in Photon OS TP2. This is an open source mechanism that combines aspects of image-based and package-based OS configuration, aimed at improving the consistency of deployed systems. Instead of updating packages on farms of individual servers through some means of configuration management, updates are made to a central reference system that is subsequently synchronized to clients.
While this approach may seem restrictive, it is actually very well aligned with a container runtime instance that needs just a small number of packages installed. Server instances become largely immutable, offering advantages in areas such as stability and security, and are not subject to the configuration drift found in a handcrafted environment.
Photons Everywhere You Look!
Photon OS is a great open source Linux container runtime, but it is also an important ingredient in other VMware cloud-native infrastructure stacks. For instance, vSphere Integrated Containers uses a “pico” edition of the Photon OS Linux kernel for the parent VM that is repeatedly forked with Instant Clone to run containers. This “pico” edition is smaller than is practical for many Photon OS environments, but when used as an embedded component of vSphere Integrated Containers, the image can be very slim. Photon OS is also present as a container runtime in the distributed control plane that makes up Photon Controller, part of Photon Platform, the new VMware infrastructure optimized for running cloud native apps at extreme scale.
For developers, Photon OS is included in the VMware AppCatalyst product as well as through HashiCorp Atlas in the form of a Vagrant box. Speaking of Vagrant, another important new feature of Photon OS TP2 is full support for shared folders (HGFS) when used with VMware desktop hypervisors.
Getting Photon OS TP2
Photon OS continues to be offered as an open source project available on GitHub, but for the most part that venue is geared toward developers from VMware as well as other collaborators working on the actual code. vSphere administrators will primarily be interested in the binary ISO release, which now comes in two different sizes, optimized for minimal or full installations.
Take a look at Project Photon OS Technical Preview 2 and explore containers on your trusted vSphere infrastructure today!
Cloud-native applications are gaining mindshare, especially containerized apps that align well with the requirements of DevOps workflows, microservices, and immutable infrastructure trends. Developers and infrastructure experts must soon identify the platform for their next-generation workloads. Wouldn’t it be great if existing investments in skills, infrastructure, and technology ecosystem continued to offer the best environment to run all applications — including containerized apps?
Acknowledging that a single architecture may not satisfy the sometimes mutually exclusive requirements for traditional and third platform applications, VMware is gearing up for two new approaches in support of containerized apps.
Whether integrating with existing vSphere infrastructure to run alongside other workloads, or building an entirely new footprint optimized for high scale and churn, VMware has all of the bases covered!
vSphere Integrated Containers – Technology Preview
For those customers needing to support developers that are in the initial stages of deconstructing monolithic enterprise applications through microservices, Agile development, and DevOps workflows, the vSphere Integrated Containers (VIC) approach will serve them well.
VIC takes the basic constructs specified by the Open Container Initiative and maps them to the vSphere environment, exposing a virtual container host that is compatible with standard Docker client tools but backed by a flexible pool of resources to accommodate apps of many sizes. In this model, VMs essentially become containers and other aspects, such as storage and network, are mapped to corresponding elements of the vSphere platform. A tiny variant of Photon OS forms the basis of the container runtime in VIC. Performance and density is optimized through the use of Instant Clone – a feature of vSphere 6 that enables a running VM to be rapidly forked so that child VMs consume only resources that change from the parent base image.
Based on Project Bonneville technology, this is the most seamless way to provide a Docker container runtime environment with several advantages over bare-metal Linux container architectures. Hardware-level isolation of individual containers paves the way for capabilities in VIC that cannot be matched through a shared Linux kernel model.
Inherent benefits of the vSphere platform such as administrator tool choices — from the rich Web Client GUI to the productivity-boosting PowerCLI – are further extended by comprehensive application management and monitoring capabilities in vSphere and vRealize. These resource management features deliver enhanced abilities to meet enterprise SLAs for compute, network, and storage.
Photon Platform – Technology Preview
For those customers with new initiatives that have advanced cloud-native requirements, VMware is introducing the Photon Platform. The platform is a collection of technologies that provide infrastructure with just the features needed to securely run containerized applications, controlled by a massively-scalable distributed management plane with an API-first design approach. Photon Platform benefits from the solid heritage of the VMware ESXi hypervisor but favors scale and speed over the rich management features offered by vSphere.
Photon Platform consists of the following components:
- Photon Machine
- Secure ESX Microvisor based on the proven core of VMware ESXi and optimized for container-based workloads
- Photon OS – the lightweight Linux container runtime designed to integrate with VMware infrastructure
- Photon Controller
- Distributed management plane provides massive scale and resiliency
- API/CLI for flexible integration with DevOps workflows
Photon Platform will also provide an extensible provisioning capability that allows administrators to quickly instantiate popular consumption surfaces for containerized applications such as Cloud Foundry, Kubernetes, or Mesos.
Scale, Speed, and Churn
For developers on the cutting edge of application architecture, a pattern is emerging that favors re-deployment over painstaking configuration management approaches often found in the traditional datacenter. This trend, sometimes called immutable infrastructure, forces deployments to be described programmatically and helps eliminate human bottlenecks and errors. Configuration changes can require many new VMs or containers to be deployed while old ones are rapidly destroyed, even further amplified when multiple development and test environments must also be delivered. These frequent deployments are automated, essentially eliminating the need for rich graphical interfaces and comprehensive wizards. Photon Platform foregoes full-featured centralized management tools, as they do not add the same value here that they do in traditional datacenter environments.
How to Choose
While VIC will quickly launch a container VM on demand, the magnitude would typically be in the tens, or possibly hundreds, at a time for an application. Photon Platform, on the other hand, is designed for environments where thousands or tens of thousands of containers are needed in a very short time – imagine how pleased your developers will be to learn that they can have a new Kubernetes endpoint with 1,000 nodes available for use within minutes — and another one a few minutes later!
Regardless of your cloud-native infrastructure needs, VMware will continue to be your trusted partner extending a strong record of innovation. Think of vSphere Integrated Containers as the enterprise-grade onramp to containerized applications, leveraging existing investments in technology and skillsets. Imagine Photon Platform as the next-generation infrastructure to support future initiatives that require incredible scale and churn for a range of popular container-centric consumption surfaces.
Both vSphere Integrated Containers and Photon Platform are currently Technology Previews. Please contact your VMware account team for more information or to learn about potential opportunities to participate in private betas.
Today VMware is revealing a Technology Preview of Project SkyScraper, a new set of hybrid cloud capabilities for VMware vSphere that will enable customers to confidently extend their data center to the public cloud, and vice versa, by seamlessly operating across boundaries while providing enterprise-level security and business continuity.
At VMworld, we will demonstrate live workload migration with Cross-Cloud vMotion and Content Sync between on-premises and vCloud Air. These features will complement VMware vCloud® Air™ Hybrid Cloud Manager™ – a free, downloadable solution for vSphere Web Client users, with optional fee-based capabilities. Hybrid Cloud Manager consolidates various capabilities such as workload migration, network extension and improved hybrid management features into one easy-to-use solution for managing workloads in vCloud Air from the vSphere Web Client.
Cross-Cloud vMotion is a new technology based on vSphere vMotion that allows customers to seamlessly migrate running virtual machines between their on-premises environments and vCloud Air. Cross-Cloud vMotion can be used via the vSphere Web Client, enabling rapid adoption with minimal training. The flexibility provided by this technology gives customers the ability to securely migrate virtual machines bi-directionally without compromising machine uptime; all vMotion guarantees are maintained.
Content Sync will allow customers to subscribe to an on-premises Content Library and seamlessly synchronize VM templates, vApps, ISOs, and scripts with their content catalog in vCloud Air with a single click. This feature will ensure consistency of content between on-premises environments and the cloud, eliminating an error-prone manual sync process.
Learn more about these two capabilities under Project Skyscraper by visiting the VMware booth at VMworld 2015.
No Application Left Behind
This year at VMworld 2015 US in San Francisco, over 40 sessions focused on Business Critical Applications and databases will be delivered by a broad cast of experts, including VMware product specialists, partners, customers, and end users (developers and data scientists).
One specific session that we would like to shine the spotlight on is VAPP6952-S, “VMware Project Capstone,” in which VMware, HP, and IBM will announce a collaborative effort to virtualize the most demanding applications. As a result of this partnership, we can now, more than ever, confidently claim that all applications and databases are candidates for virtualized infrastructure. This joint effort, which utilizes an HP Superdome X and an IBM FlashSystem with massive 120-vCPU VMs on vSphere 6 running Oracle 12c, constitutes the most significant advancement in the virtualization of Business Critical Applications in many years.
The session takes place Monday, August 31, at 5 PM. Join us to learn about this game-changing initiative.
VMware Project Capstone: a Collaboration of VMware, HP, and IBM, Driving Oracle to Soar Beyond the Clouds Using vSphere 6, an HP Superdome X, and an IBM FlashSystem®
Abstract: When three of the most historically significant and iconic technology companies join forces, even the sky is not the limit. VMware, HP, and IBM have collaborated on a project whose scope both eradicates the long-accepted boundaries of virtualization for extreme high performance and establishes a new approach to cooperative solution building.
The Superdome X is HP’s first Xeon-based Superdome, and when combined with an IBM FlashSystem® and virtualized with vSphere 6, the raw capabilities of this stack challenge the imagination and dispel previously held notions of performance limitations in virtualized environments. The Superdome X and the FlashSystem comprise a unique stack for all Business Critical Applications and databases. The most demanding environments can now be virtualized. It is no longer necessary for VMware to claim that 99.9% of all applications and databases are candidates for virtualized infrastructure, as that number is now 100%. This spotlight session features senior executive management from VMware, HP, and IBM and an introduction to the test results of this unprecedented collaborative effort. Attendees will come away with:
- The methodologies being used to drive the Superdome X and the IBM FlashSystem® to the far edges of known performance.
- The reasons behind the joint effort of these three renowned companies as well as the aspirations for this collaboration.
- An understanding of how this new landmark architecture can affect the industry and benefit customers who have extreme but broad performance requirements.
One of the major components released with vSphere 6 this year was the support for Virtual Volumes (VVOLS). VVOLS has been gaining momentum with storage vendors, who are enabling its capabilities in their arrays.
When virtualizing business databases, there are many critical concerns that need to be addressed, including:
- Database performance to meet strict SLAs
- Daily operations, e.g., backup and recovery completing in a set window
- Cutting down the time to clone/refresh databases from production
- Meeting different I/O characteristics and capabilities based on criticality
- The never-ending debate with DBAs: file systems vs. raw devices (VMFS vs. RDM)
VVOLS can offer solutions to mitigate these concerns, which impact the decision to virtualize business critical databases. VVOLS can help with the following:
1. Reduced backup windows for databases
2. The ability to take database-consistent backups
3. Reduced cloning times for multi-terabyte databases
4. Storage Policy Based Management capabilities
Details on the solutions available with VVOLS and their impact on virtualized Tier 1 business critical databases will be discussed in detail at VMworld 2015 in session STO4452:
STO4452 – Virtual Volumes (VVOLS): A Game Changer for Running Tier 1 Business Critical Databases
Session Date/Time: 08/31/2015 03:30 PM – 04:30 PM
More and more data is generated every day, and the same is true for “virtual” content, such as VM templates, vApps, ISO images, and scripts. The industry is paying more attention to how to streamline maintenance procedures, distribute the content, and deal with emergencies.
Now, imagine that you work at the head office in San Francisco. Your company recently opened three new branches in Denver, Atlanta, and Minneapolis and asked you to provide them with all required software. And of course, you have a VM that has all they need. So, what do you do next?
Content Library is a new feature in vSphere 6.0 that will help you to solve this and many other problems. Let me demonstrate how.