The 3rd VMware Elite Database Workshop is set to start at 7 AM on Tuesday, April 21. The program is an invitation-only event for some of the world's top SQL Server experts, in which VMware selects groups of database specialists across various disciplines, including Oracle and SQL Server, and soon Big Data and other data-focused areas. The experience opens with executive welcomes from both VMware and the storage partner helping to deliver the event. In this case the partner is Tintri, which is providing an 880 high-end flash array for the customized labs that will challenge both the creativity and the technical expertise of the workshop attendees. The bulk of the three days will be filled with presentations and open discussion with VMware engineers and product specialists. Well-known industry luminaries such as Richard McDougal and Jeff Buell will join the rich lineup over these three days. The attendee list includes: Steve Jones, Tim Ford, Arnie Rowland, Wendy Pastrick, Eddie Wurech, Robert Davis, Sean McCown, Allan Hirt, Geoff Hiten, Randy Knight, Chris Shaw, Jason Strate, Brandon Leach, Shawn Meyers, and Melissa Connors. Alumni guests include David Klee, Mike Corey, and Denny Cherry.
When you think of VMware, virtualization clearly jumps to mind. But if you take a step back, virtualization is really a means to an end. IT pros don't earn their salary because they run virtual machines; rather, VMs support application services that are essential to the business, ultimately contributing to the bottom line. VMware is focused on providing the best place to run any application: from LAMP stacks to business-critical workloads to big data analytics, vSphere can handle it all.
Two open source projects were just announced by the Cloud-Native Apps group: Project Photon and Project Lightwave. Both of these projects will be foundational elements for running Linux containers and supporting next-generation application architectures. This marked a big milestone in the lifecycle of VMware Cloud-Native Apps, and at first glance may seem to be a lot more relevant to application developers than the traditional vSphere audience, but there really is a great tie-in to the Software-Defined Data Center.
If you’re a vSphere administrator, an important part of your role is supporting the developers that create the apps that run on your infrastructure. There is a shift underway with developers right now – moving from a traditional waterfall model to agile, continuous integration. For a specific example of the change in mindset from previous software development processes, check out The Twelve-Factor App to see why the container enthusiasm starts to really make sense.
Today, customers trust their Software-Defined Data Center based on VMware infrastructure for any app. It would be a shame if a new platform for applications came along and brought back the silos of yesteryear. This is why vSphere admins should care about next-generation applications and the corresponding infrastructure. The container runtime becomes another essential component of the infrastructure, and it should be integrated for seamless operation. With Photon, VMware is going to make it easy to run containers alongside all of the other workloads – no silos here!
Photon is going to be available in places where developers expect to find it. For example, many developers use HashiCorp Vagrant as an easy means of pulling down standardized VM images from a central repository. A Photon image will be available there and elsewhere, enabling the same container runtime on laptops, in the datacenter, and in public clouds.
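For developers already using Vagrant, pulling down a Photon image could look something like the following minimal Vagrantfile. Note that the box name `vmware/photon` and the settings shown are illustrative assumptions; check the public Vagrant box registry for the current published name before using it.

```ruby
# Minimal Vagrantfile sketch for booting a Photon VM via Vagrant.
# The box name below is an assumption -- verify it in the box registry.
Vagrant.configure("2") do |config|
  config.vm.box = "vmware/photon"   # assumed box name for the Photon image
  config.vm.hostname = "photon-dev" # hypothetical hostname for local testing
end
```

With this file in place, `vagrant up` downloads the box and boots the VM, and `vagrant ssh` drops you into the same container runtime you would get in the datacenter or in a public cloud.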
Administrators will like the fact that Photon has a small footprint because it is not weighed down with all of the packages typically found on a Linux system, and one can draw parallels with the VMware ESXi thin hypervisor. Less is more when it comes to infrastructure – fewer patches, less administration, and improved SLAs are among the key benefits.
The companion open source project – Lightwave – is an authorization and authentication platform with origins in the vSphere platform. It provides multi-master replication for scalable HA and flexible topology choices to accommodate any architecture.
There is great integration between Lightwave and Photon. In fact, Lightwave is designed to actually run directly on Photon instances – no general-purpose OS needed. Take a look at this demo video where a new Lightwave domain is created, Photon clients are joined to the domain, and ssh logins are authenticated against directory credentials, eliminating the need to manage local user accounts.
Linux containers are all the rage right now, but it’s not a zero-sum proposition. Containers run great on vSphere and VMware is investing accordingly. VMware SDDC administrators can be confident that their platform is, and will be, the best for any application – with the security, manageability, and governance that enterprises need.
vSphere Replication is an asynchronous, host-based replication feature that is included with vSphere Essentials Plus Kit and higher editions. It can be used as a standalone solution for simple, storage-agnostic, cost-effective virtual machine replication. vSphere Replication also serves as a replication component for VMware vCenter Site Recovery Manager (SRM) and VMware vCloud Air Disaster Recovery. When replication is configured for a powered on virtual machine, vSphere Replication starts replicating the files that make up the virtual machine from the source location to the target location. A question that comes up sometimes is “How much storage will be consumed by the virtual machine at the target location?” As with many questions like this, the short answer is “It depends.”
The guide is being provided as an Excel spreadsheet. I'm also making a PDF available for easier viewing. In addition, I've included an Excel spreadsheet of the guidelines that have moved out of the guide and into the documentation. THIS IS INCOMPLETE. We are still working on some of that content. (That's why this is a beta!)
Please read the blog post on the changes that have been made to the guide. There are LOTS of changes, and the post explains them.
Automation technologies are a fundamental dependency for all aspects of the Software-Defined Data Center. The use of automation not only increases the overall productivity of the software-defined data center, but it can also accelerate the adoption of today's modern operating models.
In recent years, several of the core pillars of the software-defined data center have seen a great deal of improvement with the help of automation. The same can't be said about storage. The lack of management flexibility and capable automation frameworks has kept storage infrastructures from delivering operational value and efficiencies similar to those available with the compute and network pillars.
VMware’s software-defined storage technologies and its storage policy-based management framework (SPBM) deliver the missing piece of the puzzle for storage infrastructure in the software-defined data center.
Challenges – Old School
Traditional data centers are operated, managed, and consumed in models that are mapped to silos and statically defined infrastructure resources. However, businesses lean heavily on their technology investments to be agile and responsive in order to gain a competitive edge or reduce their overhead expenses. As such, there is a fundamental shift taking place within the IT industry, a shift that aims to change the way in which data centers are operated, managed, and consumed.
In today's hardware-defined data centers, external storage arrays drive the de facto standards for how storage is consumed. The limits of the data center are tied to the limits of the infrastructure. The consumers of storage resources face a number of issues, since traditional storage systems are primarily deployed in silos, each with unique features and a unique management model. They each have their own constructs for delivering storage (LUNs, volumes), regularly handled through separate workflows and even different teams. As a result, delivering new resources and services takes significantly longer.
Critical data services are often tied to specific arrays, obstructing standardized and aligned approaches across multiple storage types. Overall, creating a consistent operational model across multiple storage systems remains a challenge. Because storage capabilities are tied to static definitions, there are no performance guarantees, multi-tenant workloads risk impacting one another, all disks are treated the same, and OPEX is tied to the highest common denominator.
I saw a question the other day that asked “Can someone explain what the big deal is about Virtual Volumes?” A fair question.
The shortest, easiest answer is that VVols offer per-VM management of storage that helps deliver a software defined datacenter.
That, however, is a pretty big statement that requires some unpacking. Rawlinson has done a great job of showcasing Virtual Volumes already, and has talked about how it simplifies storage management, puts the VMs in charge of their own storage, and gives us more fine-grained control over VM storage. I myself will also dive into some detail on the technical capabilities in the future, but first let’s take a broader look at why this really is an important shift in the way we do VM storage.
On Wednesday April 15 Hitachi is hosting a one hour technical webinar event to discuss how Hitachi Storage for VMware Virtual Volumes can bring customers on a reliable enterprise journey to a software-defined, policy-controlled data center.
The webinar covers more than just the technical aspects of Virtual Volumes; it also addresses the operational value and efficiency Hitachi delivers with its unique implementation, along with the technical and implementation details of Hitachi's multi-protocol support and the storage capabilities offered to virtual machines and their individual objects.
Webinar attendees will learn about:
The simplification of storage related operations for vSphere administrators
The increase in manageability of the vSphere infrastructure, and the greater levels of agility and efficiency driven by a policy-based management and operating model.
The event will be led by Paul Morrissey – Director, Product Management, Storage, Virtualization & Application, Hitachi Data Systems – and myself.
Register for the event using the link below, and don't miss it:
For future updates on Virtual SAN (VSAN), vSphere Virtual Volumes (VVols) and other Software-defined Storage technologies, as well as vSphere + OpenStack be sure to follow me on Twitter: @PunchingClouds
SRM supports two different replication technologies: Storage Array (Array-Based) Replication and vSphere Replication. One of the key decisions when implementing SRM is which technology to use, and for which VMs. The two technologies can be used together in an SRM environment, though not to protect the same VM. Given that, what are the differences, and why would you use one over the other? The table below provides the answers you need:
The Oracle licensing webinar was broadcast on March 19, 2015, and the replay is available at the link below. The webinar, hosted by Database Trends and Applications (DBTA), was delivered as a collaborative effort between VMware and a number of VMware partners. Don Sullivan moderated the event, which included individual presentations of approximately 15 minutes each, followed by a robust question-and-answer session. The presenters, who delivered their own customized corporate presentations, included the world-renowned founder of House of Brick, Dave Welch, followed by License Consulting founder Daniel Hesselink. The Independent Oracle Users Group (IOUG) also joined us on the eve of their flagship event, Collaborate, taking place in April at Mandalay Bay in Las Vegas. Dan Young, the President of the new IOUG VMware Special Interest Group (SIG), who is also the Director of Database Services at Indiana University, joined the conversation as well. An article describing the event has been included, and the contact information for these partners is listed below.
The "Understanding Oracle Licensing, Certification and Support" guide, updated in March 2015, is included in this list along with the results of the 2014 IOUG membership survey and a link to sign up for the VMware IOUG SIG.
My data center kit has been using too much energy.
Having kit at my disposal is great, but I have been wasting this resource when it's not required by my workloads. And if there's one thing I try to be conscious of, it's energy consumption. Just ask my kids, who I chase from room to room turning off lights, screens, and the lot when they aren't using them.
But why not in the data center? Did you know that hosts typically use 60%+ of their peak power when idle?
Until recently, I had overlooked configuring my kit to use the vSphere Distributed Power Management (“DPM”) feature to manage power consumption and save energy.
With the release of vSphere 6.0 it’s a good time to review and take deeper look into the capabilities and benefits of this feature.
What is VMware vSphere Distributed Power Management?
VMware vSphere Distributed Power Management is a feature included with vSphere Enterprise and Enterprise Plus editions that dynamically optimizes cluster power consumption based on workload demands. When host CPU and memory resources are lightly used, DPM recommends evacuating workloads and powering off ESXi hosts. When workload CPU or memory utilization increases, or additional host resources are required, DPM powers hosts back on to meet the demands of HA and other workload-specific constraints by running vSphere Distributed Resource Scheduler ("DRS") in a "what-if" mode. DRS ensures that host power recommendations are consistent with the constraints and resources being managed by the cluster.
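To make the "what-if" idea concrete, here is a toy sketch of a DPM-style host-count recommendation. The utilization thresholds and the single-resource model are illustrative assumptions for exposition; they are not VMware's actual algorithm or default values.

```python
import math

def dpm_recommendation(demand, host_capacity, powered_on, high=0.81, low=0.45):
    """Recommend a host-count change: positive = power on, negative = power off.

    demand        -- aggregate workload demand for one resource (e.g. CPU MHz)
    host_capacity -- usable capacity of a single host
    powered_on    -- number of hosts currently powered on
    high, low     -- illustrative utilization thresholds (assumptions)
    """
    utilization = demand / (powered_on * host_capacity)
    # Smallest host count that keeps utilization at or under the high threshold.
    minimum_hosts = max(1, math.ceil(demand / (high * host_capacity)))
    if utilization > high:
        # Under-provisioned: power on enough hosts to absorb the demand.
        return minimum_hosts - powered_on
    if utilization < low:
        # Over-provisioned: power off hosts, but never past the high threshold.
        return minimum_hosts - powered_on  # negative (or zero)
    return 0  # within the comfort band: no change
```

For example, ten 100-unit hosts carrying 900 units of demand are over the high threshold, so the sketch recommends powering on two more; the same cluster carrying only 200 units recommends powering seven off. Real DPM evaluates CPU and memory separately and also weighs vMotion cost and host selection, which this sketch omits.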
Beneath the covers there are key challenges that DPM addresses to enable effective power-savings capabilities:
Accurately Assessing Workload Resource Demand
Avoiding Frequent Power-on/Power-off of Host and Excessive vMotion Operations
Rapid Response to Workload Demand and Performance Requirements
Appropriate Host Selection for Power-on/Power-Off within Tolerable Host Utilization Ratios
Intelligent Redistribution of Workloads After Host Power-on/Power-Off
Once DPM determines the number of hosts needed to satisfy all workloads and relevant constraints, and DRS has distributed virtual machines across hosts to maintain resource allocation constraints and objectives, each powered-on host is free to handle its own host-level power management.
Hosts Entering and Exiting Standby
When a host is powered off by DPM, it is marked in vCenter Server as being in "standby" mode, indicating that it is powered off but available to be powered on when required. The host icon is updated with a crescent-moon overlay symbolizing a "sleeping" state.
DPM can awaken hosts from the standby mode using one of three power management options:
Intelligent Platform Management Interface (IPMI)
Hewlett Packard Integrated Lights-Out (iLO), or
Wake-on-LAN (WOL)
Each protocol requires its own hardware support and configuration. If a host does not support any of these protocols it cannot be put into standby by DPM. If a host supports multiple protocols, they are used in the following order: IPMI, iLO, WOL. This article is focused on the use of the first two.
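The protocol-selection rule above can be sketched as a small helper. This is an illustrative model of the stated preference order only; the supported-protocol names are hypothetical labels, not an actual vSphere API.

```python
# Preference order stated in the text: IPMI first, then iLO, then WOL.
PREFERENCE = ["IPMI", "iLO", "WOL"]

def wake_protocol(supported):
    """Return the protocol DPM would use to wake a host, or None if the host
    supports none of them (and so cannot be put into standby by DPM)."""
    for protocol in PREFERENCE:
        if protocol in supported:
            return protocol
    return None
```

So a host advertising both iLO and WOL would be woken via iLO, while a host with no supported protocol is simply never placed into standby.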