
Monthly Archives: November 2013

SDDC + SAP = CapEx/OpEx Savings

By Girish Manmadkar, an SAP Virtualization Architect at VMware

Earlier this month, my colleague David Gallant wrote about architecting a software-defined data center for SAP and other business-critical applications. I’d like to further explore how SAP fits into the software-defined data center (SDDC) and, specifically, how to optimize it for CapEx and OpEx savings.

A key point to remember is that the SDDC is not a single technology you purchase and install; it is a use case, a strategy, a shift in mindset. It is also a journey that unfolds in stages and should be planned accordingly. I've outlined the three foundational stages below.

SDDC 1.0

Most of the customers that I work with are well along in this stage, moving their current non-x86 SAP workloads toward a VMware-based x86 environment.

During this process, numerous milestones can be delivered to the business, most notably an immediate reduction in CapEx. This benefit is achieved by migrating non-x86 and existing physical x86 workloads to a virtualized x86 platform. Understandably, customers tend to approach this transition with caution, so we often start with the low-hanging fruit: non-production and/or development SAP systems.

The next step is to introduce automation, which comes into play at two layers: the infrastructure layer, using VMware vCloud Automation Center and vCenter Orchestrator, and the application layer, using SAP's Landscape Virtualization Management (LVM).

During this phase it is best to implement vSphere features such as Auto Deploy, host profiles, and OS templates in order to automate vSphere host and virtual machine provisioning in the environment.
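
As a rough illustration of the template-driven piece of that provisioning, here is a minimal Python sketch using the open-source pyVmomi library to clone a new virtual machine from a vSphere template. The vCenter address, credentials, template name, cluster name, and VM name are hypothetical placeholders, and a real deployment would typically drive this through vCloud Automation Center and Orchestrator rather than calling the API directly.

```python
# Minimal sketch: clone a VM from a vSphere template with pyVmomi.
# All names and credentials below are hypothetical placeholders.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Depending on the environment, SmartConnect may need an sslContext for
# self-signed vCenter certificates.
si = SmartConnect(host="vcenter.example.com", user="svc-provision", pwd="secret")
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    """Return the first inventory object of the given type with the given name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.Destroy()

template = find_by_name(vim.VirtualMachine, "sap-appserver-template")
cluster = find_by_name(vim.ClusterComputeResource, "SAP-Cluster")

# Place the clone in the cluster's root resource pool, in the same folder as the template.
spec = vim.vm.CloneSpec(location=vim.vm.RelocateSpec(pool=cluster.resourcePool),
                        powerOn=True)

# The clone runs as an asynchronous vCenter task; a real workflow would wait on it.
task = template.Clone(folder=template.parent, name="sap-app-01", spec=spec)

Disconnect(si)
```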

It is often a good idea at this time to start a parallel project around storage. You can work with your storage and backup teams to enhance current architectures by enabling storage technologies such as deduplication, vSphere Storage I/O Control, and any relevant storage array plug-ins.

We also recommend minimizing agents in the guest operating system, such as those used for backup and/or anti-virus. The team should start putting together a new architecture that moves such agents from the guest OS to the vSphere hosts, reducing complexity and improving performance. The storage and network teams should also plan an architecture that will support a virtualized disaster recovery solution. By planning ahead now, teams can avoid rework later.

During this phase, the team not only migrates SAP application servers to the vSphere platform but also shows business value with CapEx reductions and value-added flexibility to scale out SAP application server capacity on demand.

SDDC 2.0

Once this first stage goes into the operations cycle, it lays the groundwork for various aspects of the SDDC’s second stage. The next shift is toward a converged datacenter or common virtualization framework to deploy a software-defined lifecycle for SAP. This allows better monitoring, migration to the cloud, chargeback, and security.

This is also the phase where you want to virtualize your SAP central instances, or ASCS instances, and database servers. The value here is removing the reliance on complex physical clusters by transitioning to VMware's high-availability features, including Fault Tolerance (FT) where applicable; whether FT is used for the ASCS is determined by the SAP sizing exercise and by the business's SLAs.

SDDC 3.0

Once the SDDC 2.0 is in production, it is a good time to start defining other aspects of SDDC, such as Infrastructure-as-a-Service, Platform-as-a-Service, Storage-as-a-Service, and Disaster-Recovery-as-a-Service.

Keep an eye out for our follow-up post fleshing out the processes and benefits of these later stages.


Girish Manmadkar is a veteran VMware SAP Virtualization Architect with extensive knowledge and hands-on experience with various SAP and VMware products, including various databases. He focuses on SAP migrations, architecture designs, and implementation, including disaster recovery.

Help Your Mirage Implementation Soar with a Pilot Program

By John Kramer, Consultant at VMware

I’ve recently been working on a customer engagement, getting them ready to deploy VMware’s Horizon Mirage to 12,000 endpoints worldwide. The main use case this customer had in mind was backup and recovery of existing endpoints and new endpoint provisioning.

During the initial phase of a Mirage project, the unique data from each endpoint is “centralized” into the data center to provide protection for the user’s data and application personality. With so many endpoints to protect, it’s key that we test our assumptions with a pilot program.

Getting off the ground

We begin by building the pilot infrastructure as close to the production configuration as possible. In this case, that included six Mirage Cluster servers, one Mirage Management server, an EMC Isilon Storage array, and the customer’s existing F5 load balancer. (Note that each Mirage implementation will be unique, as will combined use cases—migration, image deployment, and backup/recovery, for example. More variables and considerations would come into play if more than backup/recovery was needed.)

For this particular pilot we selected 200 endpoints from as many branch office locations as possible. I would normally recommend a much smaller pilot with 50–100 endpoints, but this customer needed to centralize global endpoints to a single US-based datacenter, so we needed a larger data set to test the various network configurations worldwide.

While implementing a single, centralized Mirage cluster is ideal, there are situations in which supporting multiple Mirage clusters is the best solution. These include when data privacy regulations require data to be kept in specific countries or regions, when wide area network bandwidth is insufficient to support Mirage operations, and when the customer requires separation of Mirage management responsibilities for different endpoints.

Once the infrastructure is set up, we build a functional test plan, which we use to validate the decisions made when the customer purchased the servers, storage, and infrastructure to support Mirage. We then extrapolate the pilot data to make sure there will be enough bandwidth, time, and disk space to support the full centralization.

A pilot phase can also help with smaller roll-outs by ensuring resources are utilized as efficiently as possible. Here are the key components of our Mirage pilot.

Average CVD Size

The centralized virtual desktop (CVD) size is the amount of unique data, after deduplication is taken into account, that has to go over the network for an endpoint to be considered "protected" (Mirage only centralizes data that isn't already duplicated on another endpoint). By multiplying the average CVD size by the number of endpoints in an office, we can estimate the amount of data that will need to be centralized from each site.
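
As a quick example of that arithmetic (the numbers below are made up for illustration, not taken from this customer's pilot):

```python
# Hypothetical per-site centralization estimate; all figures are illustrative.
avg_cvd_size_gb = 15      # average unique data per endpoint after deduplication
endpoints_at_site = 40    # endpoints in one branch office

data_to_centralize_gb = avg_cvd_size_gb * endpoints_at_site
print(f"~{data_to_centralize_gb} GB must be centralized from this site")  # ~600 GB
```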

Network Bandwidth

The next thing we need to know is how much bandwidth is available from each site to transfer that data. That, along with the average CVD size, allows us to calculate the total amount of time for centralization, as well as the amount of network bandwidth required to complete centralization. This helps us determine if expectations need to be reset with the customer and/or if we need to implement some form of Quality of Service (QoS) to rate-limit Mirage traffic over the network so it does not compete with other high-priority traffic.
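
Sketching that calculation in the same style, again with made-up numbers and an assumed QoS cap on how much of the link Mirage may consume:

```python
# Rough centralization-time estimate for one site; all figures are illustrative.
data_to_centralize_gb = 600   # from the CVD estimate above
link_mbps = 20                # WAN uplink at the branch office
mirage_share = 0.5            # assumed QoS cap: fraction of the link Mirage may use

usable_mbps = link_mbps * mirage_share
hours = data_to_centralize_gb * 8 * 1024 / (usable_mbps * 3600)
print(f"Centralization would take roughly {hours:.0f} hours at {usable_mbps:.0f} Mbps")
```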

Storage Sizing

The average CVD size also helps us calculate how much total storage space will be necessary (average CVD size times the number of endpoints that need to be backed up). We also make sure there are sufficient storage array IOPS for Mirage to utilize during the centralization phase.
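
A back-of-the-envelope version of that sizing, with an assumed growth buffer added on top (all numbers illustrative):

```python
# Illustrative storage estimate for the Mirage volumes.
avg_cvd_size_gb = 15
total_endpoints = 12000
growth_headroom = 1.2     # assumed 20% buffer for growth and overhead

required_tb = avg_cvd_size_gb * total_endpoints * growth_headroom / 1024
print(f"Plan for roughly {required_tb:.0f} TB of usable capacity")  # ~211 TB
```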

Communication

The early stage of the pilot is also an important opportunity to bring together the various teams that will be affected (server, storage, network, desktop, helpdesk) and start discussions about gathering performance stats from each group during the pilot to validate the planned architecture. This way we make sure everyone understands how the roll-out will affect their team and what their roles and responsibilities will be. Good communication with these groups is important throughout the pilot to ensure you're gathering the data needed to validate your design or make adjustments when necessary.

After testing the key components during the centralization phase, we work with the customer to build a base layer from which new endpoints will be provisioned. A new endpoint will usually arrive from the manufacturer with a corporate image that’s out of date. When that new endpoint comes onto the network, we add it to the Mirage system, which scans the system, determines which updates are missing, then uses the base layer to send the updates to the new endpoint being provisioned.

This process also makes use of a branch reflector, if one is available on the LAN. Branch reflectors reduce the amount of data that needs to be sent to a remote location for base layer provisioning and update operations.

One big advantage of Mirage is that it's designed to be as hands-off as possible, saving IT time. A great example happened the week after we rolled out the pilot to the customer's Indianapolis office. An employee's hard drive failed, and it just happened to be one that was included in the pilot. We were able to restore that endpoint, including all of the user's applications and data, within the day, as if nothing had ever happened.

Under their old system, it would have taken much longer than a day to get the end user's applications and data recovered, assuming they even had a valid backup. Not surprisingly, the customer and the employee were very happy with the results, and we were happy that Mirage so clearly proved its value during the first week of the pilot!

This highlights one final benefit of a pilot program: It gives you the opportunity to reiterate the value of the investment and strengthen buy-in even further. So whether you are working on a project involving 2,000, 10,000, or 20,000 endpoints, I recommend starting with a pilot. It will save you time, effort, and money in the long run.


John Kramer is a Consultant for VMware focusing on End-User-Computing (EUC) solutions. He works in the field providing real-world design guidance and hands-on implementation skills for VMware Horizon Mirage, Horizon View, and Horizon Workspace solutions for Fortune 500 businesses, government entities, and academic institutions across the United States and Canada.

Quickly Calculate Bandwidth Requirements with New vSphere ‘fling’

By Sunny Dua, Senior Technology Consultant at VMware 

Across a number of my recent consulting engagements, I have seen increasing demand for host-based replication solutions. In a few recent projects, I have implemented VMware Site Recovery Manager in combination with VMware vSphere Replication.

I have written about vSphere Replication (VR) in the past, and I am not surprised that a number of VMware customers are shifting focus from storage-based replication to host-based replication because of the cost benefits and flexibility that come with such a solution.

In my projects I started by replicating simple web servers to the DR site using VR; now customers are discussing replicating database servers, Exchange, and other critical workloads using vSphere Replication. With out-of-the-box integration with a solution such as VMware Site Recovery Manager, building a DR environment for your virtualized datacenter has become extremely simple and cost-effective.

The configuration of the replication appliance and SRM is as easy as clicking NEXT, NEXT, FINISH; however, the most common challenge has been estimating the bandwidth required from the Protected Site to the Recovery Site for the replication of workloads. One of the most commonly asked questions is: “How do I calculate the bandwidth requirements for replication?” Continue reading
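
As a rough illustration of the kind of math this question involves (a generic back-of-the-envelope approach with hypothetical numbers, not necessarily the method the fling uses), the link must be able to ship the data that changes within each RPO window before the next window begins:

```python
# Illustrative vSphere Replication bandwidth estimate; all figures are hypothetical.
changed_gb_per_day = 50        # estimated daily change rate of the protected VMs
rpo_minutes = 60               # configured RPO
busiest_window_share = 0.10    # assumed share of daily change in the busiest window

gb_in_window = changed_gb_per_day * busiest_window_share
required_mbps = gb_in_window * 8 * 1024 / (rpo_minutes * 60)
print(f"Roughly {required_mbps:.1f} Mbps to meet a {rpo_minutes}-minute RPO")
```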

The SDDC Seems Cool … But What Do I Do with It?

By David Gallant, VMware Professional Services Consultant

Lately I’ve been receiving requests from customers to talk to them about the software-defined data center (SDDC). So I start to explain software-defined networking, software-defined storage, automated provisioning, and self-service portals.

And that’s when I notice the customer looking excited, but also slightly confused.

Last week at SAP TechEd 2013, I was in the middle of just such a talk when I decided to stop and ask the customer why he looked puzzled.

His response? “That’s great, but what do I do with all that SDDC stuff?”

That’s when the light bulb came on. He was right to question me—why build a software-defined data center if you have no clue what you’re going to do with it?

To really harvest the investment in your SDDC, you need to be building toward a specific set of goals. We don't build data centers without a purpose, and for the SDDC that purpose, as it always has been, is the application.

In most cases the best data centers have been purpose-designed and built around the organization's business-critical applications: SAP, Oracle, or Microsoft applications, for instance.

I’ll concentrate for now on SAP—if you can architect an SDDC for SAP, you can roll those concepts over to pretty much any other application. Continue reading