Author Archives: VMware Professional Services

Slowing Down for Strategy Speeds Up the Move to Mobile

By Gary Osborne, Senior Solutions Product Manager – End User Computing

Today’s workers are more reliant on—and demanding of—mobility than ever before. They need personalized desktops that follow them from work to home. They need to connect from multiple devices through rich application interfaces. The challenge for IT organizations is that bring-your-own-device (BYOD) initiatives are often wrapped in, and encumbered by, tactical issues—perpetually pushing strategic discussion to the back burner.

Working hard, but standing still

By focusing on a tactical approach, many IT organizations find themselves on the BYOD treadmill—they get a lot of exercise but never really get anywhere! Developing an overarching strategy before setting out on the journey provides much-needed guidance and positioning along the way. This isn’t a step-by-step plan, but rather a clear vision of the business challenges being addressed and the value being delivered back to the organization. This vision, including direction, a clear definition of phased success, and defined checkpoints along the way, should be articulated and understood throughout the organization.

Getting your organization to buy into the importance of an overarching strategy can be a tough sell, especially if near-term goals are looming. But it will pay off many times over. According to a recent study by IBM, “Those IT organizations that treat mobile as both a high priority and a strategic issue are much more likely to experience the benefits that mobile can bring to an organization.” The July report, Putting Mobile First: Best Practices of Mobile Technology Leaders, reveals a strong correlation between mobile success and establishing a strategic mobile vision, along with external help to implement it.

Take the time – but not too much

The IT organizations that achieve measurable success with their VDI and BYOD initiatives have found the right balance between spending too little time developing a sound strategy and the all-too-common “analysis paralysis” of taking too much. We have worked with customers that found that balance in part by keeping a clear focus on the business value BYOD solutions can provide and an eye toward what they need to achieve and deliver to the business to declare success.

Jumping straight to tactical activities and placing orders for “guesstimated” infrastructure without knowing the strategy that will support it are two of the most common pitfalls I see lead to failed or stalled BYOD initiatives. By focusing on the value mobility can deliver to the business rather than getting bogged down in the technical details, a strategic exercise can be completed swiftly and deliberately, keeping pace with the speed of change in today’s mobile landscape.


Gary Osborne is an IT industry veteran and is part of the VMware Global Professional Services engineering team responsible for the End User Computing Services Portfolio. Prior to his current role, he provided field leadership for the VMware End User Computing Professional Services practice for the Americas.

IT Must Overcome Internal Resistance to Maintain Platform ROI

By Samuel Denton-Giles, Business Solutions Architect at VMware

When deciding to adopt a new architecture or technology vision, IT organizations spend a lot of time and resources to make sure the solution fits the business’s needs, that it is cost effective, and that it can scale as the business grows. Unfortunately, I often see those visions chipped away at by daily organizational pressures, ultimately undoing the potential benefits bestowed by the new strategy.

Does this sound familiar? Someone senior from a line of business or a large vendor decides they don’t want to use a new platform. Because IT doesn’t have as much clout, they are overridden and a parallel technology is implemented, sabotaging economies of scale and the ROI for the new platform.

In order to demonstrate the value of its investments, especially in large-scale implementations such as the software-defined data center or private cloud, IT will need a clear strategy to ensure new technology is adopted and used as intended. Below, I outline two strategies I’ve seen IT organizations successfully utilize to rein in architectural drift.

Find a Champion

The best way to maintain agility while making sure new platforms are adopted as intended is to find a champion with the influence to successfully defend IT decisions. This champion should be a line-of-business leader or other executive who can explain the reasoning behind the platform and the costs associated with changing it.

The first challenge of this approach is finding the right champion. The second is to shift the way IT relates to stakeholders generally, promoting earlier and more frequent socialization of ideas and solutions. The IT organization at an oil and gas company I work with is a good example of the benefits of both.

The head of one of the company’s four main divisions cared about cloud and his business unit helped drive many of the technology requirements. Although the group’s CIO was already very active with the business, he tended to talk with LOBs but then work in isolation to deliver a finished technology solution. Because the changes required for a cloud implementation were so broad, the investment so large, and the impact on the business so high, the IT team decided to involve their champion on the business side from the outset to help shape strategy.

This approach not only ensured the platform met the business’s needs, but also strengthened IT’s position through a champion who could explain exactly why decisions had been made and help persuade the business to follow them.

Productize the Platform

The second approach places a high priority on the stability of the platform. In this case, the platform is treated like a product, with set updates at defined intervals (say, every six months). I have seen this approach work well at a large financial institution where the IT organization decided to certify every version of software that made up the platform, from the applications at the top to the firmware in a physical box at the bottom. Any feature request goes through a rigorous testing cycle, and then can be assimilated with all concurrent changes in the platform’s next “product release.”

The IT organization’s biggest hurdle was convincing the business that the review process was important, despite their frustration with longer timelines. It helped that they had an executive mandate to make stability and security the top priorities. In addition, by providing a very specific process for change requests, IT eventually weaned the business off continuous, knee-jerk changes.

To help maintain agility within this controlled environment, the IT organization also implemented a self-service, cloud-provisioning function, allowing them to rapidly deploy services within the capabilities of the platform. And if the business’s needs can’t be met, as a last resort they can step outside the process, but only with executive sign-off and non-production development until testing is complete.

Neither of these approaches to organizational resistance is easy, but the alternative is untenable. By making an up-front investment in relationship building, stakeholder involvement, and clear process documentation, IT can strengthen its influence in and benefit to the business.

Application Remediation for Windows Migration Got You Down?

By Oscar Olivo, Senior Consultant, VMware’s Professional Services

With the end-of-support date for Windows XP looming—April 8, 2014—many IT organizations are looking for the most efficient path to migration. One potential stumbling block is that, when migrating to Windows 7/8, IT also needs to move from a 32-bit to a 64-bit operating system to provide greater performance and compatibility with newer software.

The Problem

This presents a problem from an application compatibility standpoint, as legacy software that is critical to the business may not even install on a 64-bit operating system, much less run properly.

For the applications that require remediation, not all organizations will be able to upgrade to the latest version of software for the following reasons:

  • The newest version of the application may not work properly on a 64-bit OS.
  • The cost of licensing to upgrade to the newest version of software is restrictive.
  • The server infrastructure accessed by the application may require an older version of the client software (e.g., Project 2010 client is not supported with Project Server 2007).
  • The in-house development efforts to certify existing applications as 64-bit compatible are too costly or the internal development resources are not available to do so.
  • The plan to update/migrate off the software that needs remediation will not meet the April 2014 XP end-of-support date.

The Solution

Luckily, VMware ThinApp allows applications to be virtualized on a variety of 32-bit and 64-bit platforms. This means that applications virtualized on Windows XP 32-bit can be run on Windows 7 or Windows 8.x 64-bit operating systems—an effective method of remediating applications and driving the migration off Windows XP.
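For a sense of what this looks like in practice, below is a minimal sketch of the kind of Package.ini a Setup Capture run on a 32-bit Windows XP reference machine might produce before you run the project’s build.bat. The application name, paths, and parameter values here are illustrative assumptions only and will vary by ThinApp version and application—treat it as a sketch, not a supported configuration.

```ini
; Illustrative ThinApp Package.ini fragment -- names and values are assumptions;
; the real file is generated by Setup Capture on a clean 32-bit XP machine.
[BuildOptions]
InventoryName=LegacyLOBApp       ; how the package is identified in inventory
CompressionType=Fast             ; shrink the package for easier distribution
SandboxName=LegacyLOBApp         ; per-user sandbox that captures writes
OutDir=bin                       ; build.bat drops the packaged executable here

; Entry point users launch on a Windows 7/8 64-bit desktop
[LegacyLOBApp.exe]
Source=%ProgramFilesDir%\LegacyLOB\LegacyLOB.exe
```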

In this situation, it is important to note that the resulting virtual application packages might still contain operating system elements, such as DLLs, executables, and registry entries (in addition to the needed application files). The more of these files that are contained in the virtual application, the greater the difficulty in patching and securing the application once support for Windows XP SP3 is phased out this coming spring.

The Caveat

Although ThinApp can assist in remediating applications, it is critical to define the length of support for these use cases. Each affected business unit needs to understand the security, operational, and (possibly) cost implications of continuing to run legacy software, even in a virtualized state. Having an end state for each remediated application will not only drive standardization to newer software versions, but (most importantly) will also maintain momentum toward a supported, modern operating system.

Read more about why you can’t afford to put off your Microsoft migration over on the VMware Accelerate blog.


Oscar Olivo is a Senior Consultant with VMware’s Professional Services Organization, bringing to the team more than 19 years of IT experience in financial, consulting, and healthcare environments. In addition to being a VMware Certified Professional (for Desktop and Data Center Virtualization), he is fluent in Spanish and English and is currently learning Japanese. Oscar is also a proud University of Michigan alumnus, which at times puts him at odds (literally) with some of his Midwestern co-workers.

 

SDDC + SAP = CapEx/OpEx Savings

By Girish Manmadkar, an SAP Virtualization Architect at VMware

Earlier this month, my colleague David Gallant wrote about architecting a software-defined data center for SAP and other business-critical applications. I’d like to further explore how SAP fits into the software-defined data center (SDDC) and, specifically, how to optimize it for CapEx and OpEx savings.

A key point to remember is that the SDDC is not a single technology that you purchase and install—it is a use case, a strategy, a shift in mindset. It is also a journey that will unfold in stages and should be planned accordingly. I’ve outlined the three foundational steps below.

SDDC 1.0

Most of the customers that I work with are well along in this stage, moving their current non-x86 SAP workloads toward a VMware-based x86 environment.

During this process, numerous milestones can be delivered to the business, in particular an immediate reduction in their CapEx. This benefit is achieved by starting to move non-x86 or current physical x86 workloads to the virtual x86 platform. Understandably, customers tend to approach this transition with caution, so we often start with the low-hanging fruit: non-production and/or development SAP systems.

The next step you can take is to introduce automation. Automation comes into play at two layers: the infrastructure layer, achieved using VMware vCloud Automation Center and Orchestration; and the application layer, delivered using SAP’s Landscape Virtualization Manager.

During this phase it is best to implement vSphere features such as Auto Deploy, host profiles, and OS templates in order to automate vSphere and virtual machine provisioning in the environment.

Often it is a good idea at this time to start a parallel project around storage. You can work with your storage and backup teams to enhance current architectures by enabling storage technologies like deduplication, vSphere Storage I/O Control, and other storage array plug-ins.

We also recommend minimizing agents in the guest operating system, such as agents used for backup and/or anti-virus. The team should start putting together a new architecture to move such agents from the guest OS to the vSphere hosts to reduce complexity and improve performance. The storage and network teams should also look to implement an architecture that will support a virtual disaster recovery solution. By planning ahead now, teams can avoid rework later.

During this phase, the team not only migrates SAP application servers to the vSphere platform but also shows business value with CapEx reductions and value-added flexibility to scale out SAP application server capacity on demand.

SDDC 2.0

Once this first stage goes into the operations cycle, it lays the groundwork for various aspects of the SDDC’s second stage. The next shift is toward a converged datacenter or common virtualization framework to deploy a software-defined lifecycle for SAP. This allows better monitoring, migration to the cloud, chargeback, and security.

This is also the phase where you want to virtualize your SAP central instances (ASCS) and database servers. The value here is the removal of a reliance on complex, physical clustered environments by transitioning instead to VMware’s high-availability features. These include fault tolerance (FT), whose applicability to the ASCS is determined by the SAP sizing exercise and by the SLAs the business needs to meet.

SDDC 3.0

Once the SDDC 2.0 is in production, it is a good time to start defining other aspects of SDDC, such as Infrastructure-as-a-Service, Platform-as-a-Service, Storage-as-a-Service, and Disaster-Recovery-as-a-Service.

Keep an eye out for our follow-up post fleshing out the processes and benefits of these later stages.


Girish Manmadkar is a veteran VMware SAP Virtualization Architect with extensive knowledge and hands-on experience with various SAP and VMware products, including various databases. He focuses on SAP migrations, architecture designs, and implementation, including disaster recovery.

Help Your Mirage Implementation Soar with a Pilot Program

By John Kramer, Consultant at VMware

I’ve recently been working on a customer engagement, getting them ready to deploy VMware’s Horizon Mirage to 12,000 endpoints worldwide. The main use case this customer had in mind was backup and recovery of existing endpoints and new endpoint provisioning.

During the initial phase of a Mirage project, the unique data from each endpoint is “centralized” into the data center to provide protection for the user’s data and application personality. With so many endpoints to protect, it’s key that we test our assumptions with a pilot program.

Getting off the ground

We begin by building the pilot infrastructure as close to the production configuration as possible. In this case, that included six Mirage Cluster servers, one Mirage Management server, an EMC Isilon Storage array, and the customer’s existing F5 load balancer. (Note that each Mirage implementation will be unique, as will combined use cases—migration, image deployment, and backup/recovery, for example. More variables and considerations would come into play if more than backup/recovery was needed.)

For this particular pilot we selected 200 endpoints from as many branch office locations as possible. I would normally recommend a much smaller pilot with 50–100 endpoints, but this customer needed to centralize global endpoints to a single US-based datacenter, so we needed a larger data set to test the various network configurations worldwide.

While implementing a single, centralized Mirage cluster is ideal, there are situations in which supporting multiple Mirage clusters is the best solution. These include when data privacy regulations require data to be kept in specific countries or regions, when wide area network bandwidth is insufficient to support Mirage operations, and when the customer requires separation of Mirage management responsibilities for different endpoints.

Once the infrastructure is set up, we build a functional test plan, which we use to validate the decisions made when the customer purchased the servers, storage, and infrastructure to support Mirage. Then we extrapolate the data to make sure there will be enough bandwidth, time, and disk space to support the full centralization.

A pilot phase can also help with smaller roll-outs by ensuring resources are utilized as efficiently as possible. Here are the key components of our Mirage pilot.

Average CVD Size

This is the amount of unique data, after deduplication is taken into account, that has to go over the network for an endpoint to be considered “protected” (since Mirage only centralizes data that’s not duplicated on another endpoint). By multiplying the average CVD size by the number of endpoints in an office, we can estimate the amount of data that will need to be centralized from each site.

Network Bandwidth

The next thing we need to know is how much bandwidth is available from each site to transfer that data. That, along with the average CVD size, allows us to calculate the total amount of time for centralization, as well as the amount of network bandwidth required to complete centralization. This helps us determine if expectations need to be reset with the customer and/or if we need to implement some form of Quality of Service (QoS) to rate-limit Mirage traffic over the network so it does not compete with other high-priority traffic.

Storage Sizing

The average CVD size also helps us calculate how much total storage space will be necessary (average CVD size times the number of endpoints that need to be backed up). We also make sure there are sufficient storage array IOPS for Mirage to utilize during the centralization phase.
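To show how these three estimates fit together, here is a quick back-of-the-envelope sketch in Python. The CVD size, endpoint counts, and link speed are illustrative assumptions, not figures from this engagement; substitute the numbers you gather during your own pilot.

```python
# Rough Mirage centralization sizing from pilot measurements.
# All inputs below are illustrative assumptions -- replace with pilot data.

def site_estimates(avg_cvd_gb, endpoints, link_mbps):
    """Return (GB to centralize, hours to transfer) for one branch office."""
    data_gb = avg_cvd_gb * endpoints            # unique, post-dedup data per site
    megabits = data_gb * 1024 * 8               # GB -> megabits
    hours = megabits / link_mbps / 3600         # assumes full use of the link;
    return data_gb, hours                       # QoS rate limits will stretch this

# Example: 15 GB average CVD, 40 endpoints at a branch, 20 Mbps WAN link
data_gb, hours = site_estimates(avg_cvd_gb=15, endpoints=40, link_mbps=20)

# Storage needed in the data center: average CVD size x all protected endpoints
storage_gb = 15 * 200                           # 200 endpoints in this pilot

print(f"Branch centralization: ~{data_gb:.0f} GB over ~{hours:.0f} hours")
print(f"Pilot storage estimate: ~{storage_gb} GB (plus growth headroom)")
```

Running numbers like these per site quickly surfaces which branches need QoS, a longer centralization window, or a conversation about resetting expectations.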

Communication

The early stage of the pilot is also an important opportunity to bring together the various teams that will be affected—server, storage, network, desktop, helpdesk—and start discussions about gathering performance stats from each group during the pilot to validate the planned architecture. This way we make sure everyone understands how the roll-out will affect their team and what their roles and responsibilities will be. Good communication with these groups is important throughout the pilot to ensure you’re gathering the data needed to validate your design or make adjustments when necessary.

After testing the key components during the centralization phase, we work with the customer to build a base layer from which new endpoints will be provisioned. A new endpoint will usually arrive from the manufacturer with a corporate image that’s out of date. When that new endpoint comes onto the network, we add it to the Mirage system, which scans the system, determines which updates are missing, then uses the base layer to send the updates to the new endpoint being provisioned.

This process will also make use of the Branch Reflector, if one is available on the LAN. Branch reflectors are used to reduce the amount of data that needs to be sent to a remote location for base layer provisioning and update operations.

One big advantage of Mirage is that it’s designed to be as hands-off as possible, saving IT time. A great example happened the week after we rolled out the pilot to the customer’s Indianapolis office. An employee’s hard drive failed, and it just happened to be one that was included in the pilot. We were able to restore that endpoint, including all of the user’s applications and data, within the day, as if nothing had ever happened.


Under their old system, it would have taken much longer than a day to get the end user’s applications and data recovered—assuming they even had a valid backup. Not surprisingly, the customer and the employee were very happy with the results—and we were happy that Mirage so clearly proved its value during the first week of the pilot!

This highlights one final benefit of a pilot program: It gives you the opportunity to reiterate the value of the investment and strengthen buy-in even further. So whether you are working on a project involving 2,000, 10,000, or 20,000 endpoints, I recommend starting with a pilot. It will save you time, effort, and money in the long run.


John Kramer is a Consultant for VMware focusing on End-User-Computing (EUC) solutions. He works in the field providing real-world design guidance and hands-on implementation skills for VMware Horizon Mirage, Horizon View, and Horizon Workspace solutions for Fortune 500 businesses, government entities, and academic institutions across the United States and Canada.

Quickly Calculate Bandwidth Requirements with New vSphere ‘fling’

By Sunny Dua, Senior Technology Consultant at VMware 

In a number of my recent consulting engagements, I have seen increasing demand for host-based data replication solutions. In a few recent projects, I have implemented VMware Site Recovery Manager in combination with VMware vSphere Replication.

I have written about vSphere Replication (VR) in the past, and I am not surprised that a number of VMware customers are shifting focus from storage-based to host-based replication due to the cost benefits and flexibility that come with such a solution.

In my projects I started by replicating simple web servers to a DR site using VR; now customers are discussing replicating database servers, Exchange, and other critical workloads using vSphere Replication. With out-of-the-box integration with a solution such as VMware Site Recovery Manager, building a DR environment for your virtualized datacenter has become extremely simple and cost effective.

The configuration of the replication appliance and SRM is as easy as clicking NEXT, NEXT, FINISH; however, the most common challenge has been estimating the bandwidth requirements from the Protected Site to the Recovery Site for the replication of workloads. One of the most commonly asked questions is: “How do I calculate the bandwidth requirements for replication?” Continue reading
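While the fling does the heavy lifting, the underlying arithmetic is straightforward: you need enough bandwidth to ship the data that changes within each RPO window, plus some overhead. The sketch below is a generic back-of-envelope illustration, not the fling’s method, and the change rate, RPO, and overhead factor are assumptions you would replace with measured values.

```python
# Generic estimate of replication bandwidth per protected VM.
# All inputs are illustrative assumptions -- measure real change rates first.

def replication_bandwidth_mbps(changed_gb_per_rpo, rpo_minutes, overhead=1.2):
    """Average Mbps needed to ship the changed blocks within one RPO window."""
    megabits = changed_gb_per_rpo * 1024 * 8         # GB -> megabits
    return megabits * overhead / (rpo_minutes * 60)  # spread over the RPO window

# Example: a VM that changes 2 GB of data within a 4-hour (240-minute) RPO
print(f"~{replication_bandwidth_mbps(2, 240):.1f} Mbps per VM")
```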

The SDDC Seems Cool … But What Do I Do with It?

By David Gallant, VMware Professional Services Consultant

Lately I’ve been receiving requests from customers to talk to them about the software-defined data center (SDDC). So I start to explain software-defined networking, software-defined storage, automated provisioning, and self-service portals.

And that’s when I notice the customer looking excited, but also slightly confused.

Last week at SAP TechEd 2013, I was in the middle of just such a talk when I decided to stop and ask the customer why he looked puzzled.

His response? “That’s great, but what do I do with all that SDDC stuff?”

That’s when the light bulb came on. He was right to question me—why build a software-defined data center if you have no clue what you’re going to do with it?

To really harvest the value of your SDDC investment, you need to be building toward a specific set of goals. We don’t build data centers without a purpose, and that purpose for the SDDC, as it’s always been, is the application.

In most cases the best data centers have been purpose-designed and built around the organization’s business-critical applications; for instance SAP, Oracle, or Microsoft applications.

I’ll concentrate for now on SAP—if you can architect an SDDC for SAP, you can roll those concepts over to pretty much any other application. Continue reading

Are You Optimizing your SAP Virtualization?

If you are virtualizing an SAP environment running business-critical applications, chances are these questions will sound familiar: Am I optimizing my SAP virtualization for the maximum benefit? What measures should I take to avoid negative business impact when running SAP production workloads on the VMware virtualized platform?

Luckily, VMware Consulting Architect Girish Manmadkar recently shared his advice on this topic.

To make sure you are designing and sizing your infrastructure for optimum business benefit, Girish suggests two new questions to ask yourself, your IT organization, and your vendors.

1. How will this environment need to scale?

2. Am I sizing my environment to support 3-to-5 years of growth?

When you understand the needs outlined by these questions, you can then work with hardware vendors, as well as your VMware and SAP teams, to find the best solution.
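To put the second question in concrete terms, even a simple compound-growth projection can frame the sizing conversation with your vendors. The baseline SAPS figure and growth rate below are purely illustrative assumptions, not sizing guidance.

```python
# Back-of-envelope capacity projection for 3-to-5 years of SAP growth.
# Baseline and growth rate are illustrative assumptions -- use your own data.

def projected_capacity(baseline, annual_growth, years):
    """Capacity required after `years` of compound annual growth."""
    return baseline * (1 + annual_growth) ** years

baseline_saps = 50_000                      # assumed current SAPS requirement
for years in (3, 5):
    needed = projected_capacity(baseline_saps, annual_growth=0.20, years=years)
    print(f"Year {years}: ~{needed:,.0f} SAPS at 20% annual growth")
```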

From an operational standpoint, there are also efficiencies within the SAP environment once it is virtualized that you want to be sure to take advantage of.

1. Scaling out during the month-end and quarter-end processing is a snap compared to the hours it can take otherwise.

2. Products like vCenter Operations Manager help make sure your SAP Basis admin and VMware admin are always on the same page, making it far faster and easier to troubleshoot the environment.

3. You’ll be able to provide the operations team with 24-hour monitoring of the entire SAP virtual infrastructure, allowing for a proactive approach to minimize or eliminate downtime.

Check out Girish’s video, above, for more details.


Girish Manmadkar is a veteran VMware SAP Virtualization Architect with extensive knowledge and hands-on experience with various SAP and VMware products, including various databases. He focuses on SAP migrations, architecture designs, and implementation, including disaster recovery.


The Tao of IT Transformation

Taken from Chinese philosophy, “tao” refers to a “path” or guiding principle. With the exciting range of new technologies available (such as software-defined storage and network virtualization), it’s important that IT organizations establish an overarching strategy for integrating them with their existing architecture.

In this short video, Wade Holmes (VCDX #15 and VMware Staff Solutions Architect) outlines the “VCDX Way,” which emphasizes an integration plan that is closely mapped to business priorities.

How are you approaching the integration of new technologies? Are you mapping updates to business strategy? We’d love to hear about your experiences in the comments.

4 Ways To Overcome Resistance to the Cloud

By Brett Parlier, Solutions Architect, VMware Professional Services

There’s a lot of excitement about cloud computing right now, but I also run into an equal amount of trepidation. In particular, networking pros are worried that increasingly advanced automation will soon put them out of a job.

This is just one of several common points of resistance to the big changes happening in IT. I want to talk about four of them and provide some advice on how to reframe the discussion for clients, colleagues, and possibly yourself.

1. You’re going to automate my job away!

I heard this a lot after the announcement of VMware’s NSX network virtualization platform in August. My response? That’s the same thing all the server guys said 10 years ago when virtualization came out. It just doesn’t happen. Continue reading