Home > Blogs > VMware Consulting Blog > Monthly Archives: May 2013

Don’t Leave Security Off the Table

By Bill Mansfield, VMware Professional Services Consultant

I find myself discussing non-technical issues with a large majority of my enterprise customers: brokering a truce between operational organizations that have evolved in their own silos and don’t play well with others. In the early days of virtualization, it was difficult to get three key parties in the same room in large shops to hash out architectural requirements and operational process. Networking, Storage, and Virtualization were typically at odds with each other for any number of reasons, and getting everyone to play nice was difficult. These days, it’s primarily Security that’s left out of the room. A large government customer recently told me flat out, “We don’t care about security,” implying that it was another department’s responsibility. Indeed, the SecOps (Security Operations) and Security Engineering teams had never been brought into a virtualization meeting in the seven years virtualization had been in house.

This segregation of the Security team, whether intentional or not, causes serious problems during a security incident. Typically SecOps only has a view into the core network infrastructure, plus some agent-based sensors that may or may not make it onto the VMs being investigated. Network sensors typically exist only at the edges of the network, and occasionally at the core in larger shops, and VM-to-VM traffic may never transit the physical network at all. For a long time, the ability to watch virtual switches for data was simply not available, and Security teams got used to that. These days, all the traditional methods of monitoring and incident investigation are readily available within vSphere. The vSphere 5.1 Distributed Virtual Switch can produce NetFlow data for consumption by any number of tools, and RSPAN and ERSPAN can provide full remote network monitoring or recording. Inter-VM traffic is no longer invisible to security tools. Security teams just need to be involved, and need to hook their existing toolset into the software-defined data center. No need to reinvent the wheel. Sure, we can enhance capabilities, but first we need to get the Security teams to the table and let them use the tools they already have.

So what are some typical questions from Security Operations about the software-defined data center? Some I can answer; some are still works in progress. All of them deserve their own write-ups.

How do we monitor the network?

  • Port mirroring has been around for a while, and NetFlow, RSPAN, and ERSPAN capabilities now let us work with a great deal of industry-standard tools.
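For a sense of what those NetFlow exports look like on the wire, here is a minimal sketch in Python that unpacks the fixed 24-byte NetFlow v5 header a collector receives. The synthetic datagram at the end is invented for illustration; a real collector would read these off a UDP socket.

```python
import struct

# NetFlow v5 header layout: version, record count, sys_uptime,
# unix_secs, unix_nsecs, flow_sequence, engine_type, engine_id, sampling
V5_HEADER = struct.Struct("!HHIIIIBBH")  # 24 bytes total

def parse_v5_header(datagram: bytes) -> dict:
    """Unpack the 24-byte NetFlow v5 header from an export datagram."""
    (version, count, sys_uptime, unix_secs, unix_nsecs,
     flow_sequence, engine_type, engine_id,
     sampling) = V5_HEADER.unpack_from(datagram)
    if version != 5:
        raise ValueError(f"expected NetFlow v5, got version {version}")
    return {
        "version": version,
        "count": count,            # number of 48-byte flow records that follow
        "sys_uptime_ms": sys_uptime,
        "unix_secs": unix_secs,
        "flow_sequence": flow_sequence,
    }

# A synthetic v5 header claiming two flow records, for illustration only.
sample = V5_HEADER.pack(5, 2, 1000, 1368000000, 0, 42, 0, 0, 0)
print(parse_v5_header(sample)["count"])  # 2
```

The point is not to write your own collector, but that the export format is open and trivially machine-readable, so any existing tooling the Security team already runs can consume it.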

How do we securely log events?

  • SIEM integration is fairly straightforward via syslog or direct pulls from the relevant vSphere databases.
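As a sketch of what that syslog hookup can look like, the Python snippet below builds a forwarder with the standard-library SysLogHandler and normalizes events into key=value pairs that most SIEMs parse natively. The host name, port, and event fields are all illustrative assumptions, not a vSphere API.

```python
import logging
import logging.handlers

def make_audit_logger(siem_host: str = "siem.example.com", port: int = 514):
    """Build a logger that forwards audit events to a syslog/SIEM collector.

    siem.example.com is a placeholder; point this at your real collector.
    """
    logger = logging.getLogger("vsphere.audit")
    logger.setLevel(logging.INFO)
    handler = logging.handlers.SysLogHandler(
        address=(siem_host, port),
        facility=logging.handlers.SysLogHandler.LOG_AUTH,
    )
    handler.setFormatter(logging.Formatter("vsphere-audit: %(message)s"))
    logger.addHandler(handler)
    return logger

def format_event(user: str, action: str, obj: str) -> str:
    """Normalize an event into key=value pairs most SIEMs parse natively."""
    return f'user="{user}" action="{action}" object="{obj}"'

print(format_event("admin", "VmPoweredOff", "vm-1042"))
```

In practice you would simply point the ESXi hosts’ and vCenter’s built-in syslog output at the collector; the sketch is only to show there is no exotic plumbing involved.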

Where do we put IDS/IPS?

  • Leave the traditional edge monitoring in place, and enhance it with solutions inside the vSphere stack.
  • vSphere accommodates traditional agent-based IPS as well as a good number of agentless solutions via EPSEC and NetX API integration. Most of the major vendors have some amount of integration.

Can you accommodate for segregation of duties?

  • vSphere and vCNS vShield Manager both provide role-based segregation and audit capability.

Can you audit against policy?

  • This is a big topic. We can audit host profiles and admin activity in vCenter, and we can audit almost anything at all levels of the stack with vCenter Configuration Manager.
  • We can baseline the network traffic of the enterprise with vADP (Application Dependency Planner, not to be confused with our backup API), then periodically check for deltas with vADP to find anomalous traffic.
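The baseline-and-delta idea is just set arithmetic at heart: record the flows you consider normal, then flag anything new. A toy Python sketch follows; the flow tuples are invented, and in practice they would come from vADP or NetFlow exports rather than being hand-typed.

```python
# Each flow is (source, destination, dest_port). In a real deployment
# these would be harvested from vADP baselines or NetFlow records.
baseline = {
    ("app01", "db01", 1433),
    ("app01", "ldap01", 389),
    ("web01", "app01", 8080),
}

observed = {
    ("app01", "db01", 1433),
    ("web01", "app01", 8080),
    ("app01", "198.51.100.7", 4444),   # never seen in the baseline
}

def anomalous_flows(baseline, observed):
    """Flows seen in the current window that the baseline never recorded."""
    return sorted(observed - baseline)

for flow in anomalous_flows(baseline, observed):
    print(flow)  # ('app01', '198.51.100.7', 4444)
```

The hard part, of course, is building a trustworthy baseline in the first place, which is exactly what the periodic vADP runs are for.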

What tools work with VMware to assist with forensics and incident management?

  • Again, this is another big topic. Guests are just data, and a VM doesn’t know when it’s had a snapshot taken. I’ve worked with EnCase, CAINE, BackTrack, and other tools to look at things raw. Procedurally it’s fairly simple: dd off the datastore and run the result through one of the usual tools, and/or run the tool against copies of the VMDKs in question.
  • On the network side, tie ERSPAN to Wireshark and use traditional methodology. If you’re feeling clever, you can look at live memory by recording a vMotion.
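Because guests are just data, preserving the integrity of an evidence copy is largely a hashing exercise: fingerprint the image before and after analysis to show it was never modified. A minimal Python sketch, where the VMDK path is a placeholder:

```python
import hashlib

def fingerprint(path: str, chunk_size: int = 1 << 20) -> str:
    """SHA-256 a disk image in chunks so multi-GB VMDKs don't exhaust RAM."""
    digest = hashlib.sha256()
    with open(path, "rb") as image:
        while chunk := image.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# Record this value when the copy is made; re-hash after analysis to
# demonstrate the evidence copy was never altered.
# print(fingerprint("/evidence/webserver-flat.vmdk"))
```

Which hash, where it is recorded, and who witnesses it are chain-of-custody questions, which is where the specialists in the next item come in.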

How does legal chain of custody work for forensics on a VM?

  • I’m not a lawyer, and I’m not a certified forensic examiner. So, I’ve always had someone from a firm that specializes in forensics, like Foundstone, with me to handle the paperwork.

Is this a comprehensive list? Not at all. It’s just the beginning. The first step is getting Security to the table and getting them actively participating in design and operational decisions. With higher and higher consolidation ratios, it becomes more important than ever to instrument the virtual infrastructure. For larger organizations, tools like EMC NetWitness can provide insight into all aspects of the software-defined data center, and SIEM engines like ArcSight can correlate events and provide an enterprise-wide threat dashboard. For smaller organizations, there are plenty of open source tools available.

Security professionals, where are you seeing resistance while trying to do your jobs in the software-defined data center? What requirements are you finding most challenging to address? Let us know in the comments below!

Bill Mansfield has worked as a Senior Security Consultant at VMware for the past 6 years. He has extensive knowledge on transitioning traditional security tools into the virtual world.


Virtualize SAP – Risky or Not?

By Girish Manmadkar, VMware Professional Services Consultant

In years past, some IT managers were not ready to talk about virtualizing SAP for technical and political reasons. The picture is very different today, in part because of the increased emphasis on IT as a strategic function moving toward the Software-Defined Data Center (SDDC).

Virtualization and the road to the SDDC extend the cost and operational benefits of server virtualization to all data center infrastructure—network, security, storage, and management. For example, peak workloads such as running consolidated financial reports are handled much more effectively, thanks to streamlined provisioning. Integrating systems after a company acquisition is more easily managed, thanks to the flexibility of virtualized platforms. And finally, customers are leveraging their virtualized SAP environments to add capabilities such as enhanced disaster recovery/business continuity or chargeback systems.

Many customers have been realizing virtualization benefits ever since they moved their SAP production workloads to the VMware platform. As IT budgets continue to shrink, the imperative to lower operating costs becomes more urgent—and virtualization can make a real difference. Server consolidation through virtualization translates directly into lower costs for power, cooling, and space—and boosts the organization’s “green” profile in the bargain.

Organizations Benefit from Virtualizing SAP

The main requirement for any IT manager supporting an SAP environment is to ensure high availability—even a few minutes of downtime can mean lost dollars, not to mention angry phone calls from executive management and frustrated users. VMware virtualization takes advantage of SAP’s high-availability features to keep the SAP software running without interruption—and helps keep those phone lines quiet.

Greenfield SAP deployments are a great way to build the environment right from the ground up using a building-block approach. You will quickly start seeing the flexibility, scalability, and availability benefits of the newly built environment on VMware.

Upgrades come in two scenarios:

  • An SAP hardware refresh cycle
  • An SAP application and/or database upgrade

Upgrades are a part of every SAP landscape, and they can be complex, long-term efforts. Most of my customers who run SAP upgrades through their standard physical environment spend many man-hours or even days—if they have the hardware available at their disposal. In a virtual environment, however, provisioning is rapid and can be executed in minutes, as can deprovisioning to reclaim resources back into the resource pool, which makes the upgrade process that much more streamlined and efficient. When going through an SAP upgrade—a very time- and cost-sensitive project—it is very important to provide the required resources to the development team in a timely manner.

Time to Move

Let’s say you’ve decided to virtualize your SAP environment—now the question is timing. I have seen many customers take an SAP upgrade and/or a platform or hardware refresh as the opportunity to move to the virtual platform.

A planned SAP upgrade can be a good time to move. I have seen some customers cash in on the planned move to SAP NetWeaver & other add-ons to virtualize their entire SAP landscapes—with savings of more than half of their capital expenses.

A hardware refresh is a great time to move. Many customers take advantage of the change in hardware to also consider a migration to virtualization at the same time. It allows customers to integrate the hardware refresh and virtualization projects to minimize disruptions and combine staff training for new hardware and software.

SAP Requirements: Security, Compliance, and Disaster Recovery

Challenges like compliance and security policies often require substantial infrastructure changes that can highlight the inherent inflexibility of a traditional hardware platform—and persuade top management to invest in infrastructure. Many customers have successfully implemented VMware-provided solutions to ensure the security and compliance of their SAP environments so that they can realize the benefits of virtualization.

Disaster Recovery
A business continuity plan is imperative for many of our SAP customers. A disaster—natural or man-made—severely impacts operations, which impacts the bottom line. That, of course, is why executives often order a review of the company’s disaster recovery/business continuity plans. VMware understands this importance, and the risk is addressed by the VMware Site Recovery Manager product.

So is virtualizing the platform for your SAP environment too risky? All IT projects have risk. Is it so risky to pass up the benefits of virtualization? In my opinion, no—not if you follow the advice and methodology offered by my colleagues David Gallant (Business as Usual with Tier 1 Business Critical Applications? – Not!) and Eiad Al-Aqqad (Knowing Your Applications is Key to Successful Data Center Transformation). I ask you—if you haven’t already virtualized your SAP environment, why not explore it now? There have been so many advances in technology and alliances that you can’t ignore it any longer.

Girish Manmadkar is a veteran VMware SAP Virtualization Architect with extensive knowledge and hands-on experience with various SAP and VMware products, including various databases. He focuses on SAP migrations, architecture design, and implementation, including disaster recovery.


Knowing Your Applications is Key to Successful Data Center Transformation

By Eiad Al-Aqqad, VMware Professional Services Consultant

This decade has offered more data center transformation options than most IT professionals can keep up with. Virtualization dramatically changed the way things were traditionally done in the data center; having the largest data center is no longer something to brag about, as it might be a sign of inefficiency. Then the cloud computing storm hit the data center, and while IT professionals were still digesting it, the Software-Defined Data Center concept evolved. While each of these transformations has offered great advantages to adopters, each had its own challenges—and quite frankly, planning was not optional for a successful implementation.

Planning is critical for data center transformation, and it does not stop at infrastructure planning—it extends to understanding your applications.

Most organizations are good at the infrastructure portion of planning, but have difficulty planning their applications for transformation. I’ve witnessed many transformation efforts where the customer team had a hard time answering these simple questions:

  1. What are your applications’ priorities?
  2. What are your applications’ RPO/RTO, and how are you planning to achieve them?
  3. What are the security requirements of each of your apps?
  4. What do your application dependencies look like?

It is critical to know your applications well before starting any transformation effort, and the four questions above are a good start. While the first three can normally be answered by collecting bits and pieces from the right SMEs and business units, application dependency is more challenging and is what I want to focus on in this article. For more thoughts on workload classification, please check out my colleague David’s post: Business as Usual with Tier 1 Business Critical Applications? – Not!

Application dependency mapping has proved to be more challenging for many reasons, including:

  1. Application dependencies aren’t static and can change on a daily basis.
  2. Most organizations have inherited legacy applications with very little documentation.
  3. Current change management systems, while helping to document changes, still lag when it comes to documenting application dependencies.
  4. Application dependencies are always filled with unexpected surprises that no one wants to admit, like a critical application depending on a database running on a PC hidden under a developer’s desk.

While application dependency planning without the right tools can be challenging, the point is that thorough planning and investigation before any data center transformation is required for a successful end game. Tools definitely help, but even more important is making sure you ask yourself the questions above—that is really the first step before anything else.
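To make the idea concrete, a dependency map is at its simplest a fold over observed connections. The toy Python sketch below builds one from (client, server) pairs such as might be harvested from flow records collected over several days; all host names are invented.

```python
from collections import defaultdict

# (client, server) pairs as they might be harvested from flow records
# or netstat output collected over several days.
connections = [
    ("crm-web", "crm-app"),
    ("crm-app", "crm-db"),
    ("crm-app", "legacy-pc-under-desk"),   # the surprise nobody admits to
    ("crm-web", "crm-app"),                # duplicates are expected
]

def build_dependency_map(conns):
    """Fold raw connection observations into app -> set(dependencies)."""
    deps = defaultdict(set)
    for client, server in conns:
        deps[client].add(server)
    return deps

deps = build_dependency_map(connections)
print(sorted(deps["crm-app"]))  # ['crm-db', 'legacy-pc-under-desk']
```

Real tooling has to do much more—deduplicate over time, filter noise, and attribute ports to applications—which is why a product like ADP exists rather than a script, but the underlying data model is this simple.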

Finally, the good news: tools and services are available that help automate the process of creating an accurate application dependency mapping of your environment. ADM and the Virtualization Assessment service (which includes the use of Capacity Planner and Application Dependency Planner (ADP)) offered by VMware can be quite handy in creating an application dependency mapping for the applications in your environment. For more information about ADP, please visit: My VMware Application Dependency Planner Post

Eiad Al-Aqqad is a Senior Consultant within the SDDC Professional Services practice. He has been an active consultant using VMware technologies since 2006. He is a VMware Certified Design Expert (VCDX #89), as well as an expert in VMware vCloud, vSphere, and SRM.


Business as Usual with Tier 1 Business Critical Applications? – Not!

By David Gallant, VMware Professional Services Consultant

OK, so you’ve decided to virtualize your Tier 1 business critical applications—awesome, that’s great news. The daunting question is, “Where do you start?” As a VMware PS consultant, I see customers go through this process every day; some get us involved when that question comes up, others much later. I can say with certainty that earlier is always better than later. Tier 1 application design and architecture is hardly ever business as usual—but it had better be business as usual when you finish!

So, where do you start?

Without a doubt, I always start with something I call “workload classification.” This is the phase where the virtualization architect or administrator works with the application teams to understand three aspects of enterprise application architecture:

  1. Application dependency planning (Enterprise Architecture)
  2. Understanding the performance profile
  3. Defining the security profile

We will explore these tenets more deeply in upcoming blogs this month, so I’ll start by talking about the core classification work.

When classifying workloads for virtualization, the first instinct is to collect as much data as possible. That would be a mistake. Instead, think about the four components we measure for vSphere: compute (CPU and memory), storage, and network. I recommend collecting data only in these areas to start, as it makes the data much simpler to gather and analyze.

CPU, Memory, Storage, and Network

  • CPU: collect utilization by percentage and by MHz at the server level, the instance/process level, and the database level when measuring databases.
  • Memory: collect utilization by percentage and some measure of bytes (KB, MB, or GB).
  • Storage: collect IOPS, throughput, storage consumed, and growth rate.
  • Network: collect utilization percentage, keeping in mind that you need to know the link speed of both target and source to match them up. If you want to go deep, use a tool like Wireshark to measure the individual application.

Collect the data for a period of time (typically four weeks). I like to use quarter ends when possible so I can see trends in larger data spikes; fiscal year end is the best time, especially when trying to classify finance applications like SAP ECC. Also think about the data collection interval—how frequently you grab a data point. I typically use one-minute intervals for most workloads, but a smaller interval may be necessary for a high-performance/low-latency application. If so, reduce the collection period to limit the amount of data you’ll have to analyze, and instead consider two or more collection periods.
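Once the samples are in hand, sizing is usually driven by a high percentile rather than the raw peak, so a single spike doesn’t dictate the design. A small Python sketch of that roll-up, with invented sample values:

```python
# One-minute CPU utilization samples (%) for a single workload.
samples = [22, 25, 24, 31, 28, 26, 95, 27, 29, 30, 25, 24]

def sizing_value(samples, percentile: float = 0.90) -> float:
    """Pick the utilization value at the given percentile of the samples."""
    ordered = sorted(samples)
    index = min(len(ordered) - 1, int(percentile * len(ordered)))
    return ordered[index]

print("peak:", max(samples))          # 95 — driven by one short spike
print("p90 :", sizing_value(samples)) # 31 — a saner basis for sizing
```

Whether you size to p90, p95, or the true peak is a judgment call per workload; latency-sensitive Tier 1 applications generally deserve a higher percentile than batch jobs.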

Once you have your data, analyze it against the target hosts’ specifications to determine how many hosts are required and some initial placement strategies. Remember that vSphere DRS will help with final placement and keep the load balanced, so think of this as a theoretical exercise to help architect and design the environment.
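The “how many hosts” arithmetic in that exercise can be sketched as a simple first-fit decreasing pack, leaving DRS to refine actual placement later. All numbers in this Python sketch are invented:

```python
def estimate_hosts(vm_demands_ghz, host_capacity_ghz, headroom=0.8):
    """First-fit decreasing packing of VM CPU demand onto hosts.

    headroom=0.8 means we only plan to fill each host to 80%,
    leaving room for spikes and HA failover.
    """
    usable = host_capacity_ghz * headroom
    hosts = []  # remaining capacity per planned host
    for demand in sorted(vm_demands_ghz, reverse=True):
        for i, free in enumerate(hosts):
            if demand <= free:
                hosts[i] = free - demand
                break
        else:
            hosts.append(usable - demand)  # open a new host
    return len(hosts)

# Ten VMs of varying peak CPU demand, packed onto 30 GHz hosts.
demands = [6.0, 5.5, 4.0, 4.0, 3.5, 3.0, 2.5, 2.0, 1.5, 1.0]
print(estimate_hosts(demands, host_capacity_ghz=30.0))  # 2
```

The same packing logic applies per dimension (memory, IOPS, network); take the maximum host count across dimensions as the planning figure.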

After the workload classification study is complete, I always compare my results to an application dependency plan; the two studies together provide an excellent basis for a migration or re-platforming study. Another piece of the puzzle is defining the security profile of the target environment, comparing and contrasting the existing security profile with the future state. There are tremendous advantages to implementing proper security in the vSphere environment, which we will describe in a future blog this month.

I’ll leave you with some final thoughts on workload classification. If done ahead of time, going through this process will not only guide the design of the future environment, but will probably also help define a new, optimized way to go to market for your business critical applications. You will probably find business-level design flaws in your current environment that, when changed, will allow you to more easily manage, maintain, optimize, and scale up and/or out in the new environment.

If you’re thinking of virtualizing your business critical applications and you’re not sure where to start, contact your account team and get us involved today.

David Gallant has worked at VMware for over 2 years and has over 20 years of experience in the IT industry. He specializes in virtualizing SAP, Microsoft SQL Server, and Oracle (non-RAC).