
Tag Archives: software-defined data center

Top 5 Tips When Considering a VMware Virtual SAN Storage Solution

By Mark Moretti

Is a software-defined storage platform right for you? How do you approach evaluating a virtualized storage environment? What are the key considerations to keep in mind? What are VMware customers doing? And, what are the experts recommending?

We recently put these questions to our VMware Professional Services consultants and asked them to share the key tips they give their best customers. The result is a short list of “tips” on how to approach evaluating a VMware Virtual SAN solution. It’s not a trivial decision, and IT decisions are not made in a vacuum: you have existing compute and storage infrastructure, so you need to know the impact of your decisions in advance.

Read this short list of recommendations, share it with your staff and engage in a conversation on transforming your storage infrastructure.


Read now: Top 5 Tips When Considering VMware’s Virtual SAN Storage Solution


Mark Moretti is a Senior Services Marketing Manager for VMware.

Understanding View Disposable Disks

By Travis Wood, VCDX-97

When VMware introduced linked clones in View 4.5, it added a new type of disk called the Disposable Disk. The purpose of this disk is to redirect certain volatile files away from the OS Disk to help reduce linked-clone growth. I have read a lot of designs that utilize disposable disks, but it has become clear that there is a lot of confusion and misunderstanding about what they do and exactly how they function. This confusion is highlighted in a View whitepaper called View Storage Considerations, which describes disposable disks as:

Utilizing the disposable disk allows you to redirect transient paging and temporary file operations to a VMDK hosted on an alternate datastore. When the virtual machine is powered off, these disposable disks are deleted.

The three elements from this paragraph I want to demystify are:

  1. What is redirected to the disposable disk?
  2. Where are disposable disks hosted?
  3. When are disposable disks deleted/refreshed?

What is redirected?

By default, three elements are redirected to the disposable disk. The first is the Windows swap file: View Composer redirects the swap file from C: to the disposable disk. It is recommended to set the swap file to a specific size to make capacity planning easier.

The other redirected elements are the system environment variables TMP and TEMP. By default, the user TEMP and TMP environment variables are NOT redirected. However, it is highly recommended to remove the user TEMP and TMP variables; if you do, Windows falls back to the system variables, and user temporary files are then redirected to the disposable disk.
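As a quick illustration, the per-user variables can be checked and removed with a few lines of script run inside the parent image before you take the snapshot. The sketch below uses Python's standard winreg module and is only a minimal example of the idea; in most environments this is handled through Group Policy or the image build process rather than an ad hoc script.

    # Minimal sketch: remove the per-user TEMP/TMP variables so Windows falls
    # back to the system variables, which View Composer redirects to the
    # disposable disk. Run inside the parent image (Windows guest) before the
    # snapshot is taken; normally this is done via GPO or the image build.
    import winreg

    USER_ENV_KEY = r"Environment"  # per-user variables live under HKCU\Environment

    def remove_user_temp_vars():
        with winreg.OpenKey(winreg.HKEY_CURRENT_USER, USER_ENV_KEY, 0,
                            winreg.KEY_ALL_ACCESS) as key:
            for name in ("TEMP", "TMP"):
                try:
                    winreg.DeleteValue(key, name)
                    print(f"Removed user variable {name}")
                except OSError:
                    print(f"User variable {name} not present; system variable applies")

    if __name__ == "__main__":
        remove_user_temp_vars()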


Where is the disposable disk stored?

There is a common misconception that, like the User Data Disk, the Disposable Disk can be redirected to a different storage tier. This is not the case: the Disposable Disk is always stored with the OS Disk. In later versions of View you can choose the Disposable Disk’s drive letter within the GUI to avoid conflicts with mapped drives, but this setting and the size are the only customizations you can make.

When is the disposable disk refreshed?

This is the question that tends to cause the most confusion. Many people I have spoken to say it is refreshed when the user logs off, while others say it happens on reboot. The Disposable Disk is actually refreshed only when View powers off the VM. User-initiated shutdowns and reboots, as well as power actions within vCenter, do not affect the disposable disk. The following actions will cause the disposable disk to be refreshed:

  • Rebalance
  • Refresh
  • Recompose
  • VM powered off due to the Pool Power Policy set to “Always Powered Off”

This is important to understand: if the Pool Power Policy is set to any of the other settings (Powered On, Do Nothing or Suspend), your disposable disks are not refreshed automatically.
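To make the rule easy to reason about, the fragment below simply encodes it. The action names are placeholders of my own, not View API identifiers; View applies this logic internally.

    # Illustrative sketch only: which actions refresh the disposable disk.
    VIEW_INITIATED_POWER_OFF = {
        "rebalance",
        "refresh",
        "recompose",
        "policy_always_powered_off",   # pool power policy "Always Powered Off"
    }

    def disposable_disk_refreshed(action: str) -> bool:
        """The disk is refreshed only when View itself powers off the VM."""
        return action in VIEW_INITIATED_POWER_OFF

    for action in ("recompose", "guest_reboot", "vcenter_power_off",
                   "policy_always_powered_off"):
        state = "refreshed" if disposable_disk_refreshed(action) else "not refreshed"
        print(f"{action}: {state}")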

What does all this mean?

Understanding Disposable Disks and how they function will enable you to design your environment appropriately. The View Storage Reclamation feature introduced in View 5.2 uses an SE Sparse disk for the OS Disk, which allows View to shrink OS disks when files are deleted from within the OS. However, only the OS disk is created as an SE Sparse disk; User Data Disks and Disposable Disks are created as standard VMDKs. The key difference between this feature and Disposable Disks is that storage reclamation relies on files being deleted from within the guest operating system, whereas the Disposable Disk is deleted along with all the files it contains when View powers off the VM. It is also important to note that SE Sparse disks are currently not supported on VSAN.

If you choose to use Disposable Disks in your design, then depending on your power cycle you may want to add an operational task for administrators to periodically change the pool’s power policy within a maintenance window to refresh the Disposable Disks. This is particularly important for the use case of persistent desktops, which tend to have long refresh/recompose cycles.


Travis Wood is a VMware Senior Solutions Architect.

Automating Security Policy Enforcement with NSX Service Composer

By Romain Decker

Over the past decade, IT organizations have gained significant benefits as a direct result of compute virtualization, which has reduced physical complexity and increased operational efficiency. It has also allowed for dynamic re-purposing of underlying resources to quickly and optimally meet the needs of an increasingly dynamic business.

In dynamic cloud data centers, application workloads are provisioned, moved and decommissioned on demand. In legacy network operating models, network provisioning is slow and workload mobility is limited. While compute virtualization has become the new norm, network and security models have remained largely unchanged in data centers.

NSX is VMware’s solution to virtualize network and security for your software-defined data center. NSX network virtualization decouples the network from hardware and places it into a software abstraction layer, thus delivering for networking what VMware has already delivered for compute and storage.

Inside NSX, the Service Composer is a built-in tool that defines a new model for consuming network and security services; it allows you to provision and assign firewall policies and security services to applications in real time in a virtual infrastructure. Security policies are assigned to groups of virtual machines, and the policy is automatically applied to new virtual machines as they are added to the group.


From a practical point of view, NSX Service Composer is a configuration interface that gives administrators a consistent and centralized way to provision, apply and automate network security services like anti-virus/malware protection, IPS, DLP, firewall rules, etc. Those services can be available natively in NSX or enhanced by third-party solutions.

With NSX Service Composer, security services can be consumed more efficiently in the software-defined data center. Security can be easily organized by dissociating the assets you want to protect from the policies that define how you want to protect them.


Security Groups

A security group is a powerful construct that allows static or dynamic grouping based on inclusion and exclusion of objects such as virtual machines, vNICs, vSphere clusters, logical switches, and so on.

If a security group is static, the protected assets are a fixed set of specific objects, whereas dynamic membership of a security group can be defined by one or more criteria, such as vCenter containers (data centers, port groups and clusters), security tags, Active Directory groups, regular expressions on virtual machine names, and so on. When the criteria are met, virtual machines are automatically and immediately added to the security group.

For example, a security group could match any virtual machine with a name containing “web” AND running in “Capacity Cluster A”.
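Such a group can also be created programmatically against the NSX Manager REST API. The sketch below is a rough outline under stated assumptions: the endpoint and XML layout follow the NSX-v 6.x API but should be verified against the API guide for your release, the manager address and credentials are placeholders, and only the VM-name criterion is shown.

    # Rough sketch (not a drop-in script): create a security group whose dynamic
    # membership matches VM names containing "web". Endpoint, XML element names
    # and the criteria key are based on the NSX-v 6.x REST API and should be
    # verified for your release; host and credentials are assumptions.
    import requests

    NSX_MANAGER = "https://nsx-manager.example.com"   # assumption
    AUTH = ("admin", "changeme")                      # assumption

    SECURITY_GROUP_XML = """
    <securitygroup>
      <name>SG-Web-Capacity-Cluster-A</name>
      <description>Web VMs (dynamic membership)</description>
      <dynamicMemberDefinition>
        <dynamicSet>
          <operator>OR</operator>
          <dynamicCriteria>
            <operator>AND</operator>
            <key>VM.NAME</key>
            <criteria>contains</criteria>
            <value>web</value>
          </dynamicCriteria>
        </dynamicSet>
      </dynamicMemberDefinition>
    </securitygroup>
    """

    resp = requests.post(
        f"{NSX_MANAGER}/api/2.0/services/securitygroup/bulk/globalroot-0",
        data=SECURITY_GROUP_XML,
        headers={"Content-Type": "application/xml"},
        auth=AUTH,
        verify=False,  # lab only; use a trusted certificate in production
    )
    resp.raise_for_status()
    print("Created security group:", resp.text)  # NSX returns the new object ID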


Security group considerations:

  • Security groups can have multiple security policies assigned to them.
  • A virtual machine can live in multiple security groups at the same time.
  • Security groups can be nested inside other security groups.
  • You can include AND exclude objects from security groups.
  • Security group membership can change constantly.
  • If a virtual machine belongs to multiple security groups, the services applied to it depend on the precedence of the security policy mapped to the security groups.

Security Policies

A security policy is a collection of security services and/or firewall rules. It can contain the following:

  • Guest Introspection services (applies to virtual machines) – Data Security or third-party solution provider services such as anti-virus or vulnerability management services.
  • Distributed Firewall rules (applies to vNIC) – Rules that define the traffic to be allowed to/from/within the security group.
  • Network introspection services (applies to virtual machines) – Services that monitor your network such as IPS and network forensics.

Security services such as vulnerability management, IDS/IPS or next-generation firewalling can be inserted into the traffic flow and chained together.

Security policies are applied according to their respective weight: a security policy with a higher weight has a higher priority. By default, a new policy is assigned the highest weight so it is at the top of the table (but you can manually modify the default suggested weight to change the order).

Multiple security policies may be applied to a virtual machine because either (1) the security group that contains the virtual machine is associated with multiple policies, or (2) the virtual machine is part of multiple security groups associated with different policies. If there is a conflict between services grouped with each policy, the weight of the policies determines the services that will be applied to the virtual machine.

For example: If policy A blocks incoming HTTP and has a weight value of 1,000, while policy B allows incoming HTTP with a weight value of 2,000, incoming HTTP traffic will be allowed because policy B has a higher weight.
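Reduced to code, the precedence rule is a one-liner; the toy fragment below uses illustrative names and values only.

    # Toy sketch of the precedence rule: when policies applied to the same VM
    # disagree about a service, the highest-weight policy wins.
    policies = [
        {"name": "Policy A", "weight": 1000, "inbound_http": "block"},
        {"name": "Policy B", "weight": 2000, "inbound_http": "allow"},
    ]

    effective = max(policies, key=lambda p: p["weight"])
    print(f"{effective['name']} wins: inbound HTTP is {effective['inbound_http']}ed")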

The mapping between security groups and security policies results in a running configuration that is immediately enforced. The relationships between all objects can be observed in the Service Composer Canvas.

[Screenshot: the Service Composer Canvas]

Each block represents a security group with its associated security policies, Guest Introspection services, firewall rules, network introspection services, and the virtual machines belonging to the group or included security groups.

NSX Service Composer offers a way to automate the consumption of security services and their mapping to virtual machines using logical policy, and it makes your life easier because you can rely on it to manage your firewall policies: security groups let you statically or dynamically include or exclude objects in a container, which can then be used as a source or destination in a firewall rule.

Firewall rules defined in security policies are automatically adapted (based on the association between security groups and policies) and integrated into NSX Distributed Firewall (or any third-party firewall). As virtual machines are automatically added and removed from security groups during their lifecycle, the corresponding firewall rules are enforced when needed. With this association, your imagination is your only limit!

In the screenshot below, firewall rules are applied via security policies to a three-tier application; since the security group membership is dynamic, there is no need to modify firewall rules when virtual machines are added to the application (in order to scale-out, for example).

[Screenshot: firewall rules applied via security policies to a three-tier application]

Provision, Apply, Automate

Service Composer is one of the most powerful features of NSX: it simplifies the application of security services to virtual machines within the software-defined data center, and allows administrators to have more control over―and visibility into―security.

Service Composer accomplishes this by providing a three-step workflow:

  • Provision the services to be applied:
    • Register the third-party service with NSX Manager (if you are not using the out-of-the-box security services available).
    • Deploy the service by installing, if necessary, the components required for that service to operate on each ESXi host (“Networking & Security > Installation > Service Deployments” tab).
  • Apply and visualize the security services to defined containers by applying the security policies to security groups.
  • Automate the application of these services by defining rules and criteria that specify the circumstances under which each service will be applied to a given virtual machine.

Possibilities around the NSX Service Composer are tremendous; you can create an almost infinite number of associations between security groups and security policies to efficiently automate how security services are consumed in the software-defined data center.

You can, for example, combine Service Composer capabilities with VMware vRealize Automation Center to achieve secure, automated, on-demand micro-segmentation. Another example is a quarantine workflow, where, after a virus is detected, a virtual machine is automatically and immediately moved to a quarantine security group whose security policies take action, such as remediation, strengthened firewall rules and traffic steering.
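A hedged sketch of the quarantine idea: the remediation workflow attaches a security tag to the infected virtual machine, and a security group whose dynamic membership matches that tag pulls the VM under its quarantine policies. The securitytags endpoint reflects my reading of the NSX-v 6.x API and should be confirmed for your release; the manager address, credentials, tag ID and VM ID are all assumptions.

    # Hedged sketch: tag a VM so that a quarantine security group with a
    # security-tag dynamic criterion picks it up. Verify the endpoint against
    # the NSX API guide for your release; all identifiers below are assumptions.
    import requests

    NSX_MANAGER = "https://nsx-manager.example.com"   # assumption
    AUTH = ("admin", "changeme")                      # assumption
    TAG_ID = "securitytag-10"                         # assumption: ID of the quarantine tag
    VM_ID = "vm-1234"                                 # assumption: managed object ID of the infected VM

    resp = requests.put(
        f"{NSX_MANAGER}/api/2.0/services/securitytags/tag/{TAG_ID}/vm/{VM_ID}",
        auth=AUTH,
        verify=False,  # lab only
    )
    resp.raise_for_status()
    print("VM tagged; quarantine group membership and its policies now apply")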


Romain Decker is a Technical Solutions Architect in the Professional Services Engineering team and is based in France.

The Complexity of Data Center Blueprinting

By Gabor Karakas

Data centers are wildly complicated in nature and grow in an organic fashion, which fundamentally means that very few people in the organization understand the IT landscape in its entirety. Part of the problem is that these complex ecosystems are built up over long periods of time (5–10 years) with very little documentation or global oversight; therefore, siloed IT teams have the freedom to operate according to different standards – if there are any. Oftentimes new contractors or external providers replace these IT teams, and knowledge transfer rarely happens, so the new workforce might not understand every aspect of the technology they are tasked to operate, and this creates key issues as well.


Migration or consolidation activities can be initiated for a number of reasons:

  • Reduction of the complexity in infrastructure by consolidating multiple data centers into a single larger one.
  • The organization simply outgrew the IT infrastructure, and moving the current hardware into a larger data center makes more sense from a business or technological perspective.
  • Contract renegotiations fail and significant cost reductions can result from moving to another provider.
  • The business requires higher resiliency; by moving some of the hardware to a new data center and creating fail-proof links in between the workloads, disasters can be avoided and service uptime can be significantly improved.

When the decision is made to move or consolidate the data center for business or technical reasons, a project is often kicked off with very little understanding of the moving parts that are about to change. Most organizations realize this a couple of months into the project and usually find that the best way forward is to ask for external help. This help usually comes from the joint efforts of multiple software and consultancy firms to deliver a migration plan that identifies and prioritizes workloads, and creates a blueprint of all their vital internal and external dependencies.

A migration plan is meant to contain at least the following details of identified and prioritized groups of physical or virtual workloads:

  • The applications they contain or serve
  • Core dependencies (such as NTP, DNS, LDAP, Anti-virus, etc.)
  • Capacity and usage trends
  • Contact details for responsible staff members
  • Any special requirements that can be obtained either via discovery or by interviewing the right people


In reality, creating such a plan is very challenging, and there can be many pitfalls. The following are common problems that can surface during development of a migration plan:

Technical Problems

It is vital that communication is strong between all involved, that technical details are not overlooked, and that all information sources are identified correctly. Issues can arise in areas such as:

  • Choosing the right tool (VMware Application Dependency Planner as an example)
  • Finding the right team to implement and monitor the solution
  • Reporting on the right information, which can prove difficult

Technical and human information sources are equally important, as automated discovery methods can only identify certain patterns; people need to put the extra intelligence behind this information. It is also important to note that a discovery process can take months, during which time the discovery infrastructure needs to function at its best, without interruption to data flows or appliances.

Miscommunication

As previously stated, team communication is vital. There is a constant need to:

  • Verify discovery data and tweak technical parameters
  • Involve the application team in frequent validation exercises

It is important to accurately identify and document deliverables before starting a project, as misalignment with these goals can cause delays or failures further down the timeline.

Politics

With major changes in the IT landscape, there are also Human Resource-related matters to handle. Depending on the nature of the project, there are potential issues:

  • The organization’s move to another data center might abandon previous suppliers
  • IT staff might be left without a job
  • The project might be part of an outsourcing effort that moves certain operations or IT support outside the organization


Some of these people will need to help in the execution of the project, so it is crucial to treat them with respect and to make sure sensitive information is closely guarded. The blueprinting team members will probably know what the outcome of the project will mean for suppliers and for the customer’s IT team. If this information is released prematurely, the project can be compromised, and valuable information and time can be lost.

Blueprint Example

When delivering a migration blueprint, each customer will have different demands, but in most cases, the basic request will be the same: to provide a set of documents that contain all servers and applications, and show how they are dependent on each other. Most of the time, customers will also ask for visual maps of these connections, and it is the consultant’s job to make sure these demands are reasonable. There is only so much that can be visualized in a map that is understandable, so it is best to limit the number of servers and connections to about 10–20 per map. The following complex image is an example of just a single server with multiple running services discovered.

 


Figure 1. A server and its services visualized in VMware’s ADP discovery tool

Beyond putting individual applications and servers on an automated map, there can also be demand for visualizing application-to-application connectivity, which will likely involve manipulating the raw discovery data. Some dependencies can be visualized, but others might require a text-based presentation.

The following is an example of a fictional setup, where multiple applications talk to each other―just like in the real world. Both visual and text-based representations are possible, and it is easy to see that for overview and presentation purposes, a visual map is more suitable. However, when planning the actual migration, the text-based method might prove more useful.


Figure 2. Application dependency map: visual representation


Figure 3. Application dependency map: raw discovery data


Figure 4. Application dependency map: raw data in pivot table
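For readers who prefer to see the data manipulation spelled out, the fragment below is a purely illustrative sketch (fictional hostnames and applications, not ADP output) of turning raw server-to-server discovery records into the kind of application-to-application list shown above.

    # Illustrative sketch: derive application-to-application dependencies from
    # raw server-to-server connection records. All data here is fictional.
    from collections import defaultdict

    # Raw discovery records: (source server, destination server, destination port)
    connections = [
        ("web01", "app01", 8080),
        ("web02", "app01", 8080),
        ("app01", "db01", 1433),
        ("app01", "ldap01", 389),
    ]

    # Server-to-application mapping, normally built from interviews and a CMDB.
    server_to_app = {
        "web01": "Webshop", "web02": "Webshop", "app01": "Webshop",
        "db01": "Webshop DB", "ldap01": "Corporate LDAP",
    }

    app_deps = defaultdict(set)
    for src, dst, port in connections:
        src_app, dst_app = server_to_app[src], server_to_app[dst]
        if src_app != dst_app:            # ignore traffic inside one application
            app_deps[src_app].add((dst_app, port))

    for app, deps in app_deps.items():
        for dst_app, port in sorted(deps):
            print(f"{app} -> {dst_app} (tcp/{port})")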

It is easy to see that a blueprinting project can be a very challenging exercise with multiple caveats and pitfalls, so careful planning and execution are required, along with strong communication between everyone involved.

This is the first in a series of articles that will give a detailed overview of data center blueprinting, together with implementation and reporting methods.


Gabor Karakas is a Technical Solutions Architect in the Professional Services Engineering team and is based in the San Francisco Bay Area.

SDDC is the Future

By Michael Francis

VMware’s Transformative Growth

Over the last eight years at VMware I have observed so much change, and in my mind it has been transformative change. I think about my 20 years in IT and the changes I have seen, and I feel the emergence of virtualization of x86 hardware will be looked upon as one of the most important catalysts for change in information technology history. It has changed the speed of service delivery and the cost of that delivery, and in doing so has enabled innovative business models for computing, such as cloud computing.

I have been part of the transformation of our company in these eight years; we’ve grown from being a single-product infrastructure company to what we are today – an application platform company. Virtualization of compute is now mainstream. We have broadened virtualization to storage and networking, bringing the benefits realized for compute to these new areas. I don’t believe this is incremental or merely evolutionary value. I think this broader virtualization―coupled with intelligent, business policy-aware management systems―will be so disruptive to the industry that it may well be considered a separate milestone, on par with x86 virtualization.

Where We Are Now

Here is why I think the SDDC is significant:

  • The software-defined data center (SDDC) brings balance back to the ongoing discussion between the use of public and private computing.
  • It enables agility, reduced operational and capital costs, lower security risk, and a new level of management visibility across the stack.
  • SDDC not only modifies the operational and consumption model for computing infrastructure, but it also modifies the way computing infrastructure is designed and built.
  • Infrastructure is now a combination of software and configuration. It can be programmatically generated based on a specification; hyper-converged infrastructure is one example of this.

As a principal architect on the VMware team responsible for generating tools and intellectual property that help our Professional Services organization and partners deliver VMware SDDC solutions, I find the last point especially interesting, and it is the one I want to spend some time on.

How We Started

As an infrastructure-focused project resource and lead over the past two decades, I have become very familiar with developing design documents and ‘as-built’ documentation. I remember rolling out Microsoft Windows NT 4.0 in 1996 on CDs. There was a guide that showed me what to click and in what order to do certain steps. There was a lot of manual effort, opportunity for human error, inconsistency between builds, and a lot of potential for the built item to vary significantly from the design specification.

Later, in 2000, I was a technical lead for a systems integrator; we had standard design document templates and ‘as-built’ document templates, and consistency and standardization had become very important. A few of us worked heavily with VBScript, and we started scripting the creation of Active Directory configurations such as Sites and Services definitions, OU structures and the like. We dreamed of the day when we could draw a design diagram, click ‘build’, and have scripts build what was in the specification. But we couldn’t get there: the amount of work to develop the scripts, maintain them, and modify them as elements changed was too great. And that was when we were focused on just the operating stack and a single vendor’s back-office suite; imagine trying to automate a heterogeneous infrastructure platform.

It’s All About Automated Design

Today we can leverage the SDDC as an application programming interface (API) that not only abstracts the hardware elements below and automates the application stack above, but also abstracts the APIs of ecosystem partners.

This means I can write to one API to instantiate a system of elements from many vendors at all different layers of the stack, all based on a design specification.

Our dream in the year 2000 is something customers can achieve in their data centers with SDDC today. To be clear – I am not referring to just configuring the services offered by the SDDC to support an application, but also to standing up the SDDC itself. The reality is, we can now have a hyper-converged deployment experience where the playbook of the deployment is driven by a consultant-developed design specification.
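As a simplified illustration of the idea, and emphatically not the SDT itself, a design specification captured as data can be walked by a small deployment driver; the component names and the deploy() placeholder below are invented for the example.

    # Simplified sketch: a design specification expressed as data, consumed by a
    # deployment driver. Component names and deploy() are illustrative only; a
    # real tool would call the product installers and APIs at this point.
    design_spec = {
        "management_cluster": {"hosts": 4, "vsan": True},
        "components": [
            {"name": "vcenter",     "size": "medium"},
            {"name": "nsx_manager", "size": "small"},
            {"name": "vrops",       "nodes": 2},
        ],
    }

    def deploy(component: dict) -> None:
        # Placeholder for the real provisioning calls.
        print(f"Deploying {component['name']} with settings {component}")

    for component in design_spec["components"]:
        deploy(component)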

For instance, our partners and our Professional Services organization have access to what we refer to as the SDDC Deployment Tool (an imaginative name, I know), or SDT for short. This tool can automate the deployment and configuration of all the components that make up the software-defined data center. The following screenshot illustrates this:

[Screenshot: the SDDC Deployment Tool]

Today this tool deploys the SDDC elements in a single use case configuration.

In VMware’s Professional Services Engineering group we have created a design specification for an SDDC platform. It is modular and completely instantiated in software. Our Professional Services Consultants and Partners can use this intellectual property to design and build the SDDC.

What Comes Next?

I believe our next step is to architect our solution design artifacts so the SDDC itself can be described in a format that allows software―like SDT―to automatically provision and configure the hardware platform, the SDDC software fabric, and the services of the SDDC to the point where it is ready for consumption.

A consultant could design the specification of the SDDC infrastructure layer and have that design deployed in a similar way to hyper-converged infrastructure―but allowing the customer to choose the hardware platform.

As I mentioned at the beginning, the SDDC is not just about technology, consumption and operations: it provides the basis for a transformation in delivery. To me a good analogy right now is the 3D printer. The SDDC itself is like the plastic that can be molded into anything; the 3D printer is the SDDC deployment tool, and our service kits would represent the electronic blueprint the printer reads to then build up the layers of the SDDC solution for delivery.

This will create better and more predictable outcomes and also greater efficiency in delivering the SDDC solutions to our customers as we treat our design artifacts as part of the SDDC code.


Michael Francis is a Principal Systems Engineer at VMware, based in Brisbane.

Working with VMware Just Gets Better

By Ford Donald, Principal Architect, GTS PSE, VMware

Imagine someone gives you and a group of friends a box of nuts and bolts and a few pieces of metal and tells you to build a model skyscraper. You might start putting the pieces together and end up with a beautiful model, but it probably won’t be the exact result that any of you imagined at the beginning. Now imagine if someone hands you that same box, along with a blueprint and an illustration of the finished product. In this scenario, you all work together to a prescribed end goal, with few questions or disagreements along the way. Think about this in the context of a large technical engagement, for example a software-defined data center (SDDC) implementation. Is it preferable to make it up as you go along, or to start with a vision for success and achieve it through a systematic approach?

Here at VMware, we’re enhancing the way we engage with customers by providing prescriptive guidance, a foundation for success, and a predictable outcome through the SDDC Assess, Design and Deploy Service. As our product line has matured, our consulting approach is maturing along with it. In the past, we have excelled at the “discovery” approach, where we uncover the solution through discussion, and every customized outcome meets a unique customer need. We’ve built thousands of strong skyscrapers that way, and the skill for discovering the right solution remains critical within every customer engagement. Today we bring a common starting point that can be scaled to any size of organization and adapted up the stack or with snap-ins according to customer preference or need. A core implementation brings a number of benefits to the process, and to the end result.

A modular technical solution

Think of the starting point as a blueprint for the well-done data center. With our approach, the core elements of SDDC come standard, including vSphere, vCenter Operations, vCenter Orchestrator, and software-defined networking through vCNS. This is the clockwork by which the SDDC from VMware is best established, and it lays the foundation for further maturity evolutions to Infrastructure Service and Application Service. The core “SDDC Ready” layer is the default, providing everything you need to be successful in the data center, regardless of whether you adopt the other layers. Beyond that, to meet the unique needs of customers, we developed “snap-ins” as enhancements or upgrades to the core model, which include many of our desirable, but not necessarily included-by-default, assets such as VSAN and NSX.

The Infrastructure Service layer builds on the SDDC by establishing cloud-based metaphors via vCloud Automation Center and other requirements for cloud readiness, including a service portal, catalog-based consumption, and reduction of administrative overhead. The Application Service layer includes vCloud Application Director and elevates the Infrastructure layer with application deployment, blueprinting and standardization.

From our experience, customers demand flexibility and customization. In order to meet that need, we built a full menu of Snap-ins. These snap-ins allow customers to choose any number of options from software-defined storage, NSX, compliance, business continuity & disaster recovery (BCDR), hybrid cloud capabilities and financial/cost management. Snap-ins are elemental to the solution, and can be added as needed according to the customer’s desired end result.

Operational Transformation Support

Once you’ve adopted a cloud computing model, you may want to consider organizational enhancements that take advantage of the efficiency gained by an SDDC architecture. As we work with our customers in designing the technical elements, we also consult with our customers on the operational processes. Changing from high administrative overhead to low overhead, introducing new roles, defining what type of consumer model you want to implement – our consultants help you plan and design your optimal organization to support the cloud model.

The beauty of this approach shines in its ability to serve both green field and brown field projects. In the green field approach, where a customer wants the consultants to take the reins and implement top to bottom, the approach serves as a blueprint. In a brown field model, where the customer has input and opinions and desires integration and customization, the approach can be adapted to the customer’s environment, relative to the original blueprint.

So whether you’re building your skyscraper from the ground up, or remodeling an existing tower, the new SDDC Assess, Design and Deploy Service provides an adaptable model, with a great starting point that will help you get the best out of your investment.

Stay tuned for an upcoming post that gives you a look under the hood of the work stream process for implementing the technical solution.


Ford Donald is a Principal Architect and member of Professional Services Engineering (PSE), part of the Global Technical Solutions (GTS) team, and a seven-year veteran of VMware. Prior to PSE, Ford spent three years as a pre-sales cloud computing specialist focusing on very large/complex virtualization deployments, including the VMware sales cloud known as vSEL. Ford also served on the core team for VMworld Labs and as a field SE.

 

Go for the Gold: See vSphere with Operations Management In Action

If there’s anything we’ve learned from watching the recent Winter Olympics, it’s that world-class athletes are focused, practice endless hours, and need to be both efficient and agile to win gold.

When it comes to data centers, what sets a world-class data center apart is the software. A software-defined data center (SDDC) provides the efficiency and agility for IT to meet exploding business expectations so your business can win gold.

The VMware exclusive seminar is here! Join us to learn about the latest in SDDC.

Now through March 19, VMware TechTalk Live is hosting free, interactive half-day workshops in 32 cities across the U.S. and Canada. Attendees will get to see a live demo of vSphere with Operations Management.

The workshops will also provide a detailed overview of the key components of the SDDC architecture, as well as results of VMware customer surveys explaining how the SDDC is actually being implemented today.

Check out the TechTalk Live event information to find the location closest to you and to reserve your spot.

SDDC + SAP = CapEx/OpEx Savings

By Girish Manmadkar, an SAP Virtualization Architect at VMware

Earlier this month, my colleague David Gallant wrote about architecting a software-defined data center for SAP and other business-critical applications. I’d like to further explore how SAP fits into the software-defined data center (SDDC) and, specifically, how to optimize it for CapEx and OpEx savings.

A key to remember is that the SDDC is not a single technology that you purchase and install—it is a use case, a strategy, a mind shift. And in that way, it is also a journey that will unfold in stages and should be planned in that way. I’ve outlined the three foundational steps below.

SDDC 1.0

Most of the customers that I work with are well along in this stage, moving their current non-x86 SAP workloads toward a VMware-based x86 environment.

During this process, numerous milestones can be delivered to the business, in particular an immediate reduction in CapEx. This benefit is achieved by starting to move non-x86 or current physical x86 workloads to the virtual x86 platform. Understandably, customers tend to approach this transition with caution, so we often start with the low-hanging fruit: non-production and/or development SAP systems.

The next step you can take is to introduce automation. Automation comes into play in two places: at the infrastructure layer, achieved using VMware vCloud Automation Center and Orchestration; and at the application layer, delivered using SAP’s Landscape Virtualization Manager.

During this phase it is best to implement vSphere features, including Auto Deploy, host profiles, and OS templates, in order to automate vSphere and virtual machine provisioning in the environment.
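To give a flavor of what this provisioning automation looks like, the sketch below uses the open source pyVmomi bindings to clone an SAP application server from an OS template. The inventory names, credentials and sizing are assumptions, and in practice this logic would live inside vCloud Automation Center or Orchestrator workflows rather than a hand-run script.

    # Hedged sketch: clone a VM from a template with pyVmomi. All names and
    # credentials are assumptions; error handling and guest customization are
    # omitted for brevity.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab only; validate certificates in production
    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="changeme", sslContext=ctx)
    content = si.RetrieveContent()

    def find_by_name(vimtype, name):
        """Return the first inventory object of the given type with the given name."""
        view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
        try:
            return next(obj for obj in view.view if obj.name == name)
        finally:
            view.DestroyView()

    template = find_by_name(vim.VirtualMachine, "tpl-sles-for-sap")        # assumption
    cluster = find_by_name(vim.ClusterComputeResource, "SAP-Cluster")      # assumption
    datacenter = find_by_name(vim.Datacenter, "DC01")                      # assumption

    clone_spec = vim.vm.CloneSpec(
        location=vim.vm.RelocateSpec(pool=cluster.resourcePool),
        powerOn=True,
    )
    task = template.Clone(folder=datacenter.vmFolder, name="sap-app-01", spec=clone_spec)
    print("Clone task submitted:", task.info.key)
    Disconnect(si)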

Often it is a good idea at this time to start a parallel project around storage. You can work with your storage and backup teams to enhance current architectures by enabling storage technologies such as deduplication, vSphere Storage I/O Control, and other storage array plug-ins.

We also recommend minimizing agents in the guest operating system, such as agents used for backup and/or anti-virus. The team should start putting together a new architecture that moves such agents from the guest OS to the vSphere hosts to reduce complexity and improve performance. The storage and network teams should also look to implement an architecture that will support a virtualized disaster recovery solution. By planning ahead now, teams can avoid rework later.

During this phase, the team not only migrates SAP application servers to the vSphere platform but also shows business value with CapEx reductions and value-added flexibility to scale out SAP application server capacity on demand.

SDDC 2.0

Once this first stage goes into the operations cycle, it lays the groundwork for various aspects of the SDDC’s second stage. The next shift is toward a converged datacenter or common virtualization framework to deploy a software-defined lifecycle for SAP. This allows better monitoring, migration to the cloud, chargeback, and security.

This is also the phase where you want to virtualize your SAP central instances, or ASCS instances, and database servers. The value here is the removal of a reliance on complex, physical clustered environments by transitioning instead to VMware’s high-availability features. These include fault tolerance (FT), applied where the SAP sizing exercise for the ASCS calls for it and focused on meeting the business’s SLAs.

SDDC 3.0

Once the SDDC 2.0 is in production, it is a good time to start defining other aspects of SDDC, such as Infrastructure-as-a-Service, Platform-as-a-Service, Storage-as-a-Service, and Disaster-Recovery-as-a-Service.

Keep an eye out for our follow-up post fleshing out the processes and benefits of these later stages.


Girish Manmadkar is a veteran VMware SAP Virtualization Architect with extensive knowledge and hands-on experience with various SAP and VMware products, including various databases. He focuses on SAP migrations, architecture designs, and implementation, including disaster recovery.

The SDDC Seems Cool … But What Do I Do with It?

By David Gallant, VMware Professional Services Consultant

Lately I’ve been receiving requests from customers to talk to them about the software-defined data center (SDDC). So I start to explain software-defined networking, software-defined storage, automated provisioning, and self-service portals.

And that’s when I notice the customer looking excited, but also slightly confused.

Last week at SAP TechEd 2013, I was in the middle of just such a talk when I decided to stop and ask the customer why he looked puzzled.

His response? “That’s great, but what do I do with all that SDDC stuff?”

That’s when the light bulb came on. He was right to question me—why build a software-defined data center if you have no clue what you’re going to do with it?

To really harvest your investment in the SDDC, you need to be building toward a specific set of goals. We don’t build data centers without a purpose, and that purpose for the SDDC, as it has always been, is the application.

In most cases the best data centers have been purpose-designed and built around the organization’s business-critical applications; for instance SAP, Oracle, or Microsoft applications.

I’ll concentrate for now on SAP—if you can architect an SDDC for SAP, you can roll those concepts over to pretty much any other application.

Developing Defense in Depth for a Software-Defined Data Center

By Jared Skinner, Cloud Management Sales Director – West

The software-defined data center (SDDC) is on the tip of a lot of tongues these days, but the fact is, it’s not yet an end-point solution but rather a constantly evolving strategy. For that reason, I meet many customers who are excited about its potential but still wary of the unknowns—in particular around security.

As we abstract different layers of the technological stack, namely storage and network, we must continue to manage security across the stack through industry best practices and/or regulatory standards. Securing the SDDC begins by reinventing Defense in Depth.

What Is “Defense in Depth”?

I think of Defense in Depth like an onion, where the sweetest part is the center, protected under many layers of security.