
Tag Archives: SDDC

Automating Security Policy Enforcement with NSX Service Composer

By Romain Decker

Over the past decade, IT organizations have gained significant benefits as a direct result of compute virtualization, which has reduced physical complexity, increased operational efficiency, and allowed underlying resources to be dynamically re-purposed to quickly and optimally meet the needs of an increasingly dynamic business.

In dynamic cloud data centers, application workloads are provisioned, moved, and decommissioned on demand. In legacy network operating models, however, network provisioning is slow and workload mobility is limited. While compute virtualization has become the new norm, network and security models have remained largely unchanged in the data center.

NSX is VMware’s solution to virtualize network and security for your software-defined data center. NSX network virtualization decouples the network from hardware and places it into a software abstraction layer, thus delivering for networking what VMware has already delivered for compute and storage.

Inside NSX, the Service Composer is a built-in tool that defines a new model for consuming network and security services; it allows you to provision and assign firewall policies and security services to applications in real time in a virtual infrastructure. Security policies are assigned to groups of virtual machines, and the policy is automatically applied to new virtual machines as they are added to the group.

[Image: RDecker 1]

From a practical point of view, NSX Service Composer is a configuration interface that gives administrators a consistent and centralized way to provision, apply and automate network security services like anti-virus/malware protection, IPS, DLP, firewall rules, etc. Those services can be available natively in NSX or enhanced by third-party solutions.

With NSX Service Composer, security services can be consumed more efficiently in the software-defined data center. Security can be easily organized by dissociating the assets you want to protect from the policies that define how you want to protect them.

[Image: RDecker 2]

Security Groups

A security group is a powerful construct that allows static or dynamic grouping based on inclusion and exclusion of objects such as virtual machines, vNICs, vSphere clusters, logical switches, and so on.

If a security group is static, the protected assets are a fixed set of specific objects, whereas dynamic membership can be defined by one or more criteria, such as vCenter containers (data centers, port groups and clusters), security tags, Active Directory groups, regular expressions on virtual machine names, and so on. When the criteria are met, virtual machines are automatically added to the security group.

In the example below, any virtual machine with a name containing “web”―AND running in “Capacity Cluster A”―will belong to this security group.
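
To make the AND semantics concrete, here is a minimal, self-contained Python sketch of how such dynamic-membership criteria could be evaluated against an inventory. It is purely illustrative: the VM records and helper names are invented for the example, and nothing here uses the NSX API.

# Illustrative only: a toy model of dynamic security group membership.
# The inventory records and criteria are hypothetical; NSX evaluates
# equivalent criteria inside Service Composer.
inventory = [
    {"name": "web-01", "cluster": "Capacity Cluster A"},
    {"name": "web-02", "cluster": "Capacity Cluster B"},
    {"name": "db-01",  "cluster": "Capacity Cluster A"},
]

criteria = [
    lambda vm: "web" in vm["name"],                    # VM name contains "web"
    lambda vm: vm["cluster"] == "Capacity Cluster A",  # AND runs in Capacity Cluster A
]

def members(inventory, criteria):
    """Return the VMs that satisfy every criterion (logical AND)."""
    return [vm for vm in inventory if all(rule(vm) for rule in criteria)]

print([vm["name"] for vm in members(inventory, criteria)])  # ['web-01']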

[Image: RDecker 3]

 

Security group considerations:

  • Security groups can have multiple security policies assigned to them.
  • A virtual machine can live in multiple security groups at the same time.
  • Security groups can be nested inside other security groups.
  • You can include AND exclude objects from security groups.
  • Security group membership can change constantly.
  • If a virtual machine belongs to multiple security groups, the services applied to it depend on the precedence of the security policy mapped to the security groups.

Security Policies

A security policy is a collection of security services and/or firewall rules. It can contain the following:

  • Guest Introspection services (applies to virtual machines) – Data Security or third-party solution provider services such as anti-virus or vulnerability management services.
  • Distributed Firewall rules (applies to vNIC) – Rules that define the traffic to be allowed to/from/within the security group.
  • Network introspection services (applies to virtual machines) – Services that monitor your network such as IPS and network forensics.

Security services such as vulnerability management, IDS/IPS or next-generation firewalling can be inserted into the traffic flow and chained together.

Security policies are applied according to their respective weight: a security policy with a higher weight has a higher priority. By default, a new policy is assigned the highest weight so it is at the top of the table (but you can manually modify the default suggested weight to change the order).

Multiple security policies may be applied to a virtual machine because either (1) the security group that contains the virtual machine is associated with multiple policies, or (2) the virtual machine is part of multiple security groups associated with different policies. If there is a conflict between the services grouped with each policy, the weight of the policies determines which services are applied to the virtual machine.

For example: If policy A blocks incoming HTTP and has a weight value of 1,000, while policy B allows incoming HTTP with a weight value of 2,000, incoming HTTP traffic will be allowed because policy B has a higher weight.
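
The precedence rule itself can be expressed in a few lines. Below is a minimal Python sketch using a simplified rule model; the policy names, weights, and verdicts mirror the example above, and nothing here calls the NSX API.

# Toy model of policy precedence: the matching rule from the
# highest-weight policy wins. Policies A and B mirror the example above.
policies = [
    {"name": "A", "weight": 1000, "rules": {"HTTP-in": "block"}},
    {"name": "B", "weight": 2000, "rules": {"HTTP-in": "allow"}},
]

def effective_verdict(policies, traffic):
    """Return the verdict from the highest-weight policy that has a rule for this traffic."""
    for policy in sorted(policies, key=lambda p: p["weight"], reverse=True):
        if traffic in policy["rules"]:
            return policy["rules"][traffic]
    return "no-match"

print(effective_verdict(policies, "HTTP-in"))  # allow -- policy B has the higher weight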

The mapping between security groups and security policies results in a running configuration that is immediately enforced. The relationships between all objects can be observed in the Service Composer Canvas.

[Image: RDecker 4]

 

Each block represents a security group with its associated security policies, Guest Introspection services, firewall rules, network introspection services, and the virtual machines belonging to the group or included security groups.

NSX Service Composer offers a way to automate the consumption of security services and their mapping to virtual machines using a logical policy, and it makes your life easier because you can rely on it to manage your firewall policies. Security groups allow you to statically or dynamically include or exclude objects in a container, which can then be used as a source or destination in a firewall rule.

Firewall rules defined in security policies are automatically adapted (based on the association between security groups and policies) and integrated into NSX Distributed Firewall (or any third-party firewall). As virtual machines are automatically added and removed from security groups during their lifecycle, the corresponding firewall rules are enforced when needed. With this association, your imagination is your only limit!
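
The key point is that rules reference groups rather than addresses, so membership changes never require a rule edit. Here is a small, purely illustrative Python sketch of that idea; the rule and group structures are invented for the example and do not reflect the Distributed Firewall data model.

# Illustrative only: rules reference groups by name, so scaling out a tier
# (adding a VM to a group) changes the effective rule set without touching the rule.
groups = {
    "SG-Web": {"web-01", "web-02"},
    "SG-App": {"app-01"},
}

rule = {"src": "SG-Web", "dst": "SG-App", "service": "TCP/8443", "action": "allow"}

def expand(rule, groups):
    """Expand a group-based rule into the VM pairs it currently covers."""
    return [(s, d) for s in groups[rule["src"]] for d in groups[rule["dst"]]]

print(len(expand(rule, groups)))   # 2 pairs
groups["SG-Web"].add("web-03")     # scale out the web tier
print(len(expand(rule, groups)))   # 3 pairs -- the rule itself never changed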

In the screenshot below, firewall rules are applied via security policies to a three-tier application; since the security group membership is dynamic, there is no need to modify firewall rules when virtual machines are added to the application (in order to scale-out, for example).

[Image: RDecker 5]

 

Provision, Apply, Automate

Service Composer is one of the most powerful features of NSX: it simplifies the application of security services to virtual machines within the software-defined data center, and allows administrators to have more control over―and visibility into―security.

Service Composer accomplishes this by providing a three-step workflow:

  1. Provision the services to be applied:
     • Register the third-party service with NSX Manager (if you are not using the out-of-the-box security services).
     • Deploy the service by installing, if necessary, the components it requires on each ESXi host (“Networking & Security > Installation > Service Deployments” tab).
  2. Apply and visualize the security services by mapping security policies to the defined containers (security groups).
  3. Automate the application of these services by defining rules and criteria that specify the circumstances under which each service will be applied to a given virtual machine.

The possibilities around NSX Service Composer are tremendous; you can create an almost infinite number of associations between security groups and security policies to efficiently automate how security services are consumed in the software-defined data center.

You can, for example, combine Service Composer capabilities with VMware vRealize Automation Center to achieve secure, automated, on-demand micro-segmentation. Another example is a quarantine workflow: after a virus is detected, a virtual machine is automatically and immediately moved to a quarantine security group, whose security policies can then take action, such as remediation, strengthened firewall rules, and traffic steering.


Romain Decker is a Technical Solutions Architect in the Professional Services Engineering team and is based in France.

The Complexity of Data Center Blueprinting

By Gabor Karakas

Data centers are wildly complicated in nature and grow in an organic fashion, which fundamentally means that very few people in the organization understand the IT landscape in its entirety. Part of the problem is that these complex ecosystems are built up over long periods of time (5–10 years) with very little documentation or global oversight; therefore, siloed IT teams have the freedom to operate according to different standards – if there are any. Oftentimes new contractors or external providers replace these IT teams, and knowledge transfer rarely happens, so the new workforce might not understand every aspect of the technology they are tasked to operate, and this creates key issues as well.

[Image: GKarakas 1]

 

Migration or consolidation activities can be initiated due to a number of reasons:

  • Reduction of the complexity in infrastructure by consolidating multiple data centers into a single larger one.
  • The organization simply outgrew the IT infrastructure, and moving the current hardware into a larger data center makes more sense from a business or technological perspective.
  • Contract renegotiations fail and significant cost reductions can result from moving to another provider.
  • The business requires higher resiliency; by moving some of the hardware to a new data center and creating fail-proof links in between the workloads, disasters can be avoided and service uptime can be significantly improved.

When the decision is made to move or consolidate the data center for business or technical reasons, a project is often kicked off with very little understanding of the moving parts involved. Most organizations realize this a couple of months into the project, and usually find the best way forward is to ask for external help. This help usually comes from the joint efforts of multiple software and consultancy firms to deliver a migration plan that identifies and prioritizes workloads, and creates a blueprint of all their vital internal and external dependencies.

A migration plan is meant to contain at least the following details of identified and prioritized groups of physical or virtual workloads:

  • The applications they contain or serve
  • Core dependencies (such as NTP, DNS, LDAP, Anti-virus, etc.)
  • Capacity and usage trends
  • Contact details for responsible staff members
  • Any special requirements that can be obtained either via discovery or by interviewing the right people

[Image: GKarakas 2]

In reality, creating such a plan is very challenging, and there can be many pitfalls. The following are common problems that can surface during development of a migration plan:

Technical Problems

It is vital that communication is strong between all involved, technical details are not overlooked, and all information sources are identified correctly. Issues can develop such as:

  • Choosing the right tool (VMware Application Dependency Planner, for example)
  • Finding the right team to implement and monitor the solution
  • Reporting on the right information, which can prove difficult

Technical and human information sources are equally important, as automated discovery methods can only identify certain patterns; people need to put the extra intelligence behind this information. It is also important to note that a discovery process can take months, during which time the discovery infrastructure needs to function at its best, without interruption to data flows or appliances.

Miscommunication

As previously stated, team communication is vital. There is a constant need to:

  • Verify discovery data and tweak technical parameters
  • Involve the application team in frequent validation exercises

It is important to accurately identify and document deliverables before starting a project, as misalignment with these goals can cause delays or failures further down the timeline.

Politics

With major changes in the IT landscape, there are also Human Resource-related matters to handle. Depending on the nature of the project, there are potential issues:

  • The organization’s move to another data center might abandon previous suppliers
  • IT staff might be left without a job
  • It can be part of an outsourcing project that moves certain operations or IT support outside the organization

[Image: GKarakas 3]

Some of these people will need to help in the execution of the project, so it is crucial to treat them with respect and to make sure sensitive information is closely guarded. The blueprinting team members will probably know what the outcome of the project will bring for suppliers and the customer’s IT team. If some of this information is released, the project can be compromised with valuable information and time lost.

Blueprint Example

When delivering a migration blueprint, each customer will have different demands, but in most cases, the basic request will be the same: to provide a set of documents that contain all servers and applications, and show how they are dependent on each other. Most of the time, customers will also ask for visual maps of these connections, and it is the consultant’s job to make sure these demands are reasonable. There is only so much that can be visualized in a map that is understandable, so it is best to limit the number of servers and connections to about 10–20 per map. The following complex image is an example of just a single server with multiple running services discovered.

 


Figure 1. A server and its services visualized in VMware’s ADP discovery tool

Beyond putting individual applications and servers on an automated map, there can also be demand for visualizing application-to-application connectivity, and this will likely involve manipulating data correctly. Some dependencies can be visualized, but others might require a text-based presentation.

The following is an example of a fictional setup, where multiple applications talk to each other―just like in the real world. Both visual and text-based representations are possible, and it is easy to see that for overview and presentation purposes, a visual map is more suitable. However, when planning the actual migration, the text-based method might prove more useful.


Figure 2. Application dependency map: visual representation


Figure 3. Application dependency map: raw discovery data


Figure 4. Application dependency map: raw data in pivot table
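
As a minimal illustration of the raw-data-to-pivot step shown in Figures 3 and 4, the Python sketch below aggregates hypothetical discovery records (source application, destination application, port) into an application-to-application summary. The record format is invented for the example and is not ADP's output format.

# Illustrative only: pivot raw connection records into an app-to-app dependency summary.
from collections import defaultdict

# Hypothetical raw discovery records: (source app, destination app, port)
records = [
    ("WebShop",   "OrderDB",   1521),
    ("WebShop",   "OrderDB",   1521),
    ("WebShop",   "PaymentGW", 8443),
    ("Reporting", "OrderDB",   1521),
]

pivot = defaultdict(lambda: {"ports": set(), "flows": 0})
for src, dst, port in records:
    entry = pivot[(src, dst)]
    entry["ports"].add(port)
    entry["flows"] += 1

for (src, dst), entry in sorted(pivot.items()):
    print(f"{src} -> {dst}: ports={sorted(entry['ports'])}, flows={entry['flows']}")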

It is easy to see that a blueprinting project can be a very challenging exercise with multiple caveats and pitfalls, so careful planning and execution are required, along with strong communication between everyone involved.

This is the first in a series of articles that will give a detailed overview of data center blueprinting, along with implementation and reporting methods.


Gabor Karakas is a Technical Solutions Architect in the Professional Services Engineering team and is based in the San Francisco Bay Area.

SDDC is the Future

By Michael Francis

 

VMware’s Transformative Growth

Over the last eight years at VMware I have observed so much change, and in my mind it has been transformative change. I think about my 20 years in IT and the changes I have seen, and I feel that the emergence of x86 virtualization will be looked upon as one of the most important catalysts for change in information technology history. It has changed the speed of service delivery and the cost of that delivery, and subsequently has enabled innovative business models for computing, such as cloud computing.

I have been part of the transformation of our company in these eight years; we’ve grown from being a single-product infrastructure company to what we are today – an application platform company. Virtualization of compute is now mainstream. We have broadened virtualization to storage and networking, bringing the benefits realized for compute to these new areas. I don’t believe this is incremental value or evolutionary. I think this broader virtualization―coupled with intelligent, business policy-aware management systems―will be so disruptive to the industry that it will be considered a separate milestone potentially, on par with x86 virtualization.

Where We Are Now

Here is why I think the SDDC is significant:

  • The software-defined data center (SDDC) brings balance back to the ongoing discussion between the use of public and private computing.
  • It enables the attributes of agility, reduced operational and capital costs, lower security risk, and a new level of management visibility across the stack.
  • SDDC not only modifies the operational and consumption model for computing infrastructure, but it also modifies the way computing infrastructure is designed and built.
  • Infrastructure is now a combination of software and configuration. It can be programmatically generated based on a specification; hyper-converged infrastructure is one example of this.

As a principal architect on the VMware team responsible for generating tools and intellectual property that help our Professional Services organization and Partners deliver VMware SDDC solutions, I find the last point especially interesting, and it is the one I want to spend some time on.

How We Started

As an infrastructure-focused project resource and lead over the past two decades, I have become very familiar with developing design documents and ‘as-built’ documentation. I remember rolling out Microsoft Windows NT 4.0 in 1996 from CDs. There was a guide that showed me what to click and in what order to do certain steps. There was a lot of manual effort, plenty of opportunity for human error, inconsistencies between builds, and a lot of potential for the built item to vary significantly from the design specification.

Later, in 2000, I was a technical lead for a systems integrator; we had standard design document templates and ‘as-built’ document templates, and consistency and standardization had become very important. A few of us worked heavily with VBScript, and we started scripting the creation of Active Directory configurations such as Sites and Services definitions, OU structures and the like. We dreamed of the day when we could draw a design diagram, click ‘build’, and have scripts build what was in the specification. But we couldn’t get there: the amount of work to develop the scripts, maintain them, and modify them as elements changed was too great. And that was with our focus limited to the operating stack and a single vendor’s back office suite; imagine trying to automate a heterogeneous infrastructure platform.

It’s All About Automated Design

Today we have the ability to leverage the SDDC as an application programming interface (API) that not only abstracts the hardware elements below and automates the application stack above, but also abstracts the APIs of ecosystem partners.

This means I can write to one API to instantiate a system of elements from many vendors at all different layers of the stack, all based on a design specification.

Our dream in the year 2000 is something customers can achieve in their data centers with SDDC today. To be clear – I am not referring to just configuring the services offered by the SDDC to support an application, but also to standing up the SDDC itself. The reality is, we can now have a hyper-converged deployment experience where the playbook of the deployment is driven by a consultant-developed design specification.
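
As a purely conceptual sketch of that idea, the Python below walks a design specification and turns it into an ordered build plan. The specification keys and component names are invented for illustration and do not correspond to the input format of any VMware tool.

# Conceptual sketch only: derive a build plan from a design specification.
# The spec format and component names are hypothetical.
import json

design_spec = json.loads("""
{
  "management_cluster": {"hosts": 4, "vsan": true},
  "components": [
    {"name": "vCenter Server", "size": "medium"},
    {"name": "NSX Manager", "size": "small"},
    {"name": "vRealize Operations", "size": "medium"}
  ]
}
""")

def build_plan(spec):
    """Translate the specification into an ordered list of deployment steps."""
    cluster = spec["management_cluster"]
    steps = [f"Prepare {cluster['hosts']}-host management cluster"
             + (" with Virtual SAN" if cluster["vsan"] else "")]
    steps += [f"Deploy {c['name']} ({c['size']})" for c in spec["components"]]
    return steps

for step in build_plan(design_spec):
    print(step)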

For instance, our partners and our Professional Services organization have access to what we refer to as the SDDC Deployment Tool, or SDT for short (an imaginative name, I know). This tool can automate the deployment and configuration of all the components that make up the software-defined data center. The following screenshot illustrates this:

[Screenshot: MFrancis1]

 

Today this tool deploys the SDDC elements in a single use case configuration.

In VMware’s Professional Services Engineering group we have created a design specification for an SDDC platform. It is modular and completely instantiated in software. Our Professional Services Consultants and Partners can use this intellectual property to design and build the SDDC.

What Comes Next?

I believe our next step is to architect our solution design artifacts so the SDDC itself can be described in a format that allows software―like SDT―to automatically provision and configure the hardware platform, the SDDC software fabric, and the services of the SDDC to the point where it is ready for consumption.

A consultant could design the specification of the SDDC infrastructure layer and have that design deployed in a similar way to hyper-converged infrastructure―but allowing the customer to choose the hardware platform.

As I mentioned at the beginning, the SDDC is not just about technology, consumption and operations: it provides the basis for a transformation in delivery. To me a good analogy right now is the 3D printer. The SDDC itself is like the plastic that can be molded into anything; the 3D printer is the SDDC deployment tool, and our service kits would represent the electronic blueprint the printer reads to then build up the layers of the SDDC solution for delivery.

This will create better and more predictable outcomes and also greater efficiency in delivering the SDDC solutions to our customers as we treat our design artifacts as part of the SDDC code.


Michael Francis is a Principal Systems Engineer at VMware, based in Brisbane.

“Gotchas” and Lessons Learned When Using Virtual SAN

By Jonathan McDonald

There are certainly a number of blogs on the Web that talk about software-defined storage, and in particular Virtual SAN. But as someone who has worked at VMware for nine years, my goal is not to rehash the same information, but to provide insights from my experiences.

At VMware, much of my time was spent working for Global Support Services; however, over the last year-and-a-half, I have been working as a member of the Professional Services Engineering team.

As a part of this team, my focus is now on core virtualization elements, including vSphere, Virtual SAN, and Health Check Services. Most recently I was challenged with getting up to speed on Virtual SAN and developing an architecture design for it. At first this seemed pretty intimidating, since I had only heard the marketing details prior to this; however, Virtual SAN truly did live up to all the hype about being “radically simple”. The more I worked with Virtual SAN, the less concerned I became with the underlying storage. After having used Virtual SAN and tested it in customer environments, I can honestly say my mind is very much changed because of the absolute power it gives an administrator.

To help simplify the design process I broke it out into the following workflow design to not only simplify it for myself, but to help anyone else who is unaware of the different design decisions required to successfully implement Virtual SAN.


Workflow for a Virtual SAN Design

When working on a Virtual SAN design, this workflow can be quite helpful. To further simplify it, I break it down into four key areas:

  1. Hardware selection – In absolutely every environment I have worked in, there has been a challenge in selecting the hardware. I would guess that 75 percent of the problems I have seen in implementing Virtual SAN have been a result of hardware selection or configuration. This includes things such as unsupported devices or incorrect firmware/drivers. Note: VMware does not provide support for devices that are not on the Virtual SAN Compatibility List. Be sure that any hardware you select is on that list!
  2. Software configuration – The configuration is simple—rarely have I seen questions on actually turning it on. You merely click a check box, and it will configure itself (assuming, of course, that the underlying configuration is correct). If it is not, the results can be mixed; for example, the networking may not be configured correctly, or the disks may not have been presented properly.
  3. Storage policy – The storage policy is at first a huge decision point. This is what gives Virtual SAN its power: the ability to define, per virtual machine, the performance and availability characteristics you need (see the sketch after this list).
  4. Monitoring/performance testing/failure testing – The final area concerns how to monitor the environment and test the configuration.
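
As a hedged illustration of what a storage policy expresses, the snippet below models a policy as a simple set of rules. The capability names are typical of Virtual SAN rule sets of that generation (number of host failures to tolerate, stripe width, and so on), but the structure is simplified for the example and is not the SPBM API.

# Illustrative only: a simplified model of a Virtual SAN storage policy.
# Capability names mirror commonly used Virtual SAN rules; the layout is invented.
gold_policy = {
    "name": "Gold",
    "rules": {
        "hostFailuresToTolerate": 1,   # tolerate one host/disk failure (two data copies)
        "stripeWidth": 2,              # stripe each object across two capacity devices
        "forceProvisioning": False,    # fail provisioning if the policy cannot be met
        "proportionalCapacity": 0,     # 0% space reservation (thin provisioning)
    },
}

def replica_count(policy):
    """Number of data copies implied by the failures-to-tolerate rule."""
    return policy["rules"]["hostFailuresToTolerate"] + 1

print(f"{gold_policy['name']}: {replica_count(gold_policy)} copies of each object")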

All of these things should be taken into account in any design for Virtual SAN, or the design is not really complete. Now, I could talk through a lot of this for hours. Rather than doing that I thought it would be better to post my top “gotcha” moments, along with the lessons learned from the projects I have been involved with.

Common “Gotchas”

Inevitably, “gotcha” moments will happen when implementing Virtual SAN. Here are the top moments I have run into:

  1. Network configuration – No matter what the networking team says, always validate the configuration. The “Misconfiguration detected” error is by far the most common issue I have seen. Normally it means that either the port group has not been configured correctly for Virtual SAN or multicast has not been set up properly. If I were to guess, most of the issues I have seen are the result of the multicast setup. On Cisco switches, unless an IGMP snooping querier has been configured, or IGMP snooping has been explicitly disabled on the ports used for Virtual SAN, configuration will generally fail. In the default configuration it is simply not set up, so even if the network admin says it is configured properly, it may not be configured at all. Double-check it to avoid any pain.
  2. Network speed – Although 1Gb networking is supported, and I have seen it operate effectively for small environments, 10Gb networking is highly recommended for most configurations. I don’t just say this because the documentation says so. From experience, what it really comes down to is not the regular, everyday usage of Virtual SAN; where people run into problems is when an issue occurs, such as during failures or periods of heavy virtual machine creation. Replication traffic during these periods can be substantial and cause huge performance degradation while it is occurring. The only way to know is to test what happens during a failure or a peak provisioning cycle. This testing is critical, as it tells you what the expected performance will be. When in doubt, always use 10Gb networking.
  3. Storage adapter choice – Although seemingly simple, the queue depth of the controller should be greater than 256 to ensure the best performance. This is not as much of an issue now as it was several months ago, because the VMware Virtual SAN compatibility list should no longer include any cards with a queue depth under 256. Be sure to verify, though. As an example, one card, when first released, artificially limited the queue depth in its driver software; performance was dramatically impacted until an updated driver was released.

Lessons Learned

There are always lessons to be learned when using new software, and ours came at the price of a half or full day’s work spent troubleshooting issues. Here’s what we figured out:

  1. Always verify firmware/driver versions – This one always seems to be overlooked, but I am stating it because of experiences onsite with customers. One example that comes to mind involved three identical servers, bought and shipped in the same order, that we were using to configure Virtual SAN. Two of them worked fine; the third just wouldn’t cooperate, no matter what we did. After investigating for several hours we found that not only would Virtual SAN not configure, but all drives attached to that host were read-only. The utility provided with the card itself showed that the card was a revision behind on the firmware. As soon as we upgraded the firmware, the host came online and everything worked brilliantly.
  2. Pass-through/RAID0 controller configuration – It is almost always recommended to use a pass-through controller with Virtual SAN, so that Virtual SAN owns the drives and has full control of them. In many cases, however, the controller offers only a RAID0 mode. Proper configuration is then required to avoid problems and to maximize performance for Virtual SAN. First, ensure any controller caching is set to 100% read cache. Second, configure each drive as its own “array” rather than one giant array of disks. As an example of an incorrect configuration that causes unnecessary overhead, several times I have seen all disks configured as a single RAID volume on the controller. This shows up as a single disk to the operating system (ESXi in this case), which is not what Virtual SAN wants. To fix it you have to go into the controller and configure each disk individually. You also have to ensure the partition table (if previously created) is removed, which can in many cases involve zeroing out the drive if there is no option to remove the header.
  3. Performance testing – The lesson learned here is that you can do an infinite amount of testing, so where do you start and stop? Wade Holmes from the Virtual SAN technical marketing team at VMware has an excellent blog series on this that I highly recommend reviewing for guidance. His methodology allows both basic and more in-depth testing of your Virtual SAN configuration.

I hope these pointers help in your evaluation and implementation of Virtual SAN. Before diving head-first into anything, I always like to make sure I am informed about the subject matter, and Virtual SAN is no different. To be successful you need genuine subject matter expertise behind the design, whether it is in-house or comes from engaging a professional services organization. Remember, VMware is happy to be your trusted advisor if you need assistance with Virtual SAN or any of our other products!


Jonathan McDonald is a Technical Solutions Architect for the Professional Services Engineering team. He currently specializes in developing architecture designs for core Virtualization, and Software-Defined Storage, as well as providing best practices for upgrading and health checks for vSphere environments. 

Have a Chat with Your SDDC (Automating the Management of the Management Portal, Part 2)

By Andrea Siviero, VMware Senior Solutions Architect

In my recent post “Look Mom, no Mouse!” I introduced an amazing new way to interact with your SDDC without a mouse. Now, using a command line with simple mnemonic instructions, you can “talk” with your SDDC to “automate the management of the management portal”.

VMware has just announced vRealize CloudClient 3.0 released for general availability (GA) (http://developercenter.vmware.com/web/dp/tool/cloudclient/3.0.0).

So now that it’s GA, I’m excited to explore with you more deeply how to use CloudClient, and also to share its benefits.

What commands do I want to show you today?

  • Create a brand new tenant and service catalog and entitle them to administrators
  • Import an existing blueprint into the brand new CloudClient-made tenant
  • Deploy blueprints from the catalog of services

So wake up your SDDC — it’s time for a lovely chat. :-)

Log in and create a tenant
CloudClient allows you to log in in an interactive way:

CloudClient> vra login userpass --server vcac-l-01a.corp.local --tenant pse --user siviero@gts.local --password ****** --iaasUser corp\\Administrator --iaasPassword ******

Alternatively, you can edit the CloudClient.properties file and fill in all the details there; just type this command to create an empty configuration file:

CloudClient> login autologinfile

NOTE: IaaS credentials need to be passed with double back-slash i.e. corp\\Administrator


Figure 1: Login

Create a new tenant, identity-store and admins
When you are logged in as administrator@vsphere.local, creating a tenant is just three commands away: set the name of the tenant, define how users will be authenticated (AD or LDAP), and choose who the administrators will be:

CloudClient> vra tenant add --name "PSE" --url "PSE"
CloudClient> vra tenant identitystore add --name "PSE AD" --type AD --url ldap://controlcenter.corp.local --userdn "CN=Administrator,CN=Users,DC=corp,DC=local" --password "****" --groupbasedn "OU=Nephosoft QE,DC=corp,DC=local" --tenantname "PSE" --alias "gts.local" --domain "corp.local"
CloudClient> vra tenant admin update --tenantname "PSE" --addtenantadmins siviero@gts.local --addiaasadmins admin1@gts.local,admin2@gts.local

Figure 2: Create Tenant

Create a fabric group and business group and assign resources
Now let’s annotate the returned IDs so they can be used in further commands. (They can be scripted using variables.)

CloudClient> vra fabricgroup add --name "GTS Fabric Group" --admins "admin1@gts.local,admin2@gts.local"

Figure 3: Create Fabric Group

Search for the suitable compute resources. We will select the “Cluster Site A”:

CloudClient> vra computeresource list

Figure 4: Compute Resources

Let’s finalize the “trivial” steps of assigning the compute resources to the fabric group and creating a business group with a pre-determined machine prefix.

CloudClient> vra fabricgroup update --id f8bbfcd5-79c0-43db-a382-2473b91862e6 --addcomputeresource c47e3332-bdef-4391-9f93-269dcf14f2c5
CloudClient> vra machineprefix add --prefix gts- --numberOfDigits 3 --nextNumber 001
CloudClient> vra businessgroup add --name "GTS Business Group" --admins "admin1@gts.local,admin2@gts.local" --adContainer "cn=computers" --email admin2@gts.local --description "GTS Group" --machinePrefixId 1c1d20c3-ba91-443e-beb0-b9b0728ee29c

Figure 5: Assign Resources

Here comes the fun: import/export blueprints
Up to now, the CloudClient commands we have used merely reproduce what normally happens in the GUI.

Let me show you where its real power comes out. Let’s assume you have already created a good blueprint in one tenant, with a blueprint profile, and you just want to “copy & paste” it to another tenant. You cannot do it in the GUI — you would need to recreate it manually — but here comes the CloudClient magic: log in to the source tenant and export the blueprint in JSON format:

CloudClient> vra iaas blueprint list

CloudClient> vra iaas blueprint detail --id 697b8302-b5a9-4fbf-8544-2f19d4e8a220 --format JSON --export CentOS63.json

Figure 6: Export Blueprint to JSON file
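
Between the export and the import, the JSON file can be inspected or adjusted offline. Here is a minimal Python sketch of that idea; it assumes nothing about the blueprint schema beyond it being valid JSON, and the backup file name is just an example.

# Illustrative only: inspect the exported blueprint before importing it elsewhere.
import json

with open("CentOS63.json") as f:
    blueprint = json.load(f)

# See what the export actually contains before touching it.
print("Top-level structure:", type(blueprint).__name__)
if isinstance(blueprint, dict):
    print("Keys:", sorted(blueprint.keys()))

# Keep an untouched copy before making any edits by hand.
with open("CentOS63-backup.json", "w") as f:
    json.dump(blueprint, f, indent=2)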

Now log back to the brand new PSE tenant and import the blueprint like this:

CloudClient> vra iaas blueprint add vsphere --inputfile CentOS63.json --name "CentOS 6.3 x64 Base" --cpu 1 --memory 512

Figure 7: Import Blueprint from JSON

Request the blueprint from the catalog
The remaining steps are as trivial as before: create a service, an entitlement, and actions, and assign the blueprint to the catalog. Reading the documentation will help you get familiar with them.

Note: “Reservations” verbs are not yet implemented, so at some point you need to use the GUI to complete the process.

So please let me fast forward to the final moment when you can successfully deploy a blueprint and see it live. :-)

CloudClient> vra catalog list

Figure 8: Listing the Catalog

Using the ID returned from catalog list, make the request:

CloudClient> vra catalog request submit --id c8a850d2-a089-4afb-b5d8-b298580cf9f9 --groupid 2c220523-60bb-419e-80c8-c5bfd81aa805 --reason fun

Figure 9: Checking the Requests

And here it is, our little VM, happy and running. :-)


Figure 10: Happy and Running

The Occam’s Razor principle: “Entities must not be multiplied beyond necessity.”

In my humble opinion: please don’t waste a lot of time doing everything (coffee? tea?) from a command line. vRealize Automation 6.1 has a nicely improved UI and is very intuitive to work with.

Keep the solutions as simple as possible and use vRealize CloudClient when some real “black magic” is needed.


Andrea Siviero is an eight-year veteran of VMware and a senior solutions architect and member of Professional Services Engineering (PSE) for the Software-Defined Datacenter (SDDC), a part of the Global Technical Solutions (GTS) team. Prior to PSE, Andrea spent three years as a pre-sales system engineer and three years as a post-sales consultant architect for cloud computing and desktop virtualization solutions, focusing on very large and complex deployments, especially for service providers in the finance and telco sectors.

Holistic Engagements Lead to Successful Outcomes

By Ford Donald, Principal Architect, GTS PSE, VMware

In my last post, I introduced an optimized consulting approach called the SDDC Assess, Design, and Deploy Service. The post focused on the technical blueprint, designed with common core elements, and the flexibility for custom implementation using modular elements. In this post, we’ll explore the process improvements that lead to holistic, mutually beneficial engagements.

The Work Stream Process
The six-step process takes into account both our prescribed starting point—the technical foundation—and the unique needs of the customer, with an eye towards a predictable outcome.

1. Solution Overview. We begin with an overview of the technical foundations and the new approach to help the customer understand the benefits of holistic consultation and the specific solution design. This levels the discussion between the modeled approach and any preconceptions of how things work. Stepping back to review the approach gets us to the assessment phase quickly, so we are all on the same page about how we’ll be working together.

2. Assessment Phase. In this phase, we assess what the customer already has in place, and where they would like to be at the end of the project. Some customers have strong opinions of design, others don’t. Defined gaps are where we come in with adaptations to the prescribed design, with layers and snap-ins added as desired.

3. Design Phase. Here, we bring forward the adapted solution, shaped to meet the customer’s needs and requirements, relative to our good starting point with the prescribed solution.

4. Deploy Phase. Given all the up-front work to this point, deployment should be straightforward. We add what’s missing, modify what’s not right, and bulk up or whittle down to get to the adapted solution. Here we would add in things like Orchestrator if it’s not currently deployed, along with the Orchestration workflow library. These pre-defined, generalized, well-documented workflows are field-tested and designed so that we can easily provide support—this ensures that they are consistent across the board.

5. Knowledge Transfer. I like to call this the cool-down period. Here we take two steps back and let the environment learn, stabilize, and cool off a bit. For example, VCOps does best if it’s given three or four weeks to understand what normal is. This is a great time to train administrative staff on the new implementation and announce any operational or organizational transformations needed. It’s important to take the time to get a feeling for what’s new or changed, from interfaces and APIs to dealing with resources and loading up templates.

6. Solution Validation. In this phase we come together to look back and compare the results to the prescribed beginnings. If we haven’t hit the mark, remediation is required.

The Project Timeline
It’s important to note that each phase of the technical transformation has its own work stream process. No engagement should take on the entire thing as one major project. Rather, it should be a series of engagements that meet the customer’s timeline and adoption capability. The various stages will take place over a lengthy time period.

Traditionally, customer engagements have focused on the assessment or the design and deliver phase. By adding in the Solution Overview, and ensuring we’re all starting from the same point, we lay the foundation for success.


Ford Donald is a Principal Architect and member of Professional Services Engineering (PSE), a part of the Global Technical Solutions (GTS) team, and a seven-year veteran of VMware. Prior to PSE, Ford spent three years as a pre-sales cloud computing specialist focusing on very large and complex virtualization deployments, including the VMware sales cloud known as vSEL. Ford also served on the core team for VMworld Labs and as a field SE.

Working with VMware Just Gets Better

By Ford Donald, Principal Architect, GTS PSE, VMware

Imagine someone gives you and a group of friends a box of nuts and bolts and a few pieces of metal and tells you to build a model skyscraper. You might start putting the pieces together and end up with a beautiful model, but it probably won’t be the exact result that any of you imagined at the beginning. Now imagine if someone hands you that same box, along with a blueprint and an illustration of the finished product. In this scenario, you all work together to a prescribed end goal, with few questions or disagreements along the way. Think about this in the context of a large technical engagement, for example a software-defined data center (SDDC) implementation. Is it preferable to make it up as you go along, or to start with a vision for success and achieve it through a systematic approach?

Here at VMware, we’re enhancing the way we engage with customers by providing prescriptive guidance, a foundation for success, and a predictable outcome through the SDDC Assess, Design and Deploy Service. As our product line has matured, our consulting approach is maturing along with it. In the past, we have excelled at the “discovery” approach, where we uncover the solution through discussion, and every customized outcome meets a unique customer need. We’ve built thousands of strong skyscrapers that way, and the skill for discovering the right solution remains critical within every customer engagement. Today we bring a common starting point that can be scaled to any size of organization and adapted up the stack or with snap-ins according to customer preference or need. A core implementation brings a number of benefits to the process, and to the end result.

A modular technical solution

Think of the starting point as a blueprint for the well-built data center. With our approach, the core elements of the SDDC come standard, including vSphere, vCenter Operations, vCenter Orchestrator, and software-defined networking through vCNS. This is the clockwork by which the SDDC from VMware is best established, and it lays the foundation for further maturity evolutions to the Infrastructure Service and Application Service layers. The core “SDDC Ready” layer is the default, providing everything you need to be successful in the data center, regardless of whether you adopt the other layers. Beyond that, to meet the unique needs of customers, we developed “snap-ins” as enhancements or upgrades to the core model; these include many of our desirable, but not included-by-default, assets such as VSAN and NSX.

The Infrastructure Service layer builds on the SDDC by establishing cloud-based metaphors via vCloud Automation Center and other requirements for cloud readiness, including a service portal, catalog-based consumption, and reduction of administrative overhead. The Application Service layer includes vCloud Application Director and elevates the Infrastructure layer with application deployment, blueprinting and standardization.

From our experience, customers demand flexibility and customization. In order to meet that need, we built a full menu of Snap-ins. These snap-ins allow customers to choose any number of options from software-defined storage, NSX, compliance, business continuity & disaster recovery (BCDR), hybrid cloud capabilities and financial/cost management. Snap-ins are elemental to the solution, and can be added as needed according to the customer’s desired end result.

Operational Transformation Support

Once you’ve adopted a cloud computing model, you may want to consider organizational enhancements that take advantage of the efficiency gained by an SDDC architecture. As we work with our customers in designing the technical elements, we also consult with our customers on the operational processes. Changing from high administrative overhead to low overhead, introducing new roles, defining what type of consumer model you want to implement – our consultants help you plan and design your optimal organization to support the cloud model.

The beauty of this approach shines in its ability to serve both green field and brown field projects. In the green field approach, where a customer wants the consultants to take the reins and implement top to bottom, the approach serves as a blueprint. In a brown field model, where the customer has input and opinions and desires integration and customization, the approach can be adapted to the customer’s environment, relative to the original blueprint.

So whether you’re building your skyscraper from the ground up, or remodeling an existing tower, the new SDDC Assess, Design and Deploy Service provides an adaptable model, with a great starting point that will help you get the best out of your investment.

Stay tuned for an upcoming post that gives you a look under the hood of the work stream process for implementing the technical solution.


Ford Donald is a Principal Architect and member of Professional Services Engineering (PSE), a part of the Global Technical Solutions (GTS) team, and a seven-year veteran of VMware. Prior to PSE, Ford spent three years as a pre-sales cloud computing specialist focusing on very large and complex virtualization deployments, including the VMware sales cloud known as vSEL. Ford also served on the core team for VMworld Labs and as a field SE.

 

Go for the Gold: See vSphere with Operations Management In Action

If there’s anything we’ve learned from watching the recent Winter Olympics, it’s that world-class athletes are focused, practice endless hours, and need to be both efficient and agile to win gold.

When it comes to data centers, what sets a world-class data center apart is the software. A software-defined data center (SDDC) provides the efficiency and agility for IT to meet exploding business expectations so your business can win gold.

The VMware exclusive seminar is here! Join us to learn about the latest in SDDC.

Now through March 19, VMware TechTalk Live is hosting free, interactive half-day workshops in 32 cities across the U.S. and Canada. Attendees will get to see a live demo of vSphere with Operations Management.

The workshops will also provide a detailed overview of the key components of the SDDC architecture, as well as results of VMware customer surveys explaining how the SDDC is actually being implemented today.

Check out the TechTalk Live event information to find the location closest to you and to reserve your spot.

SDDC + SAP = CapEx/OpEx Savings

By Girish Manmadkar, an SAP Virtualization Architect at VMware

Earlier this month, my colleague David Gallant wrote about architecting a software-defined data center for SAP and other business-critical applications. I’d like to further explore how SAP fits into the software-defined data center (SDDC) and, specifically, how to optimize it for CapEx and OpEx savings.

A key to remember is that the SDDC is not a single technology that you purchase and install—it is a use case, a strategy, a mind shift. And in that way, it is also a journey that will unfold in stages and should be planned in that way. I’ve outlined the three foundational steps below.

SDDC 1.0

Most of the customers that I work with are well along in this stage, moving their current non-x86 SAP workloads toward a VMware-based x86 environment.

During this process, numerous milestones can be delivered to the business, in particular an immediate reduction in CapEx. This benefit is achieved by starting to move non-x86 or current physical x86 workloads to the virtual x86 platform. Understandably, customers tend to approach this transition with caution, so we often start with the low-hanging fruit: non-production and/or development SAP systems.

The next step you can take is to introduce automation. Automation comes in two places: at the infrastructure layer, which is achieved using VMware vCloud Automation Center and Orchestration; and at the application layer, delivered using SAP’s Landscape Virtualization Manager.

During this phase it is best to implement vSphere features such as Auto Deploy, host profiles, and OS templates in order to automate vSphere and virtual machine provisioning in the environment.

Often it is a good idea at this time to start a parallel project around storage. You can work with your storage and backup teams to enhance current architectures by enabling storage technologies like de-dup, vSphere storage I/O control and any other storage array plugins.

We also recommend minimizing agents in the guest operating system, such as agents used for backup and/or anti-virus. The team should start putting together a new architecture that moves such agents from the guest OS to the vSphere hosts to reduce complexity and improve performance. The storage and network teams should also look at implementing an architecture that will support a virtualized disaster recovery solution. By planning ahead now, teams can avoid rework later.

During this phase, the team not only migrates SAP application servers to the vSphere platform but also shows business value with CapEx reductions and value-added flexibility to scale out SAP application server capacity on demand.

SDDC 2.0

Once this first stage goes into the operations cycle, it lays the groundwork for various aspects of the SDDC’s second stage. The next shift is toward a converged datacenter or common virtualization framework to deploy a software-defined lifecycle for SAP. This allows better monitoring, migration to the cloud, chargeback, and security.

This is also the phase where you want to virtualize your SAP central instances (ASCS) and database servers. The value here is removing the reliance on complex, physical clustered environments by transitioning instead to VMware’s high-availability features. These include Fault Tolerance (FT), applied where appropriate as determined by the SAP sizing exercise for the ASCS, with a focus on meeting the business’s SLAs.

SDDC 3.0

Once the SDDC 2.0 is in production, it is a good time to start defining other aspects of SDDC, such as Infrastructure-as-a-Service, Platform-as-a-Service, Storage-as-a-Service, and Disaster-Recovery-as-a-Service.

Keep an eye out for our follow-up post fleshing out the processes and benefits of these later stages.


Girish Manmadkar is a veteran VMware SAP Virtualization Architect with extensive knowledge and hands-on experience with various SAP and VMware products, including various databases. He focuses on SAP migrations, architecture designs, and implementation, including disaster recovery.

The SDDC Seems Cool … But What Do I Do with It?

By David Gallant, VMware Professional Services Consultant

Lately I’ve been receiving requests from customers to talk to them about the software-defined data center (SDDC). So I start to explain software-defined networking, software-defined storage, automated provisioning, and self-service portals.

And that’s when I notice the customer looking excited, but also slightly confused.

Last week at SAP TechEd 2013, I was in the middle of just such a talk when I decided to stop and ask the customer why he looked puzzled.

His response? “That’s great, but what do I do with all that SDDC stuff?”

That’s when the light bulb came on. He was right to question me—why build a software-defined data center if you have no clue what you’re going to do with it?

To really harvest the investment in your SDDC, you need to be building toward a specific set of goals. We don’t build data centers without a purpose; and that purpose for SDDC, as it’s always been, is the application.

In most cases the best data centers have been purpose-designed and built around the organization’s business-critical applications; for instance SAP, Oracle, or Microsoft applications.

I’ll concentrate for now on SAP—if you can architect an SDDC for SAP, you can roll those concepts over to pretty much any other application. Continue reading