Home > Blogs > VMware Consulting Blog

How to Add a Linux Machine as PowerShell Host in vRO

By Spas Kaloferov

Introduction

In this article we will look into the alpha version of Microsoft Windows PowerShell v6 for both Linux and Microsoft Windows. We will show how to execute PowerShell commands between Linux, Windows, and VMware vRealize Orchestrator (vRO):

  • Linux to Windows
  • Windows to Linux
  • Linux to Linux
  • vRO to Linux

We will also show how to add a Linux machine as a PowerShell host (PSHost) in vRO.

Currently, the alpha version of PowerShell v6 does not support the PSCredential object, so we cannot use the Invoke-Command cmdlet to programmatically pass credentials and execute commands from vRO, through a Linux PSHost, to other Linux or Windows machines. Conversely, we cannot execute from vRO, through a Windows PSHost, to Linux machines.

To see how we used the Invoke-Command method to do this, see my blog Using CredSSP with the vCO PowerShell Plugin (SKKB1002).

In addition to not supporting the PSCredential object, the alpha version doesn’t support WinRM. WinRM is Microsoft’s implementation of the WS-Management protocol, a standard Simple Object Access Protocol (SOAP)-based, firewall-friendly protocol that enables hardware and operating systems from different vendors to interoperate. Therefore, when adding a Linux machine as a PowerShell host in vRO, we will be using SSH instead of WinRM as the protocol of choice.

The PowerShell v6 RTM version is expected to support WinRM, so we will then be able to add the Linux PSHost with WinRM rather than SSH.
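
To give a feel for what SSH-based remoting looks like once PowerShell v6 and SSH are configured on both machines, here is a minimal sketch (the host name and user are illustrative, and parameter names have varied between alpha builds, so check Get-Help New-PSSession on your build):

# Open an SSH-backed remoting session to a Linux machine
$session = New-PSSession -HostName linux01.corp.local -UserName root
# Run a command remotely, then tear the session down
Invoke-Command -Session $session -ScriptBlock { Get-Process | Select-Object -First 5 }
Remove-PSSession $session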

So, let’s get started.

Continue reading

The Anatomy of an Instant Clone

By Travis Wood

If you’ve used Horizon View over the last few years, then you most likely have come across linked clones. Linked clones use a parent image, called a “replica,” that serves read requests to multiple virtual machines (VMs), and the writes in each desktop are captured on their own delta disk. Replicas can also be used to change desktop update methodologies; instead of updating every desktop, you can update the parent image and recompose the rest of the desktops.

Horizon 7 has introduced a new method of provisioning with Instant Clones. Instant Clones are similar to linked clones in that all desktops read from a replica disk and write to their own disk, but Instant Clone takes it one step further by doing the same thing with memory. Instant Clones utilize a new feature of vSphere 6 where desktop VMs are forked (that is, Instant Clones are created) off a running VM—instead of cloning a powered-off VM—which provides savings for provisioning, updates, and memory utilization.

Golden Image

With Instant Clones you start with your golden image, in a way that is similar to linked clones. The golden image is the VM on which you install the operating system, join to the domain, and install user applications; you follow the same OS optimization procedures you would use for linked clones.

When you’re done, release its IP address, shut it down, and create a snapshot. Now you are ready to create your Instant Clone desktop pool. This VM should have VMware Tools installed, along with the Horizon Agent with the Instant Clone module. It is NOT possible to have the Instant Clone and Composer modules co-installed, so you will always need different snapshots if using Instant Clones and linked clones from the same golden image. Reservations can be set on the golden image, and they will be copied to the Instant Clones, reducing the size of the VSwap file. It is important to note that the golden image must be on storage that is accessible to the host on which you are creating your Instant Clone desktop pool.

Template

When you create your pool, Horizon will create a template. A template is a linked clone from your golden image, created on the same datastore as the golden image. It will have the name cp-template, and will be in the folder ClonePrepInternalTemplateFolder. Template disk usage is quite small, about 60 MB. There will be an initial power-on after the template is created, but it will then shut off.

TWood_Horizon Template

Replica

Next, Horizon will create a replica, which is the same as a linked clone replica: a thin-provisioned, full clone of the template VM. This will serve as the common read disk for all of your Instant Clones, so it can be tiered onto appropriate storage through the Horizon Administrator console, the same way it is done with linked clones. Of course, if you are using VSAN, there is only one datastore, so tiering is done automatically. Horizon will also create a CBRC digest file for the replica. The replica will be called cp-replica-GUID and will be in the folder ClonePrepReplicaVmFolder. The disk usage of the replica will depend on how big your golden image is, but remember, it's thin provisioned and not powered on, so there is no VSwap file.

TWood_Horizon Replica

Parent

Horizon will now create the final copy of the original VM, called a parent, which will be used to fork the running VMs. The parent is created on every host in the cluster; remember, we are forking running VMs here, so every host needs to have a running VM. These will be placed on the same datastores as the desktop VMs, with one per host per datastore. Because these are powered on, they have a VSwap file the size of the allocated vMEM. In addition, there will be a small delta disk that captures the writes from booting the parent VM, plus the VMX overhead swap file, but the sum of these disks is relatively small, at about 500 MB. These will be placed in ClonePrepParentVmFolder.

TWood_Horizon Parent
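
If you want to see these internal objects for yourself, a quick PowerCLI query will list them. This is just an illustrative sketch, assuming you are connected to the vCenter Server that manages the cluster (the server name is hypothetical):

# Connect to vCenter and list the internal VMs Horizon creates (names beginning with cp-)
Connect-VIServer vcenter.corp.local
Get-VM -Name cp-* | Select-Object Name, Folder, PowerState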

Something you’ll notice with the parent VM is that it will use 100% of its allocated memory, causing a vCenter alarm.

TWood_vCenter Alarm

TWood_Virtual Machine Error

Instant Clones

OK! At this point, we are finally ready to fork! Horizon will create the Instant Clones based on the provisioning settings, which can be upfront or on-demand. Instant Clones will have a VSwap file equal to the size of the vMEM, minus any reservations set on the golden image, plus a differencing disk.

The amount of growth for the differencing disk will depend on how much is written to the local VM during the user’s session, but it is deleted on logout. When running View Planner tests, this can grow to about 500 MB, which is the same as when using View Planner for Linked Clones. The provisioning of Instant Clones will be fast! You’ll see much lower resource utilization of your vCenter Server and less IO on your disk subsystem because there is no boot storm from the VMs powering on.

TWood_vCenter Server
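
To put some hypothetical numbers on the sizing described above: a desktop allocated 4 GB of vMEM with a 1 GB reservation carried over from the golden image would get a 3 GB VSwap file, plus a differencing disk that, under a View Planner workload, grows to roughly 500 MB before being deleted at logout.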

Conclusion

Instant Clones are a great new feature in Horizon 7 that take the concept of Linked Clones one step further. They bring the advantages of:

  • Reducing boot storms
  • Decreasing provisioning times
  • Decreasing change windows
  • Bringing savings to storage utilization

Instant Clones introduce a number of new objects: replicas, parents, and templates. It is important to understand not only how these are structured, but also their interrelationships, in order to plan your environment accordingly.


Travis is a Principal Architect in the Global Technology & Professional Services team, specializing in End User Computing. He is also a member of the CTO Ambassadors program, which connects the global field with R&D and engineering.

VMworld Session Preview: MGT7759

By Andrea Siviero

Data center virtualization continues to receive attention in enterprise organizations that want to reduce IT costs and create a more flexible, efficient, and automated applications workload environment.

As an IT organization, you must contend with many different software and hardware components. And not only do you have to manage a lot of different components, you also face the challenge of putting them together!

To solve this complex challenge, VMware Validated Designs (VVDs) provide guidance and speed up the process of building a modern, automated Software-Defined Data Center.

So, exactly what are VMware Validated Designs?

  • They are architectures and designs created and validated by VMware and data center experts.
  • They encompass the entire set of VMware’s Software-Defined Data Center products.
  • They are standardized and streamlined designs for different deployment scenarios and a broad set of use cases.

Marco Righini, from Intel, and I were able to access early content and test it on a real data center, and we would like to share our experience with you.

Visit our session at VMworld 2016 Las Vegas (Session ID: MGT7759) to hear the findings from early adopters of VMware Validated Design.


Presenters: Marco Righini, Datacenter Solution Architect, Intel Corp., and Andrea Siviero, Staff Solution Architect, VMware
Session Number: MGT7759
Session Title: Early VVD Adopter Experience: Building a Secure and Automated Cloud
Date and Time: Wednesday, August 31, 2016 10:00 AM‒11:00 AM

Abstract: The session presents the work done during the building of VVD using the Intel Lab in Pisa, Italy. This collaborative team effort between local VMware PSOs and Intel tested and built an entire lab from scratch, using the VVD reference architecture. The challenges of the VVD architecture are addressed, along with how it helped in the fast delivery of an automated cloud.


Andrea Siviero is a ten-year veteran of VMware and a senior solutions architect member of Professional Services Engineering (PSE) for the Software-Defined Datacenter (SDDC), a part of the Global Technical Solutions (GTS) team. Prior to PSE, Andrea spent three years as a pre-sales system engineer and three years as a post-sales consultant architect for cloud computing and desktop virtualization solutions, focusing on very large and complex deployments, especially for service providers in the finance and telco sectors.

VMworld Session Preview: Advanced Network Services with NSX

By Romain Decker

It is no secret that IT is in constant evolution. IT trends such as cloud adoption, distributed applications, microservices, and the Internet of Things have emerged in recent years.

Nevertheless, the focus is still on applications and on how they compute and deliver data to consumers. Whether their role is to generate revenue; run industries, logistics, or health care; or even drive your programmable thermostat, the top-level goals of organizations remain security, agility, and operational efficiency. Everything else associated with applications has changed:

  • Threats have become more advanced and persistent.
  • Users now access the data center from devices and locations that represent significant challenges.
  • Application architectures are now more widely distributed and more dynamic than ever before.
  • Infrastructure changes have evolved with the convergence of resources and questions around public cloud offerings.

VMware NSX is a perfect fit to address these concerns from the network and security standpoint. NSX reproduces all the network and security services of the data center in logical space, delivering greater speed and agility along with deeper security.

Visit my session at VMworld Las Vegas (Session ID: NET7907) to hear the detailed presentation on NSX firewall, load balancing and SSL-VPN capabilities.

And don’t forget, the GUI is not the king! 😉


Presenter: Romain Decker
Session Number: NET7907
Session Title: Advanced Network Services with NSX
Date and Time: Tuesday, August 30, 2016 2:00 PM

Abstract: Applications are everywhere and increasingly more complex. They require much more than switching and routing on the network side. Clouds should be able to host any applications, including the complex ones. This session will discuss the concepts for designing and operating NSX network services such as firewalling, load balancing, and VPN. We will examine and explain how you can better consume those services by automating them, or by using other mechanisms such as NSX API. After this session, you will leave with a better understanding of how NSX Network and Security services work, and how to leverage them to better support your applications.



Romain Decker is a Senior Solutions Architect member of Professional Services Engineering (PSE) for the Software-Defined Datacenter (SDDC) portfolio – a part of the Global Technical & Professional Solutions (GTPS) team.

How to Change the Package Signing Certificate of a vRealize Orchestrator Appliance (7.0.1)

By Spas Kaloferov

In this post, we will take a look at how to change the Package Signing Certificate (PSC) in a vRealize Orchestrator appliance.

To change the PSC, let’s review a few steps first:

  • Use the keytool to:
    • Create a new keystore; the keystore type must be JCEKS.
    • Import the certificate into the keystore.
    • Change the alias of the certificate to _dunesrsa_alias_.
    • Generate a security key and place it in the keystore.
    • Change the alias of the security key to _dunessk_alias_.
  • Use the Control Center interface to:
    • Import the keystore you created.
    • Restart the Orchestrator server.

Here is a screenshot of the original PSC certificate:

SKaloferov_PSC Certificate

Changing the Package Signing Certificate

First, you must obtain a PFX Certificate Package (containing your PSC Certificate), which is issued from the Certificate Authority (CA).

SKaloferov_Package Signing Certificate

SKaloferov_Package Signing Certificate 2

SKaloferov_Certificate Path

Note that the certificate has the Digital Signature and Key Encipherment key usage attributes, as shown above. It also has the Server Authentication extended key usage attribute.

Copy the PFX certificate package to any Linux appliance.

SKaloferov_Certificate Signing vRO

We will use the Java keytool utility to execute the commands. Enter the following command to create a new keystore and, at the same time, import the PFX certificate package:

keytool -importkeystore -srckeystore "/etc/vco/app-server/security/rui.pfx" -srcstoretype pkcs12 -srcstorepass "dunesdunes" -deststoretype jceks -destkeystore "/etc/vco/app-server/security/psckeystore" -deststorepass "dunesdunes"

SKaloferov_PFX Certificate

Enter the following command to change the alias of the certificate:

keytool -changealias -alias rui -destalias _dunesrsa_alias_ -keystore "/etc/vco/app-server/security/psckeystore" -storetype jceks -storepass "dunesdunes"

Next, enter this command to generate a security key:

keytool -genseckey -alias _dunessk_alias_ -keyalg DES -keysize 56 -keypass "dunesdunes" -storetype jceks -keystore "/etc/vco/app-server/security/psckeystore" -storepass "dunesdunes"

Notice that in the above command I’ve used the DES algorithm with a 56-bit key size, but you can also use the 3DES (DESede) algorithm with a 168-bit key size.
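
In that case, the command would be the following variant of the one above, using the same keystore and aliases:

keytool -genseckey -alias _dunessk_alias_ -keyalg DESede -keysize 168 -keypass "dunesdunes" -storetype jceks -keystore "/etc/vco/app-server/security/psckeystore" -storepass "dunesdunes"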

Enter the following command to list the contents of the store:

keytool -list -storetype jceks -keystore "/etc/vco/app-server/security/psckeystore"

Copy the keystore file to your Windows machine.

Open Control Center and navigate to Certificates > Package Signing Certificate.

Click Import > Import from JavaKeyStore file.

Browse to the keystore file, and enter the password.

SKaloferov_Current Certificate

Click Import to import the certificate.

Go to Startup Options and restart the Orchestrator service.

Navigate back to Certificates > Package Signing Certificate.

You should now see the new certificate as shown below:

SKaloferov_New Certificate

Open your vRealize Orchestrator appliance client, and navigate to Tools > Certificate Manager.

SKaloferov_vRO

You should now see the certificate shown below. The common name can differ, but if you compare the thumbprints, it should match the private key entry in your keystore.

SKaloferov_Keystore
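
To compare them, you can print the certificate fingerprints of the private key entry directly from the keystore with a verbose listing (same keystore and alias created earlier):

keytool -list -v -alias _dunesrsa_alias_ -storetype jceks -keystore "/etc/vco/app-server/security/psckeystore" -storepass "dunesdunes"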

I hope this post was valuable in helping you learn how to change the Package Signing Certificate in a vRealize Orchestrator appliance. Stay tuned for my next post!


Spas Kaloferov is an acting Solutions Architect member of Professional Services Engineering (PSE) for the Software-Defined Datacenter (SDDC) – a part of the Global Technical & Professional Solutions (GTPS) team. Prior to VMware, Kaloferov focused on cloud computing solutions.

VMware Validated Design for SDDC 2.0 – Now Available

By Jonathan McDonald

Recently I have been involved in a rather cool project inside VMware, aimed at validating and integrating all the different VMware products. The most interesting customer cases I see are related to this work because oftentimes products work independently without issue—but together can create unique problems.

To be honest, it is really difficult to solve some of the problems when integrating many products together. Whether we are talking about integrating a ticketing system, building a custom dashboard for vRealize Operations Manager, or even building a validation/integration plan for Virtual SAN to add to existing processes, there is always the question, “What would the experts recommend?”

The goal of this project is to provide a reference design for our products, called a VMware Validated Design. The design is a construct that:

  • Is built by expert architects who have many years of experience with the products as well as the integrations
  • Allows repeatable deployment of the end solution, which has been tested to scale
  • Integrates with the development cycle, so if there is an issue with the integration and scale testing, it can be identified quickly and fixed by the developers before the products are released

All in all, this has been an amazing project that I’ve been excited to work on, and I am happy to be able to finally talk about it publicly!

Introducing the VMware Validated Design for SDDC 2.0

The first of these designs—under development for some time—is the VMware Validated Design for SDDC (Software-Defined Data Center). The first release was internal to VMware only, but on July 21, 2016, version 2.0 was released and is now available to everyone! This design not only builds the foundation for a solid SDDC infrastructure platform using VMware vSphere, Virtual SAN, and VMware NSX, but also builds on that foundation using the vRealize product suite (vRealize Operations Manager, vRealize Log Insight, vRealize Orchestrator, and vRealize Automation).

The VMware Validated Design for SDDC delivers a system that enables an IT organization to automate the provisioning of common, repeatable requests and to respond to business needs with more agility and predictability. Traditionally, this has been referred to as Infrastructure as a Service (IaaS); however, the VMware Validated Design for SDDC extends the typical IaaS solution to include a broader and more complete IT solution.

The architecture is based on a number of layers and modules, which allows interchangeable components to be part of the end solution or outcome, such as the SDDC. If a particular component design does not fit the business or technical requirements for whatever reason, it should be able to be swapped out for another similar component. The VMware Validated Design for SDDC is one way of putting an architecture together that has been rigorously tested to ensure stability, scalability, and compatibility. Ultimately, however, the system is designed to ensure the desired outcome will be achieved.

The conceptual design is shown in the following diagram:

JMCDonald_VVD Conceptual Design

As you can see, the design brings a lot more than just implementation details. It includes many common “day two” operational tasks such as management and monitoring functions, business continuity, and security.

To simplify such a complex design, it has been broken up into:

  • A high-level Architecture Design
  • A Detailed Design with all the design decisions included
  • Implementation guidance.

Let’s take an in-depth look.

Continue reading

Configuring VMware Identity Manager and VMware Horizon 7 Cloud Pod Architecture

Dale CarterBy Dale Carter

With the release of VMware Horizon® 7 and VMware Identity Manager™ 2.6, it is now possible to configure VMware Identity Manager to work with Horizon Cloud Pod Architecture when deploying your desktop and application pools over multiple data centers or locations.

Using VMware Identity Manager in front of VMware Horizon deployments that use Cloud Pod Architecture makes it much easier for users to get access to their desktops and applications. The user has just one place to connect to, where they can see all of their available desktops and applications, and Identity Manager will direct them to the application hosted in the best data center for their location. This can include SaaS applications as well as the applications available through VMware Horizon 7.

The following instructions show you how to configure VMware Identity Manager to work with VMware Horizon 7 when using Cloud Pod Architecture.

Configure View on the first connector

  1. From the VMware Identity Manager Admin Portal, select Catalog, Managed Desktop Appliances, View Application.

DCarter_View Application

  2. Choose the first Identity Manager Connector. This will redirect you to the connector View setup page.
  3. Select the check box to enable View Pools. Add the correct information to the first View Pod, and click Save.

DCarter_View Pools

  4. If there is an Invalid SSL Cert warning, click the warning and Accept.

DCarter_Invalid SSL Cert

  5. Scroll down the page and select Add View Pool.

DCarter_Add View Pool

  6. Add the correct information to the second View Pod and click Save.

DCarter_View Pod

  7. If there is an Invalid SSL Cert warning, click the warning and Accept.
  8. You will now see both View Pods configured for this connector.

DCarter_Remove View Pod

  9. Scroll to the top of the page.
  10. Select Federation.
  11. Check the Enable CPA Federation check box. Fill out the correct information, and add all of the Pods within the Federation.
    DCarter_View Pools Federation
  12. Click Save.
  13. From the Pods and Sync tab, click Sync Now.

DCarter_View Pool Sync

Configure View on all other connectors

  1. From the VMware Identity Manager Admin Portal, select Catalog, Managed Desktop Appliances, View Application.
  2. Select the next connector and follow the instructions above.
  3. Do this for every connector.

Configure network ranges

Once the VMware Horizon View setup is complete, you will need to configure Network Ranges.

  1. From the Identity Manager Admin page, select the Identity & Access Management tab and click Setup.
  2. Select Network Ranges and click Add Network Range.

DCarter_Add Network Range

  3. Enter the required information and click Save.

DCarter_Add Network Range View Site

  4. This will need to be repeated for all network ranges, usually one for each site and for external access.

Dale is a Senior Solutions Architect and a member of the CTO Ambassadors. Dale focuses on the End User Computing space, where he has become a subject matter expert in a number of VMware products. He has more than 20 years’ experience working in IT, having started his career in Northern England before moving to Spain and finally to the USA. Dale currently holds a number of certifications, including VCP-DV, VCP-DT, VCAP-DTD, and VCAP-DTA. For more blog posts from Dale, visit his website at http://vdelboysview.com

BCDR: Some Things to Consider When Upgrading Your VMware Disaster Recovery Solution

By Julienne Pham

Once upon a time, you protected your VMs with VMware Site Recovery Manager, and now you are wondering how to upgrade your DR solution with minimum impact on the environment. Is it as seamless as you think?

During my days in Global Support and working on customer Business Continuity/Disaster Recovery (BCDR) projects, I found it intriguing how vSphere components can put barriers in an upgrade path. Indeed, one of the first things I learned was that the timing and update sequence of my DR infrastructure were crucial to keeping everything running, and with as little disruption as possible.

If we look more closely, this is a typical VMware Site Recovery Manager setup:

JPham_SRM 6x

And in a pyramid model, we have something like this:

JPham_SRM Pyramid

Example of a protected site

So, where do we start our upgrade?

Upgrade and maintain the foundation

You begin with the hardware, and then the vSphere version you are upgrading to. You’ll see a lot of new features available, along with bug fixes, so your hardware and firmware might need some adjustments to support the new features and enhancements. At a minimum, it is important to check the compatibility of the hardware and software you are upgrading to.

In a DR scenario, it is important to check storage replication compliance

This is where you ensure your data replicates according to your RPO.

If you are using vSphere Replication or Storage Array Replication, you should check the upgrade path and the dependencies on vSphere and SRM.

  • As an example, vSphere Replication cannot be upgraded directly from 5.8 to 6.1.
  • You might need to update the Storage Replication Adapter too.
  • There may be other combinations that won’t work, or workarounds you’ll need.
  • You can find some useful information in the VMware Compatibility Guide.

Architecture change

If you are looking to upgrade from vSphere 5.5 to 6.1, for example, you should check whether you need to migrate from a simple SSO installation to an external one for more flexibility, as you might not be able to change it in the future. Because VMware SRM is dependent on the health of vCenter, you might be better off looking first into upgrading this component as a prerequisite.

Before you start, you might want to check out the informative blog, “vSphere Datacenter Design – vCenter Architecture Changes in vSphere 6.0 – Part 1.”

The sites are interdependent

Once the foundation path is planned out, you have to think about how to minimize business impact.

Remember that if your protected site workload is down, you can always trigger a DR scenario, so it is in your best interest to keep the secondary site management layer fully functional and to upgrade VMware SRM and vCenter there last.

VMware upgrade path compatibility

Some might assume that you can upgrade from one version to another without compatibility issues coming up. Well, to avoid surprises, I recommend looking into our compatibility matrix and validating the different product version upgrade paths.

For example, the upgrade of SRM 5.8 to 6.1 is not supported. So, what are the implications for vCenter and SRM compatibility during the upgrade?

JPham_Upgrade Path Sequence

Back up, back up, back up

The standard consideration is to run backups before every upgrade. A VM snapshot might not be enough in certain situations if you are at different upgrade stages at different sites. You need to carefully plan and synchronise all the different database instances for VMware Site Recovery Manager and vCenter at both sites, and eventually the vSphere Replication databases as well.

I hope this addresses some of the common questions and concerns that might come up when you are thinking of upgrading SRM. Planning and timing are key for a successful upgrade. Many components are interdependent, and you need to consider them carefully to avoid an asynchronous environment with little control over outcomes. Good luck!


Julienne Pham is a Technical Solution Architect for the Professional Services Engineering team. She is specialised in SRM and core storage, and her focus is on the VIO and BCDR space.

Demo – Dynamically Enforcing Security on a Hot Cloned SQL Server with VMware NSX

Originally posted on the Virtualize Business Critical Applications blog.

By Niran Even-Chen

VMware NSX is a software-defined solution that brings the power of virtualization to network and security.

There are many great papers about NSX in general (for example, here and here, among many others), so the purpose of this demo is not to dive into everything NSX does. Instead, I have focused on one capability in particular: the intelligent grouping of NSX Service Composer combined with the NSX Distributed Firewall (DFW), and how to utilize it to make life easier for SQL DBAs and security admins. It doesn’t have to be only SQL Server; it can be any other database or application, but for this demo I am focusing on SQL Server.

First, a bit of background: NSX Service Composer allows us to create groups called Security Groups. These Security Groups can have dynamic membership criteria based on multiple factors: part of a VM’s computer name, its guest OS name, the VM name, AD membership, or a security tag. (Tags are especially cool, as they can be set automatically by third-party tools like antivirus and IPS products, but that is for a different demo.)

These Security Groups are then placed inside Distributed Firewall (DFW) rules, which allows us to manage thousands of entities with just a few rules and without the need to add these entities to the Security Group manually.
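
As a rough sketch of what creating such a dynamic Security Group looks like through the NSX REST API (based on the NSX-v 6.x API; the NSX Manager address, group name, and the exact membership keys and criteria values are illustrative and should be verified against the API guide for your version):

# Sketch: create a Security Group whose dynamic membership is any VM with "SQL" in its name
$body = @"
<securitygroup>
  <name>SG-Test-SQL</name>
  <dynamicMemberDefinition>
    <dynamicSet>
      <operator>OR</operator>
      <dynamicCriteria>
        <operator>OR</operator>
        <key>VM.NAME</key>
        <criteria>contains</criteria>
        <value>SQL</value>
      </dynamicCriteria>
    </dynamicSet>
  </dynamicMemberDefinition>
</securitygroup>
"@
$cred = Get-Credential  # NSX Manager credentials; assumes the manager certificate is trusted
Invoke-RestMethod -Uri "https://nsx-manager.corp.local/api/2.0/services/securitygroup/bulk/globalroot-0" `
    -Method Post -Credential $cred -ContentType "application/xml" -Body $body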

In the demo I have created an environment that is set with a zero-trust policy; that means everything is secured and every packet between the VMs is inspected. The inspection is done at the VM’s vNIC level, in an east-west micro-segmentation fashion, which means that if certain traffic is not defined in the DFW, it is not allowed to go through.

This is something that wasn’t really possible to do before NSX.

Our production app database is a SQL Server database, and in the demo the DBA wants to hot-clone it for testing purposes. Obviously, the cloned SQL Server needs to have some network traffic allowed to pass to it, yet it needs to be secured from everything else.

Instead of having the traditional testing firewall zone with its own physical servers, I created the rules that apply to test DBs in the DFW, created a Security Group with dynamic membership, and nested that group in the rules. Now, any database server I clone that corresponds to the criteria will be automatically placed in the rules. What’s really nice about this is that no traffic goes northbound to the perimeter firewall, because packet inspection is done on the vNIC of the VMs (and only the rules relevant to each VM are set on it), and no additional calls to the security admins to configure the firewall are needed after the first configuration has been made. This is a huge time saver, much more efficient in terms of resources (physical servers are now shared between zones), and a much more secure environment than having only a perimeter firewall.

As usual, any comments or feedback are welcome.

Cheers,

Niran


Niran is a VMware Staff Solutions Architect in the Enterprise Application Architecture team who is focused on creating solutions for running Microsoft operating systems and applications on vSphere and vCloud Air platforms, and on providing top deal support to strategic customers globally.

Define SDDC Success Based on IT Outcomes

By Andrea Siviero

You’ve just deployed a new technology solution; how do you define whether or not it was a success?

People often have difficulty agreeing on the definition of “success” because there are two interconnected dimensions in which a project can be judged as a success or a failure. The first is project management success (delivering in accordance with the agreed-upon project objectives), and the second is outcome success (the amount of value the project delivers once it is complete).

Of course, getting agreement on how to define success is not always easy, but based on my day-to-day experience with customers, outcome success is desired over project management success.

Outcomes Are Worth More Than Services

Buying a service rather than an outcome is similar to paying to use equipment at a gym versus working with a personal trainer, whose job is to help you produce an outcome. The latter is worth more than the former.

VMware’s IT Outcomes support the top-priority initiatives for CIOs and impact key business metrics; you can check the dedicated website here.

In my (humble) opinion, regardless of which IT Outcomes you are focused on, there are three important factors that contribute to success:

People, Processes, and Architecture.

Based on my experience, customers tend to focus on architecture and technology, sometimes paying less attention to the people and process factors which can contribute more to success. Here is a real-life example from my personal experience.

ASiviero_Simplify the Situation

I was involved with a successful project implementation where all the project’s technical objectives were achieved, but the infrastructure and operations manager did not feel the desired outcomes were achieved. And that manager was right!

After spending an hour talking with the teams, I realized what a great job the consultants had done implementing and demonstrating all the capabilities of their new SDDC.

However, due to their experience, expectations, and culture, they weren’t able to reorganize their teams and processes to take full advantage of the desired outcomes (Speed, Agility and Security).

ASiviero_Amazing SDDC

Here is a summary of the best practices I’ve suggested as a way to leverage VMware technical account managers as coaches.

1 – People

ASiviero_Small Cross Functional Team

  1. Create a blended team of skilled workers with multi-domain and multi-disciplinary knowledge and expertise, and deliver cross-team training.
  2. Encourage autonomy with common goals and operating principles, and focus on service delivery.
  3. Push them to share lessons learned with other teams and expand their use of virtual networking and security.

2 – Process

ASiviero_Application Level Visibility

  1. Decompose management and troubleshooting tasks along virtual and physical boundaries.
  2. Automate manual tasks to improve efficiency and reduce errors.
  3. Correlate the end-to-end view of application health across compute, storage, and networking.

3 – Architecture

ASiviero_Key Requirements for SDDC

  1. Build your SDDC using a design validated by experts.
  2. Implement a comprehensive data center design.
  3. Add in app and network virtualization incrementally.

Putting it all together

ASiviero_Putting it All Together

Achieving 100% of a project’s intended outcomes depends not only on the technology implementation, but also on the organizational transformation required to ensure the proper implementation of people and process innovation.


Andrea Siviero is a ten-year veteran of VMware and a senior solutions architect member of Professional Services Engineering (PSE) for the Software-Defined Datacenter (SDDC), a part of the Global Technical Solutions (GTS) team. Prior to PSE, Andrea spent three years as a pre-sales system engineer and three years as a post-sales consultant architect for cloud computing and desktop virtualization solutions, focusing on very large and complex deployments, especially for service providers in the finance and telco sectors.