
Category Archives: IT Best Practices

vRO Architecture Considerations When Digitally Signing Packages

By Spas Kaloferov

In this blog post we will take a look at how digitally signing packages in VMware vRealize® Orchestrator™ (vRO) may affect the way you deploy vRO in your environment.

In some use cases, digitally signing workflow packages may affect your vRO architecture and deployment. Let’s consider a few examples.

Use Case 1 (Single Digital Signature Issuer)

Let’s say you have vRO ServerA and vRO ServerB in your environment. You’ve performed the steps outlined in How to Change the Package Signing Certificate of a vRO Appliance (SKKB1029) to change the package signing certificate (PSC) on vRO ServerA, export the keystore, and import it on vRO ServerB. This allows the following:

  • vRO ServerA can digitally sign workflow packages, and vRO ServerB can read packages digitally signed by vRO ServerA.
  • vRO ServerB can digitally sign workflow packages, and vRO ServerA can read packages digitally signed by vRO ServerB.
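
The keystore export/import flow above can be sketched as a toy Python model. Everything here is illustrative: the class, certificate names, and trust logic are invented for the example; real vRO uses a Java keystore with X.509 certificates.

```python
# Toy model: after importing ServerA's exported keystore, ServerB signs
# and verifies packages with the same package signing certificate (PSC).
# Names and logic are illustrative, not the actual vRO implementation.

class VroServer:
    def __init__(self, name, package_signing_cert):
        self.name = name
        self.cert = package_signing_cert

    def import_keystore_from(self, other):
        # Importing the exported keystore replaces this server's PSC.
        self.cert = other.cert

    def sign_package(self, package_name):
        # An exported package carries the identity of the signing cert.
        return (package_name, self.cert)

    def can_read(self, signed_package):
        _, signer_cert = signed_package
        return signer_cert == self.cert

server_a = VroServer("ServerA", "company-psc")
server_b = VroServer("ServerB", "default-psc-b")
server_b.import_keystore_from(server_a)

pkg = server_a.sign_package("com.example.workflows")
# server_b can now read packages signed by server_a, and vice versa.
```

A third server that has not imported the same keystore would fail the `can_read` check, which is exactly the situation the next question raises.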

Now what happens when you add vRO ServerC?

Continue reading

Mini Post: How to Change the Package Signing Certificate of a vRO Appliance (Update)

By Spas Kaloferov

Importing Digitally Signed Packages to a Different Destination vRO (vRealize Orchestrator) Server

What we did in the previous post was change the package signing certificate (PSC) on a vRO server to match our company requirements. The certificate will be used to digitally sign packages we export from vRO.

If you will import digitally signed workflow packages only to their original vRO, no further steps are required.

If you will import digitally signed workflow packages to a different vRO, additional configuration steps are required on the destination vRO. Continue reading

VMware Horizon 7 Instant Clones Best Practices

By Dale Carter

Recently, I have been working with Instant Clones in my lab. Although I have found this easy to get up and running (for more information, see my blog here), it hasn’t been easy to find best practices around configuring Instant Clones, as they are so new.

I reached out to the engineering team, and they provided me with the following best practices for using Instant Clones in VMware Horizon 7.0.2.

Check OS Support for Instant Clones

The following table shows what desktop operating systems are supported when using Instant Clones.

Guest Operating System | Version           | Edition                     | Service Pack
Windows 10             | 64-Bit and 32-Bit | Enterprise                  | None
Windows 7              | 64-Bit and 32-Bit | Enterprise and Professional | SP1

For more information, see the architecture planning guide.

Remote Monitor Limitations

If you use Instant Clone desktop pools, the maximum number of monitors that you can use to display a remote desktop is two, with a resolution of up to 2560 x 1600. If your users require more monitors or a higher resolution, I recommend using a Linked Clone desktop pool for these users.

For more information, see the architecture planning guide.

Instant Clones on vSAN

When running Instant Clones on vSAN, it is recommended to use the R5 configuration, which has the following settings:

  • Name: R5
  • Checksum: Yes
  • RAID Level: 5
  • Deduplication and Compression: No
  • Client Cache: Enabled
  • Sparse Swap: Disabled

For more information, see the VMware Horizon 7 on VMware Virtual SAN 6.2 All-Flash, Reference Architecture.

Unsupported Features when using Instant Clones

The following features are currently not supported when using Instant Clones.

View Persona Management

The View Persona Management feature is not supported with Instant Clones. I recommend VMware User Environment Manager for managing the user’s environment settings.

For more information, see the architecture planning guide.

3D Graphics Features

The software and hardware accelerated graphics features available with the Blast Extreme or PCoIP display protocol are currently not supported with Instant Clones desktops. If your users require this feature, I recommend you use a Linked Clone desktop for them.

For more information, see the architecture planning guide.

Virtual Volumes

VMware vSphere Virtual Volumes datastores are currently not supported for Instant Clone desktop pools. For Instant Clone desktop pools, you can use other storage options, such as VMware Virtual SAN.

For more information, see the architecture planning guide.

Persistent User Disk

Instant Clone pools do not support the creation of a persistent virtual disk. If you have a requirement to store a user’s profile and application data on a separate disk, you can use the writeable disk feature of VMware App Volumes to store this data. The App Volumes writeable volume can also be used to store user installed applications.

For more information, see the architecture planning guide.

Disposable Virtual Disk

Instant Clone pools do not support configuration of a separate, disposable virtual disk for storing the guest operating system’s paging and temp files. Each time a user logs out of an Instant Clone desktop, Horizon View automatically deletes the clone and provisions and powers on another Instant Clone based on the latest OS image available for the pool. Any guest operating system paging and temp files are automatically deleted during the logoff operation.

For more information, see the architecture planning guide.

Hopefully, this information will help you configure Instant Clones in your environment. I would like to thank the VMware Engineering team for helping me put this information together.


Dale Carter is a Senior Solutions Architect and member of the CTO Ambassadors. Dale focuses on the End User Compute space, where he has become a subject matter expert in a number of VMware products. Dale has more than 20 years’ experience working in IT, having started his career in Northern England before moving to Spain and finally the USA. Dale currently holds a number of certifications, including VCP-DV, VCP-DT, VCAP-DTD, and VCAP-DTA. For more blog posts from Dale, visit his website at http://vdelboysview.com.

Architecting an Internet-of-Things (IoT) Solution

By Andrea Siviero

When Luke Skywalker asks Obi-Wan Kenobi, “What is the Force?” the answer is, “It’s an energy field created by all living things. It surrounds us and penetrates us; it binds the galaxy together.”

According to Intel, there are 15 billion devices on the Internet today. By 2020, that number will grow to 200 billion. To meet the demand for connectivity, cities are spending $41 trillion to create the infrastructure to accommodate it.

What I want to talk about in this short article is how to architect an IoT solution, and the challenges in this area.

asiveiro_iot-solution

In a nutshell, connecting “things” to a “platform,” where business apps can consume information, is achieved two ways:

  • Simple “direct” connection (2-Tiered approach)
  • Using a “gateway” (3-Tiered approach)

The 3-Tier Approach: Introducing IoT Gateways

You may now be wondering, “what exactly are the reasons behind introducing a gateway into your IoT architecture?”

The answer is in the challenges introduced by the simple connection:

  • Security threats: the more “things” that are out there, the more “doors” that can be opened
  • Identity management: a huge number of devices and configuration changes
  • Configurations and updates can become a complex problem

What Is/Isn’t an IoT Gateway?

An IoT Gateway:

  • Is a function, not necessarily a physical device
  • Is not just a dumb proxy that forwards data from sensors to backend services (that would be highly inefficient in terms of performance and network utilization).
  • Performs pre-processing of information in the field—including message filtering and aggregation—before being sent to the data center.

asiveiro_filtering-aggregation
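
As a sketch of that pre-processing step, the toy Python function below filters out-of-range sensor readings and aggregates the survivors into one summary message. The thresholds and message fields are assumptions for illustration, not a real gateway protocol.

```python
# Illustrative gateway pre-processing: drop readings outside a valid
# range (filtering), then reduce the rest to one summary (aggregation)
# before anything is sent to the data center.

from statistics import mean

def preprocess(readings, min_valid=0.0, max_valid=100.0):
    """Filter out-of-range sensor readings, then aggregate the rest."""
    valid = [r for r in readings if min_valid <= r <= max_valid]
    if not valid:
        return None  # nothing worth sending upstream
    return {
        "count": len(valid),
        "min": min(valid),
        "max": max(valid),
        "mean": mean(valid),
    }

# One aggregated message replaces many raw samples on the wire.
summary = preprocess([21.5, 22.0, -3.0, 21.8, 250.0])
```

The bandwidth win is the point: instead of forwarding every raw sample northbound, the gateway sends a compact summary at whatever cadence the backend actually needs.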

Where is All This Leading?

As enterprises transform into digital businesses, they need to find ways to:

  • Improve efficiencies
  • Generate new forms of revenue
  • Deliver new and exciting customer experiences

These will be the tipping points for enterprise IoT to really take off.

For organizations that want to deploy IoT apps across multiple gateway vendors—and those that wish to buy solutions that are not locked into a single silo—IoT can bring problems and frustration.

VMware has taken the first steps in the IoT journey, making the IoT developer’s life easier by introducing Liota (Little IoT Agent). Liota is a vendor-neutral open source software development kit (SDK) for building secure IoT gateway data and control orchestration, and it resides primarily on IoT gateways.

Liota is available to developers for free now at https://github.com/vmware/liota, and it works with any gateway or operating system that supports Python.

If you are attending VMworld, make a point to visit the Internet of Things Experience zone. Within this pavilion, we will have several pods showing live demos with augmented reality experiences that bring life to workflows across a variety of industries.

May the force be with you.


Andrea Siviero is a ten-year veteran of VMware and a senior solutions architect member of Professional Services Engineering (PSE) for the Software-Defined Datacenter (SDDC), a part of the Global Technical Solutions (GTS) team. Prior to PSE, Andrea spent three years as a pre-sales system engineer and three years as a post-sales consultant architect for cloud computing and desktop virtualization solutions, focusing on very large and complex deployments, especially for service providers in the finance and telco sectors.

VMware Validated Design for SDDC 3.0 – Now Available!

By Jonathan McDonald

I mentioned all the fun details on the VMware Validated Design in my previous blog post. I am happy to report that we have just released the next revision of it, version 3.0. This takes what everyone already knew and loved about the previous version—and made it better!

In case you have not heard of VMware Validated Designs, they are a construct used to build a reference design that:

  • Is built by expert architects who have many years of experience with the products, as well as integrations
  • Allows repeatable deployment of the end solution, which has been tested to scale
  • Integrates with the development cycle, so that if an issue is identified with the integrations and scale testing, it can be quickly identified and fixed by the developers before the products are released

All in all, this is an amazing project that I am excited to have worked on, and I am happy to finally talk about it publicly!

What’s New with the VMware Validated Design for SDDC 3.0?

There are quite a lot of changes in this version of the design. I am not going to go into every detail in this blog, but here is an overview of the major ones:

  • Full Dual Region Support—Previously, in the VMware Validated Design, although there was mention made of having dual sites, there was only implementation guidance for a single site. In this release we have full guidance and support on configuring a dual region environment.
  • Disaster Recovery Guidance—With the addition of dual region support, guidance is needed for disaster recovery. This includes installation, configuration, and operational guidance for VMware Site Recovery Manager, and vSphere Replication. Operationally, plans are created to not only allow for failover and failback of the management components between sites, but also to test these plans as well.
  • Reduced minimum footprint with a 2-pod design—In prior versions of the VMware Validated Design, we focused on a 3-pod architecture, which used 12 ESXi hosts as the recommended minimum:
    • 4 for management
    • 4 for compute
    • 4 for the NSX Edge cluster

In this release the default configuration is to use a 2-pod design which collapses the compute and Edge clusters. This allows for the minimum footprint to be 8 ESXi hosts:

  • 4 for management
  • 4 for shared Edge and compute functions

This marks a significant reduction in size for small or proof-of-concept installations, which can be later expanded to a full 3-pod design if required.

  • Updated bill of materials—The bill of materials has been updated to include new versions of many software components, including NSX for vSphere and vRealize Log Insight. In addition, Site Recovery Manager and vSphere Replication have been added to support the new design.
  • Upgrade Guidance—As a result of the upgraded bill of materials, guidance has been provided for any component which needs upgrading as a result of this revision. This guidance will continue to grow as products are released and incorporated into the design.

The good news is that the actual architecture has not changed significantly. As always, if a particular component design does not fit the business or technical requirements for whatever reason, it can be swapped out for another similar component. Remember, the VMware Validated Design for SDDC is one way of putting an architecture together that has been rigorously tested to ensure stability, scalability, and compatibility. Our design has been created to ensure the desired outcome will be achieved in a scalable and supported fashion.

Let’s take a more in-depth look at some of the changes.

Virtualized Infrastructure

The SDDC virtual infrastructure has not changed significantly. Each site consists of a single region, which can be expanded. Each region includes:

  • A management pod
  • A shared edge and compute pod
    jmcdonald_compute-management-pod

This is a standard design practice that has been tested in many customer environments. The following is the purpose of each pod.

Management Pod

Management pods run the virtual machines that manage the SDDC. These virtual machines host:

  • vCenter Server
  • NSX Manager
  • NSX Controller
  • vRealize Operations
  • vRealize Log Insight
  • vRealize Automation
  • Site Recovery Manager
  • And other shared management components

All management, monitoring, and infrastructure services are provisioned to a vCenter Server High Availability cluster which provides high availability for these critical services. Permissions on the management cluster limit access to only administrators. This limitation protects the virtual machines that are running the management, monitoring, and infrastructure services.

Shared Edge and Compute Pod

The shared edge and compute pod runs the required NSX services to enable north-south routing between the SDDC and the external network and east-west routing inside the SDDC. This pod also hosts the SDDC tenant virtual machines (sometimes referred to as workloads or payloads). As the SDDC grows, additional compute-only pods can be added to support a mix of different types of workloads for different types of SLAs.

Disaster Recovery and Data Protection

Nobody wants a disaster to occur, but in case something does happen, you need to be prepared. The VMware Validated Design for SDDC 3.0 includes guidance on using VMware products and technologies for both data protection and disaster recovery.

Data Protection Architecture

vSphere Data Protection is used as the backup solution for the architecture. It allows the virtual machines involved in the solution to be backed up and restored, which helps you meet many company policies for recovery as well as data retention. The design spans both regions, and looks as follows:

jmcdonald_vsphere-data-protection

Disaster Recovery

In addition to backups, the design includes guidance on using Site Recovery Manager to protect the configuration. This includes a design that is used for both regions, and guidance on using vSphere Replication to replicate the data between sites. It also details how to create protection groups as well as recovery plans to ensure the management components are failed over between sites, including vRealize Automation and vRealize Operations Manager VMs where appropriate.

The architecture is shown as follows:
jmcdonald_vrealize-replicated

The Cloud

Of course, no SDDC is complete without a cloud platform and the design still includes familiar guidance on installation of the cloud components as well. vRealize Automation is definitely a part of the design and has not significantly changed, other than adding multiple region support. It is a big piece but I did want to show the conceptual design of the architecture here because it provides a high level overview of the components, user types, and operations in workload provisioning.

jmcdonald_workload-provisioning-end-user

The beauty here is that the design has been tried and tested to scale in the Validated design. This will allow for issues to be identified and fixed before the platform has been deployed.

Monitoring and Operational Procedures

Finally, last but not least, what design is complete without proper monitoring and operational procedures? The VMware Validated Design for SDDC includes a great design for both vRealize Operations Manager and vRealize Log Insight. In addition, it goes into all the different practices for backing up, restoring, and operating the actual cloud that has been built. It doesn’t go as far as a formal operational transformation for the business, but it does a great job of showing how many standard practices can be used as a basis for defining what you—as a business owner—need in order to operate a cloud.

To show a bit of the design, vRealize Operations Manager contains functional elements that collaborate for data analysis and storage, and supports the creation of clusters of nodes with different roles:

jmcdonald_remote-collector

Overall, this is a really powerful platform that revolutionizes the way that you see the environment.

Download It Now!

Hopefully, this overview of the changes in the new VMware Validated Design for SDDC 3.0 has been useful. There is much more to the design than just the few items I’ve told you about in this blog, so I encourage you to check out the Validated Designs webpage for more details.

In addition—if you are interested—VMware Professional Services are available to help with the installation and configuration of a VMware Validated Design as well.

I hope this helps you in your architectural design discussions to show that integration stories are not only possible, but can make your experience deploying an SDDC much easier.

Look for myself and other folks from the Professional Services Engineering team and Integrated Systems Business Unit from VMware at VMworld Europe. We are happy to answer any questions you have about VMware Validated Designs!


Jonathan McDonald is a Technical Solutions Architect for the Professional Services Engineering team. He currently specializes in developing architecture designs for core virtualization and software-defined storage, as well as providing best practices for upgrading and health checks for vSphere environments.

VMworld Session Preview: Advanced Network Services with NSX

By Romain Decker

It is no secret that IT is in constant evolution. IT trends such as cloud adoption, distributed applications, micro-services, and the Internet of Things have emerged over recent years.

Nevertheless, the focus is still on applications and on how they compute and deliver data to consumers. Whether their role is to generate revenue or to run industries, logistics, health care, or even your programmable thermostat, the top-level goals of organizations remain security, agility, and operational efficiency. Everything else associated with applications, however, has changed:

  • Threats have become more advanced and persistent.
  • Users now access the data center from devices and locations that represent significant challenges.
  • Application architectures are now more widely distributed and more dynamic than ever before.
  • Infrastructure changes have evolved with the convergence of resources and questions around public cloud offerings.

VMware NSX is a perfect fit to address these concerns from the network and security standpoint. NSX reproduces all the network and security services of the data center in logical space, for greater speed and agility and for deeper security.

Visit my session at VMworld Las Vegas (Session ID: NET7907) to hear the detailed presentation on NSX firewall, load balancing and SSL-VPN capabilities.

And don’t forget, the GUI is not the king! 😉


Presenter: Romain Decker
Session Number: NET7907
Session Title: Advanced Network Services with NSX
Date and Time: 8/30/16 (Tuesday) 2:00 PM

Abstract: Applications are everywhere and increasingly more complex. They require much more than switching and routing on the network side. Clouds should be able to host any applications, including the complex ones. This session will discuss the concepts for designing and operating NSX network services such as firewalling, load balancing, and VPN. We will examine and explain how you can better consume those services by automating them, or by using other mechanisms such as NSX API. After this session, you will leave with a better understanding of how NSX Network and Security services work, and how to leverage them to better support your applications.

Schedule Builder


Romain Decker is a Senior Solutions Architect member of Professional Services Engineering (PSE) for the Software-Defined Datacenter (SDDC) portfolio – a part of the Global Technical & Professional Solutions (GTPS) team.

VMware Validated Design for SDDC 2.0 – Now Available

By Jonathan McDonald

Recently I have been involved in a rather cool project inside VMware, aimed at validating and integrating all the different VMware products. The most interesting customer cases I see are related to this work because oftentimes products work independently without issue—but together can create unique problems.

To be honest, it is really difficult to solve some of the problems when integrating many products together. Whether we are talking about integrating a ticketing system, building a custom dashboard for vRealize Operations Manager, or even building a validation/integration plan for Virtual SAN to add to existing processes, there is always the question, “What would the experts recommend?”

The goal of this project is to provide a reference design for our products, called a VMware Validated Design. The design is a construct that:

  • Is built by expert architects who have many years of experience with the products as well as the integrations
  • Allows repeatable deployment of the end solution, which has been tested to scale
  • Integrates with the development cycle, so if there is an issue with the integration and scale testing, it can be identified quickly and fixed by the developers before the products are released.

All in all, this has been an amazing project that I’ve been excited to work on, and I am happy to be able to finally talk about it publicly!

Introducing the VMware Validated Design for SDDC 2.0

The first of these designs—under development for some time—is the VMware Validated Design for SDDC (Software-Defined Data Center). The first release was not available to the public and only internal to VMware, but on July 21, 2016, version 2.0 was released and is now available to everyone! This design builds not only the foundation for a solid SDDC infrastructure platform using VMware vSphere, Virtual SAN, and VMware NSX, but it builds on that foundation using the vRealize product suite (vRealize Operations Manager, vRealize Log Insight, vRealize Orchestrator, and vRealize Automation).

The VMware Validated Design for SDDC outcome requires a system that enables an IT organization to automate the provisioning of common, repeatable requests and to respond to business needs with more agility and predictability. Traditionally, this has been referred to as Infrastructure as a Service (IaaS); however, the VMware Validated Design for SDDC extends the typical IaaS solution to include a broader and more complete IT solution.

The architecture is based on a number of layers and modules, which allows interchangeable components to be part of the end solution or outcome, such as the SDDC. If a particular component design does not fit the business or technical requirements for whatever reason, it should be able to be swapped out for another similar component. The VMware Validated Design for SDDC is one way of putting an architecture together that has been rigorously tested to ensure stability, scalability, and compatibility. Ultimately, however, the system is designed to ensure the desired outcome will be achieved.

The conceptual design is shown in the following diagram:

JMCDonald_VVD Conceptual Design

As you can see, the design brings a lot more than just implementation details. It includes many common “day two” operational tasks such as management and monitoring functions, business continuity, and security.

To simplify such a complex design, it has been broken up into:

  • A high-level Architecture Design
  • A Detailed Design with all the design decisions included
  • Implementation guidance.

Let’s take an in-depth look.

Continue reading

BCDR: Some Things to Consider When Upgrading Your VMware Disaster Recovery Solution

By Julienne Pham

Once upon a time, you protected your VMs with VMware Site Recovery Manager, and now you are wondering how to upgrade your DR solution with minimum impact on the environment. Is it as seamless as you think?

During my days in Global Support and working on customer Business Continuity/Disaster Recovery (BCDR) projects, I found it intriguing how vSphere components can put barriers in an upgrade path. Indeed, one of the first things I learned was that timing and the update sequence of my DR infrastructure was crucial to keep everything running, and with as little disruption as possible.

If we look more closely, this is a typical VMware Site Recovery Manager setup:

JPham_SRM 6x

And in a pyramid model, we have something like this:

JPham_SRM Pyramid

Example of a protected site

So, where do we start our upgrade?

Upgrade and maintain the foundation

You begin with the hardware, and then the vSphere version you are upgrading to. You’ll see a lot of new features available, along with bug fixes, so your hardware and firmware might need some adjustments to support new features and enhancements. At a minimum, it is important to check the compatibility of the hardware and software you are upgrading to.

In a DR scenario, it is important to check storage replication compliance

This is where you ensure your data replicates according to your RPO.
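
A minimal sketch of such a compliance check, in Python. The function and its inputs are hypothetical; real tooling would read the last successful sync time from the replication solution itself.

```python
# Hypothetical RPO-compliance check: the age of the last successful
# replication must not exceed the configured RPO.

from datetime import datetime, timedelta

def rpo_compliant(last_sync, rpo_minutes, now):
    """True if the newest replicated data is within the RPO window."""
    return (now - last_sync) <= timedelta(minutes=rpo_minutes)

check_time = datetime(2016, 9, 1, 12, 0)
ok = rpo_compliant(datetime(2016, 9, 1, 11, 50), rpo_minutes=15, now=check_time)
late = rpo_compliant(datetime(2016, 9, 1, 11, 30), rpo_minutes=15, now=check_time)
```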

If you are using vSphere Replication or Storage Array Replication, you should check the upgrade path and the dependency with vSphere and SRM.

  • As an example, VR cannot be upgraded directly from 5.8 to 6.1
  • You might need to update the Storage Replication Adaptor too.
  • You can probably find other examples of things that won’t work, or find work-arounds you’ll need.
  • You can find some useful information in the VMware Compatibility Guide

Architecture change

If you are looking to upgrade from vSphere 5.5 to 6.x, for example, you should check whether you need to migrate from an embedded SSO installation to an external one for more flexibility, as you might not be able to change it later. As VMware SRM is dependent on the health of vCenter, you might be better off upgrading this component first as a prerequisite.

Before you start you might want to check out the informative blog, “vSphere Datacenter Design – vCenter Architecture Changes in vSphere 6.0 – Part 1.”

The sites are interdependent

Once the foundation path is planned out, you have to think about how to minimize business impact.

Remember that if your protected site workload is down, you can always trigger a DR scenario, so it is in your best interest to keep the secondary site management layer fully functional and to upgrade its VMware SRM and vCenter last.

VMware upgrade path compatibility

Some might assume that you can upgrade from one version to another without compatibility issues coming up. To avoid surprises, I recommend looking into our compatibility matrix and validating the different product version upgrade paths.

For example, the upgrade of SRM 5.8 to 6.1 is not supported. So, what implications might be related to vCenter and SRM compatibility during the upgrade?

JPham_Upgrade Path Sequence
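
One way to think about the matrix is as a table of supported next versions, which you can walk to find a valid chain of upgrades. The toy Python sketch below does exactly that; the table entries are illustrative only, so always validate real paths against the official compatibility matrix.

```python
# A toy supported-upgrade-path table, in the spirit of the VMware
# compatibility matrix. Entries are illustrative, not authoritative.

SUPPORTED_UPGRADES = {
    "SRM": {"5.8": ["6.0"], "6.0": ["6.1"]},  # no direct 5.8 -> 6.1 entry
}

def upgrade_path(product, current, target, _seen=None):
    """Return a supported chain of versions from current to target, or None."""
    if current == target:
        return [current]
    _seen = _seen or set()
    _seen.add(current)
    for nxt in SUPPORTED_UPGRADES.get(product, {}).get(current, []):
        if nxt in _seen:
            continue
        rest = upgrade_path(product, nxt, target, _seen)
        if rest:
            return [current] + rest
    return None

# SRM 5.8 cannot jump straight to 6.1; the walk finds 5.8 -> 6.0 -> 6.1.
path = upgrade_path("SRM", "5.8", "6.1")
```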

Back up, back up, back up

The standard consideration is to run backups before every upgrade. A VM snapshot might not be enough in certain situations if you are at different upgrade stages at different sites. You need to carefully plan and synchronise all the different database instances for VMware Site Recovery Manager and vCenter at both sites, and eventually the vSphere Replication databases.

I hope this addresses some of the common questions and concerns that might come up when you are thinking of upgrading SRM. Planning and timing are key for a successful upgrade. Many components are interdependent, and you need to consider them carefully to avoid an asynchronous environment with little control over outcomes. Good luck!


Julienne Pham is a Technical Solution Architect for the Professional Services Engineering team. She specialises in SRM and core storage. Her focus is on VIO and the BCDR space.

Demo – Dynamically Enforcing Security on a Hot Cloned SQL Server with VMware NSX

Originally posted on the Virtualize Business Critical Applications blog.

By Niran Even-Chen

VMware NSX is a software-defined solution that brings the power of virtualization to network and security.

There are many great papers about NSX in general (for example, here and here), so the purpose of this demo is not to dive into everything NSX does. Instead, I have focused on one capability in particular: the intelligent grouping of NSX Service Composer combined with the NSX Distributed Firewall (DFW), and how to use it to make life easier for SQL DBAs and security admins. It doesn’t have to be only SQL Server; it can be any other database or application, but for this demo I am focusing on SQL Server.

First, a bit of background: the NSX Service Composer allows us to create groups called “Security Groups.” These Security Groups can have dynamic membership criteria based on multiple factors: part of the computer name of a VM, its guest OS name, the VM name, AD membership, or a tag. (Tags are especially cool, as they can be set automatically by third-party tools like antivirus and IPS products, but that is for a different demo.)

These Security Groups are then placed inside Distributed Firewall (DFW) rules, which allows us to manage thousands of entities with just a few rules and without the need to add those entities to the Security Group manually.
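
To illustrate the idea of dynamic membership nested in a rule, here is a toy model in plain Python. This is not the NSX API; the naming criterion, inventory shape, and rule structure are all invented for the example.

```python
# Toy model of Service Composer-style dynamic grouping: a Security Group
# defined by a membership criterion, referenced by a firewall-style rule.

class SecurityGroup:
    def __init__(self, name, criterion):
        self.name = name
        self.criterion = criterion  # predicate over a VM's attributes

    def members(self, vms):
        return [vm for vm in vms if self.criterion(vm)]

# Criterion: any VM whose name starts with "test-sql" is a test database.
test_dbs = SecurityGroup("Test-SQL", lambda vm: vm["name"].startswith("test-sql"))

inventory = [
    {"name": "prod-sql-01"},
    {"name": "test-sql-clone-01"},  # a hot-cloned DB lands here automatically
]

# A DFW-style rule references the group, not individual VMs, so a freshly
# cloned VM matching the criterion is covered with no rule changes.
rule = {"source": "app-tier", "dest": test_dbs, "port": 1433, "action": "allow"}
matched = [vm["name"] for vm in rule["dest"].members(inventory)]
```

The design point is that the rule never changes: membership is re-evaluated against the criterion, so new clones inherit the policy the moment they appear in inventory.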

In the demo I have created an environment that is set with a zero-trust policy. That means everything is secured and every packet between the VMs is inspected; the inspection is done at the VMs’ vNIC level in an east-west micro-segmentation fashion. If certain traffic is not defined in the DFW, it is not allowed through.

This is something that wasn’t really possible to do before NSX.

Our production app database is a SQL database, and in the demo the DBA wants to hot-clone it for testing purposes. Obviously, the cloned SQL Server needs to have some network traffic allowed to pass to it, yet it needs to be secured from everything else.

Instead of having a traditional testing firewall zone with its own physical servers, I created the rules that apply to test DBs in the DFW, created a dynamic-membership Security Group, and nested that group in the rules. Now, any database server I clone that matches the criteria will automatically be placed in the rules. What’s really nice about this is that no traffic goes northbound to the perimeter firewall, because packet inspection is done on the vNIC of the VMs (and only the rules relevant to each VM are set there), and no additional calls to security admins to configure the firewall are needed after the first configuration has been made. This is a huge time saver, much more efficient in terms of resources (physical servers are now shared between zones), and a much more secure environment than having only a perimeter firewall.

As usual, any comments or feedback are welcome.

Cheers,

Niran


Niran is a VMware Staff Solutions Architect in the Enterprise Application Architecture team, focused on creating solutions for running Microsoft operating systems and applications on the vSphere and vCloud Air platforms and on providing top-deal support to strategic customers globally.

Define SDDC Success Based on IT Outcomes

Andrea SivieroBy Andrea Siviero

You’ve just deployed a new technology solution; how do you define whether or not it was a success?

People often have difficulty agreeing on the definition of "success" because there are two interconnected dimensions in which a project can be judged as a success or a failure. The first is project management success (delivering in accordance with the agreed-upon project objectives), and the second is outcome success (the amount of value the project delivers once it is complete).

Of course, getting agreement on how to define success is not always easy, but based on my day-to-day experience with customers, outcome success is desired over project management success.

Outcomes Are Worth More Than Services

Buying a service rather than an outcome is similar to paying to use equipment at a gym versus working with a personal trainer, whose job is to help you produce an outcome. The latter is worth more than the former.

VMware's IT Outcomes support CIOs' top-priority initiatives and impact key business metrics; you can check the dedicated website here.

In my (humble) opinion, regardless of which IT Outcomes you focus on, there are three important factors that contribute to success:

People, Processes, and Architecture.

Based on my experience, customers tend to focus on architecture and technology, sometimes paying less attention to the people and process factors, which can contribute more to success. Here is a real-life example from my personal experience.

ASiviero_Simplify the Situation

I was involved with a successful project implementation where all the project’s technical objectives were achieved, but the infrastructure and operations manager did not feel the desired outcomes were achieved. And that manager was right!

After spending an hour talking with the teams, I realized what a great job the consultants had done implementing and demonstrating all the capabilities of their new SDDC.

However, due to their experience, expectations, and culture, they weren’t able to reorganize their teams and processes to take full advantage of the desired outcomes (Speed, Agility and Security).

ASiviero_Amazing SDDC

Here is a summary of the best practices I’ve suggested as a way to leverage VMware technical account managers as coaches.

1 – People

ASiviero_Small Cross Functional Team

  1. Create a blended team of skilled workers with multi-domain and multi-disciplinary knowledge and expertise, and deliver cross-team training.
  2. Encourage autonomy with common goals and operating principles, and focus on service delivery.
  3. Push them to share lessons learned with other teams and expand their use of virtual networking and security.

2 – Process

ASiviero_Application Level Visibility

  1. Decompose management and troubleshooting tasks along virtual and physical boundaries.
  2. Automate manual tasks to improve efficiency and reduce errors.
  3. Correlate the end-to-end view of application health across compute, storage, and networking.

3 – Architecture

ASiviero_Key Requirements for SDDC

  1. Build your SDDC using a design validated by experts.
  2. Implement a comprehensive data center design.
  3. Add in app and network virtualization incrementally.

Putting it all together

ASiviero_Putting it All Together

Achieving 100% of a project's intended outcomes depends not only on the technology implementation, but also on the organizational transformation required to ensure the proper implementation of people and process innovation.


Andrea Siviero is a ten-year veteran of VMware and a senior solutions architect in Professional Services Engineering (PSE) for the Software-Defined Datacenter (SDDC), part of the Global Technical Solutions (GTS) team. Prior to PSE, Andrea spent three years as a pre-sales systems engineer and three years as a post-sales consultant architect for cloud computing and desktop virtualization solutions, focusing on very large and complex deployments, especially for service providers in the finance and telco sectors.