
Tag Archives: IT Consulting

VMware Horizon 7 Instant Clones Best Practices

Dale CarterBy Dale Carter

Recently, I have been working with Instant Clones in my lab. Although I have found them easy to get up and running (for more information, see my blog here), it hasn’t been easy to find best practices for configuring Instant Clones, as they are so new.

I reached out to the engineering team, and they provided me with the following best practices for using Instant Clones in VMware Horizon 7.0.2.

Check OS Support for Instant Clones

The following table shows what desktop operating systems are supported when using Instant Clones.

Guest Operating System | Version | Edition | Service Pack
Windows 10 | 64-Bit and 32-Bit | Enterprise | None
Windows 7 | 64-Bit and 32-Bit | Enterprise and Professional | SP1

For more information, see the architecture planning guide.

Remote Monitor Limitations

If you use Instant Clone desktop pools, the maximum number of monitors that you can use to display a remote desktop is two, with a resolution of up to 2560 x 1600. If your users require more monitors or a higher resolution, I recommend using a Linked Clone desktop pool for these users.

For more information, see the architecture planning guide.

Instant Clones on vSAN

When running Instant Clones on vSAN, it is recommended to use the R5 configuration, which has the following settings:

Name | Checksum | RAID Level | Deduplication and Compression | Client Cache | Sparse Swap
R5 | Yes | 5 | No | Enabled | Disabled

For more information, see the VMware Horizon 7 on VMware Virtual SAN 6.2 All-Flash, Reference Architecture.

Unsupported Features when using Instant Clones

The following features are currently not supported when using Instant Clones.

View Persona Management

The View Persona Management feature is not supported with Instant Clones. I recommend the User Environment Manager for managing the user’s environment settings.

For more information, see the architecture planning guide.

3D Graphics Features

The software- and hardware-accelerated graphics features available with the Blast Extreme or PCoIP display protocol are currently not supported with Instant Clone desktops. If your users require this feature, I recommend you use a Linked Clone desktop for them.

For more information, see the architecture planning guide.

Virtual Volumes

VMware vSphere Virtual Volumes datastores are currently not supported for Instant Clone desktop pools. For Instant Clone desktop pools, you can use other storage options, such as VMware Virtual SAN.

For more information, see the architecture planning guide.

Persistent User Disk

Instant Clone pools do not support the creation of a persistent virtual disk. If you have a requirement to store a user’s profile and application data on a separate disk, you can use the writeable disk feature of VMware App Volumes to store this data. The App Volumes writeable volume can also be used to store user installed applications.

For more information, see the architecture planning guide.

Disposable Virtual Disk

Instant Clone pools do not support configuration of a separate, disposable virtual disk for storing the guest operating system’s paging and temp files. Each time a user logs out of an Instant Clone desktop, Horizon View automatically deletes the clone and provisions and powers on another Instant Clone based on the latest OS image available for the pool. Any guest operating system paging and temp files are automatically deleted during the logoff operation.

For more information, see the architecture planning guide.

Hopefully, this information will help you configure Instant Clones in your environment. I would like to thank the VMware Engineering team for helping me put this information together.


Dale Carter is a Senior Solutions Architect and member of the CTO Ambassadors. Dale focuses on the End User Computing space, where he has become a subject matter expert in a number of VMware products. Dale has more than 20 years’ experience working in IT, having started his career in Northern England before moving to Spain and finally the USA. Dale currently holds a number of certifications including VCP-DV, VCP-DT, VCAP-DTD and VCAP-DTA. For more blog posts from Dale, visit his website at http://vdelboysview.com

Architecting an Internet-of-Things (IoT) Solution

Andrea SivieroBy Andrea Siviero

When Luke Skywalker asked Obi-Wan Kenobi, “What is the Force?” the answer was, “It’s an energy field created by all living things. It surrounds us and penetrates us; it binds the galaxy together.”

According to Intel, there are 15 billion devices on the Internet today, and by 2020 that number will grow to 200 billion. In order to meet the demand for connectivity, cities are spending $41 trillion to create the infrastructure to accommodate it.

What I want to talk about in this short article is how to architect an IoT solution, and the challenges in this area.

asiveiro_iot-solution

In a nutshell, connecting “things” to a “platform,” where business apps can consume information, is achieved in one of two ways:

  • Simple “direct” connection (2-Tiered approach)
  • Using a “gateway” (3-Tiered approach)

The 3-Tier Approach: Introducing IoT Gateways

You may now be wondering, “what exactly are the reasons behind introducing a gateway into your IoT architecture?”

The answer is in the challenges introduced by the simple connection:

  • Security threats; the more “things” that are out there, the more “doors” that can be opened
  • Identity management; a huge number of devices and configuration changes
  • Configurations/updates can become a complex problem

What Is/Isn’t an IoT Gateway?

An IoT Gateway:

  • Is a function, not necessarily a physical device
  • Is not just a dumb proxy that forwards data from sensors to backend services (because that would be highly inefficient in terms of performance and network utilization).
  • Performs pre-processing of information in the field—including message filtering and aggregation—before it is sent to the data center.

asiveiro_filtering-aggregation

Where is All This Leading?

As enterprises transform into digital businesses, they need to find ways to:

  • Improve efficiencies
  • Generate new forms of revenue
  • Deliver new and exciting customer experiences

These will be the tipping points for enterprise IoT to really take off.

For organizations that want to deploy IoT apps across multiple gateway vendors—and those that wish to buy solutions that are not locked into a single silo—IoT can bring problems and frustration.

VMware has taken the first steps in the IoT journey to make the IoT developer’s life easier by introducing Liota (Little IoT Agent). Liota is a vendor-neutral, open source software development kit (SDK) for building secure IoT gateway data and control orchestration; it resides primarily on IoT gateways.

Liota is available to developers for free now at https://github.com/vmware/liota, and it works with any gateway or operating system that supports Python.

If you are attending VMworld, make a point to visit the Internet of Things Experience zone. Within this pavilion, we will have several pods showing live demos with augmented reality experiences that bring life to workflows across a variety of industries.

May the force be with you.


Andrea Siviero is a ten-year veteran of VMware and a senior solutions architect member of Professional Services Engineering (PSE) for the Software-Defined Datacenter (SDDC), a part of the Global Technical Solutions (GTS) team. Prior to PSE, Andrea spent three years as a pre-sales system engineer and three years as a post-sales consultant architect for cloud computing and desktop virtualization solutions focusing on very large and complex deployments, especially for service providers in the finance and telco sectors.

How to Add a Linux Machine as PowerShell Host in vRO

By Spas Kaloferov

Introduction

In this article we will look into the alpha version of Microsoft Windows PowerShell v6 for both Linux and Microsoft Windows. We will show how to execute PowerShell commands between Linux, Windows, and VMware vRealize Orchestrator (vRO):

  • Linux to Windows
  • Windows to Linux
  • Linux to Linux
  • vRO to Linux

We will also show how to add a Linux PowerShell host (PSHost) in vRO.

Currently, the alpha version of PowerShell v6 does not support the PSCredential object, so we cannot use the Invoke-Command cmdlet to programmatically pass credentials and execute commands from vRO, through a Linux PSHost, to other Linux or Windows machines. Conversely, we cannot execute from vRO -> through a Windows PSHost -> to Linux machines.

To see how we used the Invoke-Command method to do this, see my blog Using CredSSP with the vCO PowerShell Plugin (SKKB1002).
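For context, here is a minimal sketch of that Invoke-Command pattern as it works against a Windows PSHost today; the host name and credentials below are lab placeholders, not values from this article:

# Build a PSCredential programmatically (this object is what the Linux alpha build cannot yet handle)
$secPass = ConvertTo-SecureString 'VMware1!' -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential('vmware\administrator', $secPass)

# Pass the credential to a remote command on a Windows PSHost
Invoke-Command -ComputerName lan1dc1.vmware.com -Credential $cred -ScriptBlock { $env:computername }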

In addition to not supporting the PSCredential object, the alpha version doesn’t support WinRM. WinRM is Microsoft’s implementation of the WS-Management protocol, a standard Simple Object Access Protocol (SOAP)-based, firewall-friendly protocol that enables hardware and operating systems from different vendors to interoperate. Therefore, when adding a Linux machine as a PowerShell host in vRO, we will be using SSH instead of WinRM as the protocol of choice.
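As a hedged illustration, assuming a PowerShell v6 alpha build with SSH-based remoting available and an SSH server configured on the target machine, an SSH-style remote command looks like this (host and user names are placeholders):

# SSH-based remoting uses -HostName/-UserName instead of -ComputerName/-Credential
Invoke-Command -HostName lnx01.vmware.com -UserName root -ScriptBlock { $PSVersionTable.PSVersion }

# The same mechanism works interactively
Enter-PSSession -HostName lnx01.vmware.com -UserName root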

The PowerShell v6 RTM version is expected to support WinRM, so we will be able to add the Linux PSHost with WinRM, and not SSH.

So, let’s get started.

Continue reading

The Anatomy of an Instant Clone

By Travis Wood

If you’ve used Horizon View over the last few years, then you most likely have come across linked clones. Linked clones use a parent image, called a “replica,” that serves read requests to multiple virtual machines (VMs), and the writes in each desktop are captured on their own delta disk. Replicas can also be used to change desktop update methodologies; instead of updating every desktop, you can update the parent image and recompose the rest of the desktops.

Horizon 7 has introduced a new method of provisioning with Instant Clones. Instant Clones are similar to linked clones in that all desktops read from a replica disk and write to their own disk, but Instant Clone takes it one step further by doing the same thing with memory. Instant Clones utilize a new feature of vSphere 6 where desktop VMs are forked (that is, Instant Clones are created) off a running VM—instead of cloning a powered-off VM—which provides savings for provisioning, updates, and memory utilization.

Golden Image

With Instant Clones you start with your golden image, in a way that is similar to linked clones. The golden image is the VM you install the operating system on, then join to the domain, and install user applications on; you follow the same OS optimization procedures you normally would.

When you’re done, release its IP address, shut it down, and create a snapshot. Now you are ready to create your Instant Clone desktop pool. This VM should have VM Tools installed, along with the Horizon Agent with the Instant Clone module. It is NOT possible to have the Instant Clone and Composer modules co-installed, so you will always need different snapshots if using Instant Clones and linked clones from the same golden image. Reservations can be set on the golden image and they will be copied to the Instant Clones, reducing the size of the VSwap file. It is important to note that the golden image must be on storage that’s accessible to the host you are creating your Instant Clone desktop pool on.
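As a rough sketch of those final preparation steps (the VM and snapshot names are placeholders, and the PowerCLI session is assumed to already be connected with Connect-VIServer):

# Inside the golden image guest: release the IP address and shut the VM down
ipconfig /release
Stop-Computer -Force

# From a PowerCLI prompt: snapshot the powered-off golden image for the Instant Clone pool
New-Snapshot -VM 'W10-GoldenImage' -Name 'InstantClone-Base' -Description 'Base snapshot for the Instant Clone pool'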

Template

When you create your pool, Horizon will create a template. A template is a linked clone from your golden image, created on the same datastore as the golden image. It will have the name cp-template, and will be in the folder ClonePrepInternalTemplateFolder. Template disk usage is quite small, about 60 MB. There will be an initial power-on after the template is created, but it will then shut off.

TWood_Horizon Template

Replica

Next, Horizon will create a replica, which is the same as a Linked Clone replica. It is a thin-provisioned, full clone of the template VM. This will serve as the common read disk for all of your Instant Clones, so it can be tiered onto appropriate storage through the Horizon Administrator console, the same way it is done with Linked Clones. Of course, if you are using VSAN, there is only one datastore, so tiering is done automatically. Horizon will also create a CBRC Digest file for the replica. The replica will be called cp-replica-<GUID> and will be in the folder ClonePrepReplicaVmFolder. The disk usage of the replica will depend on how big your Gold Master is, but remember, it’s thin provisioned and not powered on, so you will not have a VSwap file.

TWood_Horizon Replica

Parent

Horizon will now create the final copy of the original VM, called a parent, which will be used to fork the running VMs. The parent is created on every host in the cluster; remember, we are forking running VMs here, so every host needs to have a running VM. These will be placed on the same datastore as the desktop VMs, with one per host per datastore. Because these are powered on, they have a VSwap file the size of the allocated vMEM. In addition, there will be a small delta disk to capture the writes from booting the parent VM, plus the VMX overhead VSwap file, but this—and the sum of the other disks—is relatively small, at about 500 MB. These will be placed in ClonePrepReplicaVmFolder.

TWood_Horizon Parent

Something you’ll notice with the parent VM is that it will use 100% of its allocated memory, causing a vCenter alarm.

TWood_vCenter Alarm

TWood_Virtual Machine Error

Instant Clones

OK! At this point, we are finally ready to fork! Horizon will create the Instant Clones based on the provisioning settings, which can be upfront or on-demand. Instant Clones will have a VSwap file equal to the size of the vMEM minus any reservations set on the Gold Master, plus a differencing disk.

The amount of growth for the differencing disk will depend on how much is written to the local VM during the user’s session, but it is deleted on logout. When running View Planner tests, this can grow to about 500 MB, which is the same as when using View Planner for Linked Clones. The provisioning of Instant Clones will be fast! You’ll see much lower resource utilization of your vCenter Server and less IO on your disk subsystem because there is no boot storm from the VMs powering on.

TWood_vCenter Server

Conclusion

Instant Clones are a great new feature in Horizon 7 that take the concept of Linked Clones one step further. They bring the advantages of:

  • Reducing boot storms
  • Decreasing provisioning times
  • Decreasing change windows
  • Bringing savings to storage utilization

Instant Clones introduce a number of new objects: replicas, parents, and templates. It is important to understand not only how these are structured, but also their interrelationships, in order to plan your environment accordingly.


Travis is a Principal Architect in the Global Technology & Professional Services team, specializing in End User Computing. He is also a member of the CTO Ambassadors program, which connects the global field with R&D and engineering.

VMworld Session Preview: MGT7759

Andrea SivieroBy Andrea Siviero

Data center virtualization continues to receive attention in enterprise organizations that want to reduce IT costs and create a more flexible, efficient, and automated applications workload environment.

As an IT organization, you must contend with many different software and hardware components. And not only do you have to manage a lot of different components, you also face the challenge of putting them together!

To solve this complex challenge, VMware Validated Designs (VVDs) provide guidance and speed up the process of building a modern, automated Software-Defined Data Center.

So, exactly what are VMware Validated Designs?

  • They are architectures and designs created and validated by VMware and data center experts.
  • They encompass the entire set of VMware’s Software-Defined Data Center products.
  • They are standardized and streamlined designs for different deployment scenarios and a broad set of use cases.

Marco Righini, from Intel, and I were able to access early content and test it in a real data center, and we would like to share our experience with you.

Visit our session at VMworld 2016 Las Vegas (Session ID: MGT7759) to hear the findings from early adopters of VMware Validated Design.


Presenters: Marco Righini, Data Center Solution Architect, Intel Corp., and Andrea Siviero, Staff Solution Architect, VMware
Session Number: MGT7759
Session Title: Early VVD Adopter Experience: Building a Secure and Automated Cloud
Date and Time: Wednesday, August 31, 2016 10:00 AM‒11:00 AM

Abstract: The session presents the work done during the building of the VVD using the Intel Lab in Pisa, Italy. This collaborative team effort between local VMware PSOs and Intel tested and built an entire lab from scratch, using the VVD reference architecture. The challenges of the VVD architecture are addressed, along with how it helped in the fast delivery of an automated cloud.


Andrea Siviero is a ten-year veteran of VMware and a senior solutions architect member of Professional Services Engineering (PSE) for the Software-Defined Datacenter (SDDC), a part of the Global Technical Solutions (GTS) team. Prior to PSE, Andrea spent three years as a pre-sales system engineer and three years as a post-sales consultant architect for cloud computing and desktop virtualization solutions focusing on very large and complex deployments, especially for service providers in the finance and telco sectors.

VMware Validated Design for SDDC 2.0 – Now Available

Jonathan McDonaldBy Jonathan McDonald

Recently I have been involved in a rather cool project inside VMware, aimed at validating and integrating all the different VMware products. The most interesting customer cases I see are related to this work because oftentimes products work independently without issue—but together can create unique problems.

To be honest, it is really difficult to solve some of the problems when integrating many products together. Whether we are talking about integrating a ticketing system, building a custom dashboard for vRealize Operations Manager, or even building a validation/integration plan for Virtual SAN to add to existing processes, there is always the question, “What would the experts recommend?”

The goal of this project is to provide a reference design for our products, called a VMware Validated Design. The design is a construct that:

  • Is built by expert architects who have many years of experience with the products as well as the integrations
  • Allows repeatable deployment of the end solution, which has been tested to scale
  • Integrates with the development cycle, so if there is an issue with the integration and scale testing, it can be identified quickly and fixed by the developers before the products are released.

All in all, this has been an amazing project that I’ve been excited to work on, and I am happy to be able to finally talk about it publicly!

Introducing the VMware Validated Design for SDDC 2.0

The first of these designs—under development for some time—is the VMware Validated Design for SDDC (Software-Defined Data Center). The first release was not available to the public and only internal to VMware, but on July 21, 2016, version 2.0 was released and is now available to everyone! This design builds not only the foundation for a solid SDDC infrastructure platform using VMware vSphere, Virtual SAN, and VMware NSX, but it builds on that foundation using the vRealize product suite (vRealize Operations Manager, vRealize Log Insight, vRealize Orchestrator, and vRealize Automation).

The outcome of the VMware Validated Design for SDDC is a system that enables an IT organization to automate the provisioning of common, repeatable requests and to respond to business needs with more agility and predictability. Traditionally, this has been referred to as Infrastructure-as-a-Service (IaaS); however, the VMware Validated Design for SDDC extends the typical IaaS solution to include a broader and more complete IT solution.

The architecture is based on a number of layers and modules, which allows interchangeable components to be part of the end solution or outcome, such as the SDDC. If a particular component design does not fit the business or technical requirements for whatever reason, it should be able to be swapped out for another similar component. The VMware Validated Design for SDDC is one way of putting an architecture together that has been rigorously tested to ensure stability, scalability, and compatibility. Ultimately, however, the system is designed to ensure the desired outcome will be achieved.

The conceptual design is shown in the following diagram:

JMCDonald_VVD Conceptual Design

As you can see, the design brings a lot more than just implementation details. It includes many common “day two” operational tasks such as management and monitoring functions, business continuity, and security.

To simplify such a complex design, it has been broken up into:

  • A high-level Architecture Design
  • A Detailed Design with all the design decisions included
  • Implementation guidance.

Let’s take an in-depth look.

Continue reading

Configuring VMware Identity Manager and VMware Horizon 7 Cloud Pod Architecture

Dale CarterBy Dale Carter

With the release of VMware Horizon® 7 and VMware Identity Manager™ 2.6, it is now possible to configure VMware Identity Manager to work with Horizon Cloud Pod Architecture when deploying your desktop and application pools over multiple data centers or locations.

Using VMware Identity Manager in front of your VMware Horizon deployments that are using Cloud Pod Architecture makes it much easier for users to get access to their desktops and applications. The user has just one place to connect to, and they will be able to see all of their available desktops and applications. Identity Manager will direct the user to the application hosted in the best datacenter for their location. This can also include SaaS applications as well as the applications that are available through VMware Horizon 7.

The following instructions show you how to configure VMware Identity Manager to work with VMware Horizon 7 when using Cloud Pod Architecture.

Configure View on the first connector

  1. From the VMware Identity Manager Admin Portal select Catalog, Managed Desktop Appliances, View Application.

DCarter_View Application

  2. Choose the first Identity Manager Connector. This will redirect you to the connector View setup page.
  3. Select the check box to enable View Pools. Add the correct information to the first View Pod, and click Save.

DCarter_View Pools

  4. If there is an Invalid SSL Cert warning, click the warning and Accept.

DCarter_Invalid SSL Cert

  5. Scroll down the page and select Add View Pool.

DCarter_Add View Pool

  6. Add the correct information to the second View Pod and click Save.

DCarter_View Pod

  7. If there is an Invalid SSL Cert warning, click the warning and Accept.
  8. You will now see both View Pods configured for this connector.

DCarter_Remove View Pod

  9. Scroll to the top of the page.
  10. Select Federation.
  11. Check the Enable CPA Federation check box. Fill out the correct information, and add all of the Pods within the Federation.
    DCarter_View Pools Federation
  12. Click Save.
  13. From the Pods and Sync tab, click Sync Now.

DCarter_View Pool Sync

Configure View on all other connectors

  1. From the VMware Identity Manager Admin Portal, select Catalog, Managed Desktop Appliances, View Application.
  2. Select the next connector and follow the instructions above.
  3. Do this for every connector.

Configure network ranges

Once the VMware Horizon View setup is complete, you will need to configure Network Ranges.

  1. From the Identity Manager Admin page, select the Identity & Access Management Tab and click Setup.
  2. Select Network Ranges and click Add Network Range.

DCarter_Add Network Range

  3. Enter the required information and click Save.

DCarter_Add Network Range View Site

  4. This will need to be repeated for all network ranges, usually for each site and external access.

Dale is a Senior Solutions Architect and member of the CTO Ambassadors. Dale focuses on the End User Computing space, where he has become a subject matter expert in a number of VMware products. Dale has more than 20 years’ experience working in IT, having started his career in Northern England before moving to Spain and finally the USA. Dale currently holds a number of certifications including VCP-DV, VCP-DT, VCAP-DTD and VCAP-DTA. For more blog posts from Dale, visit his website at http://vdelboysview.com

BCDR: Some Things to Consider When Upgrading Your VMware Disaster Recovery Solution

Julienne_PhamBy Julienne Pham

Once upon a time, you protected your VMs with VMware Site Recovery Manager, and now you are wondering how to upgrade your DR solution with minimum impact on the environment. Is it as seamless as you think?

During my days in Global Support and working on customer Business Continuity/Disaster Recovery (BCDR) projects, I found it intriguing how vSphere components can put barriers in an upgrade path. Indeed, one of the first things I learned was that the timing and update sequence of my DR infrastructure were crucial to keep everything running, and with as little disruption as possible.

If we look more closely, this is a typical VMware Site Recovery Manager setup:

JPham_SRM 6x

And in a pyramid model, we have something like this:

JPham_SRM Pyramid

Example of a protected site

So, where do we start our upgrade?

Upgrade and maintain the foundation

You begin with the hardware. Then, the vSphere version you are upgrading to. You’ll see a lot of new features available, along with bug fixes, so your hardware and firmware might need some adjustments to support new features and enhancements. It is important at a minimum to check the compatibility of the hardware and software you are upgrading to.

In a DR scenario, it is important to check storage replication compliance

This is where you ensure your data replicates according to your RPO.

If you are using vSphere Replication or Storage Array Replication, you should check the upgrade path and the dependency with vSphere and SRM.

  • As an example, VR cannot be upgraded directly from 5.8 to 6.1
  • You might need to update the Storage Replication Adaptor too.
  • You can probably find other examples of things that won’t work, or find work-arounds you’ll need.
  • You can find some useful information in the VMware Compatibility Guide

Architecture change

If you are looking to upgrade from vSphere 5.5 to 6.1, for example, you should check whether you need to migrate from a simple SSO install to an external one for more flexibility, as you might not be able to change it later. As VMware SRM is dependent on the health of vCenter, you might be better off looking first into upgrading this component as a prerequisite.

Before you start you might want to check out the informative blog, “vSphere Datacenter Design – vCenter Architecture Changes in vSphere 6.0 – Part 1.”

The sites are interdependent

Once the foundation path is planned out, you have to think about how to minimize business impact.

Remember that if your protected site workload is down, you can always trigger a DR scenario, so it is in your best interest to keep the secondary site management layer fully functional and upgrade its VMware SRM and vCenter last.

VMware upgrade path compatibility

Some might assume that you can upgrade from one version to another without compatibility issues coming up. Well, to avoid surprises, I recommend looking into our compatibility matrix and validating the different product version upgrade paths.

For example, the upgrade of SRM 5.8 to 6.1 is not supported. So, what are the implications for vCenter and SRM compatibility during the upgrade?

JPham_Upgrade Path Sequence

Back up, back up, back up

The standard consideration is to run backups before every upgrade. A VM snapshot might not be enough in certain situations if you are in different upgrade stages at different sites. You need to carefully plan and synchronise all the different database instances for VMware Site Recovery Manager and vCenter at both sites, and eventually the vSphere Replication databases as well.

I hope this addresses some of the common questions and concerns that might come up when you are thinking of upgrading SRM. Planning and timing are key for a successful upgrade. Many components are interdependent, and you need to consider them carefully to avoid an asynchronous environment with little control over outcomes. Good luck!


Julienne Pham is a Technical Solution Architect for the Professional Services Engineering team. She specialises in SRM and core storage. Her focus is on the VIO and BCDR space.

Define SDDC Success Based on IT Outcomes

Andrea SivieroBy Andrea Siviero

You’ve just deployed a new technology solution; how do you define whether or not it was a success?

People often have difficulty agreeing on the definition of “success” because there are two interconnected dimensions in which a project can be judged as a success or a failure. The first is project management success (delivering in accordance with the agreed-upon project objectives), and the second is outcome success (the amount of value the project delivers once it is complete).

Of course, getting agreement on how to define success is not always easy, but based on my day-to-day experience with customers, outcome success is desired over project management success.

Outcomes Are Worth More Than Services

Buying a service rather than an outcome is similar to paying to use equipment at a gym versus working with a personal trainer, whose job is to help you produce an outcome. The latter is worth more than the former.

VMware’s IT Outcomes support the top priority initiatives for CIOs and impact key business metrics; you can check the dedicated web site here.

In my (humble) opinion, regardless of which IT Outcomes you focus on, there are three important factors that contribute to success:

People, Processes, and Architecture.

Based on my experience, customers tend to focus on architecture and technology, sometimes paying less attention to the people and process factors which can contribute more to success. Here is a real-life example from my personal experience.

ASiviero_Simplify the Situation

I was involved with a successful project implementation where all the project’s technical objectives were achieved, but the infrastructure and operations manager did not feel the desired outcomes were achieved. And that manager was right!

After spending an hour talking with the teams, I realized what a great job the consultants had done implementing and demonstrating all the capabilities of their new SDDC.

However, due to their experience, expectations, and culture, they weren’t able to reorganize their teams and processes to take full advantage of the desired outcomes (Speed, Agility and Security).

ASiviero_Amazing SDDC

Here is a summary of the best practices I’ve suggested as a way to leverage VMware technical account managers as coaches.

1 – People

ASiviero_Small Cross Functional Team

  1. Create a blended team of skilled workers with multi-domain and multi-disciplinary knowledge and expertise, and deliver cross-team training.
  2. Encourage autonomy with common goals and operating principles, and focus on service delivery.
  3. Push them to share lessons learned with other teams and expand their use of virtual networking and security.

2 – Process

ASiviero_Application Level Visibility

  1. Decompose management and troubleshooting tasks along virtual and physical boundaries.
  2. Automate manual tasks to improve efficiency and reduce errors.
  3. Correlate the end-to-end view of application health across compute, storage, and networking.

3 – Architecture

ASiviero_Key Requirements for SDDC

  1. Build your SDDC using a design validated by experts.
  2. Implement a comprehensive data center design.
  3. Add in app and network virtualization incrementally.

Putting it all together

ASiviero_Putting it All Together

Achieving 100% of a project’s intended outcomes depends not only on the technology implementation, but also on the organizational transformation required to ensure the proper implementation of people and process innovation.


Andrea Siviero is a ten-year veteran of VMware and a senior solutions architect member of Professional Services Engineering (PSE) for the Software-Defined Datacenter (SDDC), a part of the Global Technical Solutions (GTS) team. Prior to PSE, Andrea spent three years as a pre-sales system engineer and three years as a post-sales consultant architect for cloud computing and desktop virtualization solutions focusing on very large and complex deployments, especially for service providers in the finance and telco sectors.

Troubleshooting Tips: Orchestrator PowerShell Plug-in

By Spas Kaloferov

Background and General Considerations

In this post we will take a look at some common issues one might experience when using the VMware vRealize Orchestrator (vRO) PowerShell Plug-In, especially when using the HTTPS protocol or Kerberos authentication for the PowerShell Host (PSHost).

Most use cases require that the PowerShell script run with some kind of administrator-level permissions in the target system that vRO integrates with. Here are some of them:

  • Add, modify, or remove DNS records for virtual machines.
  • Register IP address for a virtual machine in an IP management system.
  • Create, modify, or remove a user account mailbox.
  • Execute remote PowerShell commands against multiple Microsoft Windows operating systems in the environment.
  • Run a PowerShell script (.ps1) file from within a PowerShell script file from vRO.
  • Access mapped network drives from vRO.
  • Interact with Windows operating systems that have User Access Control (UAC) enabled.
  • Execute PowerCLI commands.
  • Integrate with Azure.

When you add a PowerShell Host, you must specify a user account. That account will be used to execute all PowerShell scripts from vRO. In most use cases, like the ones above, that account must be an administrator account in the corresponding target system the script interacts with. In most cases, this is a domain-level account.

In order to successfully add the PowerShell Host with that account—and use that account when executing scripts from vRO—some prerequisites need to be met. In addition, the use cases mentioned require the PowerShell Host to be prepared for credential delegation (also known as Credential Security Service Provider [CredSSP], double-hop, or multi-hop authentication).

To satisfy the above use cases when adding a PowerShell Host in vRO, the high-level requirements are:

  • Port: 5986
  • PowerShell remote host type: WinRM
  • Transport protocol: HTTPS (recommended)
  • Authentication: Kerberos
  • User name: <Administrator_user_name>

The low-level requirements are:

  • PSHost: Configure WinRM and user token delegation
  • PSHost: Configure Windows service principal names (SPNs) for WinRM
  • PSHost: Import a CA-signed server certificate containing the Client Authentication and Server Authentication Enhanced Key Usage properties
  • PSHost: Configure Windows Credential Delegation using the Credential Security Service Provider (CredSSP) module
  • vRO: Edit the Kerberos Domain Realm (krb5.conf) on the vCO Appliance (Optional/Scenario specific)
  • vRO: Add the PS Host as HTTPS host with Kerberos authentication
  • vRO: Use the Invoke-Command cmdlet in your PowerShell code

Troubleshooting Issues when Adding a PSHost

To resolve the most common issues when adding a PSHost for use with the HTTPS transport protocol and Kerberos authentication, follow these steps:

  1. Prepare the Windows PSHost.

For more information on all the configurations needed on the PSHost, visit my blog, “Using CredSSP with the vCO PowerShell Plug-in.”
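As a rough, partial sketch of that preparation (run in an elevated PowerShell prompt on the Windows PSHost; the delegation target below is an assumption, and the SPN and certificate steps from the blog above are not shown):

# Create the HTTPS WinRM listener (requires a suitable server certificate to be installed first)
winrm quickconfig -transport:https

# Allow the PSHost to accept delegated credentials (CredSSP server role)
Enable-WSManCredSSP -Role Server -Force

# On machines that will delegate credentials to the PSHost (CredSSP client role)
Enable-WSManCredSSP -Role Client -DelegateComputer '*.vmware.com' -Force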

  2. After preparing the PSHost, test it to make sure it accepts the execution of remote PowerShell commands.

Start by testing simple commands. I like to use the $env:computername PowerShell command that returns the hostname of the PSHost. You can use the winrs command in Windows for the test. Here’s an example of the syntax:

winrs -r:https://lan1dc1.vmware.com:5986 -u:vmware\administrator -p:VMware1! powershell.exe $env:computername

 

Continue by testing a command that requires credential delegation. I like to use a simple command, like dir \\<Server_FQDN>\<sharename>, that accesses a share residing on a computer other than the PSHost itself. Here’s an example of the syntax:

winrs -r:https://lan1dc1.vmware.com:5986 -ad -u:vmware\administrator -p:VMware1! powershell.exe dir \\lan1dm1.vmware.com\share


Note: Make sure to specify the -ad command line switch.

  3. Prepare the vRO so it can handle Kerberos authentication. You need this in order to use a domain-level account when adding the PSHost.

For more information about the Kerberos configuration on vRO for single domain, visit my blog, “Using CredSSP with the vCO PowerShell Plugin.”

If you are planning to add multiple PSHosts and are using domain-level accounts for each PSHost that are from different domains (e.g., vmware.com and support.vmware.com) you need to take this into consideration when preparing vRO for Kerberos authentication.

For more information about the Kerberos configuration on vRO for multiple domains, visit my blog, “How to add PowerShell hosts from multiple domains with Kerberos authentication to the same vRO.”
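As a hedged sketch of what the realm configuration in krb5.conf might look like for two domains (the KDC host names below are assumptions; see the blogs above for the exact procedure):

[libdefaults]
    default_realm = VMWARE.COM

[realms]
    VMWARE.COM = {
        kdc = lan1dc1.vmware.com
        default_domain = vmware.com
    }
    SUPPORT.VMWARE.COM = {
        kdc = lan2dc1.support.vmware.com
        default_domain = support.vmware.com
    }

[domain_realm]
    .vmware.com = VMWARE.COM
    .support.vmware.com = SUPPORT.VMWARE.COM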

If you make a mistake in the configuration, you might see the following error when adding the PSHost:

Cannot locate default realm (Dynamic Script Module name : addPowerShellHost#12
tem: ‘Add a PowerShell host/item8′, state: ‘failed’, business state: ‘Error’, exception: ‘InternalError: java.net.ConnectException: Connection refused (Workflow:Import a certificate from URL with certificate alias / Validate (item1)#5)’
workflow: ‘Add a PowerShell

 

If this is the case, go back and re-validate the configurations.

  4. If the error persists, make sure the conf file is correctly formatted.

For more information about common formatting mistakes, visit my blog, “Wrong encoding or formatting of Linux configuration files can cause problems in VMware Appliances.”

  5. Make sure you use the following parameters when adding the PSHost:
    • Port: 5986
    • PowerShell remote host type: WinRM
    • Transport protocol: HTTPS (recommended)
    • Authentication: Kerberos
    • User name: <Administrator_user_name>

Note: In order to add the PSHost, the user must be a local administrator on the PSHost.
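Before adding the host in vRO, you can also sanity-check the WinRM HTTPS endpoint from another Windows machine; a minimal sketch, reusing the lab host from the winrs examples above:

# Confirms the listener answers on port 5986 over SSL with Kerberos authentication
Test-WSMan -ComputerName lan1dc1.vmware.com -Port 5986 -UseSSL -Authentication Kerberos -Credential (Get-Credential vmware\administrator)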

  6. If you still cannot add the host, make sure your VMware appliance can authenticate successfully using Kerberos against the domains you’ve configured. To do this, you can use the ldapsearch command and test Kerberos connectivity to the domain.

Here is an example of the syntax:

vco-a-01:/opt/vmware/bin # ldapsearch -h lan1dc1.vmware.com -D "CN=Administrator,CN=Users,DC=vmware,DC=com" -w VMware1! -b "" -s base "objectclass=*"
  7. If your authentication problems continue, most likely there is a general authentication problem that might not be directly connected to the vRO appliance, such as:
    • A network related issue
    • Blocked firewall ports
    • DNS resolution problems
    • Unresponsive domain controllers

Troubleshooting Issues when Executing Scripts

Once you’ve successfully added the PSHost, it’s time to test PowerShell execution from the vRO.

To resolve the most common issues when executing PowerShell scripts from vRO, follow these steps:

  1. While in vRO go to the Inventory tab and make sure you don’t see the word “unusable” in front of the PSHost name. If you do, remove the PSHost and add it to the vRO again.
  2. Use the Invoke an external script workflow that is shipped with vRO to test PowerShell execution commands. Again, start with a simple command, like $env:computername.

Then, proceed with a command that requires credential delegation. Again, as before, you can use a command like dir \\<Server_FQDN>\<sharename>.

Note: This command doesn’t support credential delegation, so a slight workaround is needed to achieve this functionality. You need to wrap the command you want to execute in an Invoke-Command call.

For more information on how to achieve credential delegation from vRO, visit my blog, “Using CredSSP with the vCO PowerShell Plug-in.”
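As a minimal sketch of that workaround, assuming the PSHost has been prepared for CredSSP as described earlier (host, share, and password values reuse the lab examples above):

$secPass = ConvertTo-SecureString 'VMware1!' -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential('vmware\administrator', $secPass)

# Wrapping the command in Invoke-Command with CredSSP lets the second hop
# (PSHost -> file server) authenticate, which a plain "dir \\server\share" cannot do
Invoke-Command -ComputerName lan1dc1.vmware.com -Credential $cred -Authentication CredSSP -ScriptBlock {
    dir \\lan1dm1.vmware.com\share
}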

If you try to execute a command that requires credential delegation without using a workaround, you will receive an error similar to the following:

PowerShellInvocationError: Errors found while executing script <script>: Access is denied


SKaloferov_Power Shell Error

  3. Use the SilentlyContinue PowerShell error action preference to suppress output from “noisy” commands (see the sketch after this list). Such commands are those that generate some kind of non-standard output, like:
    • A progress bar showing the progress of the command execution
    • Hashes and other similar content
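A minimal sketch of suppressing that kind of output in a script executed from vRO (paths are placeholders):

$ProgressPreference = 'SilentlyContinue'   # hides progress bars that clutter the output returned to vRO
Copy-Item 'C:\Source\*' 'C:\Temp' -ErrorAction SilentlyContinue   # suppresses non-terminating errors
Get-FileHash 'C:\Temp\*' | Out-Null   # discards unwanted pipeline output such as hashes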

Finally, avoid using code in your commands or scripts that might generate popup messages, open other windows, or open other graphical user interfaces.


Spas Kaloferov is an acting Solutions Architect member of Professional Services Engineering (PSE) for the Software-Defined Datacenter (SDDC) – a part of the Global Technical & Professional Solutions (GTPS) team. Prior to VMware, Kaloferov focused on cloud computing solutions.