Home > Blogs > VMware Consulting Blog

VMworld Session Preview: Advanced Network Services with NSX


By Romain Decker

It is no secret that IT is in constant evolution. Trends such as cloud adoption, distributed applications, microservices, and the Internet of Things have all emerged in recent years.

Nevertheless, the focus is still on applications and on how they compute and deliver data to consumers. Whether their role is to generate revenue or to run industries, logistics, health care, or even your programmable thermostat, the top-level goals of organizations are still security, agility, and operational efficiency. Everything else associated with applications, however, has changed:

  • Threats have become more advanced and persistent.
  • Users now access the data center from devices and locations that represent significant challenges.
  • Application architectures are now more widely distributed and more dynamic than ever before.
  • Infrastructure changes have evolved with the convergence of resources and questions around public cloud offerings.

VMware NSX is a perfect fit to address these concerns from the network and security standpoint. NSX reproduces all the network and security services of the data center in logical space, providing greater speed and agility as well as deeper security.

Visit my session at VMworld Las Vegas (Session ID: NET7907) to hear the detailed presentation on NSX firewall, load balancing and SSL-VPN capabilities.

And don’t forget, the GUI is not the king! 😉


Presenter: Romain Decker
Session Number: NET7907
Session Title: Advanced Network Services with NSX
Date and Time: 8/30/16 (Tuesday) 2:00 PM

Abstract: Applications are everywhere and increasingly more complex. They require much more than switching and routing on the network side. Clouds should be able to host any applications, including the complex ones. This session will discuss the concepts for designing and operating NSX network services such as firewalling, load balancing, and VPN. We will examine and explain how you can better consume those services by automating them, or by using other mechanisms such as NSX API. After this session, you will leave with a better understanding of how NSX Network and Security services work, and how to leverage them to better support your applications.



Romain Decker is a Senior Solutions Architect member of Professional Services Engineering (PSE) for the Software-Defined Datacenter (SDDC) portfolio – a part of the Global Technical & Professional Solutions (GTPS) team.

How to Change the Package Signing Certificate of a vRealize Orchestrator Appliance (7.0.1)

 

By Spas Kaloferov

In this post, we will take a look at how to change the Package Signing Certificate (PSC) in a vRealize Orchestrator appliance.

To change the PSC, let’s review a few steps first:

  • Use the keytool utility to:
    • Create a new keystore; the keystore type must be JCEKS.
    • Import the certificate into the keystore.
    • Change the alias of the certificate to _dunesrsa_alias_.
    • Generate a security key and place it in the keystore.
    • Change the alias of the security key to _dunessk_alias_.
  • Use the Control Center interface to:
    • Import the keystore you created.
    • Restart the Orchestrator server.

Here is a screenshot of the original PSC certificate:

SKaloferov_PSC Certificate

Changing the Package Signing Certificate

First, you must obtain a PFX Certificate Package (containing your PSC Certificate), which is issued from the Certificate Authority (CA).

SKaloferov_Package Signing Certificate

SKaloferov_Package Signing Certificate 2

SKaloferov_Certificate Path

Note that the certificate has the Digital Signature and Key Encipherment Key Usage attributes, as shown above. It also has the Server Authentication Extended Key Usage attribute.

Copy the PFX certificate package to any Linux appliance.

SKaloferov_Certificate Signing vRO

We will use the keytool utility to execute commands. Enter the following command to create a new keystore and, at the same time, import the PFX certificate package:

keytool -importkeystore -srckeystore "/etc/vco/app-server/security/rui.pfx" -srcstoretype pkcs12 -srcstorepass "dunesdunes" -deststoretype jceks -destkeystore "/etc/vco/app-server/security/psckeystore" -deststorepass "dunesdunes"

SKaloferov_PFX Certificate

Enter the following command to change the alias of the certificate:

keytool -changealias -alias rui -destalias _dunesrsa_alias_ -keystore "/etc/vco/app-server/security/psckeystore" -storetype jceks -storepass "dunesdunes"

Next, enter this command to generate a security key:

keytool -genseckey -alias _dunessk_alias_ -keyalg DES -keysize 56 -keypass "dunesdunes" -storetype jceks -keystore "/etc/vco/app-server/security/psckeystore" -storepass "dunesdunes"

Notice that in the above command I’ve used the DES algorithm with a 56-bit key size, but you can also use the 3DES (DESede) algorithm with a 168-bit key size.

Enter the following command to list the contents of the store:

keytool -list -storetype jceks -keystore "/etc/vco/app-server/security/psckeystore"

Copy the keystore file to your Windows machine.

Open Control Center and navigate to Certificates > Package Signing Certificate.

Click Import > Import from JavaKeyStore file.

Browse to the keystore file, and enter the password.

SKaloferov_Current Certificate

Click Import to import the certificate.

Go to Startup Options and restart the Orchestrator service.

Navigate back to Certificates > Package Signing Certificate.

You should now see the new certificate as shown below:

SKaloferov_New Certificate

Open your vRealize Orchestrator appliance client, and navigate to Tools > Certificate Manager.

SKaloferov_vRO

You should now see the certificate shown below. The common name can differ, but if you compare the thumbprints, they should match the private key entry in your keystore.

SKaloferov_Keystore

I hope this post was valuable in helping you learn how to change the Package Signing Certificate in a vRealize Orchestrator appliance. Stay tuned for my next post!


Spas Kaloferov is an acting Solutions Architect member of Professional Services Engineering (PSE) for the Software-Defined Datacenter (SDDC) – a part of the Global Technical & Professional Solutions (GTPS) team. Prior to VMware, Kaloferov focused on cloud computing solutions.

VMware Validated Design for SDDC 2.0 – Now Available

By Jonathan McDonald

Recently I have been involved in a rather cool project inside VMware, aimed at validating and integrating all the different VMware products. The most interesting customer cases I see are related to this work because oftentimes products work independently without issue—but together can create unique problems.

To be honest, it is really difficult to solve some of the problems when integrating many products together. Whether we are talking about integrating a ticketing system, building a custom dashboard for vRealize Operations Manager, or even building a validation/integration plan for Virtual SAN to add to existing processes, there is always the question, “What would the experts recommend?”

The goal of this project is to provide a reference design for our products, called a VMware Validated Design. The design is a construct that:

  • Is built by expert architects who have many years of experience with the products as well as the integrations
  • Allows repeatable deployment of the end solution, which has been tested to scale
  • Integrates with the development cycle, so if there is an issue with the integration and scale testing, it can be identified quickly and fixed by the developers before the products are released.

All in all, this has been an amazing project that I’ve been excited to work on, and I am happy to be able to finally talk about it publicly!

Introducing the VMware Validated Design for SDDC 2.0

The first of these designs, under development for some time, is the VMware Validated Design for SDDC (Software-Defined Data Center). The first release was internal to VMware only, but on July 21, 2016, version 2.0 was released and is now available to everyone! This design not only builds the foundation for a solid SDDC infrastructure platform using VMware vSphere, Virtual SAN, and VMware NSX, but also builds on that foundation using the vRealize product suite (vRealize Operations Manager, vRealize Log Insight, vRealize Orchestrator, and vRealize Automation).

The VMware Validated Design for SDDC delivers a system that enables an IT organization to automate the provisioning of common, repeatable requests and to respond to business needs with more agility and predictability. Traditionally, this has been referred to as Infrastructure as a Service (IaaS); however, the VMware Validated Design for SDDC extends the typical IaaS solution into a broader and more complete IT solution.

The architecture is based on a number of layers and modules, which allows interchangeable components to be part of the end solution or outcome, such as the SDDC. If a particular component design does not fit the business or technical requirements for whatever reason, it can be swapped out for another, similar component. The VMware Validated Design for SDDC is one way of putting together an architecture that has been rigorously tested to ensure stability, scalability, and compatibility. Ultimately, the system is designed to ensure the desired outcome will be achieved.

The conceptual design is shown in the following diagram:

JMCDonald_VVD Conceptual Design

As you can see, the design brings a lot more than just implementation details. It includes many common “day two” operational tasks such as management and monitoring functions, business continuity, and security.

To simplify such a complex design, it has been broken up into:

  • A high-level Architecture Design
  • A Detailed Design with all the design decisions included
  • Implementation guidance.

Let’s take an in-depth look.

Continue reading

Configuring VMware Identity Manager and VMware Horizon 7 Cloud Pod Architecture

By Dale Carter

With the release of VMware Horizon® 7 and VMware Identity Manager™ 2.6, it is now possible to configure VMware Identity Manager to work with Horizon Cloud Pod Architecture when deploying your desktop and application pools over multiple data centers or locations.

Using VMware Identity Manager in front of your VMware Horizon deployments that use Cloud Pod Architecture makes it much easier for users to access their desktops and applications. The user has just one place to connect to and can see all of their available desktops and applications. Identity Manager directs the user to the application hosted in the best data center for their location. This can include SaaS applications as well as the applications available through VMware Horizon 7.

The following instructions show you how to configure VMware Identity Manager to work with VMware Horizon 7 when using Cloud Pod Architecture.

Configure View on the first connector

  1. From the VMware Identity Manager Admin Portal select Catalog, Managed Desktop Appliances, View Application.

DCarter_View Application

  2. Choose the first Identity Manager Connector. This will redirect you to the connector View setup page.
  3. Select the check box to enable View Pools. Add the correct information to the first View Pod, and click Save.

DCarter_View Pools

  4. If there is an Invalid SSL Cert warning, click the warning and Accept.

DCarter_Invalid SSL Cert

  5. Scroll down the page and select Add View Pool.

DCarter_Add View Pool

  6. Add the correct information to the second View Pod and click Save.

DCarter_View Pod

  7. If there is an Invalid SSL Cert warning, click the warning and Accept.
  8. You will now see both View Pods configured for this connector.

DCarter_Remove View Pod

  9. Scroll to the top of the page.
  10. Select Federation.
  11. Check the Enable CPA Federation check box. Fill out the correct information, and add all of the Pods within the Federation.
    DCarter_View Pools Federation
  12. Click Save.
  13. From the Pods and Sync tab, click Sync Now.

DCarter_View Pool Sync

Configure View on all other connectors

  1. From the VMware Identity Manager Admin Portal, select Catalog, Managed Desktop Appliances, View Application.
  2. Select the next connector and follow the instructions above.
  3. Do this for every connector.

Configure network ranges

Once the VMware Horizon View setup is complete, you will need to configure Network Ranges.

  1. From the Identity Manager Admin page, select the Identity & Access Management Tab and click Setup.
  2. Select Network Ranges and click Add Network Range.

DCarter_Add Network Range

  3. Enter the required information and click Save.

DCarter_Add Network Range View Site

  4. This will need to be repeated for all network ranges, usually for each site and external access.

Dale is a Senior Solutions Architect and a member of the CTO Ambassadors. Dale focuses on the End User Compute space, where he has become a subject matter expert in a number of VMware products. Dale has more than 20 years’ experience working in IT, having started his career in Northern England before moving to Spain and finally the USA. Dale currently holds a number of certifications, including VCP-DV, VCP-DT, VCAP-DTD and VCAP-DTA. For more blog posts from Dale, visit his website at http://vdelboysview.com

BCDR: Some Things to Consider When Upgrading Your VMware Disaster Recovery Solution

By Julienne Pham

Once upon a time, you protected your VMs with VMware Site Recovery Manager, and now you are wondering how to upgrade your DR solution with minimum impact on the environment. Is it as seamless as you think?

During my days in Global Support and working on customer Business Continuity/Disaster Recovery (BCDR) projects, I found it intriguing how vSphere components can put barriers in an upgrade path. Indeed, one of the first things I learned was that timing and the update sequence of my DR infrastructure was crucial to keep everything running, and with as little disruption as possible.

If we look more closely, this is a typical VMware Site Recovery Manager setup:

JPham_SRM 6x

And in a pyramid model, we have something like this:

JPham_SRM Pyramid

Example of a protected site

So, where do we start our upgrade?

Upgrade and maintain the foundation

You begin with the hardware, and then the vSphere version you are upgrading to. You’ll see a lot of new features available, along with bug fixes, so your hardware and firmware might need some adjustments to support the new features and enhancements. At a minimum, it is important to check the compatibility of the hardware and software you are upgrading to.

In a DR scenario, it is important to check storage replication compliance

This is where you ensure your data replicates according to your RPO.

If you are using vSphere Replication or Storage Array Replication, you should check the upgrade path and the dependency with vSphere and SRM.

  • As an example, vSphere Replication (VR) cannot be upgraded directly from 5.8 to 6.1.
  • You might need to update the Storage Replication Adapter too.
  • You can probably find other examples of things that won’t work, or find workarounds you’ll need.
  • You can find some useful information in the VMware Compatibility Guide.

Architecture change

If you are looking to upgrade from vSphere 5.5 to 6.1, for example, you should check whether you need to migrate from a simple SSO installation to an external one for more flexibility, as you might not be able to change it later. As VMware SRM depends on the health of vCenter, you might be better off upgrading that component first as a prerequisite.

Before you start you might want to check out the informative blog, “vSphere Datacenter Design – vCenter Architecture Changes in vSphere 6.0 – Part 1.”

The sites are interdependent

Once the foundation path is planned out, you have to think about how to minimize business impact.

Remember that if your protected site workload is down, you can always trigger a DR scenario, so it is in your best interest to keep the secondary site’s management layer fully functional and to upgrade its VMware SRM and vCenter components last.

VMware upgrade path compatibility

Some might assume that you can upgrade from one version to another without compatibility issues coming up. To avoid surprises, I recommend looking into our compatibility matrix and validating the upgrade paths of the different product versions.

For example, a direct upgrade from SRM 5.8 to 6.1 is not supported. So, what are the implications for vCenter and SRM compatibility during the upgrade?

JPham_Upgrade Path Sequence

Back up, back up, back up

The standard advice is to run backups before every upgrade. A VM snapshot might not be enough in certain situations if you are at different upgrade stages at different sites. You need to carefully plan and synchronise all the different database instances for VMware Site Recovery Manager and vCenter at both sites, as well as any vSphere Replication databases.

I hope this addresses some of the common questions and concerns that might come up when you are thinking of upgrading SRM. Planning and timing are key for a successful upgrade. Many components are interdependent, and you need to consider them carefully to avoid an asynchronous environment with little control over outcomes. Good luck!


Julienne Pham is a Technical Solution Architect for the Professional Services Engineering team. She specialises in SRM and core storage, with a focus on the VIO and BCDR space.

Demo – Dynamically Enforcing Security on a Hot Cloned SQL Server with VMware NSX

Originally posted on the Virtualize Business Critical Applications blog.


By Niran Even-Chen

VMware NSX is a software-defined solution that brings the power of virtualization to networking and security.

There are many great papers about NSX in general (for example, here and here), so the purpose of this demo is not to dive into everything NSX does. Instead, I have focused on one capability in particular: the intelligent grouping of NSX Service Composer combined with the NSX Distributed Firewall (DFW), and how to use it to make life easier for SQL DBAs and security admins. It doesn’t have to be SQL Server only; it can be any other database or application for that matter, but for this demo I am focusing on SQL Server.

First, a bit of background: NSX Service Composer allows us to create groups called “security groups.” These security groups can have dynamic membership criteria based on multiple factors: part of a VM’s computer name, its guest OS name, the VM name, Active Directory membership, or a security tag. (Tags are especially cool, as they can be set automatically by third-party tools like antivirus and IPS products, but that is for a different demo.)

These security groups are then placed inside Distributed Firewall (DFW) rules, which allows us to manage thousands of entities with just a few rules and without the need to add those entities to the security group manually.
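To make the grouping idea concrete, here is a rough sketch of what a dynamic membership definition looks like; the element names and values are illustrative assumptions in the style of the NSX-v Service Composer API, not taken from the demo itself:

```xml
<!-- Hypothetical sketch only: a security group whose membership is computed
     dynamically from a VM-name criterion (NSX-v API style). -->
<securitygroup>
  <name>SG-SQL-Test</name>
  <dynamicMemberDefinition>
    <dynamicSet>
      <dynamicCriteria>
        <key>VM.NAME</key>
        <criteria>contains</criteria>
        <value>sql-test</value>
      </dynamicCriteria>
    </dynamicSet>
  </dynamicMemberDefinition>
</securitygroup>
```

Any VM whose name matches the criterion is pulled into the group, and therefore into every DFW rule that references the group, with no manual step.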

In the demo I have created an environment with a zero-trust policy. That means everything is secured and every packet between the VMs is inspected; the inspection is done at the VM’s vNIC level, in an east-west micro-segmentation fashion. If a given traffic flow is not defined in the DFW, it is not allowed through.

This is something that wasn’t really possible to do before NSX.

Our production application database is a SQL Server database, and in the demo the DBA wants to hot-clone it for testing purposes. Obviously, the cloned SQL Server needs to have some network traffic allowed to reach it, yet it needs to be secured from everything else.

Instead of having a traditional testing firewall zone with its own physical servers, I created the rules that apply to test databases in the DFW, created a security group with dynamic membership, and nested that group in the rules. Now, any database server I clone that matches the criteria will automatically be placed in the rules. What’s really nice about this is that no traffic goes northbound to the perimeter firewall, because packet inspection is done at the vNIC of the VMs (and only the rules relevant to each VM are applied there), and no additional calls to the security admins to configure the firewall are needed after the first configuration has been made. This is a huge time saver, much more efficient in terms of resources (physical servers are now shared between zones), and a much more secure environment than having only a perimeter firewall.

As usual, any comments or feedback are welcome.

Cheers,

Niran


Niran is a VMware Staff Solutions Architect in the Enterprise Application Architecture team who is focused on creating solutions for running Microsoft operating systems and applications on vSphere and vCloud Air platforms, and on providing top-deal support to strategic customers globally.

Define SDDC Success Based on IT Outcomes

By Andrea Siviero

You’ve just deployed a new technology solution; how do you define whether or not it was a success?

People often have difficulty agreeing on the definition of “success” because there are two interconnected dimensions in which a project can be judged as a success or a failure. The first is project management success (delivering in accordance with the agreed-upon project objectives), and the second is outcome success (the amount of value the project delivers once it is complete).

Of course, getting agreement on how to define success is not always easy, but based on my day-to-day experience with customers, outcome success is desired over project management success.

Outcomes Are Worth More Than Services

Buying a service rather than an outcome is similar to paying to use equipment at a gym versus working with a personal trainer, whose job is to help you produce an outcome. The latter is worth more than the former.

VMware’s IT Outcomes support the top-priority initiatives of CIOs and impact key business metrics; you can check the dedicated website here.

In my (humble) opinion, regardless of which IT Outcomes you focus on, there are three important factors that contribute to success:

People, Processes, and Architecture.

Based on my experience, customers tend to focus on architecture and technology, sometimes paying less attention to the people and process factors which can contribute more to success. Here is a real-life example from my personal experience.

ASiviero_Simplify the Situation

I was involved with a successful project implementation where all the project’s technical objectives were achieved, but the infrastructure and operations manager did not feel the desired outcomes were achieved. And that manager was right!

After spending an hour talking with the teams, I realized what a great job the consultants had done implementing and demonstrating all the capabilities of their new SDDC.

However, due to their experience, expectations, and culture, they weren’t able to reorganize their teams and processes to take full advantage of the desired outcomes (Speed, Agility and Security).

ASiviero_Amazing SDDC

Here is a summary of the best practices I’ve suggested as a way to leverage VMware technical account managers as coaches.

1 – People

ASiviero_Small Cross Functional Team

  1. Create a blended team of skilled workers with multi-domain and multi-disciplinary knowledge and expertise, and deliver cross-team training.
  2. Encourage autonomy with common goals and operating principles, and focus on service delivery.
  3. Push them to share lessons learned with other teams and expand their use of virtual networking and security.

2 – Process

ASiviero_Application Level Visibility

  1. Decompose management and troubleshooting tasks along virtual and physical boundaries.
  2. Automate manual tasks to improve efficiency and reduce errors.
  3. Correlate the end-to-end view of application health across compute, storage, and networking.

3 – Architecture

ASiviero_Key Requirements for SDDC

  1. Build your SDDC using a design validated by experts.
  2. Implement a comprehensive data center design.
  3. Add in app and network virtualization incrementally.

Putting it all together

ASiviero_Putting it All Together

Achieving 100% of a project’s intended outcomes depends not only on the technology implementation, but also on the organizational transformation required to ensure the proper implementation of people and process innovation.


Andrea Siviero is a ten-year veteran of VMware and a senior solutions architect member of Professional Services Engineering (PSE) for the Software-Defined Datacenter (SDDC), a part of the Global Technical Solutions (GTS) team. Prior to PSE, Andrea spent three years as a pre-sales systems engineer and three years as a post-sales consultant architect for cloud computing and desktop virtualization solutions, focusing on very large and complex deployments, especially for service providers in the finance and telco sectors.

Troubleshooting Tips: Orchestrator PowerShell Plug-in

By Spas Kaloferov

Background and General Considerations

In this post we will take a look at some common issues one might experience when using the VMware vRealize Orchestrator (vRO) PowerShell plug-in, especially when using the HTTPS protocol or Kerberos authentication for the PowerShell Host (PSHost).

Most use cases require that the PowerShell script run with some kind of administrator-level permissions in the target system that vRO integrates with. Here are some of them:

  • Add, modify, or remove DNS records for virtual machines.
  • Register IP address for a virtual machine in an IP management system.
  • Create, modify, or remove a user account mailbox.
  • Execute remote PowerShell commands against multiple Microsoft Windows operating systems in the environment.
  • Run a PowerShell script (.ps1) file from within a PowerShell script file from vRO.
  • Access mapped network drives from vRO.
  • Interact with Windows operating systems that have User Access Control (UAC) enabled.
  • Execute PowerCLI commands.
  • Integrate with Azure.

When you add a PowerShell Host, you must specify a user account. That account will be used to execute all PowerShell scripts from vRO. In most use cases, like the ones above, that account must be an administrator account in the corresponding target system the script interacts with. In most cases, this is a domain-level account.

In order to successfully add the PowerShell Host with that account, and use that account when executing scripts from vRO, some prerequisites need to be met. In addition, the use cases mentioned require the PowerShell Host to be prepared for credential delegation (also known as Credential Security Service Provider [CredSSP], double-hop, or multi-hop authentication).

To satisfy the above use cases when adding a PowerShell Host in vRO, the high-level requirements are:

  • Port: 5986
  • PowerShell remote host type: WinRM
  • Transport protocol: HTTPS (recommended)
  • Authentication: Kerberos
  • User name: <Administrator_user_name>

The low-level requirements are:

  • PSHost: Configure WinRM and user token delegation
  • PSHost: Configure Windows service principal names (SPNs) for WinRM
  • PSHost: Import a CA-signed server certificate containing the Client Authentication and Server Authentication Extended Key Usage properties
  • PSHost: Configure Windows Credential Delegation using the Credential Security Service Provider (CredSSP) module
  • vRO: Edit the Kerberos Domain Realm (krb5.conf) on the vCO Appliance (Optional/Scenario specific)
  • vRO: Add the PS Host as HTTPS host with Kerberos authentication
  • vRO: Use the Invoke-Command cmdlet in your PowerShell code

Troubleshooting Issues when Adding a PSHost

To resolve most common issues when adding a PSHost for use with HTTPS transport protocol and Kerberos authentication, follow these steps:

  1. Prepare the Windows PSHost.

For more information on all the configurations needed on the PSHost, visit my blog, “Using CredSSP with the vCO PowerShell Plug-in.”

  2. After preparing the PSHost, test it to make sure it accepts the execution of remote PowerShell commands.

Start by testing simple commands. I like to use the $env:computername PowerShell command that returns the hostname of the PSHost. You can use the winrs command in Windows for the test. Here’s an example of the syntax:

winrs -r:https://lan1dc1.vmware.com:5986 -u:vmware\administrator -p:VMware1! powershell.exe $env:computername

 

Continue by testing a command that requires credential delegation. I like to use a simple command, like dir \\<Server_FQDN>\<sharename>, that accesses a share residing on a computer other than the PSHost itself. Here’s an example of the syntax:

winrs -r:https://lan1dc1.vmware.com:5986 -ad -u:vmware\administrator -p:VMware1! powershell.exe dir \\lan1dm1.vmware.com\share


Note: Make sure to specify the -ad command-line switch.

  3. Prepare the vRO so it can handle Kerberos authentication. You need this in order to use a domain-level account when adding the PSHost.

For more information about the Kerberos configuration on vRO for single domain, visit my blog, “Using CredSSP with the vCO PowerShell Plugin.”

If you are planning to add multiple PSHosts and are using domain-level accounts for each PSHost that are from different domains (e.g., vmware.com and support.vmware.com) you need to take this into consideration when preparing vRO for Kerberos authentication.

For more information about the Kerberos configuration on vRO for multiple domains, visit my blog, “How to add PowerShell hosts from multiple domains with Kerberos authentication to the same vRO.”

If you make a mistake in the configuration, you might see the following error when adding the PSHost:

Cannot locate default realm (Dynamic Script Module name : addPowerShellHost#12
tem: ‘Add a PowerShell host/item8′, state: ‘failed’, business state: ‘Error’, exception: ‘InternalError: java.net.ConnectException: Connection refused (Workflow:Import a certificate from URL with certificate alias / Validate (item1)#5)’
workflow: ‘Add a PowerShell

 

If this is the case, go back and re-validate the configurations.

  4. If the error persists, make sure the krb5.conf file is correctly formatted.

For more information about common formatting mistakes, visit my blog, “Wrong encoding or formatting of Linux configuration files can cause problems in VMware Appliances.”

  5. Make sure you use the following parameters when adding the PSHost:
    • Port: 5986
    • PowerShell remote host type: WinRM
    • Transport protocol: HTTPS (recommended)
    • Authentication: Kerberos
    • User name: <Administrator_user_name>

Note: In order to add the PSHost, the user must be a local administrator on the PSHost.

  6. If you still cannot add the host, make sure your VMware appliance can authenticate successfully using Kerberos against the domains you’ve configured. To do this, you can use the ldapsearch command and test connectivity to the domain.

Here is an example of the syntax:

vco-a-01:/opt/vmware/bin # ldapsearch -h lan1dc1.vmware.com -D "CN=Administrator,CN=Users,DC=vmware,DC=com" -w VMware1! -b "" -s base "objectclass=*"

  7. If your authentication problems continue, most likely there is a general authentication problem that might not be directly connected to the vRO appliance, such as:
    • A network related issue
    • Blocked firewall ports
    • DNS resolution problems
    • Unresponsive domain controllers

Troubleshooting Issues when Executing Scripts

Once you’ve successfully added the PSHost, it’s time to test PowerShell execution from the vRO.

To resolve the most common issues when executing PowerShell scripts from vRO, follow these steps:

  1. While in vRO, go to the Inventory tab and make sure you don’t see the word “unusable” in front of the PSHost name. If you do, remove the PSHost and add it to the vRO again.
  2. Use the Invoke an external script workflow that is shipped with vRO to test PowerShell execution commands. Again, start with a simple command, like $env:computername.

Then, proceed with a command that requires credential delegation. As before, you can use a command like dir \\<Server_FQDN>\<sharename>.

Note: This command doesn’t support credential delegation, so a slight workaround is needed to achieve this functionality. You need to wrap the command you want to execute in an Invoke-Command call.
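As a rough sketch of that workaround (the server name is taken from the ldapsearch example earlier, the share name is a placeholder, and it assumes CredSSP has already been enabled on both the client and the remote host as described in the blog post below):

```powershell
# Hypothetical names; replace with your own server, share, and account.
$server = 'lan1dc1.vmware.com'
$cred   = Get-Credential 'VMWARE\Administrator'

# Run directly, 'dir \\server\share' fails with "Access is denied" because the
# second hop has no credentials to present. Wrapped in Invoke-Command with
# CredSSP authentication, the credentials are explicitly delegated to the
# remote host, so the file-share access inside the script block succeeds:
Invoke-Command -ComputerName $server -Authentication CredSSP -Credential $cred -ScriptBlock {
    dir \\lan1dc1.vmware.com\sharename
}
```

Because this requires a reachable WinRM host with CredSSP enabled, it is a sketch to adapt rather than something to paste verbatim.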

For more information on how to achieve credential delegation from vRO, visit my blog, “Using CredSSP with the vCO PowerShell Plug-in.”

If you try to execute a command that requires credential delegation without using a workaround, you will receive an error similar to the following:

PowerShellInvocationError: Errors found while executing script <script>: Access is denied



  3. Use the SilentlyContinue PowerShell error action preference to suppress output from “noisy” commands. Such commands are those that generate some kind of non-standard output, like:
    • A progress bar showing the progress of the command execution
    • Hashes and other similar content
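Suppression can be applied per command or for the whole script; a minimal sketch (the path used is deliberately nonexistent, just to illustrate a non-terminating error):

```powershell
# Suppress the progress bar, a common source of "noisy" non-standard output
# when scripts run through the vRO PowerShell plug-in:
$ProgressPreference = 'SilentlyContinue'

# Suppress non-terminating errors from a single command by using the
# -ErrorAction parameter rather than a global preference:
Get-ChildItem -Path 'C:\DoesNotExist' -ErrorAction SilentlyContinue

# The global equivalent, if an entire script section is noisy:
$ErrorActionPreference = 'SilentlyContinue'
```

Prefer the per-command `-ErrorAction` form where you can, so that genuine failures elsewhere in the script still surface.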

Finally, avoid using code in your commands or scripts that might generate popup messages, open other windows, or open other graphical user interfaces.


Spas Kaloferov is an acting Solutions Architect in Professional Services Engineering (PSE) for the Software-Defined Datacenter (SDDC), part of the Global Technical & Professional Solutions (GTPS) team. Prior to VMware, Kaloferov focused on cloud computing solutions.

Virtualization and VMware Virtual SAN … the Old Married Couple

Don’t Mistake These Hyper-Converged Infrastructure Technologies as Mutually Exclusive

By Jonathan McDonald

I have not posted many blogs recently, as I’ve been in South Africa. I have, however, been hard at work on the latest release of VMware vSphere 6.0 Update 2 and VMware Virtual SAN 6.2. Some amazing features are included that will make life a lot easier and add some exciting new functionality to your hyper-converged infrastructure. I will not get into these features in this post, because I want to talk about one of the bigger non-technical questions that I get from customers and consultants alike. It is not one that is directly tied to the technology or architecture of the products. It is the idea that you can go into an environment and just do Virtual SAN, which from my experience is not true. I would love to know if your thoughts and experiences have shown you the same thing.

For those of you who are unfamiliar with Virtual SAN, let me say first that I am not going to go into great depth about the technology. The key is that, as a platform, it is hyper-converged, meaning it is included with the ESXi hypervisor. This makes it radically simple to actually configure—and, more importantly, use—once it is up and running.

My hypothesis is that 80 to 90% of what you have to do to design for Virtual SAN focuses on the Virtualization design, and not so much on Virtual SAN.  This is not to say the Virtual SAN design is not important, but virtualization has to be integral to the design when you are building for it. To prove this, take a look at what the standard tasks are when creating the design for the environment:

  1. Hardware selection, racking, configuration of the physical hosts
  2. Selection and configuration of the physical network
  3. Software installation of the VMware ESXi hosts and VMware vCenter server
  4. Configuration of the ESXi hosts
    • Networking (For management traffic, and for VMware vSphere vMotion, at a minimum)
    • Disks
    • Features (VMware vSphere High Availability, VMware vSphere Distributed Resource Scheduler, VMware vSphere vMotion, at a minimum)
  5. Validation and testing of the configuration

If I add the Virtual SAN-specific tasks in, you have a holistic view of what is required in most greenfield configurations:

  1. Configuration of the Virtual SAN network
  2. Turning on Virtual SAN
  3. Creating new policies (optional, as the default is in place once configured)
  4. Testing Virtual SAN

As you can see, my first point shows that the majority of the work is actually virtualization and not Virtual SAN. In fact, as I write this, I am even more convinced of my hypothesis. The first three tasks alone are really the heavy hitters for time spent. As a consultant or architect, you need to focus on these tasks more than anything. Notice above where I mention “configure” in regards to Virtual SAN, and not installation; this is because it is already a hyper-converged element installed with ESXi. Once you get the environment up and running with ESXi hosts installed, Virtual SAN needs no further installation, simply configuration. You turn it on with a simple wizard, and, as long as you have focused on the supportability of the hardware and the underlying design, you will be up and running quickly. Virtual SAN is that easy.
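As an aside, the wizard is not even the only way to turn it on. A minimal sketch with VMware PowerCLI (the cluster name is a placeholder, and it assumes you already have a Connect-VIServer session to vCenter):

```powershell
# Requires VMware PowerCLI and an existing Connect-VIServer session.
# Enable Virtual SAN on an existing cluster with automatic disk claiming:
Get-Cluster -Name 'Cluster-01' |
    Set-Cluster -VsanEnabled:$true -VsanDiskClaimMode Automatic -Confirm:$false
```

One line of configuration, which rather proves the point: the work is in the virtualization design underneath, not in Virtual SAN itself.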

Many of the arguments I get are interesting as well. Some of my favorites include:

  • “The customer has already selected hardware.”
  • “I don’t care about hardware.”
  • “Let’s just assume that the hardware is there.”
  • “They will be using existing hardware.”

My response is always that you should care a great deal about the hardware. In fact, this is by far the most important part of a Virtual SAN engagement. With Virtual SAN, if the hardware is not on the VMware compatibility list, then it is not supported. By not caring about hardware, you risk data loss and the loss of all VMware support.

If the hardware is already chosen, you should ensure that the hardware being proposed, added, or assumed as in place is proper. Get the bill of materials or the quote, and go over it line-by-line if that’s what’s needed to ensure that it is all supported.

Although the hardware selection is slightly stricter than with an average design, the process is much the same as in any traditional virtualization engagement. Virtual SAN Ready Nodes are a great approach and make this much quicker and simpler, as they offer a variety of pre-configured hardware to meet the needs of Virtual SAN. Together with the Virtual SAN TCO Calculator, they make the painful process of hardware selection a lot easier.

Another argument I hear is “If I am just doing Virtual SAN, that is not enough time.” Yes, it is. It really, really is. I have been a part of multiple engagements for which the first five tasks above are already completely done. All we have to do is come in and turn on Virtual SAN. In Virtual SAN 6.2, this is made really easy with the new wizard:

[Screenshot: the Configure Virtual SAN wizard]

Even with the inevitable network issues (not lying here; every single time there is a problem with networking), plus environmental validation, performance testing, failure testing, and testing of virtual machine creation workflows, I have never seen this piece take more than a week for a single cluster, regardless of the size of the configuration. In many cases, after three days everything is up and running and it is purely customer validation that is taking place. As a consultant or architect, don’t be afraid of the questions customers ask in regards to performance and failures. Virtual SAN provides mechanisms to easily test the environment, as well as to see what “normal” looks like.

Here are two other arguments I hear frequently:

  • “We have never done this before.”
  • “We don’t have the skillset.”

These claims are probably not 100% accurate. If you have used VMware, or you are a VMware administrator, you are probably aware of the majority of what you have to do here. For Virtual SAN specifically, this is where the knowledge needs to be grown. I suggest training, or a review of VMworld presentations on Virtual SAN, to get familiar with this piece of technology and its related terminology. VMware offers training that will get you up to speed on hyper-converged infrastructure technologies, and the new features of VMware vSphere 6.0 Update 2 and Virtual SAN 6.2.

For more information about free learnings, check out the courses below:

In addition, most of the best practices you will see are not unfamiliar, since they are vCenter- or ESXi-related. Virtual SAN Health gives an amazing, frequently refreshed overview, so any issues you may be seeing are reported there. This also takes a lot of the guesswork out of the configuration tasks; as you can see from the screenshot below, many, if not all, of the common misconfigurations are shown.

[Screenshot: Virtual SAN Health checks]

In any case, I hope I have made the argument that Virtual SAN is mostly a virtualization design that just doesn’t use traditional SANs for storage.  Hyper-converged infrastructure is truly bringing change to many customers. This is, of course, just my opinion, and I will let you judge for yourself.

Virtual SAN has quickly become one of my favorite new technologies that I have worked with in my time at VMware, and I am definitely passionate about people using it to change the way they do business. I hope this helps in any engagements that you are planning as well as to prioritize and give a new perspective to how infrastructure is being designed.


Jonathan McDonald is a Technical Solutions Architect for the Professional Services Engineering team. He currently specializes in developing architecture designs for core virtualization and software-defined storage, as well as providing best practices for upgrading and health checks for vSphere environments.

VMware App Volumes Backup Utility Fling: Introduction

First published on VMware’s End-User Computing blog

By Dale Carter, Chris Halstead and Stéphane Asselin

In December 2014, VMware released VMware App Volumes, and since then, lots of new features have been added, and people love using App Volumes. Organizations use App Volumes not only in VMware environments, but also in many Citrix environments.

However, there has been one big request from our App Volumes users: Every time I talk to people about App Volumes, they ask about how to back up their AppStacks and writable volumes. Normal virtual-machine backup tools cannot back up App Volumes AppStacks and writable volumes because the AppStacks and writable volumes are not part of the vCenter inventory unless they are connected to a user’s virtual machine (VM). As I talked to other people within VMware, I found this question coming up more and more, so I started to think of how we could help.

Last summer during an internal conference, Travis Wood, Senior Solutions Architect at VMware, and I were throwing around a few ideas of how to address this request, and we came up with the idea of an App Volumes backup tool.

Because I do not have any programming skills, I started talking with Chris Halstead, End-User-Computing Architect at VMware, about the idea for this tool. Chris was instantly excited and agreed that this would be a great solution. Chris and I also enlisted Stéphane Asselin, Senior End-User-Computing Architect, to help with creating and testing the tool.

Over the last couple of months, Chris, Stéphane, and I have been working on the tool, and today we are happy to announce that the App Volumes Backup Utility has been released as a VMware Fling for everyone to download.

Use Case and Benefits

The issue with backing up App Volumes AppStacks and writable volumes is that these VMDK files do not show up in the vCenter inventory unless they are currently in use and connected to a user’s virtual desktop. The standard backup tools do not see the VMDKs on the datastore if they are not in the vCenter inventory, and you do not want to back up these files while users are connected to their desktops.

The use case for this tool was to provide a way to make your backup tools see the AppStack and writable-volume VMDKs when they are not connected to a user’s virtual desktop. We also did not want to create other virtual machines that would require an OS; we wanted to keep the footprint and resources to a minimum, and the cost down.

The benefits of using the App Volumes Backup Utility are

  • It connects AppStacks and writable volumes to a VM that is never in use and that also does not have an OS installed.
  • The solution is quick and uses very few resources. The only resource that the tool does use is a 1 MB storage footprint for each temporary backup VM you create.
  • The tool can be used in conjunction with any standard software that backs up your current virtual infrastructure.

How Does the Tool Work?

[Screenshot: the App Volumes Backup Utility]

In the App Volumes Backup Utility, we made it easy for your existing backup solution to see and back up all of the AppStacks and writable volumes. This is accomplished in a fairly straightforward way. Using the tool, you connect to both your App Volumes Manager and vCenter. Then, using the tool, you create a backup VM. This VM is only a shell, has no OS installed, and has a very small footprint of just 1 MB.

Note: This VM will never be powered on.

After the backup VM is created, you select which AppStacks and writable volumes you want to back up, and you attach them to the backup VM using the App Volumes Backup Utility.

After the AppStacks and writable volumes are attached, you can use your standard backup solution to back up the backup VM, including the attached VMDK files. After the backup is complete, open the tool and detach the AppStacks and writable volumes from the backup VM, and delete the backup VM.

For more details on how to use the tool, see the VMware App Volumes Backup Utility Fling: Instructions.

Download the App Volumes Backup Utility Fling, and feel free to give Chris Halstead, Stéphane Asselin, and me your feedback. You can comment on the Fling site or below this blog post, or find our details on this blog site and connect with us.


Dale Carter is a Senior Solutions Architect and member of the CTO Ambassadors. Dale focuses on the End User Computing space, where he has become a subject matter expert in a number of VMware products. Dale has more than 20 years’ experience working in IT, having started his career in Northern England before moving to Spain and finally the USA. Dale currently holds a number of certifications, including VCP-DV, VCP-DT, VCAP-DTD, and VCAP-DTA. For more blog posts from Dale, visit his website at http://vdelboysview.com


Chris Halstead is an EUC Architect on the End User Computing Technical Marketing & Enablement team. He has over 20 years’ experience in the End User Computing space. Chris’ experience ranges from managing a global desktop environment for a Fortune 500 company, to managing and providing EUC professional services at a VMware partner, and most recently to working as an End User Computing SE for VMware. Chris has written four other VMware Flings and many detailed blog articles (http://chrisdhalstead.net), has been a VMware vExpert since 2012, and is active on Twitter at @chrisdhalstead


Stéphane Asselin, with his twenty years of experience in IT, is a Senior Consultant for the Global Center of Excellence (CoE) for the End-User Computing business unit at VMware. In his recent role, he had national responsibility for Canada for EUC, planning, designing, and implementing virtual infrastructure solutions and all processes involved. At VMware, Stéphane has worked on EUC pre-sales activities, internal IP, and product development, and has been the technical specialist lead on beta programs. He has also been a Subject Matter Expert for project Octopus, Horizon, View, vCOps, and ThinApp. Previously, he was with CA as a Senior Systems Engineer, where he worked on enterprise monitoring pre-sales activities as a technical specialist.

In his current role in the Global Center of Excellence at VMware, he’s one of the resources developing presentation materials and technical documentation for training and knowledge transfer to customers and peer systems engineers. Visit myeuc.net for more information.