
Tag Archives: IT Services

Architecting an Internet-of-Things (IoT) Solution

By Andrea Siviero

When Luke Skywalker asked Obi-Wan Kenobi, “What is the Force?” the answer was: “It’s an energy field created by all living things. It surrounds us and penetrates us; it binds the galaxy together.”

According to Intel, there are 15 billion devices on the Internet today; by 2020 that number will grow to 200 billion. To meet the demand for connectivity, cities are spending $41 trillion to create the infrastructure to accommodate it.

What I want to talk about in this short article is how to architect an IoT solution, and the challenges in this area.

asiveiro_iot-solution

In a nutshell, connecting “things” to a “platform,” where business apps can consume information, can be achieved in two ways:

  • Simple “direct” connection (2-Tiered approach)
  • Using a “gateway” (3-Tiered approach)

The 3-Tier Approach: Introducing IoT Gateways

You may now be wondering, “What exactly are the reasons behind introducing a gateway into your IoT architecture?”

The answer is in the challenges introduced by the simple connection:

  • Security threats; the more “things” that are out there, the more “doors” that can be opened
  • Identity management; a huge number of devices and configuration changes
  • Configurations and updates can become a complex problem

What Is/Isn’t an IoT Gateway?

An IoT Gateway:

  • Is a function, not necessarily a physical device
  • Is not just a dumb proxy that forwards data from sensors to backend services (because that would be highly ineffective in terms of performance and network utilization).
  • Performs pre-processing of information in the field—including message filtering and aggregation—before being sent to the data center.

asiveiro_filtering-aggregation
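The kind of pre-processing described above can be sketched in a few lines of Python. This is an illustrative example only, not Liota code; the sensor range and window size are made-up values:

```python
from statistics import mean

def filter_anomalies(readings, low=-40.0, high=85.0):
    """Drop readings outside the sensor's plausible range before forwarding."""
    return [r for r in readings if low <= r <= high]

def aggregate(readings, window=5):
    """Summarize raw readings into one average per window to cut upstream traffic."""
    return [mean(readings[i:i + window]) for i in range(0, len(readings), window)]

# A gateway might filter first, then aggregate, and send only the summaries upstream.
raw = [21.0, 21.2, 999.0, 20.8, 21.1, 21.3]
to_send = aggregate(filter_anomalies(raw))
```

Instead of six raw messages, only one summarized value crosses the WAN, which is exactly the performance and network-utilization argument for a smart gateway.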

Where is All This Leading?

As enterprises transform into digital businesses, they need to find ways to:

  • Improve efficiencies
  • Generate new forms of revenue
  • Deliver new and exciting customer experiences

These will be the tipping points for enterprise IoT to really take off.

For organizations that want to deploy IoT apps across multiple gateway vendors—and those that wish to buy solutions that are not locked into a single silo—IoT can bring problems and frustration.

VMware has taken the first steps in the IoT journey, making the IoT developer’s life easier by introducing Liota (Little IoT Agent). Liota is a vendor-neutral open source software development kit (SDK) for building secure IoT gateway data and control orchestration; it resides primarily on IoT gateways.

Liota is available to developers for free now at https://github.com/vmware/liota, and it works with any gateway or operating system that supports Python.

If you are attending VMworld, make a point to visit the Internet of Things Experience zone. Within this pavilion, we will have several pods showing live demos with augmented reality experiences that bring life to workflows across a variety of industries.

May the Force be with you.


Andrea Siviero is a ten-year veteran of VMware and a senior solutions architect in Professional Services Engineering (PSE) for the Software-Defined Datacenter (SDDC), part of the Global Technical Solutions (GTS) team. Prior to PSE, Andrea spent three years as a pre-sales system engineer and three years as a post-sales consultant architect for cloud computing and desktop virtualization solutions, focusing on very large and complex deployments, especially for service providers in the finance and telco sectors.

How to Add a Linux Machine as PowerShell Host in vRO

By Spas Kaloferov

Introduction

In this article we will look into the alpha version of Microsoft Windows PowerShell v6 for both Linux and Microsoft Windows. We will show how to execute PowerShell commands between Linux, Windows, and VMware vRealize Orchestrator (vRO):

  • Linux to Windows
  • Windows to Linux
  • Linux to Linux
  • vRO to Linux

We will also show how to add a Linux PowerShell host (PSHost) in vRO.

Currently, the alpha version of PowerShell v6 does not support the PSCredential object, so we cannot use the Invoke-Command cmdlet to programmatically pass credentials and execute commands from vRO, through a Linux PSHost, to other Linux or Windows machines. Likewise, we cannot execute from vRO, through a Windows PSHost, to Linux machines.

To see how we used the Invoke-Command method to do this, see my blog Using CredSSP with the vCO PowerShell Plugin (SKKB1002).

In addition to not supporting the PSCredential object, the alpha version doesn’t support WinRM. WinRM is Microsoft’s implementation of the WS-Management protocol, a standard Simple Object Access Protocol (SOAP)-based, firewall-friendly protocol that enables hardware and operating systems from different vendors to interoperate. Therefore, when adding a Linux machine as a PowerShell host in vRO, we will be using SSH instead of WinRM as the protocol of choice.

The PowerShell v6 RTM version is expected to support WinRM, so we will be able to add the Linux PSHost with WinRM instead of SSH.
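Under the SSH model, executing a command on a Linux PSHost boils down to invoking PowerShell over an SSH session. A minimal sketch of the idea in Python (the host and user names are hypothetical; note that current releases ship the binary as `pwsh`, while the early v6 alphas named it `powershell`):

```python
def build_remote_pwsh_cmd(user, host, ps_command):
    """Build the argv for running one PowerShell command on a Linux host via SSH."""
    # pwsh -Command executes the given PowerShell snippet and exits.
    return ["ssh", f"{user}@{host}", "pwsh", "-Command", ps_command]

# e.g. subprocess.run(build_remote_pwsh_cmd("root", "lnx1dm1.vmware.com", "Get-Date"))
```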

So, let’s get started.

Continue reading

Configuring VMware Identity Manager and VMware Horizon 7 Cloud Pod Architecture

By Dale Carter

With the release of VMware Horizon® 7 and VMware Identity Manager™ 2.6, it is now possible to configure VMware Identity Manager to work with Horizon Cloud Pod Architecture when deploying your desktop and application pools over multiple data centers or locations.

Using VMware Identity Manager in front of VMware Horizon deployments that use Cloud Pod Architecture makes it much easier for users to get access to their desktops and applications. Users have just one place to connect to, and they will be able to see all of their available desktops and applications. Identity Manager will direct each user to the application hosted in the best datacenter for their location. This can include SaaS applications as well as the applications available through VMware Horizon 7.

The following instructions show you how to configure VMware Identity Manager to work with VMware Horizon 7 when using Cloud Pod Architecture.

Configure View on the first connector

  1. From the VMware Identity Manager Admin Portal, select Catalog, Managed Desktop Appliances, View Application.

DCarter_View Application

  2. Choose the first Identity Manager Connector. This will redirect you to the connector View setup page.
  3. Select the check box to enable View Pools. Add the correct information to the first View Pod, and click Save.

DCarter_View Pools

  4. If there is an Invalid SSL Cert warning, click the warning and Accept.

DCarter_Invalid SSL Cert

  5. Scroll down the page and select Add View Pool.

DCarter_Add View Pool

  6. Add the correct information to the second View Pod and click Save.

DCarter_View Pod

  7. If there is an Invalid SSL Cert warning, click the warning and Accept.
  8. You will now see both View Pods configured for this connector.

DCarter_Remove View Pod

  9. Scroll to the top of the page.
  10. Select Federation.
  11. Check the Enable CPA Federation check box. Fill out the correct information, and add all of the Pods within the Federation.
    DCarter_View Pools Federation
  12. Click Save.
  13. From the Pods and Sync tab, click Sync Now.

DCarter_View Pool Sync

Configure View on all other connectors

  1. From the VMware Identity Manager Admin Portal, select Catalog, Managed Desktop Appliances, View Application.
  2. Select the next connector and follow the instructions above.
  3. Do this for every connector.

Configure network ranges

Once the VMware Horizon View setup is complete, you will need to configure Network Ranges.

  1. From the Identity Manager Admin page, select the Identity & Access Management Tab and click Setup.
  2. Select Network Ranges and click Add Network Range.

DCarter_Add Network Range

  3. Enter the required information and click Save.

DCarter_Add Network Range View Site

  4. This will need to be repeated for all network ranges, usually one for each site and one for external access.
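Conceptually, a network range maps a client's source IP to the best site for that user. The idea can be sketched with Python's ipaddress module (the site names and CIDR ranges here are made up for illustration; they are not Identity Manager's implementation):

```python
import ipaddress

# Hypothetical ranges: each site plus a catch-all range for external access.
NETWORK_RANGES = {
    "SiteA": ipaddress.ip_network("10.10.0.0/16"),
    "SiteB": ipaddress.ip_network("10.20.0.0/16"),
    "External": ipaddress.ip_network("0.0.0.0/0"),
}

def site_for_client(ip):
    """Return the name of the most specific range containing the client's IP."""
    addr = ipaddress.ip_address(ip)
    matches = [(net.prefixlen, name) for name, net in NETWORK_RANGES.items() if addr in net]
    return max(matches)[1]  # longest prefix wins
```

A client at 10.20.5.9 would be steered to SiteB, while any address outside the defined site ranges falls through to the external range.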

Dale is a Senior Solutions Architect and member of the CTO Ambassadors. Dale focuses on the End User Compute space, where he has become a subject matter expert in a number of VMware products. Dale has more than 20 years’ experience working in IT, having started his career in Northern England before moving to Spain and finally the USA. He currently holds a number of certifications, including VCP-DV, VCP-DT, VCAP-DTD, and VCAP-DTA. For more blog posts from Dale, visit his website at http://vdelboysview.com

Define SDDC Success Based on IT Outcomes

By Andrea Siviero

You’ve just deployed a new technology solution; how do you define whether or not it was a success?

People often have difficulty agreeing on the definition of “success” because there are two interconnected dimensions in which a project can be judged a success or a failure. The first is project management success (delivering in accordance with the agreed-upon project objectives), and the second is outcome success (the amount of value the project delivers once it is complete).

Of course, getting agreement on how to define success is not always easy, but based on my day-to-day experience with customers, outcome success is desired over project management success.

Outcomes Are Worth More Than Services

Buying a service rather than an outcome is similar to paying to use equipment at a gym versus working with a personal trainer, whose job is to help you produce an outcome. The latter is worth more than the former.

VMware’s IT Outcomes support the top-priority initiatives for CIOs and impact key business metrics; you can find more details on the dedicated website.

In my (humble) opinion, regardless of which IT Outcomes you focus on, there are three important factors that contribute to success:

People, Processes, and Architecture.

Based on my experience, customers tend to focus on architecture and technology, sometimes paying less attention to the people and process factors which can contribute more to success. Here is a real-life example from my personal experience.

ASiviero_Simplify the Situation

I was involved with a successful project implementation where all the project’s technical objectives were achieved, but the infrastructure and operations manager did not feel the desired outcomes were achieved. And that manager was right!

After spending an hour talking with the teams, I realized what a great job the consultants had done implementing and demonstrating all the capabilities of their new SDDC.

However, due to their experience, expectations, and culture, they weren’t able to reorganize their teams and processes to take full advantage of the desired outcomes (Speed, Agility and Security).

ASiviero_Amazing SDDC

Here is a summary of the best practices I’ve suggested as a way to leverage VMware technical account managers as coaches.

1 – People

ASiviero_Small Cross Functional Team

  1. Create a blended team of skilled workers with multi-domain and multi-disciplinary knowledge and expertise, and deliver cross-team training.
  2. Encourage autonomy with common goals and operating principles, and focus on service delivery.
  3. Push them to share lessons learned with other teams and expand their use of virtual networking and security.

2 – Process

ASiviero_Application Level Visibility

  1. Decompose management and troubleshooting tasks along virtual and physical boundaries.
  2. Automate manual tasks to improve efficiency and reduce errors.
  3. Correlate the end-to-end view of application health across compute, storage, and networking.

3 – Architecture

ASiviero_Key Requirements for SDDC

  1. Build your SDDC using a design validated by experts.
  2. Implement a comprehensive data center design.
  3. Add in app and network virtualization incrementally.

Putting it all together

ASiviero_Putting it All Together

Achieving 100% of a project’s intended outcomes depends not only on the technology implementation, but also on the organizational transformation required to ensure the proper implementation of people and process innovation.


Andrea Siviero is a ten-year veteran of VMware and a senior solutions architect in Professional Services Engineering (PSE) for the Software-Defined Datacenter (SDDC), part of the Global Technical Solutions (GTS) team. Prior to PSE, Andrea spent three years as a pre-sales system engineer and three years as a post-sales consultant architect for cloud computing and desktop virtualization solutions, focusing on very large and complex deployments, especially for service providers in the finance and telco sectors.

Troubleshooting Tips: Orchestrator PowerShell Plug-in

By Spas Kaloferov

Background and General Considerations

In this post we will take a look at some common issues one might experience when using the VMware vRealize Orchestrator (vRO) PowerShell Plug-In, especially when using the HTTPS protocol or Kerberos authentication for the PowerShell Host (PSHost).

Most use cases require that the PowerShell script run with some kind of administrator-level permissions in the target system that vRO integrates with. Here are some of them:

  • Add, modify, or remove DNS records for virtual machines.
  • Register IP address for a virtual machine in an IP management system.
  • Create, modify, or remove a user account mailbox.
  • Execute remote PowerShell commands against multiple Microsoft Windows operating systems in the environment.
  • Run a PowerShell script (.ps1) file from within a PowerShell script file from vRO.
  • Access mapped network drives from vRO.
  • Interact with Windows operating systems that have User Access Control (UAC) enabled.
  • Execute PowerCLI commands.
  • Integrate with Azure.

When you add a PowerShell Host, you must specify a user account. That account will be used to execute all PowerShell scripts from vRO. In most use cases, like the ones above, that account must be an administrator account in the corresponding target system the script interacts with. In most cases, this is a domain-level account.

In order to successfully add the PowerShell Host with that account, and use that account when executing scripts from vRO, some prerequisites need to be met. In addition, the use cases mentioned require the PowerShell Host to be prepared for credential delegation (also known as Credential Security Service Provider [CredSSP], double-hop, or multi-hop authentication).

To satisfy the above use cases for adding a PowerShell Host in vRO:

The high-level requirements are:

  • Port: 5986
  • PowerShell remote host type: WinRM
  • Transport protocol: HTTPS (recommended)
  • Authentication: Kerberos
  • User name: <Administrator_user_name>

The low-level requirements are:

  • PSHost: Configure WinRM and user token delegation
  • PSHost: Configure Windows service principal names (SPNs) for WinRM
  • PSHost: Import a CA-signed server certificate containing the Client Authentication and Server Authentication Enhanced Key Usage properties
  • PSHost: Configure Windows Credential Delegation using the Credential Security Service Provider (CredSSP) module
  • vRO: Edit the Kerberos Domain Realm (krb5.conf) on the vCO Appliance (Optional/Scenario specific)
  • vRO: Add the PS Host as HTTPS host with Kerberos authentication
  • vRO: Use the Invoke-Command cmdlet in your PowerShell code

Troubleshooting Issues when Adding a PSHost

To resolve most common issues when adding a PSHost for use with HTTPS transport protocol and Kerberos authentication, follow these steps:

  1. Prepare the Windows PSHost.

For more information on all the configurations needed on the PSHost, visit my blog, “Using CredSSP with the vCO PowerShell Plug-in.”

  2. After preparing the PSHost, test it to make sure it accepts the execution of remote PowerShell commands.

Start by testing simple commands. I like to use the $env:computername PowerShell command that returns the hostname of the PSHost. You can use the winrs command in Windows for the test. Here’s an example of the syntax:

winrs -r:https://lan1dc1.vmware.com:5986 -u:vmware\administrator -p:VMware1! powershell.exe $env:computername

 

Continue by testing a command that requires credential delegation. I like to use a simple command, like dir \\<Server_FQDN>\<sharename>, that accesses a share residing on a computer other than the PSHost itself. Here’s an example of the syntax:

winrs -r:https://lan1dc1.vmware.com:5986 -ad -u:vmware\administrator -p:VMware1! powershell.exe dir \\lan1dm1.vmware.com\share


Note: Make sure to specify the -ad command-line switch.

  3. Prepare vRO so it can handle Kerberos authentication. You need this in order to use a domain-level account when adding the PSHost.

For more information about the Kerberos configuration on vRO for single domain, visit my blog, “Using CredSSP with the vCO PowerShell Plugin.”

If you are planning to add multiple PSHosts and are using domain-level accounts for each PSHost that are from different domains (e.g., vmware.com and support.vmware.com) you need to take this into consideration when preparing vRO for Kerberos authentication.

For more information about the Kerberos configuration on vRO for multiple domains, visit my blog, “How to add PowerShell hosts from multiple domains with Kerberos authentication to the same vRO.”

If you make a mistake in the configuration, you might see an error like the following when adding the PSHost:

Cannot locate default realm (Dynamic Script Module name : addPowerShellHost#12
tem: ‘Add a PowerShell host/item8′, state: ‘failed’, business state: ‘Error’, exception: ‘InternalError: java.net.ConnectException: Connection refused (Workflow:Import a certificate from URL with certificate alias / Validate (item1)#5)’
workflow: ‘Add a PowerShell

 

If this is the case, go back and re-validate the configurations.

  4. If the error persists, make sure the krb5.conf file is correctly formatted.

For more information about common formatting mistakes, visit my blog, “Wrong encoding or formatting of Linux configuration files can cause problems in VMware Appliances.”
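For reference, a well-formed krb5.conf for a single domain looks roughly like the following. The realm and KDC names reuse the vmware.com examples from this post and are illustrative only; adjust them to your environment, and remember the file is sensitive to encoding and formatting:

```
[libdefaults]
    default_realm = VMWARE.COM

[realms]
    VMWARE.COM = {
        kdc = lan1dc1.vmware.com
        admin_server = lan1dc1.vmware.com
    }

[domain_realm]
    .vmware.com = VMWARE.COM
    vmware.com = VMWARE.COM
```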

  5. Make sure you use the following parameters when adding the PSHost:
    • Port: 5986
    • PowerShell remote host type: WinRM
    • Transport protocol: HTTPS (recommended)
    • Authentication: Kerberos
    • User name: <Administrator_user_name>

Note: In order to add the PSHost, the user must be a local administrator on the PSHost.

  6. If you still cannot add the host, make sure your VMware appliance can authenticate successfully using Kerberos against the domains you’ve configured. To do this, you can use the ldapsearch command to test Kerberos connectivity to the domain.

Here is an example of the syntax:

vco-a-01:/opt/vmware/bin # ldapsearch -h lan1dc1.vmware.com -D “CN=Administrator,CN=Users,DC=vmware,DC=com” -w VMware1! -b “” -s base “objectclass=*”
  7. If your authentication problems continue, most likely there is a general authentication problem that might not be directly connected to the vRO appliance, such as:
    • A network related issue
    • Blocked firewall ports
    • DNS resolution problems
    • Unresponsive domain controllers

Troubleshooting Issues when Executing Scripts

Once you’ve successfully added the PSHost, it’s time to test PowerShell execution from the vRO.

To resolve the most common issues when executing PowerShell scripts from vRO, follow these steps:

  1. While in vRO, go to the Inventory tab and make sure you don’t see the word “unusable” in front of the PSHost name. If you do, remove the PSHost and add it to vRO again.
  2. Use the Invoke an external script workflow that is shipped with vRO to test PowerShell command execution. Again, start with a simple command, like $env:computername.

Then, proceed with a command that requires credential delegation. As before, you can use a command like dir \\<Server_FQDN>\<sharename>.

Note: This command doesn’t support credential delegation, so a slight workaround is needed to achieve this functionality. You need to wrap the command you want to execute in an Invoke-Command call.

For more information on how to achieve credential delegation from vRO, visit my blog, “Using CredSSP with the vCO PowerShell Plug-in.”

If you try to execute a command that requires credential delegation without using a workaround, you will receive an error similar to the following:

PowerShellInvocationError: Errors found while executing script <script>: Access is denied


SKaloferov_Power Shell Error

  3. Use the SilentlyContinue PowerShell error action preference to suppress output from “noisy” commands. Such commands are those that generate some kind of non-standard output, like:
    • Progress bars showing the progress of the command execution
    • Hashes and other similar content

Finally, avoid using code in your commands or scripts that might generate popup messages, open other windows, or open other graphical user interfaces.


Spas Kaloferov is an acting Solutions Architect member of Professional Services Engineering (PSE) for the Software-Defined Datacenter (SDDC) – a part of the Global Technical & Professional Solutions (GTPS) team. Prior to VMware, Kaloferov focused on cloud computing solutions.

Virtual SAN Stretch Clusters – Real World Design Practices (Part 2)

By Jonathan McDonald

This is the second part of a two-part blog series, as there was just too much detail for a single post. For Part 1, see: http://blogs.vmware.com/consulting/2016/01/virtual-san-stretch-clusters-real-world-design-practices-part-1.html

As I mentioned at the beginning of the last blog, I want to start off by saying that all of the details here are based on my own personal experiences. It is not meant to be a comprehensive guide to setting up stretch clustering for Virtual SAN, but rather a set of pointers to show the type of detail most commonly asked for. Hopefully it will help you prepare for any projects of this type.

Continuing on with the configuration, the next set of questions regarded networking!

Networking, Networking, Networking

With sizing and configuration behind us, the next step was to enable Virtual SAN and set up the stretch clustering. As soon as we turned it on, however, we got the infamous “Misconfiguration Detected” message for the networking.

In almost all engagements I have been a part of, this has been a problem, even though the networking team said it was already set up and configured. This always becomes a fight, but it gets easier with the new Health UI and multicast checks. Generally, when multicast is not configured properly, you will see something similar to the screenshot shown below.

JMcDonald VSAN Pt2 (1)

It definitely makes the process of going to the networking team easier. The added bonus is there are no messy command line syntaxes needed to validate the configuration. I can honestly say the health interface for Virtual SAN is one of the best features introduced for Virtual SAN!

Once we had this configured properly the cluster came online and we were able to configure the cluster, including stretch clustering, the proper vSphere high availability settings and the affinity rules.

The final question that came up on the networking side was about the recommendation that L3 is the preferred communication mechanism to the witness host. The big issue when using L2 is the potential that traffic could be redirected through the witness in the case of a failure, a path that is sized for a substantially lower bandwidth requirement. A great description of this concern is in the networking section of the Stretched Cluster Deployment Guide.

In any case, the networking configuration is definitely more complex in stretched clustering because the networking spans multiple sites. Therefore, it is imperative that it is configured correctly, not only to ensure that performance is at peak levels, but to ensure there is no unexpected behavior in the event of a failure.

High Availability and Provisioning

All of this talk finally led to the conversation about availability. The beautiful thing about Virtual SAN is that with the “failures to tolerate” setting, you can ensure there are between one and three copies of the data available, depending on what is configured in the policy. Gone are the long conversations of trying to design this into a solution with proprietary hardware or software.

A difference with stretch clustering is that the maximum “failures to tolerate” is one. This is because we have three fault domains: the two sites and the witness. Logically, when you look at it, it makes sense: more than that is not possible with only three fault domains. The idea here is that there is a full copy of the virtual machine data at each site. This allows for failover in case an entire site fails as components are stored according to site boundaries.

Of course, high availability (HA) needs to be aware of this. The way this is configured from a vSphere HA perspective is to assign the percentage of cluster resources allocation policy and set both CPU and memory to 50 percent:
JMcDonald VSAN Pt2 (2)

This may seem like a LOT of resources, but when you think of it from a site perspective, it makes sense; if you have an entire site fail, resources in the failed site will be able to restart without issues.

The question came up as to whether or not we allow more than 50 percent to be assigned. Yes, we can set it to use more than half consumed, but there might be an issue if there is a failure, as all virtual machines may not start back up. This is why it is recommended that 50 percent of resources be reserved. If you do want to configure a utilization of more than 50 percent of the resources for virtual machines, it is still possible, but not recommended. This configuration generally consists of setting a priority on the most important virtual machines so HA will start up as many as possible, starting with the most critical ones. Personally, I recommend not setting above 50 percent for a stretch cluster.
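The 50 percent rule is easy to sanity-check with arithmetic: each site supplies half of the cluster's capacity, so the surviving site can only absorb a full failover if the combined running load fits within that half. A small sketch (the numbers are illustrative):

```python
def survives_site_failure(site_a_load, site_b_load, total_capacity):
    """Each site contributes half the capacity; after a site failure the
    surviving half must hold both sites' workloads."""
    surviving_capacity = total_capacity / 2
    return site_a_load + site_b_load <= surviving_capacity

# With 50 percent reserved, total load never exceeds half the capacity,
# so everything can restart; at 70 percent utilization it cannot.
```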

An additional question came up about using host and virtual machine affinity rules to control the placement of virtual machines. Unfortunately, the assignment to these groups is not easy during the provisioning process and did not fit easily into the virtual machine provisioning practices used in the environment. vSphere Distributed Resource Scheduler (DRS) does a good job of ensuring balance, but more control was needed rather than just relying on DRS to balance the load. The end goal was that during provisioning, placement in the appropriate site could be done automatically by staff.

This discussion boiled down to the need for a change to provisioning practices. Currently, it is a manual configuration change, but it is possible to use automation such as vRealize Orchestrator to automate deployment appropriately. This is something to keep in mind when working with customers to design a stretch cluster, as changes to provisioning practices may be needed.

Failure Testing

Finally, after days of configuration and design decisions, we were ready to test failures. This is always interesting because the conversation always varies between customers. Some require very strict testing and want to test every scenario possible, while others are OK doing less. After talking it over we decided on the following plan:

  • Host failure in the secondary site
  • Host failure in the primary site
  • Witness failure (both network and host)
  • Full site failure
  • Network failures
    • Witness to site
    • Site to site
  • Disk failure simulation
  • Maintenance mode testing

This was a good balance of tests to show exactly what the different failures look like. Prior to starting, I always go over the health status windows for Virtual SAN as it updates very quickly to show exactly what is happening in the cluster.

The customer was really excited about how seamlessly Virtual SAN handles errors. The key is to operationally prepare and ensure the comfort level is high with handling the worst-case scenario. When starting off, host and network failures are always very similar in appearance, but showing this is important; so I suggested running through several similar tests just to ensure that tests are accurate.

As an example, one of the most common failure tests requested (which many organizations don’t test properly) is simulating what happens if a disk fails in a disk group. Simply pulling a disk out of the server does not replicate what would happen if a disk actually fails, as a completely different mechanism is used to detect this. You can use the following commands to properly simulate a disk actually failing by injecting an error.  Follow these steps:

  1. Identify the disk device in which you want to inject the error. You can do this by using a combination of the Virtual SAN Health User Interface, and running the following command from an ESXi host and noting down the naa.<ID> (where <ID> is a string of characters) for the disk:
     esxcli vsan storage list
  2. Navigate to /usr/lib/vmware/vsan/bin/ on the ESXi host.
  3. Inject a permanent device error to the chosen device by running:
    python vsanDiskFaultInjection.pyc -p -d <naa.id>
  4. Check the Virtual SAN Health User Interface. The disk will show as failed, and the components will be relocated to other locations.
  5. Once the re-sync operations are complete, remove the permanent device error by running:
    python vsanDiskFaultInjection.pyc -c -d <naa.id>
  6. Once completed, remove the disk from the disk group and uncheck the option to migrate data. (This is not a strict requirement because data has already been migrated as the disk officially failed.)
  7. Add the disk back to the disk group.
  8. Once this is complete, all warnings should be gone from the health status of Virtual SAN.
    Note: Be sure to acknowledge and reset any alarms to green.
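The injection and cleanup commands from the steps above can be parameterized for repeated test runs. A hypothetical helper (the script path and the -p/-c/-d flags are exactly those documented in the steps; running the commands on the ESXi host is left to the operator):

```python
VSAN_BIN = "/usr/lib/vmware/vsan/bin/vsanDiskFaultInjection.pyc"

def inject_cmd(naa_id):
    """Command (run on the ESXi host) to inject a permanent error on a disk."""
    return f"python {VSAN_BIN} -p -d {naa_id}"

def clear_cmd(naa_id):
    """Command to remove the injected permanent device error after re-sync."""
    return f"python {VSAN_BIN} -c -d {naa_id}"
```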

After performing all the tests in the above list, the customer had a very good feeling about the Virtual SAN implementation and their ability to operationally handle a failure should one occur.

Performance Testing

Last, but not least, was performance testing. Unfortunately, the 10G networking was not available while I was onsite for this one. I would not recommend using a gigabit network for most configurations, but since we were not yet in full production mode, we went through many of the performance tests to get an excellent baseline of what performance would look like on the gigabit network.

Briefly, because I could write an entire book on performance testing, the quickest and easiest way to test performance is with the Proactive Tests menu which is included in Virtual SAN 6.1:

JMcDonald VSAN Pt2 (3)

It provides a really good mechanism to test different types of workloads that are most common – all the way from a basic test, to a stress test. In addition, using IOmeter for testing (based on environmental characteristics) can be very useful.

In this case, to give you an idea of performance test results, we were pretty consistently getting a peak of around 30,000 IOPS with the gigabit network with 10 hosts in the cluster. Subsequently, I have been told that once the 10G network was in place, this actually jumped up to a peak of 160,000 IOPS for the same 10 hosts. Pretty amazing to be honest.

I will not get into the ins and outs of testing, as it very much depends on the area you are testing. I did want to show, however, that it is much easier to perform a lot of the testing this way than it was using the previous command line method.

One final note I want to add in the performance testing area is that one of the key things (other than pure “my VM goes THISSSS fast” type tests), is to test the performance of rebalancing in the case of maintenance mode, or failure scenarios. This can be done from the Resyncing Components Menu:

JMcDonald VSAN Pt2 (4)

Boring by default perhaps, but when you migrate data in maintenance mode or change a storage policy, you can see the impact of resyncing components. Activity shows up either when an additional disk stripe is created for a disk, or when data is fully migrated off a host entering maintenance mode. The compliance screen will look like this:

JMcDonald VSAN Pt2 (5)

This can represent a significant amount of data movement, and watching it is incredibly useful when testing normal workloads, such as data being migrated during the enter maintenance mode workflow. Full migrations of data can be incredibly expensive, especially if the disks are large, or if you are using gigabit rather than 10G networks. Oftentimes, convergence can take a significant amount of time and bandwidth, so this allows customers to plan for the amount of data to be moved while in maintenance mode, or in the case of a failure.
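As a rough illustration of why the network matters so much here (my own back-of-envelope numbers, not figures from the engagement), the time to resync or evacuate data scales with the amount of component data and the usable link throughput:

```python
def estimated_resync_hours(data_gb: float, link_gbps: float,
                           efficiency: float = 0.7) -> float:
    """Back-of-envelope resync/evacuation time estimate.

    data_gb:    amount of component data to move, in gigabytes
    link_gbps:  raw link speed in gigabits per second
    efficiency: assumed fraction of the link usable for resync traffic
    """
    throughput_gb_per_s = link_gbps * efficiency / 8  # bits -> bytes
    return data_gb / throughput_gb_per_s / 3600

# Evacuating 2 TB from a host entering maintenance mode:
gigabit = estimated_resync_hours(2000, 1.0)   # ~6.3 hours
ten_gig = estimated_resync_hours(2000, 10.0)  # ~0.6 hours
```

The efficiency factor is an assumption; the point is simply that a full evacuation that fits comfortably in a maintenance window on 10G can take most of a working day on gigabit.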

Well, that is what I have for this blog post. Again, this is obviously not a conclusive list of all decision points or anything like that; it’s just where we had the most discussions that I wanted to share. I hope this gives you an idea of the challenges we faced, and can help you prepare for the decisions you may face when implementing stretch clustering for Virtual SAN. This is truly a pretty cool feature and will provide an excellent addition to the ways business continuity and disaster recovery plans can be designed for an environment.


Jonathan McDonald is a Technical Solutions Architect for the Professional Services Engineering team. He currently specializes in developing architecture designs for core Virtualization and Software-Defined Storage, as well as providing best practices for upgrading and health checks for vSphere environments.

VMware Horizon View Secret Weapon

Andreas LambrechtBy Andreas Lambrecht

Over the last couple of years, I have worked on many challenging Horizon View projects with different business, technical and security requirements. Finding the balance between these is not always easy. During design workshops and discussions with desktop management teams and security departments, the following questions come up over and over again:

"How can we apply different settings (e.g., clipboard redirection, printing, etc.) to the user session or desktop based on the user's location?"

“How can we apply PCoIP optimization to the user session or desktop based on the user’s location?”

Note that these can be internal (LAN or office) or external (Internet or home office) connections.

From the Horizon View architecture point of view we can create different desktop pools with different hardening policies and PCoIP settings, but this means the user gets two different virtual desktops: one for internal and one for external use. This may not be optimal for the end user experience, because users expect the same virtual desktop behavior in both working environments; when they disconnect a session in the office, they expect to continue working on the same document from home without encountering issues. And here is the challenge: balancing a positive end user experience against security policies and PCoIP optimization.

After some research on this particular use case I found a way to manage this requirement without additional costs – while using out-of-the-box Horizon View features. This service comes with the Horizon View Agent as a standard feature and offers many capabilities. I call it the Horizon View Secret Weapon.

Let’s take a closer look at what this secret weapon looks like and what it offers. There are three main ingredients:

  1. The VMware Horizon View Script Host service
  2. The system information sent to the View desktop when a user connects or reconnects
  3. A start session script (note that the intelligence of this script depends on the use case, the security requirements and the ingenuity of the script owner)

Official recommendation: Use start session scripts only if you have a particular need to configure desktop policies before a desktop session begins. As a best practice, use the View Agent's CommandsToRunOnConnect and CommandsToRunOnReconnect group policy settings to run command scripts after a desktop session is connected or reconnected. Running scripts within a desktop session will satisfy most use cases. For details, see "Running Commands on View Desktops" in the View Administration document.

For some requirements you can use the View Agent's CommandsToRunOnConnect and CommandsToRunOnReconnect group policy settings, as mentioned above. But what if a computer setting or View desktop setting needs to be configured before the desktop session starts, e.g., PCoIP optimization or clipboard redirection? This is where the secret weapon kicks in and helps fulfill the requirement.

Note: To apply PCoIP optimization there is no need to reconnect, because these settings are applied before the session and the PCoIP protocol start.

In this example I would like to cover a use case with the following technical requirements.

Internal connect

Clipboard redirection:

  • Enabled in both directions

PCoIP settings:

  • Build-to-lossless (BTL) set to off
  • Maximum image quality 80
  • Minimum image quality 40
  • Maximum frames per second 20

PCoIP Audio limit:

  • 250 kbit/s

USB access:

  • Enabled

ThinPrint:

  • Enabled

External connect

Clipboard redirection:

  • Disabled in both directions

PCoIP setting:

  • BTL set to off
  • Maximum image quality 70
  • Minimum image quality 30
  • Maximum frames per second 16

PCoIP Audio limit:

  • 50 kbit/s

USB access:

  • Disabled

ThinPrint:

  • Disabled
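The two requirement sets above can be summarized as a simple mapping. The sketch below is my own restatement in Python for clarity; the pcoip.* names follow the Teradici pcoip_admin value-name convention used by the script later in this post, and the *_enabled keys are just shorthand for the service, USB, and clipboard toggles:

```python
# Internal vs. external connection profiles, restated from the
# requirements above (illustrative mapping, not VMware-provided code).
PROFILES = {
    "internal": {
        "pcoip.enable_build_to_lossless": 0,     # BTL off
        "pcoip.maximum_initial_image_quality": 80,
        "pcoip.minimum_image_quality": 40,
        "pcoip.maximum_frame_rate": 20,
        "pcoip.audio_bandwidth_limit": 250,      # kbit/s
        "clipboard_enabled": True,
        "usb_enabled": True,
        "thinprint_enabled": True,
    },
    "external": {
        "pcoip.enable_build_to_lossless": 0,     # BTL off
        "pcoip.maximum_initial_image_quality": 70,
        "pcoip.minimum_image_quality": 30,
        "pcoip.maximum_frame_rate": 16,
        "pcoip.audio_bandwidth_limit": 50,       # kbit/s
        "clipboard_enabled": False,
        "usb_enabled": False,
        "thinprint_enabled": False,
    },
}
```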

First, we must enable the VMware Horizon View Script Host Service on each View Desktop where we want View to run the start session script (e.g., on the base image for a Linked Clone Pool). The service is disabled by default.

To configure the VMware View Script Host Service:

  1. Start the Windows Services tool by entering services.msc at the command prompt.
  2. In the details pane, right-click on the VMware View Script Host service entry and select Properties.
  3. On the General tab, in Startup type, select Automatic.
  4. If you do not want the local system account to run the start session script, select This account, and enter the details of the account to run the start session script.
  5. Click OK and exit the Services tool.

ALambrecht 1
For more details see "Dynamically Setting Desktop Policies with Start Session Scripts."

Now we need a way to differentiate between an internal and an external connection. Here we can draw on the information the Horizon View client gathers about the client system when a user connects or reconnects to the View desktop, or we can use the values sent directly from the View Connection Server. This can be any variable from the list (see the link below), but I recommend using ViewClient_Broker_DNS_Name. The reason for this choice is simple: if the user connects from outside (an external connection), authentication is handled by the View Connection Server that is paired with the View Security Server. Keep an important View architecture rule in mind: a View Connection Server paired with a View Security Server should be used exclusively for external connections.

For more details see “Client System Information Sent to View Desktops.”

Important note: The start session variables have the prefix VDM_StartSession_ instead of ViewClient_. This is important for our scripts and is described below.
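The core decision the script makes can be sketched in a few lines. This is Python for readability only (the actual start session script later in this post is VBScript), and the server names are the same placeholders used there:

```python
import os

# Connection Servers paired with a Security Server; external sessions
# are brokered through these (placeholder names -- substitute your own).
EXTERNAL_BROKERS = {"NAMEOFYOURCONNECTIONSERVER1", "NAMEOFYOURCONNECTIONSERVER2"}

def connection_type(env=os.environ) -> str:
    """Classify the session from the broker DNS name that View exposes
    to the start session script (note the VDM_StartSession_ prefix)."""
    broker = env.get("VDM_StartSession_Broker_DNS_Name", "")
    return "external" if broker.upper() in EXTERNAL_BROKERS else "internal"
```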

We are now at the point where we need to talk about the most important ingredient of the secret weapon. But before we start writing the script we need to set some registry values to make the start session script available for execution.

  1. Start the Windows Registry Editor by entering regedit at the command prompt.
  2. In the registry, navigate to HKLM\SOFTWARE\VMware, Inc.\VMware VDM\ScriptEvents.
  3. Select Edit > New > Key, and create a key named StartSession.
  4. In the navigation area, right-click StartSession, select New > String Value, and create a string value (REG_SZ) named Bullet1 with the value data wscript "C:\Program Files\VMware\VMware View\Agent\scripts\bullet1.vbs" (quote the path, since it contains spaces). This entry invokes the start session script.
  5. Click OK.

Note: As a best practice, place the start session scripts in the following location: %ProgramFiles%\VMware\VMware View\Agent\scripts. By default, this folder is accessible only by the SYSTEM and administrator accounts.

ALambrecht 2

  1. Navigate to HKLM\SOFTWARE\VMware, Inc.\VMware VDM\Agent\Configuration.
  2. Select Edit > New > DWORD (32-bit) Value, create a value named RunScriptsOnStartSession, and set its data to 1 to enable start session scripting.

ALambrecht 3

  1. Navigate to HKLM\SOFTWARE\VMware, Inc.\VMware VDM\ScriptEvents.
  2. Add a DWord value called TimeoutsInMinutes.
  3. Set a data value of 0.

ALambrecht 4

For more details see “Add Windows Registry Entries for a Start Session Script.”
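The three registry changes above can also be applied in one step with a .reg file. This is a sketch following the example values used above; the Bullet1 value name and the bullet1.vbs path are placeholders from this post, not fixed names:

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\VMware, Inc.\VMware VDM\ScriptEvents]
"TimeoutsInMinutes"=dword:00000000

[HKEY_LOCAL_MACHINE\SOFTWARE\VMware, Inc.\VMware VDM\ScriptEvents\StartSession]
"Bullet1"="wscript \"C:\\Program Files\\VMware\\VMware View\\Agent\\scripts\\bullet1.vbs\""

[HKEY_LOCAL_MACHINE\SOFTWARE\VMware, Inc.\VMware VDM\Agent\Configuration]
"RunScriptsOnStartSession"=dword:00000001
```

This is convenient when preparing the base image for a Linked Clone pool, since the same file can be imported on every desktop.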

Here is a simple script example which covers the technical requirements of this use case.

'===========================================================================
' This script dynamically applies specific session settings based on
' the user's location.
' Author: Andreas Lambrecht, VMware PSO CEMEA
' Date: October 2015
'===========================================================================
Option Explicit
On Error Resume Next

Dim objShell
Dim WshShell
Dim objWMIService
Dim strComputer
Dim colServiceList
Dim objService

strComputer = "."
Set objWMIService = GetObject("winmgmts:\\" & strComputer & "\root\cimv2")
Set objShell = CreateObject("WScript.Shell")
Set WshShell = CreateObject("WScript.Shell")

'--------------------------------------------------------------------------
' Check whether the session was authenticated and assigned by an
' "external" View Connection Server (one paired with a View Security
' Server) or by the "internal" View Connection Server, and apply the
' appropriate settings.
'--------------------------------------------------------------------------
If objShell.ExpandEnvironmentStrings("%VDM_StartSession_Broker_DNS_Name%") = "NAMEOFYOURCONNECTIONSERVER1" Or _
   objShell.ExpandEnvironmentStrings("%VDM_StartSession_Broker_DNS_Name%") = "NAMEOFYOURCONNECTIONSERVER2" Then

    '----------------------------------------------------------------------
    ' Settings for an external connection:
    ' - Stop and disable the TP Auto Connect and TP VC Gateway services
    ' - Disable enable_build_to_lossless
    ' - Set minimum_image_quality to 30
    ' - Set maximum_initial_image_quality to 70
    ' - Set maximum_frame_rate to 16
    ' - Disable "Use image settings from zero client", if available
    ' - Disable server_clipboard_state in both directions
    ' - Set audio_bandwidth_limit to 50
    ' - Exclude all USB devices
    '----------------------------------------------------------------------
    Set colServiceList = objWMIService.ExecQuery _
        ("Select * from Win32_Service where Name = 'TPAutoConnSvc' Or Name = 'TPVCGateway'")
    For Each objService In colServiceList
        If objService.State = "Running" Then
            objService.StopService()
            objService.ChangeStartMode("Disabled")
            WScript.Sleep 5000
        End If
    Next

    WshShell.RegWrite "HKLM\SOFTWARE\Policies\Teradici\PCoIP\pcoip_admin\pcoip.enable_build_to_lossless", 0, "REG_DWORD"
    WshShell.RegWrite "HKLM\SOFTWARE\Policies\Teradici\PCoIP\pcoip_admin\pcoip.minimum_image_quality", 30, "REG_DWORD"
    WshShell.RegWrite "HKLM\SOFTWARE\Policies\Teradici\PCoIP\pcoip_admin\pcoip.maximum_initial_image_quality", 70, "REG_DWORD"
    WshShell.RegWrite "HKLM\SOFTWARE\Policies\Teradici\PCoIP\pcoip_admin\pcoip.maximum_frame_rate", 16, "REG_DWORD"
    WshShell.RegWrite "HKLM\SOFTWARE\Policies\Teradici\PCoIP\pcoip_admin\pcoip.use_client_img_settings", 0, "REG_DWORD"
    WshShell.RegWrite "HKLM\SOFTWARE\Policies\Teradici\PCoIP\pcoip_admin\pcoip.server_clipboard_state", 0, "REG_DWORD"
    WshShell.RegWrite "HKLM\SOFTWARE\Policies\Teradici\PCoIP\pcoip_admin\pcoip.audio_bandwidth_limit", 50, "REG_DWORD"
    WshShell.RegWrite "HKLM\SOFTWARE\Policies\VMware, Inc.\VMware VDM\Agent\USB\ExcludeAllDevices", "true", "REG_SZ"

Else

    '----------------------------------------------------------------------
    ' Settings for an internal connection:
    ' - Start and enable the TP Auto Connect and TP VC Gateway services
    ' - Disable enable_build_to_lossless
    ' - Set minimum_image_quality to 40
    ' - Set maximum_initial_image_quality to 80
    ' - Set maximum_frame_rate to 20
    ' - Disable "Use image settings from zero client", if available
    ' - Enable server_clipboard_state in both directions
    ' - Set audio_bandwidth_limit to 250
    ' - Do not exclude USB devices
    '----------------------------------------------------------------------
    Set colServiceList = objWMIService.ExecQuery _
        ("Select * from Win32_Service where Name = 'TPAutoConnSvc' Or Name = 'TPVCGateway'")
    For Each objService In colServiceList
        If objService.State = "Stopped" Then
            objService.ChangeStartMode("Automatic")
            objService.StartService()
            WScript.Sleep 5000
        End If
    Next

    WshShell.RegWrite "HKLM\SOFTWARE\Policies\Teradici\PCoIP\pcoip_admin\pcoip.enable_build_to_lossless", 0, "REG_DWORD"
    WshShell.RegWrite "HKLM\SOFTWARE\Policies\Teradici\PCoIP\pcoip_admin\pcoip.minimum_image_quality", 40, "REG_DWORD"
    WshShell.RegWrite "HKLM\SOFTWARE\Policies\Teradici\PCoIP\pcoip_admin\pcoip.maximum_initial_image_quality", 80, "REG_DWORD"
    WshShell.RegWrite "HKLM\SOFTWARE\Policies\Teradici\PCoIP\pcoip_admin\pcoip.maximum_frame_rate", 20, "REG_DWORD"
    WshShell.RegWrite "HKLM\SOFTWARE\Policies\Teradici\PCoIP\pcoip_admin\pcoip.use_client_img_settings", 0, "REG_DWORD"
    WshShell.RegWrite "HKLM\SOFTWARE\Policies\Teradici\PCoIP\pcoip_admin\pcoip.server_clipboard_state", 1, "REG_DWORD"
    WshShell.RegWrite "HKLM\SOFTWARE\Policies\Teradici\PCoIP\pcoip_admin\pcoip.audio_bandwidth_limit", 250, "REG_DWORD"
    WshShell.RegWrite "HKLM\SOFTWARE\Policies\VMware, Inc.\VMware VDM\Agent\USB\ExcludeAllDevices", "false", "REG_SZ"

End If

Set WshShell = Nothing
Set objShell = Nothing

 

Now the secret weapon is ready for use.

Once the secret weapon is implemented and running, we need to validate that the specified settings were applied.

There are four places where we can check the functionality of our solution:

  1. VDM Debug log for StartSessionScript

ALambrecht 5

In the red rectangle we can see that the start session script was successfully applied before the PCoIP protocol starts.

For more details see "Location of VMware View log files (1027744)."

  2. PCoIP Server log for PCoIP optimization

ALambrecht 6

In this red rectangle we can see the PCoIP optimization for the external connection, as specified in the script.

For more details see "Location of VMware View log files (1027744)."

  3. The Services console (services.msc) for ThinPrint settings

ALambrecht 7

Here we can see that the ThinPrint services have been stopped and disabled, and the user is no longer able to print.

  4. The Registry Editor (regedit.exe) for USB access, PCoIP optimization and clipboard redirection

ALambrecht 8

 

ALambrecht 9

Finally we can see that all settings were applied as specified by the secret weapon.


 

Andreas Lambrecht is an experienced senior consultant and architect for VMware's Professional Services Organization, specializing in the EUC space. He has worked at VMware for the past 4 years and has more than 15 years of experience in the IT industry. Andreas is certified VCP-DCV, VCP-DT, VCAP-DTA and VCAP-DTD, and also holds the ITIL Foundation certification.

Manually Uploading Dedup Files on Mirage Branch Reflector

Eric MonjoinBy Eric Monjoin

Mirage is a great desktop administration tool; it not only lets users back up and restore their data conveniently via a web interface (and lets the IT department back up all or part of the system), but it also ensures the compliance of the user's workstation by sending and applying operating system or application updates through "Base Layers" and "Application Layers."

One problem for IT managers is how to update workstations at remote locations that are connected to the data center via a low-bandwidth network, or one saturated by other network services.

Mirage provides a first solution with "Branch Reflectors": either a PC dedicated to this task or any PC at the remote site, as long as it has sufficient disk space and stays on constantly to receive all Base Layers and Application Layers. A Branch Reflector applies updates to workstations on the same local network, which avoids every desktop pulling tedious updates from the central Mirage servers.

But sometimes, despite the use of a Branch Reflector, the available bandwidth is too small for an update, even one intended for a single desktop. The solution developed below explains how to manually update a Branch Reflector from extracted Base Layers or Application Layers.

So, the first thing to do is export the layers. This is achieved with a command line from the management server, but before exporting a layer you need to know its ID and version.

From the MMC or the Web Management Console, look at the Image Composer\Base Layer or Image Composer\App Layer and note the ID and Version of the layer you want to extract.

EMonjoin Dedup 1

 

Then, open a command line in your Mirage Management Server or Mirage Server and run the following command:

# “c:\Program Files\Wanova\Mirage Management Server\Wanova.Server.Tools.exe” LayerExtract \\MirageStorage ID Version Path Target_Path

EMonjoin Dedup 2

Once you have exported the layers, copy all the files to an appropriate storage device, such as an external HDD or USB stick, send or bring it to the remote location, and copy the files to a folder on your Branch Reflector.

Note: If the branch reflector is not dedicated but runs on a user desktop, I would recommend hiding the folder where you put the files.

In the meantime, we have to configure the factory policy to scan this folder, so Mirage knows the files are already on the Branch Reflector and does not try to push them again.

  1. On the Mirage Management Server, open a Command Prompt and type the following command:
# "c:\Program Files\Wanova\Mirage Management Server\Wanova.Server.Cli.exe" 127.0.0.1
  2. In the CLI type: GetFactoryPolicy c:\factory_policy.xml
  3. Open and edit the file c:\factory_policy.xml
  4. Find the "ExtraDedupArea" area and modify the section to add the directory used to receive all dedup files on the Branch Reflector:
  <ExtraDedupArea>
    <IncludeList>
      <Directory path="%windows%.old" recursive="true" filter="*" />
      <Directory path="%systemvolume%\MirageDedup" recursive="true" filter="*" />     <<= Line added
    </IncludeList>
    <ExcludeList />
  </ExtraDedupArea>
  5. Import the new rules into the Factory Policy by typing in the CLI: setFactoryPolicy c:\factory_policy.xml

This generally needs to be done only once, unless you have a really big update with new applications to push to desktops.
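If you prefer to script the policy edit rather than hand-edit the XML, the change in step 4 is a one-element insertion. A minimal sketch with Python's standard xml.etree follows; the MirageDedup path is the example folder from above, and the policy snippet is a trimmed illustration, not a full factory policy:

```python
import xml.etree.ElementTree as ET

def add_dedup_dir(policy_xml: str, folder: str) -> str:
    """Insert a <Directory> entry into the ExtraDedupArea include list
    of an exported Mirage factory policy."""
    root = ET.fromstring(policy_xml)
    include = root.find(".//ExtraDedupArea/IncludeList")
    ET.SubElement(include, "Directory",
                  path=folder, recursive="true", filter="*")
    return ET.tostring(root, encoding="unicode")

# Trimmed example of an exported policy (illustration only):
policy = """<FactoryPolicy>
  <ExtraDedupArea>
    <IncludeList>
      <Directory path="%windows%.old" recursive="true" filter="*" />
    </IncludeList>
    <ExcludeList />
  </ExtraDedupArea>
</FactoryPolicy>"""

updated = add_dedup_dir(policy, r"%systemvolume%\MirageDedup")
```

The updated XML can then be re-imported with setFactoryPolicy as in step 5.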


Eric Monjoin joined VMware France in 2009 as a PSO Senior Consultant after spending 15 years at IBM as a Certified IT Specialist. Passionate about new challenges and technology, Eric has been a key leader in the VMware EUC practice in France. Recently, Eric moved to the VMware Professional Services Engineering organization as a Technical Solutions Architect. Eric is certified VCP6-DT, VCAP-DTA and VCAP-DTD, and was awarded vExpert for the 4th consecutive year.

VMworld 2015 US Sneak Peek: Successful Virtual SAN Evaluation/Proof-Of-Concepts

Julienne_PhamBy Julienne Pham

This is the first in a series of previews of our VMware Professional Services speaking sessions at VMworld 2015 US starting August 30th in San Francisco. Our Technical Architects will present “deep-dive” sessions in their areas of expertise. Read these previews, register early and enjoy.

Sneak Peek of Session STO4572: Successful Virtual SAN Evaluation/Proof-Of-Concepts

This is an update to last year’s Virtual SAN proof-of-concept (POC) talk. A lot has changed in the last year, and the idea of this session is to fill you in on all the potential “gotchas” you might encounter when trying to evaluate VMware Virtual SAN.

Cormac Hogan, Corporate Storage Architect, and Julienne Pham, Technical Solution Architect of VMware, will cover everything you need to know, including how to conduct various failure scenarios, and get the best performance. Thinking about deploying Virtual SAN? Then this session is for you.

This session will share key tips on how to conduct a successful Virtual SAN proof-of-concept. It will show you how to correctly set up a Virtual SAN environment (hosts, storage and networking), verify it is operating correctly, and then test Virtual SAN's full functionality. This session will also highlight how to verify that VM Storage Policies are working correctly, as they are an integral part of SDS and Virtual SAN.

We will also discuss how Virtual SAN handles failures, and how to test whether it is handling events correctly. In addition, we will cover numerous monitoring tools that can be used during a POC, such as the Ruby vSphere Console, the VSAN Observer web-based analysis tool, and the new Virtual SAN Health Service plug-in. After attending this session, you will be empowered to implement your own Virtual SAN POC.

 Learn more at VMworld 2015 US in San Francisco

STO4572: Successful Virtual SAN Evaluation/Proof-Of-Concepts

Monday, Aug. 31, 8:30 – 9:30 AM

 Speakers:

Cormac Hogan, Corporate Storage Architect, VMware

Julienne Pham, Technical Solution Architect, VMware


Julienne Pham is a Technical Solution Architect for the Professional Services Engineering team. She specializes in SRM and core storage, with a focus on VIO and the BCDR space.

VMware App Volumes Multi-vCenter and Multi-Site Deployments

By Dale Carter

With the release of VMware App Volumes 2.9 comes one of the most requested features so far: multi-vCenter support. It is now possible to manage virtual desktops and AppStacks from multiple vCenter instances within the same App Volumes Manager.

The following graphic shows how this works:

DCarter App Volumes

With this new feature App Volumes can now be used to support the Horizon Block and Pod architecture with just one App Volumes manager, or cluster of managers.

Now that multiple vCenter instances are supported, I started to wonder whether this new capability could be leveraged across multiple sites to support multi-site deployments.

After speaking with the App Volumes Product Manager, I am happy to confirm that, “Yes,” you can use this new feature to support multi-site deployments – as long as you are using the supported SQL database.

The architecture for this type of deployment would look like this:

DCarter App Volumes 2

 

I would recommend that App Volumes Managers at each site be clustered. Read the following blog to learn how to cluster App Volumes Managers: http://blogs.vmware.com/consulting/2015/02/vmware-appvolumes-f5.html

Although 2.9 is just a point release, multi-vCenter support is one of the biggest features added so far.

To add a second―or more―vCenter instance to App Volumes, follow these simple steps:

  1. Log in to the App Volumes Manager
  2. Select Configuration, then Machine Manager, and then click Add Machine Manager
    DCarter App Volumes 3
  3. Enter the vCenter information and click Save.
    DCarter App Volumes 4
  4. Follow these steps for each vCenter instance you want to add.

Dale is a Senior Solutions Architect and member of the CTO Ambassadors. Dale focuses on the End User Computing space, where he has become a subject matter expert in a number of VMware products. Dale has more than 20 years of experience in IT, having started his career in Northern England before moving to Spain and finally the USA. He currently holds a number of certifications including VCP-DV, VCP-DT, VCAP-DTD and VCAP-DTA.

For updates you can follow Dale on Twitter @vDelboy.