
How to Set Up a BYOD/Mobility Policy

By TJ Vatsa, Principal Architect, VMware Americas Professional Services Organization


Smart phones surpassed one billion worldwide for the first time in 2012, and that number will likely double by 2015, according to Bloomberg. Smart phone sales are even surpassing desktop and laptop sales, according to IDC’s Worldwide Smart Connected Device Forecast Data.

Rolling out a bring-your-own-device (BYOD) policy and infrastructure to handle the influx of personal devices can be a harrowing journey if it’s not well planned. With users today demanding anytime access to business productivity apps, devices, and data on personal devices, not having a policy in place can be even more detrimental.

The first step to implementing a BYOD policy is to think about the devices themselves, how you’ll manage them, and the applications that are being used. VMware’s Horizon EUC (End User Computing) suite can act as the broker and management platform between devices and applications to ensure that the corporate network stays secure. (And users stay happy.)

The recent acquisition of AirWatch makes VMware the undisputed leader in the BYOD and mobility space, providing the most mature EUC solution portfolio on the market today. This portfolio includes key capabilities such as:

  1. MDM: Mobile Device Management
  2. MAM: Mobile Application Management
  3. MCM: Mobile Content Management
  4. MEM: Mobile Email Management
  5. SCL: Secure Content Locker
  6. And a plethora of additional features and functionalities

Now, having touched on the “why” above, let’s take a look at the “what” and “how” of the BYOD/mobility policy.

What: Devices, Applications, Management, Customizations

Below, I’ll lay out general steps to think about in your BYOD policy and tips for putting it in place. That said, every policy requires its own customizations: there’s no one-size-fits-all approach. A healthcare organization has different requirements than a financial institution would, for example.

First Step: Devices and Access
With many solutions in the market, customers and integrators can overlook design. So the burning question an architect needs to ask is: “What kind of access for what types of devices?” For the purposes of this blog, we’ll look at the three most typical categories: LAN, VPN, and public network access (see chart below). You can use the sample matrix below to better assess the access you’d like to grant.

For instance, you’ll put devices on the X axis and network access on the Y axis. Your LAN will need to be the most secure; therefore, only company-issued devices will have access to it. BYOD devices can still gain network access through VPN or a public network, just not to the LAN itself. These access and device controls need to be driven by your corporate security policies.

How: Design Dos and Don'ts (Devices & Access)

 

Second Step: Features and Capabilities
Once you’ve figured out access levels, next create a matrix to assess the desktop features and capabilities you’d like to grant. Public network settings will be the most stringent, but VPN and LAN will provide the security you need to enable most desktop features. You’ll want feature category on the X axis against network access on the Y axis, like so:

How: Design Dos & Don'ts (Features & Capabilities)

With your LAN, multimedia redirection is another consideration. If a user is accessing a desktop on the corporate network, audio and video capabilities might require provisioning on the end device. In certain cases, WAN bandwidth may cause an issue accessing corporate recordings. The same issue may happen with printing as well. Ensure that you comply with corporate IT policies while evaluating and enabling such features.

Third Step: Applications
Last, consider your application entitlements. It’s easy to restrict applications through the catalog of applications provided in the Virtual Workspace Catalog, and entitlements can be adjusted by department–so your finance department will get access to a different catalog of applications than HR would. Or you can restrict application entitlements based on security rules. For instance, Active Directory GPOs (Group Policy Objects) can be used effectively to enforce business- or department-specific security policies.
image4-Entitlements-Vatsa-4.18.14

As you can see, creating a BYOD policy doesn’t need to be daunting. If you think through the various steps, you’ll have secure network access, happy end users, and a policy that ensures a successful and mature adoption of your enterprise BYOD/mobility strategy.

I hope you find this information useful during your BYOD/mobility architecture design and deployment.


TJ Vatsa has worked at VMware for over four years, with over 19 years of expertise in the IT industry, mainly focused on enterprise architecture. He has extensive experience in professional services consulting, cloud computing, VDI/end-user computing infrastructure, SOA architecture planning and implementation, functional/solution architecture, and technical project management related to enterprise application development, content management, and data warehousing technologies. Catch up with TJ on Twitter, Facebook, or LinkedIn.

Success and Innovation Starts with the Right Platform

By Gary Hamilton, Senior Cloud Management Solutions Architect, VMware


Every day, companies like Square, Uber, Netflix, Airbnb, the Climate Corporation, and Etsy are creating innovative new business models. But they are only as innovative as the developers who build their applications and the agility of the platform on which those applications are delivered.

By using Pivotal CF, an enterprise PaaS solution powered by Cloud Foundry, companies can constantly deliver updates to their applications and scale them horizontally with no downtime, developing at the speed of customer demand rather than being inhibited by infrastructure.

Businesses, now more than ever, have a greater need for agility and speed–a solid underlying platform is the key to delivering faster services.

We all consume software as a service (SaaS) like Gmail every day via our laptops, smart phones, and tablets. Platform as a service, or PaaS, acts as the middle layer between the applications and the infrastructure (that is, compute, storage, and network). If everything is operating smoothly, the actual infrastructure on which software is built is something few users even give a second thought to. And that’s how it should be.

The concept and value of infrastructure as a service (IaaS) is easy to grasp. Being able to consume virtual machines (VMs) on demand, instead of waiting days or weeks for a physical server, solves a tangible problem. Platform as a service (PaaS) is different. PaaS solutions have traditionally been presented as delivering VMs with middleware installed, but isn’t that just a software distribution and automation problem?

And therein lies the problem. We have identified neither the real problem nor the real end user for whom PaaS is the solution, and it is therefore difficult to quantify the real value proposition of PaaS.

As stated earlier, PaaS is intended to provide that middle layer between the infrastructure and the application. PaaS should provide services that are leveraged by the application, enabling the application to deliver its services to its end user while abstracting away that middle layer and the infrastructure. When we think about PaaS in these terms, we begin to home in on the real problem and the real PaaS consumer: the developer.

However, the problem the developer faces is how to plug new services into an application on demand, as quickly as he or she is able to develop the new application. Developers are neither DBA nor Hadoop experts; they are not experts in high availability (HA) and resilience, security, or scaling and capacity management.

With PaaS, developers can use services that meet functional and non-functional requirements on demand, plugged right in. (Think of it as any database, elasticity, security, HA, or analytics on demand.) The possibilities are exciting! PaaS essentially wraps business services around an application, so applications are enterprise-ready at the click of a button, versus waiting weeks or months to complete integration and performance testing.

The PaaS model is a bit different in that consultants support a developer, who then supports a business. Conventional cloud solutions are aimed at the end user or a customer, whereas now the focus is on the applications. As far as IT goes, the focus is shifting toward innovation and away from the mentality that IT is about cost savings.

IT is No Longer About Saving Money

That’s right, IT is no longer about saving money. Sure, saving money is important, but that’s not where the real value is. The value is in new services that create new revenue streams.

Just look at the innovative companies I listed above. To succeed, they had to recognize that developers are the engine of innovation and innovation helps to drive revenue.

Consultants need to assume the role of educator so companies can understand how to become more agile in the face of a changing industry.

The problem is, many businesses see IT as a cost center and think that spending on IT isn’t money well spent. Businesses need to innovate to grow revenue. PaaS resonates with those innovative companies: they recognize that a fast and agile platform can only help them innovate and deliver new services faster. And, in turn, that leads to profitability.


Gary Hamilton is a Senior Cloud Management Solutions Architect at VMware and has worked in various IT industry roles since 1985, including support, services, and solution architecture, spanning hardware, networking, and software. Additionally, Gary is ITIL Service Manager certified and a published author. Before joining VMware, he worked for IBM for over 15 years, spending most of his time in the service management arena, with the last five years fully immersed in cloud technology. He has designed cloud solutions across Europe, the Middle East, and the US, and has led the implementation of first-of-a-kind (FOAK) solutions. Follow Gary on Twitter @hamilgar.

Create a vCOps One-Click Cluster Capacity Dashboard Part 2

By Sunny Dua, Senior Technology Consultant at VMware

As I promised in my last post, Create a One-Click Cluster Capacity Dashboard Using vCOps, I am going to share the recipe for preparing dashboards similar to the “One-Click Cluster Capacity Dashboard,” which received a lot of appreciation from the Twitterati. A number of people deployed the dashboard, and within minutes they could showcase the capacity of their vSphere clusters.

Now I want to take this one level deeper and show you how to create your own cool XMLs within the vCOps Custom UI (included with the Advanced and Enterprise editions) to build dashboards you can showcase to your CxO, IT VP, or the NOC team monitoring the virtual infrastructure. I call this the “behind the scenes” post because it gets into XML coding. Creating these XMLs is way easier than I thought, so go ahead, read on….

To begin, let’s have a look at the XML file I created for scoreboard interactions in Part 1 of this two-part series. Here is how the file is structured, along with the details of the components that make up this file. Understanding this is critical.

 

One-Click Part 2 Image 1

Hint: Open this image on a separate page to get all the details.

Now if you have spent some time reading the details of the image above, the first question you will have is “Where can I find the adapterkindKey, resourcekindKey and the Metric attrkey to make my dream dashboard?”

adapterkindKey – This is the easiest one. If you want to see metrics from your vSphere environment, you will use VMWARE as the adapter kind. If you have collectors installed for third-party products, refer to their documentation for the adapter name.

resourcekindKey and attrkey – These keys are stored in the vCOps database. The procedure to access the database is defined in VMware KB – 2011714, but I have simplified it in the steps below.

To access the vCOps database and retrieve the resourcekindKey and attrkey:

1. Open the following URL in your environment, replacing <vCOps-UI-IP> with the IP address or FQDN of your vCOps UI VM:

https://<vCOps-UI-IP>/vcops-custom/dbAccessQuery.action

2. When you see the vCOps DB Access Query page, run the following query. This will fetch the data you need. Note: Copy and paste the query starting at select and ending at ‘HostSystem’. (Ignore the asterisks.)

*********************************************************************************
select a.ADAPTER_KIND_ID, a.ADAPTER_KEY, b.RESKND_ID, b.RESKND_KEY, e.ATTRKEY_ID, e.ATTR_KEY
from AdapterKind a
inner join ResourceKind b on (b.ADAPTER_KIND_ID = a.ADAPTER_KIND_ID)
inner join AliveResource c on (c.RESKND_ID = b.RESKND_ID)
inner join ResourceAttributeKey d on (d.RESOURCE_ID = c.RESOURCE_ID)
inner join AttributeKey e on (e.ATTRKEY_ID = d.ATTRKEY_ID)

where a.ADAPTER_KEY = 'VMWARE' or b.RESKND_KEY = 'HostSystem'

*********************************************************************************

If you are looking for keys related to an adapter other than VMware, change the 'VMWARE' and 'HostSystem' values in the query; the sketch below shows one way to find the right adapter key.
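
If you do not know what key a third-party adapter uses, one quick way to find out is to list every adapter kind registered in your instance. This is only a small sketch that reuses the AdapterKind table and columns from the query above; the exact keys returned will depend on the collectors you have installed.

-- List all adapter kinds registered in this vCOps instance
select ADAPTER_KIND_ID, ADAPTER_KEY from AdapterKind;

Pick the ADAPTER_KEY value returned for your collector and substitute it into the where clause.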

3. The query will give you all the data you need, in the following format. (The screenshot below is from my lab.)

One-Click Part 2 Image 2

Here you will see the resourcekindKey and attrkey values, which will help you create your own XML for the values you want to showcase for a particular resource; a sketch of such a file follows below. Once you have done that, you just need to import the XML into the default interactions location mentioned in my last post. Now you are ready for scoreboard interactions.
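
To make this concrete, here is a minimal sketch of what such a metric-interaction XML might look like once your keys are plugged in. The element and attribute names follow the pattern described in the annotated image earlier in this post, but treat them, and the sample attrkey values, as illustrative assumptions: copy the exact schema from the working Cluster-XML.xml file from Part 1 and substitute the adapterkindKey, resourcekindKey, and attrkey values returned by your own database query.

<?xml version="1.0" encoding="UTF-8"?>
<!-- Illustrative sketch only: model the exact tag names on Cluster-XML.xml from Part 1. -->
<AdapterKinds>
  <AdapterKind adapterKindKey="VMWARE">
    <ResourceKind resourceKindKey="ClusterComputeResource">
      <!-- Each Metric line maps one attrkey from the database query to a scoreboard box;
           the bounds drive the color thresholds mentioned in Part 1. -->
      <Metric attrkey="summary|total_number_hosts" label="Hosts in Cluster"
              yellowbound="24" redbound="32" />
      <Metric attrkey="summary|total_number_vms" label="Running VMs" />
    </ResourceKind>
  </AdapterKind>
</AdapterKinds>

Once the file passes a quick XML syntax check, copy it to the reskndmetrics directory on the UI VM and import it exactly as described in Part 1.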

It’s that easy!

I hope you will use this recipe for good, and I would appreciate it if you share the XMLs you create with it. I am planning to host a repository on my blog with some easily reusable dashboards that can help those in the VMware community who are using, or planning to use, vCenter Operations Manager. As always, please share your thoughts and ideas in the comments section.


This post originally appeared on Sunny Dua’s vXpress blog. Sunny is a senior technology consultant for VMware’s Professional Services Organization, focused on India and SAARC countries. Follow Sunny on Twitter @sunny_dua.

Horizon Mirage 4.4: Game Changer for Mobile Workforce Backup and Recovery

By John Kramer, Consultant at VMware

I am excited to share what I think is a game-changing feature of the new release of Horizon Mirage: its ability to do remote backup and recovery in the cloud. This provides a huge boost in both ease of use and security of end-user data on your corporate endpoints.

Previously, using Mirage off network required some form of VPN access to connect to the Mirage servers in the data center, but new enhancements mean that’s no longer the case. With Horizon Mirage 4.4, VMware introduces the Mirage Edge Gateway. Thanks to collaboration between the Mirage development team, the VMware Light House program, and VMware Professional Services, our behind-the-scenes efforts have brought this new feature to all Mirage customers with this release.

This new feature is something I have been asking product management to consider for a while now, as more and more people no longer use VPN to access corporate resources. It’s a pain to constantly log into VPN—a complaint I’ve heard often in my years supporting sales reps who say that the VPN just gets in the way of getting their jobs done.

How Does It Work?
The Mirage Edge Gateway sits in the DMZ of the enterprise network and allows a Mirage client to securely sync with the Mirage servers in the data center whenever a laptop has an active Internet connection.

Deployment is simple. The diagram below gives you an overview of how to put all the pieces together. Most companies have an external firewall and the Mirage Edge Gateway simply sits in the DMZ and proxies Mirage traffic back to the Mirage Cluster that sits on the corporate network.

Mirage Edge Implementation Architecture

There is one main difference between an on-network and off-network Mirage client connection: when off network, all Mirage traffic is directed to the Mirage Edge Gateway, and the Mirage client will prompt the end user for credentials.

This added layer of security is based on Active Directory or LDAP credentials and a security token is granted for a specific amount of time that a network administrator determines. This means the end user could be prompted for a password once a week, twice a month, or whatever a security team deems appropriate.

Using a security token means end user credentials are not stored or cached and end users aren’t constantly bombarded with prompts for credentials to accomplish a Mirage sync. (I do recommend a longer timeout value versus a shorter timeout because you want to make sure the endpoints are backed up at the end of the day.)

Mirage on Site with Customers

A few customers recently told me that they have remote workers who rarely or never come into the office. In one particular customer’s case, a third of its workforce is completely mobile—meaning 4,000 mobile endpoints. Before Mirage, those mobile workers said they would rather come into the office than log into the VPN.

This is why the Mirage Edge Gateway is such a genius solution. Not only does Mirage allow remote users to protect the data on their endpoints, but they also don’t need to be at the office or on the VPN for backups to take place.

With the addition of the Mirage Edge Gateway, Mirage can completely replace cloud-based backup solutions like CrashPlan, Mozy, and Carbonite, with the benefit of allowing IT to securely control the solution in the corporate data center.

Commercial cloud-based backup solutions don’t typically offer the image management and layer management features that are included out of the box with Mirage. Furthermore, while Mirage secures mobile workforce data in your corporate data center, it allows both IT and end users flexibility when they need to recover data. For example, end users can recover deleted files or previous versions of files directly from Windows Explorer by right-clicking a file or folder.

Mirage Edge in Windows Explorer

 

Mirage also makes a great solution for migrating user data when it comes time for a lease refresh from old endpoints to new hardware. And if you’re still running Windows XP, Mirage can help reduce the effort of a Windows 7 migration.

With its remote backup and recovery in the cloud, Mirage means ease of use for remote users and a more secure solution for IT. The only problem now is that those remote users may never head into the office.


John Kramer is a Consultant for VMware focusing on End-User Computing (EUC) solutions. He works in the field providing real-world design guidance and hands-on implementation skills for VMware Horizon Mirage, Horizon View, and Horizon Workspace solutions for Fortune 500 businesses, government entities, and academic institutions across the United States and Canada. Read more from John at his blog: www.eucpractice.com.

New Technology Implementation Plan: Start by Stepping Back

By Jeremy Carter, VMware Senior Consultant

I’ve been working on a customer engagement recently that takes advantage of vCloud Automation Center (vCAC), which is designed to centralize and automate key IT activities, freeing the organization to focus on the needs of internal and external customers.

In our deployment of vCAC, I’ve been reminded of a key principle of IT and business transformation: the technology is only part of the process. Often a shift in technology requires a period of assessment and realignment that is as valuable as the technology itself.

When the VMware Professional Services team is brought in for an engagement, the company wants to get the best return on its investment, so the IT team is receptive to our schedule of meetings and stock-taking. But every IT organization will benefit by starting their new technology implementation plan by stepping back to survey the systems in place before integrating a new one.

We put a lot of emphasis on investigating how things are currently done, often starting by asking the teams to draw their processes for, say, creating a virtual machine. Frequently we find they have two or three different processes in place, depending on who’s making the request. This is especially common in government and higher education, where each department is likely to have its own IT team and strategy.

The unfortunate fact is that automation still scares people, who worry they’re going to be out of a job. On the contrary, if you look at any IT organization out there, you’ll see that it’s overwhelmed with tasks, many of which never get done. Automation can give teams time back to focus on what’s important to their customers.

A new implementation is a perfect opportunity to look at which processes are working the best and align all the teams to them. When a team sees that they’ll be able to provide a better experience and quicker turnaround, their resistance to automation often fades.

And luckily, vCAC provides enough flexibility that users don’t have to adopt exactly the same systems across the organization. With a college I worked with recently, we were able to build on what teams were already doing. Next we focused on handoff systems to cut down on the number of emails flying around: one for DNS, another to install the OS, and so on.

This process—of assessing current processes, building in automation and consistency, and then refocusing on customer needs—is undeniably valuable. But it does take time. It’s worth putting these reassessments on the calendar every 6 or 12 months; if that doesn’t work, I recommend taking the opportunity presented by the implementation of a new technology to keep moving toward the best your organization can be.


Jeremy Carter is a Senior Consultant with VMware and is focused on the Software Defined Data Center (SDDC). He has special expertise in cloud infrastructure and automation, and BCDR. Over his 14 years in IT he has gained a variety of experience as an architect, DBA, and developer. Prior to joining VMware, Jeremy was a Principal Architect at one of the largest VMware service providers. 

 

Create a One-Click Cluster Capacity Dashboard Using vCOps

By Sunny Dua, Senior Technology Consultant at VMware

It’s easy to set up a cluster capacity dashboard in just one click and I’ll show you how to do it with vCenter Operations Manager Custom Dashboards. In this two-part blog series, I’ll guide you through steps to get this dashboard installed in your environment and explain how to create the interaction XML.

Let’s take a look at the final dashboard in the screenshot below, the problems it will solve, and its features. Then we’ll take a closer look at the process of designing this dashboard and the related customizations you can do.

DuaOCCCD1
Here is a quick summary and the features of this dashboard:

  • The list of clusters in the environment being monitored in your Virtual Infrastructure (left pane).
  • Once you select a given cluster, you will see the Capacity Overview of the cluster (right scoreboard widget).
  • The scoreboard gives you the summary of the cluster, consolidation ratios, capacity remaining, waste, and stress data.
  • Each score’s color designates VMware configuration maximums. (For example, if the number of hosts comes out to 33, the box will turn red as vSphere 5.x currently supports a 32-node ESXi Cluster. You have the option to define these thresholds while creating the XML—I’ll share this in a moment.)
  • This dashboard can help CXOs get details about the capacity of each cluster with just a click of a button. It can also easily help them make procurement decisions.
  • Using this dashboard helps IT teams quickly decide which clusters can be used for any new Virtual Machine demand from the business, etc.
  • Finally, large service providers can use this dashboard to keep tabs on the resource utilization and available capacity.

Download Files

The beauty of this customization is that I can export this dashboard right from my vCOps instance and import it into any other vCOps instance in a few steps–and it will work like a charm. You can reuse this dashboard in your own vCOps instance if you have the vCOps Advanced or Enterprise edition, which includes the Custom UI.

Download the Cluster-XML.xml file below to see all of the metrics displayed in the scoreboard on the right as soon as a cluster is selected in the left pane. In part two of this series, I will tell you how to write this file. The Cluster-Capacity Dashboard.xml file is just a simple export of the dashboard from the Custom UI.

You can do the same for any dashboard that does not have dependencies on resource IDs (the unique identity numbers vCOps assigns to each of its inventory objects). You would take a two-step approach to use these files to achieve the final result.

Files to download:

Cluster-XML.xml

Cluster-Capacity Dashboard.xml

Step-by-Step Instructions to Place the Cluster-XML.xml in a Specific Location of UI VM

  1. Use an SCP client to log in to the UI VM using the root credentials. I am using WinSCP.
    Change the directory to the following location: /usr/lib/vmware-vcops/tomcat-enterprise/webapps/vcops-custom/WEB-INF/classes/resources/reskndmetrics
  2. Drag and drop the Cluster-XML.xml file from the system where you downloaded it into this directory, as shown in the screenshot below.
    DuaOCCCD2
  3. Right-click the target file, and then click Properties to change the permission level to 644 (read/write for the owner, read-only for everyone else), as shown below. If you prefer the command line, see the sketch after these steps.
    DuaOCCCD3
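
For those who would rather set the permissions from an SSH session on the UI VM, here is a minimal equivalent, assuming the file was copied to the path above under the same name:

# Set owner read/write and group/other read-only on the interaction XML
chmod 644 /usr/lib/vmware-vcops/tomcat-enterprise/webapps/vcops-custom/WEB-INF/classes/resources/reskndmetrics/Cluster-XML.xml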

Now that you’ve finished the first set of steps, let’s go through the second set of instructions.

Step-by-Step Instructions: Import Cluster-Capacity Dashboard.xml Dashboard in vCOps Custom UI

  1. Log into vCOps Custom UI using an ID with administrative privileges.
  2. Click the Import Option under the Dashboard Tools menu.
    DuaOCCCD4
  3. Browse to the location where you saved the Cluster-Capacity Dashboard.xml and click Import.
    DuaOCCCD3
  4. You’ll now see a dialog box indicating that your dashboard was successfully imported. Close the window and click the Dashboards menu to find a new dashboard named “CLUSTER-WISE CAPACITY OVERVIEW.”
    DuaOCCCD6
  5. Click this and you will now see the dashboard I displayed at the beginning of this post. It’s that simple! :-) Note: After importing the dashboard, if you do not see the names of your clusters in the Resources widget, edit the “Resources” widget, select “Cluster Compute Resource” in the left pane, and click OK. This will list all your clusters.

Stay tuned for part two of this article where I’ll provide steps to help create your own .XML files to build additional dashboards. This is useful for those who want a single pane to view the entire capacity of a Virtual Infrastructure.

Additional Notes and Resources

Lior Kamrat, who, like me, is part of the VMware Consulting group, has a great list of vCOps resources available on a dedicated page of his blog, IMALLVIRTUAL.COM. I would highly recommend you bookmark the page if you are using, learning about, or want to become an expert on vCenter Operations Manager. He also has a blog series on One-Click Capacity Planning Dashboards that takes another angle on capacity in your virtual datacenter. In addition, you can review other articles on vCOps on vXpress.


This post originally appeared on Sunny Dua’s vXpress blog. Sunny is a Senior Technology Consultant for VMware’s Professional Services Organization, focused on India and SAARC countries. Follow Sunny on Twitter @sunny_dua.

Go for the Gold: See vSphere with Operations Management In Action

If there’s anything we’ve learned from watching the recent Winter Olympics, it’s that world-class athletes are focused, practice endless hours, and need to be both efficient and agile to win gold.

When it comes to data centers, what sets a world-class data center apart is the software. A software-defined data center (SDDC) provides the efficiency and agility for IT to meet exploding business expectations so your business can win gold.

The VMware exclusive seminar is here! Join us to learn about the latest in SDDC.

Now through March 19, VMware TechTalk Live is hosting free, interactive half-day workshops in 32 cities across the U.S. and Canada. Attendees will get to see a live demo of vSphere with Operations Management.

The workshops will also provide a detailed overview of the key components of the SDDC architecture, as well as results of VMware customer surveys explaining how the SDDC is actually being implemented today.

Check out the TechTalk Live event information to find the location closest to you and to reserve your spot.

Horizon Workspace Tips: Increased Performance for the Internal Database

By Dale Carter, Consulting Architect, End User Computing

During my time deploying VMware Horizon Workspace 1.5 to a very large corporation with a very large Active Directory (AD) infrastructure, I noticed that the internal Horizon database would have performance issues when syncing the database with AD.

After discussing the issues with VMware Engineering we found a number of ways to improve performance of the database during these times. Below I’ve outlined the changes I made to Horizon Workspace to increase performance for the internal database.

I should note that the VMware best practice for production environments is to use an external database. However, in some deployments customers still prefer to use the internal database, for instance for a pilot deployment.

Service-va Sizing

It is very important to size this VM correctly; this is where the database sits, and it is the VM that will be doing most of the work, so do not undersize it. The following is a recommended size for the service-va, but you should monitor this VM and adjust as needed.

  • 6 vCPUs
  • 16 GB RAM

Database Audit Queue

If you have a very large user population, you will need to increase the audit queue size to handle the deluge of messages generated by entitling a large volume of users to an application at once. VMware recommends that the queue be at least three times the number of users; for example, the value of 125,000 used below would cover roughly 40,000 users. Make this change to the database with the following SQL:

  1. Log in to the console on the service-va as root
  2. Stop the Horizon Frontend service

service horizon-frontend stop

  3. Start psql as the horizon user. You will be prompted for a password.

psql -d "saas" -U horizon

  4. Increase the audit queue size

INSERT INTO "GlobalConfigParameters" ("strKey", "idEncryptionMethod", "strData")
VALUES ('maxAuditsInQueueBeforeDropping', '3', '125000');

  5. Exit psql

\q

  6. Start the Horizon Frontend service

service horizon-frontend start
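
If you want to confirm the new value took effect, you can read it back from the psql session (before exiting in step 5). This is only a quick sanity check that reuses the table and column names from the INSERT above:

-- Check the audit queue setting (run from the psql session)
SELECT "strKey", "strData" FROM "GlobalConfigParameters"
WHERE "strKey" = 'maxAuditsInQueueBeforeDropping';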

Adding Indexes to the Database

A number of indexes can be added to the internal database to improve performance when dealing with a large number of users.

The following commands can be run on the service-va to add these indexes:

  1. Log in to the console on the service-va as root
  2. Stop the Horizon Frontend service

service horizon-frontend stop

  3. Start psql as the horizon user. You will be prompted for a password.

psql -d "saas" -U horizon

  4. Create an index on the UserEntitlement table

CREATE INDEX userentitlement_resourceuuid
ON "UserEntitlement"
USING btree
("resourceUuid" COLLATE pg_catalog."default");

  5. Create a second index, on the userId column

CREATE INDEX userentitlement_userid
ON "UserEntitlement"
USING btree
("userId");

  6. Exit psql

\q

  7. Start the Horizon Frontend service

service horizon-frontend start
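
To verify that both indexes were created, you can list the indexes on the table from the psql session before exiting. This is a minimal check using PostgreSQL’s built-in pg_indexes view:

-- List all indexes on the UserEntitlement table (run from psql)
SELECT indexname FROM pg_indexes WHERE tablename = 'UserEntitlement';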

I would also like to point out that these performance issues have been fixed in the upcoming Horizon 1.8 release. For now, though, I hope this helps. Feel free to leave any questions in the comments of this post.


Dale Carter, a VMware Consulting Architect specializing in the EUC space, has worked in IT for more than 20 years. He is also a VCP4-DT, VCP5-DT and VCAP-DTD.

Think Like a Service Provider, Build in vCenter Resilience

By Jeremy Carter, VMware Senior Consultant

When I’m working on a customer engagement, we always strategize to ensure resiliency and failover protection for vCloud Automation Center (vCAC). While these considerations continue to be top priorities, there is another question that seems to be coming up more and more: “What about vCenter?”

vCenter has long been thought of as the constant, the unshakable foundation that supports business differentiators like vCAC. Although we’re happy about that reputation, it’s important for IT organizations to take the appropriate actions to protect all components up and down the stack.

This is increasingly necessary as organizations move into an IT-as-a-Service model. As more parts of the business come to rely on the services that IT provides, IT must be sure to deliver on its SLAs—and that means improved resilience for vCenter as well as the applications that sit on top of it.

Our customers have found vCenter Server Heartbeat to be an essential tool to support this effort. Heartbeat allows IT to monitor and protect vCenter from a centralized, easy-to-use web interface, and it protects against application or operator errors, operating system or hardware failures, and external events. In addition to protecting against unplanned downtime, it provides improved control during planned downtime, such as Windows updates, allowing patches to be applied without taking vCenter down.

In the past, Heartbeat was most popular with service providers who needed to securely open up vCenter to customers. Now that more IT organizations are becoming service providers themselves, I encourage them to support their internal customers at the same level and make sure vCenter resilience and protection is part of the plan.


Jeremy Carter is a VMware Senior Consultant with special expertise in BCDR and cloud automation. Although he joined VMware just three months ago, he has worked in the IT industry for more than 14 years.

Successful IaaS Deployment Requires Flexibility & Alignment

By Alex Salicrup, IT Transformation Strategist

When the CEO of a global food retailer announces his goal to triple revenues in five years, the IT organization knows it’s time to step up its plans to overhaul the IT infrastructure.

That’s just what happened in a recent customer engagement where we helped the IT organization automate provisioning, eliminate the need for a significant increase in headcount, and enable a new service provider approach to support their software-defined data center.

The engagement started off with a very aggressive, short-interval cloud service implementation plan. But halfway through the engagement we had to pivot quickly when the CIO accelerated a major service offering commitment to the business. Because of that course change, this engagement is a great example of why an IT organization’s journey needs to build toward an agile infrastructure and cross-team alignment to ensure success—even in the face of unexpected change.

The Goal

The IT department was eager to adopt an IT-as-a-Service (ITaaS) model to support its transformation for two key reasons:

  1. It would help keep IT operations humming as the company continued to expand and innovate.
  2. It would showcase the IT team’s strategic value by improving IT services to other organizations.

We first worked with the customer to establish their end-state vision, complete with a timeline that would allow employees to learn the new technology and gradually get comfortable with the ITaaS approach. The client also chose to start by introducing Infrastructure-as-a-Service (IaaS) through a pilot to automate provisioning. Four weeks into the engagement, the CIO made the announcement.

A key business unit had been preparing to roll out changes to the company’s public website and needed an infrastructure platform for its testing, development, and QA efforts. Although the business unit’s IT staff was looking at an external cloud service provider’s infrastructure platform, the CIO stood firm: the pre-launch testing was to be conducted on the new IaaS foundation currently being built.

The original plan to gradually build project momentum instantly switched to a full-out sprint. The new plan was to execute on multiple project points simultaneously, rather than one step at a time. This is where our program design, which combines organizational development with technology development to meet the desired end-state IT transformation, was key.

While we addressed the requirements for the new infrastructure, the customer’s IT infrastructure team continued to develop new functionality for the service offering, which would provide additional capabilities on top of the core infrastructure offering. Knowing success depended on a close partnership with the IT team, as well as buy-in across the business, we implemented a series of three workshops, wrapping up with a clear plan to move forward.

1. Organizational Readiness Assessment

Our team began by interviewing leaders in 30 functional areas of the IT business to score the retailer on its current level of efficiency, automation, and documentation. The areas with lower scores showed us where we needed to make improvements as we created the new infrastructure.

2. Organizational Readiness Discovery Sessions

These formal meetings with the retailer’s management team helped us reach an in-depth understanding of how the business unit operated its IT business, technically as well as operationally. After each concentrated session, we crafted a summary that outlined progress and achievements.

3. Validation Sessions

Conducted in parallel, these provided an opportunity to share observations from the previous sessions and compare notes. This also allowed the internal IT team to provide recommendations and alternatives early on and contribute to the decision-making process for next steps.

4. Validation Report

Finally, we presented a roadmap and plan for what we would build and how it would be done.

Simultaneously, we focused on integrating the organization’s diverse provisioning technologies using the findings from our readiness assessment. To get the company closer to its goal—to shorten provisioning from 10 weeks to 10 minutes—we needed to free IT from its current method of manually inputting information into one system at a time, one step at a time. After outlining a plan and identifying process areas with opportunities for automation, we successfully integrated directory and collaboration applications, security tools, and all of the IT management systems on a compressed schedule and with minimal hiccups.

This project was particularly satisfying. Given the scale and the time pressure, everyone was in sync—including the customer. And it reminded me that with careful assessment, planning, and socialization, along with a flexible mindset, IT can adapt to rapid changes—from outside or inside the business.


Alex Salicrup is currently VMware’s Program Manager for the IT Transformation Programs effort at a major global food retailer. He has more than 17 years of experience in the IT and telecommunications industry and has held an array of positions with service providers. Read more insights from Alex on the VMware Accelerate Blog.