
Monthly Archives: February 2014

Horizon Workspace Tips: Increased Performance for the Internal Database

By Dale Carter, Consulting Architect, End User Computing

During my time deploying VMware Horizon Workspace 1.5 to a very large corporation with a very large Active Directory (AD) infrastructure, I noticed that the internal Horizon database would have performance issues when syncing the database with AD.

After discussing the issues with VMware Engineering we found a number of ways to improve performance of the database during these times. Below I’ve outlined the changes I made to Horizon Workspace to increase performance for the internal database.

I should note that the VMware best practice for production environments is to use an external database. However, in some deployments, such as a pilot, customers still prefer to use the internal database.

Service-va sizing

It is very important to size this VM correctly: this is where the database sits, and it is the VM that will be doing most of the work, so it must not be undersized. The following is a recommended size for the service-va, but you should monitor this VM and adjust as needed.

  • 6 vCPUs
  • 16 GB RAM

Database Audit Queue

If you have a very large user population, you will need to increase the audit queue size to handle the deluge of messages generated when a large number of users are entitled to an application at once. VMware recommends that the queue be at least three times the number of users. Make this change to the database with the following steps:

  1. Log in to the console on the service-va as root
  2. Stop the Horizon Frontend service

service horizon-frontend stop

  3. Start psql as the horizon user. You will be prompted for a password.

psql -d "saas" -U horizon

  4. Increase the audit queue size

INSERT INTO "GlobalConfigParameters" ("strKey", "idEncryptionMethod", "strData")
VALUES ('maxAuditsInQueueBeforeDropping', '3', '125000');

  5. Exit psql

  6. Start the Horizon Frontend service

service horizon-frontend start
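As a quick sanity check on the sizing rule above (a queue at least three times the user count), here is a minimal Python sketch that computes the queue value and builds the corresponding SQL statement. The helper names are my own, not part of Horizon:

```python
# Sketch: derive the audit queue size from the user count (rule of thumb:
# at least 3x the number of users) and emit the matching SQL statement.

def audit_queue_size(num_users, factor=3):
    """Return the recommended maxAuditsInQueueBeforeDropping value."""
    return num_users * factor

def audit_queue_sql(num_users):
    """Build the INSERT statement for the internal (PostgreSQL) database."""
    size = audit_queue_size(num_users)
    return (
        'INSERT INTO "GlobalConfigParameters" '
        '("strKey", "idEncryptionMethod", "strData") '
        "VALUES ('maxAuditsInQueueBeforeDropping', '3', '{}');".format(size)
    )

# A deployment with roughly 41,000 users needs a queue of at least 123,000;
# a value like 125,000 leaves a little headroom.
print(audit_queue_size(41000))
```

This is only a convenience for working out the number; the actual change is still made through psql as shown in the steps above.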

Adding Indexes to the Database

A number of indexes can be added to the internal database to improve performance when dealing with a large number of users.

The following commands can be run on the service-va to add these indexes:

  1. Log in to the console on the service-va as root
  2. Stop the Horizon Frontend service

service horizon-frontend stop

  3. Start psql as the horizon user. You will be prompted for a password.

psql -d "saas" -U horizon

  4. Create an index on the UserEntitlement table

CREATE INDEX userentitlement_resourceuuid
ON "UserEntitlement"
USING btree
("resourceUuid" COLLATE pg_catalog."default");

  5. Create a second index

CREATE INDEX userentitlement_userid
ON "UserEntitlement"
USING btree
("userId" COLLATE pg_catalog."default");

  6. Exit psql

  7. Start the Horizon Frontend service

service horizon-frontend start
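If you need to add similar indexes for other columns, the pattern generalizes. Here is a small Python sketch (my own helper, not a Horizon tool) that builds CREATE INDEX statements in the style used above:

```python
def create_index_sql(index_name, table, column):
    """Build a btree CREATE INDEX statement in the style used above.

    The identifiers are double-quoted because the Horizon schema uses
    mixed-case table and column names, which PostgreSQL otherwise folds
    to lowercase.
    """
    return (
        'CREATE INDEX {idx}\n'
        'ON "{table}"\n'
        'USING btree\n'
        '("{col}" COLLATE pg_catalog."default");'
    ).format(idx=index_name, table=table, col=column)

# The two indexes from the steps above:
for idx, col in [
    ("userentitlement_resourceuuid", "resourceUuid"),
    ("userentitlement_userid", "userId"),
]:
    print(create_index_sql(idx, "UserEntitlement", col))
```

Note the quoting: without the double quotes, PostgreSQL would look for a lowercase table named userentitlement and fail.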

I would also like to point out that these performance issues have been fixed in the upcoming Horizon 1.8 release. For now, though, I hope this helps. Feel free to leave any questions in the comments of this post.

Dale is a Senior Solutions Architect and a member of the CTO Ambassadors. Dale focuses on the End User Computing space, where he has become a subject matter expert in a number of VMware products. Dale has more than 20 years' experience working in IT, having started his career in Northern England before moving to Spain and finally the USA. Dale currently holds a number of certifications, including VCP-DV, VCP-DT, VCAP-DTD and VCAP-DTA.

For updates you can follow Dale on Twitter at @vDelboy.

Think Like a Service Provider, Build in vCenter Resilience

By Jeremy Carter, VMware Senior Consultant

When I’m working on a customer engagement, we always strategize to ensure resiliency and failover protection for vCloud Automation Center (vCAC). While these considerations continue to be top priorities, there is another question that seems to be coming up more and more: “What about vCenter?”

vCenter has long been thought of as the constant, the unshakable foundation that supports business differentiators like vCAC. Although we’re happy for that reputation, it’s important for IT organizations to take the appropriate actions to protect all components up and down the stack.

This is increasingly necessary as organizations move into an IT-as-a-Service model. As more parts of the business come to rely on the services that IT provides, IT must be sure to deliver on its SLAs—and that means improved resilience for vCenter as well as the applications that sit on top of it.

Our customers have found vCenter Server Heartbeat to be an essential tool to support this effort. Heartbeat allows IT to monitor and protect vCenter from a centralized, easy-to-use web interface, and protects against application or operator errors, operating system or hardware failure, and external events. In addition to protecting against unplanned downtime, it provides improved control during planned downtime, such as Windows updates, allowing patching without vCenter downtime.

In the past, Heartbeat was most popular with service providers who needed to securely open up vCenter to customers. Now that more IT organizations are becoming service providers themselves, I encourage them to support their internal customers at the same level and make sure vCenter resilience and protection is part of the plan.

Jeremy Carter is a VMware Senior Consultant with special expertise in BCDR and cloud automation. Although he joined VMware just three months ago, he has worked in the IT industry for more than 14 years.

Successful IaaS Deployment Requires Flexibility & Alignment

By Alex Salicrup, IT Transformation Strategist

When the CEO of a global food retailer announces his goal to triple revenues in five years, the IT organization knows it’s time to step up its plans to overhaul the IT infrastructure.

That’s just what happened in a recent customer engagement where we helped the IT organization automate provisioning, eliminate the need for a significant increase in headcount, and enable a new service provider approach to support their software-defined data center.

The engagement started off with a very aggressive, short interval, cloud service implementation plan. But halfway through the engagement we had to quickly pivot when the CIO accelerated a major service offering commitment to the business. Because of that course change, this engagement is a great example of why an IT organization’s journey needs to build toward an agile infrastructure and cross-team alignment to ensure success—even in the face of unexpected change.

The Goal

The IT department was eager to adopt an IT-as-a-Service (ITaaS) model to support its transformation for two key reasons:

  1. It would help keep IT operations humming as the company continued to expand and innovate.
  2. It would showcase the IT team’s strategic value by improving IT services to other organizations.

We first worked with the customer to establish their end-state vision, complete with a timeline that would allow employees to learn the new technology and gradually get comfortable with the ITaaS approach. The client also chose to start by introducing Infrastructure-as-a-Service (IaaS) through a pilot to automate provisioning. Four weeks into the engagement, the CIO made the announcement.

A key business unit had been preparing to roll out changes to the company’s public website and needed an infrastructure platform for their testing, development, and QA efforts. Although the business unit’s IT staff was looking at an external cloud service provider’s infrastructure platform, the CIO stood firm: The pre-launch testing was to be conducted on the new IaaS foundation currently being built.

The original plan to gradually build project momentum instantly switched to a full-out sprint. The new plan was to execute on multiple project points simultaneously, rather than one step at a time. This is where our program design, which combines organizational development with technology development to reach the desired end-state IT transformation, proved key.

While we addressed the requirements for the new infrastructure, the customer’s IT infrastructure team continued to develop new functionality for the service offering, which would provide additional capabilities on top of the core infrastructure offering. Knowing success depended on a close partnership with the IT team, as well as buy-in across the business, we implemented a series of three workshops, wrapping up with a clear plan to move forward.

1. Organizational Readiness Assessment

Our team began by interviewing leaders in 30 functional areas of the IT business to score the retailer on its current level of efficiency, automation, and documentation. The areas with lower scores showed us where we needed to make improvements as we created the new infrastructure.

2. Organizational Readiness Discovery Sessions

These formal meetings with the retailer’s management team helped us reach an in-depth understanding of how the business unit operated its IT business, technically as well as operationally. After each concentrated session, we crafted a summary that outlined progress and achievements.

3. Validation Sessions

Conducted in parallel, these provided an opportunity to share observations from the previous sessions and compare notes. This also allowed the internal IT team to provide recommendations and alternatives early on and contribute to the decision-making process for next steps.

4. Validation Report

Finally, we presented a roadmap and plan for what we would build and how it would be done.

Simultaneously, we focused on integrating the organization’s diverse provisioning technologies using the findings from our readiness assessment. To get the company closer to its goal—to shorten provisioning from 10 weeks to 10 minutes—we needed to free IT from its current method of manually inputting information into one system at a time, one step at a time. After outlining a plan and identifying process areas with opportunities for automation, we successfully integrated directory and collaboration applications, security tools, and all of the IT management systems on a compressed schedule and with minimal hiccups.

This project was particularly satisfying. Given the scale and the time pressure, everyone was in sync—including the customer. And it reminded me that with careful assessment, planning, and socialization, along with a flexible mindset, IT can adapt to rapid changes—from outside or inside the business.

Alex Salicrup is currently VMware’s Program Manager for the IT Transformation Programs effort at a major global food retailer. He has more than 17 years' experience in the IT and telecommunications industry and has held an array of positions with service providers. Read more insights from Alex on the VMware Accelerate Blog.

BCDR Strategy: Three Critical Questions

By Jeremy Carter, VMware Senior Consultant

Organizations in every industry are increasingly dependent on technology, making increased resiliency and decreased downtime a critical priority. In fact, Forrester cites resiliency as the number three overall infrastructure priority this year.

A business continuity solution that utilizes the virtual infrastructure, like the one VMware offers, can greatly simplify the process, though IT still needs to understand how all the pieces of their business continuity and disaster recovery (BCDR) strategy fit together.

I often run up against the expectation of a one-size-fits-all BCDR solution. Instead it’s helpful to understand the three key facets of IT resilience—data protection, local application availability, and site application availability—and how different tools protect each one, for both planned and unplanned downtime (see the diagram below). If you’d like to learn more on that front, there is a free two-part webcast coming up that I recommend you sign up for here.

As important as it is to find the right tool, you only know a tool is “right” if it meets a set of clearly defined business objectives. That’s why I recommend that organizations start their BCDR planning with a few high-level questions to help them assess their business needs.

1. What is truly critical?

Almost everyone’s initial response is that they want to protect everything, but when you look at the trade-off in complexity, you’ll quickly recognize the need to prioritize.

An important (and sometimes overlooked) step in this decision-making process is to check in with the business users who will be affected. They might surprise you. For instance, I was working with a government organization where IT assumed everything was super critical. When we talked to the business users, it turned out they had all of their information on paper forms that would then be entered into the computer. If the computer went down, they would lose almost no data.

On the other hand, the organization’s 911 center’s data was extremely critical and any downtime or loss of data could have catastrophic consequences. Understanding what could be deprioritized allowed us to spend the time (and money) properly protecting the 911 center.

As we move further into cloud computing, another option is emerging: Let the application owners decide at deployment. With tools like vCloud Automation Center (vCAC), we can define resources with differing service levels. An oil company I recently worked with integrated SRM with vCAC so that any applications deployed into Gold or Silver tiers would be protected by SRM.

[Diagram: VMware data protection tools for planned and unplanned downtime]

2. Which failures are you preventing?

Each level of the data center has its preferred method of protection, although all areas also need to work together. If you’re concerned about preventing failures within the data center, maybe you rely on HA and App HA; however, if you want to protect the entire datacenter, you’ll need SRM and vSphere Replication (again, see chart).


Another helpful step in choosing the best BCDR strategy is to define a recovery time objective (RTO), recovery point objective (RPO), and maximum tolerable downtime (MTD) for both critical and non-critical systems.

These objectives are often dictated by a contract or legal regulations that require a certain percentage of uptime. When established internally, they should take many factors into account, including if data exists elsewhere and the repercussions of downtime, especially financial ones.
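To make these objectives concrete, here is a minimal Python sketch (illustrative numbers only, not from any particular engagement) that checks whether a recovery event met a system's RTO and RPO:

```python
# Sketch: compare an actual recovery event against the objectives defined
# for a system. All times are in minutes; the figures below are illustrative.

def meets_objectives(rto, rpo, recovery_time, data_loss_window):
    """Return (met_rto, met_rpo) for one recovery event.

    RTO bounds how long the service may be down; RPO bounds how much
    recent data may be lost (the gap back to the last good copy).
    """
    return recovery_time <= rto, data_loss_window <= rpo

# A hypothetical critical system: 1 hour RTO, 15 minute RPO.
met_rto, met_rpo = meets_objectives(rto=60, rpo=15,
                                    recovery_time=45, data_loss_window=20)
print(met_rto, met_rpo)  # recovery was fast enough, but too much data was lost
```

The point of the exercise is that the two objectives fail independently: a fast failover with stale replicas meets RTO while blowing RPO, and vice versa.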

The final step in the implementation of any successful IT strategy is not a question, but rather an ongoing diligence. Remember that your BCDR strategy is a living entity—you can’t just set it and forget it. Every time you make a change to the infrastructure or add a new application, you’ll need to work it into the BCDR plans. But I hope that each update will be a little easier now that you know the right questions to ask.

Want to learn more about building out a holistic business continuity and disaster recovery strategy?
Join these two great (free) webcasts that are right around the corner.

Implementing a Holistic BC/DR Strategy with VMware – Part One
Tuesday, February 18 – 10 a.m. PST

Technical Deep Dive – Implementing a Holistic BC/DR Strategy with VMware – Part Two
Tuesday, February 25 – 10 a.m. PST

Jeremy Carter is a VMware Senior Consultant with special expertise in BCDR and cloud automation. Although he joined VMware just three months ago, he has worked in the IT industry for more than 14 years.