By Lavanya Goruganthu
Out of the many tools that my team built at VMware, there is one that unifies tests from existing sources into a JSON format with a common specification. It’s called Unified Test Services (UTS) and allows people across the company to streamline their software tests. We’ll talk more about that in a separate post. Today, we focus on how we improved the performance of the app’s dashboards, thereby improving the user experience.
As the number of people using these dashboards grew and we continued adding new features, performance began to degrade, and we had to apply a range of optimization techniques to keep the dashboards fast and responsive.
Before we get into the specifics of our techniques, let’s start with a brief introduction to website performance and scalability.
Performance vs. scalability
Performance and scalability are essential aspects of software design. They are often confused, and although they are interrelated and affect each other, they are different.
What does “website performance” mean?
Website performance measures how fast the pages of a website load and display in the web browser. Good website performance is a cornerstone of any successful website because it’s the first event that all visitors experience. First impressions influence how users feel about a website, its associated business or organization, and whether they convert as a customer, buy a product, or bounce away from the website.
What does “scalability” mean?
Scalability is a system’s ability to expand by adding extra hardware or upgrading the existing hardware without major application modifications. A scalable system should handle large amounts of users, data, or traffic without disrupting the end-user services. Scalability increases overall system performance, prevents system downtime, and ensures a seamless user experience. That results in an increase in customer engagement, a higher retention rate, revenue growth, and cost reduction for the organization.
What causes website slowness?
A site speed test can identify website slowness. Load times can be slow for many reasons, from slow server response to oversized images to the number of redirects configured.
What is a good page load time?
Google recommends a page load time of less than two seconds. However, many websites struggle to meet this standard.
Note that web pages don’t load all at once—they load piece by piece. Website speed varies from webpage to webpage and from user to user, depending on each page’s attributes and the user’s browser, device, and internet speed. So, even though it’s important to measure objective data about how long a page takes to load, this isn’t the same as how users see it in the real world.
Where and how we improved performance
Solve issues related to bundle size
You can use tools like WebPageTest or Chrome’s Lighthouse dev tool to audit a page and get a performance analysis report that lists problems and ways to fix them. Opportunities like “Reduce unused JavaScript,” “Avoid an excessive DOM size,” and “Reduce JavaScript execution time” indicate that the site’s bundle size (the total amount of code the browser has to download, parse, and hold in memory) needs to be decreased.
Here’s an example Lighthouse report used to diagnose bundle size issues:
Use tree shaking
Tree shaking, also known as “live code inclusion,” is a way to optimize JavaScript code. Over time, as apps add more dependencies, some of them are likely to stop being used. This leads to “bloat”: unused code that wastes resources and slows down your app. The goal of tree shaking is to remove any JavaScript that isn’t being used, so that only executable code is sent to the user. This makes the app bundle smaller, so it takes less time to download and uses less memory in the browser.
Here is a diagram of how tree shaking is implemented:
We used the Angular CLI to incorporate tree shaking into our project. By default, the Angular CLI uses webpack to bundle the script files, and webpack performs the tree shaking. This helped us automatically remove unused code from the final dist bundle.
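To make the mechanism concrete, here is a minimal sketch (the module and function names are hypothetical, not from UTS) of why named exports and static imports let a bundler shake out unused code:

```typescript
// Hypothetical utility module (math-utils): in a real module these
// functions would be export-ed and consumed via named imports, e.g.
//   import { sum } from './math-utils';
// Because the import graph is static, a bundler such as webpack can
// see which exports are never imported and drop them from the bundle.
function sum(xs: number[]): number {
  return xs.reduce((acc, x) => acc + x, 0);
}

// If no consumer ever imports `mean`, tree shaking removes it
// from the production bundle even though it exists in the source.
function mean(xs: number[]): number {
  return sum(xs) / xs.length;
}

console.log(sum([1, 2, 3])); // prints 6
```

This is why side-effect-free, statically analyzable modules matter: dynamic patterns like computed property access can defeat the bundler’s dead-code analysis.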
Minify and combine files
When you minify a file, you remove unnecessary formatting, white space, and code. Since extra spaces, line breaks, and indentations add to the size of the webpage, it’s important to get rid of them. This ensures the pages are as lean as they can be. If the site uses multiple CSS and JavaScript files, they can be combined into one. The fewer elements there are on a page, the fewer HTTP requests a browser needs to make to render the page, which means it will load faster.
In our dashboards, we enabled minification through the Angular CLI’s production build. Running the command ng build --prod creates a dist folder with bundled and minified JS and CSS files. As a result, the webpage’s load speed and bandwidth usage improved significantly, which in turn improved our site speed.
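For reference, the options that drive this behavior live in the production configuration in angular.json. A rough sketch (option names are real Angular CLI options, but the exact shape varies by CLI version) looks like:

```json
{
  "configurations": {
    "production": {
      "optimization": true,
      "outputHashing": "all",
      "sourceMap": false,
      "extractCss": true,
      "buildOptimizer": true
    }
  }
}
```

Here optimization turns on minification of JS and CSS, outputHashing adds cache-busting hashes to bundle filenames, and buildOptimizer enables additional Angular-specific tree shaking.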
Limit blocking requests
A render-blocking request is a request for an asset that the browser must download and process before it can display the page for the first time. By default, JavaScript and CSS assets block rendering. By limiting the number of blocking requests a website makes and reducing the number of JS scripts and CSS files, the browser can reach the first paint of the page sooner, making page loading feel faster.
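As an illustration (the file names are hypothetical), the difference between a blocking and a non-blocking script comes down to markup like this:

```html
<!-- Render-blocking: HTML parsing pauses until this script
     downloads and executes -->
<script src="analytics.js"></script>

<!-- Non-blocking: defer downloads the script in parallel and runs
     it only after the document has been parsed -->
<script src="analytics.js" defer></script>

<!-- Stylesheets for non-matching media don't block first paint -->
<link rel="stylesheet" href="print.css" media="print">
```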
Use lazy loading
Lazy loading optimizes a website’s loading time. With it, the website first loads only the required content and defers the rest until the user needs it. The page opens faster because the browser loads only part of it at a time; when a trigger fires (for example, the user scrolls near the deferred content), the extra content is fetched.
Using Angular Router, we had already implemented infinite scrolling in our dashboards to load data from the test pipeline. We used the loadChildren
property to set up lazy loading. This property loads the nested route subtree when a user navigates to a URL within our website. This has helped speed up the application by approximately 10 seconds by loading only the necessary components.
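A route configuration using loadChildren looks roughly like the sketch below (module and path names are hypothetical, not our actual routes; this is a configuration sketch, not runnable on its own):

```typescript
import { Routes } from '@angular/router';

// Hypothetical Angular route table: the reports subtree is split
// into its own chunk and fetched only on first navigation to
// /reports. DashboardComponent is a placeholder component.
const routes: Routes = [
  { path: '', component: DashboardComponent },
  {
    path: 'reports',
    // The dynamic import() tells the bundler to emit this module
    // as a separate, lazily loaded bundle.
    loadChildren: () =>
      import('./reports/reports.module').then(m => m.ReportsModule),
  },
];
```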
Apply the right cache policies
When data is cached, a copy is stored where it can be retrieved faster than by going to the original source. But using cached data can be problematic because it might not have the same updates as the source, and so the data might be stale.
The original system architecture for our dashboards rested on an important assumption: that the UI service would run as a single pod (the smallest deployable Kubernetes unit, which runs one or more containers). Once we added more pods to handle increased user traffic, we found that, because of in-memory caching, each instance maintained its own cache with its own copy of the data, and those copies could drift out of sync. Successive API calls from the UI might hit different instances and return different data, so multiple UI clients viewing the same page at the same time could see different results. Having multiple caches made the problem worse, because each was accessed differently and needed its own invalidation strategy, that is, the rules the cache uses to decide which items to keep and which to evict when they expire.
As a first step toward fixing the data discrepancy problems, we removed the unnecessary in-memory caches and optimized the database queries tied to the APIs to return results faster. In the future, we plan on using a distributed cache like Redis or Memcached to ensure that responses are always the same when more than one instance is running.
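While our long-term fix is a distributed cache, the invalidation idea itself can be shown with a minimal sketch assuming a simple time-to-live (TTL) policy (this is illustrative TypeScript, not the UTS code):

```typescript
// Minimal in-memory cache with time-to-live (TTL) invalidation:
// entries older than ttlMs are treated as stale and dropped on read.
class TtlCache<V> {
  private store = new Map<string, { value: V; expiresAt: number }>();

  // The clock is injectable so expiry behavior is easy to test.
  constructor(private ttlMs: number, private now: () => number = Date.now) {}

  get(key: string): V | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (this.now() >= entry.expiresAt) {
      this.store.delete(key); // expired: invalidate on read
      return undefined;       // caller should refetch from the source
    }
    return entry.value;
  }

  set(key: string, value: V): void {
    this.store.set(key, { value, expiresAt: this.now() + this.ttlMs });
  }
}
```

A distributed cache like Redis applies the same idea with a shared store and per-key TTLs, so every service instance sees one copy of the data instead of its own.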
The chart below shows the improvement in API response time:
Improve rendering time by reducing API bottlenecks
If a webpage makes multiple requests to API endpoints, it’s important to ensure that waiting for API responses doesn’t delay initial page rendering and/or impact performance after a page is loaded. For server-configuration-driven UIs, the response time of the configuration is vital for displaying the first view of the page. You can use server-side rendering (SSR) to paint the first page accurately. You can use Chrome DevTools to investigate how long APIs take to respond.
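One simple way to keep API latency from stacking up, sketched below with hypothetical fetchers, is to issue independent requests concurrently rather than one after another:

```typescript
// Hypothetical page bootstrap: awaiting three dashboard APIs
// sequentially would add their latencies together; Promise.all
// issues them concurrently, so the page waits only for the slowest.
async function loadDashboard(
  fetchConfig: () => Promise<object>,
  fetchTests: () => Promise<object[]>,
  fetchStats: () => Promise<object>,
) {
  const [config, tests, stats] = await Promise.all([
    fetchConfig(),
    fetchTests(),
    fetchStats(),
  ]);
  return { config, tests, stats };
}
```

This only applies to requests that don’t depend on each other; dependent calls still have to be sequenced, which is a good reason to keep critical-path APIs few and fast.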
Reduce HTTP requests
A large number of HTTP requests extends the time a webpage takes to load, and bigger files take significantly longer to download. Using fewer images, scripts, stylesheets, and other embedded elements is the easiest way to reduce the number of HTTP requests.
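A common way to cut request counts on the API side, sketched here with a hypothetical bulk endpoint, is to batch many per-item lookups into a single call:

```typescript
// Request batching sketch: instead of one HTTP call per test ID,
// collect the IDs and make a single bulk call for the whole set.
// fetchBulk stands in for a hypothetical bulk API endpoint that
// returns results keyed by ID.
async function fetchInOneRequest<T>(
  ids: string[],
  fetchBulk: (ids: string[]) => Promise<Map<string, T>>,
): Promise<(T | undefined)[]> {
  const unique = Array.from(new Set(ids)); // dedupe before the network call
  const byId = await fetchBulk(unique);    // one request instead of ids.length
  return ids.map(id => byId.get(id));      // results in the caller's order
}
```

The trade-off is a slightly larger single response versus many small round trips; for dashboards that render lists of items, the single round trip usually wins.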
Here is an example of how API call volume reduction dramatically helped improve the page load time of our dashboard:
Other important techniques to improve performance
In addition to the techniques we’ve already talked about, here are some other important ones that we recommend as best practices and have incorporated into our dashboards.
Preload important scripts
Preloading tells the browser to fetch carefully selected resources early, so they are already available when the page needs them. This improves performance because the browser doesn’t have to wait for those resources at the moment they are required.
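In markup, preloading is a hint in the document head (the asset paths here are hypothetical):

```html
<!-- Fetch critical assets early, before the parser would normally
     discover them deep in the page -->
<link rel="preload" href="main.js" as="script">
<link rel="preload" href="fonts/dashboard.woff2" as="font"
      type="font/woff2" crossorigin>
```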
Enable compression
By enabling GZIP compression, the time it takes to download HTML, CSS, and JavaScript files and JSON payloads can be greatly reduced. The files are transferred as much smaller compressed data, which the browser then decompresses. This happens automatically as long as the browser advertises support in its request (Accept-Encoding) and the server includes the right HTTP header (Content-Encoding) in its response.
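The negotiation is visible in the HTTP headers: the browser advertises the encodings it accepts, and the server labels the compressed response. A simplified example exchange (the host name is hypothetical):

```
GET /dashboard/data HTTP/1.1
Host: uts.example.com
Accept-Encoding: gzip

HTTP/1.1 200 OK
Content-Type: application/json
Content-Encoding: gzip
```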
Delay loading render-blocking resources
When a user visits a website, before the page is rendered on the screen, they must wait for synchronously downloaded resources like CSS stylesheets or JavaScript to load or for the synchronous JavaScript tasks to finish executing. These issues can cause delays in displaying the page. By eliminating these render-blocking resources, we have seen an improvement in the website’s end-user performance.
Conclusion
We’ve discussed many different ways to improve website performance, and we’ve detailed how my team at VMware has used many of these techniques for some of our dashboards. As the number of people using these dashboards grows and we add new features, we will reevaluate their performance and look for ways to improve them.