Browser Rendering Optimization Explained
How do long website load times make you feel? Perhaps you know what it’s like: the website finally loads, yet still, you cannot click or scroll because of stuff going on in the background.
And once the page finally becomes responsive, janky animations make everything feel sluggish and cheap… Ever suffered this experience?
I have – and not long ago. And amazingly, I continue to see it happen, despite there being so many optimization tools and practices at one’s disposal.
Browser rendering optimization is a vast subject covering many areas and involves tools, techniques and endless opportunities to make improvements.
This article will focus mainly on efficient page delivery for the end-user, looking at webpage loading times, how they’re measured and how to improve them when working as a frontend developer.
Aspects of page optimization
Making pages faster and more responsive after they load (quicker responses to user actions, smoother animations) will be addressed in a separate blog article.
So first, let’s turn to the following aspects of page optimization:
Why you should optimize page rendering
Application lifecycle - the RAIL model
How to audit a page - available indicators and tools
Optimization techniques - various solutions and optimizing tricks for improving website speed in four fields: environment, assets, build and delivery
(As page rendering optimization and auditing have been growing rapidly in recent years, this article will not cover every subject in detail. Some fields can only be touched upon briefly; therefore, wherever possible, I will reference external sources where you can find out more.)
Why optimize browser rendering?
There has already been much research and writing done on how important various user experience metrics are for businesses.
A colleague of mine has recently written an excellent article putting together a large amount of data about how user experience with a given website impacts many vital, functional areas of an internet-present company.
You can read this article here.
Here are some quick stats to give you an idea of how important optimization has become:
Faster loading times: sites that load within 5 seconds enjoy 25% higher ad viewability, 70% longer sessions and a 35% lower bounce rate.
Five seconds is not ideal, however. A study from 2016 showed that one in two mobile users expect a page to load in less than two seconds, and as many as 53% of visitors are likely to abandon the page if it takes more than 3 seconds to load.
In 2017, Google published a report on how page loading time increases bounce rate. In the same report, a survey showed that the speed of the initial page load is the most important factor for users when it comes to user experience.
In fact, it’s 3 times more important than a site’s appearance. It’s even more important than the ease with which visitors find the information they’re looking for! Loading time can also be directly tied to revenue the page brings in: 80% of internet users are willing to pay extra for a good user experience.
But enough theory. Let’s dive into some technical stuff.
RAIL and the application lifecycle
When we talk about general site performance, it’s useful to refer to the RAIL model. RAIL assumes 4 stages of the application’s lifecycle (though is not to be confused with the component lifecycle in any of the major frameworks): response, animation, idle and load.
However, this order does not correspond to the actual chronology of things happening when we open a new website (in which case we would call it LIAR: load, idle, animation and response).
I’ll stick to this chronological order and discuss the details and importance of these stages one by one.
The Loading stage is the initial load of the page. Obviously, the shorter, the better. Varying sources suggest that this should take a maximum of 1 to 5 seconds; however, Google’s metrics categorize anything above 2.5 seconds as "needs improvement".
This is one of the most essential parameters when it comes to user experience.
The Idle stage is the time after the page has initially loaded, when users consider what to do next. The page is waiting for them to interact, so here we can finish the jobs we deferred from the loading stage (image assets, videos, comments sections).
These tasks should be executed in 50ms chunks so that when the user interacts with the page, interaction response takes precedence over anything else going on in the background.
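A minimal sketch of this chunking idea using `requestIdleCallback` (the task list and the scheduling helper are illustrative, not a library API):

```javascript
// Process deferred work in small chunks during the Idle phase, so user
// input can always cut in. The browser hands us a deadline telling us how
// much spare time is left in the current frame.
function runWhenIdle(tasks, schedule = (cb) => requestIdleCallback(cb, { timeout: 1000 })) {
  schedule((deadline) => {
    // Work only while the browser says we still have spare time.
    while (tasks.length > 0 && (deadline.didTimeout || deadline.timeRemaining() > 0)) {
      const task = tasks.shift();
      task();
    }
    // Anything left over waits for the next idle period.
    if (tasks.length > 0) runWhenIdle(tasks, schedule);
  });
}
```

You might call it after load with something like `runWhenIdle([loadComments, warmImageCache])`, where both task names are hypothetical.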
For Animations to feel smooth, a frame must be generated in less than 16ms. In an article on the RAIL model on web.dev, we read:
In high-pressure points like animations, the key is to do nothing where you can, and the absolute minimum where you can't. Whenever possible, make use of the 100 ms response to pre-calculate expensive work so that you maximize your chances of hitting 60 frames per second.
The Response stage is responding to user action (e.g. pressing a button, routing) and should occur within 100ms. This is the maximum time that goes unnoticed when interacting with the page; when there is still the impression of instantaneous response.
In the diagram above, Load and Idle have been marked blue, while Animation and Response are green. This is because while Loading and Idle are one-time phases for every new page loaded (in SPAs they can be one-time for the life of the entire app), Animation and Response are repetitive phases as responses to every new user interaction.
Using Google Web Vitals
In order for us to measure site performance and thereby determine if, when and where it requires optimizing, we need tools. Ideally, these tools should be universal so that we can compare results with other pages on the Internet.
Google took the lead in this field by creating Web Vitals. Web Vitals is an initiative by Google to provide unified guidance for quality signals that are essential to delivering a great user experience on the web.
While there exist many tools for measuring and optimizing website performance, Google resolved that it would develop one that could be easily understood and applied by anyone.
For this purpose, they also implemented a subset called Core Web Vitals which is aimed at measuring three aspects of a site’s user experience: loading, interactivity, and visual stability.
These are then measured using three metrics:
Largest contentful paint (LCP) - This measures loading performance: the time at which the largest content element becomes visible. As you can see, loading should happen within the first 2.5 seconds. An additional 1.5 seconds (4 in total) and the user experience is already considered poor.
First Input Delay (FID) - This measures interactivity. User action should be reflected (reacted to) within 100ms. Any longer and users perceive the website as lagging.
Cumulative Layout Shift (CLS) - This ratio shows the visual stability of the page. If a page loads in chunks, with content being pushed around by late-arriving images, or has adverts appearing seconds after, this is also considered a poor user experience.
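If you want to observe one of these metrics in your own code, here is a minimal sketch of watching LCP with a `PerformanceObserver` (the web-vitals npm package wraps this more robustly; the `report` callback is an assumption about how you would consume the value):

```javascript
// Watch Largest Contentful Paint candidates as the browser reports them.
// The last candidate delivered before the user interacts is the page's LCP.
function observeLcp(report, PO = typeof PerformanceObserver !== "undefined" ? PerformanceObserver : undefined) {
  if (!PO) return null; // API unsupported (older browsers, non-browser runtimes)
  const observer = new PO((list) => {
    const entries = list.getEntries();
    const latest = entries[entries.length - 1]; // latest candidate wins
    report(latest.startTime);
  });
  // buffered: true also delivers entries recorded before we subscribed.
  observer.observe({ type: "largest-contentful-paint", buffered: true });
  return observer;
}
```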
Optimization Measuring Tools
It may surprise you just how many optimization tools there are out there and, moreover, ones not just for developers:
Measuring Core Web Vitals with Lighthouse
There are a number of ways to measure Core Web Vitals (and not only CWVs). One every developer should start with is Lighthouse: an automated, open-source tool for measuring and reporting on site performance and its weaknesses.
It used to be located in the Audits tab of Chrome’s developer tools, which has since been renamed to the Lighthouse tab; the tool is also available as a standalone Chrome extension.
Also, if the website to be audited is already live, extensions are not required. The tool is available here. Lighthouse incorporates the three indicators described above (LCP, FID, CLS), with additions.
Let me demonstrate:
Here is a Lighthouse audit run for a randomly chosen website with corresponding results:
As you can see, there are four main areas Lighthouse focuses on: Performance, Accessibility, Best Practices and SEO. Given that our focus thus far has been on performance, let’s break this scoring down further.
At the bottom of the report there are six metrics defining different areas of performance. Some of them we have already covered; I’ll now explain the remaining ones:
Speed Index (SI) - A comparison of your page's speed index and the speed indices of real websites, based on data from the HTTP Archive. You can learn more about Speed Index here.
Time to Interactive (TTI) - Measures the time the site takes to become fully interactive. It takes into account factors such as FCP, the registering of event handlers for visible page elements, and the page responding to user actions within 50ms. Read here for a more detailed explanation of how TTI is calculated.
Total Blocking Time (TBT) - Is strictly tied to the Idle phase of the RAIL model. It sums up the time the main thread was blocked by long tasks during loading; only the portion of each task beyond 50ms counts as blocking time, which is why background task chunks should ideally take no more than 50ms.
Another great tool for measuring site performance is the Google Developers PageSpeed site. It’s as simple as it gets: paste the site URL of the page you want to audit, then press Analyze. Within seconds, you’ll see a report comprising two sections plus device selection.
I’ll start with the second section, Diagnose Performance Issues. Look familiar? Correct: just like a Lighthouse report. Which indeed it is. PageSpeed incorporates lab results from Lighthouse as part of its results.
Cool. Or not. What if your page shows a mediocre performance yet you still don’t know what causes poor results in particular areas? Lighthouse has you covered.
If you scroll the report page down you will see a list of potential problems with any given aspect of website performance along with their possible solutions. These are marked and sorted according to their severity and the impact they may have on optimization.
The first section of the report is most important as it illustrates REAL user experience on your website. Based on real data from actual, Chrome-based user sessions, we see how the page behaves in real life, as opposed to Lighthouse’s more lab-like environment.
What’s more, you can measure this user experience for both desktop and mobile devices.
Looking at the PageSpeed report at the top of the page, you will see one indicator we haven’t yet discussed - FCP. Or, First Contentful Paint.
FCP indicates when users are already seeing some content on the page. This is another useful metric that is taken into account since it is crucial that at least some content appears on a page and as quickly as possible:
But all this sure is a lot of theory and ratios. Let’s move on to something far more interesting for us developers. That is to say, to the main reasons for metrics looking poor and how we can improve them.
Optimize with Environment, Assets, Build and Delivery
Since we can finally talk about optimization, I will now give you ten hints on how to make your page lightning fast so you’ll never have to optimize it again.
Actually, I’m not. So what do I mean?
There have been many guides written on how to optimize website performance. Vitaly Friedman wrote an amazing article on pretty much all methods of website rendering optimization used in 2021. Which explains why it’s the thorough, 3-hour read that it is.
Indeed, with so many different variables in our developer environments, there is endless potential for error. Or, rather, many combinations of these variables that may work in one app, but not necessarily in another.
But we won’t cover each and every one here, as this would require writing another 3-hour long article whilst repeating many other authors more knowledgeable than me.
Instead, I will give you a list of things that may be worth considering when optimizing your application, in the four areas a frontend developer can address: environment, assets, build and delivery.
Build tools - Grunt, Gulp, Webpack, Parcel, Rollup, Snowpack… As long as maintenance and performance goals are met, you’re on the right track with any configuration. If you have difficulty choosing a tool or combination of tools, remember that Webpack has been around a long time, offering a wide variety of optimization plugins.
Framework - Angular, Vue, React, Svelte? Bear in mind that modern frameworks do not prioritize weaker devices. Also, not every page in an SPA requires loading the entire framework. For further reference, have a look at this awesome Netflix case study.
Rendering - Server-Side, Static, Streaming Server-Side, Client-Side… whichever option you choose, remember that the most important task is minimizing the time between LCP and TTI.
Baseline performance cost - Instead of going for a pure framework, you can go for none at all – or an opinionated alternative such as Next.js, Nuxt.js etc. And here’s a cool study on framework performance costs.
AVIF, AV1, WebP, WebM - Using modern photo and video formats can be a massive boost in reducing load times, and although some of these formats still need wider browser support, it’s on the way.
Responsive images - Always use responsive images with `srcset`, `sizes` and the `<picture>` element. Background images can also be responsive - check the CSS `image-set` property.
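A sketch of what this can look like (file names, widths and formats are illustrative):

```html
<picture>
  <!-- Serve a modern format where the browser supports it… -->
  <source type="image/avif" srcset="hero-800.avif 800w, hero-1600.avif 1600w">
  <source type="image/webp" srcset="hero-800.webp 800w, hero-1600.webp 1600w">
  <!-- …and fall back to JPEG. `sizes` tells the browser the rendered width
       so it can pick the smallest sufficient candidate from `srcset`. -->
  <img src="hero-800.jpg"
       srcset="hero-800.jpg 800w, hero-1600.jpg 1600w"
       sizes="(max-width: 800px) 100vw, 800px"
       alt="Hero image" width="800" height="450">
</picture>
```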
Web fonts, Google fonts - A big subject in itself and one you can learn more about in this article.
Brotli - A promising, open-source, lossless compressed data format.
GIFs - Don’t bother with these - they’re terrible in terms of file size, so use looped HTML videos instead:
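A minimal sketch of such a replacement (file names are illustrative; `muted` and `playsinline` are needed for autoplay to work on mobile browsers):

```html
<!-- Behaves like an animated GIF, at a fraction of the file size. -->
<video autoplay loop muted playsinline width="480" height="270">
  <source src="animation.webm" type="video/webm">
  <source src="animation.mp4" type="video/mp4">
</video>
```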
Tree-shaking, code-splitting, scope-hoisting - You’ll probably know these as they have become standard practices in builders such as Webpack.
Web-worker - For reducing TTI, it’s a good idea to delegate some of the initial calculations to a web worker.
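A minimal sketch of this delegation, assuming a hypothetical `heavy-work.js` worker script that does the actual number crunching and posts its result back:

```javascript
// Run an expensive startup calculation off the main thread so it doesn't
// delay TTI. The worker file name is an assumption for illustration.
function offloadToWorker(input, WorkerCtor = typeof Worker !== "undefined" ? Worker : undefined) {
  if (!WorkerCtor) return Promise.reject(new Error("Web workers unsupported"));
  return new Promise((resolve, reject) => {
    const worker = new WorkerCtor("heavy-work.js");
    worker.onmessage = (event) => {  // the worker posts its result back
      resolve(event.data);
      worker.terminate();
    };
    worker.onerror = reject;
    worker.postMessage(input);       // kick off the calculation
  });
}
```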
SPA optimization - The majority of frontend devs these days use some SPA frameworks for their work. Obviously, there are a number of things to improve their performance but for the sake of brevity, I’ll link this great hands-on article from CSS Tricks.
Identify and remove unused CSS/JS - You can use Chrome Dev Tools (Coverage tool), or Puppeteer which also performs many other useful tasks such as preparing the pre-rendered content of a page.
Cache-control - Although caching is nothing new when you dive into it, you immediately discover the below-the-surface part of the iceberg. There are many different strategies and gotchas when setting how assets should be cached by a browser. For a quick introduction, go here. If you’re more advanced, this article about how network requests may actually be faster than retrieving cached data should be of much interest.
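As a quick illustration of two common strategies (header values only; the file name in the comment is an assumption): fingerprinted static assets can be cached "forever" because a new deploy changes their URL, while HTML documents should be revalidated on every visit:

```
# Fingerprinted assets (e.g. app.3f9a1c.js) - safe to cache for a year
Cache-Control: public, max-age=31536000, immutable

# HTML documents - revalidate with the server on every visit
Cache-Control: no-cache
```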
Third-party libraries - Put simply, the fewer, the better. According to thirdpartyweb.today, third-party scripts account for 57% of all JS code execution. If you do need a third-party library, it’s usually better to self-host it.
Lazy loading - Apart from using defer or async for some scripts, native lazy-loading of images is already supported by all major browsers (support for iframes is on the way).
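A minimal sketch of native lazy-loading (file names and dimensions are illustrative; explicit `width` and `height` also help avoid layout shifts):

```html
<!-- The browser defers these requests until the element nears the viewport. -->
<img src="gallery-photo.jpg" loading="lazy" alt="Gallery photo"
     width="640" height="480">
<iframe src="https://example.com/embed" loading="lazy"
        width="640" height="360" title="Embedded content"></iframe>
```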
Intersection observer - According to its definition: Intersection Observer API provides a way to asynchronously observe changes in the intersection of a target element with an ancestor element or with a top-level document's viewport. In simpler terms, thanks to the Intersection Observer, we can have granular control over elements that are about to come into view. Along with lazy-loading, it can be a very powerful tool. If you’ve never used it before, here’s a cool example of its actual implementation.
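A minimal sketch of pairing the Intersection Observer with lazy-loading (the `data-src` convention and the 200px `rootMargin` are illustrative):

```javascript
// Lazy-load images whose real URL sits in a data attribute, e.g.
// <img data-src="photo.jpg">. The observer fires shortly before each
// image scrolls into view, and we swap in the real source then.
function lazyLoadImages(images, createObserver = (cb) =>
    new IntersectionObserver(cb, { rootMargin: "200px" })) {
  const observer = createObserver((entries, obs) => {
    for (const entry of entries) {
      if (!entry.isIntersecting) continue;
      const img = entry.target;
      img.src = img.dataset.src; // swap in the real source
      obs.unobserve(img);        // each image only needs loading once
    }
  });
  images.forEach((img) => observer.observe(img));
  return observer;
}
```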
Content visibility - Once all browsers are supporting it, we’ll be able to implement the lazy-loading of entire sections of the page with pure CSS. If you have a long home screen in your app (loading multiple sections etc.) it will be a good idea to render them just before they appear on the screen. Here’s a great example of how you can use it to speed up the LCP.
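A sketch of what that can look like for a section below the fold (the class name and the 800px size estimate are illustrative):

```css
/* Skip rendering this section until it approaches the viewport.
   contain-intrinsic-size reserves an estimated height so the scrollbar
   doesn't jump while the content is unrendered. */
.below-the-fold-section {
  content-visibility: auto;
  contain-intrinsic-size: auto 800px;
}
```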
Critical CSS - Common practice these days is to separate or mark above-the-fold CSS and attach it to the <head> part of the page.
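A sketch of this pattern (selectors and file paths are illustrative; the preload/onload swap is the common "loadCSS" trick for fetching the remaining styles without blocking rendering):

```html
<head>
  <style>
    /* Critical, above-the-fold rules inlined here. */
    .hero { min-height: 60vh; background: #123; }
  </style>
  <!-- Load the full stylesheet asynchronously, then activate it. -->
  <link rel="preload" href="/css/full.css" as="style"
        onload="this.onload=null;this.rel='stylesheet'">
  <noscript><link rel="stylesheet" href="/css/full.css"></noscript>
</head>
```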
Service worker - A service worker can also be used to dramatically decrease subsequent page load times by serving cached resources.
Rendering performance - For a website to feel responsive, we must be mindful of the frame-rendering process and optimize it accordingly. But as this is in itself another vast subject, we will instead publish a separate article on how to optimize the Animation and Response phases of the RAIL model.
Connection aware components - You can make use of ever-expanding browser APIs and render components based on user network bandwidth (see navigator.connection object). It’s very useful when we launch a worldwide application to be used by users with connection speeds ranging from super slow to ultra-fast:
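A minimal sketch of such a check, assuming SD/HD asset variants (the function and quality labels are illustrative, not part of any API):

```javascript
// Pick an asset variant from the Network Information API. navigator.connection
// is not available in every browser, so treat it as a progressive enhancement.
function pickVideoQuality(connection = typeof navigator !== "undefined" ? navigator.connection : undefined) {
  if (!connection) return "hd";          // API unsupported: assume a decent link
  if (connection.saveData) return "sd";  // respect the user's data-saver setting
  const type = connection.effectiveType || "4g";
  return ["slow-2g", "2g", "3g"].includes(type) ? "sd" : "hd";
}
```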
Proactive and reactive browser optimization
The list above may at first seem overwhelming. So, you may ask: if I want my website to perform its very best, do I have to go through each and every point and check whether or not it needs optimization?
While it’s good to have these points at the back of one’s mind, optimization should instead be a mix of proactive and reactive approaches. Some points are obvious and indeed should be taken into account while writing code.
These include managing dependencies, SSR, asset management, fonts, tree-shaking, third-party libraries, caching and lazy loading. Incorporated into daily practice, these habits greatly increase the chance of scoring highly in any performance analysis.
However, if results are still not satisfying, the above tools will highlight any weak spots in our application so that we can optimize them afterwards (Webpack, CDN, SSR, differential serving, partial hydration, delayed imports, deferred rendering, critical CSS, service worker, and so on).
If you’d like to see what the optimization path for a major fashion company’s landing page looks like, you might look at this video recorded by Addy Osmani, one of Google’s top optimization specialists.
At this point, I hope it’s clear just how crucial it is to deliver an optimum user experience. Having touched upon the first two stages of the RAIL model, it should also be clear what the proper approach to page-loading is, and how to provide users with the content they seek as promptly as possible.
We also covered the Lighthouse and PageSpeed tools that help identify any factors which may be hindering your webpage’s responsiveness. We then referred to a condensed cheat sheet listing the various optimization techniques you may resort to for achieving this goal.
All in all, remember that the many topics described here are merely the tip of that iceberg I mentioned earlier. But I do hope the subject remains interesting enough to inspire you to further explore some of the techniques I’ve illustrated and that by doing so, you find your website optimization skills considerably enhanced.
You may also like...