How to speed up a website in 2019: technical checklist
Today, most people would prefer a bicycle to going on foot, a car to a bicycle, a high-speed train to a car, and a plane to a high-speed train. And I'm pretty sure that if people could take SpaceX's Falcon 9 rocket to the local supermarket, they definitely would. There's just one big reason behind all this: SPEED.
Page speed has been a ranking factor for Google's desktop search for years now. But only recently did Google roll out the mobile page speed update that officially made page speed a ranking factor for mobile search as well.
What's more, page speed is also a core user experience metric, so neglecting it can cost you money, rankings, and loyal customers. In fact, according to the latest Google research, 50% of your visitors expect your page to fully load in under 2 seconds. And searchers who had a negative experience with your mobile page's speed are 62% less likely to make a purchase.
If you don't want this to happen, you need to take action now. And the very first step towards improving your page speed is measuring it. And of course, the best way to do it is with the help of our old trusty PageSpeed Insights.
New PageSpeed Insights: What's new?
If you monitor your website's speed regularly, you must have noticed that Google has silently rolled out a new update to its PageSpeed Insights tool. And honestly speaking, the tool has changed a lot. For those who didn't know — the tool no longer supplies you with Speed and Optimization scores like it used to.
Here's what data you can find in the tool's new version:
The speed score
Now you get only one total score, based on lab data from Lighthouse (a speed tool from Google). Based on Lighthouse's measurements, PageSpeed Insights will mark your page as fast, average, or slow.
Field data
This data is collected from CrUX (the Chrome User Experience Report) and includes information about the way real Chrome users interact with your page: the devices they use, how long your content takes to load for them, and so on.
The trick is, Google may see your site as slow if the majority of your users have a slow Internet connection or old devices. But on the bright side, your site may seem fast to Google due to your users' fast Internet and better devices.
The best way to see how Google perceives your website is by accessing your CrUX data. It's at your disposal on Google BigQuery (part of the Google Cloud Platform). Here is a nice guide for you on how to get first-hand insights from your real-world visitors.
Lab data
This is the data that the tool collects with the help of Lighthouse. Basically, it simulates the way a mobile device loads a given page and reports a number of performance metrics, such as:
- First Contentful Paint — measures the time it takes for the first visual element to appear for a user.
- Speed Index — measures how quickly the contents of a page are visibly populated.
- Time to Interactive — measures how fast a page becomes fully interactive.
- First Meaningful Paint — measures when the primary content of a page is visible (biggest above-the-fold layout change has happened, and web fonts have loaded).
- First CPU Idle — measures when a page is minimally interactive (most but not all UI elements on the screen are interactive).
- Estimated Input Latency — estimates how long it takes your app to respond to user input, in milliseconds, during the busiest 5s window of page load.
The tool even supplies you with screenshots of how your page is being loaded and viewed during the loading process.
The Opportunities section supplies you with a list of optimization tips for your page. It also shows the Estimated Savings you would get from fixing or improving each parameter.
The thing is, these technical criteria influence lab data parameters that have a direct impact on your overall speed score. Therefore, it's crucial to work on them in the first place.
Last but by no means least, the Passed audits section shows the well-optimized technical parameters that your page has no problems with.
What influences your page speed score the most?
Not so long ago, our team conducted an experiment intended to figure out the correlation between page speed and pages' positions in mobile SERPs after the update. The main takeaway from the experiment was that the Optimization Score (now Opportunities) is what influences mobile rankings the most.
So far, this hasn't changed at all, and technical optimization still rules organic Google rankings. The only "but" is that there are now 22 factors to optimize for instead of just the 9 we had a couple of months ago. The good news is that your page speed score can be significantly boosted, as all these parameters are totally fixable and optimizable.
So there's quite an impressive list of what you can do to speed up your page load.
- Avoid multiple page redirects
- Properly size images
- Defer unused CSS
- Minify CSS
- Efficiently encode images
- Enable text compression
- Preload key requests
- Avoid enormous network payloads
- Defer offscreen images
- Reduce server response times (TTFB)
- Eliminate render-blocking resources
- Use video formats for animated content
- Preconnect to required origins
- Serve images in next-gen formats
- Ensure text remains visible during webfont load
- Minimize main-thread work
- Serve static assets with an efficient cache policy
- Avoid an excessive DOM size
- Minimize Critical Requests Depth
- Measure performance
Please don't be scared by the number of optimization opportunities. Most probably, the majority of them won't be relevant to your site, leaving just 5-6 for you to work on.
Now that we've outlined the areas for improvement, let's see how to optimize each of the above-mentioned parameters step by step.
1) Landing page redirects
I guess it goes without saying that getting rid of all unnecessary redirects is one of the most obvious things you can do for your site's speed. The thing is, every additional redirect slows down page rendering, since each one adds one (if you're lucky) or several (more often) HTTP request-response round trips.
- Switch to responsive design
The very first thing Google recommends for dealing with unneeded redirects is switching to responsive design. By doing so, you can avoid unnecessary redirects between the desktop, tablet, and mobile versions of your website, as well as provide a great multi-device experience for your users.
- Pick a suitable redirect type
Of course, the best practice is not to use redirects at all. However, if you desperately need one, it's crucial to choose the right redirect type. Use a 301 redirect for permanent redirection. But if, let's say, you're redirecting users to short-term promotional pages or device-specific URLs, a temporary 302 redirect is the better option.
I would like to point out that Google doesn't give any particular recommendations on the matter. So when deciding on a redirection policy, your users need to be taken into consideration first. They just won't be able to see your brilliant content if your redirects are inconsistent or point to the wrong content on the desktop or mobile site. And of course, by minimizing the number of redirects, you can significantly boost your website's speed performance.
2) Image size
Like it or not, images account, on average, for about 80% of the bytes needed to load a webpage. And since they're responsible for such a large share of the load, it's important to make sure you don't send huge, oversized images to your users. This actually happens very often, since different devices need images of different sizes to display them properly (usually, the smaller the screen, the smaller the image you need). So one widespread mistake is sending big images to smaller devices: if your page contains images larger than the version rendered on your users' screens, page load time will suffer significantly.
According to Google's official recommendation, the best practice is to implement so-called "responsive images". Basically, it means that you can generate various versions of every image so that it nicely fits all screen sizes. Surely you can specify which version to use in your HTML or CSS with the help of media queries, viewport dimensions, etc. By the way, here's a good tool that gives you a helping hand with generating images in various sizes.
In a nutshell, when requesting an image, the browser advertises its viewport dimensions and device pixel ratio, while the server takes care of producing correctly sized images in return. Check out these step-by-step instructions on client hints implementation to see how it can be done.
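To make the selection logic concrete, here's a small sketch: given the CSS width of an image slot and the device pixel ratio, pick the smallest pre-generated variant that still covers it. The variant widths here are made up for the example:

```javascript
// Widths (in px) of the image variants generated at build time --
// these particular numbers are just an example.
const VARIANT_WIDTHS = [320, 640, 960, 1280, 1920];

// Pick the smallest variant that covers the slot's CSS width times
// the device pixel ratio; fall back to the largest one available.
function pickImageWidth(cssWidth, dpr) {
  const needed = cssWidth * dpr;
  const fit = VARIANT_WIDTHS.find((w) => w >= needed);
  return fit !== undefined ? fit : VARIANT_WIDTHS[VARIANT_WIDTHS.length - 1];
}

console.log(pickImageWidth(360, 2)); // a 360px slot on a 2x screen -> 960
```

This is essentially what the browser does for you when you provide a srcset attribute; the sketch just makes the arithmetic visible.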
On the other hand, implementing vector-based image formats like SVG is also a nice option to go for. As you may know, SVG images can scale to any size, which makes the format uber-convenient — the images will be resized in real time directly in a browser.
3) Defer unused CSS
Unused CSS can also slow down a browser's construction of the render tree. The thing is, a browser must walk the entire DOM tree and check what CSS rules apply to every node. Therefore, the more unused CSS there is, the more time a browser will need to spend calculating the styles for each node.
Before you get down to minifying CSS files, look for rules you no longer need and remove them with no regrets. Remember that the best-optimized resource is the one that is not sent at all.
After you've cleared out your CSS, it's important to optimize the rest of CSS rules (reduce unnecessary code, split CSS files, reduce whitespace, etc.).
It's also a very good idea to inline critical, small-sized CSS resources directly into the HTML document. This is how you can eliminate extra HTTP requests. But please make sure that you do it only with small CSS files because inlining large CSS can result in slowing down HTML rendering. And finally, to avoid unnecessary duplication, you'd better not inline CSS attributes into HTML tags.
Of course, deferring uncritical CSS can be done manually. However, I would strongly suggest automating this process. Here is a whole selection of tools to help you with that.
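For illustration, here's roughly what the deferring step looks like in plain JavaScript. The stylesheet name is made up, and the document is passed in explicitly just to keep the sketch easy to test:

```javascript
// Inject a non-critical stylesheet at runtime so it never blocks
// the first paint. The file name is purely illustrative.
function loadDeferredCss(href, doc) {
  const link = doc.createElement('link');
  link.rel = 'stylesheet';
  link.href = href;
  doc.head.appendChild(link);
  return link;
}

// In the browser, run it once the critical content is on screen:
// window.addEventListener('load', () => loadDeferredCss('/non-critical.css', document));
```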
4) CSS minification
Reducing CSS files is yet another activity that can win you precious milliseconds. Practice shows that CSS is quite often much larger than necessary. Therefore, you can painlessly minify your CSS without fear of losing anything.
If you run a small website that doesn't get updated frequently, consider using an online tool like CSSNano for CSS minification. Simply insert the code into the tool and wait for it to produce the minified version of your CSS. Such minifiers do a very good job of reducing the number of bytes. For instance, they can shorten the color value #000000 to just #000, which adds up to a decent saving if there are many such values.
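Just to show the kind of rewrites such minifiers perform, here's a deliberately naive sketch; real tools like CSSNano do far more (merging rules, shortening values, removing dead code):

```javascript
// Deliberately naive CSS "minifier": strips comments, trims whitespace
// around punctuation, and shortens 6-digit hex colors like #000000 to #000.
function naiveMinifyCss(css) {
  return css
    .replace(/\/\*[\s\S]*?\*\//g, '')                               // drop comments
    .replace(/\s*([{}:;,])\s*/g, '$1')                              // trim around punctuation
    .replace(/#([0-9a-f])\1([0-9a-f])\2([0-9a-f])\3/gi, '#$1$2$3')  // #aabbcc -> #abc
    .trim();
}

console.log(naiveMinifyCss('body { color: #000000; /* text */ margin: 0; }'));
// -> body{color:#000;margin:0;}
```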
5) JavaScript minification
Just like with CSS minification, the fastest and least painful way to get rid of unneeded data in your JavaScript is an online minifier; UglifyJS comes highly recommended. On top of that, you can set up a build process that minifies development files and saves them to a production directory every time you deploy a new version.
6) Encoding images
I think it's crystal clear that the smaller your content is, the less time is required to download it. Image optimization is yet another uber-important activity that can reduce your total page load size by up to 80%. On top of that, enabling compression reduces data usage for the client as well as the rendering time of your pages.
Compressing every single image you upload to your site can be a very tiresome process, and more importantly, it's super easy to forget about. Therefore, it's always better to automate image compression once and forget about it for good. So do yourself a huge favor and use imagemin or libvips in your build process. Remember that the smaller your image files are, the smoother the network experience you offer your users, especially on mobile devices.
7) Text compression
Textual content of your website is yet another thing that can increase the byte size of network responses. And as you already know, the fewer bytes are to be downloaded, the faster your page loads.
Google highly recommends gzipping all compressible data, and all modern browsers announce gzip support with every HTTP request. The fact is, compressing resources can cut down the size of the transferred response by up to 90%. In turn, this will also reduce the time of your pages' first rendering as well as data usage for the client.
So make sure to check out these sample configuration files for most popular servers. After that, find your server on the list, move to the gzip section, and confirm that your server is configured with the recommended settings.
As an alternative to gzip, you can also use Brotli, one of the most up-to-date lossless compression formats. Compared to gzip, Brotli has much better compression characteristics. But there's a catch: the higher the compression level, the more CPU time the server needs to compress a response. That is why, for dynamic assets, the size benefits of Brotli can be nullified by slower server response times. Therefore, I wouldn't recommend going beyond quality level 4 for dynamic assets (Brotli's quality levels range from 0 to 11). For static assets that you pre-compress in advance, however, you can use the highest level of compression.
8) Preloading key requests
As you know, it's up to browsers to decide what resources to load first. Therefore, they often attempt to load the most important resources such as CSS before scripts and images, for instance. Unfortunately, this isn't always the best way to go. By preloading resources, you can change the priority of content load in modern browsers by letting them know what you’ll need later.
With the help of the <link rel="preload"> tag, you can inform the browser that a resource is needed as part of the code responsible for rendering the above-the-fold content, and make it fetch the resource as soon as possible.
Here is an example of how the tag can be used:
<link rel="preload" as="script" href="super-important.js">
<link rel="preload" as="style" href="critical.css">
Please note that the resource will still be loaded with the same priority it would normally get. The difference is that the download starts earlier, as the browser knows about the resource ahead of time. For more detailed instructions, please consult this guide on resource prioritization.
9) Enormous network payloads
Reducing the total size of network requests can not only speed up your page, but also save your users' money that they would spend on cellular data.
There are quite a few ways of reducing the size of payloads. First of all, you need to eliminate unneeded server requests.
After you've gotten rid of all the unnecessary requests, it's only right to make the remaining ones as small as possible. Here are just a few techniques to consider: enable text and image compression, and use the WebP format instead of JPEG or PNG. It's also a good idea to cache requests so that resources aren't downloaded from scratch on repeat visits. Please refer to this guide on HTTP caching to see how it can be done.
10) Dealing with offscreen images
Offscreen images are the ones that appear below the fold. Because of that, there's simply no need to download them as part of the initial page load. Therefore, it's only right to defer their loading in order to improve your page speed as well as Time to Interactive.
Basically, the best strategy to follow is to download above-the-fold images prior to offscreen ones and start downloading below-the-fold images only when a user gets to them. This technique is called lazy loading. With a tool like IntersectionObserver, you can make images load only when a user has scrolled down to them.
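A minimal sketch of that pattern, assuming your markup puts each image's real URL in a data-src attribute (the attribute name is a common convention, not a standard; the observer constructor is passed in just to keep the sketch testable):

```javascript
// Lazy-load images that carry their real URL in data-src: the download
// starts only when an image scrolls into the viewport.
function lazyLoadImages(images, ObserverCtor) {
  const observer = new ObserverCtor((entries, obs) => {
    for (const entry of entries) {
      if (entry.isIntersecting) {
        const img = entry.target;
        img.src = img.dataset.src; // triggers the actual download
        obs.unobserve(img);        // each image only needs this once
      }
    }
  });
  images.forEach((img) => observer.observe(img));
  return observer;
}

// In the browser:
// lazyLoadImages(document.querySelectorAll('img[data-src]'), IntersectionObserver);
```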
11) Improving server response time
When a user navigates to a certain URL to access some content, the browser makes a network request in order to fetch that content. For instance, if users are willing to access their order history, the server will have to fetch every user's history from a database, and then insert that content into the page. Sometimes this process can take too much time. Therefore, optimizing the server response time is one of the possible ways to reduce the time that users spend waiting for pages to load.
The most unpleasant thing about these response delays is that there is quite a wide selection of reasons that may cause them. For instance, these can be slow routing, slow application logic, resource CPU starvation, slow database queries, memory starvation, slow frameworks, etc.
So keep a close eye on these parameters and try to keep your server response time under 200 ms.
12) Eliminate render-blocking resources
It's highly advisable to keep only the most important external scripts, because each extra render-blocking resource adds round trips before the page can fully render.
If the external scripts are small, you can inline them directly into the HTML document to avoid extra network requests. Remember that inlining enlarges your HTML document, so you should only do it with small scripts. When it comes to non-critical scripts, Google recommends marking them with the async attribute. This will make your scripts load asynchronously alongside the rest of the page (the script executes while the page continues parsing) and won't influence the overall speed much. But please remember that this should be done only for scripts that are not required for the initial page load.
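The same effect can be achieved by injecting the script tag from JavaScript; here's a sketch (the script URL is made up, and the document is passed in explicitly to keep the function testable):

```javascript
// Inject a non-critical script with the async flag set, so it downloads
// in parallel and executes without blocking HTML parsing.
function loadAsyncScript(src, doc) {
  const script = doc.createElement('script');
  script.src = src;
  script.async = true;
  doc.head.appendChild(script);
  return script;
}

// Equivalent to <script async src="analytics.js"></script> in the HTML:
// loadAsyncScript('analytics.js', document);
```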
Speaking of stylesheets, it's nice to split up your styles into different files and add a media attribute to each stylesheet link. If you do that, the browser will only block the first paint to retrieve the stylesheets that match a user's device.
13) Using video formats for animated content
Believe it or not, animated GIFs can take up a huge amount of space. That is why, to reach our ultimate goal of making your webpages load lightning-fast, you need to convert GIF-heavy animations to video.
The fastest way of converting GIFs to video is with the ffmpeg command-line tool. Once you've installed it, simply feed it your GIFs and choose the video format you want to convert them to. It's advisable to pick MPEG-4, as it has the broadest support across browsers.
You can also try a relatively new WebM format developed by Google (just like WebP for images). While browser support for WebM isn't as wide as for MPEG-4, it's still very good in terms of its compression characteristics.
And because the <video> element allows you to specify multiple <source> elements, you can do the trick by stating a preference for a WebM source that many browsers can use while falling back to an MPEG-4 source that all other browsers understand.
14) Preconnecting to required origins
As a rule, establishing connections, especially secure ones, takes a lot of time. The thing is, it requires DNS lookups, SSL handshakes, secret key exchange, and some roundtrips to the final server that is responsible for the user’s request. So in order to save this precious time, you can preconnect to the required origins ahead of time.
To preconnect your website to some third-party source, you only need to add a link tag to your page. Here's what it looks like:
<link rel="preconnect" href="https://example.com">
After you implement the tag, your website won't need to spend additional time on establishing a connection with the required server, saving your users from waiting for several additional roundtrips.
15) Serving images in next-gen formats
Not all image formats are created equal. The truth is, our old trusty JPEG and PNG formats now have much worse compression and quality characteristics compared to JPEG 2000, JPEG XR, and WebP. So what I'm trying to say is that encoding your images in these formats will make them load faster as well as consume less cellular data.
Just like with video formats discussed earlier, you need to make sure your images are visible to all your visitors. This can be done by using the <picture> element, which allows you to list multiple, alternative image formats in order of priority. So even if a user's browser doesn't support a certain format, it can move on to the next specified format and display an image properly.
16) Ensuring text visibility during webfont load
All website owners out there want to stand out with their super cool custom fonts. The only bad thing about it is that such fonts may take too long to load, and until they do, many browsers either hide your text completely (the so-called flash of invisible text) or swap in a fallback font like Arial or Times New Roman.
The most direct fix is to add font-display: swap to your @font-face rules: this tells the browser to show the fallback font right away and switch to the custom font once it loads. Beyond that, make sure your content looks fine with basic fallback fonts like Arial or Georgia (especially on mobile devices). That way users can actually read your content, and your page still looks appropriate.
17) Minimize main-thread work
When downloading a certain page, your browser simultaneously carries out multiple tasks, such as script parsing and compilation, rendering, HTML and CSS parsing, garbage collection, script evaluation, etc.
Sometimes it can be quite challenging to get a breakdown of where CPU time was spent loading a page. Luckily, with the help of Lighthouse's new Main Thread Work Breakdown audit feature, you can now clearly see how much and what kind of activity occurs during page load. This will give you an understanding of loading performance issues related to layout, script eval, parsing, or any other activity.
19) Implementing a caching policy
When a browser requests a resource, the server that provides the resource can make the browser store it for a certain period of time. So for all repeat visits, the browser will use a local copy instead of fetching it from scratch.
In order to control how, and for how long, an individual response can be cached by the browser, use the Cache-Control HTTP header.
In addition to HTTP caching, determining optimal lifetimes for scripts (max-age), and supplying validation tokens (ETag), don't forget about Service Worker caching and V8's code caching.
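As a sketch of such a policy, here's one common scheme: long immutable lifetimes for fingerprinted static assets, revalidation for everything else. The file extensions and max-age values are illustrative, not a universal recommendation:

```javascript
// Pick a Cache-Control header based on the kind of resource served.
// Fingerprinted assets (e.g. app.3f2a1c.js) can be cached "forever",
// because any change produces a new URL; HTML should be revalidated.
function cacheControlFor(path) {
  if (/\.(js|css|png|webp|woff2)$/.test(path)) {
    return 'public, max-age=31536000, immutable'; // one year
  }
  return 'no-cache'; // store, but revalidate before each reuse
}

console.log(cacheControlFor('/static/app.3f2a1c.js'));
// -> public, max-age=31536000, immutable
```

In an actual server you would set this value on the response, e.g. res.setHeader('Cache-Control', cacheControlFor(req.url)) in Node.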
20) Avoiding an excessive DOM size
An overly large DOM tree with complicated style rules can negatively affect speed, runtime, and memory performance. The best practice is a DOM tree of fewer than 1,500 nodes in total, with a maximum depth of 32 nodes and no parent node with more than 60 children.
A very good practice is to remove DOM nodes you no longer need. For instance, consider removing nodes that are currently not displayed from the loaded document and creating them only after a user scrolls down the page or hits a button.
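To check your own page against those thresholds, you can walk the DOM and collect the numbers yourself; here's a rough sketch you can run in the browser console against document.documentElement:

```javascript
// Walk a DOM subtree and report total node count, maximum depth, and the
// largest number of children on any single parent -- the three numbers
// Lighthouse's DOM-size audit cares about.
function domStats(node, depth = 0) {
  const stats = {
    nodes: 1,
    maxDepth: depth,
    maxChildren: node.children.length,
  };
  for (const child of node.children) {
    const s = domStats(child, depth + 1);
    stats.nodes += s.nodes;
    stats.maxDepth = Math.max(stats.maxDepth, s.maxDepth);
    stats.maxChildren = Math.max(stats.maxChildren, s.maxChildren);
  }
  return stats;
}

// In the browser: domStats(document.documentElement)
```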
21) Minimizing critical requests depth
The Critical Request Chain is part of the Critical Rendering Path (CRP) strategy, whose core idea is to prioritize the loading of certain resources and change the order in which they load. Even though the critical request chain contains only the most important resources, it can still be optimized.
Unfortunately, there's no one-size-fits-all piece of advice on how to minimize critical requests depth exactly for your site (just like for many of the above-listed factors). However, it's always good to minimize your chains' length, reduce the size of downloaded resources, and, as always, defer the download of unnecessary resources.
22) User Timing marks and measures
I know it has been a long article filled with tons of technical stuff. However, I would still strongly recommend taking the technical side of page speed optimization seriously, as so far this is what influences your speed score the most. What's more, keep an eye on the real-world measurements from CrUX: even with a perfect score of 100, your page may still feel slow to users on poor connections or old devices.
Just as always, I'm looking forward to your feedback in the comment section below. Please share your experience with the new PageSpeed Insights tool as well as with technical optimization. See you there!