WordPress Cache and How It Can Improve Your Website Speed

A laptop with a hovering screen in front of it that says Cache

Nearly half of the World Wide Web is powered by WordPress, yet there's a common misconception that WordPress websites are slow and laggy.

In this article, we'll walk you through one of the techniques that can dramatically improve your website's performance: WordPress caching.

Caching is a technique that lets you store the result of a long-running task in fast-access storage and then reuse that result without redoing the task.

This means that cached content is displayed much faster than content loaded directly from the server. It's like memorizing your multiplication tables: once you've memorized them, it's much faster to recite the answer off-hand than to calculate it all over again.
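The idea can be sketched in a few lines of Python (a toy example, not WordPress code, with a sleep standing in for the expensive work):

```python
import time
from functools import lru_cache

# Hypothetical slow "task": the sleep simulates an expensive computation.
@lru_cache(maxsize=None)
def multiply(a: int, b: int) -> int:
    time.sleep(0.05)  # pretend this is costly work (a DB query, a render, ...)
    return a * b

t0 = time.perf_counter()
multiply(7, 8)                      # first call: does the real work
first_call = time.perf_counter() - t0

t0 = time.perf_counter()
multiply(7, 8)                      # second call: served from the cache
cached_call = time.perf_counter() - t0

print(cached_call < first_call)     # the cached call is far faster
```

Every cache layer discussed below applies this same trade: spend memory to skip repeated work.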

Caching can be performed at different stages of a running web application, and each stage is called a cache layer. We'll go through each stage later in the article. For now, let's see how the default WordPress cache works.

Default WordPress Caching

Object Caching is enabled by default: WordPress caches the result of almost every database query and reuses it. The downside is that this cache lives only for the current request (it isn't shared between requests), which can become a serious bottleneck for your application. There are several ways to improve this, so let's dive in.

Types of WordPress Cache Layers

As we said earlier, there are many caching techniques that can speed up your WordPress website and increase its throughput. Let's divide them into cache layers and take a closer look. The default one is the Object Caching layer, so let's start there. Then we'll look at the Bytecode cache, Page Cache, and CDN cache.

Object Caching

Object caching is a widely used technique in which you store the results of your database queries in a faster data store and then reuse them. It's enabled in WordPress by default, but the cache isn't shared across requests.

This means that if WordPress caches some data for request #1, the cached data will be used only during request #1; for request #2, WordPress will query the database again. This already improves performance, but nowhere near as much as using persistent cache storage such as Redis, Memcached, or APCu.

Redis

Redis is an open-source, in-memory data structure store. With WordPress, you can use Redis to store the values generated by WordPress's native object cache persistently, allowing cached objects to be reused across page loads. To set it up, follow the installation instructions for the Redis server, then install one of the following WordPress plugins, WP Redis or Redis Object Cache, and configure the Redis connection.
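If you go with the Redis Object Cache plugin, the connection is typically configured via constants in wp-config.php (the values below are examples; adjust them to your server):

```php
// wp-config.php — settings read by the Redis Object Cache plugin
define( 'WP_REDIS_HOST', '127.0.0.1' );       // Redis server address
define( 'WP_REDIS_PORT', 6379 );              // default Redis port
// define( 'WP_REDIS_PASSWORD', 'secret' );   // only if Redis requires auth
define( 'WP_REDIS_DATABASE', 0 );             // Redis database index
```

After adding these, enable the object cache from the plugin's settings page so it drops in its object-cache.php file.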

Memcached

Memcached is a distributed memory object caching system originally built to accelerate dynamic web applications by alleviating database load. Basically, it's like short-term memory for your apps.

To use it for WordPress object caching, install the Memcached server following its instructions, then install the PHP Memcached extension, and finally install the W3 Total Cache (or MemcacheD Is Your Friend) plugin and choose Memcached as the Cache Method in the Object Caching settings.
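On Ubuntu, the first two steps usually come down to installing two packages (package names and the PHP version may differ on your system):

```shell
sudo apt-get install memcached php-memcached
sudo systemctl restart php7.4-fpm   # restart PHP-FPM so the extension is loaded
```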

APCu

APCu is the successor of the APC extension. With APC you had both opcode caching (aka opcache, which we'll talk about later) and object caching. So what exactly made it extinct? PHP versions 5.5 and higher shipped with their own opcache, rendering APC incompatible and its opcache functionality useless. This is where APCu came in, offering only the object caching functionality (the outdated opcache was removed).

To use APCu, install the APCu PHP extension, then install the W3 Total Cache plugin for WordPress and configure it to use APCu.

Bytecode cache

Overview

This is one of the best ways to speed up not only your WordPress website but any application written in PHP, often by around 3x.

In a nutshell, the PHP code we write is compiled into bytecode, which is then executed by the PHP engine. Compilation is slow, but we can save the compiled bytecode and reuse it on every request. Here's a diagram of the process.

the process of bytecode caching

The downside of this cache layer is that if we change a PHP script, we have to clear the bytecode cache. But it's a brilliant solution because we rarely change code in production, do we? Also, OPcache can be configured to periodically check files for changes and refresh the cache automatically.

Installation on Ubuntu

To enable it on Ubuntu, you should first install the OPcache extension by running

sudo apt-get install php7.4-opcache

You can change the PHP version to the one that you’re using.
After installation, edit your php.ini file

sudo vim /etc/php/7.4/fpm/php.ini

and add this block:

opcache.enable=1
opcache.validate_timestamps=1
opcache.revalidate_freq=60
opcache.max_accelerated_files=20000
opcache.memory_consumption=128
opcache.interned_strings_buffer=32
opcache.fast_shutdown=1

Then save the file and restart FPM.

sudo systemctl restart php7.4-fpm 

The Zend OPcache extension should now be configured and caching PHP scripts hosted by your websites. We can check if it’s enabled by running a custom PHP script. We’ll do that later, but first, let’s see what the directives mean.

opcache.enable=1

By setting this directive to 1 you enable the Zend OPcache. By setting it to 0, you’ll disable the cache.

opcache.validate_timestamps=1

Setting this directive to 1 makes PHP check each file's timestamp to see if it has been modified and, if so, update the cache for that file. When set to 0, PHP will not update each file's cache, and you'll need to restart PHP every time you modify a file.

opcache.revalidate_freq=60

This directive sets how frequently (in seconds) OPcache checks file timestamps for changes. (A value of 1 means a check is performed at most once per second; 0 makes OPcache check for updates on every request.)

opcache.max_accelerated_files=20000

This sets the maximum number of scripts that can be stored in the cache; the allowed value ranges from 200 to 1,000,000. Generally, it should be higher than the number of PHP files in your hosting directories.

opcache.memory_consumption=128

Set this directive to the maximum amount of memory (in MB) the cache may use. You'll be able to see how much is actually used with the statistics script later. For a WordPress installation with few modules, 128 MB should be plenty.

opcache.interned_strings_buffer=32

PHP uses a technique called string interning that reduces memory usage and improves performance by storing each duplicate string in the code only once. Set this directive to 32 MB and check the stats later to see whether you need to increase it.

opcache.fast_shutdown=1

If enabled, a fast shutdown sequence is used for the accelerated code. Depending on the memory manager in use, this may cause some incompatibilities; set it to 0 if you experience problems. (Note that this directive was removed in PHP 7.2, where the fast shutdown sequence is integrated, so you can drop it on newer PHP versions.)

This is the minimum required configuration; there are many other directives, and you can find them all in the official OPcache documentation.

Test if OPcache is enabled

We can test whether OPcache is enabled by creating a PHP file that calls the opcache_get_status() function. This function returns a lot of useful information, such as which scripts have been cached and how much memory is in use.

To save time, there’s an open-source project called amnuts/opcache-gui that displays all stats on a nicely formatted page. The code is contained in a single index.php file. To get started, run the following command to download the file to your www root directory.

curl -o /var/www/opcache.php https://raw.githubusercontent.com/amnuts/opcache-gui/master/index.php

Assuming your web document root is located at /var/www the file will be downloaded there with the name opcache.php.

Browse to the file using a web browser and you should see the following output.

check if opcache is enabled

This is my development laptop, so OPcache isn't configured to cache aggressively, but you can see that it uses 7% of its memory. You can view the cached files in the second tab, reset the cache, and enable real-time updates. Using this dashboard, you can tune the settings described above for maximum performance and minimal memory consumption.

Page Cache

Until now, we've been exploring cache techniques that each improve your website's performance incrementally. But the fastest website is a static website. With the Page Cache layer, we can make our dynamic WordPress pages behave like a static site, which Nginx will serve at lightning speed. The idea is to cache a generated page response from PHP-FPM as a static file under a specific cache key (such as the URL) and serve that file from the cache directory to all visitors of your website.

The downside of this method is that it doesn't fit every website. For example, you may have a highly dynamic site (an e-shop), or pages that serve different data under the same URL (the example.com/account page is different for every logged-in user). But don't worry: you can exclude such pages from the cache and still use the Page Cache layer for the rest.

The two most popular Page Cache implementations are Nginx FastCGI Cache and Varnish Cache. As the name suggests, the first works only with Nginx, while the second is configured separately and runs as a proxy server, so you can use it with any frontend web server. I'll show the basic configuration for the Nginx FastCGI cache, as we do it at 10Web.

Nginx FastCGI Cache

First of all, you need to add the FastCGI cache configuration to /etc/nginx/nginx.conf.
To edit the file, type:

sudo nano /etc/nginx/nginx.conf

then, inside the http{} block, add the following lines:

fastcgi_cache_path /var/lib/nginx-tmp/cache levels=1:2 keys_zone=LIVE_SITE:100m inactive=60m max_size=64m;
fastcgi_cache_key $scheme$request_method$host$request_uri;

fastcgi_cache_lock on;
fastcgi_cache_revalidate on;
fastcgi_cache_use_stale error timeout invalid_header http_500;
fastcgi_ignore_headers Cache-Control Expires Set-Cookie;
fastcgi_pass_header Set-Cookie;
fastcgi_pass_header Cookie;

Now let's look at what each part of the fastcgi_cache_path directive does:

  1. The first argument specifies the cache location in the file system (/var/lib/nginx-tmp/cache).
  2. The levels parameter sets up a two-level directory hierarchy under /var/lib/nginx-tmp/cache. Having a large number of files in a single directory can slow down file access, so I recommend a two-level directory for most deployments. If the levels parameter isn’t included, Nginx puts all files in the same directory. The first directory uses one character in its name. The sub-directory uses two characters.
  3. The third argument specifies the name of the shared memory zone (LIVE_SITE) and its size (100m). This zone stores cache keys and metadata such as usage times. Keeping a copy of the keys in memory lets Nginx quickly determine whether a request is a HIT or a MISS without going to disk, which greatly speeds up the check. A 1 MB zone can store data for about 8,000 keys, so a 100 MB zone can store data for about 800,000 keys.
  4. max_size sets the upper limit of the size of the cache (64m in this example). If not specified, the cache can use all remaining disk space. Once the cache reaches its maximum size, the Nginx cache manager will remove the least recently used files from the cache.
  5. Data that hasn’t been accessed during the inactive time period (60 minutes) will be purged from the cache by the cache manager, regardless of whether or not it has expired. The default value is 10 minutes. You can also use values like 12h (12 hours) and 7d (7 days).

The second directive, fastcgi_cache_key, defines the key for cache lookups. Nginx applies an MD5 hash to the cache key and uses the resulting digest as the name of the cache file.
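The mapping from key to on-disk path can be illustrated with a short Python sketch (the levels=1:2 layout follows the Nginx documentation: the last digest character names the first-level directory, the two characters before it the second level; the key format matches our fastcgi_cache_key):

```python
import hashlib

def nginx_cache_path(cache_dir: str, key: str, levels=(1, 2)) -> str:
    """Reproduce how Nginx derives a cache file path for levels=1:2.

    The file name is the MD5 hex digest of the cache key; the level
    directories are taken from the *end* of the digest.
    """
    digest = hashlib.md5(key.encode()).hexdigest()
    parts, pos = [], len(digest)
    for n in levels:
        parts.append(digest[pos - n:pos])  # take n chars from the end, moving left
        pos -= n
    return "/".join([cache_dir, *parts, digest])

# Key built like $scheme$request_method$host$request_uri (hypothetical request)
key = "httpsGETexample.com/"
print(nginx_cache_path("/var/lib/nginx-tmp/cache", key))
```

This is why two requests that differ only in scheme, method, host, or URI get separate cache files.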

With fastcgi_cache_lock enabled, if multiple clients request a file that is not currently in the cache, only the first of those requests is allowed through to the upstream PHP-FPM server. The remaining requests wait for that request to be satisfied and then pull the file from the cache. Without fastcgi_cache_lock enabled, all requests go straight to the upstream PHP-FPM server.

fastcgi_cache_revalidate enables revalidation of expired cache items using conditional requests with the “If-Modified-Since” and “If-None-Match” header fields.

In the fastcgi_cache_use_stale directive, we configure Nginx to deliver stale content from its cache when it can’t get updated content from the upstream PHP-FPM server. For example, when the MySQL/MariaDB database server is down. Rather than relaying the error to clients, Nginx can deliver the stale version of the file from its cache.

We configure Nginx to ignore the Cache-Control, Expires, and Set-Cookie headers from the FastCGI server using fastcgi_ignore_headers, and we allow passing headers from the FastCGI server with the fastcgi_pass_header directive.

After entering these directives in the HTTP context, save and close the file.

Now we’ll configure cache exclude directives and set up our virtual host to use previously configured cache.

Open your WordPress vhost configuration

sudo nano /etc/nginx/sites-enabled/wp.conf

And add this to the server{} block; each line is commented.

set $skip_cache 0;
set $no_cache 0; # use $no_cache for easy enable and disable cache using sed in future

# POST requests and URLs with a query string should always skip the cache
if ($request_method = POST) {
    set $no_cache 1;
}

if ($query_string != "") {
    set $no_cache 1;
}

# Don't cache URLs containing the following segments
if ($request_uri ~* "/wp-admin/|/xmlrpc.php|wp-.*.php|/feed/|index.php|sitemap(_index)?.xml") {
    set $no_cache 1;
}

if ($request_uri ~* "/store.*|/cart.*|/my-account.*|/checkout.*|/addons.*") {
    set $no_cache 1;
}

# Don't use the cache for logged in users or recent commenters
if ($http_cookie ~* "comment_author|wordpress_[a-f0-9]+|wp-postpass|wordpress_no_cache|wordpress_logged_in") {
    set $no_cache 1;
}

# Don't use the cache for woocommerce cookies
if ( $cookie_woocommerce_items_in_cart = "1" ){
    set $no_cache 1;
}

#include custom fastcgi configs, especially custom pages that won't be cached
include /etc/nginx/fastcgi_conf.d/*.conf;
if ($no_cache = 1) {
   set $skip_cache $no_cache;
}

And finally add this to location ~ \.php$ {}:

fastcgi_cache_bypass $skip_cache;
fastcgi_no_cache $skip_cache;
fastcgi_cache LIVE_SITE;
fastcgi_cache_valid 60m;
add_header X-Cache $upstream_cache_status;

Here we use the previously defined $skip_cache variable to control when the cache is used. In the fastcgi_cache directive we point this vhost at the previously defined LIVE_SITE shared memory zone, and with add_header we make Nginx add the X-Cache header to the HTTP response.

That's how you can validate whether a request has been served from the FastCGI cache. The possible values are MISS, HIT, and BYPASS: MISS means FastCGI caching is enabled for the page but this response was generated by the FastCGI backend (populating the cache), HIT means it was served from the FastCGI cache, and BYPASS means the page was excluded from FastCGI caching and served by the backend.

Now save the file and exit, then validate your configuration via

sudo nginx -t

If everything is ok, you can reload Nginx via

sudo service nginx reload 

or

sudo systemctl reload nginx 

There are many other configuration options; you can find them in the official documentation.

Testing Nginx FastCGI Cache

For testing, we just use the curl command and look for the X-Cache header.

Run

curl -I https://known-hamster.10web.me/

On the first run you should see X-Cache: MISS; that's because the cache is being populated during this request, which is served via PHP-FPM.

X-cache value: MISS

On the second request, it should be served from the Nginx FastCGI cache and the X-Cache header value should be HIT as in the picture below.

X-cache value: HIT

And requesting the /wp-admin/ page will result in BYPASS because we configured it to not use Page Cache for that page.

Requesting /wp-admin/ page results in BYPASS

We've now converted our dynamic application to serve static HTML files, but we can achieve the ultimate speed by using a CDN to serve them.

CDN Cache

What is CDN?

A content delivery network (CDN) allows your content to travel fast by distributing servers across different locations.

A CDN allows for the quick transfer of assets needed for loading Internet content, including HTML pages, JavaScript files, stylesheets, images, and videos. Nowadays, a great deal of web traffic (including that of Facebook, Netflix, and other major sites) travels through CDNs.

In a nutshell, the purpose of CDNs is to speed up data travel and improve connectivity. To do this, a CDN locates servers at the exchange points between different networks. These so-called Internet exchange points (IXPs) are connection spots for different Internet providers allowing them to give each other access to traffic originating from their different networks. By being linked to these high-performing and intertwined locations, a CDN provider has the capacity to reduce both costs and travel times in data delivery.

A CDN network

CDN Caching

A CDN, or content delivery network, caches content (such as images, videos, or webpages) in proxy servers that are located closer to end-users than origin servers. (A proxy receives requests from users and transfers them to other servers.) The proximity of servers to users who make the requests is the reason why a CDN can deliver content faster.

So with CDN caching enabled, when a client device requests content, it is served from a nearby server; this is called a cache hit. Conversely, if the CDN doesn't have a copy of the resource and forwards the request to the origin server, a cache miss occurs.

You can configure how long data stays cached on CDN servers by setting the s-maxage directive of the Cache-Control header, which defines the TTL for shared caches. The CDN will not refresh its data from the origin server until the TTL expires. When the TTL expires, the cache removes the content, so the next request for it goes to the origin server. Some CDNs will even preemptively remove files from the cache if the content isn't requested for a while, or if a CDN user manually purges specific content.
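For example, assuming your origin runs Nginx behind a CDN, you could give browsers a short TTL and the CDN a longer one (the values here are illustrative, not a recommendation):

```nginx
# Browsers cache for 5 minutes; shared caches (the CDN) for 24 hours
add_header Cache-Control "public, max-age=300, s-maxage=86400";
```

Because s-maxage applies only to shared caches, you can purge the CDN on content changes without forcing every visitor's browser to hold pages for a full day.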

Cache Invalidation

There are only two hard things in Computer Science: cache invalidation and naming things.

This is one of my favorite quotes, by Netscape developer Phil Karlton, because it captures the most painful side of aggressive caching strategies. It's genuinely hard to tell when you should flush your cache: flushing more often than necessary costs performance, while flushing too rarely means serving outdated data.

At 10Web, we flush the cache on theme updates, post or page edits, and plugin activations and deactivations.

Cache flushing should proceed from the code layer out to the CDN layer: first flush the OPcache, then the Object Cache, then the Page Cache, and finally the CDN cache.
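At the command line, the full sequence might look roughly like this (the commands are illustrative: the cache path matches the Nginx configuration earlier, and `wp cache flush` assumes WP-CLI is installed):

```shell
# 1. Bytecode cache: restarting PHP-FPM clears OPcache
sudo systemctl restart php7.4-fpm
# 2. Object cache: flush the persistent object cache (assumes WP-CLI)
wp cache flush
# 3. Page cache: delete the FastCGI cache files configured earlier
sudo rm -rf /var/lib/nginx-tmp/cache/*
# 4. CDN cache: purge via your CDN provider's API or dashboard (varies per provider)
```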

This was a long guide on WordPress caching layers. I've tried to explain and show every step of configuring the different cache layers. We'll publish an article benchmarking each cache layer in the near future. Stay tuned and let us know what you think!


Vanush Ghamaryan
Vanush is a Software Architect at 10Web. He loves what every other architect loves, too: cats, his dog, beer, and coding.
