Updates from March, 2016

  • Profdil 11:57 am on March 18, 2016 Permalink
    Tags: membership site, scaling for logged in users   

    Scaling a WordPress membership site? 

    I have a more specific problem. I can cache content for non-registered users, but I do not know how to scale WordPress for my membership area. What should I do to serve as many logged-in users as possible concurrently? AFAIK caching is not recommended for logged-in users, so what do I do? I use Debian and Nginx.

    • Beau Lebens 6:22 pm on March 18, 2016 Permalink | Reply

      Have you looked at using something like memcached as your object cache? Also some basic scaling tuning around your webserver, database server, PHP install (e.g. OpCode caching such as APC) will help you get pretty far, even on a single server. Once you exceed that, you’d want to get your DB on a separate server (or more than one), then look at memcache on a separate server (or pool), and start scaling horizontally.
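      As a sketch of the memcached route: the common memcached-backed object-cache drop-ins read their server pool from wp-config.php, so moving the cache to a separate box (or pool) is mostly a one-line change. The global name below follows the widely used Memcached Object Cache drop-in; check your drop-in's readme, as the exact setting varies.

      ```php
      // wp-config.php — hypothetical server pool for a memcached-backed
      // object-cache drop-in. The $memcached_servers global is read by the
      // popular Memcached Object Cache plugin; other drop-ins differ.
      $memcached_servers = array(
          'default' => array(
              '127.0.0.1:11211',   // local instance to start with...
              // '10.0.0.5:11211', // ...add dedicated cache boxes to scale out
          ),
      );
      ```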

    • Nick Ciske 6:06 pm on March 22, 2016 Permalink | Reply

      FYI – WP Rocket supports per user caching: http://wp-rocket.me

      Other options:

      Cache what you can (in an object cache or memory cache) to avoid hitting the DB – e.g. menus, widget output, common queries, API requests, etc. Use autoloaded options or transients as a second-line cache to avoid expensive operations.

      Split the DB onto a different server (or servers) — look into MariaDB, or use a high-performance option like Amazon DynamoDB.

      Move towards a client side app that makes API calls to the WP API vs. full page loads.

  • davidcoveney 10:57 am on August 24, 2012 Permalink  

    Making WordPress Scale, On A Budget 

    This is copied in from the interconnect/it post on scaling WordPress.

    If you create great content, your WordPress site is going to get a lot of traffic. That’s a good thing! One of our clients has done just that, but we had a couple of problems – he’s become popular in general, bringing in, on busy days, over 10,000 visitors, many of whom look around the site. And worse, he’s also become popular on Twitter.

    This means that when he tweets about an update, he and his many followers create huge spikes in traffic. But there's an issue of cost – the site, Sniff Petrol, carries no advertising and is essentially a spare-time project for the owner. And that means there aren't thousands to be spent on its hosting. We needed to manage these spikes well, but keep the costs down. A bigger server, as offered by the hosting company, was not the answer. It was time to geek out.


    Running experiments is the only way to test what will improve your site's performance. Below are the admittedly rather technical findings. We hope you find them useful.

    sniffpetrol.com is a WordPress-based motoring and motorsport satire site. It is currently hosted on a Linode VPS (Virtual Private Server) [affiliate link] with 4 CPU cores running at 2.27GHz and 1GB of RAM. A LAMP (Linux, Apache, MySQL and PHP) installation is used to serve the site.

    This article outlines the problems we encountered when this site experienced a sudden spike in traffic, as well as the methods we employed to make the site more responsive under heavy load without resorting to a more expensive server. A brief guide to how we implemented our solution is also given, and the changes made to the server configuration settings for Apache, PHP and MySQL are outlined.

    The Problem

    When using the default configurations for Apache, PHP and MySQL and no server-side caching, we found that when load testing the site, load times increased sharply as the number of concurrent users passed the 250 mark. Also, system load reached such high levels that the server became completely unresponsive (sometimes to the point of needing a manual reboot) due to excessive disk “thrashing” caused by Apache rapidly swapping from RAM to disk in an attempt to free up more RAM to serve additional clients.
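    A quick back-of-envelope check makes the thrashing easy to see: with the default prefork settings, Apache is allowed to spawn far more children than the RAM can hold. The figures below are illustrative assumptions, not measurements from this server, but the arithmetic is the point:

    ```shell
    # Rough ceiling on Apache children before a 1GB box starts swapping.
    # All numbers are illustrative assumptions, not measurements.
    total_ram_mb=1024      # the Linode's RAM
    reserved_mb=300        # MySQL, PHP caches, OS buffers, etc.
    per_child_mb=25        # plausible RSS of one mod_php prefork child
    ceiling=$(( (total_ram_mb - reserved_mb) / per_child_mb ))
    echo "children before swapping: ~$ceiling"
    ```

    With caching in place each request is served far faster, so fewer children are busy at once – but if Apache's client cap allows more children than RAM can hold, a big enough spike can still push the box into swap.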

    The Solution

    The PHP, Apache and MySQL configurations on the server were changed from the defaults and the APC (Alternative PHP Cache) caching module was installed. To make best use of the APC caching module, the W3 Total Cache WordPress plug-in was installed on the site. A brief guide to installing APC and W3 Total Cache and getting them to work together can be found in the next section.

    So, why did we decide to use the APC caching PHP module, or any other method of server-side caching for that matter? The short answer is: Efficiency.

    APC allows us to cache dynamically generated content. This cached content can then be sent to the client when a request for it is received, instead of wasting more server resources to regenerate it when nothing has changed. This considerably reduces the load on the server.

    We also made use of Amazon’s Cloudfront CDN (Content Delivery Network) and S3 services to store and serve static content (theme files and images, for example) to further lighten the load on the server. Our main reasons for choosing Amazon’s CDN solution were the pay-as-you-go pricing structure and the low storage/data transfer costs. A table detailing the costs can be found here.

    The W3 Total Cache plug-in allows you to configure the site to make use of Amazon’s S3 and Cloudfront services as a CDN from the WordPress dashboard. It takes care of uploading the theme files and other static content, and handles URL rewriting for uploaded files automatically. Overall, we were very impressed by how intuitive the whole setup process was. One online guide we found useful when setting up the CDN can be found on the Freedom Target site.

    Getting APC and W3-Total-Cache Up and Running

    If you are using Ubuntu Server, installing the APC Caching module on your server is as simple as running the command below:

    sudo apt-get install php-apc

    You will then need to restart Apache when the installation process has finished. Ubuntu/Debian users can do this by issuing the following command:

    sudo /etc/init.d/apache2 restart

    The installation and configuration of the W3-Total-Cache plug-in is a little more involved.

    Before you install the plug-in, you will need to make sure that you have the following Apache server modules installed and enabled:

    • expires
    • mime
    • deflate
    • headers
    • env
    • setenvif
    • rewrite

    It’s best to obtain the latest stable version from the WordPress plug-in SVN repository and upload the files to your server manually, rather than using the installer integrated into WordPress.

    The plug-in comes with quite comprehensive documentation in the form of a readme file. Other setup guides can also be found quite easily on the Web. One installation guide we found useful can be found here.

    When you have everything installed and the W3-Total-Cache plug-in has been activated, you will have to configure it to use the APC Caching module on the server. To do this, select the General Settings option from the Performance menu in the WordPress Dashboard and, from the dropdown list next to each option (Page Cache, Minify, Database Cache and Object Cache) select the ‘Opcode: Alternative PHP Cache (APC)’ option. Make sure that the Enable checkbox is checked for each option, and then click the Save Changes button next to each option.

    Server Configuration Changes

    The changes made to the configurations for each component of the LAMP stack are outlined below:


    Apache

    The following changes were made to the ‘Prefork MPM’, ‘Worker MPM’ and ‘Event MPM’ sections of the apache.conf configuration file:

    • The Timeout option was set to 150 seconds.
    • The KeepAliveTimeout option was set to 3 seconds to minimise the time each Apache process sits idle waiting for the client to send a KeepAlive request.
    • The MaxClients option was set to 250 to allow for more concurrent users.
    • The MaxRequestsPerChild option was set to 400, both to minimise the consumption of system resources by an individual server process and to allow resources (especially RAM) to be freed up more quickly.
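    Put together, the settings above land in the Apache configuration roughly like this (a sketch for the prefork MPM; file locations and section names vary by distribution):

    ```apacheconf
    # apache.conf — values from the list above (prefork MPM sketch)
    Timeout            150
    KeepAlive          On
    KeepAliveTimeout   3

    <IfModule mpm_prefork_module>
        MaxClients           250
        MaxRequestsPerChild  400
    </IfModule>
    ```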


    MySQL

    The relevant lines from the MySQL configuration file are:


    key_buffer = 16M
    max_allowed_packet = 16M
    thread_stack = 192K
    thread_cache_size = 8
    myisam-recover = BACKUP
    query_cache_limit = 1M
    query_cache_size = 16M


    PHP

    To lessen the consumption of RAM by PHP scripts when under heavy load, the memory_limit option in php.ini was changed to 64M.
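    In php.ini that is the single line below; note PHP expects the shorthand ‘64M’, not ‘64MB’. The APC shared-memory size shown is an illustrative assumption, not a value from this article:

    ```ini
    ; php.ini
    memory_limit = 64M

    ; apc.ini (Debian/Ubuntu) — illustrative assumption, not from the article:
    ; apc.shm_size = 64M
    ```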

    Testing Method

    The load testing service Load Impact was used to perform the load testing on the server.

    For each test, we used a simulated load of 250-1000 simultaneous clients with each ‘client’ spending an average of 20 seconds viewing a page. We started the test with an initial load of 250 clients and then ramped up the number of clients by 250 each time, up to the limit of 1000 simultaneous clients.

    Test Results

    The User Load time results using the amended Apache, MySQL and PHP configurations without using APC caching or a CDN are shown below.

    User Load Time (No APC caching or CDN enabled)

    Although the server did not become completely unresponsive, load times increased considerably after 250 clients, exceeding 10 seconds at approximately 350 clients. The bandwidth usage results for this test can be found in the graph below:

    Bandwidth Usage (No APC caching or CDN enabled)

    The maximum amount of bandwidth used in this test (approximately 33 Mbps) was considerably less than the 100Mbps the server was capable of transferring. Taking both the user load time and bandwidth usage results into account, it was apparent that the server was not yet performing as efficiently as it should be.

    With the APC caching module used in conjunction with the W3 Total Cache plug-in on the site, the reduction in load times was considerable, with user load times at 1000 clients being approximately 25 times faster, as the graph below shows:

    User Load Time (Using W3-Total-Cache plug-in with APC caching)

    The bandwidth usage results for this test can be found in the graph below:

    Bandwidth Usage (Using W3-Total-Cache plug-in with APC caching)

    Although there is a considerable improvement in bandwidth usage up to 750 clients, the bandwidth usage drops to around the same level (33Mbps) at 1000 clients as was seen during the first test. This is possibly a function of the VPS having to share its network interface with other websites, and may even be due to a certain amount of bandwidth throttling by the host.

    Switching to Content Delivery Networks

    When static content was served from the Amazon CDN and APC caching was enabled from within the W3-Total-Cache plug-in, we found that performance could be further improved:

    User Load Time (W3-Total-Cache Plugin and Amazon S3 CDN Used)

    Although the improvement is not as dramatic as in the previous test, when compared to the load times with APC caching only, the increase in load times as the number of concurrent clients grows is much smoother. The bandwidth usage graph for this test can be found below. The data shown is the combined bandwidth usage of both the server and the CDN:

    Bandwidth Usage (W3-Total-Cache Plugin and Amazon S3 CDN Used)

    Here, we found that the bandwidth usage increased far more smoothly as the test progressed, than in the test with APC caching and W3-Total-Cache only. This is to be expected, as the server no longer had to deal with serving large static files so fewer system resources were required to serve the same number of clients.


    It’s easy to see that using server-side caching and careful server configuration gives excellent results. Using a content delivery network, meanwhile, means the delivery of content grows more consistently. One problem with many servers, and one which is rarely acknowledged, is the performance available from the network interface. Most won’t serve more than 100Mb/s in theory, and in practice more like 70Mb/s. What can’t be seen in the charts is the momentary output peaks of over 130Mb/s that we saw using the content delivery network; the charts just show the averages. As a consequence it’s hard to show the improvement gained from using a CDN at the 1000-user level.

    What we’d like to do, in the future, is to test the server up to 5,000 concurrent users. This is serious traffic, and also costs quite a bit of money to test. At the moment we know that the Sniff Petrol site can handle around 130,000+ page views per hour. But it may be able to handle a lot more. We’d love to see how far it can be pushed. Would it be possible to have the capacity to serve up to a million pages in an hour without having to commission a massive server? Keep coming back as we’ll be carrying out this test in the future.
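    To put those targets in perspective, converting hourly page views into a sustained requests-per-second figure is a one-liner (illustrative arithmetic only; real traffic arrives in bursts, not evenly):

    ```shell
    # Sustained requests/second implied by hourly page-view figures.
    current=130000    # what the site handles today, per hour
    target=1000000    # the 'million pages an hour' question
    echo "current: $(( current / 3600 )) req/s, target: $(( target / 3600 )) req/s"
    ```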

    As most of our clients use their own large scale hosting (we work with newspapers and publishers a lot!) we’ve generally let them worry about hosting requirements. They usually do pretty well and have some impressive hardware. But recently we’ve started offering a managed WordPress hosting service to our clients, and had to start learning about WordPress scaling ourselves. We love efficiency, and the idea of simply buying bigger boxes as a solution to performance problems appalls us. Modern computers are incredibly powerful – they can do a lot, for very little money.

    • Gavin Pearce 11:00 am on August 24, 2012 Permalink | Reply

      You’re a star. Great article thanks David.

    • simonthepiman 11:37 am on August 24, 2012 Permalink | Reply

      With full pages cached and static content offloaded to another server (a CDN), I’d expect better performance than this.

      Or maybe it’s just because you’re using Apache. I would like to see a comparison of performance using Apache, Nginx, or a combination of both.

      Really makes me want to knock up a site and do some load testing… [insert loadImpact affiliate link here]

      PS: joking about the affiliate link.

      • davidcoveney 10:28 am on August 28, 2012 Permalink | Reply

        It’s hard to know for sure – it could well be Apache hitting limits, or server config, or even the processing performance of the VPS. There are so many variables it’s scary.

        • simonthepiman 10:38 am on August 28, 2012 Permalink

          You’re right about there being so many variables. You can easily make a full time job out of testing and improving performance. I enjoy it, it’s pretty addictive.

  • gavinpearce379 10:35 am on August 24, 2012 Permalink
    Tags: server setup

    Current setups 

    Interested to hear about current set-ups out there for scaling WordPress.

    Architecture, traffic/data figures, server specs/software, caching, plugins / custom development, pricing all welcomed!

    From one of our enterprise set-ups (all in the Rackspace Cloud):

    • 2x Red Hat Linux (web) servers
    • 1x Red Hat Linux (db) server
    • 1x Load Balancer

    Web Server Spec
    1,024MB RAM
    40GB Disk

    DB Server Spec
    2,048MB RAM
    80GB Disk

    Total monthly cost approx: £160
    [You could lower this cost further by losing the Red Hat license fees]

    We wrote a custom plugin to allow the media library to upload directly into Rackspace Cloud Files, improving loading times through the CDN and solving the problem of sharing files between servers. We also serve as many of the static theme files as possible from the Rackspace CDN.

    Shameful Rackspace partner link: http://www.rackspace.co.uk?id=3637&cmp=partner_3seven9

    Rackspace link (without our partner tracking): http://www.rackspace.co.uk/

    • benmay 8:00 am on September 1, 2012 Permalink | Reply

      What are your traffic stats for that setup? Quite a beefy solution, would expect it to be able to handle quite a bit!

      • rickalee2k 2:58 pm on June 4, 2013 Permalink | Reply

        Incredibly interested in “We wrote a custom plugin to allow the media library to upload direct into Rackspace Cloud Files”. You skipped local /wp-content/ completely? Any chance you could release this to the public?

    • shawn 8:19 pm on August 3, 2013 Permalink | Reply

      Agreed, the plugin sounds incredibly useful. Do you plan on releasing it?

  • davidcoveney 10:18 am on August 24, 2012 Permalink
    Tags: cheap   

    Howdy all – we did some research on scaling up on a budget, and the results are here: http://interconnectit.com/1254/make-wordpress-scale-on-a-budget/

    Using the same approach we’ve been able to set up a site that tested up to 5,000 concurrent visitors but, once we undid some things that caused stability problems, we pulled back to about 3,500. Currently the site has a fail-over server, but we could put a load balancer in front and pretty much double the capacity very quickly. That sort of load means a client with a million page views a month ticks over on a cheap 8GB VPS with a typical load average of 0.6 on a four-processor machine.

    • Gavin Pearce 10:44 am on August 24, 2012 Permalink | Reply

      Nice article, thanks David. Would you be happy to cross post the article in its entirety over here? Hoping to collate all of the information into one place.

    • davidcoveney 10:45 am on August 24, 2012 Permalink | Reply

      Sure – would you like here in the stream?

      • Gavin Pearce 10:47 am on August 24, 2012 Permalink | Reply

        I think it deserves its own blog post – don’t you? Your account should let you log in to /wp-admin I hope! Shout if you have any problems.

    • Tom Barrett (@TCBarrett) 10:18 am on August 28, 2012 Permalink | Reply

      We ran a site that had very few logged in users (basically just serving up pages). It served 10k+ visitors and 100k+ page impressions a day using the Nginx fcgi cache on a Linode 512 ($20/month). That’s without any other caching.
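      For anyone wanting to try the same approach, the nginx fastcgi cache Tom mentions takes only a handful of directives; the paths, zone name, socket and timings below are illustrative assumptions, not his configuration:

      ```nginx
      # http {} block: define the cache store (path and zone name are examples)
      fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=WORDPRESS:10m inactive=60m;
      fastcgi_cache_key "$scheme$request_method$host$request_uri";

      server {
          location ~ \.php$ {
              include fastcgi_params;
              fastcgi_pass unix:/var/run/php-fpm.sock;  # adjust to your PHP backend
              fastcgi_cache WORDPRESS;
              fastcgi_cache_valid 200 10m;              # cache successful pages for 10 minutes
              # Crude but common: skip the cache whenever any cookie is present,
              # so logged-in users always get fresh pages.
              fastcgi_cache_bypass $http_cookie;
              fastcgi_no_cache $http_cookie;
          }
      }
      ```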

      • davidcoveney 10:26 am on August 28, 2012 Permalink | Reply

        Tom, one thing to be aware of with local caching is that without a CDN you can quickly saturate your network connection. If each page with images etc. is 1MB (not so unusual these days!) then you can only serve about six of those per second on a 100Mb/s connection. That’s not a lot of users for a busy site, although it’s still high traffic.

        One of the key things with getting scaling right isn’t the overall traffic, but the handling of peaks. 5,000 concurrent visitors would be equivalent to 21 million+ per day if they were neatly spread out, but sadly they never are! However, a tweet from somebody very famous can easily send a lot of traffic. I’ve worked out that the traffic level is approximately one concurrent visitor per 50 active followers (the latter being tricky to work out – e.g. light-entertainment celebrities have far fewer active followers than niche players), so somebody with 25,000 genuine followers will create a traffic surge of 500 concurrent visitors.
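        Both rules of thumb above are easy to sanity-check with a little arithmetic. The numbers are illustrative; note the first figure is a theoretical line-rate ceiling, and TCP/HTTP overhead plus contention between concurrent transfers push the practical rate well below it, which is why the half-dozen figure in the comment is lower:

        ```shell
        # Theoretical full-page deliveries per second on a saturated link.
        link_mbps=100    # NIC line rate
        page_mb=1        # full page weight including images
        echo "theoretical pages/sec: $(( link_mbps / 8 / page_mb ))"

        # Twitter surge heuristic: ~1 concurrent visitor per 50 active followers.
        followers=25000
        echo "expected surge: $(( followers / 50 )) concurrent visitors"
        ```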

  • Dwain Maralack 10:02 am on August 22, 2012 Permalink  

    Proposal: Resource Page 

    I suggest adding a resources page to this blog.

    From there we can collaboratively add resources to the comments which can be incorporated into the page by the administrator.


  • gavinpearce379 10:23 am on August 21, 2012 Permalink  

    Scaling WordPress 

    Dedicated to the pros and cons of scaling a WordPress website. Thoughts on a postcard.

    • blobaugh 4:34 pm on August 21, 2012 Permalink | Reply

      You need to invite us to the blog or we cannot create new posts, only reply to your post

    • Simon 5:55 pm on August 21, 2012 Permalink | Reply

      This all sounds very interesting and I hope this site takes off.

      I know there are various resources online for scaling WordPress (a video here, a blog post there) but it would be great to have a central source where people can share experiences and data.

      I keep a log of page generation times for all dynamic PHP requests (nginx upstream log) as well as data throughput, CPU, memory usage etc using Graphite + Host sFlow. I’m currently trying to bring all of this data together in a useful way and once I’ve achieved that I’m going to reassess our hosting infrastructure.

      I currently have a modest number of servers working in a cluster. Nginx serving static content, upstream Apache instances (with APC) dealing with the dynamic requests and a single MySQL box. I also have memcached in use for certain configurations as an object cache.
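      A minimal sketch of that front-end split, with nginx serving static files itself and proxying everything dynamic to the Apache/APC pool (addresses, ports and paths are assumptions, not Simon’s actual config):

      ```nginx
      upstream apache_backends {
          server 10.0.0.11:8080;   # Apache + APC instance 1
          server 10.0.0.12:8080;   # Apache + APC instance 2
      }

      server {
          root /var/www/wordpress;

          # Static assets served directly by nginx, with long expiry headers
          location ~* \.(css|js|png|jpe?g|gif|ico|svg)$ {
              expires 30d;
          }

          # Everything dynamic goes to the Apache pool
          location / {
              proxy_pass http://apache_backends;
              proxy_set_header Host $host;
          }
      }
      ```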

      I’m just not sure this is the best solution so like I said I’m looking into others.

    • Dwain Maralack 11:01 am on August 22, 2012 Permalink | Reply

      Would be nice to see this page eventually make it to make.wordpress.org/scale 🙂

    • Greg Turner 10:27 am on August 24, 2012 Permalink | Reply

      Could someone please fix the misspelling of the word practical in the header?
