Recent Updates

  • Profdil 11:57 am on March 18, 2016 Permalink
    Tags: membership site, scaling for logged in users   

    Scaling a WordPress membership site? 

    I have a more specific problem. I can cache content for non-registered users, but I do not know how to scale WordPress for my membership area. What should I do to serve as many logged-in users as possible concurrently? AFAIK caching is not recommended for logged-in users, so what do I do? I use Debian and Nginx.

     
    • Beau Lebens 6:22 pm on March 18, 2016 Permalink | Reply

      Have you looked at using something like memcached as your object cache? Also some basic scaling tuning around your webserver, database server, PHP install (e.g. OpCode caching such as APC) will help you get pretty far, even on a single server. Once you exceed that, you’d want to get your DB on a separate server (or more than one), then look at memcache on a separate server (or pool), and start scaling horizontally.
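      For the Debian setup mentioned in the question, a minimal sketch of the pieces Beau suggests (package names are assumptions for Debian/Ubuntu of that era, and the drop-in plugin named is one common choice, not something prescribed in this thread):

      ```shell
      # Install memcached plus the PHP client, and APC for opcode caching
      sudo apt-get install memcached php5-memcached php-apc
      sudo /etc/init.d/apache2 restart   # or restart PHP-FPM, depending on the stack

      # WordPress only talks to memcached through an object-cache drop-in:
      # copy one (e.g. the Memcached Object Cache plugin's object-cache.php)
      # into wp-content/object-cache.php and it takes over WP_Object_Cache.
      ```

      Once the drop-in is in place, all `wp_cache_*()` calls go to memcached instead of per-request PHP memory, which is what makes the cache useful across logged-in requests.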

    • Nick Ciske 6:06 pm on March 22, 2016 Permalink | Reply

      FYI – WP Rocket supports per user caching: http://wp-rocket.me

      Other options:

      Cache what you can (in an object cache or memory cache) to avoid hitting the DB – e.g. menus, widget output, common queries, API requests, etc. Use autoloaded options or transients as a second-line cache to avoid expensive operations.

      Split the DB onto a different server (or servers) — look into MariaDB, or use a high-performance option like Amazon DynamoDB.

      Move towards a client side app that makes API calls to the WP API vs. full page loads.
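      As a sketch of the transient suggestion above, WP-CLI (assuming it is installed; it is not mentioned in the thread) exercises the same API that `set_transient()`/`get_transient()` use in code:

      ```shell
      # Store an expensive-to-generate fragment for an hour, then read it back
      wp transient set sidebar_widgets_html '<ul><li>cached</li></ul>' 3600
      wp transient get sidebar_widgets_html
      ```

      With a persistent object cache installed, transients live in memory rather than the options table, so they double as a cheap second-line cache.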

  • simonthepiman 11:52 pm on August 27, 2012 Permalink
    Tags: apc, opcode cache, performance   

    APC opcode caching of multiple sites 

    APC performs two functions, one as an object cache and the other as an opcode cache. In this post I’m talking specifically about the opcode caching functionality when used on a server hosting multiple websites.

    What is an opcode cache?

    I’m sure most of you know, but I’ll just cover it quickly for those that don’t. Each time a PHP web page is requested, the web server has to compile the human-readable PHP code into a language the processor can understand; this compiled form is called opcode. APC caches opcode in RAM so that subsequent requests for that file do not have to go through the same process of opening the file from disk and compiling it into opcode. So not only does it save on compiling the code, it also saves the disk access. If your website’s files are located on a distributed filesystem such as NFS then opcode caching will give you upwards of 100% improvement in performance any day of the week.

    Configure APC with enough memory for your needs

    APC’s default configuration is probably fine if you’re hosting a single website with hardly any plugins and a basic theme. Otherwise you should change the configuration, especially if you have more than one website hosted on the server or your website has lots of PHP files.

    In its default state APC will allocate 30MB of shared memory. The PHP files of a pretty moderate WordPress website will need more than 30MB of space for the opcode cache (there are a lot of PHP files). You’ll probably want to look at budgeting 40MB for each site (so 10 websites would wipe out the RAM of a 500MB small cloud server). If APC runs out of space to store its cached PHP files then it will totally expunge the cache and start over, and if that happens on every page load then you can say goodbye to your performance increase. There are some settings you can tweak to improve things a little, but really you just need to allocate enough memory to APC.
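    The sizing rule of thumb above can be sketched as simple arithmetic (40MB per site is the post’s estimate; the ini path is the Debian default and may differ on your distribution):

    ```shell
    # Budget APC shared memory from a per-site rule of thumb (~40MB/site)
    SITES=10
    MB_PER_SITE=40
    SHM_SIZE_MB=$((SITES * MB_PER_SITE))
    echo "apc.shm_size=${SHM_SIZE_MB}M"   # goes into e.g. /etc/php5/conf.d/apc.ini
    ```

    At 10 sites that works out to 400M, which is indeed most of the RAM of a small 500MB cloud server.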

    Use apc.php to keep track of your APC usage and make sure it has enough free space to fit everything in. If you use APC as an object cache then you will need to allocate even more space to every website you host.

    The benefits of using the APC opcode cache on a WordPress website

    The graph on the left is without APC enabled and the graph on the right has APC enabled. The initial load test done without the APC opcode cache nearly crashed the server at 75 clients so the load test had to be stopped. Once APC was enabled requests per second are pretty stable with the server now easily doubling its performance.

    More bang for your buck

    I’ve thought for some time now that if you run multiple sites on your web server and those sites use a common codebase then disk space, RAM and processor time could be saved by creating a symlink to those common files. Rather than each website loading up and caching its own copy of the same code it would be much better if that code could be cached once and accessible to all sites.

    In my case the codebase is WordPress but this could also apply to other frameworks such as CodeIgniter or CakePHP.

    So to test this and confirm my initial thoughts I created a single-line PHP file called simon.php, then created 2 symlinks and 2 hard links to that PHP file. I then went through a process of clearing the APC system cache, executing the symlinks, and checking with apc.php to see whether there were entries in the system cache for either the links or the file they referenced.
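    The setup is easy to reproduce (filenames as in the post; actually checking apc.php for cache entries additionally needs a running Apache with APC):

    ```shell
    cd "$(mktemp -d)"
    printf '<?php echo "hello";\n' > simon.php
    ln -s simon.php soft1.php
    ln -s simon.php soft2.php
    ln simon.php hard1.php
    ln simon.php hard2.php
    ls -li   # hard links share simon.php's inode; symlinks are separate entries
    ```

    The inode sharing is what lets APC (which caches by resolved file) store the code once for all five names.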

    No matter which of the 4 files I accessed in a web browser, there was only a single entry in the APC system cache. Each time I refreshed a page the system cache hit count incremented by one; whichever of the 4 files I accessed, that same counter increased. It’s the path initially used to access simon.php that is recorded by APC; any subsequent requests that ultimately resolve to the simon.php file are attributed to the same entry.

    There is one slight difference between using hard and soft links in this situation (besides the usual differences). If the first request to a PHP file is through a hard link then the path to the hard link is stored by APC. If however the first request to a PHP file is through a soft link (symlink) then it’s the ultimate path (simon.php) that is recorded.

    So you should be able to share a collection of PHP files among an unlimited number of sites using soft/hard links and those files will only be put in the opcode cache once and therefore only be taking up space in RAM once.

    Let’s just confirm at a lower level that the opcode is being cached

    I still felt like I needed further proof, so I forced Apache to use just a single process and then watched that process to see exactly what it was doing each time it handled a request for my links. I’ve included those interactions between Apache and the OS in the gists below, both when Apache should have been opcode caching the file and when it should have been reading the file out of the cache rather than compiling it all over again.

    For those of you who can’t be bothered to read through those gists (120-180 lines each), it pretty much goes like this:

    Initial request to a PHP file using the hard link

    1. Apache uses the “stat” command to get some information about the file requested
    2. Apache tries to open .htaccess files all the way up the directory tree
    3. Apache uses the “lstat” command to get information about the link and every directory leading to that link file (goes through this exact process 3 or 4 times for some reason)
    4. Then it opens up the file from the disk
    5. The file is then added to the opcode cache in shared memory
    6. Serves the resulting file to the browser
    7. Logs the request

    Subsequent requests to a PHP file using the hard link

    1. Apache uses the “stat” command to get some information about the file requested
    2. Apache tries to open .htaccess files all the way up the directory tree
    3. Apache reads the file from the opcode cache in shared memory
    4. Serves the resulting file to the browser
    5. Logs the request

    The only difference I can see when doing the same with a soft link (symlink) is that once Apache has used “stat” to get file information it then follows the link and switches to using the ultimate file name/path for further “lstat” checks.

    ————————————–

    So the opportunity to squeeze out further performance in this situation is there. I’m definitely on the lookout for the best way to run separate WordPress installs with a shared core codebase (Multitenancy).

    ————————————–

    Gists for the hard linked file
    Apache + APC opcode cache : initial request for a hard linked PHP file

    Apache + APC opcode cache : subsequent request for a hard linked PHP file (it’s coming from the opcode cache)

    Gists for the soft linked file
    Apache + APC opcode cache : initial request for a soft linked PHP file (symlink)

    Apache + APC opcode cache : subsequent request for a soft linked PHP file (it’s coming from the opcode cache)

     

  • David Coveney 10:57 am on August 24, 2012 Permalink  

    Making WordPress Scale, On A Budget 

    This is copied in from the interconnect/it post on scaling WordPress.

    If you create great content, your WordPress site is going to get a lot of traffic. That’s a good thing! One of our clients has done just that, but we had a couple of problems – he’s become popular in general, bringing in, on busy days, over 10,000 visitors, many of whom look around the site. And worse, he’s also become popular on Twitter.

    This means that when he tweets about an update, he and his many followers create huge spikes in traffic. But there’s an issue of cost – the site, Sniff Petrol, carries no advertising and is essentially a spare-time project for the owner. And that means there isn’t thousands to be spent on its hosting. We needed to manage these spikes well, but keep the costs down. A bigger server, as offered by the hosting company, was not the answer. It was time to geek out.

    Experiments

    Running experiments is the only way to test what will improve your site’s performance. Below are the admittedly rather technical findings. We hope you find them useful.

    sniffpetrol.com is a WordPress-based motoring and motorsport satire site. It is currently hosted on a Linode VPS (Virtual Private Server) [affiliate link] with 4 CPU cores running at 2.27GHz and 1GB of RAM. A LAMP (Linux, Apache, MySQL and PHP) installation is used to serve the site.

    This article outlines the problems we encountered when this site experienced a sudden spike in traffic, as well as the methods we employed to make the site more responsive under heavy load without having to resort to a more expensive server. A brief guide to how we implemented our solution will also be given, and the changes made to the server configuration settings for Apache, PHP and MySQL will be outlined.

    The Problem

    When using the default configurations for Apache, PHP and MySQL and no server-side caching, we found that when load testing the site, load times increased sharply as the number of concurrent users passed the 250 mark. Also, system load reached such high levels that the server became completely unresponsive (sometimes to the point of needing a manual reboot) due to excessive disk “thrashing” caused by Apache rapidly swapping from RAM to disk in an attempt to free-up more RAM to serve additional clients.

    The Solution

    The PHP, Apache and MySQL configurations on the server were changed from the defaults and the APC (Alternative PHP Cache) caching module was installed. In order to make best use of the APC caching module, the W3 Total Cache WordPress plug-in (version 0.9.1.3) was installed on the site. A brief guide to installing APC and W3-Total-Cache and getting them to work together can be found in the next section.

    So, why did we decide to use the APC caching PHP module, or any other method of server-side caching for that matter? The short answer is: Efficiency.

    APC allows us to cache dynamically generated content. This cached content can then be sent to the client when a request for it is received, instead of wasting more server resources to regenerate it when nothing has changed. This considerably reduces the load on the server.

    We also made use of Amazon’s Cloudfront CDN (Content Delivery Network) and S3 services to store and serve static content (theme files and images, for example) to further lighten the load on the server. Our main reasons for choosing Amazon’s CDN solution were the pay-as-you-go pricing structure and the low storage/data transfer costs. A table detailing the costs can be found here.

    The W3 Total Cache plug-in allows you to configure the site to make use of Amazon’s S3 and Cloudfront services as a CDN from the WordPress dashboard. It takes care of uploading the theme files and other static content, and also handles URL rewriting for uploaded files automatically. Overall, we were very impressed by how intuitive the whole setup process was. One online guide we found useful when setting up the CDN can be found on the Freedom Target site.

    Getting APC and W3-Total-Cache Up and Running

    If you are using Ubuntu Server, installing the APC Caching module on your server is as simple as running the command below:

    sudo apt-get install php-apc

    You will then need to restart Apache when the installation process has finished. Ubuntu/Debian users can do this by issuing the following command:

    sudo /etc/init.d/apache2 restart

    The installation and configuration of the W3-Total-Cache plug-in is a little more involved.

    Before you install the plug-in, you will need to make sure that you have the following Apache server modules installed and enabled:

    • expires
    • mime
    • deflate
    • headers
    • env
    • setenvif
    • rewrite
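    On Debian/Ubuntu the modules listed above can be enabled in one go (a sketch; assumes the stock Apache 2 layout with `a2enmod` available):

    sudo a2enmod expires mime deflate headers env setenvif rewrite
    sudo /etc/init.d/apache2 restart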

    It’s best to obtain the latest stable version from the WordPress plug-in SVN repository and upload the files to your server manually, rather than using the installer integrated into WordPress.

    The plug-in comes with quite comprehensive documentation in the form of a readme file. Other setup guides can also be found quite easily on the Web. One installation guide we found useful can be found here.

    When you have everything installed and the W3-Total-Cache plug-in has been activated, you will have to configure it to use the APC Caching module on the server. To do this, select the General Settings option from the Performance menu in the WordPress Dashboard and, from the dropdown list next to each option (Page Cache, Minify, Database Cache and Object Cache) select the ‘Opcode: Alternative PHP Cache (APC)’ option. Make sure that the Enable checkbox is checked for each option, and then click the Save Changes button next to each option.

    Server Configuration Changes

    The changes made to the configurations for each component of the LAMP stack are outlined below:

    Apache

    The following changes were made to the ‘Prefork MPM’, ‘Worker MPM’ and ‘Event MPM’ sections of the apache.conf configuration file:

    • The Timeout option was set to 150 seconds.
    • The KeepAliveTimeout option was set to 3 seconds to minimise the amount of time each apache process sits idle waiting for the client to send a KeepAlive request.
    • The MaxClients option was set to 250 to allow for more concurrent users.
    • The MaxRequestsPerChild option was set to 400, both to minimise the consumption of system resources by an individual server process and to allow resources (especially RAM) to be freed up more quickly.
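    In configuration-file terms, the list above corresponds to something like the following (a sketch; exact section placement depends on which MPM your Apache build uses):

    ```apacheconf
    Timeout 150
    KeepAliveTimeout 3

    <IfModule mpm_prefork_module>
        MaxClients           250
        MaxRequestsPerChild  400
    </IfModule>
    ```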

    MySQL

    The relevant lines for the MySQL configuration file can be found below:

    [mysqld]

    key_buffer = 16M

    max_allowed_packet = 16M

    thread_stack = 192K

    thread_cache_size = 8

    myisam-recover = BACKUP

    query_cache_limit = 1M

    query_cache_size = 16M

    [isamchk]

    key_buffer = 16M

    PHP

    To lessen the consumption of RAM by PHP scripts when under heavy load, the memory_limit option in php.ini was changed to 64MB.

    Testing Method

    The load testing service Load Impact was used to perform the load testing on the server.

    For each test, we used a simulated load of 250-1000 simultaneous clients, with each ‘client’ spending an average of 20 seconds viewing a page. We started the test with an initial load of 250 clients and then ramped up the number of clients by 250 each time, up to the limit of 1000 simultaneous clients.
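    Load Impact is a hosted service; as a rough self-hosted analogue (our assumption, not the tool used in the article, and example.com stands in for the site under test), ApacheBench can step up concurrency in a similar way:

    ```shell
    # 250 -> 1000 concurrent clients against the front page, 10 requests each
    for c in 250 500 750 1000; do
      ab -n $((c * 10)) -c "$c" http://example.com/
    done
    ```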

    Test Results

    The User Load time results using the amended Apache, MySQL and PHP configurations without using APC caching or a CDN are shown below.

    User Load Time (No APC caching or CDN enabled)

    Although the server did not become completely unresponsive, the load time increases considerably after 250 clients, with load times exceeding 10 seconds after approximately 350 clients. The bandwidth usage results for this test can be found in the graph below:

    Bandwidth Usage (No APC caching or CDN enabled)

    The maximum amount of bandwidth used in this test (approximately 33 Mbps) was considerably less than the 100Mbps the server was capable of transferring. Taking both the user load time and bandwidth usage results into account, it was apparent that the server was not yet performing as efficiently as it should be.

    With the APC caching module used in conjunction with the W3-Total-Cache plug-in on the site, the reduction in load times was considerable, with user load times at 1000 clients being approximately 25 times faster, as the graph below shows:

    User Load Time (Using W3-Total-Cache plug-in with APC caching)

    The bandwidth usage results for this test can be found in the graph below:

    Bandwidth Usage (Using W3-Total-Cache plug-in with APC caching)

    Although there is a considerable improvement in bandwidth usage up to 750 clients, the bandwidth usage drops to around the same level (33Mbps) at 1000 clients as was seen during the first test. This is possibly a function of the VPS having to share its network interface with other websites, and may even be due to a certain amount of bandwidth throttling at the host’s end.

    Switching to Content Delivery Networks

    When static content was served from the Amazon CDN and APC caching was enabled from within the W3-Total-Cache plug-in, we found that performance could be further improved:

    User Load Time (W3-Total-Cache Plugin and Amazon S3 CDN Used)

    Although the improvement is not as dramatic as in the previous test, the increase in load times as the number of concurrent clients grows is much smoother than with APC caching alone. The bandwidth usage graph for this test can be found below. The data shown is the combined bandwidth usage of both the server and the CDN:

    Bandwidth Usage (W3-Total-Cache Plugin and Amazon S3 CDN Used)

    Here, we found that the bandwidth usage increased far more smoothly as the test progressed than in the test with APC caching and W3-Total-Cache only. This is to be expected, as the server no longer had to deal with serving large static files, so fewer system resources were required to serve the same number of clients.

    Conclusion

    It’s easy to see that using server-side caching and careful server configuration gives excellent results. What using a content delivery network means is that the delivery of content will grow more consistently. One problem with many servers, and one which is rarely acknowledged, is the performance available from the network interface. Most won’t serve more than 100Mb/s in theory, and about 70Mb/s in practice. What can’t be seen in the charts is the momentary output peaks of over 130Mb/s that we saw using the content delivery network. The charts just show the averages. As a consequence it’s hard to show the improvement gained from using a CDN at the 1000-user level.

    What we’d like to do, in the future, is to test the server up to 5,000 concurrent users. This is serious traffic, and also costs quite a bit of money to test. At the moment we know that the Sniff Petrol site can handle around 130,000+ page views per hour. But it may be able to handle a lot more. We’d love to see how far it can be pushed. Would it be possible to have the capacity to serve up to a million pages in an hour without having to commission a massive server? Keep coming back as we’ll be carrying out this test in the future.

    As most of our clients use their own large scale hosting (we work with newspapers and publishers a lot!) we’ve generally let them worry about hosting requirements. They usually do pretty well and have some impressive hardware. But recently we’ve started offering a managed WordPress hosting service to our clients, and had to start learning about WordPress scaling ourselves. We love efficiency, and the idea of simply buying bigger boxes as a solution to performance problems appalls us. Modern computers are incredibly powerful – they can do a lot, for very little money.

     
    • Gavin Pearce 11:00 am on August 24, 2012 Permalink | Reply

      You’re a star. Great article thanks David.

    • simonthepiman 11:37 am on August 24, 2012 Permalink | Reply

      With full pages cached and static content offloaded to another server (a CDN), I’d expect better performance than this.

      Or maybe it’s just because you’re using Apache. I would like to see a comparison of performance using Apache, Nginx, or a combination of both.

      Really makes me want to knock up a site and do some load testing… [insert loadImpact affiliate link here]

      PS: joking about the affiliate link.

      • davidcoveney 10:28 am on August 28, 2012 Permalink | Reply

        It’s hard to know for sure – it could well be Apache hitting limits, or server config, or even the processing performance of the VPS. There are so many variables it’s scary.

        • simonthepiman 10:38 am on August 28, 2012 Permalink

          You’re right about there being so many variables. You can easily make a full time job out of testing and improving performance. I enjoy it, it’s pretty addictive.

  • gavinpearce379 10:35 am on August 24, 2012 Permalink
    Tags: servers setup   

    Current setups 

    Interested to hear about current set-ups out there for scaling WordPress.

    Architecture, traffic/data figures, server specs/software, caching, plugins / custom development, pricing all welcomed!

    From one of our enterprise set-ups (all in the Rackspace Cloud):

    • 2x Red Hat Linux (web) servers
    • 1x Red Hat Linux (db) server
    • 1x Load Balancer

    Web Server Spec
    1,024MB RAM
    40GB Disk

    DB Server Spec
    2,048MB RAM
    80GB Disk

    Total monthly cost approx: £160
    [You could lower this cost further by losing the Red Hat license fees]

    We wrote a custom plugin to allow the media library to upload direct into Rackspace Cloud Files, improving loading times through the CDN and solving the problem of sharing files between servers. We also serve as many of the static theme files as possible from the Rackspace CDN.

    Shameful Rackspace partner link: http://www.rackspace.co.uk?id=3637&cmp=partner_3seven9

    Rackspace link (without our partner tracking): http://www.rackspace.co.uk/

     
    • benmay 8:00 am on September 1, 2012 Permalink | Reply

      What are your traffic stats for that setup? Quite a beefy solution, would expect it to be able to handle quite a bit!

      • rickalee2k 2:58 pm on June 4, 2013 Permalink | Reply

        Incredibly interested in “We wrote a custom plugin to allow the media library to upload direct into Rackspace Cloud Files”. You skipped local /wp-content/ completely? Chances you could release this to public?

    • shawn 8:19 pm on August 3, 2013 Permalink | Reply

      Agreed, the plugin sounds incredibly useful. Do you plan on releasing it?

  • David Coveney 10:18 am on August 24, 2012 Permalink
    Tags: cheap   

    Howdy all – we did some research on scaling up on a budget, and the results are here: http://interconnectit.com/1254/make-wordpress-scale-on-a-budget/

    Using the same approach we’ve been able to set up a site that tested up to 5,000 concurrent visitors but, once we undid some things that caused stability problems we pulled back to about 3,500. Currently the site has a fail-over server but we could put a load balancer in front and pretty much double the capacity very quickly. That sort of load means a client with a million page views a month ticks over on a cheap 8GB VPS with a typical load average of 0.6 on a four processor machine.

     
    • Gavin Pearce 10:44 am on August 24, 2012 Permalink | Reply

      Nice article, thanks David. Would you be happy to cross post the article in its entirety over here? Hoping to collate all of the information into one place.

    • davidcoveney 10:45 am on August 24, 2012 Permalink | Reply

      Sure – would you like here in the stream?

      • Gavin Pearce 10:47 am on August 24, 2012 Permalink | Reply

        I think it deserves its own blog post – don’t you? Your account should let you log in to /wp-admin I hope! Shout if you have any problems.

    • Tom Barrett (@TCBarrett) 10:18 am on August 28, 2012 Permalink | Reply

      We ran a site that had very few logged in users (basically just serving up pages). It served 10k+ visitors 100k+ page impressions a day using Nginx fcgi cache on a Linode 512 ($20/month). That’s without any other caching.

      • davidcoveney 10:26 am on August 28, 2012 Permalink | Reply

        Tom, one thing to be aware of with local caching is that without a cdn you can quickly saturate your network connection. If each page with images etc is 1MB (not so unusual these days!) then you can only serve about 6 of those concurrently, per second, on a 100Mb/s connection. That’s not a lot of users for a busy site, although it’s still high traffic.

        One of the key things with getting scaling right isn’t the overall traffic, but the handling of peaks. 5,000 concurrent visitors would be equivalent to 21 million+ per day if they were neatly spread out, but sadly they never are! However, a tweet from somebody very famous can easily send a lot of traffic. I’ve worked out that the traffic level is approximately equal to one visitor per 50 active followers (the latter being tricky to work out – e.g. light-entertainment celebrities have far fewer active followers than niche players), so somebody with 25,000 genuine followers will create a traffic surge of 500 concurrent visitors.

  • simonthepiman 10:40 am on August 23, 2012 Permalink
    Tags: Multitenancy, shared codebase   

    Jason McCreary gives an overview of what “WordPress Multitenancy” is and the steps he took to achieve it, along with his current working solution.

    http://jason.pureconcepts.net/2012/08/wordpress-multitenancy/

    This is relevant to servers hosting a number of WordPress installs. Having a shared core codebase (not necessarily using multisite) means it’s easier to detect changes in the codebase by having it version controlled. Also less shared memory will be needed by the APC opcode cache, giving you more memory for object caching etc.
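    A minimal sketch of such a shared-core layout (paths are hypothetical; a temp directory stands in for the web root):

    ```shell
    BASE=$(mktemp -d)                      # stand-in for /var/www
    mkdir -p "$BASE/core/wordpress" "$BASE/sites/a.example" "$BASE/sites/b.example"
    ln -s "$BASE/core/wordpress" "$BASE/sites/a.example/wp"
    ln -s "$BASE/core/wordpress" "$BASE/sites/b.example/wp"
    # Both sites now resolve to one copy of core, so APC caches those
    # PHP files once rather than once per site.
    ```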

     
    • blobaugh 11:01 pm on August 23, 2012 Permalink | Reply

      Note that this brings its own set of headaches, and may ultimately not be helpful for scaling out large networks of sites.

    • simonthepiman 11:37 pm on August 23, 2012 Permalink | Reply

      If you have knowledge of the headaches then maybe summarise them here or contribute to that blog post; or, if you know of other resources with further information on the issues someone might face when sharing the core codebase, link to them here in the comments.

      That’s what it’s all about, sharing ideas, experiences, data.

    • blobaugh 11:39 pm on August 23, 2012 Permalink | Reply

      Sure, all this was shared on the mailing list. That data should probably be ported over here, and anyone new posting there also pointed to this site.

    • simonthepiman 10:17 am on August 24, 2012 Permalink | Reply

      I’ve just looked at the wp-hackers list archive and I’m not sure of the best way to get that information over here in a digestible way, so I’ll just link to the archive:

      [wp-hackers] Running several WordPress sites on the same server
      http://lists.automattic.com/pipermail/wp-hackers/2012-August/044183.html

      • Jason McCreary 7:18 pm on August 25, 2012 Permalink | Reply

        First, thanks for mentioning my post here. WordPress Multitenancy is something I’ve been interested in for a while. This solution is a first attempt. As such, I welcome feedback. I’ve reviewed the mailing list link provided. However, I’d appreciate elaboration on these “headaches”.

        • simonthepiman 7:57 pm on August 25, 2012 Permalink

          I too looked through the mailing list and couldn’t really see the headaches?

    • Simon 11:39 pm on August 26, 2012 Permalink | Reply

      I can confirm that having a shared core codebase will reduce the RAM needed for hosting multiple sites when utilising the APC opcode cache.

      http://lists.automattic.com/pipermail/wp-hackers/2012-August/044230.html

      • Luke 6:45 pm on February 24, 2013 Permalink | Reply

        Currently looking into this myself for a platform of 30-odd sites. It has definitely reduced the RAM, and so far, by sharing everything except theme files and uploads, we’ve reduced the deploy size by about 98%. Has anybody else tried this in production?

    • Simon 8:40 pm on February 24, 2013 Permalink | Reply

      I’ve got a symlinked install in production. It works really well, I’m moving everything to symlinked as soon as I have time.

      Added advantage is you can have a WordPress folder for each wp version and easily switch version by symlinking to the chosen wp directory or uploading the latest version of WP to wp-latest and upgrading all sites in one swoop.
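      The version switch Simon describes can be sketched like this (paths hypothetical; `ln -sfn` replaces the existing link rather than descending into it):

      ```shell
      BASE=$(mktemp -d)                    # stand-in for the web root
      mkdir "$BASE/wp-3.4.2" "$BASE/wp-3.5"
      ln -s "$BASE/wp-3.4.2" "$BASE/wp"    # current core, pointed at by every site
      ln -sfn "$BASE/wp-3.5" "$BASE/wp"    # repoint: upgrade all sites in one swoop
      readlink "$BASE/wp"
      ```

      Because only the symlink changes, rolling back to the previous version is the same one-line operation in reverse.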

      • Luke 2:37 pm on February 26, 2013 Permalink | Reply

        Did you ever have an issue with the server response being slower? We’ve got the WordPress core symlinked into each of our sites, but with this new multitenancy setup it’s spending almost twice the time in PHP.

        • Simon 10:44 am on February 28, 2013 Permalink

          No issues like that at all.

          The main issue is down to not having a run-of-the-mill file structure. Many plugins just assume the wp-content folder and the WordPress core files are found under the same old names in the same location, but on my installs the WordPress core files are in a folder called /wp/ while the wp-content folder is at /wp-content/; plugins that assume everything is in the WordPress folder look for a non-existent wp-content folder at /wp/wp-content/.

          It’s not usually too much of a problem though, just an image missing here and there, but I could see how a plugin could try to cache expensive-to-generate information in an invalid location and cause page generation times to double.
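          For the path assumptions Simon mentions, WordPress does let wp-config.php declare the real wp-content location, which well-behaved plugins respect (the paths here are hypothetical):

          ```php
          // In wp-config.php: tell WordPress where wp-content actually lives,
          // so code that derives paths correctly doesn't look inside /wp/.
          define( 'WP_CONTENT_DIR', '/var/www/example.com/wp-content' );
          define( 'WP_CONTENT_URL', 'https://example.com/wp-content' );
          ```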

    • Eduardo 8:17 pm on March 13, 2013 Permalink | Reply

      Awesome. We’re rolling this into production for 3k websites in a couple of weeks.

    • Scott Hack 9:40 pm on April 11, 2013 Permalink | Reply

      Jason did an updated blog post on this subject that you’ll probably want to check out Eduardo.

      • Eduardo 3:11 am on July 4, 2013 Permalink | Reply

        Thanks Scott. I should mention that we were unable to get multitenancy to work yet. I’ll see if Jason’s update can help us.

  • gavinpearce379 9:55 am on August 23, 2012 Permalink  

    Welcome to Automattic/WordPress.com 

    Welcome to the guys and girls from Automattic/WordPress.com who joined us late yesterday.

    Hopefully their experience and input in scaling WP will prove invaluable as this group grows.

    Hi!

     
  • gavinpearce379 11:39 am on August 22, 2012 Permalink
    Tags: video, wordcamp   

    Scaling WordPress: A FeedMyMedia.com case study 


     
  • Dwain Maralack 10:02 am on August 22, 2012 Permalink  

    Proposal: Resource Page 

    I suggest adding a resources page to this blog.

    From there we can collaboratively add resources to the comments which can be incorporated into the page by the administrator.

     

     
  • Ben Lobaugh (blobaugh) 4:47 pm on August 21, 2012 Permalink  

    Considerations 

    Let’s list out some things to consider when looking to scale, and some possible ways to overcome them.

    To start off the conversation here are a couple items from the mailing list discussion:

    • Uploaded media files (WP stores files in a local install dir that is not easily changeable)
    • Updates to plugins and core
    • Adding additional databases
    • Load balancers
    • CDNs
    • Caching solutions
     
    • Kevinjohn Gallagher 7:16 pm on August 21, 2012 Permalink | Reply

      Accessibility
      Edge Cases
      1 theme
      Release / Publication process
      Use of modern server-side techniques (CSS3, Less, JS etc)

    • Eric Marden 7:22 pm on August 21, 2012 Permalink | Reply

      Don’t forget about front-end performance, minimizing latency, http requests, parallel downloads, e-tags and http headers to influence browser cache, et al

    • John Blackbourn 7:54 pm on August 21, 2012 Permalink | Reply

      Another consideration should be scaling vertically for as long as possible until you need to scale horizontally.
