We have been some busy bees here lately at thirty bees. Today we are adding three new modules to our growing list of core modules. By now most users know that we focus a great deal on performance at thirty bees, and the new modules are no exception: they are managers for the server-side caching systems most popular among thirty bees users. The three modules are Memcache Manager, Opcache Manager, and APCu Cache Manager.
The new modules are not for increasing performance in your shop per se; they help you manage the caching system you already use. They can tell you whether you need to increase the memory dedicated to your caching system, which will speed your shop up if the cache is constantly flushing for lack of space. They can also show you whether your caches are working efficiently or restarting too often. One of my favorite uses for the modules is the ability to clear the opcode cache easily, without having to restart PHP or other services.
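Under the hood, clearing the opcode cache without a service restart comes down to a single PHP call, `opcache_reset()`. Here is a minimal sketch of the idea; the file name is just an illustration, not part of the module:

```php
<?php
// reset-opcache.php -- drop this in the web root, request it once in a
// browser, then delete it. It has to run through the web server rather
// than the CLI, because PHP-FPM/mod_php and the CLI each keep their own
// separate opcode cache.
if (function_exists('opcache_reset')) {
    opcache_reset(); // invalidates every script cached by this PHP pool
    echo 'Opcache cleared';
} else {
    echo 'Opcache extension is not loaded';
}
```

The modules give you a button for this, so you never have to touch a script like the one above by hand.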
Opcache Manager

Opcache is a cache that every thirty bees site should be using. It has been bundled with PHP since version 5.5, and its speed improves with every release. Opcache speeds your site up by storing the core PHP files, module files, and even the compiled Smarty cache files in memory in a pre-compiled state called opcode. The benefits are twofold: you completely cut out a disk read every time a file is executed, and the file has already been compiled from PHP to opcode. This is what makes Opcache work so well; disk reads are slow and cost a lot of time, and Opcache cuts them to almost nothing, since all the PHP files are served from a memory cache. On most default PHP installations, however, this cache is very small: just 32 MB. We generally recommend raising that to either 64 MB or 128 MB. You can use our Opcache Manager to figure out exactly what size your cache needs to be set at.
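If you want to make that change yourself, the cache size is set in php.ini (or a conf.d override). The directive names below are standard Opcache settings; the values are only the starting points suggested above, not tuned numbers:

```ini
; php.ini -- raise the opcode cache from the small default
opcache.enable=1
opcache.memory_consumption=128     ; size in MB; try 64 first on smaller shops
opcache.max_accelerated_files=10000 ; raise if your shop runs many modules
```

Restart PHP-FPM (or your web server, for mod_php) for the new values to take effect, then watch the module's graphs to see how the cache fills.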
Below is an image of the main interface of the module.
You can see we have Opcache on our test server set to 64 MB. If this were a production server, I would raise it to 128 MB and watch how it fills. What you have to take into consideration is that it is not only the core that is stored in Opcache memory; whatever modules you are using are stored there too, as are your Smarty cache files. (This is one huge reason we are really reluctant to move away from Smarty: Twig and other template languages do not create and store opcode, and are slower.) Below you can see a screenshot of some of the different cached files; notice the Smarty files that are cached.
Out of all the different caching systems, Opcache is the only one that is meant to be run alongside one of the others. The other two caches store user data, while Opcache stores compiled code, so it can be used with either APCu or Memcache.
APCu Cache Manager
APCu is a branch of the APC opcode cache project. When PHP 5.5 was announced, APC was expected to become the opcode cache bundled with it, but that changed over time and Zend Optimizer+ (now Zend OPcache) was included instead. One thing APC did that Zend's cache did not, however, was cache user data alongside the opcode; the official PHP Opcache extension does not do this either. So the developers of APC stopped working on APC and split the user-data side off into a separate extension called APCu. It is meant to work together with Opcache, but is only the user-entry portion of the old APC module. Using APCu in conjunction with Opcache is the fastest cache setup we recommend for thirty bees.
With APCu, the cache does not need to be as large as the Opcache. Since it is only caching user entries, which are very small, less than 1 KB each, you can get by with a 32 MB cache size; that should be more than enough. Flushing generally does not matter with APCu either, since once a user leaves your site it does not matter if their cache entries are flushed. Below is a look at the cache interface for APCu.
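If you are setting this up yourself, the APCu size also lives in php.ini. A minimal sketch using the standard APCu directives, with the 32 MB figure from above:

```ini
; php.ini -- APCu user cache
apc.enabled=1
apc.shm_size=32M   ; plenty for small user entries; check the module's graphs
```

As with Opcache, restart PHP-FPM after changing these values.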
Because this cache stores user information, it generally has a higher miss rate: there is no way to warm the cache for a specific user other than that user actually making requests. Even so, it is still the quickest combination for serving a thirty bees site.
Memcache Manager

Memcache is an old but reliable caching method. It is admittedly slower than just about any other method here, but it is still faster than not caching at all. Personally, I have always thought Memcache is misused most of the time, and that is why the results are slower. Unlike APCu, Memcache adds an extra network round trip when retrieving information from the cache; compared to caching methods like APCu that serve directly from local memory without that round trip, it is a lot slower. So why have we included this slower cache? Because Memcache excels at something APCu cannot do: you can share one caching instance across multiple front-end servers. This is what Memcache was really designed for — several front-end servers, with one server set up as a caching instance that they all share. In a setup like that, Memcache outperforms APCu, but it is a setup not many users run.
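From the PHP side, that shared setup looks roughly like the sketch below. It assumes the php-memcached extension is installed; the address and the cached value are placeholders, not part of thirty bees itself:

```php
<?php
// Each front-end server points at the same dedicated cache box
// (10.0.0.5 is a placeholder address for the shared Memcached server).
$cache = new Memcached();
$cache->addServer('10.0.0.5', 11211);

// A value written by one front end is visible to all of them --
// something a per-server, in-process cache like APCu cannot offer.
$categoryTree = ['home' => ['clothing', 'accessories']]; // illustrative data
$cache->set('tb:category_tree', $categoryTree, 3600);    // cache for an hour
$tree = $cache->get('tb:category_tree');
```

This shared visibility is the whole point of the multi-server setup described above; on a single server, the extra round trip just makes it slower than APCu.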
Our Memcache Manager works just like the other two modules: it lets you view the usage of your Memcache server, and you can use it to tune how much memory you need to dedicate to Memcache. That amount is hard to estimate up front; I would run Memcache for a couple of days, check the memory usage, and start from there. Below is an image of the interface that the module uses.
Speed is a big focus at thirty bees, which is why we are trying to give our users the tools they need to build the fastest possible shops, while at the same time teaching them how to use those tools. If you would like to check these modules out, use the download links below. If you have any questions about the modules, feel free to post a comment below.