When it comes to application-level caching, only two options seem to exist: Memcached and Redis. I've been using Memcached for years but wanted to re-check my choice just before adding a caching layer to another project.
Memcached
Memcached is the dinosaur of caching, much like Perl is for scripting. It was created for LiveJournal (itself a dinosaur of social networking), and its users include Wikipedia, Flickr, WordPress and lots of other well-known projects. The server is extremely fast and stable, and the API is very easy to learn and use: set, get, add and delete are sufficient for nearly everything.
The most important thing to remember is that Memcached is not a database: it's a cache, and you'll run into trouble trying to use it as reliable data storage. Memcached automatically deletes old data to make room for new entries, always staying within the configured memory limit - just what you expect from a cache.
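To show how small that API surface really is, here's a minimal Python sketch, assuming the pymemcache client and a Memcached server on the default port; the key names and values are made up for the example.

```python
# Minimal sketch, assuming the pymemcache client and a Memcached server
# on localhost:11211; key names and values are made up for the example.
from pymemcache.client.base import Client

cache = Client(("localhost", 11211))

cache.set("greeting", "hello", expire=60)  # store for 60 seconds
value = cache.get("greeting")              # b"hello" (bytes), or None after expiry/eviction
cache.add("greeting", "ignored")           # add succeeds only if the key doesn't exist yet
cache.delete("greeting")                   # remove the key explicitly
```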
Redis
Creating new technology always creates experience: this and that should have been done differently, but it's too late to change. Redis is the result of those lessons from Memcached. Invented six years after Memcached, Redis supports 200 different commands for lists, pub/sub message handling, transactions and even server-side scripting. It's an in-memory database optimized for working with the stored data.
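A rough sketch of what two of those features (lists and pub/sub) look like from client code, assuming the redis-py library and a local server; the queue name, channel and payloads are invented for illustration.

```python
# Rough sketch of Redis lists and pub/sub, assuming the redis-py client and a
# server on localhost:6379; the queue name, channel and payloads are invented.
import redis

r = redis.Redis(host="localhost", port=6379)

# Lists as a simple queue: push jobs on one end, pop them on the other.
r.lpush("jobs", "resize:image-42")
job = r.rpop("jobs")                       # b"resize:image-42"

# Pub/sub: one connection subscribes, any other connection publishes.
pubsub = r.pubsub()
pubsub.subscribe("events")
r.publish("events", "cache warmed")
message = pubsub.get_message(timeout=1)    # first message is the subscribe confirmation
```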
Memory management
Both solutions offer a "max memory" configuration option to limit the amount of memory used by the server.
Memcached always does LRU: the least recently used (LRU) item is deleted first to free space for new items. "Used" means read or write: a value added to the cache once but read every second is still recently used, even if it might be very old.
Redis offers no fewer than eight different eviction policies: noeviction simply returns an error once Redis runs out of memory. Memcached actually offers the same behaviour via a command line switch, but you'll nearly always run into trouble in your application when using it, as your cache stops caching at once.
Both servers offer keys with and without a timeout: they're deleted once their time is up. Memcached prefers keys without a timeout during eviction, while Redis lets you apply eviction to all keys or only to keys with a timeout. The remaining options combine this with LRU, purely random and least frequently used (LFU) strategies.
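For completeness, both the limit and the policy can be changed at runtime; a small sketch assuming redis-py, where the 100 MB cap and the volatile-lru choice are only example values.

```python
# Sketch of runtime configuration via CONFIG SET, assuming redis-py and a local
# server; the 100 MB cap and the volatile-lru policy are only example values.
import redis

r = redis.Redis()

r.config_set("maxmemory", "100mb")                # cap the memory Redis may use
r.config_set("maxmemory-policy", "volatile-lru")  # evict only keys with a timeout, by LRU
print(r.config_get("maxmemory-policy"))           # {'maxmemory-policy': 'volatile-lru'}

# Memcached's counterpart is the -m <megabytes> command line switch;
# -M makes it return errors instead of evicting.
```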
Use cases
I'm using Memcached for caching, locking, passing data and even load management. Lists and pub/sub would add tons of possibilities and might even let me drop Gearman from my tech stack.
A common case (besides pure best-effort caching) is load management: a refresh icon invites users to rapid-fire clicking to get their results faster. I've seen a single user occupy all available slots with duplicates of duplicates of the same calculation at once. Don't even think about a JavaScript-only solution: users will reload the page or preemptively open it in multiple tabs to get the chance to click refresh more often.
That's why expensive functions often get a time penalty in my projects: they should run only once at a time (more or less easy to do using locking), but they also shouldn't run within X seconds (or minutes) after the last run. The first refresh click actually runs the expensive function, while repeated clicks just return the cached result.
A simple timeout doesn't do the job: a value is very often good enough to be shown for a longer time, while the rapid-fire protection should use a shorter one. Imagine statistics for marketing campaigns: the list of campaigns should show reasonably fresh values, and 99% of page viewers are quite happy with a once-per-hour refresh, but once in a while someone needs to know what's going on right now. The cached data could have a one-hour timeout while the rapid-refresh protection shouldn't block for longer than five minutes.
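Here's a sketch of that double-timeout pattern with Memcached, again assuming pymemcache; the key names, the compute_stats() placeholder, the one-hour result timeout and the five-minute guard are just the example values from above.

```python
# Sketch of the double-timeout refresh protection, assuming pymemcache.
# Key names, TTLs and the compute_stats() placeholder are illustrative only.
from pymemcache.client.base import Client

cache = Client(("localhost", 11211))

RESULT_TTL = 3600  # cached statistics stay usable for one hour
GUARD_TTL = 300    # refresh clicks within five minutes get the cached value

def campaign_stats(campaign_id, compute_stats):
    result_key = f"stats:{campaign_id}"
    guard_key = f"stats-guard:{campaign_id}"

    cached = cache.get(result_key)
    # add() succeeds only while the guard key is absent, so it doubles as a
    # cheap lock: the first click in the window recomputes, the rest reuse the cache.
    refresh_allowed = cache.add(guard_key, "1", expire=GUARD_TTL)

    if cached is not None and not refresh_allowed:
        return cached

    fresh = compute_stats(campaign_id)  # the expensive part (placeholder, returns a string)
    cache.set(result_key, fresh, expire=RESULT_TTL)
    return fresh
```

Because add is atomic on the server, only one of several near-simultaneous clicks passes the guard once a cached value exists; full "run only once at a time" locking would need a separate lock key, which I've left out to keep the sketch short.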
That's why the volatile-lru policy, which deletes only keys with a timeout based on LRU, doesn't work for me: I want keys with a timeout to survive if possible instead of losing them first. The allkeys policies don't work either: a Redis list used for queuing important jobs should never, ever get lost because of an eviction event.
TL;DR
There's no "best" between Redis and Memcached. I'll continue using Memcached for caching, because that's where it still outperforms Redis, and I'll start to use Redis as a memory-based database. For me, it's not a caching solution at all, precisely because of its additional features.