Caching only works because human behavior is not random. When a part of your web app is visited, it is likely to be visited again soon. For example, a user who logged in today is more likely to come back tomorrow than a user who hasn't logged in for months.
The hit rate, i.e., the fraction of requests that successfully return a cached value, measures how effective your cache is. It is one of your cache's main performance metrics and should be monitored regularly.
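As a concrete illustration, the hit rate can be computed directly from the `get_hits` and `get_misses` counters that Memcached reports via its `stats` command. The counter values below are made up:

```python
def hit_rate(get_hits: int, get_misses: int) -> float:
    """Fraction of GET requests served from the cache."""
    total = get_hits + get_misses
    return get_hits / total if total else 0.0

# Counters as reported by Memcached's `stats` command (illustrative values)
print(f"{hit_rate(get_hits=8_200, get_misses=1_800):.0%}")  # → 82%
```

Dividing by zero is guarded against so a freshly started, never-queried cache reports a hit rate of 0 rather than crashing your monitoring.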
So what is a reasonable hit rate? Why might it suddenly change? And what can you do to improve it? These are questions most developers face at some point when using Memcached.
Understanding hit rate
To answer those questions, let’s first unpack hit rate.
The hit rate is quite an opaque metric because it depends on many factors such as cache size, access patterns, and caching strategy. Exploring these factors can help you understand your cache’s hit rate.
Cache sizing is a complex problem. We wrote a whole post about it.
Too large a cache wastes resources. Too small a cache results in inefficient caching and a subpar hit rate.
Access patterns depend on human behavior and are often Pareto distributed, resulting in the famous 80-20 rule. That is, in general, 20% of what you cache will be accessed 80% of the time. Knowing that 20% is invaluable for monitoring cache performance.
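To see this effect in miniature, here is a toy simulation (key names and the popularity law are hypothetical) that draws cache accesses from a Zipf-like distribution using only the standard library:

```python
import random
from collections import Counter

random.seed(42)  # deterministic for illustration

keys = [f"item_{i}" for i in range(100)]
# Zipf-like popularity: the weight of a key falls off as 1/rank
weights = [1 / rank for rank in range(1, len(keys) + 1)]

accesses = Counter(random.choices(keys, weights=weights, k=100_000))

# Share of traffic going to the 20 most requested keys (20% of the keyspace)
top_share = sum(n for _, n in accesses.most_common(20)) / 100_000
print(f"Top 20% of keys received {top_share:.0%} of all accesses")
```

With a pure 1/rank law the top 20% of keys capture roughly 70% of the traffic here; real workloads are often even more skewed, which is where the 80-20 rule of thumb comes from.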
Finally, your caching strategy is how you implement caching in your app. The main problem here is that it usually involves not a single caching strategy, but many. That’s because you cache many different things within the same app.
The problem with hit rate
Access patterns and caching strategy tell us that knowing the performance of individual aspects of your cache is incredibly useful. The hit rate, however, is global and only shows the combined performance of all strategies. This is where the CacheSight dashboard for your Memcached can help.
Track the performance of individual aspects of your cache
Say template caching is an important part of your caching strategy. If your template keys begin with template_, you can watch the template_ prefix to monitor your template caching performance. CacheSight will deliver analytics for that prefix, telling you what fraction of your cache is occupied by it and, more importantly, what hit rate it has.
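If you want a rough, in-app view of the same idea, per-prefix hit rates can be tracked by wrapping your client's get(). The PrefixStats class below is a hypothetical sketch, not CacheSight's implementation; it assumes a Memcached-style client whose get() returns None on a miss:

```python
from collections import defaultdict

class PrefixStats:
    """Wrap any client with a Memcached-style get() to count per-prefix hits and misses."""

    def __init__(self, client, prefixes):
        self.client = client
        self.prefixes = prefixes
        self.hits = defaultdict(int)
        self.misses = defaultdict(int)

    def get(self, key):
        value = self.client.get(key)
        for prefix in self.prefixes:
            if key.startswith(prefix):
                counter = self.hits if value is not None else self.misses
                counter[prefix] += 1
        return value

    def hit_rate(self, prefix):
        total = self.hits[prefix] + self.misses[prefix]
        return self.hits[prefix] / total if total else 0.0

# Usage with a stand-in client backed by a dict (hypothetical keys and data)
class DictClient:
    def __init__(self, data):
        self.data = data
    def get(self, key):
        return self.data.get(key)

cache = PrefixStats(DictClient({"template_home": "<html>...</html>"}),
                    prefixes=["template_", "session_"])
cache.get("template_home")   # hit
cache.get("template_about")  # miss
print(f"template_ hit rate: {cache.hit_rate('template_'):.0%}")  # → 50%
```

A wrapper like this adds a little bookkeeping per request; a monitoring tool that observes the cache from outside avoids that cost in your application code.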
In essence, CacheSight takes the guesswork out of your caching strategy.
No matter how many things you cache (pages, views, templates, sessions, expensive database queries, expensive computations, slow queries to external services, and so on), CacheSight can tell you the individual performance of each aspect of your caching implementation.
With the importance of prefix hit rates in mind, let's circle back to our opening questions.
What is a reasonable hit rate?
We recommend you aim for a hit rate of 80% or above for your overall cache performance, since real-world events are generally Pareto distributed. This is just a rough estimate because, depending on what you cache, a wide range of hit rates might be acceptable.
CacheSight allows you to determine a reasonable hit rate for each caching strategy. A hit rate below 60% for your cached views might be unacceptable, while a session counter with a hit rate of 20% might be normal.
Why did my hit rate suddenly change?
CacheSight makes it easy to pinpoint the culprit whenever the overall hit rate changes. Watch prefixes for the most requested parts of your cache; when the overall hit rate shifts, check the hit rate for each prefix to see exactly where the change occurred.
What can I do to improve my hit rate?
Similarly, prefixes help you identify poorly performing aspects of your cache that drag down its overall hit rate. That lets you focus your efforts on the caching strategies where improvements will yield the greatest results.
CacheSight works with any Memcached on AWS, including ElastiCache for Memcached. Get started for FREE today and cache with confidence, knowing your Memcached strategy is working.