
What happens when Redis runs out of memory

Elena Kolevska

November 15, 2018

Transcript

  1. In computing, a cache is a hardware or software component that stores data so that future requests for that data can be served faster. The Architecture of Computer Hardware and Systems Software: An Information Technology Approach ~ Irv Englander
  2. • Sample 3 random volatile keys and evict the one with the shortest TTL. • If there are no volatile keys, return an error.
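
A minimal Python sketch of that original behavior, as described on the slide above (this models the description, not the actual Redis C code; the dict-based key store and the function name are illustrative assumptions):

```python
import random

# Sketch of the original maxmemory behavior: sample 3 random volatile keys
# (keys with a TTL) and evict the one with the shortest TTL; if there are no
# volatile keys at all, signal an out-of-memory error instead of evicting.
def evict_one(volatile_ttls: dict, samples: int = 3) -> str:
    if not volatile_ttls:
        raise MemoryError("OOM: no volatile keys to evict")
    candidates = random.sample(list(volatile_ttls), min(samples, len(volatile_ttls)))
    victim = min(candidates, key=lambda k: volatile_ttls[k])  # shortest TTL wins
    del volatile_ttls[victim]
    return victim

ttls = {"session:1": 30, "session:2": 5, "cache:a": 120}
print(evict_one(ttls))  # most likely evicts "session:2"
```
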
  3. New, Expired, volatile-ttl: 1. Why delete objects if there is no need to? 2. Why use more memory than needed? ~ Salvatore Sanfilippo, blog post
  4. Timeline, 2009-2018: March 29th, 2009 - first commit; May 2009 - maxmemory implemented; October 2010 - LRU implemented (v2.2 alpha 3).
  5. LRU = Least Recently Used. • If you’ve used an item from your cache recently, you’re more likely to use it again. • The longer you haven’t used an item, the smaller the chance you’ll need it again.
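
To make the LRU idea concrete, here is a tiny exact LRU cache in Python (illustrative only; Redis does not keep keys on a list like this, and the later slides show how it approximates LRU instead):

```python
from collections import OrderedDict

class LRUCache:
    """Exact LRU: evict the key that was touched the longest ago."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)          # touched: now most recently used
        return self.data[key]

    def put(self, key, value):
        self.data[key] = value
        self.data.move_to_end(key)
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)   # drop the least recently used key
```
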
  6. 22 bits for the LRU clock (later increased to 24). The LRU timer stores the timestamp with the epoch set to the time the server was started; that way we're able to store it in only 22 bits. The initial precision was 10 seconds, but it was later increased to 1 second, when the 2 extra bits were added.
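
A rough sketch of the packing idea from the slide above, using its numbers (10-second resolution for 22 bits, 1-second resolution for 24 bits); the real Redis clock handling differs in detail:

```python
import time

SERVER_START = time.time()   # the slide's epoch: the moment the server started

def lru_clock(bits: int = 24, resolution_s: int = 1) -> int:
    """Reduced-precision timestamp that fits in a small per-key field."""
    elapsed = int(time.time() - SERVER_START) // resolution_s
    return elapsed & ((1 << bits) - 1)   # wraps around when the field overflows

# 22 bits at 10 s resolution wrap after ~485 days; 24 bits at 1 s after ~194 days.
print(lru_clock(22, 10), lru_clock(24, 1))
```
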
  7. 16 "Since LRU is itself an approximation of what we

    want to achieve, how about approximating LRU itself" Salvatore Sanfilipo, Blog post
  8. Sample X random keys and evict the one with the highest idle time (X is set by maxmemory-samples).
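
A sketch of that sampling approximation (again modelling the description, not Redis internals; `last_access` is an assumed per-key record of the clock value at the last access):

```python
import random

def approx_lru_evict(last_access: dict, now: int, maxmemory_samples: int = 5) -> str:
    """Sample a few random keys and evict the one idle the longest."""
    sample = random.sample(list(last_access), min(maxmemory_samples, len(last_access)))
    victim = max(sample, key=lambda k: now - last_access[k])   # highest idle time
    del last_access[victim]
    return victim
```

Raising maxmemory-samples makes the approximation closer to true LRU at the cost of more CPU per eviction.
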
  9. Commit: 165346ca29972817b1245e689315edeba1fe369b [165346ca], Author: antirez <[email protected]>, Date: October 14, 2010 at 20:22:21 GMT+1
     # MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
     # is reached? You can select among five behaviors:
     #
     # volatile-lru -> remove the key with an expire set using an LRU algorithm
     # allkeys-lru -> remove any key accordingly to the LRU algorithm
     # volatile-random -> remove a random key with an expire set
     # allkeys->random -> remove a random key, any key
     # volatile-ttl -> remove the key with the nearest expire time (minor TTL)
     #
     # maxmemory-policy volatile-lru
  10. Timeline, 2009-2018: March 29th, 2009 - first commit; May 2009 - maxmemory implemented; October 2010 - LRU implemented (v2.2 alpha 3); November 2010 - noeviction policy added (v2.2 alpha 5); March 2014 - v3.0.
  11. "...if you look at this algorithm *across* its executions, you can see how we are trashing a lot of interesting data" ~ Salvatore Sanfilippo, blog post
  12. "First rule of Fight Club is: observe your algorithms with

    naked eyes" Salvatore Sanfilipo, Blog post
  13. LRU v2: use a pool of best candidates for eviction (maxmemory-samples).
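
A sketch of the pool idea (not Redis source; the real eviction pool lives in C and tracks more state, but the shape is similar, and 16 matches Redis's EVPOOL_SIZE):

```python
import random

POOL_SIZE = 16   # small, fixed-size pool of eviction candidates

def evict_with_pool(last_access: dict, now: int, pool: list, samples: int = 5) -> str:
    """Accumulate the best candidates across cycles, then evict the most idle one."""
    for key in random.sample(list(last_access), min(samples, len(last_access))):
        pool.append((now - last_access[key], key))   # (idle time, key)
    pool.sort(reverse=True)                          # most idle candidates first
    del pool[POOL_SIZE:]                             # keep only the best POOL_SIZE
    while pool:
        _, victim = pool.pop(0)
        if victim in last_access:                    # skip entries evicted earlier
            del last_access[victim]
            return victim
    raise MemoryError("no eviction candidate found")
```

Because good candidates found in one cycle survive into the next, the sampling no longer throws away ("trashes") the interesting data it has already seen.
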
  14. Timeline, 2009-2018: March 29th, 2009 - first commit; May 2009 - maxmemory implemented; October 2010 - LRU implemented (v2.2 alpha 3); November 2010 - noeviction policy added (v2.2 alpha 5); March 2014 - LRU eviction pool, maxmemory-samples = 5, LRU field changed to 24 bits (v2.8.8); default policy noeviction (v3.0).
  15. Timeline, 2009-2018: March 29th, 2009 - first commit; May 2009 - maxmemory implemented; October 2010 - LRU implemented (v2.2 alpha 3); November 2010 - noeviction policy added (v2.2 alpha 5); March 2014 - LRU eviction pool, maxmemory-samples = 5, LRU field changed to 24 bits (v2.8.8); default policy noeviction (v3.0); July 2016 - cross-database eviction.
  16. "...my curiosity for this subsystem of Redis was stimulated again

    at that point. I wanted to improve it" Salvatore Sanfilipo, Blog post
  17. "What we really want is to retain keys that have

    the maximum probability of being accessed in the future, that are the keys *most frequently accessed*, not the ones with the latest access!" Salvatore Sanfilipo, Blog post
  18. LFU = Least Frequently Used. • Track access (implement an access counter). • Track time (so you can know the frequency).
  19. 24 bits = 16 bits (last decrease time) + 8 bits (LOG C, a Morris logarithmic counter).
     +--------+------------+------------+------------+------------+------------+
     | factor | 100 hits   | 1000 hits  | 100K hits  | 1M hits    | 10M hits   |
     +--------+------------+------------+------------+------------+------------+
     | 0      | 104        | 255        | 255        | 255        | 255        |
     +--------+------------+------------+------------+------------+------------+
     | 1      | 18         | 49         | 255        | 255        | 255        |
     +--------+------------+------------+------------+------------+------------+
     | 10     | 10         | 18         | 142        | 255        | 255        |
     +--------+------------+------------+------------+------------+------------+
     | 100    | 8          | 11         | 49         | 143        | 255        |
     +--------+------------+------------+------------+------------+------------+
     lfu-log-factor 10
     # 1. A random number R between 0 and 1 is extracted.
     # 2. A probability P is calculated as 1/(old_value*lfu_log_factor+1).
     # 3. The counter is incremented only if R < P.
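
The three numbered steps above translate almost directly into Python (a sketch of the Morris-style counter; the real Redis code also starts new keys at a small initial value before applying this rule):

```python
import random

def lfu_incr(counter: int, lfu_log_factor: int = 10) -> int:
    """Probabilistic, logarithmic counter increment (8-bit ceiling of 255)."""
    if counter >= 255:
        return 255
    r = random.random()                          # 1. random number R in [0, 1)
    p = 1.0 / (counter * lfu_log_factor + 1)     # 2. P = 1/(old_value*lfu_log_factor+1)
    return counter + 1 if r < p else counter     # 3. increment only if R < P

c = 0
for _ in range(1000):
    c = lfu_incr(c)
print(c)   # a small number (tens), not 1000: the counter grows logarithmically
```
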
  20. 24 bits = 16 bits (last decrease time: reduced-precision unix time in minutes, with the epoch set to when the server was started) + 8 bits (LOG C). lfu-decay-time: the counter decay time, in minutes, that must elapse in order for the key counter to be divided by two (or decremented if it has a value <= 10). Default value is 1.
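
The decay rule reads as follows in Python (a sketch of the behavior described above, driven by lfu-decay-time; not the exact Redis implementation):

```python
def lfu_decay(counter: int, minutes_idle: int, lfu_decay_time: int = 1) -> int:
    """Halve the counter (or decrement it once it is <= 10) per elapsed decay period."""
    if lfu_decay_time == 0:
        return counter                      # decay disabled
    for _ in range(minutes_idle // lfu_decay_time):
        if counter <= 10:
            counter = max(0, counter - 1)   # small counters just tick down
        else:
            counter //= 2                   # larger counters are halved
    return counter

print(lfu_decay(200, minutes_idle=3))       # 200 -> 100 -> 50 -> 25
```

Together with the logarithmic increment, this lets keys that were popular but have gone idle gradually lose their advantage over keys that are hot right now.
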
  21. Timeline, 2009-2018: March 29th, 2009 - first commit; May 2009 - maxmemory implemented; October 2010 - LRU implemented (v2.2 alpha 3); November 2010 - noeviction policy added (v2.2 alpha 5); March 2014 - LRU eviction pool, maxmemory-samples = 5, LRU field changed to 24 bits (v2.8.8); default policy noeviction (v3.0); July 2016 - cross-database eviction; July 2016 - LFU implementation, volatile-ttl uses the pool (v4.0).
  22. maxmemory-policy: ★ noeviction ★ allkeys-random ★ volatile-random ★ volatile-ttl ★ allkeys-lru ★ volatile-lru ★ allkeys-lfu ★ volatile-lfu
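
In practice, the policy is just configuration. A minimal example with the redis-py client (assuming a local server on the default port; the 100mb limit and the allkeys-lfu choice are only for illustration):

```python
import redis

r = redis.Redis(host="localhost", port=6379)

r.config_set("maxmemory", "100mb")                # cap memory usage
r.config_set("maxmemory-policy", "allkeys-lfu")   # any policy from the list above

print(r.config_get("maxmemory-policy"))           # confirm the active policy
print(r.info("stats").get("evicted_keys"))        # evictions performed so far
```

The same settings can of course go into redis.conf (maxmemory 100mb / maxmemory-policy allkeys-lfu) instead of being set at runtime.
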