Slide 1

ARC: Adaptive Replacement Cache, for fun, not profit (@markhibberd)

Slide 2

“Yea, from the table of my memory, I'll wipe away all trivial fond records” William Shakespeare - Hamlet, Act I, Scene V

Slide 3

No content

Slide 4

“We are therefore forced to recognize the possibility of constructing a hierarchy of memories, each of which has greater capacity than the preceding but which is less quickly accessible.” A. W. Burks, H. H. Goldstine, J. von Neumann - Preliminary Discussion of the Logical Design of an Electronic Computing Instrument, Part I, Vol. I, Report prepared for the U.S. Army Ord. Dept.

Slide 5

( http://static.googleusercontent.com/media/research.google.com/en//people/jeff/stanford-295-talk.pdf )

Slide 6

No content

Slide 7

No content

Slide 8

Reliable Storage

Slide 9

Ephemeral Computation

Slide 10

Lazy Replication w/ Disk and Network Cache

Slide 11

Durable / Replicated Intent Log

Slide 12

No content

Slide 13

No content

Slide 14

GET /user/1 { "user" : "ocelot" }

Slide 15

GET /user/1 { "user" : "ocelot" }

Slide 16

GET /user/1 { "user" : "ocelot" }

Slide 17

A cache perhaps?

Slide 18

LRU
- A “default” choice
- Constant time and space complexity
- *Very bad* in the face of scans
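Not from the deck, but a minimal sketch of such a default LRU in Python (class name and API are my own); the tail of the example shows exactly the scan problem the slide warns about, where one pass over cold keys evicts the hot key:

```python
from collections import OrderedDict

class LRU:
    """Least-recently-used cache: O(1) get/put via OrderedDict
    (a hash map threaded onto a doubly linked list)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()   # iteration order == recency order

    def get(self, key):
        if key not in self.items:
            return None
        self.items.move_to_end(key)  # mark as most recently used
        return self.items[key]

    def put(self, key, value):
        self.items[key] = value
        self.items.move_to_end(key)
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)  # evict the least recently used

cache = LRU(3)
for k in "abc":
    cache.put(k, k.upper())
cache.get("a")            # "a" is the hottest key
for k in "xyz":           # one scan of cold keys...
    cache.put(k, k.upper())
print(cache.get("a"))     # ...and the hot key is gone: None
```

Recency alone cannot tell a one-shot scan key from a genuinely hot one, which is the gap LFU and ARC try to close.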

Slide 19

LFU
- Often better hit ratio
- Logarithmic time complexity
- Resilient to scans
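Again my own sketch rather than the talk's code: an LFU built on a lazy min-heap, which is where the logarithmic time complexity on the slide comes from, and whose eviction-by-count makes it shrug off scans:

```python
import heapq
import itertools

class LFU:
    """Least-frequently-used cache: eviction removes the smallest hit count.

    Heap pushes make get/put O(log n); stale heap entries are skipped
    lazily at eviction time instead of being deleted eagerly.
    """
    def __init__(self, capacity):
        self.capacity = capacity
        self.values = {}
        self.counts = {}
        self.heap = []                  # (count, insertion order, key)
        self.order = itertools.count()  # tiebreak so older entries evict first

    def _touch(self, key):
        self.counts[key] += 1
        heapq.heappush(self.heap, (self.counts[key], next(self.order), key))

    def get(self, key):
        if key not in self.values:
            return None
        self._touch(key)
        return self.values[key]

    def put(self, key, value):
        if key not in self.values and len(self.values) >= self.capacity:
            while True:                 # pop until a live, up-to-date entry
                count, _, victim = heapq.heappop(self.heap)
                if self.counts.get(victim) == count:
                    del self.values[victim], self.counts[victim]
                    break
        self.values[key] = value
        self.counts.setdefault(key, 0)
        self._touch(key)

cache = LFU(2)
cache.put("hot", 1)
cache.get("hot")          # "hot" now has a higher count than any newcomer
cache.put("x", 2)
cache.put("y", 3)         # evicts "x" (lowest count), not "hot"
print(cache.get("hot"))   # 1
```

The flip side, which pure LFU cannot fix, is that a key that was hot long ago keeps its high count and lingers even after it goes cold.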

Slide 20

Hybrid LRU + LFU
- Lots of attempts
- Most have logarithmic time complexity
- *Very bad* for general purpose (tuning)

Slide 21

No content

Slide 22

ARC Combines frequency and recency in the most optimal way for routing money from your pocket to

Slide 23

ARC
- Exploits frequency and recency
- Constant time and space complexity
- Self-tuning, good for general purpose

Slide 24

L1: recency - keys were seen at least once recently

Slide 25

L1: recency - keys were seen at least once recently (L1 runs from MRU to LRU)

Slide 26

L2: frequency - keys were seen at least twice recently

Slide 27

L2: frequency - keys were seen at least twice recently (L2 also runs from MRU to LRU)

Slide 28

T1: Cached keys in L1

Slide 29

B1: Tracked (but not cached) keys in L1

Slide 30

T2: Cached keys in L2

Slide 31

B2: Tracked (but not cached) keys in L2
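Piecing the four lists together (my own sketch, not code from the talk): for a cache of capacity c, the directory can track up to 2c keys while only ever holding c values, because B1 and B2 remember keys without their values:

```python
from collections import OrderedDict

class ArcDirectory:
    """The four ARC lists for a cache of capacity c.

    T1 and T2 hold the cached values (at most c entries between them);
    B1 and B2 are "ghost" lists that remember keys only, so the whole
    directory tracks up to 2c keys while storing only c values.
    """
    def __init__(self, c):
        self.c = c
        self.t1 = OrderedDict()  # cached, seen once recently    (L1 = T1 + B1)
        self.b1 = OrderedDict()  # tracked ghosts evicted from T1
        self.t2 = OrderedDict()  # cached, seen at least twice   (L2 = T2 + B2)
        self.b2 = OrderedDict()  # tracked ghosts evicted from T2
        self.p = 0               # adaptive target size for T1

    def cached(self, key):
        return key in self.t1 or key in self.t2

    def tracked(self, key):
        return self.cached(key) or key in self.b1 or key in self.b2

d = ArcDirectory(4)
d.t1["user/1"] = {"user": "ocelot"}
d.b2["user/9"] = None            # remembered, but holds no value
print(d.cached("user/1"), d.cached("user/9"), d.tracked("user/9"))
```

The ghost lists are the "extra directory" that lets ARC notice when it evicted something it should have kept.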

Slide 32

1. If we get a hit in T1 or T2, do nothing

Slide 33

2. If we get a hit in B1, increase the size of T1

Slide 34

2. If we get a hit in B1, increase the size of T1

Slide 35

3. If we get a hit in B2, decrease the size of T1

Slide 36

3. If we get a hit in B2, decrease the size of T1
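The three rules above can be sketched end to end. This is my own condensed Python rendering of the ARC replacement policy as described in the Megiddo/Modha paper (the `backing` callable standing in for the underlying store is my invention), not code from the talk:

```python
from collections import OrderedDict

class ARC:
    """Adaptive Replacement Cache (after Megiddo & Modha).

    T1/T2 hold cached values; B1/B2 are ghost lists (keys only).
    p is the self-tuned target size of T1: ghost hits in B1 grow it,
    ghost hits in B2 shrink it."""
    def __init__(self, c, backing):
        self.c = c
        self.backing = backing          # key -> value, consulted on a miss
        self.p = 0
        self.t1, self.t2 = OrderedDict(), OrderedDict()
        self.b1, self.b2 = OrderedDict(), OrderedDict()

    def _replace(self, in_b2):
        # Evict the LRU of T1 or T2 into its ghost list, steered by p.
        if self.t1 and (len(self.t1) > self.p or (in_b2 and len(self.t1) == self.p)):
            key, _ = self.t1.popitem(last=False)
            self.b1[key] = None
        else:
            key, _ = self.t2.popitem(last=False)
            self.b2[key] = None

    def get(self, key):
        if key in self.t1:              # rule 1: cache hit, no tuning;
            value = self.t1.pop(key)    # a second hit promotes the key to T2
            self.t2[key] = value
            return value
        if key in self.t2:
            self.t2.move_to_end(key)
            return self.t2[key]
        if key in self.b1:              # rule 2: ghost hit in B1, grow T1's target
            self.p = min(self.c, self.p + max(len(self.b2) // len(self.b1), 1))
            self._replace(False)
            del self.b1[key]
        elif key in self.b2:            # rule 3: ghost hit in B2, shrink T1's target
            self.p = max(0, self.p - max(len(self.b1) // len(self.b2), 1))
            self._replace(True)
            del self.b2[key]
        else:                           # brand new key: make room, insert into T1
            if len(self.t1) + len(self.b1) == self.c:
                if len(self.t1) < self.c:
                    self.b1.popitem(last=False)
                    self._replace(False)
                else:
                    self.t1.popitem(last=False)
            elif len(self.t1) + len(self.t2) + len(self.b1) + len(self.b2) >= self.c:
                if len(self.t1) + len(self.t2) + len(self.b1) + len(self.b2) == 2 * self.c:
                    self.b2.popitem(last=False)
                self._replace(False)
            value = self.backing(key)
            self.t1[key] = value
            return value
        value = self.backing(key)       # ghost hits land in T2
        self.t2[key] = value
        return value

cache = ARC(4, backing=lambda k: k.upper())
for k in "abab":          # "a" and "b" are hit twice: promoted to T2
    cache.get(k)
for k in "wxyz":          # a scan of one-shot keys churns through T1 only
    cache.get(k)
print(cache.get("a"))     # "A": the frequent keys survived the scan
```

Note what the example shows: the scan that wrecked the plain LRU above only churns T1 here, so the twice-seen keys in T2 survive it, and p moves only on ghost hits, with no tuning knob exposed.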

Slide 37

ARC is interesting because…
- It is relatively easy to understand
- Basically LRU with an extra directory
- Easy to adapt LRU-like algorithms
( https://dl.dropboxusercontent.com/u/91714474/Papers/oneup.pdf )

Slide 38

L2 ARC

Slide 39

Diagram: several nodes, each with an ARC and an L2 ARC, connected by a network

Slide 40

GET /user/1 arrives at one of the nodes

Slide 41

GET /user/1 → ARC miss

Slide 42

GET /user/1 → ARC miss → L2 ARC miss

Slide 43

GET /user/1 → ARC miss → L2 ARC miss → respond & update L2 ARC

Slide 44

GET /user/1 → ARC miss → L2 ARC miss → respond & update L2 ARC → respond & update ARC

Slide 45

GET /user/1 → ARC miss → L2 ARC miss → respond & update L2 ARC → respond & update ARC

Slide 46

L2 ARC: Challenges
- This doesn’t work
- If not careful about updating, the L2 ARC can bottleneck reads everywhere

Slide 47

GET /user/1 → ARC miss → L2 ARC miss → respond & update L2 ARC → respond & update ARC

Slide 48

GET /user/1 → ARC miss → L2 ARC miss → respond → respond & update ARC

Slide 49

GET /user/1 → ARC miss → L2 ARC miss → respond & queue WRITE → respond & update ARC

Slide 50

GET /user/1 → ARC miss → L2 ARC miss → respond & queue WRITE → respond & update ARC → async L2 ARC update

Slide 51

GET /user/1 → ARC miss → L2 ARC miss → respond & queue WRITE → respond & update ARC → async L2 ARC update
Very prepared to drop L2 updates on the floor
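A sketch of that write path (my own illustration, not the talk's code, with a plain dict standing in for the L2 ARC): the read path only ever enqueues, a background worker applies the updates, and a bounded queue means updates are dropped rather than ever blocking a read:

```python
import queue
import threading

l2_updates = queue.Queue(maxsize=1024)   # bounded: back-pressure becomes drops
dropped = 0

def respond_and_queue(key, value):
    """Read path: never blocks on the L2 ARC; enqueue the update or drop it."""
    global dropped
    try:
        l2_updates.put_nowait((key, value))
    except queue.Full:
        dropped += 1                     # very prepared to drop it on the floor
    return value                         # respond to the client immediately

def l2_writer(l2_cache):
    """Background worker: applies queued updates to the L2 ARC."""
    while True:
        item = l2_updates.get()
        if item is None:                 # shutdown sentinel
            break
        key, value = item
        l2_cache[key] = value            # stand-in for a real L2 ARC insert
        l2_updates.task_done()

l2 = {}
worker = threading.Thread(target=l2_writer, args=(l2,), daemon=True)
worker.start()
respond_and_queue("user/1", {"user": "ocelot"})
l2_updates.join()                        # wait for the async update to land
print(l2["user/1"])                      # {'user': 'ocelot'}
```

Dropping is safe precisely because the L2 ARC is a cache over reliable storage: a lost update costs a future miss, never correctness.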

Slide 52

No content

Slide 53

Results

Slide 54

Results: a lack of properly implemented LRU caches in Haskell

Slide 55

Results

Slide 56

Results ( https://themonadreader.files.wordpress.com/2010/05/issue16.pdf )

Slide 57

Cribbed Results ( http://www.cs.cmu.edu/~15-440/READINGS/megiddo-computer2004.pdf )

Slide 58

Cribbed Results ( http://www.cs.cmu.edu/~15-440/READINGS/megiddo-computer2004.pdf )

Slide 59

No content