
Caching at the Web Scale


Presentation Transcript


  1. Caching at the Web Scale Victor Zakhary, Divyakant Agrawal, Amr El Abbadi

  2. The old problem of Caching [memory-hierarchy pyramid: L1 and L2 caches are smaller, faster, and more expensive; RAM and disk are larger, slower, and cheaper]

  3. The old problem of Caching Ta = Th + m × Tm, where Ta: average access time, Th: access time on a hit, m: miss ratio (1 - hit ratio), Tm: access time on a miss; Tm >> Th [memory-hierarchy pyramid as on the previous slide]
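To make the trade-off concrete, here is a tiny worked example of the formula; the hit time and miss penalty below are illustrative assumptions (an in-memory cache backed by disk), not numbers from the tutorial.

```python
# Average access time: Ta = Th + m * Tm
# Illustrative numbers (assumptions, not from the tutorial):
Th = 100e-9   # 100 ns: access time on a hit (in-memory cache)
Tm = 10e-3    # 10 ms: access time on a miss (disk read)

for m in (0.10, 0.01):                      # miss ratio = 1 - hit ratio
    Ta = Th + m * Tm
    print(f"miss ratio {m:.2f}: Ta = {Ta * 1e3:.3f} ms")
# Even at a 1% miss ratio, Ta is dominated by Tm, which is why raising
# the hit ratio usually pays off more than shaving a little off Th.
```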

  4. The old problem of Caching • Tm >> Th • When the cache is full → replacement policy • Replacement policy → eviction mechanism • Having the right elements in cache increases the hit ratio • High hit ratio → lower average access time

  5. The old problem of Caching Ta = Th + m × Tm Ta: average access time, Th: access time on a hit, m: miss ratio (1 - hit ratio), Tm: access time on a miss; Tm >> Th • Th and m are always in contention • A good caching strategy lowers m, but requires more tracking, which increases Th • Less tracking lowers Th, but increases m

  6. The old problem of Caching This is not a tutorial on 70s material, right? [memory-hierarchy pyramid]

  7. Nowadays Very time sensitive, huge amounts of dynamically generated data • Hardware technologies have changed: storage, memory, network, etc. • Solutions have to exploit all these changes to serve client requests: • at a scale of billions of requests per second • with low latency • with high availability • achieving data consistency (varies from application to application) → New designs for caching services Source: http://www.visualcapitalist.com/what-happens-internet-minute-2016/

  8. Facebook page load • Each page load is translated into hundreds of lookups • Lookups are done in multiple rounds This slide was taken from the Scaling Memcache at Facebook presentation at NSDI '13

  9. Nowadays Architecture [diagram: millions of end-users send a page-load and page-update stream (millions/sec) to stateless application servers, which issue billions of key lookups per second against persistent storage; marked Overloaded]

  10. Nowadays Architecture [diagram: millions of end-users send a page-load and page-update stream (millions/sec) through a load balancer to hundreds of stateless application servers, which issue billions of key lookups per second against persistent storage; marked Overloaded]

  11. Nowadays Architecture [diagram: the same pipeline; persistent storage is annotated with its challenges: high latency, supported operations, consistency, partition and replicate]

  12. Facebook page load • Each page load is translated into hundreds of lookups • Lookups are done in multiple rounds • Reads are 99.8% while writes are only 0.2% [TAO, ATC '13] Persistent storage cannot handle this request throughput at this scale Caching → lower latency + alleviated load on storage This slide was taken from the Scaling Memcache at Facebook presentation at NSDI '13

  13. Nowadays Architecture [diagram: millions of end-users send a page-load and page-update stream (millions/sec) through a load balancer to hundreds of stateless application servers, which issue billions of key lookups per second; a single caching server answers hits and forwards misses to persistent storage, which is partitioned and replicated; the caching tier is marked Overloaded]

  14. Nowadays Architecture [diagram: the same pipeline with tens of caching servers in front of the partitioned and replicated persistent storage; open issues labeled on the diagram: failures, load balance, look-aside vs. knowledge-based caching]

  15. Access latency [table of representative access-latency numbers] Peter Norvig: http://norvig.com/21-days.html#answers

  16. Old Caching vs. Modern Caching
  Goal: old caching: access latency ↓ | modern caching: access latency ↓, load distribution, update strategy
  Challenges: old caching: replacement policy (Ta = Th + m × Tm), update strategy, update durability, thread contention | modern caching: scale management, load balancing (utilization), update strategy, update durability, data consistency, request rate

  17. Replacement Policies

  18. Cache Replacement Policies Lookup: on a cache hit, serve from the cache; on a cache miss, fetch the page and insert it into the cache: if the cache is not full, insert the page directly; if the cache is full, evict an entry and then insert.

  19. Cache Replacement Policies • Cache size is limited → cannot fit everything • Eviction mechanism • Contention between hit access time and miss ratio • FIFO, LIFO • LRU (recency of access) • Pseudo-LRU • ARC (frequency and recency of access) • MRU • …

  20. LRU • Hardware supported implementations • Using counters • Using a binary 2D matrix • Software implementation • Using a doubly linked list and a hash table

  21. LRU – Hardware using Counters • A large enough counter (64-128 bits) • Increment the counter after each instruction • When accessing a page: tag the page with the current counter value at access time • When a page fault happens: evict the page with the lowest counter tag • Very expensive: the counter tag of every page must be examined
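The counter scheme is a hardware mechanism, but a small software simulation makes the bookkeeping, and the costly scan over every page, explicit; this is a hedged sketch with made-up names, not how any particular CPU implements it.

```python
# Software simulation of the counter-tagging scheme (the real thing lives in hardware).
class CounterLRU:
    def __init__(self, capacity):
        self.capacity = capacity
        self.clock = 0      # global counter, incremented on every access
        self.tags = {}      # page -> counter value at its last access

    def access(self, page):
        self.clock += 1
        if page not in self.tags and len(self.tags) == self.capacity:
            # Page fault with a full cache: the O(N) scan below is the expensive part.
            victim = min(self.tags, key=self.tags.get)
            del self.tags[victim]
        self.tags[page] = self.clock
```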

  22. LRU – Hardware using a 2D Binary Matrix [N × N bit-matrix diagram: on an access to page i, row i is set to all ones and column i is cleared to zeros] • O(N) bits per page • The row with the smallest binary value is the eviction candidate
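A sketch of the matrix bookkeeping under the convention just described (set the accessed page's row to ones, then clear its column); the class and method names are made up for illustration.

```python
# N x N binary-matrix LRU bookkeeping.
class MatrixLRU:
    def __init__(self, n):
        self.n = n
        self.m = [[0] * n for _ in range(n)]   # O(N) bits per page

    def access(self, i):
        self.m[i] = [1] * self.n               # set row i to all ones ...
        for row in self.m:
            row[i] = 0                         # ... then clear column i

    def victim(self):
        # The row that reads as the smallest binary number is the least recently used page.
        return min(range(self.n),
                   key=lambda i: int("".join(map(str, self.m[i])), 2))
```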

  23. LRU – Software Implementation [diagram: a hash table maps each key to its node in a doubly linked list ordered by recency]

  24.-26. LRU – Software Implementation [the diagram animates three accesses; each access looks the key up in the hash table and moves its node to the most-recently-used end of the list]
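Combining the two structures from slide 20, here is a minimal sketch of the software LRU cache the diagrams depict: a hash table for O(1) key lookup and a doubly linked list that keeps entries in recency order. The names are illustrative, not from the slides.

```python
# Minimal LRU cache: hash table for O(1) lookup + doubly linked list for recency order.
class Node:
    __slots__ = ("key", "value", "prev", "next")
    def __init__(self, key=None, value=None):
        self.key, self.value = key, value
        self.prev = self.next = None

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.table = {}                          # key -> Node
        self.head, self.tail = Node(), Node()    # sentinels: head = MRU end, tail = LRU end
        self.head.next, self.tail.prev = self.tail, self.head

    def _unlink(self, node):
        node.prev.next, node.next.prev = node.next, node.prev

    def _push_front(self, node):
        node.next, node.prev = self.head.next, self.head
        self.head.next.prev = node
        self.head.next = node

    def get(self, key):
        node = self.table.get(key)
        if node is None:
            return None                          # miss
        self._unlink(node)
        self._push_front(node)                   # move to the MRU position
        return node.value

    def put(self, key, value):
        node = self.table.get(key)
        if node is not None:                     # update in place and refresh recency
            node.value = value
            self._unlink(node)
            self._push_front(node)
            return
        if len(self.table) == self.capacity:     # evict from the LRU end
            lru = self.tail.prev
            self._unlink(lru)
            del self.table[lru.key]
        node = Node(key, value)
        self.table[key] = node
        self._push_front(node)
```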

  27. Pseudo-LRU (PLRU) • Bit-PLRU • One bit per page, initially zero • On access, set the page's bit to one • If all bits are one, reset all bits to zero except the last accessed page's bit Access: 1, 2, 3, 4 [table of bit states after each access]
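A short sketch of the Bit-PLRU rule just described (one bit per entry, reset when all bits saturate); `BitPLRU` and its methods are hypothetical names for illustration.

```python
# Bit-PLRU over a small fully associative set of `ways` entries.
class BitPLRU:
    def __init__(self, ways):
        self.bits = [0] * ways

    def touch(self, i):
        self.bits[i] = 1
        if all(self.bits):                  # all bits one: reset, keep only the last access
            self.bits = [0] * len(self.bits)
            self.bits[i] = 1

    def victim(self):
        return self.bits.index(0)           # any zero bit marks an eviction candidate
```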

  28. Pseudo-LRU • Organizes blocks on a binary tree • The bits along the path from the root lead to the PLRU leaf • On access, flip the values along the path to the leaf • 0 goes left, 1 goes right Access sequence: 1, 3, 2, 4, 1, 4, 5 [tree diagram, all bits initially 0]

  29.-35. Pseudo-LRU (animation) [tree diagrams stepping through the access sequence 1, 3, 2, 4, 1, 4, 5: each access flips the bits along its path; when 5 is inserted into the full cache, the bits identify page 2 as the PLRU victim]
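A sketch of the tree variant for a power-of-two number of ways. The convention assumed here, each internal bit pointing toward the pseudo-LRU subtree and being flipped away from the accessed leaf, is one common formulation and may differ in minor details from the slides' animation.

```python
# Tree-PLRU: ways-1 internal bits stored as an implicit complete binary tree.
class TreePLRU:
    def __init__(self, ways):
        assert ways & (ways - 1) == 0, "ways must be a power of two"
        self.ways = ways
        self.bits = [0] * (ways - 1)       # node 0 is the root

    def touch(self, way):
        node, lo, hi = 0, 0, self.ways
        while hi - lo > 1:
            mid = (lo + hi) // 2
            if way < mid:                  # accessed leaf is in the left subtree ...
                self.bits[node] = 1        # ... so point the PLRU bit right, away from it
                node, hi = 2 * node + 1, mid
            else:
                self.bits[node] = 0
                node, lo = 2 * node + 2, mid

    def victim(self):
        node, lo, hi = 0, 0, self.ways
        while hi - lo > 1:
            mid = (lo + hi) // 2
            if self.bits[node] == 0:       # 0 goes left, 1 goes right
                node, hi = 2 * node + 1, mid
            else:
                node, lo = 2 * node + 2, mid
        return lo                          # index of the pseudo-LRU way
```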

  36. ARC – Adaptive Replacement Cache (ARC: Almaden Research Center) • Maintains 2 LRU lists, L1 and L2 • L1 for recency • L2 for frequency • Tracks twice as many pages as fit in the cache: |L1| + |L2| = 2c • Dynamically and adaptively balances between recency and frequency • Online and self-tuning, in response to evolving and possibly changing access patterns Megiddo, Nimrod, and Dharmendra S. Modha. "ARC: A Self-Tuning, Low Overhead Replacement Cache." FAST. Vol. 3. 2003.

  37. ARC L1 = T1 + B1 (recency) and L2 = T2 + B2 (frequency); T1 and T2 hold the c cached pages, while B1 and B2 hold ghost pages (evicted keys, metadata only). • A missed page is inserted at the head of T1 • If a page in L1 is accessed again, it moves to the head of T2 • A miss that hits ghost list B1 increases the target size of L1 • A miss that hits ghost list B2 increases the target size of L2
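For concreteness, a compact sketch of ARC's bookkeeping following the case analysis in the FAST '03 paper: T1/T2 hold cached keys, B1/B2 are ghost lists, and p is the adaptive target size of T1. Values, the backing store, and some empty-list corner cases are glossed over (the extra `not self.t2` guard is a defensive addition), so treat this as a study aid rather than a faithful implementation.

```python
from collections import OrderedDict

class ARC:
    def __init__(self, c):
        self.c = c                                        # cache capacity
        self.p = 0                                        # adaptive target size of T1
        self.t1, self.t2 = OrderedDict(), OrderedDict()   # cached pages (recency / frequency)
        self.b1, self.b2 = OrderedDict(), OrderedDict()   # ghost lists (evicted keys only)

    def _replace(self, hit_in_b2):
        # Evict the LRU page of T1 or T2 into its ghost list, steered by p.
        if self.t1 and (len(self.t1) > self.p
                        or (hit_in_b2 and len(self.t1) == self.p)
                        or not self.t2):                  # last clause: defensive guard
            key, _ = self.t1.popitem(last=False)
            self.b1[key] = None
        else:
            key, _ = self.t2.popitem(last=False)
            self.b2[key] = None

    def access(self, key):
        if key in self.t1 or key in self.t2:              # cache hit: promote to MRU of T2
            (self.t1 if key in self.t1 else self.t2).pop(key)
            self.t2[key] = None
            return True
        if key in self.b1:                                # ghost hit in B1: favor recency
            self.p = min(self.c, self.p + max(len(self.b2) // len(self.b1), 1))
            self._replace(False)
            self.b1.pop(key); self.t2[key] = None
            return False
        if key in self.b2:                                # ghost hit in B2: favor frequency
            self.p = max(0, self.p - max(len(self.b1) // len(self.b2), 1))
            self._replace(True)
            self.b2.pop(key); self.t2[key] = None
            return False
        # Complete miss.
        total = len(self.t1) + len(self.t2) + len(self.b1) + len(self.b2)
        if len(self.t1) + len(self.b1) == self.c:
            if len(self.t1) < self.c:
                self.b1.popitem(last=False)               # make room in the ghost list
                self._replace(False)
            else:
                self.t1.popitem(last=False)               # B1 empty: discard T1's LRU page
        elif total >= self.c:
            if total == 2 * self.c:
                self.b2.popitem(last=False)
            self._replace(False)
        self.t1[key] = None                               # new pages enter at the MRU of T1
        return False
```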

  38. Old Caching vs. Modern Caching (recap)
  Goal: old caching: access latency ↓ | modern caching: access latency ↓, load distribution, update strategy
  Challenges: old caching: replacement policy, update strategy, update durability, thread contention | modern caching: scale management, load balancing (utilization), update strategy, update durability, data consistency, request rate

  39. Scale Management

  40. Memcached* • Distributed in-memory caching system • Free and Open source • Written first in Perl by Brad Fitzpatrick in 2003 • Rewritten in C by Anatoly Vorobey • Client driven caching • How does it work? https://memcached.org/, FITZPATRICK, B. Distributed caching with memcached. Linux Journal 2004, 124 (Aug. 2004), 5.

  41. Memcached* Memcached logic lives on the client side (application server or dedicated cache client); the server side is a simple cache server. So what does Memcached provide? 1. lookup(k) at the cache server 2. response(k): if k != null, done 3. otherwise lookup(k) at storage 4. response(k) from storage 5. set(k, v) in the cache https://memcached.org/, FITZPATRICK, B. Distributed caching with memcached. Linux Journal 2004, 124 (Aug. 2004), 5.
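The five steps above are the classic look-aside (demand-fill) pattern, sketched below; `cache` and `storage` are hypothetical stand-ins (e.g., a Memcached client and a database), not the actual Memcached API.

```python
# Demand-fill (look-aside) caching, driven entirely by the client as in the slide's steps.
def read(key, cache, storage):
    value = cache.get(key)            # steps 1-2: lookup(k) / response(k)
    if value is not None:
        return value                  # cache hit: done
    value = storage.get(key)          # steps 3-4: fall back to persistent storage
    if value is not None:
        cache.set(key, value)         # step 5: populate the cache for later readers
    return value
```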

  42. Memcached [diagram: an application server or dedicated cache client in front of three caching servers with 1 GB of memory each] https://memcached.org/, FITZPATRICK, B. Distributed caching with memcached. Linux Journal 2004, 124 (Aug. 2004), 5.

  43. Memcached [diagram: the three 1 GB caching servers form a single 3 GB logical cache] • Each key is mapped to one caching server • Better memory utilization through hashing • Clients know all servers • Servers don't communicate with each other • Shared-nothing architecture • Easy to scale https://memcached.org/, FITZPATRICK, B. Distributed caching with memcached. Linux Journal 2004, 124 (Aug. 2004), 5.

  44. LRU – Software Implementation [recap of the hash table + doubly linked list diagram]

  45. Memcached [diagram] The client computes Hash(k) % server count to pick one of the caching servers and sends Lookup(k) to it; the server checks whether k is present: if yes, it updates its LRU and returns the value; if no, it returns null. https://memcached.org/, FITZPATRICK, B. Distributed caching with memcached. Linux Journal 2004, 124 (Aug. 2004), 5.
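A sketch of the client-side key-to-server mapping on this slide; the server list and the choice of MD5 are illustrative assumptions, the essential part is Hash(k) % server count.

```python
# Client-side key-to-server mapping via modulo hashing.
import hashlib

servers = ["cache1:11211", "cache2:11211", "cache3:11211"]   # hypothetical hosts

def server_for(key):
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)       # any stable hash works
    return servers[h % len(servers)]                          # Hash(k) % server count
```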

  46. Consistent Hashing • When adding/removing a server, the % (modulo) function causes high key churn (remapping) • With consistent hashing, only about K/n keys are remapped • The churn problem: assume keys 1-12 are distributed over 4 servers (1, 2, 3, 4) Karger, David, et al. "Consistent hashing and random trees: Distributed caching protocols for relieving hot spots on the World Wide Web." Proceedings of the twenty-ninth annual ACM symposium on Theory of computing. ACM, 1997. Karger, David, et al. "Web caching with consistent hashing." Computer Networks 31.11 (1999): 1203-1213.

  47. Consistent Hashing [example with modulo hashing over 4 servers: server 1 holds keys 1, 5, 9 (5 % 4 = 1); server 2 holds 2, 6, 10; server 3 holds 3, 7, 11 (11 % 4 = 3); server 4 holds 4, 8, 12] What happens when the number of servers changes? Karger, David, et al. "Consistent hashing and random trees: Distributed caching protocols for relieving hot spots on the World Wide Web." Proceedings of the twenty-ninth annual ACM symposium on Theory of computing. ACM, 1997. Karger, David, et al. "Web caching with consistent hashing." Computer Networks 31.11 (1999): 1203-1213.

  48. Consistent Hashing [after one server is removed, modulo hashing over the remaining 3 servers gives: server 1 holds 1, 4, 7, 10; server 2 holds 2, 5, 8, 11; server 4 holds 3, 6, 9, 12] Keys 3, 4, 5, 6, 7, 8, 9, 10, 11 are remapped; keys 4, 5, 6, 8, 9, 10 are remapped even though their machines are still up. Karger, David, et al. "Consistent hashing and random trees: Distributed caching protocols for relieving hot spots on the World Wide Web." Proceedings of the twenty-ninth annual ACM symposium on Theory of computing. ACM, 1997. Karger, David, et al. "Web caching with consistent hashing." Computer Networks 31.11 (1999): 1203-1213.

  49. Consistent Hashing [hash ring diagram: servers and keys are hashed onto the same circular space; each key is assigned to the first server clockwise from its position] Karger, David, et al. "Consistent hashing and random trees: Distributed caching protocols for relieving hot spots on the World Wide Web." Proceedings of the twenty-ninth annual ACM symposium on Theory of computing. ACM, 1997. Karger, David, et al. "Web caching with consistent hashing." Computer Networks 31.11 (1999): 1203-1213.
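A minimal consistent-hashing ring (no virtual nodes) sketching the idea on these slides; the class and method names are illustrative.

```python
# Minimal consistent-hashing ring.
import bisect
import hashlib

class Ring:
    def __init__(self, nodes):
        self._points = sorted((self._hash(n), n) for n in nodes)

    @staticmethod
    def _hash(value):
        return int(hashlib.md5(str(value).encode()).hexdigest(), 16)

    def add(self, node):
        bisect.insort(self._points, (self._hash(node), node))

    def remove(self, node):
        self._points.remove((self._hash(node), node))

    def node_for(self, key):
        # A key belongs to the first node clockwise from its position on the ring.
        h = self._hash(key)
        idx = bisect.bisect(self._points, (h, "")) % len(self._points)
        return self._points[idx][1]
```

Adding or removing a node only moves the keys between that node and its predecessor on the ring, roughly K/n keys, instead of nearly everything as with modulo hashing.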
