
Decentralized Coded Caching Attains Order-Optimal Memory-Rate Tradeoff




Presentation Transcript


  1. Decentralized Coded Caching Attains Order-Optimal Memory-Rate Tradeoff. Mohammad Ali Maddah-Ali, Bell Labs, Alcatel-Lucent. Joint work with Urs Niesen. Allerton, October 2013

  2. Video on Demand • High temporal traffic variability • Caching (prefetching) can help to smooth traffic

  3. Caching (Prefetching) • Placement phase: populate the caches; demands are not known yet • Delivery phase: requests are revealed, content is delivered

  4. Problem Setting N files at a server, connected through a shared link to K users, each with a cache of size M. Placement: - Each cache stores an arbitrary function of the files (linear, nonlinear, …) Delivery: - Requests are revealed to the server - The server sends a function of the files Question: What is the smallest worst-case rate R(M) needed in the delivery phase? How should we choose (1) the caching functions and (2) the delivery functions?
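As a point of reference, the quantity the slide asks about can be written compactly; a minimal formalization (the demand notation d_k is mine, not from the slide):

```latex
% Optimal worst-case delivery rate at cache size M: minimize over all
% placement and delivery schemes, maximize over all K-tuples of demands.
R^{*}(M) = \min_{\substack{\text{caching functions} \\ \text{delivery functions}}}
    \;\max_{(d_1,\dots,d_K) \in \{1,\dots,N\}^{K}} R(d_1,\dots,d_K)
```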

  5. Coded Caching N Files, K Users, Cache Size M • Uncoded Caching • Caches used to deliver content locally • Local cache size matters • Coded Caching [Maddah-Ali, Niesen 2012] • The main gain in caching is global • Global cache size matters (even though caches are isolated)

  6. Centralized Coded Caching N=3 Files, K=3 Users, Cache Size M=2 [Maddah-Ali, Niesen, 2012] Approximately optimum. Each file is split into three subfiles indexed by user pairs, A = (A12, A13, A23), B = (B12, B13, B23), C = (C12, C13, C23), and user k caches every subfile whose index contains k. For demands A, B, C the server multicasts the single message A23⊕B13⊕C12 of rate 1/3: a multicasting opportunity between three users with different demands.
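To make the example concrete, here is a runnable sketch in Python (random byte strings stand in for subfile contents; only the split, placement, and XOR multicast come from the slide):

```python
# Worked version of the slide's N=3, K=3, M=2 example.
import os

def xor(x, y):
    return bytes(a ^ b for a, b in zip(x, y))

SUB = 4  # bytes per subfile (illustrative)
PAIRS = ("12", "13", "23")

# Split each of the N=3 files into subfiles indexed by user pairs.
files = {name: {p: os.urandom(SUB) for p in PAIRS} for name in "ABC"}

# Placement: user k caches every subfile whose index contains k,
# i.e. 2/3 of each file, for a total cache size of M = 2 files.
cache = {k: {(n, p): files[n][p]
             for n in "ABC" for p in PAIRS if k in p}
         for k in "123"}

# Delivery for demands (user 1 -> A, user 2 -> B, user 3 -> C):
# a single multicast of one subfile's size, i.e. rate 1/3.
msg = xor(xor(files["A"]["23"], files["B"]["13"]), files["C"]["12"])

# Each user XORs out the two subfiles it has cached; e.g. user 1
# recovers its missing subfile A23 using its cached B13 and C12.
a23 = xor(xor(msg, cache["1"][("B", "13")]), cache["1"][("C", "12")])
assert a23 == files["A"]["23"]
```

One multicast of size 1/3 of a file thus serves three users with three different demands at once.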

  7. Centralized Coded Caching N=3 Files, K=3 Users, Cache Size M=2 • Centralized caching needs the number and identity of the users in advance • In practice, this is not the case: • Users may turn off • Users may be asynchronous • The topology may be time-varying (wireless) Question: Can we achieve a similar gain without such knowledge?

  8. Proposed Decentralized Scheme N=3 Files, K=3 Users, Cache Size M=2 Prefetching: Each user caches 2/3 of the bits of each file - randomly, - uniformly, - independently, so each bit ends up held by a random subset of the users (∅, 1, 2, 3, 12, 13, 23, or 123). Delivery: Greedy linear encoding; for each subset of users, the server XORs together the bits each one needs that exactly the others in the subset have cached (a sketch follows below).
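A minimal simulation of the scheme, assuming bit-level random placement as described on the slide (all names, the bit-level bookkeeping, and the counting shortcut of tracking only message lengths rather than actual XORs are mine):

```python
# Illustrative simulation of decentralized coded caching.
import itertools
import random

K, N, BITS = 3, 3, 30000   # users, files, bits per file
q = 2 / 3                  # cached fraction of each file (M/N = 2/3)

# Prefetching: each user caches each bit of each file independently,
# uniformly at random, with probability q.
# cached[n][i] = the exact set of users holding bit i of file n.
cached = {n: [frozenset(k for k in range(K) if random.random() < q)
              for _ in range(BITS)]
          for n in range(N)}

demands = {k: k for k in range(K)}  # user k requests file k

# Greedy linear delivery: for every subset S of users, XOR together,
# over k in S, the bits of file d_k cached by exactly the users S\{k}.
# The XOR's length is the longest piece (shorter ones are zero-padded),
# and each user in S can strip off its cached bits to decode its piece.
sent = 0
for size in range(K, 0, -1):
    for S in itertools.combinations(range(K), size):
        piece_lens = []
        for k in S:
            T = frozenset(S) - {k}
            n = demands[k]
            piece_lens.append(sum(1 for i in range(BITS)
                                  if cached[n][i] == T))
        sent += max(piece_lens)

print(f"delivery rate ~ {sent / BITS:.2f} files")  # about 0.48 here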

  9. Decentralized Caching

  10. Decentralized Caching Centralized Prefetching: each file is split into subfiles indexed by the fixed user pairs 12, 13, 23. Decentralized Prefetching: each file is split into randomly sized subfiles indexed by every user subset ∅, 1, 2, 3, 12, 13, 23, 123. (Figure compared the two cache layouts.)

  11. Comparison N Files, K Users, Cache Size M • Uncoded • Local cache gain: proportional to the local cache size • Offers only a minor gain • Coded (Centralized) [Maddah-Ali, Niesen, 2012] • Global cache gain: proportional to the global cache size • Offers a gain on the order of the number of users • Coded (Decentralized) • Retains the global cache gain, within a constant factor of the centralized rate (formulas below)
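For reference, a transcription of the rate expressions behind this comparison, as given in the papers cited on the slide (stated for N >= K; not copied verbatim from the slide):

```latex
% Worst-case delivery rates, with the local gain K(1 - M/N) factored out:
R_{\mathrm{uncoded}}(M)       = K\left(1-\frac{M}{N}\right)
R_{\mathrm{centralized}}(M)   = K\left(1-\frac{M}{N}\right)\cdot\frac{1}{1+KM/N}
R_{\mathrm{decentralized}}(M) = K\left(1-\frac{M}{N}\right)\cdot
    \frac{N}{KM}\left(1-\left(1-\frac{M}{N}\right)^{K}\right)
```

The common factor K(1 - M/N) is the local cache gain; the remaining factor, roughly N/(KM) when KM/N is large, is the global cache gain on the order of the number of users.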

  12. Can We Do Better? Theorem: The proposed scheme is optimal within a constant factor in rate. • Information-theoretic lower bound • The constant gap is uniform in the problem parameters • No significant gains exist besides the local and global ones
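The information-theoretic bound referred to here is, as far as I can tell from the Fundamental Limits of Caching paper, the cut-set bound; a transcription:

```latex
% Cut-set lower bound on the optimal worst-case rate R*(M):
R^{*}(M) \;\ge\; \max_{s \in \{1,\dots,\min(N,K)\}}
    \left( s - \frac{s}{\lfloor N/s \rfloor}\, M \right)
```

Comparing the achievable rates above against this bound, maximized over s, gives the uniform constant-factor gap.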

  13. Asynchronous Delivery (figure: three users' requested files, each divided into Segment 1, Segment 2, Segment 3, illustrating delivery applied segment by segment)

  14. Conclusion • We can achieve within a constant factor of the optimal caching performance through • Decentralized and uncoded prefetching • Greedy and linearly coded delivery • Significant improvement over uncoded caching schemes • Reduction in rate by up to the order of the number of users • Papers available on arXiv: • Maddah-Ali and Niesen: Fundamental Limits of Caching (Sept. 2012) • Maddah-Ali and Niesen: Decentralized Coded Caching Attains Order-Optimal Memory-Rate Tradeoff (Jan. 2013) • Niesen and Maddah-Ali: Coded Caching with Nonuniform Demands (Aug. 2013)
