
Presentation Transcript


  1. CLUE: Achieving Fast Update over Compressed Table for Parallel Lookup with Reduced Dynamic Redundancy Author: Tong Yang, Ruian Duan, Jianyuan Lu, Shenjiang Zhang, Huichen Dai and Bin Liu Publisher: IEEE ICDCS, 2012 Presenter: Kai-Yang, Liu Date: 2013/3/13

  2. INTRODUCTION • To achieve high performance, backbone routers must gracefully handle three problems: routing table Compression, fast routing Lookup, and fast incremental UpdatE (hence the acronym CLUE). • CLUE consists of three parts: a routing table compression algorithm, an improved parallel lookup mechanism, and a new fast incremental update mechanism.

  3. Compression Algorithm • The ONRTC algorithm compresses the routing table to 70% of its original size. • Overlapping prefixes are eliminated during compression.
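
The slide does not spell out how ONRTC works, so the following sketch only illustrates the general idea of prefix-overlap elimination, assuming prefixes are stored as bit-strings mapped to next hops; it is not the ONRTC algorithm itself.

```python
def remove_redundant_prefixes(table):
    """Illustrative prefix-overlap elimination (not the ONRTC algorithm):
    a prefix whose next hop equals that of its longest covering prefix is
    redundant, because lookups resolve to the same next hop without it.
    `table` maps prefix bit-strings (e.g. '1010') to next hops."""
    compressed = dict(table)
    # Process longer (more specific) prefixes first.
    for prefix in sorted(table, key=len, reverse=True):
        # Find the longest shorter prefix that still covers this one.
        for cut in range(len(prefix) - 1, -1, -1):
            parent = prefix[:cut]
            if parent in compressed:
                if compressed[parent] == compressed[prefix]:
                    del compressed[prefix]  # same next hop: child is redundant
                break
    return compressed

# '101' shares next hop A with its covering prefix '10', so it is removed.
print(remove_redundant_prefixes({"10": "A", "101": "A", "11": "B"}))
# -> {'10': 'A', '11': 'B'}
```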

  4. ONRTC Algorithm

  5. Partition Algorithm • To achieve parallel lookup, the prefixes must first be split into partitions. • Step 1: compute the partition size. If the routing table holds M prefixes and there are n partitions, each partition holds M/n prefixes. • Step 2: traverse the trie in order and place every M/n consecutive prefixes into a bucket.
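
A minimal sketch of this two-step partitioning, assuming a simple binary-trie representation (the TrieNode class and its field names are illustrative, not taken from the paper):

```python
class TrieNode:
    """Illustrative binary-trie node; a node stores a prefix iff next_hop is set."""
    def __init__(self, next_hop=None):
        self.left = None           # child for bit 0
        self.right = None          # child for bit 1
        self.next_hop = next_hop

def partition_prefixes(root, n):
    """Step 1: the partition size is M/n for M prefixes and n partitions.
    Step 2: traverse the trie in order and fill each bucket with the next
    M/n prefixes encountered."""
    prefixes = []

    def inorder(node, bits):
        if node is None:
            return
        inorder(node.left, bits + "0")
        if node.next_hop is not None:
            prefixes.append((bits, node.next_hop))
        inorder(node.right, bits + "1")

    inorder(root, "")
    m = len(prefixes)
    size = max(1, -(-m // n))      # ceil(M/n), so no prefix is left over
    return [prefixes[i:i + size] for i in range(0, m, size)]

# Example: prefixes 0*, 01*, 1* split into 2 partitions.
root = TrieNode()
root.left, root.right = TrieNode("A"), TrieNode("C")
root.left.right = TrieNode("B")
print(partition_prefixes(root, 2))   # -> [[('0', 'A'), ('01', 'B')], [('1', 'C')]]
```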

  6. Improved Parallel Lookup Mechanism

  7. The DRed update process of CLPL’s mechanism

  8. The DRed update process of CLUE’s mechanism

  9. The Incremental Update Mechanism • The whole update process is divided into three steps: 1) trie update; 2) TCAM update; 3) DRed update. • The paper defines Time to Fresh (TTF), consisting of TTF1 (TTF-trie), TTF2 (TTF-TCAM), and TTF3 (TTF-DRed).
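
A minimal sketch of how the three steps map to the three TTF components; the step functions below are empty stubs invented for illustration, not the paper's actual update routines:

```python
import time

def update_trie(trie, update):
    pass  # step 1 stub: would update the software trie

def update_tcam(tcam, update):
    pass  # step 2 stub: would rewrite the affected TCAM entries

def update_dred(dred, update):
    pass  # step 3 stub: would refresh the dynamic-redundancy (DRed) copies

def apply_route_update(update, trie, tcam, dred):
    """Run the three update steps and report the time each one takes,
    corresponding to TTF1 (trie), TTF2 (TCAM) and TTF3 (DRed)."""
    t0 = time.perf_counter()
    update_trie(trie, update)
    t1 = time.perf_counter()
    update_tcam(tcam, update)
    t2 = time.perf_counter()
    update_dred(dred, update)
    t3 = time.perf_counter()
    return {"TTF1 (trie)": t1 - t0,
            "TTF2 (TCAM)": t2 - t1,
            "TTF3 (DRed)": t3 - t2}
```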

  10. Experiments on Compression by ONRTC

  11. Partition comparison among the three algorithms

  12. TTF1 comparison between CLPL and CLUE

  13. TTF2 comparison between CLPL and CLUE

  14. TTF3 comparison between CLPL and CLUE

  15. TTF1+TTF2+TTF3 comparison between CLPL and CLUE

  16. Workload on different partitions and TCAM chips

  17. Load balance of workload distribution by CLUE • Each TCAM takes 4 clock cycles to process a packet, while one packet arrives every clock cycle. The FIFO depth is set to 256 and the redundancy size is set to 1024 prefixes.
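
The slide gives only the setup parameters, so the sketch below is a toy queueing simulation under those parameters; the number of TCAM chips (4 here) and the uniform random dispatch policy are assumptions for illustration, not CLUE's actual workload mapping.

```python
from collections import deque
import random

def simulate(num_tcams=4, clocks=100_000, service_clocks=4, fifo_depth=256):
    """Toy model of the experimental setup: one packet arrives per clock,
    each TCAM needs 4 clocks per lookup, and each TCAM has a 256-deep FIFO."""
    fifos = [deque() for _ in range(num_tcams)]
    busy = [0] * num_tcams             # clocks left on the current lookup
    served = [0] * num_tcams
    dropped = 0

    for _ in range(clocks):
        # One packet arrives per clock and is queued at some TCAM's FIFO.
        target = random.randrange(num_tcams)
        if len(fifos[target]) < fifo_depth:
            fifos[target].append(1)
        else:
            dropped += 1               # FIFO overflow: packet is lost

        # Each TCAM starts or continues a lookup this clock.
        for i in range(num_tcams):
            if busy[i] == 0 and fifos[i]:
                fifos[i].popleft()
                busy[i] = service_clocks
            if busy[i] > 0:
                busy[i] -= 1
                if busy[i] == 0:
                    served[i] += 1

    return served, dropped

served, dropped = simulate()
print("lookups per TCAM:", served, "dropped packets:", dropped)
```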

  18. Speedup factor comparison between CLPL and CLUE

  19. Hit rate comparison between CLPL and CLUE
