
Practical Packet Reordering Mechanism for Parallel Exploiting in Network Processors

This paper proposes a practical mechanism to preserve packet order in network processors, ensuring both high speed and flexibility. The mechanism utilizes packet chains to record sequence information and achieves reordering within flow scope. Simulation results demonstrate its effectiveness.


Presentation Transcript


  1. A Practical Packet Reordering Mechanism with Flow Granularity for Parallel Exploiting in Network Processors • 13th WPDRTS, April 4, 2005 • Beibei Wu, Yang Xu, Bin Liu, Hongbin Lu • Department of Computer Science, Tsinghua University, Beijing, P. R. China

  2. Background & Problem • Network Processor (NP) • A special-purpose, programmable hardware device that combines the flexibility of a RISC processor with the speed of an ASIC; NPs are building blocks used to construct network systems • Data path: Processing Engines (PEs) • Two Design Goals • High Speed: multiple PEs exploit packet-level parallelism • High Flexibility: versatile processing requirements lead to unpredictable processing time for each packet • The Packet Disordering (PD) Problem • Packets depart in a different order from their arrival • Network performance may deteriorate greatly

  3. Objective • To design a practical mechanism that preserves packet order in the NP while at the same time ensuring high utilization of the PEs [Figure: NP model — Dispatcher feeding PE0 … PEn, followed by an Aggregator]

  4. Contents • Design Issues • Global-scope vs. within-flow-scope order preserving • Pre-processing vs. post-processing order scheduling • The Proposed Solution • Packet chains for all the flow sequence information • The working process • Simulation

  5. Design Issues (1): the scope of packets to preserve order • Global scope • All packets leave strictly in order • Within-flow scope • Only packets of the same flow leave in order • Processing delay differs across flows in an NP, so within-flow order preserving is quite necessary
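The within-flow-scope requirement on this slide can be stated precisely: only packets sharing a flow identity must depart in arrival order; departures of different flows may interleave freely. A minimal sketch, assuming a conventional 5-tuple flow key and an arrival sequence number per packet (field names are ours, not from the paper):

```python
# Within-flow-scope order check: packets are keyed by their 5-tuple, and
# only packets sharing a key must depart in arrival-sequence order.
from collections import namedtuple

# Illustrative packet record: 5-tuple fields plus an arrival sequence number.
Packet = namedtuple("Packet", "src dst sport dport proto seq")

def flow_key(pkt):
    """Packets with the same key belong to one flow and must leave in order."""
    return (pkt.src, pkt.dst, pkt.sport, pkt.dport, pkt.proto)

def violates_flow_order(departures):
    """Return True iff some flow's packets depart out of arrival order.
    Departures of *different* flows may interleave arbitrarily."""
    last_seq = {}
    for pkt in departures:
        key = flow_key(pkt)
        if key in last_seq and pkt.seq < last_seq[key]:
            return True
        last_seq[key] = pkt.seq
    return False
```

Note that a departure order can violate global order (a later arrival leaves first) while still satisfying this check, which is exactly the extra scheduling freedom the slide argues for.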

  6. Design Issues (2): where does order scheduling take place? • Order scheduling location • Pre-processing scheduling: SPSL — Sequential Processing, Sequential Leaving • Post-processing scheduling: UPSL — Un-sequential Processing, Sequential Leaving

  7. Design Issues (2): the shortcoming of pre-processing scheduling [Figure: NP model — Dispatcher feeding PE0 … PEn, with a Packet Buffer before the Aggregator]

  8. Contents • Design Issues • Global-scope vs. within-flow-scope order preserving • Pre-processing vs. post-processing order scheduling • The Proposed Solution • Packet chains for sequence information • The working process • Simulation

  9. The Proposed Solution (1): methods of reordering • In traditional network devices • Sequence number or timestamp • Global order preserving • In our NP • Packet chains • Providing the ability to discriminate among flows
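For contrast with the packet-chain approach, the traditional sequence-number scheme the slide mentions can be sketched as follows: the aggregator buffers packets that finish processing early and releases them strictly in global arrival order. This is our illustrative sketch of the general technique, not the paper's mechanism:

```python
# Sequence-number-based *global* reordering: packets carry their arrival
# sequence number, and the aggregator holds back any packet until every
# earlier-numbered packet has departed.
import heapq

class GlobalReorderer:
    def __init__(self):
        self.next_seq = 0   # sequence number of the next packet allowed out
        self.heap = []      # min-heap of (seq, packet) waiting for their turn

    def push(self, seq, packet):
        """Accept a processed packet; return all packets now releasable."""
        heapq.heappush(self.heap, (seq, packet))
        out = []
        while self.heap and self.heap[0][0] == self.next_seq:
            out.append(heapq.heappop(self.heap)[1])
            self.next_seq += 1
        return out
```

The drawback visible in this sketch is the slide's point: one slow packet stalls every later packet of every flow, because the scheme cannot discriminate among flows.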

  10. The Proposed Solution (2): NP system with a Packet Data Buffer [Figure: Dispatcher, PE complex with packets p1 … pn on threads t1 … tm, Packet Data Buffer with blocks b1 … bk, Aggregator] • packet <-> thread <-> block • blocks <-> buffering for disordered packets

  11. The Proposed Solution (3) • Packet in the NP <-> block in the Packet Data Memory • When should the packet in which block be transmitted? • How to discriminate among flows?

  12. The Proposed Solution (4): using packet chains to record sequence information [Figure: Flow Table mapping each FlowID — f(1) … f(j) — to a Head Packet and an End Packet; Block Table chaining that flow's packets, e.g. p(a) → p(b), p(d) → p(e), p(x) → p(y) → p(z); the Dispatcher assigns each incoming packet to a FlowID]
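The flow-table/block-table structure on this slide can be sketched as a per-flow linked list over buffer blocks: the flow table keeps head and end block indices per FlowID, and the block table threads same-flow blocks together in arrival order. A minimal sketch under our reading of the slide (names are ours):

```python
# Per-flow packet chains: the flow table records each flow's head and end
# blocks; the block table links a flow's blocks in arrival order.

class PacketChains:
    def __init__(self, num_blocks):
        self.next_block = [None] * num_blocks   # block table: chain links
        self.flow_table = {}                    # FlowID -> [head, end]

    def append(self, flow_id, block):
        """Dispatcher side: link a newly arrived packet's block onto the
        tail of its flow's chain (or start a new chain)."""
        if flow_id in self.flow_table:
            end = self.flow_table[flow_id][1]
            self.next_block[end] = block
            self.flow_table[flow_id][1] = block
        else:
            self.flow_table[flow_id] = [block, block]

    def pop_head(self, flow_id):
        """Aggregator side: detach the flow's head block for transmission."""
        head, end = self.flow_table[flow_id]
        if head == end:
            del self.flow_table[flow_id]        # chain is now empty
        else:
            self.flow_table[flow_id] = [self.next_block[head], end]
        self.next_block[head] = None
        return head
```

Because each chain orders only one flow's blocks, popping heads flow by flow yields within-flow order with no global sequence numbers.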

  13. The Proposed Solution (5): the working process [Figure: PE 1 and PE 2 processing packets; Packet Data Buffer blocks b1 … b8 holding packets of flows f1 … f7; flow table entries with flowID, head, and end pointers]
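The working process in the figure can be summarized as: PEs finish packets in arbitrary order, but the aggregator transmits a packet only when it is the head of its flow's chain; a finished non-head packet waits in the buffer until its predecessors depart. A self-contained sketch under that reading (the simulation harness is ours, not the paper's):

```python
# Toy simulation of post-processing, within-flow-scope order scheduling:
# packets depart only when they are both processed and at the head of
# their flow's chain.
from collections import deque

def simulate(arrivals, completion_order):
    """arrivals: list of (flow_id, pkt) in arrival order.
    completion_order: pkts in the order the PEs happen to finish them.
    Returns the departure order the aggregator enforces."""
    chains = {}    # flow_id -> deque of that flow's packets, arrival order
    flow_of = {}   # pkt -> flow_id
    for flow_id, pkt in arrivals:
        chains.setdefault(flow_id, deque()).append(pkt)
        flow_of[pkt] = flow_id
    done = set()
    departures = []
    for pkt in completion_order:
        done.add(pkt)
        chain = chains[flow_of[pkt]]
        # Drain every packet that is now a *processed* chain head.
        while chain and chain[0] in done:
            departures.append(chain.popleft())
    return departures
```

For example, if flow f1 arrives as a1, a2 and flow f2 as b1, and the PEs finish in the order a2, b1, a1, then b1 departs immediately while a2 waits for a1 — flow order holds in each flow even though global order does not.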

  14. Contents • Design Issues • Global-scope vs. within-flow-scope order preserving • Pre-processing vs. post-processing order scheduling • The Proposed Solution • Packet chains for sequence information • The working process • Simulation

  15. Simulation: throughput vs. flow number, for three traces • An NP system with 4 PEs, 4 threads each, and 8*4 = 32 memory blocks in total • Trace 1 — f1: 100%, constant 40 bytes; f2: 0% • Trace 2 — f1: 95%, constant 40 bytes; f2: 5%, random from 40 to 60 bytes • Trace 3 — f1: 90%, constant 40 bytes; f2: 10%, random from 80 to 120 bytes • f1 consists of length-unrelated application packets, while f2 consists of length-related application packets

  16. Simulation: utilization of the 4 PEs under the traces with the fewest flows • Time fraction of active thread count for each PE [Figure: plots for trace 1, trace 2, trace 3; all traces have the fewest flows]

  17. Simulation: buffer occupation with the fewest flows • Time fraction of free block count for each PE [Figure: plots for trace 1, trace 2, trace 3; all traces have the fewest flows]

  18. A Summary • A solution to preserve packet order in an NP with multiple PEs for data-plane processing • Packet chains record the sequence information of different flows to preserve packet order • Within-flow-scope reordering is quite necessary in an NP • Future work: optimize memory block and PE resource usage

  19. Thank you for your attention!
