
ISIS Next Generation Router



Presentation Transcript


  1. ISIS: Next Generation Router
  Shahzad Ali, Xia Chen, Brendan Howell, Yu Zhong

  2. Motivation
  • Anybody can make a router.
  • The key is to make one with:
    • High speed (OC-48 and up)
    • High port densities
    • Better performance (throughput, delay, latency)
    • Cheap
    • etc.
  • But we always forget some requirements: the new network requires new services.
    • Guaranteed bandwidth/latency (QoS)
    • Scalable design
  Now the task is not that simple!

  3. Possible uses for our router
  • Aggregation point for large data centers
  • Backbone router at a major peering point
  • Backbone router for carrier facilities
  • Core router for IP transit providers
  These uses require:
  • High bandwidth
  • Scalability
  • QoS
  • Robustness
  • Support for the major routing protocols

  4. So what did we come up with?
  • Design a router that meets the minimum specifications
  • Maximize switch throughput
  • Provide support for strong QoS in the switch under realistic assumptions
  • Build an extensible simulator
  • Evaluate the design using simulations
  • Make the design realistic!
  • Tackle the tough issue of scalability
  Result: ISIS

  5. Design Decisions
  • 64-port OC-48 with a total capacity of 160 Gb/s
    • Our base design allows for 128 OC-48 ports (320 Gb/s)
  • Uses the PMC-Sierra PM9311 and PM9312 crossbar and scheduler
  • Buffers
    • PC-100 SDRAM with 10 ns access latency
    • Buffer size dimensioned by simulation
  • Queuing?
    • Output queuing
    • Input queuing

  6. Output Queuing
  • Provides perfect QoS.
    • Can order outgoing cells according to their priorities using fair queuing (WFQ, WF2Q+, etc.; see the sketch below)
    • Fast sorting techniques can be used to speed up the process*
  • But it requires a speedup of N in the switching fabric.
    • Not scalable for high-bandwidth switches.
    • Not cheap either.
  [Diagram: inputs connected through the switching fabric to outputs, with queues at the outputs]
  * "A Simple and Fast Hardware Implementation Architecture for WFQ Algorithms", Nen-Fu Huang and Chi-An Su
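  To make the fair-queuing idea above concrete, here is a minimal sketch of ordering cells at an output by virtual finish time. It is closer to a self-clocked approximation than true WFQ and is purely illustrative; the class name FairOutputQueue, the per-flow weight map, and the use of integer cell IDs are assumptions, not the cited hardware architecture.

      // Minimal sketch of fair-queuing order at an output: each flow's cells
      // are stamped with a virtual finish time and sent in finish-time order.
      // Illustration only; flow weights must be set before enqueuing.
      #include <algorithm>
      #include <functional>
      #include <map>
      #include <queue>
      #include <vector>

      class FairOutputQueue {
      public:
          void set_weight(int flow, double w) { weight_[flow] = w; }

          // Stamp an arriving cell (identified by cell_id) with a virtual finish time.
          void enqueue(int flow, int cell_id, double cell_bits) {
              double start  = std::max(vtime_, last_finish_[flow]);
              double finish = start + cell_bits / weight_.at(flow);
              last_finish_[flow] = finish;
              heap_.push({finish, cell_id});
          }

          // Send the cell with the smallest finish time; returns false if empty.
          bool dequeue(int *cell_id) {
              if (heap_.empty()) return false;
              vtime_ = heap_.top().first;          // advance virtual time
              *cell_id = heap_.top().second;
              heap_.pop();
              return true;
          }

      private:
          using Entry = std::pair<double, int>;    // (finish time, cell id)
          std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> heap_;
          std::map<int, double> weight_, last_finish_;
          double vtime_ = 0.0;
      };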

  7. Input Queuing
  • Provides no support for hard guarantees (QoS).
    • It is difficult to maximize switch throughput while providing these guarantees.
  • But it is simple to implement, with minimal speedup and maximum throughput.
    • Make scheduling decisions using a smart scheduling algorithm.
    • Use iSlip or similar algorithms to achieve high throughput (see the sketch below).
  [Diagram: inputs with virtual output queues connected through the switching fabric to outputs]
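  As referenced above, a minimal single-iteration iSLIP sketch. It shows only the grant/accept round-robin pointer mechanics over per-input VOQ occupancy counts; the function signature and data layout are assumptions, not the PM9312 scheduler.

      // Minimal single-iteration iSLIP sketch (illustrative only).
      // voq[i][j] > 0 means input i has cells queued for output j.
      // Returns match[i] = output matched to input i, or -1 if unmatched.
      #include <vector>
      using std::vector;

      vector<int> islip_iteration(const vector<vector<int>>& voq,
                                  vector<int>& grant_ptr,   // one pointer per output
                                  vector<int>& accept_ptr)  // one pointer per input
      {
          const int N = (int)voq.size();
          vector<int> grant(N, -1);   // grant[j] = input granted by output j
          vector<int> match(N, -1);   // match[i] = output accepted by input i

          // Grant phase: each output grants the first requesting input at or
          // after its round-robin pointer.
          for (int j = 0; j < N; ++j) {
              for (int k = 0; k < N; ++k) {
                  int i = (grant_ptr[j] + k) % N;
                  if (voq[i][j] > 0) { grant[j] = i; break; }
              }
          }

          // Accept phase: each input accepts the first granting output at or
          // after its round-robin pointer; pointers advance only on acceptance.
          for (int i = 0; i < N; ++i) {
              for (int k = 0; k < N; ++k) {
                  int j = (accept_ptr[i] + k) % N;
                  if (grant[j] == i) {
                      match[i] = j;
                      accept_ptr[i] = (j + 1) % N;
                      grant_ptr[j] = (i + 1) % N;
                      break;
                  }
              }
          }
          return match;
      }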

  8. Queuing
  • Output queuing?
    • No!
    • Violates our design goal of scalability.
    • Too expensive.
  • Input queuing?
    • Maybe, if we can do proper QoS with it.
  • Maybe a third option?

  9. QoS with Input Queuing
  • The only way we can imagine this happening is with buffered crossbars.
    • Buffers are provided at each cross-point.
    • The number of buffers is bounded (between 3 and 5 cells).
    • FQ servers at the inputs, the crossbar buffers, and the outputs are used to achieve QoS.
  • The solution is hard to implement.
    • Plus it only provides probabilistic guarantees.
    • Plus we don’t even know if it actually works.

  10. QoS with Input Queuing
  • Some recent work focuses on this aspect:
    • "Implementing Distributed Packet Fair Queueing in a Scalable Switch Architecture", D. C. Stephens
    • "A Distributed Scheduling Architecture for Scalable Packet Switches", F. M. Chiussi and A. Francini
  Is that the best we can do?

  11. CIOQ
  • Combined input-output queuing: combines the benefits of both input and output queuing.
    • Input queuing needs a speedup of 1; output queuing needs a speedup of N.
    • CIOQ sits between 1 and N and needs buffering at both input and output.
  • Requires a speedup of only 2 to emulate an output-queued switch completely.
    • Reasonable given the difficulty of our initial goal.
    • Guarantees the exact same output as an OQ switch.

  12. CIOQ
  • Assigns a value called slackness to each VOQ at the input.
    • Slackness measures the urgency of a packet; a slackness of 0 means high urgency.
  • Inputs and outputs select ports based on a priority calculated by the FQ discipline in use.
    • For FCFS, the packet that arrived earlier gets higher priority.
  • Uses the Gale-Shapley algorithm to compute a stable matching of inputs to outputs after they have been selected in Phase I (see the sketch below).
  • No analysis has been done on the throughput of a CIOQ switch, just an analytical bound on its proximity to an OQ switch.
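  The stable-matching step mentioned above can be illustrated with the classic Gale-Shapley deferred-acceptance procedure. This sketch is generic stable matching, not the actual CIOQ scheduler; the preference lists pref_in and rank table rank_out are assumed to be produced from the FQ priorities computed in Phase I.

      // Gale-Shapley stable matching of inputs to outputs (illustrative).
      // pref_in[i] lists outputs in input i's priority order;
      // rank_out[j][i] is output j's rank for input i (lower = preferred).
      #include <queue>
      #include <vector>
      using std::vector;

      vector<int> stable_match(const vector<vector<int>>& pref_in,
                               const vector<vector<int>>& rank_out)
      {
          const int N = (int)pref_in.size();
          vector<int> next(N, 0);          // next output each input will propose to
          vector<int> out_match(N, -1);    // out_match[j] = input held by output j
          vector<int> in_match(N, -1);     // in_match[i]  = output held by input i
          std::queue<int> free_inputs;
          for (int i = 0; i < N; ++i) free_inputs.push(i);

          while (!free_inputs.empty()) {
              int i = free_inputs.front(); free_inputs.pop();
              if (next[i] >= (int)pref_in[i].size()) continue;  // no outputs left
              int j = pref_in[i][next[i]++];                     // propose to output j
              if (out_match[j] == -1) {
                  out_match[j] = i; in_match[i] = j;             // j was free
              } else if (rank_out[j][i] < rank_out[j][out_match[j]]) {
                  int loser = out_match[j];                      // j prefers input i
                  in_match[loser] = -1; free_inputs.push(loser);
                  out_match[j] = i; in_match[i] = j;
              } else {
                  free_inputs.push(i);                           // j rejects input i
              }
          }
          return in_match;
      }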

  13. CIOQ
  • Some recent work in this area:
    • "Delay-Bound Guarantee in CIOQ Switches", H. Chao, L. Shen Chen
    • "Matching Output Queuing with a CIOQ Switch", S.-T. Chuang, A. Goel, N. McKeown, B. Prabhakar
    • "College Admissions and the Stability of Marriage", D. Gale, L. S. Shapley

  14. ngrSim
  • A simulator for ISIS, the Next Generation Router.
  • ngrSim follows a modular design approach.
    • All components are coded as modules that can be plugged in place of others.
    • This allows easy mix-and-match of various schemes.
  • It is completely event-driven.
    • Everything is driven by the event queue.

  15. Event handling
  • An abstract class handler is defined:

      class handler {
      public:
          handler() { }
          virtual ~handler() { }
          virtual void handle(event *e) = 0;
          void set_next_handler(handler *h) { next_handler = h; }
      protected:
          handler *next_handler;
      };

  • A class event is defined which holds a pointer to a handler (see the sketch below).
    • The handler is the component responsible for the event.
    • When the event is run, the handler's handle function is called.
  • Events are enqueued into an event queue.
  [Diagram: an Event carries a Port ID, Data and a Handler]
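  A hypothetical sketch of the event class and dispatch implied by this slide; only get_next_handler() and handle() appear in the transcript, so the remaining members (port_id_, data_) follow the diagram and are assumptions.

      class event;                       // forward declaration

      class handler {                    // abstract base, as on slide 15
      public:
          virtual ~handler() { }
          virtual void handle(event *e) = 0;
      };

      class event {
      public:
          event(handler *h, int port_id, void *data)
              : next_handler_(h), port_id_(port_id), data_(data) { }
          handler *get_next_handler() const { return next_handler_; }
          int      get_port_id()      const { return port_id_; }
          void    *get_data()         const { return data_; }
      private:
          handler *next_handler_;   // component responsible for this event
          int      port_id_;        // port the event applies to (assumed field)
          void    *data_;           // payload, e.g. a cell or packet (assumed field)
      };

      // Dispatch: running an event means calling its handler's handle() method.
      inline void run_event(event *e) {
          e->get_next_handler()->handle(e);
      }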

  16. Event handling
  • Efficient event-handling capability.
  • Events are queued in a structure called a skip list, developed by William Pugh at the University of Maryland*.
    • A probabilistic list that supports inserts, deletes and searches in expected O(log n) time.
  • Simulation time is kept track of in the event queue (see the sketch below).
    • The event at the head of the event queue is run (its handle function is called) and the simulation time is updated.
  * http://www.cs.umd.edu/~pugh
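  A sketch of the event-queue interface implied by the main loop on the next slide (dequeue() returning the earliest event and get_time() returning the current simulation time). A std::multimap keyed by timestamp stands in for ngrSim's skip list, and the singleton accessor is an assumption.

      // Illustrative event-queue singleton; the multimap is a stand-in for
      // the skip list actually used by ngrSim.
      #include <map>

      class event;   // defined elsewhere (slide 15)

      class event_queue {
      public:
          static event_queue& instance() {          // singleton accessor
              static event_queue q;
              return q;
          }
          void enqueue(double time, event *e) { pending_.insert({time, e}); }

          // Pop the earliest event and advance simulation time to its timestamp.
          event *dequeue() {
              if (pending_.empty()) return nullptr;
              auto it = pending_.begin();
              now_ = it->first;
              event *e = it->second;
              pending_.erase(it);
              return e;
          }
          double get_time() const { return now_; }

      private:
          std::multimap<double, event*> pending_;   // sorted by timestamp
          double now_ = 0.0;
      };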

  17. Main Loop for ngrSim
  • Initialize the switch and declare the various components — all read from a configuration file.
  • Start the periodic fabric pull and the traffic generation events.
  • Main loop:
    • Get the event at the head of the queue. This updates the system time as well.
    • Call the handle function.
    • Check if the simulation has ended.
  • Print the statistics and draw graphs.

      // Initialize the components tgen, ipp,
      // framer, sched, fab, reframer, opp
      ((crossbar*)fab)->start();
      ((tgen*)tg)->start();
      while (1) {
          event* e = (eq.instance()).dequeue();
          if (e == NULL) {                 // guard against a NULL dereference below
              if (!sim_running) break;
              continue;
          }
          (e->get_next_handler())->handle(e);
          if ((eq.instance()).get_time() > simTime) {
              sim_running = 0;
              ((tgen*)tg)->stop();
              ((crossbar*)fab)->stop();
          }
      }
      // print stats

  18. Design of ngrSim
  • The design follows exactly the one presented in our earlier design review.

  19. Object Diagram for ngrSim
  [Diagram: TGEN generates variable-length packets → IPP (route lookup, checksum, etc.) → FRAMER breaks packets into cells (64 bytes, 6-byte fixed header) → Scheduler (FIFO, iSlip, CIOQ) → Fabric (Crossbar) → REFRAMER assembles cells back into packets → OPP (simpleOPP, cioqOPP) uses FQ as well and puts packets on the outgoing link]

  20. Description of the components
  • Packets are generated in the TGEN module and are passed as events to the IPP, Framer and Scheduler.
  • The Framer breaks packets into cells.
    • Cells are 64 bytes with 6 bytes of header (see the header sketch below):
      • 2-byte input port number + 2-byte output port number = 4 bytes
      • 1 byte for the cell ID
      • 1 byte for flags + priority
  • Scheduler is an abstract class; all implemented schedulers derive from this base class.
    • We have FCFS, iSlip and CIOQ implemented.
  • The scheduler does not push cells to the fabric; the fabric runs on a fixed time slot determined by the cell size and line speed.
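  A hypothetical C++ layout of the 6-byte cell header described above; the field names, the packing, and the nibble split for flags/priority are assumptions, not ngrSim's actual definitions.

      // 6-byte cell header: 2-byte input port, 2-byte output port,
      // 1-byte cell ID, 1-byte flags/priority (assumed layout).
      #include <cstdint>

      #pragma pack(push, 1)
      struct CellHeader {
          uint16_t in_port;     // input port number
          uint16_t out_port;    // output port number
          uint8_t  cell_id;     // position of this cell within its packet
          uint8_t  flags_prio;  // e.g. low nibble = flags, high nibble = priority
      };
      #pragma pack(pop)

      static_assert(sizeof(CellHeader) == 6, "header must be 6 bytes");

      struct Cell {
          CellHeader hdr;
          uint8_t    payload[58];   // 64-byte cell minus the 6-byte header
      };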

  21. Description of the components
  • The fabric pulls cells from the scheduler at these regular intervals.
  • Fabric is also an abstract class; all fabrics derive from this class.
    • We currently have a crossbar fabric implemented.
  • The fabric sends the cells to the reframer.
  • The reframer reassembles cells into packets (see the sketch below).
    • If a cell is not received within a certain time limit, the partial packet is discarded.
  • The reframer passes the reassembled packets to the OPP.
  • The OPP queues the packets and sends them out at the link rate.
    • We have simpleOPP and cioqOPP implemented.
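  An illustrative sketch of reframer reassembly with a timeout, as described above; the (input port, packet ID) key, the last-cell flag, and the timeout handling are assumptions about ngrSim's actual code, and in-order cell arrival per packet is assumed.

      #include <cstdint>
      #include <iterator>
      #include <map>
      #include <utility>
      #include <vector>

      struct Cell { uint16_t in_port; uint32_t packet_id; bool last;
                    std::vector<uint8_t> payload; };
      struct Packet { std::vector<uint8_t> data; };

      class Reframer {
      public:
          explicit Reframer(double timeout) : timeout_(timeout) { }

          // Returns true and fills *out when the last cell of a packet arrives.
          bool on_cell(const Cell& c, double now, Packet *out) {
              Key k{c.in_port, c.packet_id};
              auto it = partial_.find(k);
              if (it == partial_.end())                       // first cell of packet
                  it = partial_.insert({k, Partial{{}, now + timeout_}}).first;
              Partial& p = it->second;
              p.data.insert(p.data.end(), c.payload.begin(), c.payload.end());
              if (c.last) {
                  out->data = std::move(p.data);
                  partial_.erase(it);
                  return true;
              }
              return false;
          }

          // Drop partially reassembled packets whose deadline has passed.
          void expire(double now) {
              for (auto it = partial_.begin(); it != partial_.end(); )
                  it = (it->second.deadline < now) ? partial_.erase(it) : std::next(it);
          }

      private:
          using Key = std::pair<uint16_t, uint32_t>;          // (input port, packet id)
          struct Partial { std::vector<uint8_t> data; double deadline; };
          std::map<Key, Partial> partial_;
          double timeout_;
      };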

  22. Statistics
  • All modules keep statistics.
    • They compute them independently and can be queried for them.
  • We keep track of:
    • cell and packet counts
    • buffer sizes
    • delays
    • drops
  • We ran experiments for 100 million clock ticks.
    • Each data point is the result of 10 runs of the same experiment with different random seeds.
    • Values were collected only after steady state was reached.
    • Due to time constraints, we could only run with 16 ports.

  23. Throughput (FIFO)
  [Plot: theoretical bound = 58%; scheduler throughput is 45%; overall throughput is 30%]

  24. Throughput (speedup = 1.25)
  [Plot: theoretical bound = 58%; scheduler throughput is 55%; overall throughput is 40%]

  25. FIFO with different Speedup

  26. FIFO with different Speedup

  27. Scheduling Algorithm Throughput

  28. Scheduling Algorithm: Delay

  29. Buffer sizes with iSlip

  30. Scheduling Algorithm: iSlip performs worse than CIOQ

  31. To make ISIS a reality …
  • Physical specifications of the base design
    • Chassis height: 10 in., including the power module shelf (AC or DC)
    • Chassis width: 17.25 in., not including rack-mount flanges; can be rack mounted in 19 or 22 in. racks
    • Chassis depth: 18 in., not including the cable management system
    • Chassis weight: approximately 50 lbs., depending on configuration
  • Standards compliance
    • Safety: UL1950, IEC60950, IEC60825, TS001, AS/NZS 3260
    • Electromagnetic emissions: FCC Class A, ICES-003 Class A, EN55022 Class B, VCCI Class B, AS/NZS 3548 Class B
    • NEBS: SR-3580 Level 3 compliant

  32. Chassis Configuration
  • The modular design separates the line card module from the switch fabric module.
  • Redundant power supplies and fabrics can be used to ensure robustness.
  • The compact design allows flexibility in rack placement.
  • The modules are connected through a proprietary fiber interconnect using the LCS protocol*.
  * http://www.pmcsierra.com/products/details/pm9311/

  33. That’s cute … but show me something big!
  • The PM9311 chipset has 10 Gb/s channels that can be used to connect to other switching modules or line cards over fiber using the LCS protocol.
  • So instead of using one 320 Gb/s switch, we can use more than one.
    • We can extend the capacity of the switch by connecting many switch modules in a structure.
    • The easiest structure we can think of is a mesh.
    • Other possibilities include a hypercube.

  34. ISIS-A: ISIS with an attitude
  [Diagram: a routing/switching module with 320 Gb/s capacity and a 160 Gb/s line card module, with 16 10 Gb/s channels to other switching modules; 16 such switches connected in a full mesh through 10 Gb/s fiber channels give 1024 OC-48 ports (2.5 Tb/s)]

  35. ISIS-A: Routing
  • Based on the source and destination addresses, line cards route to:
    • ports on the card itself (through the line card)
    • other line cards on the same module (through the fabric)
    • other switching modules (through the interconnect)
  • Routing between switching modules is based on modified hot-potato routing driven by queue lengths (see the sketch below).
    • The queues only monitor the bandwidth utilization of the channels between modules.
    • Built into the chipset.
    • Send to the recipient directly if its queue is not loaded.
    • Otherwise, send via the module with the least loaded queue.
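  A minimal sketch of the modified hot-potato choice described above: use the direct channel if its queue is below a threshold, otherwise deflect via the least-loaded channel. The threshold parameter and the queue-length vector are assumptions; the real mechanism is built into the chipset.

      // queue_len[m] = queued cells on the channel from this module to module m.
      // Returns the next-hop module for traffic destined to module `dest`.
      #include <vector>

      int pick_next_module(const std::vector<int>& queue_len, int dest,
                           int self, int threshold)
      {
          if (queue_len[dest] <= threshold)
              return dest;                      // direct channel is not loaded

          int best = dest;
          for (int m = 0; m < (int)queue_len.size(); ++m) {
              if (m == self || m == dest) continue;
              if (queue_len[m] < queue_len[best]) best = m;   // deflect via least-loaded
          }
          return best;
      }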

  36. ISIS-A: Routing and QoS
  • Routing is more like load balancing, only more intelligent: it controls delays.
  • For QoS:
    • Some bandwidth is reserved for guaranteed traffic (max 10 Gb/s between two modules).
    • If such traffic arrives, it is sent directly to the recipient (one link) using this reserved bandwidth.
    • If there is no such traffic, the bandwidth is used for normal traffic.
  • Requires modification of the scheduler (which is programmable).
    • Addressing has to be universal.
    • Routing between modules has to be added.

  37. Future Work
  • Short term
    • Run more experiments with different parameters.
    • Follow up on the scalability options.
    • Get statistical significance for the results.
  • Long term
    • Implement other scheduling algorithms.
    • Implement other fabric types.
    • Explore other possibilities for QoS with IQ switches.
