
Buffer-less Switch Fabric Architectures
Vahid Tabatabaee, Fall 2006



  1. Buffer-less Switch Fabric Architectures Vahid Tabatabaee Fall 2006

  2. References
  • Light Reading Report on Switch Fabrics, available online at: http://www.lightreading.com/document.asp?doc_id=25989
  • P. C. Lekkas, Network Processors: Architectures, Protocols, and Platforms, McGraw-Hill.
  • I. Elhanany, D. Chiou, V. Tabatabaee, R. Noro, A. Poursepanj, "The Network Processing Forum Switch Fabric Benchmark Specifications: An Overview", IEEE Network Magazine, March/April 2005.

  3. Buffer-less Switching Element
  • There is no major buffering in the switching element; the only buffering is for alignment of the cells.
  • After alignment, incoming cells are switched simultaneously to the output ports (see the sketch below).
  • The performance of the switch depends heavily on the scheduling algorithm.
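A minimal sketch of what the buffer-less element does in each time slot: it applies one input-to-output matching to the aligned cells. The cell names and the 2x2 matching below are made-up examples, not from the slides.

```python
# Minimal sketch of a buffer-less crossbar: in each time slot, one
# input-to-output matching is applied to the aligned cells. Nothing is
# stored; an input not covered by the matching simply does not transmit.

def crossbar(cells, matching):
    """cells[i]: cell at input i (or None); matching: dict input -> output."""
    outputs = [None] * len(cells)
    for inp, out in matching.items():
        outputs[out] = cells[inp]  # all transfers happen in the same slot
    return outputs

# 2x2 example: input 0 -> output 1, input 1 -> output 0
print(crossbar(["red cell", "blue cell"], {0: 1, 1: 0}))
```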

  4. Switching Element Architecture

  5. Data flow in the switching element
  • Cells are continuously sent from the line card to the switch card and from the switch card to the line card.
  • Transmitted cells may not carry valid data.
  • The switch scheduler decides on the connections between input and output ports, and then sends the corresponding commands to the line interface chips.
  • The line interface chip sends one cell destined to the designated output port to the switch.
  • The switching element needs some information about the backlogged cells at the input ports.
  • The line card interface needs to know its designated output port for the next time slot.
  • The information in the last two bullets is carried in the cell headers: from the line interface to the switch, and from the switch to the line interface, respectively.

  6. Why do we need cell alignment?
  • Consider a simple 2x2 switch.
  • Red cells are destined to output 1 and blue cells to output 2.
  • We need cell alignment if the line cards are not equally distant from the switch cards.

  7. Why do we need cell alignment?
  • If the cells are not aligned, we may end up switching cells to the wrong destination, or with contention between cells going to the same destination.

  8. Why do we need cell alignment?
  • We can buffer the cells in either the switch chip or the line card to artificially equalize the distances.

  9. Switch Throughput
  • Throughput is the maximum normalized traffic rate between the line card and the switch card.
  • Throughput cannot be larger than one.
  • Throughput is usually demonstrated by a plot of average delay versus normalized rate.
  • Theoretically the curve looks like a hockey stick: delay stays low until the load approaches the throughput limit, then grows sharply (see the sketch below).
  • In practice, since buffering is limited, the delay curve saturates.
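The slide gives no formula, but the hockey-stick shape can be illustrated with the classic normalized M/M/1 mean-delay expression T = 1/(1 - rho); this is an analogy of my choosing, not the switch's actual delay curve.

```python
# Illustrative only: the "hockey stick" delay curve via the normalized
# M/M/1 mean-delay formula T = 1 / (1 - rho). Real switch curves differ
# in detail but show the same sharp knee near the throughput limit.
for rho in (0.10, 0.50, 0.80, 0.90, 0.95, 0.99):
    print(f"load {rho:.2f} -> mean delay {1 / (1 - rho):6.1f} time slots")
```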

  10. What causes throughput limitation
  • If there is no contention between the input and output ports, throughput can reach 100%.
  • Due to contention, some ports can remain idle even though they have cells to send or receive.
  • The scheduling algorithm decides on the input-output connections and resolves contention.
  • Therefore, the scheduling algorithm determines the throughput of a switch.

  11. Scheduling Problem
  • The scheduling algorithm specifies the input-output connections and thereby resolves contention.
  • We can model a switch as a bipartite graph.
  • There are two sets of nodes, corresponding to the input and output ports.
  • There is a link between two nodes if there is a buffered cell for that connection (see the sketch below).
  • The scheduling algorithm finds a matching in this bipartite graph.
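A small sketch of the request graph built from virtual output queue (VOQ) occupancies; the queue counts are made-up examples.

```python
# Bipartite request graph sketch: an edge (i, j) exists whenever input i
# holds at least one buffered cell destined to output j.
voq = [[4, 0, 2],   # voq[i][j] = backlogged cells at input i for output j
       [0, 2, 3],
       [2, 0, 0]]

edges = [(i, j) for i, row in enumerate(voq) for j, q in enumerate(row) if q > 0]
print(edges)  # the scheduler must choose a matching from these edges
```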

  12. 100% Throughput Scheduling
  • Is it possible to achieve 100% throughput with crossbar-based schedulers?
  • Yes: we can achieve 100% throughput with maximum weighted matching (MWM).
  • Each link has a weight equal to the number of backlogged cells.
  • We find the matching with the maximum total weight (see the sketch below).
  • This is guaranteed to achieve 100% throughput.
  [Figure: bipartite request graph with link weights; MWM highlights the maximum-weight matching]
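A sketch of MWM scheduling; the VOQ matrix is made up, and using SciPy's assignment solver is my implementation choice, not something the slides prescribe.

```python
# MWM sketch: link weight = queue occupancy; pick the matching with the
# largest total weight. SciPy's assignment solver does exactly this on a
# weight matrix (zero-weight pairings are filtered out afterwards).
import numpy as np
from scipy.optimize import linear_sum_assignment

W = np.array([[4, 0, 2],
              [0, 2, 3],
              [2, 0, 0]])  # W[i][j] = backlogged cells at input i for output j

rows, cols = linear_sum_assignment(W, maximize=True)
match = [(i, j) for i, j in zip(rows, cols) if W[i, j] > 0]
print(match, "total weight:", sum(W[i, j] for i, j in match))
```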

  13. Alternative 100% Throughput Algorithms
  • Several algorithms achieve 100% throughput:
  • Maximum Weighted Matching (MWM): maximizes the total weight of the links; O(N^3) complexity.
  • Longest Port First (LPF): maximizes the total weight of the nodes; O(N^3) complexity.
  • Maximum Node Containing Matching (MNCM): includes all nodes whose weight is greater than (1 - 1/N) times the maximum node weight; O(N^2.5) complexity.
  [Figure: the same request graph scheduled by MWM, LPF, and MNCM]

  14. Practical Approaches
  • These algorithms are not amenable to hardware implementation.
  • Instead, we use simple algorithms that can be implemented in hardware.
  • To compensate for their lower performance, we make the switch run faster than the line card (speedup).
  • It has been proved that any maximal size matching with a speedup of 2 achieves 100% throughput (see the sketch below).
  • A matching is maximal if it is not possible to add any more links to it.
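A greedy maximal-matching sketch; the edge list and its ordering are made up. It also shows why speedup is needed: a maximal matching can be smaller than the maximum one.

```python
# Greedy maximal matching: add request edges as long as neither endpoint is
# already used. The result cannot be extended, but it may be smaller than a
# maximum matching; this is the gap the 2x speedup compensates for.
def maximal_matching(edges):
    used_in, used_out, match = set(), set(), []
    for i, j in edges:
        if i not in used_in and j not in used_out:
            match.append((i, j))
            used_in.add(i)
            used_out.add(j)
    return match

edges = [(0, 0), (0, 2), (1, 1), (1, 2), (2, 0)]
print(maximal_matching(edges))  # [(0, 0), (1, 1)]: maximal, yet the size-3
                                # matching (0,2), (1,1), (2,0) also exists
```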

  15. iSLIP Scheduling Algorithm
  • There is an arbiter associated with every input and output node.
  • Every arbiter receives up to N active signals and selects one of them using a round-robin scheduler (see the sketch below).
  • Every output arbiter receives a request signal from each input that has a backlogged cell destined to it.
  • It grants the first request after the previously ACCEPTED grant.
  • Input arbiters accept the first grant after the previously accepted grant.
  • Every arbiter has a pointer that points to the previously accepted port.
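A round-robin arbiter sketch, as used in iSLIP's grant and accept stages; the request vector in the example is made up.

```python
# Round-robin arbiter: pick the first active request at or after the
# pointer, wrapping around. (The pointer is kept one position past the
# previously accepted port, so that port has the lowest priority next time.)
def rr_arbiter(requests, pointer):
    n = len(requests)
    for k in range(n):
        port = (pointer + k) % n
        if requests[port]:
            return port
    return None  # no active requests

print(rr_arbiter([False, True, False, True], pointer=2))  # -> 3
```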

  16. Arbiter Connections
  [Figure: input arbiters connected to output arbiters]

  17. Inside an Arbiter

  18. Multiple Iterations
  • We can increase the matching size by performing multiple iterations.
  • The arbiter pointers are updated only for matches made in the first iteration.
  • The grant and accept arbiters can each perform their function in one clock cycle.
  • If we want to do k iterations, we need 2k clock cycles without pipelining.
  • We can pipeline the work and reduce the time required (see the sketch below).
  [Figure: pipeline of Grant1, Accept1, Grant2, Accept2, Grant3, Accept3 stages]
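A sketch of k-iteration iSLIP; the request matrix and initial pointers are made up, and the arbiter repeats the one from the previous sketch so the block is self-contained.

```python
# k-iteration iSLIP sketch. Pointers advance only for matches made in the
# first iteration, as the slide describes.
def rr_arbiter(requests, pointer):             # same arbiter as above
    n = len(requests)
    for k in range(n):
        if requests[(pointer + k) % n]:
            return (pointer + k) % n
    return None

def islip(req, grant_ptr, accept_ptr, iterations=2):
    n = len(req)
    matched_in, matched_out, match = set(), set(), {}
    for it in range(iterations):
        grants = {}                            # input -> outputs that granted it
        for out in range(n):                   # grant phase (one per output)
            if out in matched_out:
                continue
            g = rr_arbiter([req[i][out] and i not in matched_in
                            for i in range(n)], grant_ptr[out])
            if g is not None:
                grants.setdefault(g, []).append(out)
        for inp, outs in grants.items():       # accept phase (one per input)
            a = rr_arbiter([o in outs for o in range(n)], accept_ptr[inp])
            if a is not None:
                match[inp] = a
                matched_in.add(inp)
                matched_out.add(a)
                if it == 0:                    # pointer update: first iteration only
                    grant_ptr[a] = (inp + 1) % n
                    accept_ptr[inp] = (a + 1) % n
    return match

req = [[1, 1, 0], [1, 0, 1], [0, 1, 0]]        # req[i][j]: input i has a cell for j
print(islip(req, [0, 0, 0], [0, 0, 0]))        # {0: 0, 1: 2, 2: 1}
```

The second iteration picks up the leftover pair (input 2, output 1) that the first iteration's round-robin choices left unmatched, which is exactly how iterating grows the matching.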

  19. iSLIP Throughput and arrival process
  • Good performance for uniform traffic; degraded performance for non-uniform traffic.
  • In general, the performance of a switch depends on the characteristics of the input data.
  • Three characteristics are important:
  • Arrival pattern (see the sketch below):
    • Uniform: usually modeled as Bernoulli i.i.d. arrivals; at each time slot there is a probability p of a new arrival.
    • Non-uniform: usually modeled with a two-state Markov chain; in the ON state we keep generating packets, and in the OFF state no packet is generated.
  • Packet length: the number of bytes in generated packets.
  • Load distribution: the destinations of packets generated at each input.
    • Uniform: packets are divided among destinations with equal probability.
    • Non-uniform: some destinations are more probable (hot spots).
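A sketch of the two arrival models named above; all parameter values are made-up examples.

```python
# Bernoulli i.i.d. and two-state (ON/OFF) Markov arrival generators.
import random

def bernoulli_arrivals(p, slots):
    """Uniform model: independent arrival with probability p in each slot."""
    return [random.random() < p for _ in range(slots)]

def on_off_arrivals(p_on_off, p_off_on, slots):
    """Bursty model: two-state Markov chain; cells arrive only in ON state."""
    state, out = "OFF", []
    for _ in range(slots):
        out.append(state == "ON")
        if state == "ON" and random.random() < p_on_off:
            state = "OFF"
        elif state == "OFF" and random.random() < p_off_on:
            state = "ON"
    return out

print(sum(bernoulli_arrivals(0.3, 10_000)) / 10_000)    # close to 0.3
print(sum(on_off_arrivals(0.1, 0.1, 10_000)) / 10_000)  # close to 0.5
```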

  20. Typical uniform-traffic throughput curve
  Source: http://tiny-tera.stanford.edu/~nickm/papers/adisak_thesis.pdf

  21. Typical non-uniform-traffic throughput curve
  Source: http://tiny-tera.stanford.edu/~nickm/papers/adisak_thesis.pdf

  22. Benchmarking & Comparison of Switch Fabrics
  • How should we compare switch fabrics?
  • First, compare the general design parameters.
  • Second, compare the performance of the fabrics.

  23. Primary Design Parameters
  • Switch Architecture
  • Guaranteed Latency
  • TDM Support
  • Sub-ports per 10-Gbit/s Line Interface
  • Traffic Flows per 10-Gbit/s Port
  • Frame Payload (Bytes)
  • Frame Distribution Across Fabric
  • Fabric Overspeed
  • Backplane Link Speed
  • Backplane Links per 10-Gbit/s Port
  • Redundancy Modes
  • Host Interface
  • Switching Capacity
  • Sample Availability
  • NPU/TM Interfaces
  • Integrated Traffic Management
  • Power (per 10 Gbit/s)
  • Price (per 10 Gbit/s)
  • Integrated Linecard SerDes
  • 160-Gbit/s Device Count
  • 160-Gbit/s (with 1:1 Redundancy) Device Count
  • 640-Gbit/s Device Count
  • 640-Gbit/s (with 1:1 Redundancy) Device Count

  24. Performance Benchmarking
  • Traffic Modeling
  • Performance Metrics
  • Benchmark Suites

  25. Traffic Modeling
  • Destination distribution:
  • The Zipf law has been proposed to model a non-uniform traffic distribution among destinations: the probability of choosing destination j is proportional to 1/j^k.
  • k = 0 corresponds to uniform traffic.
  • k = infinity corresponds to a single, completely preferred destination.
  • Typically k varies from 0 to 5 (see the sketch below).
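A sketch of sampling destinations from a Zipf distribution; the port count and k value are made-up examples.

```python
# Zipf destination distribution: destination j in {1..N} is drawn with
# probability proportional to 1 / j**k. k = 0 gives uniform traffic;
# large k concentrates the load on destination 1 (a hot spot).
import random

def zipf_destination(n, k):
    weights = [1.0 / (j ** k) for j in range(1, n + 1)]
    return random.choices(range(1, n + 1), weights=weights)[0]

draws = [zipf_destination(8, k=2) for _ in range(10_000)]
print({d: draws.count(d) for d in range(1, 9)})  # heavily skewed toward 1
```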

  26. Traffic Modeling
  • Packet arrival process:
    • Bernoulli i.i.d. arrivals
    • ON/OFF model
    • ON/OFF model with non-delimited burst streams
    • ON/OFF model with minimum burst size
  • Multicast:
    • Multiplicity factor: realistically should not exceed 10, with an average value of 2-4.
    • Distribution of the destinations
  • QoS:
    • Distribution of the traffic among a number of classes

  27. Performance Metrics
  • Fabric latency: latency between points 2 and 3.
  • Total latency: latency between points 1 and 3.
  • Accepted vs. offered bandwidth: the number of cells the fabric accepts at point 2 divided by the number of frames offered to it at point 1.
  • Jitter: the difference between the time interval separating a pair of consecutive cells of the same flow at the ingress and the corresponding interval at the egress (see the sketch below).
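A sketch of the jitter metric as defined above, computed on made-up timestamps for one flow.

```python
# Jitter: compare the spacing of consecutive same-flow cells at the
# ingress with their spacing at the egress (example timestamps only).
ingress = [0, 10, 20, 30, 40]   # cell times entering the fabric
egress  = [5, 16, 25, 37, 45]   # the same cells leaving the fabric

gaps_in  = [b - a for a, b in zip(ingress, ingress[1:])]
gaps_out = [b - a for a, b in zip(egress, egress[1:])]
jitter   = [o - i for i, o in zip(gaps_in, gaps_out)]
print(jitter)  # [1, -1, 2, -2]: each egress gap's deviation from its ingress gap
```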

  28. Benchmark Suites
  • Hardware benchmarks:
  • Memory speed, processing speed, port-to-port minimum latency, switch fabric overhead, internal cell size, etc.
  • In these tests there is no contention between packets, to minimize scheduling and arbitration impacts.
  • Examples: zero-load latency, maximum port load.

  29. Benchmark Suites
  • Arbitration benchmarks:
  • Study the performance of the fabric when there is contention.
  • Performance is studied for different traffic patterns and destination load distributions.
