
TCP/IP over ATM using ABR, UBR, and GFR Services



  1. TCP/IP over ATM using ABR, UBR, and GFR Services
  Raj Jain
  The Ohio State University, Columbus, OH 43210
  Jain@cse.ohio-State.Edu
  http://www.cse.ohio-state.edu/~jain/

  2. Overview
  • Why ATM?
  • ABR: Binary and Explicit Feedback
  • ABR vs UBR
  • TCP/IP over UBR
  • TCP/IP over GFR
  • ATM Research at OSU

  3. Why ATM? ATM vs IP: Key Distinctions
  • Traffic management: explicit-rate vs loss-based
  • Signaling: coming to IP in the form of RSVP
  • PNNI: QoS-based routing; QOSPF, Integrated/Differentiated Services on the IP side
  • Switching: coming soon to IP in the form of MPLS
  • Cells: fixed or small cell size is not the important distinction

  4. ABR: Binary vs Explicit Rate
  • DECbit scheme in 1986: a single bit ⇒ go up/down
  • Used in Frame Relay (FECN) and ATM (EFCI)
  • In July 1994, we proposed the explicit-rate approach: sources send one RM cell every n cells; the switches adjust the explicit-rate field down.
  [Figure: RM cell fields, including EFCI, Current Cell Rate, and Explicit Rate]
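The explicit-rate feedback loop described on this slide can be sketched in a few lines. This is an illustrative sketch, not the talk's algorithm: the class and function names, the spacing constant `NRM`, and all rate values are assumptions.

```python
# Sketch of explicit-rate feedback: the source emits one RM cell every NRM
# data cells; each switch along the path may lower the ER field, never raise it.

NRM = 32  # assumed spacing: one RM cell per 32 data cells

class RMCell:
    def __init__(self, current_cell_rate, explicit_rate):
        self.ccr = current_cell_rate  # source's current cell rate (cells/s)
        self.er = explicit_rate       # starts at the source's peak rate

def switch_mark(rm, fair_share):
    """A switch reduces ER to its locally computed fair share, never raises it."""
    rm.er = min(rm.er, fair_share)
    return rm

# The returning RM cell carries the minimum fair share along the path,
# i.e. the bottleneck's allocation.
rm = RMCell(current_cell_rate=100_000, explicit_rate=353_208)  # ~OC-3 cell rate
for fair_share in (200_000, 80_000, 150_000):  # three switches on the path
    switch_mark(rm, fair_share)
print(rm.er)  # 80000: the bottleneck's fair share
```

The source then adjusts its rate toward the returned ER, which is far more informative than a single congestion bit.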

  5. Why Explicit Rate Indication?
  • Longer-distance networks can't afford too many round trips ⇒ more information per feedback is better
  • Rate-based control: queue length = rate × time, so timeliness is more critical than with window-based control
  • Note: Explicit Congestion Notification (ECN) in IP is binary and applies only to TCP.
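The "queue length = rate × time" point can be made concrete with back-of-the-envelope arithmetic. All figures below are illustrative assumptions, not numbers from the talk:

```python
# With rate-based control, an overload that persists for one round trip
# builds queue = (excess rate) x (time before feedback takes effect).

link_rate = 353_208                  # approx. OC-3 payload rate, cells/s (assumed)
aggregate_input = 1.5 * link_rate    # sources overshoot capacity by 50% (assumed)
rtt = 0.030                          # 30 ms before feedback reaches the sources

excess = aggregate_input - link_rate
queue_cells = excess * rtt
print(round(queue_cells))            # ~5298 cells queued in a single RTT
```

This is why slow binary feedback, which needs many round trips to converge, is costly on long paths.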

  6. Internet Protocols over ATM
  • The ATM Forum has designed the ABR service for data
  • The UBR service provides no feedback or guarantees
  • The Internet Engineering Task Force (IETF) prefers UBR for TCP

  7. ABR vs UBR
  • ABR: queues build in the source; pushes congestion to the edges; good if end-to-end ATM; fair; works for all protocols
  • UBR: queues build in the network; no backpressure; same whether end-to-end or backbone; generally unfair; works with TCP
  [Figure: source-to-destination paths through routers and an ATM network]

  8. Improving Performance of TCP over UBR
  • TCP end-system policies:
    • Vanilla TCP: slow start and congestion avoidance
    • TCP Reno: fast retransmit and recovery (FRR)
    • Selective Acknowledgments (SACK)
  • ATM switch drop policies:
    • Tail drop
    • Early Packet Discard (EPD)
    • Per-VC accounting: Selective Drop / Fair Buffer Allocation (FBA)
    • Minimum rate guarantees: per-VC queuing
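Early Packet Discard, one of the switch drop policies above, is simple enough to sketch. This is a minimal illustration with assumed names and threshold values, not a vendor implementation:

```python
# Early Packet Discard (EPD): when the FIRST cell of an AAL5 frame arrives
# and the queue already exceeds a threshold, the entire frame is discarded.
# This avoids buffering partial frames that TCP would discard anyway.

BUFFER_SIZE = 1000    # buffer capacity in cells (assumed)
EPD_THRESHOLD = 800   # discard threshold in cells (assumed)

class EPDQueue:
    def __init__(self):
        self.occupancy = 0
        self.dropping = {}  # vc -> True while the current frame is being discarded

    def on_cell(self, vc, first_of_frame, last_of_frame):
        if first_of_frame:
            # The EPD decision is made once per frame, at its first cell.
            self.dropping[vc] = self.occupancy > EPD_THRESHOLD
        if self.dropping.get(vc) or self.occupancy >= BUFFER_SIZE:
            return False    # cell discarded
        self.occupancy += 1
        return True         # cell enqueued
```

For example, a frame arriving while the queue holds 900 cells is discarded whole, rather than having its tail cells dropped mid-frame.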

  9. Policies
  • End-system policies: No FRR (vanilla), Reno (FRR), New Reno, SACK + FRR
  • Switch policies: No EPD, Plain EPD, Selective Drop, EPD + Fair Buffer Allocation

  10. Policies: Results
  • In LANs, switch improvements (PPD, EPD, SD, FBA) have more impact than end-system improvements (slow start, FRR, New Reno, SACK). Different variations of increase/decrease have little impact due to small window sizes.
  • In large bandwidth-delay networks, end-system improvements have more impact than switch-based improvements.
  • FRR hurts in large bandwidth-delay networks.

  11. Policies (Continued)
  • Fairness depends upon the switch drop policies, not on the end-system policies.
  • In large bandwidth-delay networks: SACK helps significantly; switch-based improvements have relatively less impact than end-system improvements; fairness is not affected by SACK.
  • In LANs: previously retransmitted holes may have to be retransmitted again on a timeout ⇒ SACK can hurt under extreme congestion.

  12. Guaranteed Frame Rate (GFR)
  • UBR with a minimum cell rate (MCR), also known as UBR+
  • Frame-based service: complete frames are accepted or discarded in the switch
  • Traffic shaping is frame based: all cells of a frame have CLP=0 or all have CLP=1
  • Frames below MCR are given CLP=0 service; frames above MCR are given best-effort (CLP=1) service.
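The last bullet, tagging whole frames as CLP=0 or CLP=1 depending on conformance to the MCR, can be illustrated with a simple token bucket. The class name, the bucket-based conformance test, and all parameter values are assumptions for illustration only:

```python
# Frame-based tagging for GFR: a token bucket refills at the MCR; a frame
# that finds enough tokens keeps CLP=0 (eligible for the rate guarantee),
# otherwise the WHOLE frame is tagged CLP=1 (best-effort only).

class GFRTagger:
    def __init__(self, mcr_cells_per_s, bucket_cells):
        self.mcr = mcr_cells_per_s
        self.bucket = bucket_cells   # burst tolerance in cells (assumed)
        self.tokens = bucket_cells
        self.last = 0.0

    def tag_frame(self, arrival_time, frame_cells):
        # Refill tokens at the MCR, capped at the bucket size.
        self.tokens = min(self.bucket,
                          self.tokens + (arrival_time - self.last) * self.mcr)
        self.last = arrival_time
        if self.tokens >= frame_cells:
            self.tokens -= frame_cells
            return 0   # CLP=0: within MCR
        return 1       # CLP=1: excess traffic

tagger = GFRTagger(mcr_cells_per_s=1000, bucket_cells=100)
print(tagger.tag_frame(0.0, 50))   # 0: within MCR
print(tagger.tag_frame(0.0, 80))   # 1: bucket exhausted, frame tagged
```

Tagging per frame, rather than per cell, keeps all cells of a frame at the same priority, which is exactly the frame-based shaping the slide describes.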

  13. Guaranteed Rate Service
  • Guaranteed Rate (GR): reserve a small fraction of bandwidth for the UBR class.
  • GR: per-class reservation, per-class scheduling, no new signaling, can be done now
  • GFR: per-VC reservation, per-VC accounting/scheduling, needs new signaling, in TM4+

  14. Guaranteed Rate: Results
  • Guaranteed rate is helpful in WANs: the effect of reserving 10% of the bandwidth for UBR is greater than that obtained by EPD, SD, or FBA.
  • For LANs, guaranteed rate is not so helpful; drop policies are more important.

  15. GFR: Results
  • Per-VC queuing and scheduling is sufficient for per-VC MCR.
  • FBA and proper scheduling are sufficient for fair allocation of excess bandwidth.
  • Open questions: How and when can we provide the MCR guarantee with a single FIFO? What if each VC contains multiple TCP flows?
  [Figure: per-VC queues vs a single FIFO queue]

  16. Differential Fair Buffer Allocation
  [Figure: throughput, delay, and buffer occupancy (X) vs load, with the knee (L) and the cliff (H) marked; the desired operating region lies around the knee.]

  17. DFBA (contd.)
  • Two thresholds on the total queue X: low threshold L and high threshold H.
  • X < L: accept all frames.
  • L ≤ X ≤ H: the decision depends on the ith VC's normalized queue Xi(W/Wi). A VC below its fair share drops only low-priority frames and keeps high-priority frames; a VC above its fair share drops all low-priority frames and drops high-priority frames with probability P(), which controls the TCP rate.
  • X > H: drop all frames.
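The DFBA regions sketched above can be written as a single drop-decision function. This is a hedged reconstruction: the function name, parameter names, and the default drop probability are assumptions, and P() is reduced to a constant for illustration.

```python
# Illustrative DFBA drop decision. X: total buffer occupancy; L, H: low and
# high thresholds; xi: VC i's queue; fair_share_i: its weighted fair share;
# clp: frame priority (0 = high, 1 = low/tagged).

def dfba_accept(X, L, H, xi, fair_share_i, clp, drop_prob=0.5, rand=lambda: 0.0):
    if X < L:
        return True          # underloaded: accept everything
    if X > H:
        return False         # overloaded: drop all incoming frames
    # Between the thresholds, discriminate by priority and per-VC share.
    if clp == 1:
        return False         # drop tagged (low-priority) frames
    if xi > fair_share_i:
        return rand() >= drop_prob   # over fair share: drop CLP=0 probabilistically
    return True              # under fair share: keep high-priority frames
```

The probabilistic drop for over-share VCs is what nudges their TCP flows back toward the fair allocation without synchronized losses.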

  18. Virtual Source / Virtual Destination (VS/VD)
  • Without VS/VD, the control loop (and hence the buffering) spans the whole end-to-end path.
  • With VS/VD, the buffering is proportional to the delay-bandwidth product of the previous loop ⇒ good for satellite networks.
  [Figure: workgroup switch feeding a bottleneck satellite link, shown with and without VS/VD.]
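The buffering claim is easy to quantify with a delay-bandwidth calculation. The link rate and round-trip times below are illustrative assumptions, not measurements from the talk:

```python
# Buffer needed scales with the delay-bandwidth product of the control loop.
# With VS/VD, a switch's loop ends at the next VS/VD point instead of
# spanning the whole path, so a satellite hop stops inflating every buffer.

def buffer_cells(link_cells_per_s, loop_rtt_s):
    return link_cells_per_s * loop_rtt_s

oc3 = 353_208                            # approx. OC-3 payload rate, cells/s
end_to_end = buffer_cells(oc3, 0.550)    # loop includes a GEO satellite, ~550 ms
with_vsvd = buffer_cells(oc3, 0.010)     # terrestrial loop up to the VS/VD point
print(round(end_to_end), round(with_vsvd))   # ~194264 vs ~3532 cells
```

Roughly two orders of magnitude less buffering at the workgroup switch, which is the slide's point about satellite networks.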

  19. Networking Research at OSU
  • Traffic management: ERICA+ switch algorithm, Internet protocols over ATM, multi-class scheduling, multipoint ABR
  • Performance testing
  • ATM testbed: OCARnet
  • Voice/video over ATM/IP
  • Wireless networking
  • QoS over IP

  20. OCARnet
  • Ohio Computing and Communications ATM Research Network
  • Nine-institution consortium led by OSU: Ohio State University, Ohio Supercomputer Center, OARnet, Cleveland State University, Kent State University, University of Dayton, University of Cincinnati, Wright State University, University of Toledo
  [Figure: OCARnet topology with 155 Mbps and 622 Mbps links among the member sites and a vBNS connection at Cleveland.]

  21. [Map: OCARnet topology. WAN and workgroup switches connect the member institutions in Cleveland, Kent, Toledo, Dayton, Cincinnati, and Columbus over 1.5, 155, and 622 Mbps links, with a 45 Mbps link to vBNS/MCI and a 155 Mbps link to NASA.]

  22. OSU National ATM Benchmarking Lab
  • Started a new effort at the ATM Forum in October 1995
  • Defining a new standard for frame-based performance metrics and measurement methodologies
  • We have a measurement lab with the latest ATM testing equipment, funded by NSF and the State of Ohio.
  • The benchmark scripts can be run by any manufacturer/user, in our lab or theirs.
  • Modeled after the Harvard benchmarking lab for routers

  23. Performance Testing Facility
  [Figure: two ADTECH AX/4000 ATM analyzers, each controlled and monitored by a PC, connected through FORE ASX-200BX ATM switches over 25/155/622 Mbps links, with a possible 622 Mbps link to OCARnet.]

  24. Voice/Video over ATM and IP
  • VBR voice over ATM (speech suppression) ⇒ unused bandwidth can be used by data but not by voice.
  • Hierarchical compression of video: different users can see the video at different bandwidths, driven by network feedback.
  • Multipoint ABR
  • Real-time ABR
  • QoS over IP
  • Distance education

  25. Video Testbed
  [Figure: composite video sources feeding FORE AVA 300 encoders through FORE ASX-200BX and ASX-1000 ATM switches over 155/622 Mbps links between OSU and OARnet, with FORE ATV 300 decoders driving composite video monitors and a workstation.]

  26. Wireless Networking
  • Antenna design and wireless modem communications in the ElectroScience Laboratory of the EE department
  • High-speed wireless data-link protocols
  • Wireless TCP
  • Access methods and hand-off (Jennifer Hou/EE and Steve Lai/CIS)

  27. NETLAB Facilities
  [Figure: NETLAB testbeds. Video testbed with FORE AVA 300/ATV 300 units and HP C180 workstations; Ethernet segments of HP 715/100 machines and PCs behind FORE ASX-200BX and ASX-1000 ATM switches and Cisco layer-2 switches; an HP C240 remote simulation pool; ADTECH AX/4000 analyzers for the performance benchmarking testbed; OC-3/OC-12 links to OCARnet and vBNS and a 100 Mbps uplink to the Internet.]

  28. Summary
  • Traffic management distinguishes ATM from its competition.
  • Binary feedback is too slow; explicit-rate (ER) switches are better for high bandwidth-delay paths.
  • ABR pushes congestion to the edges. UBR+ may be OK for LANs but not for large bandwidth-delay paths.

  29. Summary (Continued)
  • Reserving a small fraction of bandwidth for the entire UBR class improves its performance considerably.
  • It may be possible to do GFR with a single FIFO.

  30. Our Contributions and Papers
  • All our contributions and papers are available online at http://www.cse.ohio-state.edu/~jain/
  • See "Recent Hot Papers" for tutorials.

  31. Thank You!
