
Presentation Transcript


  1. The Latency/Bandwidth Tradeoff in Gigabit Networks • UBI 527 Data Communications • Ozan TEKDUR • 2011-2012, Fall

  2. Background • Late 1960s • up to 30 characters per second • Mid 1970s • 64 kbps trunk speed • 10 kbps file transfer speed • X.25 packet-switched networks • Late 1980s • cost-effective T1 networks • 1.544 Mbps trunk speed • e-mail and file transfer • Packet-switched network architecture still had to process every packet up to the third layer (the network layer)

  3. Background • Early 1990s • Frame Relay networks • Packet switching at T1 speeds • Fiber-optic transmission media • Very high bandwidth • Nearly noise-free • Less error-control burden • Improvements in VLSI technology • faster switches • intelligent switches that dynamically assign channel bandwidth on a demand basis

  4. Background • ISDN • Link Access Protocol for the D channel (LAPD) • Packets are processed only up to the 2nd (data link) layer, reducing the burden on the network layer • In summary, Frame Relay networks can reach T1 speeds by: • implementing functions in hardware • moving functions out of the network • taking advantage of the LAPD architecture • example: LAN

  5. Background • Multi-megabit data networks • FDDI • 100 Mbps • SMDS • 45 Mbps • DQDB • up to 150 Mbps • ATM switches and Broadband ISDN • 155 Mbps up to 2.4 Gbps • HIPPI • 800 Mbps • SONET • 1.2 Gbps

  6. Major Issue: Latency vs. Bandwidth • Is the gigabit world just another step toward greater-bandwidth systems, or is it fundamentally different? • The effect of latency • Channel latency: the time it takes energy to move from one end of the link to the other • Key parameters in any data network system: • C = capacity of the network (Mbps) • b = number of bits in a data packet • L = length of the network (miles)

  7. Major Issue: Latency vs. Bandwidth • a = 5LC / b (Eq. 1) • where a is the critical system parameter, defined as the ratio of the latency of the channel to the time it takes to pump one packet into the link • it measures how many packets can be pumped into one end of the link before the first bit appears at the other end • the factor 5 is simply the approximate number of microseconds it takes light to travel one mile
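A minimal numerical sketch of Eq. 1 in Python. The packet size (1000 bits) and link length (3000 miles, roughly coast to coast) are illustrative assumptions, not values taken from the paper; the point is only to show how a explodes as capacity climbs toward gigabit rates.

```python
# Sketch of Eq. 1: a = 5 * L * C / b
# a = (channel latency) / (time to pump one packet into the link)
# Assumed values: b = 1000 bits per packet, L = 3000 miles (rough U.S. span).
# The factor 5 is ~5 microseconds of light propagation per mile.

def packets_in_flight(capacity_mbps: float, length_miles: float = 3000.0,
                      packet_bits: float = 1000.0) -> float:
    """Return a = 5 * L * C / b, with C in Mbps, L in miles, b in bits."""
    return 5.0 * length_miles * capacity_mbps / packet_bits

# Compare the parameter "a" across generations of links.
for name, c_mbps in [("X.25 (64 kbps)", 0.064),
                     ("T1 (1.544 Mbps)", 1.544),
                     ("SONET (1.2 Gbps)", 1200.0)]:
    print(f"{name:18s}  a = {packets_in_flight(c_mbps):10.2f}")
```

Under these assumptions, less than one packet fits in the pipe at 64 kbps, about 23 packets at T1 speed, and roughly 18,000 packets at 1.2 Gbps, which is the dramatic growth of a the next slides discuss.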

  8. Major Issue: Latency vs. Bandwidth • Table 1: propagation delay / packet transmission time

  9. Major Issue: Latency vs. Bandwidth • a grows dramatically in gigabit systems. Why? • Two cases to consider: • a large number of users each sharing a small piece of this large bandwidth • a few users each sending packets and files at gigabit speeds • From the user's point of view, case 1 is no different from lower-data-rate systems • In case 2, a gets large because of the high data rate

  10. Major Issue: Latency vs. Bandwidth • Figure 1 • Consider a terminal sending a 1-Mb file across the U.S., as shown in Figure 1

  11. Major Issue: Latency vs. Bandwidth • Assume the communication channel is an X.25 packet network • 64 kbps • The first bit arrives at the far end only after ~1000 bits have been pumped into the channel • The channel is buffering only 0.001 of the message • 1000 times more data is stored in the terminal than in the channel • Clearly, at this speed a higher data rate reduces the transmission time, so more bandwidth helps

  12. Major Issue: Latency vs. Bandwidth • Assume the communication channel is a T1 packet network • 1.544 Mbps • 40 times more data is stored in the terminal than in the channel • Once again, a higher data rate reduces the transmission time, so more bandwidth still helps

  13. Major Issue: Latency vs. Bandwidth • Assume the communication channel is a SONET packet network • 1.2 Gbps • The entire 1-Mb file becomes a small pulse moving down the channel • The pulse occupies roughly only 0.05 of the channel • Clearly, more bandwidth is of no use in speeding up the communication at this rate • The latency of the channel dominates the time to deliver the file
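A small Python sketch of the three cases above. It assumes the 1-Mb file means 10^6 bits and uses a one-way coast-to-coast propagation delay of 15 ms (the value quoted later in the slides); both are assumptions for illustration.

```python
# How does a 1-Mb file relate to what the channel itself can hold?
# Assumptions: file = 1e6 bits, one-way propagation delay tau = 15 ms.

FILE_BITS = 1e6
TAU_S = 15e-3  # coast-to-coast propagation delay, seconds

for name, rate_bps in [("X.25, 64 kbps", 64e3),
                       ("T1, 1.544 Mbps", 1.544e6),
                       ("SONET, 1.2 Gbps", 1.2e9)]:
    tx_time_ms = FILE_BITS / rate_bps * 1e3   # time to pump the file into the link
    channel_bits = rate_bps * TAU_S           # bits the channel itself is holding
    ratio = FILE_BITS / channel_bits          # data in terminal vs. data in channel
    print(f"{name:16s} tx time = {tx_time_ms:9.2f} ms, "
          f"file/channel ratio = {ratio:8.2f}")
```

The ratios reproduce the numbers quoted on the slides: roughly 1000 for X.25, roughly 40 for T1, and about 0.05 for SONET, where the whole file has become a short pulse and latency, not bandwidth, dominates delivery time.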

  14. Major Issue: Latency vs. Bandwidth • Pre-gigabit networking: capacity limited • Post-gigabit networking: latency limited • The speed of light is the fundamental limitation

  15. Major Issue: Latency vs. Bandwidth • The case of competing traffic • Queueing • Classical M/M/1 queueing system • The mean time from when a message arrives at the tail of the transmit queue until the last bit of the message appears at the output of the channel, including any propagation delay, is given by • T = (1.024 / C) / (1 − ρ) + τ (Eq. 2) • T = mean response time (milliseconds) • τ = propagation delay (channel latency) in milliseconds • ρ = system utilization factor, ρ = λ (1024 / C) (Eq. 3) • λ = arrival rate (messages per microsecond) • C = channel capacity (Mbps)
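A hedged Python sketch of Eq. 2 and Eq. 3, evaluating the M/M/1 mean response time across a range of loads; this is the calculation behind Figures 2 and 3. Note that the 1.024/C service-time term implies an average message of roughly 1024 bits when C is in Mbps; that reading of the units is my assumption, not something stated on the slide.

```python
# Eq. 2: T = (1.024 / C) / (1 - rho) + tau
#   T   = mean response time (ms)
#   C   = channel capacity (Mbps); 1.024/C is the mean transmission time in ms
#   rho = utilization factor (Eq. 3), must stay below 1 for a stable M/M/1 queue
#   tau = one-way propagation delay (ms)

def mean_response_time_ms(c_mbps: float, rho: float, tau_ms: float) -> float:
    if not 0 <= rho < 1:
        raise ValueError("utilization rho must be in [0, 1)")
    return (1.024 / c_mbps) / (1.0 - rho) + tau_ms

# Reproduce the flavor of Figures 2 and 3 on a T1 channel: with tau = 0 the
# delay is small until rho nears 1; with tau = 15 ms latency sets the floor.
for tau in (0.0, 15.0):
    for rho in (0.1, 0.5, 0.9):
        t = mean_response_time_ms(c_mbps=1.544, rho=rho, tau_ms=tau)
        print(f"tau = {tau:4.1f} ms, rho = {rho:.1f}: T = {t:6.2f} ms")
```

With τ = 0 the response time only blows up as the load approaches 1, while with τ = 15 ms the curve is pinned near 15 ms over most of the load range, which is exactly the contrast between Figures 2 and 3.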

  16. Major Issue: Latency vs. Bandwidth • Figure 2: Response time vs. system load with no propagation delay

  17. Major Issue: Latency vs. Bandwidth • Figure 3: Response time vs. system load with 15-ms propagation delay

  18. Major Issue: Latency vs. Bandwidth • Can we define a boundary between bandwidth-limited and latency-limited systems? • Assume an M/M/1 model in which messages with an average length of b bits are transmitted across the U.S. • Recall Eq. 2: T = (1.024 / C) / (1 − ρ) + τ • Two components make up the response time T: • the queueing plus transmission delay (the first term in the equation) • the propagation delay (τ)

  19. Major Issue: Latency vs. Bandwidth • We aim to define a sharp boundary between the bandwidth-limited and latency-limited regions • Assume the two terms are equal to each other: • propagation delay = queueing + transmission time delay • From Eq. 2 it follows that C_critical = 0.001 b / ((1 − ρ) τ) (Eq. 4), with C_critical in Mbps, b in bits, and τ in milliseconds
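A quick Python check of Eq. 4 under the unit conventions above (C in Mbps, b in bits, τ in milliseconds). The file sizes and the 15-ms delay are the ones used in Figures 4 and 5; the exact boundary values are my computation, not numbers read off the paper's plots.

```python
# Eq. 4: C_critical = (1e-3 * b) / ((1 - rho) * tau)
#   C_critical in Mbps, b = file size in bits, tau = propagation delay in ms.
# Above C_critical the channel is latency-limited; below it, bandwidth-limited.

def critical_capacity_mbps(file_bits: float, rho: float, tau_ms: float) -> float:
    return (1e-3 * file_bits) / ((1.0 - rho) * tau_ms)

TAU_MS = 15.0  # coast-to-coast propagation delay
for file_bits in (1e6, 1e7):  # 1-Mb and 10-Mb files
    for rho in (0.0, 0.5, 0.9):
        c = critical_capacity_mbps(file_bits, rho, TAU_MS)
        print(f"b = {file_bits:.0e} bits, rho = {rho:.1f}: "
              f"C_critical = {c:8.1f} Mbps")
```

At light load the boundary for a 1-Mb file sits near 67 Mbps, so a gigabit channel is deep in the latency-limited region; only for files of roughly 10 Mb and larger does the boundary approach gigabit rates.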

  20. Major Issue: Latency vs. Bandwidth • Figure 4: Bandwidth vs. system load for a 1-Mb file sent across the U.S.

  21. Major Issue: Latency vs. Bandwidth • Above the boundary: • the system is latency limited • more bandwidth has a negligible effect on reducing the mean response time T • Below the boundary: • the system is bandwidth limited • more bandwidth reduces the mean response time T • For these parameters (τ = 15 ms and a 1-Mb file), the system is latency limited over most of the load range when a gigabit channel is used • For these parameters, a gigabit channel is overkill so far as reducing delay is concerned

  22. Major Issue: Latency vs. Bandwidth • Figure 5: Bandwidth vs. system load for files sent across the U.S. • Gigabit channels begin to make sense for message sizes of 10 megabits or more, but they do not help for smaller files.

  23. Other Issues • The congestion-control and flow-control problem in gigabit networks • Transmission starts at t = 0 • With τ = 15 ms, by the time the first bit reaches the receiver, 15 million bits are already in the pipeline (at 1 Gbps) • If an error occurs, then by the time a stop signal reaches the transmitter at t = 30 ms, another 15 million bits will have been launched! • Closed-loop feedback flow control is of little use in this environment because of the latency
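The arithmetic behind that claim, as a small Python sketch; the 1 Gbps rate and 15 ms one-way delay are the figures used in the slide.

```python
# Why closed-loop (feedback) flow control breaks down at gigabit speeds.
# Assumptions: 1 Gbps sending rate, 15 ms one-way propagation delay.

RATE_BPS = 1e9
ONE_WAY_DELAY_S = 15e-3

bits_in_flight = RATE_BPS * ONE_WAY_DELAY_S               # in the pipe when bit 1 arrives
bits_before_feedback = RATE_BPS * (2 * ONE_WAY_DELAY_S)   # sent before a "stop" can return

print(f"bits in flight after the one-way delay : {bits_in_flight:.0f}")      # 15 million
print(f"bits sent before feedback arrives      : {bits_before_feedback:.0f}")  # 30 million
```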

  24. Other Issues • Possible solutions: • Rate-based flow control • the user is permitted to transmit at a maximum allowable rate • Hiding latency at the application level • Use of parallelism • while one process is waiting for a response, another process that does not depend on that response can proceed with its work • Statistical multiplexing of bursty sources • with a large number of small bursty sources, the Law of Large Numbers allows the shared channel to be driven at high efficiency
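Rate-based flow control is commonly illustrated with a token-bucket pacer. The sketch below is a generic example under assumed parameters (100 Mbps allowed rate, 1-Mb burst); it is not the specific mechanism proposed in the paper, just one way to cap a sender at a maximum allowable rate without round-trip feedback.

```python
import time

class TokenBucket:
    """Simple token-bucket pacer: a packet may be sent only while enough
    token credit (measured in bits) is available; credit refills at the
    maximum allowable rate and is capped at the burst size."""

    def __init__(self, rate_bps: float, burst_bits: float):
        self.rate_bps = rate_bps      # maximum allowable sending rate (bits/s)
        self.capacity = burst_bits    # largest burst permitted (bits)
        self.tokens = burst_bits      # start with a full bucket
        self.last = time.monotonic()

    def try_send(self, packet_bits: float) -> bool:
        """Return True (and spend credit) if the packet fits within the rate."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate_bps)
        self.last = now
        if self.tokens >= packet_bits:
            self.tokens -= packet_bits
            return True
        return False

# Example: pace a sender to 100 Mbps with bursts of at most 1 Mb.
bucket = TokenBucket(rate_bps=100e6, burst_bits=1e6)
print(bucket.try_send(1500 * 8))   # a 1500-byte packet: allowed immediately
```

Because the permitted rate is fixed in advance, the sender never has to wait 30 ms for a feedback signal before deciding whether it may transmit, which is the property that matters at gigabit latencies.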

  25. CONCLUSION • Gigabit networks force us to deal with the propagation delay caused by the finite speed of light • The propagation delay across the U.S. is 40 times smaller than the time required to transmit a 1-Mb file into a T1 link • At gigabit speeds the situation is completely reversed: the propagation delay is 15 times larger than the time to transmit the file into the link • The user must pay attention to file sizes and to how latency will affect applications

  26. CONCLUSION • The user must try to hide the latency with pipelining and parallelism • The system designer must think about the problems of flow control, buffering, and congestion control • rate-based flow control • algorithms designed for smaller buffer usage

  27. Reference: KLEINROCK, L., "The Latency/Bandwidth Tradeoff in Gigabit Networks," IEEE Communications Magazine, April 1992. UBI 527 Data Communications Term Project, 20.12.2011, Ozan TEKDUR, 91090017615
