Lambda scheduling algorithm for file transfers on high-speed optical circuits



  1. Lambda scheduling algorithm for file transfers on high-speed optical circuits • Hojun Lee (Polytechnic Univ.) • Malathi Veeraraghavan (Univ. of Virginia), contact: mv@cs.virginia.edu • Hua Li and Edwin Chong (Colorado State Univ.)

  2. Outline • Background & Problem statement • Varying-Bandwidth List Scheduling (VBLS) • Conclusions and future work

  3. Background • Many optical network testbeds are being created for eScience applications • Canarie’s Ca*net 4 (Canada) • Translight (USA) • SURFnet (Netherlands) • UKLight (UK) • Target applications: • Terabyte/petabyte file transfers • Remote visualization, computational steering

  4. Background • These optical networks are circuit-switched • Circuit-switched network operation: establish a circuit → reserve capacity at each switch on the end-to-end path • Dedicated resources imply a “rate guarantee” • Sounds great, but what’s the catch? Cost, if network resources are not SHARED on some basis • Answer: implement dynamic provisioning of circuits • A user holds a “lambda” for some short duration and releases it for others to use • How “dynamic”? The greater the sharing, the lower the cost • Our proposed approach (NSF project called CHEETAH): hold “lambdas” only for the duration of file transfers

  5. Background: Old theory says circuits unsuitable for file transfers • [Diagram omitted: N transfers sharing a link of capacity C through a packet switch (PS) vs. a circuit switch (CS) using a fixed-bandwidth scheme] • In both cases each transfer is allocated C/N capacity • On the PS, the lone remaining transfer enjoys the full capacity C; on the CS, it continues with its C/N capacity allocation

  6. Our answer to this handicap • Instead of scheduling a fixed capacity for the duration of a file transfer, schedule varying capacities for different time ranges within the transfer, to take advantage of bandwidth that becomes available after the transfer starts • Provide the sender this schedule at the start of the transfer (i.e., during circuit provisioning) so it can adjust its sending rate • Announce the schedule to all circuit switches on the path for automated reconfiguration of circuits at time-range boundaries • How do we predict, at circuit-setup time, the time ranges in which more capacity will become available after the transfer starts? • Require users to specify file sizes • The scheduler keeps track of allocations already made to ongoing transfers

  7. Problem statement • Hence our problem is not how to schedule lambdas for fixed durations • Rather, it is how to schedule lambdas for file transfers, which are specified by file size rather than by holding time

  8. Scheduling requests • Specify: • File size • Maximum rate • File transfers, unlike real-time audio/video, can be allocated “any” capacity; the higher the rate, the smaller the transfer delay • End-host processing, network interface card, and disk limitations place an upper bound on the rate allocated to the file transfer • Requested start time • Allows users to specify a delayed start time • Immediate-request vs. book-ahead calls (pricing)

  9. VBLS: A Lambda-Scheduling Algorithm for File Transfers • End-host applications request lambdas for file transfers by specifying a three-tuple (F, Rmax, Treq) • F: file size • Rmax: a maximum bandwidth limit for the request • Treq: the desired start time for the transfer • The scheduler assigns each transfer a Time-Range-Capacity (TRC) vector, a list of triples (sk, ek, ck) • sk: the start of the kth time range • ek: the end of the kth time range • ck: the capacity allocated to the transfer in the kth time range
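
To make these structures concrete, here is a minimal Python sketch of the request three-tuple and the TRC vector described above; the class and field names (TransferRequest, TimeRange, and so on) are illustrative and not taken from the authors' implementation.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TransferRequest:
    """The three-tuple an end host submits for a file transfer."""
    file_size: float    # F, e.g. in GB
    max_rate: int       # Rmax, maximum bandwidth limit, e.g. in channels
    start_time: float   # Treq, desired start time for the transfer

@dataclass
class TimeRange:
    """One entry of the Time-Range-Capacity (TRC) vector returned by the scheduler."""
    start: float        # start of the kth time range
    end: float          # end of the kth time range
    capacity: int       # capacity allocated to the transfer in this range (channels)

TRCVector = List[TimeRange]
```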

  10. VBLS: an example • Assume the available capacity of a 4-channel link is as shown in the availability plot (omitted) • Request: F = 5 GB, Rmax = 2 channels, Treq = 50 • Per-channel rate: 10 Gb/s; time unit: 100 ms, so one channel transfers 1.25 GB in 10 time units • TRC allocated: (50, 60, 1) (60, 70, 2) (70, 75, 2)
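
As a quick arithmetic check of this example, the sketch below sums the data moved by the allocated TRC vector, assuming the stated 10 Gb/s per channel and 100 ms time units; the helper name is hypothetical.

```python
def gigabytes_transferred(trc, channel_rate_gbps=10.0, time_unit_s=0.1):
    """Data moved by a TRC vector of (start, end, channels) triples, in GB."""
    return sum((end - start) * time_unit_s * channels * channel_rate_gbps / 8.0
               for start, end, channels in trc)

# TRC from the slide: 1 channel over [50, 60), 2 channels over [60, 70) and [70, 75)
print(gigabytes_transferred([(50, 60, 1), (60, 70, 2), (70, 75, 2)]))
# 1.25 + 2.5 + 1.25 = 5.0 GB, matching the requested file size F = 5 GB
```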

  11. VBLS algorithm • Identify the change points (P1, P2, ..., Pn) of the available-capacity function A(t) • Find the interval [Pi, P(i+1)] in which Treq lies • Four cases are possible while allocating resources in that interval: • Remaining file can be fully transferred and A(Pi) ≤ Rmax • Remaining file can be fully transferred and A(Pi) > Rmax • Remaining file cannot be fully transferred and A(Pi) ≤ Rmax • Remaining file cannot be fully transferred and A(Pi) > Rmax • In each case, set the parameters of a time range: its beginning, its end, and the capacity allocated in it • In the last two cases, decrease the remaining-file-size variable and continue to the next interval between change points
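
The sketch below illustrates this procedure in Python for a single link, assuming the available-capacity function is piecewise constant and given as a list of change points plus the free channels in each interval; it is an illustration of the case analysis above, not the authors' code, and the availability profile in the usage line is an assumption chosen to be consistent with the example on the previous slide.

```python
def vbls_schedule(file_size_gb, rmax, treq, change_points, avail,
                  channel_rate_gbps=10.0, time_unit_s=0.1):
    """Walk the piecewise-constant available-capacity function from Treq onward,
    allocating min(available, Rmax) channels per interval until the file fits.

    change_points: sorted times [P1, P2, ...] at which availability changes
    avail[i]:      free channels during [change_points[i], change_points[i+1])
    Returns a TRC vector as a list of (start, end, channels) triples.
    """
    gb_per_channel_per_unit = channel_rate_gbps * time_unit_s / 8.0  # 0.125 GB here
    remaining = file_size_gb
    trc = []
    for i, p_start in enumerate(change_points):
        p_end = change_points[i + 1] if i + 1 < len(change_points) else float("inf")
        if p_end <= treq:
            continue                          # interval ends before the requested start
        start = max(p_start, treq)
        rate = min(avail[i], rmax)            # never exceed the caller's Rmax
        if rate == 0:
            continue                          # no free channels in this interval
        units_needed = remaining / (rate * gb_per_channel_per_unit)
        if start + units_needed <= p_end:     # remaining file fits: close the schedule
            trc.append((start, start + units_needed, rate))
            return trc
        trc.append((start, p_end, rate))      # use the whole interval and move on
        remaining -= (p_end - start) * rate * gb_per_channel_per_unit
    return trc                                # known capacity exhausted before completion

# Assumed availability: 1 free channel from t=50, 2 from t=60, 3 from t=70 (Rmax = 2 caps it)
print(vbls_schedule(5.0, 2, 50, [50, 60, 70], [1, 2, 3]))
# -> [(50, 60, 1), (60, 70, 2), (70, 75.0, 2)]
```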

  12. Analysis and simulation • Traffic model: • Call arrival process = file transfer arrival process: Poisson with rate λ • File size F: bounded Pareto distribution with k: lower bound on file size; p: upper bound on file size; α: shape parameter, set to 1.1
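
A small Python sketch of this traffic model, using inverse-CDF sampling for the bounded Pareto file sizes and exponential interarrival times for the Poisson process; the arrival rate, horizon, and size bounds in the last line are placeholder values, not parameters taken from the study.

```python
import random

def bounded_pareto(k, p, alpha=1.1, rng=random):
    """Draw a file size from a bounded Pareto(k, p, alpha) by inverting its CDF."""
    u = rng.random()
    return k * (1.0 - u * (1.0 - (k / p) ** alpha)) ** (-1.0 / alpha)

def poisson_arrivals(lam, horizon, rng=random):
    """Arrival instants of a Poisson process with rate lam over [0, horizon)."""
    t, times = 0.0, []
    while True:
        t += rng.expovariate(lam)
        if t >= horizon:
            return times
        times.append(t)

# Placeholder numbers: about one request per 100 s, file sizes between 0.5 GB and 10 GB
requests = [(t, bounded_pareto(0.5, 10.0)) for t in poisson_arrivals(0.01, 3600.0)]
```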

  13. Validation of simulation program with analysis • Simple case: all calls specify the same maximum rate, set equal to the link capacity C • M/G/1 model with ‘G’ being bounded Pareto • An analytical result is available
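
The analytical result referred to here is presumably the standard M/G/1 mean waiting time; a minimal sketch of the Pollaczek-Khinchine formula, under the assumption that the service time is S = F/C (with C the link rate in file-size units per unit time); the function names are illustrative.

```python
def bp_moment(n, k, p, alpha=1.1):
    """nth moment of a bounded Pareto(k, p, alpha) variable (valid for n != alpha)."""
    norm = alpha * k ** alpha / (1.0 - (k / p) ** alpha)
    return norm * (p ** (n - alpha) - k ** (n - alpha)) / (n - alpha)

def mg1_mean_wait(lam, k, p, capacity, alpha=1.1):
    """Pollaczek-Khinchine mean wait E[W] = lam * E[S^2] / (2 * (1 - rho))."""
    mean_s = bp_moment(1, k, p, alpha) / capacity        # E[S], service time S = F / C
    mean_s2 = bp_moment(2, k, p, alpha) / capacity ** 2  # E[S^2]
    rho = lam * mean_s                                   # offered load, must be < 1
    return lam * mean_s2 / (2.0 * (1.0 - rho))
```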

  14. Result for validation case • [Plot omitted] File latency, i.e., mean waiting time from Treq until the first bit is transmitted, versus system load ρ

  15. Sensitivity analysis: Effect of maximum rate • In each run, all calls request the same Rmax of 1, 5, 10, or 100 channels on a link of capacity C = 100 channels • Mean latency is smallest in the 1-channel case • Mean file transfer delay (latency + service time) is smallest in the 100-channel case

  16. Sensitivity analysis: Effect of file size lower bound (k) and upper bound (p) • All calls request the same Rmax of 1, 5, or 10 channels on a link of capacity C = 100 channels • Case 1: k = 500 MB, p = 10 GB • Case 2: k = 10 GB, p = 100 GB • File latency is higher in Case 2 because the variance of the bounded Pareto file-size distribution is higher there • Increasing the upper bound p increases the variance and hence the file latency

  17. Simulation comparison of VBLS against FBLS (Fixed-Bandwidth LS) and PS • Calls choose an Rmax of 1, 5, or 10 channels with probability 0.3, 0.3, and 0.4, respectively • [Plot omitted] Normalized delay (D)

  18. Alternate view: throughput • [Plot omitted] File throughput (y-axis): long-term average of file size divided by transfer delay, versus system load (x-axis), with curves for Rmax of 1, 5, and 10 channels

  19. Observation • VBLS achieves close to idealized PS (infinite-buffer) performance • Finite-buffer PS networks need something like TCP, which reduces the idealized PS throughput levels • Compare with current TCP enhancements under design, which implement run-time discovery of available bandwidth to adjust sending rates to match the available bandwidth, with the goal of avoiding packet losses and the consequent rate drops

  20. Practical considerations • Extending VBLS scheme to multiple links • Clock synchronization & Propagation delay • Staggered schedule • Accounting for retransmissions • Available capacity function • Cannot be continuous, has to be discrete • Wasted resources because of discretization • Cost of achieving PS-like performance • Circuit switches now more complex • Need electronics to do timer-based reconfigurations of circuits

  21. Extensions • Add a second class of requests: • Holding time • Minimum rate • Maximum rate • Requested start time • Useful for remote visualization and other interactive applications

  22. Conclusions and future work • VBLS overcomes a well-known drawback of using circuits for file transfers • fixed-bandwidth allocation fails to take advantage of bandwidth that becomes available subsequent to the start of a transfer • Simulations showed that VBLS can improve performance over fixed-bandwidth schemes significantly for file transfers • Cost: implementation complexity • Future work: to include a second class of user requests for lambdas, targeted at interactive applications such as remote visualization and simulation steering
