Potential Applications of Shared Bottleneck Detection (SBD)
Michael Welzl, NNUW-1, Simula, Oslo, 18.09.2013

Presentation Transcript


  1. Potential Applications of Shared Bottleneck Detection (SBD)
  Michael Welzl, NNUW-1, Simula, Oslo, 18.09.2013

  2. Context
  • SBD has been around for a while
    • Papers dating back more than a decade
    • Approach: correlate per-path measurements (e.g., OWD, ...)
  • Never been a "hot topic"; said to be too unreliable, e.g. in the IETF
    • Note: applications have different reliability requirements
  • Major problem: real-life validation
    • Most papers are poor in this aspect, e.g. real-life tests without really validating
    • Hard to do in most testbeds
      • We tried and failed in PlanetLab: no shared bottlenecks
    • NorNet could make a real difference here
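
For illustration, a minimal sketch of the correlation idea in Python, assuming equally spaced OWD samples per path; the windowing, thresholds and robustness tricks of real SBD mechanisms are omitted, and the threshold here is a made-up value:

```python
# Minimal SBD sketch: two flows whose one-way-delay (OWD) samples rise and
# fall together are assumed to queue at the same bottleneck. Real mechanisms
# are far more robust; the threshold below is illustrative only.
import numpy as np

def shares_bottleneck(owd_a, owd_b, threshold=0.7):
    """Return True if the OWD series of two paths correlate strongly.

    owd_a, owd_b: equally spaced OWD samples (seconds), same length.
    threshold: illustrative cut-off on the Pearson correlation coefficient.
    """
    a = np.asarray(owd_a, dtype=float)
    b = np.asarray(owd_b, dtype=float)
    r = np.corrcoef(a, b)[0, 1]  # Pearson correlation of the two series
    return r > threshold

# Toy usage: a common queue drives the delay of both paths.
queue = np.abs(np.cumsum(np.random.randn(200)))      # shared queueing delay
owd_1 = 0.020 + 0.001 * queue + 0.0001 * np.random.randn(200)
owd_2 = 0.035 + 0.001 * queue + 0.0001 * np.random.randn(200)
print(shares_bottleneck(owd_1, owd_2))               # likely True
```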

  3. A gallery of potential applications of SBD

  4. Multi-path transfer
  • Better congestion control for MPTCP, CMT-SCTP
    • cf.: Sofiane Hassayoun, Janardhan Iyengar and David Ros, "Dynamic Window Coupling for Multipath Congestion Control". In Proceedings of IEEE ICNP 2011, Vancouver, October 2011.
  • Basic issue: on N paths, act like 1 flow or like N?
    • N might be unfair if the flows share the same bottleneck
    • 1 might be too conservative if they don't
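
A toy sketch of the 1-vs-N decision, assuming SBD has already assigned each subflow to a bottleneck group; this only illustrates the intent of window coupling, not the actual DWC algorithm:

```python
# Subflows that SBD puts in the same bottleneck group together behave like
# one flow (shared aggressiveness); subflows on disjoint bottlenecks each
# keep a full flow's share. Group ids and the AIMD details are placeholders.
from collections import defaultdict

def increase_factors(subflow_groups):
    """subflow_groups: dict subflow_id -> bottleneck group id (from SBD).
    Returns dict subflow_id -> multiplier on the normal additive increase,
    so each bottleneck group is about as aggressive as one TCP flow."""
    members = defaultdict(list)
    for flow, group in subflow_groups.items():
        members[group].append(flow)
    return {flow: 1.0 / len(members[group])
            for flow, group in subflow_groups.items()}

# Subflows 'a' and 'b' share bottleneck g1; 'c' is alone on g2.
print(increase_factors({'a': 'g1', 'b': 'g1', 'c': 'g2'}))
# {'a': 0.5, 'b': 0.5, 'c': 1.0}
```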

  5. Multi-path overlay: EC-GIN LFT scenario
  [Figure: two overlay topologies. Multipath file transfer (AB + ACB) is not beneficial when the paths share a bottleneck, but beneficial when they don't]

  6. File transfer delay prediction
  • Consider telling A and B that it takes 5 minutes to send a 10 GB file to C
    • But then they send at the same time...
  • This is needed for scheduling in distributed computing
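
A back-of-the-envelope sketch of such a prediction, assuming an even capacity split at a bottleneck of known (here: made-up) capacity; the number of sharing flows is exactly the input that SBD would supply:

```python
# Predicting how long a 10 GB transfer takes depends on how many concurrent
# transfers share the bottleneck. The fair-share split is a simplification.
def predicted_transfer_time(size_bytes, bottleneck_bps, sharing_flows=1):
    """Predicted completion time (s), assuming an even capacity split."""
    return size_bytes * 8 / (bottleneck_bps / sharing_flows)

size = 10 * 10**9                      # 10 GB file
link = 1 * 10**9                       # assumed 1 Gbit/s bottleneck
print(predicted_transfer_time(size, link, sharing_flows=1) / 60)  # ~1.3 min
print(predicted_transfer_time(size, link, sharing_flows=2) / 60)  # ~2.7 min
```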

  7. Distributed computing: abstract-concrete WF mapping
  Tasks = {T1, T2, T3, T4}, Resources = {R1, R2, R3, R4}, Data transfers = {D1, D2, D3, D4}
  • Unnoticed by traditional scheduling algorithms!

  8. Incorporating the net
  • Theory (analysis, "abstract" simulations)
    • Network-aware variant of the Heterogeneous Earliest Finish Time (HEFT) scheduling algorithm, developed in a Ph.D. thesis: Murtaza Yousaf, "Accurate Shared Bottleneck Detection and File Transfer Delay Prediction for Grid Scheduling", December 2008, University of Innsbruck, Austria.
  • Next step: more realistic simulations, using ns-2
    • Simulator available, developed in a master thesis
    • Never been used! Get it from: http://heim.ifi.uio.no/michawe/research/projects/ec-gin-uibk/
  • Next step: real-life tests using SBD
    • Nothing done yet!
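
For reference, a minimal sketch of the upward-rank ordering at the core of standard HEFT, with a made-up task graph; a network-aware variant as in the thesis would replace the static communication-cost estimates with SBD-informed transfer-delay predictions:

```python
# Upward rank: rank_u(t) = comp(t) + max over successors s of
# (comm(t, s) + rank_u(s)). All costs below are invented for illustration.
from functools import lru_cache

comp = {'T1': 10, 'T2': 8, 'T3': 6, 'T4': 4}           # avg compute cost
succ = {'T1': ['T2', 'T3'], 'T2': ['T4'], 'T3': ['T4'], 'T4': []}
comm = {('T1', 'T2'): 3, ('T1', 'T3'): 5, ('T2', 'T4'): 2, ('T3', 'T4'): 4}

@lru_cache(maxsize=None)
def upward_rank(task):
    tail = max((comm[(task, s)] + upward_rank(s) for s in succ[task]),
               default=0)
    return comp[task] + tail

# HEFT schedules tasks in decreasing upward-rank order.
print(sorted(comp, key=upward_rank, reverse=True))     # ['T1', 'T2', 'T3', 'T4']
```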

  9. Per-flow QoS without signaling to routers
  [Figure: admission dialogue with a broker: 1. "may I join?" 2. "yes" 3. "I quit" 4. "ok"]
  • Traditional method: signaling to edge routers (e.g. with COPS) at this point!
  • Synchronization of a distributed (P2P-based) database; all flows known to all brokers
  • Synchronization of a distributed (P2P-based) database; link capacities known to all brokers
  • Continuous measurements; update to the BB upon path change

  10. Realistic?
  • Yes, when you're in control of all traffic, know shared bottlenecks, and know their capacity
  • Several Ph.D. theses, some in the "EC-GIN" EU project. One of them: Kashif Munir, "Admission and Congestion Control for Deadline Constrained Data Transfers", May 2009, University of Innsbruck, Austria.
  • BDTS system at INRIA Lyon, successfully applied in the French Grid'5000 environment
    • Incorporated in INRIA spin-off Lyatiss
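
A minimal sketch of deadline-based admission in this spirit, assuming full control of the traffic and a known bottleneck capacity; this is an illustration, not the algorithm from Munir's thesis:

```python
# Admit a new transfer only if the rates needed to meet every accepted
# deadline still fit into the known bottleneck capacity.
def admit(new, accepted, capacity_bps, now=0.0):
    """new/accepted: (size_bits, deadline_s) tuples; the rate each transfer
    needs is size / (deadline - now). Admit if total demand still fits."""
    demand = sum(size / (deadline - now) for size, deadline in accepted)
    size, deadline = new
    return demand + size / (deadline - now) <= capacity_bps

capacity = 100e6                                   # 100 Mbit/s bottleneck
accepted = [(600e6, 10.0)]                         # needs 60 Mbit/s
print(admit((300e6, 10.0), accepted, capacity))    # 60 + 30 <= 100 -> True
print(admit((600e6, 10.0), accepted, capacity))    # 60 + 60 >  100 -> False
```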

  11. INRIA’s JBDTS GUI

  12. Network monitoring
  • "With CloudWeaver, users are able to gain full insight into the underlying topology, capacity, usage and vulnerability of a cloud deployment; anticipate congestion and fix bottlenecks, maximize resource utilization according to flow patterns and minimize infrastructure costs."
  • Wouldn't it be nice to do something along these lines for networks where you cannot control everything?

  13. How we use the Internet today: 2 stories
  • I clean our flat while listening to Spotify via my wife's laptop
    • In parallel, downloading files via my own
    • Suddenly I begin to think: "please, dear downloads, don't make the music stop!"
  • I am in a hotel room, using Skype with video to see my daughter
    • Quality barely good enough; I avoid clicking on anything
    • Note: that's different when I talk to my mother...

  14. A major problem
  • We may have become used to this, but that doesn't mean it's good?!
  • Would like to specify: do not interrupt Spotify / Skype (or know: do downloads disturb Spotify / Skype or not?)
  • These were just two examples
    • Downloads can also have different priorities
    • When I download two files, I try to guess whether the downloads slow each other down

  15. Opinions: 139 of my work colleagues, students, and Facebook “friends”

  16. Ingredients of this controlled-fairness soup
  • Shared bottleneck detection
    • For the user: know about the mutual influence of transfers
    • For upload and download: control fairness only among flows that share a bottleneck
  • Coupled congestion control for separate flows
    • Solutions exist (CM, TCB interdependence (RFC 2140))
  • Alternative to coupled cc.: tunable-aggression TCP
    • Solutions exist
  • E2E signaling of fairness requirements
    • Doesn't exist?!
  • ... and a GUI that shows transfers by application; existing tools can do that

  17. Coupling cc. of separate flows
  • Does something very wrong if flows do NOT share the same bottleneck!
  • But if they do, a ton of potentially very positive side-effects
    • Less delay
    • Less packet loss
    • Less signaling (N flows don't need N * feedback about the same path)
    • More controllable behavior (sender-side scheduling vs. "fighting it out" on the bottleneck)
    • Better performance for short or application-limited flows (TCP is use-it-or-lose-it; with shared congestion control, if flow 1 doesn't use it, maybe flow 2 does. Skip slow start: again, less queuing delay from slow-start overshoot)

  18. IETF RMCAT proposal: "Flow State Exchange" (FSE)
  • The result of searching for minimum-necessary standardization: only define what goes in / out and how data are maintained
  • Could reside in a single app (e.g. browser) and/or in the OS
  [Figure: three designs, each coordinating Stream 1 and Stream 2: a traditional CM, an FSE-based CM, and another possible implementation of flow coordination]
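
A toy sketch of the FSE idea (store per-flow state, redistribute the aggregate by priority on every update); the update rule below is a simplified stand-in, not the exact RMCAT-specified behavior:

```python
# The FSE only defines what goes in / out and how per-flow state is kept:
# flows register with a priority, and each congestion-control update
# triggers a priority-weighted redistribution of the aggregate rate.
class FSE:
    def __init__(self):
        self.flows = {}                  # flow_id -> {'prio': p, 'rate': r}

    def register(self, flow_id, priority, initial_rate):
        self.flows[flow_id] = {'prio': priority, 'rate': initial_rate}

    def update(self, flow_id, cc_rate):
        """One flow's cc measured cc_rate; share the aggregate by priority
        and return the new sending rates for ALL coupled flows."""
        self.flows[flow_id]['rate'] = cc_rate
        total = sum(f['rate'] for f in self.flows.values())
        prios = sum(f['prio'] for f in self.flows.values())
        for f in self.flows.values():
            f['rate'] = total * f['prio'] / prios
        return {fid: f['rate'] for fid, f in self.flows.items()}

fse = FSE()
fse.register('audio', priority=2, initial_rate=1e6)
fse.register('video', priority=1, initial_rate=1e6)
print(fse.update('video', cc_rate=2e6))   # audio gets 2/3 of the aggregate
```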

  19. Coupled cc: thinking ahead
  • If we can relax the update frequency, can we coordinate rates across hosts?
  • May need to detect shared bottlenecks on both sides
  [Figure: hosts H1, H2, H3]

  20. Conclusion
  • Some SBD mechanisms exist
    • Real-life validation is urgently needed!
  • A long list of potential applications
    • Better cc for multi-path transfers, or decisions to apply multi-path, e.g. in an overlay
    • Predicting file transfer delay; important input for scheduling in distributed processing
    • Deeper knowledge about a net; good for monitoring or even QoS(-ish) mechanisms
      • At the "least QoS" end of the spectrum, just tell users which flows influence each other and let them apply priorities
    • Coupled congestion control
      • Many possibilities there, e.g. cross-host control

  21. Thank you! Questions?
