
Network Performance Management using End-to-end Mechanisms



  1. Network Performance Management using End-to-end Mechanisms • Prashant Pradhan, Debanjan Saha, Sambit Sahu (IBM Research) • Manpreet Singh (Cornell)

  2. Network Performance Management • Fundamentally harder than CPU/storage management due to the network's distributed nature and decentralized control structure • Why is it still an unsolved problem after decades of research? • Traditionally framed as a problem of providing QoS knobs inside the network • Knobs set by applications with the help of brokers aware of network state • Overkill from the point of view of network providers • Most applications have elastic traffic demands • The requirements of a limited set of applications simply do not warrant the resulting complexity in network infrastructure and pricing

  3. Network Performance Management • Still, network performance is a significant component of the end-to-end performance of any performance-sensitive application • Network bottlenecks often dominate server/storage bottlenecks

  4. Re-thinking the problem • The network provides time-varying bandwidth/delay on various network paths • View this as a constraint • An application has traffic demands between its endpoints, with a certain performance requirement • Endpoint performance control can be exercised by mapping this demand intelligently onto the available network paths

  5. End-to-end knobs for mapping traffic onto network paths • Server selection • ISP selection (illustrated as a choice between Sprint and AT&T) • Overlay routing

  6. Challenges and goals restated • Network is a difficult resource to manage • Distributed resource with decentralized control (unlike CPU/storage) • No monitoring/control knobs typically available • Develop a service to which applications can delegate network performance management • Functions of this service • Monitor network capacity and application demand • Plan the setting of the available control knobs • Deploy the plan by performing the requisite control and configuration actions • Research challenge: is it possible to do this end-to-end, without any control over the network?

  7. Overall architecture • [Architecture diagram] A Resource Broker spanning end systems and the network: NRM Agents at the endpoints feed measurements and events to a Data Acquisition component; Per-application Optimization Solvers consume the resulting annotated topology and traffic demand matrix; a Deployment Engine orchestrates the output (overlay routing tables, server selection maps) back to the agents
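To make the data flow between these components concrete, here is a minimal Python sketch of the broker's monitor-plan-deploy cycle. Every class and method name below is an illustrative assumption; the talk does not specify the prototype's API.

```python
# Minimal sketch of the Resource Broker cycle implied by slides 6-7.
# All names here are illustrative assumptions, not the prototype's API.

class ResourceBroker:
    def __init__(self, acquisition, solver, deployment_engine):
        self.acquisition = acquisition              # collects measurements/events from NRM agents
        self.solver = solver                        # per-application optimization solver
        self.deployment_engine = deployment_engine  # pushes knob settings back to NRM agents

    def cycle(self):
        # Monitor: build the planner's two inputs.
        topology = self.acquisition.annotated_topology()
        demand = self.acquisition.traffic_demand_matrix()
        # Plan: compute a setting of the end-to-end control knobs.
        plan = self.solver.solve(topology, demand)
        # Deploy: overlay routing tables, server-selection maps, etc.
        self.deployment_engine.deploy(plan)
```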

  8. Network Monitoring • Key inputs to planning • Underlying graph/topology connecting the application endpoints • Routing constraints on the paths • Available bandwidth/delay on the graph edges • Key challenge: edge bandwidth annotation must be done by running flow experiments end-to-end

  9. Key elements of monitoring solution • An edge can be annotated with its available bandwidth by saturating the edge • The number of flows needed to saturate an edge must be minimized • Iteratively compute max flows of graph edges, using dynamic programming • Bottleneck identification (i.e., detecting a saturated edge using endpoint measurements) is a key primitive that drastically reduces the number of steps in the algorithm (a runnable sketch follows slide 12)

  10. Annotation Algorithm Illustration • [Figure: base graph] A tree on nodes N0–N7 with edges E1–E7 annotated with capacities: E1 = 10, E2 = 20, E3 = 15, E4 = 20, E5 = 5, E6 = 5, E7 = 20

  11. Basic algorithm • [Figure: animation of the basic algorithm on the base graph; measured flow rates of 10 and 5 appear step by step as edges are saturated one at a time]

  12. With bottleneck identification • [Figure: the same annotation carried out in far fewer steps, since every edge detected as a bottleneck in an experiment is annotated]
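To make slides 9–12 concrete, here is a small runnable Python sketch. It simulates one end-to-end flow experiment on a tree roughly matching slide 10 (the exact connectivity is an assumption, since the figure did not survive extraction), with a max-min fair allocation standing in for what concurrent TCP flows would achieve, and shows how bottleneck identification lets a single experiment annotate several edges at once.

```python
from collections import defaultdict

# Hypothetical tree roughly matching slide 10 (connectivity assumed):
# root N0 -> N1 (E1) and N0 -> N2 (E2); N1 -> N3/N4 (E3/E4);
# N2 -> N5/N6/N7 (E5/E6/E7). Capacities in Mbps from the slide.
CAPACITY = {"E1": 10, "E2": 20, "E3": 15, "E4": 20, "E5": 5, "E6": 5, "E7": 20}

# One flow from N0 to each leaf; a path is the set of edges it crosses.
PATHS = [
    {"E1", "E3"},  # N0 -> N3
    {"E1", "E4"},  # N0 -> N4
    {"E2", "E5"},  # N0 -> N5
    {"E2", "E6"},  # N0 -> N6
    {"E2", "E7"},  # N0 -> N7
]

def run_experiment(paths, capacity):
    """Simulate running all flows at once. Progressive filling computes the
    max-min fair rates, a stand-in for what concurrent TCP flows would get.
    Returns per-flow rates and the set of saturated (bottleneck) edges."""
    residual = dict(capacity)
    rates = [0.0] * len(paths)
    active = set(range(len(paths)))
    saturated = set()
    while active:
        load = defaultdict(int)
        for i in active:
            for e in paths[i]:
                load[e] += 1
        # The edge that fills first as all active flows grow in lock-step.
        e_star = min(load, key=lambda e: residual[e] / load[e])
        inc = residual[e_star] / load[e_star]
        for i in active:
            rates[i] += inc
            for e in paths[i]:
                residual[e] -= inc
        saturated.add(e_star)
        active = {i for i in active if e_star not in paths[i]}
    return rates, saturated

rates, saturated = run_experiment(PATHS, CAPACITY)
# Bottleneck identification: every edge detected as saturated is annotated
# with the summed rates of the flows crossing it -- several edges at once.
for e in sorted(saturated):
    bw = sum(r for r, p in zip(rates, PATHS) if e in p)
    print(f"{e} annotated with {bw:g} Mbps")
# One experiment annotates E1 = 10, E2 = 20, E5 = 5, E6 = 5.
```

Without bottleneck identification, the same run would pin down only the one edge the experiment was designed to saturate; here the remaining edges (E3, E4, E7) each need a follow-up experiment with fewer competing flows.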

  13. Solution Illustrated on the IBM IntraGrid

  14. Network Planning • NRM provides applications an interface to • Register their endpoints • Register their bandwidth and delay requirements • Notify <source, destination, size> tuples for the transfers they make • Allows NRM to build a traffic demand matrix (a sketch of such an interface follows this slide) • NRM planner uses the annotated topology and traffic demand matrix to formulate an optimization problem • Formulation is specific to the end-to-end control knob • Refer to the paper for formulations for the three knobs considered • Solution is a setting of the end-to-end control knob
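A minimal sketch of what this registration interface might look like, assuming an in-process Python API; all names are illustrative, not the actual NRM interface.

```python
from collections import defaultdict

class NRMInterface:
    """Hypothetical sketch of the application-facing NRM interface;
    the method names are assumptions based on the slide's bullets."""

    def __init__(self):
        self.endpoints = set()
        self.requirements = {}            # (src, dst) -> bandwidth/delay goals
        self.demand = defaultdict(float)  # (src, dst) -> bytes notified

    def register_endpoint(self, name):
        self.endpoints.add(name)

    def register_requirement(self, src, dst, bw_mbps=None, delay_ms=None):
        self.requirements[(src, dst)] = {"bw_mbps": bw_mbps, "delay_ms": delay_ms}

    def notify_transfer(self, src, dst, size_bytes):
        # Accumulated <source, destination, size> notifications form the
        # traffic demand matrix the planner consumes.
        self.demand[(src, dst)] += size_bytes

nrm = NRMInterface()
for ep in ("ny-server", "il-client"):
    nrm.register_endpoint(ep)
nrm.register_requirement("ny-server", "il-client", bw_mbps=2)
nrm.notify_transfer("ny-server", "il-client", 50_000_000)
```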

  15. Plan deployment • Deployment of an end-to-end knob setting is delegated to NRM agents running at the application endpoints • Examples: • Server selection output → DNS mappings • ISP selection → routing table configuration in the front-end switch • Overlay routing → routing tables for overlay nodes (a DNS sketch follows)
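As one example of a deployment action, here is a hypothetical sketch of an NRM agent turning a server-selection plan into DNS mappings. The zone-record syntax is standard DNS; the plan structure and names are assumptions.

```python
# Hypothetical server-selection plan: client group -> chosen server IP
# (documentation addresses used for illustration).
PLAN = {
    "ca-clients": "192.0.2.10",    # serve CA clients from the CA site
    "ny-clients": "198.51.100.7",  # serve NY clients from the NY site
}

def to_zone_records(plan, service="media.example.com", ttl=60):
    """Emit DNS A records for the plan. A short TTL lets the agent
    re-point clients quickly when the planner produces a new map."""
    for group, ip in sorted(plan.items()):
        yield f"{group}.{service}. {ttl} IN A {ip}"

for record in to_zone_records(PLAN):
    print(record)
# ca-clients.media.example.com. 60 IN A 192.0.2.10
# ny-clients.media.example.com. 60 IN A 198.51.100.7
```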

  16. Application Scenarios • Demand-driven media distribution tree creation for IBM VideoCharger • Distribution/mixer tree layout for a peer-to-peer voice conferencing service • Key metrics: • Impact of using an averaged network capacity map on application-specific metrics • Effectiveness in the real Internet

  17. A media distribution service • [Figure: three sites, each with servers and storage, serving a set of clients]

  18. Virtualized view of IT assets • [Figure: the same sites, with the network drawn explicitly between the server/storage sites and the clients]

  19. Network mapping • [Figure: the server/storage sites mapped to locations CA, NY, and IL on the network connecting them to the clients]

  20. Network annotation • [Figure: the CA/NY/IL topology with edges annotated with measured available bandwidths: 10 Mbps, 10 Mbps, 3 Mbps, 2 Mbps, 1 Mbps, 0.4 Mbps, 0.2 Mbps, 0.2 Mbps]

  21–23. Network planning • [Figures: three animation steps over the same annotated CA/NY/IL topology, illustrating the planning phase]

  24. Server Provisioning • [Figure: the same annotated topology, used to drive server provisioning decisions]

  25. Storage Provisioning • [Figure: the same annotated topology, used to drive storage provisioning decisions]

  26. Status • Being prototyped as part of an IBM resource provisioning product • Manages server, network, and storage resources to meet end-to-end application performance goals • Netmapper implementation is available • Application trace-driven evaluation over the real Internet and PlanetLab is underway

  27. Conclusion • Network performance management should be viewed as a constrained resource planning problem, as opposed to a problem of providing/setting QoS knobs in the network • The NRM project is trying to validate this premise by developing the needed technologies and providing a proof-of-concept
