
GD-Aggregate: A WAN Virtual Topology Building Tool for Hard Real-Time and Embedded Applications



Presentation Transcript


  1. GD-Aggregate: A WAN Virtual Topology Building Tool for Hard Real-Time and Embedded Applications Qixin Wang*, Xue Liu**, Jennifer Hou*, and Lui Sha* *Dept. of Computer Science, UIUC **School of Computer Science, McGill Univ.

  2-17. Demand
  • Big Trend: converge computers with the physical world
  • Cyber-Physical Systems
  • Real-Time and Embedded (RTE) GENI
  • Virtual Organization
  • Calls for RTE-WAN with the following features:
    • Scalability:
      • Similar traffic aggregation;
      • Global/local traffic segregation;
      • Network hierarchy and modularity, which also assists composability, dependability, debugging, etc.;
    • Configurability:
      • Runtime behavior regulation;
    • Flexibility:
      • Ease of reconfiguration;
    • Hard Real-Time E2E Delay Guarantee

  18-23. Solution? The Train/Railway Analogy
  • Similar traffic aggregation: carriage → train
  • Global/local traffic segregation: express vs. local train
  • Hierarchical topology: express vs. local train
  • Configuration: routing, capacity planning
  • Flexibility: change the train planning, not the railway

  24-28. The Equivalent of Train in Network?
  • An aggregate (of flows) is like a train
  • Sender End Node: merges member flows into the aggregate
  • Receiver End Node: disintegrates the aggregate into the original flows
  • Intermediate Nodes: only forward the aggregate packets
  [Figure: end nodes A, B, C. Legend: Aggregate, Member Flow, End Node, Intermediate Node]

  29-36. The Equivalent of Train in Network?
  • An aggregate (of flows) is like a train
  • Packets of member flows → carriages
  • Sender End Node: assembles the carriages into a train
  • Receiver End Node: disassembles the train into carriages
  • Intermediate Nodes: only forward the train, but cannot add/remove carriages
  • Forwarding (routing) on a per-train basis, not a per-carriage basis
  • Local Train: few hops (physical links)
  • Express Train: many hops
  [Figure: end nodes A, B, C. Legend: Aggregate, Member Flow, End Node, Intermediate Node]
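The train analogy above can be sketched in code. This is an illustrative sketch only; all names (Packet, AggregatePacket, sender_end_node, intermediate_node, receiver_end_node) are hypothetical, not from the paper.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Packet:
    flow_id: str      # which member flow this "carriage" belongs to
    payload: bytes

@dataclass
class AggregatePacket:
    aggregate_id: str        # the "train": intermediate nodes route on this alone
    carriages: List[Packet]

def sender_end_node(aggregate_id: str, packets: List[Packet]) -> AggregatePacket:
    """Assemble member-flow packets (carriages) into one aggregate (train)."""
    return AggregatePacket(aggregate_id, list(packets))

def intermediate_node(agg: AggregatePacket, table: Dict[str, str]):
    """Forward per train, never per carriage: next hop is looked up by
    aggregate id only, and the carriages are left untouched."""
    return table[agg.aggregate_id], agg

def receiver_end_node(agg: AggregatePacket) -> Dict[str, List[Packet]]:
    """Disassemble the train back into its member flows."""
    by_flow: Dict[str, List[Packet]] = {}
    for p in agg.carriages:
        by_flow.setdefault(p.flow_id, []).append(p)
    return by_flow

# Two member flows f1, f2 ride one aggregate from end node A to end node C via B.
train = sender_end_node("A->C", [Packet("f1", b"x"), Packet("f2", b"y")])
next_hop, train = intermediate_node(train, {"A->C": "B"})
flows = receiver_end_node(train)
```

The point of the sketch is the asymmetry: only the end nodes ever look inside the aggregate, so intermediate routing state grows with the number of trains, not the number of carriages.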

  37-38. Virtual Link/Topology
  • Aggregates with the same sender and receiver end nodes collectively embody a virtual link
  • Many virtual links together build up the virtual topology
  [Figure: end nodes A, B, C; aggregates F1, F2, F3; thickness implies the aggregate's data throughput. Legend: Aggregate, Virtual Link, End Node, Intermediate Node]
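The virtual-link construction can be sketched as grouping aggregates by their end-node pair; treating a link's capacity as the sum of its aggregates' throughputs is an assumption suggested by the figure (thickness implies throughput), and the endpoint labels below are hypothetical.

```python
from collections import defaultdict
from typing import Dict, List, Tuple

def virtual_topology(aggregates: List[Tuple[str, str, float]]) -> Dict[tuple, float]:
    """Group aggregates by (sender end node, receiver end node).
    Each group is one virtual link; the set of links is the virtual topology."""
    links: Dict[tuple, float] = defaultdict(float)
    for sender, receiver, throughput in aggregates:
        links[(sender, receiver)] += throughput
    return dict(links)

# Hypothetical endpoints: two aggregates run A->C, one runs A->B.
topo = virtual_topology([("A", "C", 10.0), ("A", "C", 5.0), ("A", "B", 3.0)])
```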

  39-40. State-of-the-Art: GR-Aggregate
  • How to build a virtual link with a hard real-time E2E delay guarantee?
  • [SunShin05]: Guaranteed Rate Aggregate (GR-Aggregate)

  41-48. State-of-the-Art: GR-Aggregate
  Guaranteed Rate Server (GR-Server) [Goyal97a]: a queueing server S is a GR-Server as long as there exists a constant r_f (called the guaranteed rate) for each of its flows f, such that

    L(p_f^j) ≤ GRSFunc(p_f^j, r_f) + β  for some constant β,

  where:
  • p_f^j: jth packet of flow f
  • L(p): time when packet p leaves S
  • r_f: guaranteed rate
  • GRSFunc: a specific function, called GRSFunc
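The GR-Server bound can be made concrete by computing the guaranteed-rate clock for a flow's packets. The recurrence below is the standard one from the guaranteed-rate literature, assumed here to be the slide's GRSFunc; function and variable names are illustrative.

```python
from typing import List

def grs_deadlines(arrivals: List[float], lengths: List[float], r_f: float) -> List[float]:
    """Guaranteed-rate clock values for one flow's packets, per the standard
    recurrence (assumed to be the slide's GRSFunc):
        GRSFunc(p^j) = max(A(p^j), GRSFunc(p^(j-1))) + l^j / r_f,  GRSFunc(p^0) = 0
    A GR-Server must emit packet j no later than GRSFunc(p^j) plus a fixed
    server-dependent constant."""
    clock = 0.0
    out = []
    for a, l in zip(arrivals, lengths):
        clock = max(a, clock) + l / r_f
        out.append(clock)
    return out

# Three packets of length 2 at rate r_f = 1: two back-to-back, one arriving late.
print(grs_deadlines([0, 0, 5], [2, 2, 2], 1.0))  # [2.0, 4.0, 7.0]
```

Note how a late arrival (the third packet) resets the clock to its arrival time, so an idle flow cannot bank credit.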

  49-50. State-of-the-Art: GR-Aggregate
  • [Goyal97a] proves WFQ and WF2Q are GR-Servers, with r_f = φ_f · C, where φ_f is the weight of flow f (note φ_f ≤ 1), and C is the server output capacity.
  • [SunShin05]: GR-Aggregate based Virtual Link:
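The rate assignment r_f = φ_f · C is a one-liner; a minimal sketch, assuming weights are normalized so they sum to at most 1 (the aggregate names are hypothetical):

```python
from typing import Dict

def guaranteed_rates(weights: Dict[str, float], capacity: float) -> Dict[str, float]:
    """Per-flow guaranteed rates r_f = phi_f * C for a WFQ/WF2Q server of
    output capacity C. Assumes the weights do not overbook the server."""
    assert sum(weights.values()) <= 1.0, "weights overbook the server"
    return {flow: phi * capacity for flow, phi in weights.items()}

# A 1 Gb/s link split between two hypothetical aggregates.
rates = guaranteed_rates({"agg1": 0.5, "agg2": 0.25}, 1e9)  # agg1 -> 5e8 b/s
```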
