
TCOM 541



Presentation Transcript


  1. TCOM 541 Session 2

  2. Mesh Network Design • Algorithms for access are not suitable for backbone design • Access designs generally are trees – sites connect to the center • Diverse access (redundancy) is another question, and is only needed for special situations • Backbone designs require many-to-many connectivity

  3. MENTOR Algorithm • “High quality, low complexity” algorithm • Originally developed for time division multiplexing • Works with other technologies

  4. MENTOR Algorithm (2) • Assume initially only a single link type of capacity C • Divide sites into backbone sites and end sites • Backbone sites are aggregation points • Several algorithms to do this • Threshold clustering is used

  5. Threshold Clustering • Weight W(i) of a site is the sum of all traffic into and out of the site • Normalized weight of site i is NW(i) = W(i)/C • Sites with NW(i) > W are made into backbone sites • Where W is a threshold parameter

  6. Threshold Clustering (2) • All sites that do not meet the weight criterion and are close to a backbone site are made into end sites • “Close” means the link cost from end site e to a backbone site Ni is less than a predefined fraction of the maximum link cost MAXCOST = max over i,j of cost(Ni,Nj), i.e. cost(e,Ni) < RPARM*MAXCOST
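To make slides 5–6 concrete, here is a minimal Python sketch of threshold clustering. The data structures (a traffic dict keyed by site pairs and a cost function) are illustrative assumptions, not part of the original material.

```python
from itertools import combinations

def threshold_clustering(sites, traffic, cost, C, W, RPARM):
    """MENTOR threshold clustering sketch (slides 5-6).

    sites   : iterable of site names
    traffic : dict {(i, j): traffic sent from i to j}
    cost    : function cost(i, j) giving the link cost between sites i and j
    C       : capacity of the single link type
    W       : normalized-weight threshold
    RPARM   : fraction of MAXCOST that defines "close"
    """
    sites = set(sites)

    # Weight of a site is the sum of all traffic into and out of it
    weight = {s: sum(t for (i, j), t in traffic.items() if s in (i, j))
              for s in sites}

    # Sites with normalized weight NW(i) = W(i)/C above the threshold
    # become backbone sites
    backbone = {s for s in sites if weight[s] / C > W}

    # MAXCOST = max over all site pairs of the link cost
    maxcost = max(cost(i, j) for i, j in combinations(sites, 2))

    # Non-backbone sites within RPARM*MAXCOST of some backbone site become
    # end sites; anything left over is handled later by the merit step
    end_sites = {e for e in sites - backbone
                 if any(cost(e, b) < RPARM * maxcost for b in backbone)}

    unassigned = sites - backbone - end_sites
    return backbone, end_sites, unassigned
```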

  7. Threshold Clustering (3) • If all sites that pass the weight limit as backbone sites have been chosen and there are still edge sites “too far” from any backbone site, we assign a “merit” to each site • Assign coordinates to each site (e.g., V&H) • Compute center of gravity of sites

  8. Center of Gravity (CG) • Defined as (xctr, yctr) where xctr = Σn xn*Wn / Σn Wn and yctr = Σn yn*Wn / Σn Wn • Note: these coordinates need not correspond to any actual site

  9. Distances to CG • Define dcn = sqrt[(xn - xctr)² + (yn - yctr)²], maxdc = max(dcn), maxW = max(Wn) • Then merit(n) = 0.5*(maxdc - dcn)/maxdc + 0.5*(Wn/maxW) • That is, “merit” gives equal value to a node’s proximity to the center and to its weight
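A small sketch of the center-of-gravity and merit computations on slides 8–9; the input format (dicts of site coordinates and weights) is an assumption made for illustration.

```python
import math

def merit_scores(coords, weight):
    """Compute merit(n) for each site (slides 8-9).

    coords : dict {site: (x, y)}, e.g. V&H coordinates
    weight : dict {site: W(n)}, the traffic weight of each site
    """
    total_w = sum(weight.values())

    # Center of gravity: traffic-weighted mean of the site coordinates
    xctr = sum(x * weight[s] for s, (x, y) in coords.items()) / total_w
    yctr = sum(y * weight[s] for s, (x, y) in coords.items()) / total_w

    # Distance of each site from the center of gravity
    dc = {s: math.hypot(x - xctr, y - yctr) for s, (x, y) in coords.items()}
    maxdc = max(dc.values())
    maxw = max(weight.values())

    # merit(n) values proximity to the center and traffic weight equally
    return {s: 0.5 * (maxdc - dc[s]) / maxdc + 0.5 * weight[s] / maxw
            for s in coords}
```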

  10. MENTOR Algorithm (3) • From among the remaining nodes, choose the one with the highest merit as a backbone node • Continue until all nodes are either backbone nodes or within RPARM*MAXCOST of a backbone node • Select the backbone node with the smallest moment to be the center • Moment(n) = Σn* dist(n,n*) Wn* • Construct a Prim-Dijkstra tree from the center, with parameter α
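The moment test used to pick the center (slide 10) can be sketched in the same style; here dist is assumed to be a function giving the distance between two sites.

```python
def pick_center(backbone, weight, dist):
    """Return the backbone node with the smallest moment (slide 10).

    Moment(n) = sum over all other nodes n* of dist(n, n*) * W(n*),
    so the chosen center is the backbone node "closest" to the traffic.
    """
    def moment(n):
        return sum(dist(n, m) * weight[m] for m in weight if m != n)

    return min(backbone, key=moment)
```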

  11. MENTOR Example [figure: sites plotted with the center of gravity (C*G) marked; circles of radius RPARM*MAXCOST around backbone nodes; edge nodes and backbone nodes labelled]

  12. MENTOR Example (2) [figure: as above, next step of the backbone selection]

  13. MENTOR Example (3) [figure: as above]

  14. MENTOR Example (4) [figure: as above]

  15. MENTOR Example (5) [figure: as above]

  16. Need for Improvement • As we know, tree designs have several drawbacks, especially for large networks • Lack of redundancy increases probability of failure • Chain-like network (low α) • Aggregation of traffic in “central” links raises costs • Large average hop count in large networks • Star-like network (high α) • May have low link utilization

  17. Refining the Design in MENTOR • We introduce the concepts of sequencing and homing, which improve the design by adding direct links where the traffic justifies them • Use the Prim-Dijkstra tree to define a sequencing of the sites • A sequencing is an outside-in ordering • Do not sequence the pair (N1,N2) until all pairs (N1*,N2*) have been sequenced where N1 and N2 lie on the path between N1* and N2* • Roughly, the longest paths get sequenced first

  18. Example of Sequencing • Sequence: AE, AF, BE, BF, CE, CF, DA, DB, AC, BC, …, DF [figure: example tree with nodes A–F, with pair paths of 3 hops, 2 hops, and 1 hop marked]
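One simple way to obtain a valid outside-in sequencing is to sort node pairs by decreasing hop count of their path in the Prim-Dijkstra tree: if N1 and N2 both lie on the tree path between a different pair N1*, N2*, that containing path has strictly more hops, so it is sequenced earlier. The sketch below assumes the tree is given as an adjacency dict; the example tree at the end is hypothetical, not the one drawn on slide 18.

```python
from collections import deque
from itertools import combinations

def tree_hops(tree, src):
    """Hop count from src to every other node of a tree given as
    an adjacency dict {node: [neighbors]}."""
    hops = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in tree[u]:
            if v not in hops:
                hops[v] = hops[u] + 1
                queue.append(v)
    return hops

def sequence_pairs(tree):
    """Return node pairs ordered outside-in (slides 17-18).

    Sorting by decreasing tree hop count guarantees that a pair is not
    sequenced until every pair whose tree path contains it has been
    sequenced, because the containing path always has more hops.
    """
    hops = {u: tree_hops(tree, u) for u in tree}
    pairs = list(combinations(sorted(tree), 2))
    return sorted(pairs, key=lambda p: hops[p[0]][p[1]], reverse=True)

# Usage with a small hypothetical tree
tree = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"],
        "D": ["C", "E", "F"], "E": ["D"], "F": ["D"]}
print(sequence_pairs(tree))
```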

  19. Comments on Sequences • Sequences are not unique • Different (valid) sequences do not influence the design greatly

  20. Homing • For each pair of nodes (N1, N2) that are not adjacent we select a home • If 2 hops separate N1 and N2, the home is the node between them • If they are more than 2 hops apart there are multiple candidates for their home

  21. Homing (2) [figure: tree path from N1 to N2, with N4 and N3 shown as the two candidates for home(N1,N2)] • Choose N3 as home(N1,N2) if Cost(N1,N3) + Cost(N3,N2) < Cost(N1,N4) + Cost(N4,N2); otherwise choose N4
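A sketch of the homing rule as I read slides 20–21: for a pair more than two hops apart, the candidates are taken to be the tree-path neighbors of N1 and of N2, and the one giving the cheaper two-link detour is chosen. The tree path and cost function are assumed to come from the earlier steps.

```python
def choose_home(path, cost):
    """Pick home(N1, N2) for a non-adjacent pair (slides 20-21).

    path : list of nodes on the tree path [N1, ..., N2]
    cost : function cost(i, j) giving the link cost between two sites
    """
    n1, n2 = path[0], path[-1]
    if len(path) == 3:
        return path[1]            # exactly 2 hops apart: the node between them

    cand_a = path[1]              # tree-path neighbor of N1 (assumed candidate)
    cand_b = path[-2]             # tree-path neighbor of N2 (assumed candidate)

    # Choose the candidate that minimizes the cost of routing the
    # N1-N2 traffic through it
    if cost(n1, cand_b) + cost(cand_b, n2) < cost(n1, cand_a) + cost(cand_a, n2):
        return cand_b
    return cand_a
```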

  22. Last Step • Consider each node pair only once, add a link if it will carry enough traffic to justify itself • Consider the traffic matrix T(Ni,Nj) • Assume it is symmetric • Recall that MENTOR was developed to design TDM networks, and muxes are bi-directional (usually)

  23. Last Step (2) • For each pair (N1,N2), execute the following algorithm: • If capacity of a link is C, compute • n = ceil[T(N1,N2)/C] • Compute utilization • u = T(N1,N2)/(n*C) • Add link if u > umin, otherwise move traffic 1 hop through the network • I.e., add T(N1,N2) to both T(N1,H) and T(H,N2) • And do same for T(N2,N1) • Note – there is a special case when (N1,N2) belongs to the original tree • In this case just add the link (N1,N2) to the design
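The link-adding test on slide 23 follows directly from the formulas. A minimal sketch, assuming the traffic matrix is a symmetric dict and that the homes and pair ordering come from the previous steps; pairs that are edges of the original tree are the special case noted above and always get their link.

```python
import math

def try_direct_link(n1, n2, traffic, home, C, umin, design):
    """Decide whether to add a direct link for pair (n1, n2) (slide 23).

    traffic : dict {(i, j): offered traffic}, assumed symmetric
    home    : dict {(i, j): home node H for the pair}
    C       : capacity of one link; umin: minimum acceptable utilization
    design  : dict {(i, j): number of parallel links}, built up as we go
    """
    t = traffic.get((n1, n2), 0.0)
    if t == 0.0:
        return

    n = math.ceil(t / C)          # number of parallel links needed, ceil[T/C]
    u = t / (n * C)               # utilization u = T/(n*C) if we add them

    if u > umin:
        design[n1, n2] = design[n2, n1] = n          # direct link justified
    else:
        # Not justified: move the traffic one hop through the home node,
        # i.e. add T(N1,N2) to T(N1,H) and T(H,N2), and likewise for T(N2,N1)
        h = home[n1, n2]
        t_rev = traffic.get((n2, n1), 0.0)
        traffic[n1, h] = traffic.get((n1, h), 0.0) + t
        traffic[h, n2] = traffic.get((h, n2), 0.0) + t
        traffic[n2, h] = traffic.get((n2, h), 0.0) + t_rev
        traffic[h, n1] = traffic.get((h, n1), 0.0) + t_rev
```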

  24. Comments • The link-adding algorithm aggregates traffic to justify links between nodes that are multiple hops apart • If traffic between N1 and N2 cannot justify a direct link, it is routed through their home node H • Eventually, in large networks, enough traffic is aggregated to justify a direct link

  25. Comments (2) • Performance of MENTOR is governed by the utilization parameter umin and the Prim-Dijkstra tree-building parameter α • How easy it is to add new links is controlled by umin • The shape of the initial tree is controlled by α • High α will build a star-like tree – then links will be added only between site pairs that have enough traffic without help from other nodes • Low α will build a more chain-like tree, so there will be more aggregation of traffic and likely addition of links

  26. Performance of MENTOR • Low-cost algorithm • Three main steps: • Backbone selection • Tree building • Link addition • All of these are O(n²) • Possible to re-run many times, varying parameters

  27. MENTOR Example • Based on mux1.inp on Cahn’s FTP site • 15 sites, 60 circuits of 256 kbps [figure: the 15 sites, numbered 1–15, plotted on a map]

  28. Initial Choice of Backbone Nodes (5) [figure: the 15-site map with the 5 sites chosen as backbone nodes highlighted]

  29. Initial Design • α = 0 • Cost = $269,785/month [figure: the initial tree design; link labels show parallel T1 counts, e.g. 5 x T1 and 2 x T1]

  30. Review of Initial Design • Backbone links have multiple (5) T1 links • Probably not a good thing • Design Principle: • If a design has multiple parallel high-speed links there is usually a better, meshier design • Lower cost, greater diversity (= reliability) • Note this is not mathematically provable

  31. Revised Design • umin = 0.7 • Cost = $221,590 [figure: the revised design; link labels show the number of T1s on each link (1–3)]

  32. “Best” 5-Node Backbone Design • α = 0.1 • umin = 0.9 • Cost = $209,220 [figure: the resulting design; link labels show the number of T1s on each link (1–2)]

  33. Comments • Note that we produced multiple designs by varying some parameters and picking the best • Of course, there is no guarantee that this design really is “best” • In fact, changing number of backbone nodes yields much better designs • 13-node backbone yields design costing only $191,395 • 12-node backbone costs $198,975

  34. Routing • Now that we have designed a good network, we consider how the traffic will actually flow across it • This introduces a whole new class of problems that center on the performance of the routing algorithms

  35. Feasibility Considerations • For any pair of nodes N0 and N1, define a route by (N0, N1, h, n), where n = 0 if h is adjacent to N0 and n = 1 if h is adjacent to N1 • If N0 and N1 are adjacent, we have a direct route • Else the route is the link (Nn, h) plus the route (N(1-n), h, h*, n*) • Continue until the full route is established
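The recursive route definition on slide 35 can be written as a short function. The `home` table and adjacency test are assumed to be available from the MENTOR design, with an entry for every non-adjacent pair that is encountered; the route is returned as a list of links.

```python
def build_route(n0, n1, adjacent, home):
    """Expand the feasible route for (n0, n1) into a list of links (slide 35).

    adjacent : function adjacent(a, b) -> True if link a-b is in the design
    home     : dict {(a, b): (h, n)} where h is the home of the pair and
               n = 0 if h is adjacent to a, n = 1 if h is adjacent to b
    """
    if adjacent(n0, n1):
        return [(n0, n1)]                 # direct route

    h, n = home[n0, n1]
    ends = (n0, n1)
    # Take the link from the endpoint that h is adjacent to, then recurse
    # on the remaining (other endpoint, home) pair
    return [(ends[n], h)] + build_route(ends[1 - n], h, adjacent, home)
```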

  36. Feasibility Considerations • This process establishes a feasible routing pattern for the network • However, the muxes may not be smart enough to find this pattern • As an example, consider single-route, minimum-hop (SRMH) routing

  37. An SRMH Disaster [figure: example network with nodes A–I and an added link BF] • Assume MENTOR adds link BF to carry traffic from B to F, G, H, I – but not traffic from F to A, B, C • SRMH insists on carrying all traffic from A, B, C to F, G, H, I over BF – the result is overload on BF

  38. Feasibility and Routing • In reality, few network-loading algorithms are as bad as SRMH • However, network-loading algorithms do add to the design constraints • In particular, minimum-hop routing algorithms are fragile with respect to network capacity changes • Effective algorithms for redesign are not available

  39. A More Realistic Loading Algorithm • Flow-Sensitive, Minimum-Hop (FSMH) loader loads traffic onto a minimum-hop path, subject to using only links with enough free capacity to carry it • Allows overflow onto longer paths • If no path exists, traffic is blocked • However, there is no guarantee that FSMH will do better than SRMH!
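A sketch of an FSMH-style loader: a breadth-first search restricted to links with enough free capacity, so traffic takes a minimum-hop path among feasible ones and overflows onto longer paths (or is blocked) as short paths fill up. The data structures are illustrative assumptions, not from the slides.

```python
from collections import deque

def fsmh_load(demands, capacity):
    """Flow-sensitive minimum-hop loading sketch (slide 39).

    demands  : list of (src, dst, amount), loaded in the given order
    capacity : dict {(a, b): remaining capacity}, both directions present
    Returns the list of blocked demands.
    """
    # Adjacency list built from the link set
    adj = {}
    for a, b in capacity:
        adj.setdefault(a, []).append(b)

    def shortest_feasible_path(src, dst, amount):
        """BFS using only links with at least `amount` of free capacity."""
        parent = {src: None}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            if u == dst:
                path = []
                while parent[u] is not None:
                    path.append((parent[u], u))
                    u = parent[u]
                return list(reversed(path))
            for v in adj.get(u, []):
                if v not in parent and capacity[u, v] >= amount:
                    parent[v] = u
                    queue.append(v)
        return None                       # no feasible path exists

    blocked = []
    for src, dst, amount in demands:
        path = shortest_feasible_path(src, dst, amount)
        if path is None:
            blocked.append((src, dst, amount))    # traffic is blocked
        else:
            for u, v in path:
                capacity[u, v] -= amount          # consume capacity
                capacity[v, u] -= amount          # links treated as bidirectional
    return blocked
```

Note that, as slide 40 shows, loading order matters: a different ordering of `demands` can change which requirements are blocked.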

  40. FSMH Failure Example [figure: four nodes A, B, C, D, each link of capacity 1, and the traffic requirements to be loaded] • SRMH will block the second A–B requirement and load 4 out of the 5 requirements • FSMH will load both A–B requirements, but block all the rest • Note: the order in which traffic is loaded is significant!

  41. Comments on FSMH • In the earlier example (15 sites), FSMH fails on the best designs • 13-node, $191k design blocks 3.3% of traffic • 12-node, $199k design blocks 6.7% of traffic • Best design where FSMH does not block is 11-node, $201k

  42. Approaches • We cannot guarantee that a highly-optimized network design will work with a given routing algorithm • Approaches: • Test the loading algorithm against the best designs • Routing takes more computation than design; raises complexity to between O(n³) and O(n⁴) • Limit maximum link utilization to <100% • Also increases reliability, allows for growth

  43. Router Network Design • Common routing algorithm for IP is OSPF (Open Shortest Path First) • Implicit problem is design for minimum distance • Single-route, minimum distance loader (SRMD) • Computes single shortest path between site pairs • If traffic saturates the route, it’s discarded • Designer chooses link lengths appropriately

  44. SRMD Characteristics • Traffic not forced onto illogical paths if link lengths are chosen properly • Problems can still arise • Not dynamic • Cannot split traffic between different routes

  45. OSPF Example [figure: network with nodes A–I; most links have length 100, one has length 90, and a direct A–H link has length 395; the direct link is intended to carry traffic between A and H and between B and H, but not traffic between A and G] • A–H traffic will take the 1-hop path, length 395 • B–H traffic will take the 2-hop path, length 485 • A–G traffic will take the 5-hop path, length 490

  46. Important Difference • Mux networks are designed for high utilization • Router networks are not designed for high utilization • Allows some margin for error by the routing algorithm

  47. Comments • Can encourage the traffic to use the MENTOR routing as we add edges by setting the length of each tree edge to 100, and the length of a direct edge between N1 and N2 to: 100 + 90*(hops(N1,N2)-1)
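As a worked example of this length rule (illustrative numbers only): a direct edge between two nodes that are 3 tree hops apart gets length 100 + 90*(3-1) = 280, slightly shorter than the 300-length tree path, so the direct edge attracts the traffic MENTOR intended for it while remaining only marginally shorter than the tree route.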

  48. Comments (2) • Any routing algorithm should work for a tree • Problems arise when design becomes more highly meshed • Can manipulate solution by • Increasing length of overloaded links • Shortening under-utilized links • Adding or deleting capacity

  49. Homework Assignment • Cahn Exercises 8.2, 8.6 • Read Cahn Chapter 9
