

Presentation Transcript


  1. Network Architecture Network Design and Analysis Wang Wenjie Wangwj@gucas.ac.cn

  2. Notes on Routing

  3. Topics • Basic router architecture • Overview of routing algorithms • Common routing algorithms • Shortest-path routing algorithms • Stability analysis of adaptive shortest-path routing • Optimal routing • Formulating a Communication Network Flow Problem • Optimal routing and its properties • Solving the optimization problem

  4. Formulating a Communication Network Flow Problem • A Four-Node Network Example • Node-Arc Formulation • Arc-Path Formulation

  5. Node-Arc Formulation (1) • Assume: • Traffic flows only from node 1 to node 4, at a rate of γ pps (packets per second) • Packet lengths are exponentially distributed with mean length 1/μ bits

  6. Node-Arc Formulation (2) • Question: What should the network objective be, as far as delay is concerned? • One possible network objective is to minimize the maximum delay on any link

  7. Arc-Path Formulation (1) • The key assumption: • Generate a set of possible paths between the origin and the destination node a priori • Enumerate the (unknown) flows on paths 1-2-4, 1-2-3-4 and 1-3-4 as y1, y2 and y3, respectively
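To make the arc-path formulation concrete, here is a minimal runnable sketch in Python/scipy for this four-node example. It minimizes the maximum link flow, a linear stand-in for the max-delay objective on the previous slide; the demand value r and the link set are illustrative assumptions, and only the three paths and the variables y1, y2, y3 come from the slide.

```python
# Arc-path LP for the four-node example: route demand r from node 1 to 4
# over paths 1-2-4, 1-2-3-4, 1-3-4 so that the largest link flow is minimized.
from scipy.optimize import linprog

r = 10.0  # assumed demand from node 1 to node 4, in pps

# Path-link incidence: rows = links, columns = paths (y1, y2, y3).
links = ["1-2", "1-3", "2-3", "2-4", "3-4"]
incidence = [
    [1, 1, 0],  # link 1-2 carried by paths 1-2-4 and 1-2-3-4
    [0, 0, 1],  # link 1-3 carried by path 1-3-4
    [0, 1, 0],  # link 2-3 carried by path 1-2-3-4
    [1, 0, 0],  # link 2-4 carried by path 1-2-4
    [0, 1, 1],  # link 3-4 carried by paths 1-2-3-4 and 1-3-4
]

# Variables: y1, y2, y3, t  (t = maximum link flow).
c = [0, 0, 0, 1]                           # minimize t
A_ub = [row + [-1] for row in incidence]   # link flow - t <= 0
b_ub = [0] * len(links)
A_eq = [[1, 1, 1, 0]]                      # y1 + y2 + y3 = r
b_eq = [r]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
print("path flows:", res.x[:3], "max link flow:", res.fun)
```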

  8. Arc-Path Formulation (2) • Another possible objective function: minimize the average delay per packet in the network • Note that still other objectives are possible, depending on the network's goals

  9. General Formulation • There are N nodes and L links in the network • For the arc-path formulation, suppose, on average, there are p paths for each pair • With N(N-1)/2 origin-destination pairs in the multi-commodity case, the problem sizes compare as follows:

                          Node-Arc      Arc-Path
  Single commodity
    # of constraints      N             1
    # of variables        L             p
  Multi-commodity
    # of constraints      N²(N-1)/2     N(N-1)/2
    # of variables        LN(N-1)/2     pN(N-1)/2

  10. Optimal Routing (1) • For a network, suppose that between any node pair w = (S, D), the traffic rw entering at source S can be carried to the destination D simultaneously over several paths. • Let Pw denote the set of all paths between the pair, xp the flow on each path p, and Xw the collection of these path flows. • By definition, the path flows of a pair sum to its input traffic, and every flow is nonnegative: Σ_{p∈Pw} xp = rw, xp ≥ 0 • Let Fij denote the flow on link (i,j): Fij = Σ xp, summed over all paths p that traverse link (i,j)

  11. Optimal Routing (2) • The purpose of optimal routing is to minimize the cost of the network. • Dij is a monotone function giving the cost of each link. A commonly used cost function is Dij(Fij) = Fij/(Cij − Fij) + dij·Fij, where Cij is the capacity of link (i,j) and dij is the link delay (including propagation and processing delay)

  12. Optimal Routing (3) • The goal of optimal routing is then to find the best Xw = {xp} that minimizes the cost function: minimize D = Σ_{(i,j)} Dij(Fij) s/t: Σ_{p∈Pw} xp = rw for every pair w, and xp ≥ 0 for all p
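A minimal sketch of evaluating this cost function for the four-node arc-path example; the capacities Cij, delays dij, and the path-link incidence matrix are assumed illustrative values, and the link cost is the M/M/1-style Dij given above.

```python
import numpy as np

# Path-link incidence for paths 1-2-4, 1-2-3-4, 1-3-4 (rows = links).
A = np.array([[1, 1, 0],   # 1-2
              [0, 0, 1],   # 1-3
              [0, 1, 0],   # 2-3
              [1, 0, 0],   # 2-4
              [0, 1, 1]])  # 3-4
C = np.array([20.0, 20, 20, 20, 20])  # assumed link capacities C_ij
d = np.array([1e-3] * 5)              # assumed per-link delays d_ij

def cost(x):
    """Total cost D(x) = sum_ij F_ij/(C_ij - F_ij) + d_ij * F_ij,
    where the link flows F = A @ x are induced by the path flows x."""
    F = A @ x
    assert np.all(F < C), "flows must stay inside link capacities"
    return np.sum(F / (C - F) + d * F)

print(cost(np.array([5.0, 0.0, 5.0])))  # split demand r=10 over two paths
```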

  13. Properties of Optimal Routing (1) • We mainly discuss how to express optimal routing through the first derivative of the cost function. Assume Dij is a differentiable function of Fij, defined on [0, Cij). • Let x be the vector of all path flows; the cost function is then D(x) = Σ_{(i,j)} Dij(Fij). Taking the partial derivative with respect to xp gives: ∂D/∂xp = Σ_{(i,j) on path p} D′ij(Fij) If D′ij is taken as the length of link (i,j), then this derivative is the sum of the link lengths along path p and can be viewed as the length of path p. It is called the first differential length of path p.

  14. Properties of Optimal Routing (2) • Let x* = {x*p} be the optimal flow vector, i.e., the one minimizing the cost function. • If some path p carries flow x*p > 0, then moving a small amount of flow from p to another path p′ of the same SD pair must not lower the cost, i.e.: ∂D(x*)/∂xp′ ≥ ∂D(x*)/∂xp whenever x*p > 0 This is a necessary condition for the optimality of x*: an optimal flow is positive only on paths of minimum first differential length. Moreover, at the optimum, if the input traffic of an SD pair is split over several paths, these paths must have equal length. If Dij is convex, the condition above is also sufficient for the optimality of x*.

  15. Feasible Directions for Optimal Routing (1) • From the optimality property we know that a flow is optimal only when the input traffic travels along minimum first differential length (MFDL) paths. That is, if a given flow is not optimal, some of it must travel over non-MFDL paths; moving part of the flow from non-MFDL paths onto MFDL paths changes the performance and lowers the cost function. • Let x = {xp} be a feasible solution. Change x along a direction Δx, i.e., x ← x + αΔx, so that D(x + αΔx) < D(x). Two questions arise: • What condition must the direction Δx satisfy? • How should the step size α be chosen?

  16. Feasible Directions for Optimal Routing (2) • Feasibility of the direction Δx: a feasible direction is one for which a small move from x along Δx yields a new vector that still satisfies the constraints. Before the adjustment: Σ_{p∈Pw} xp = rw. After the small adjustment: Σ_{p∈Pw} (xp + αΔxp) = rw. Comparing the two gives: Σ_{p∈Pw} Δxp = 0 for every pair w, and for every path with xp = 0 we must have Δxp ≥ 0

  17. Feasible Directions for Optimal Routing (3) • Descent along Δx: as x moves to x + αΔx, the cost must decrease, i.e., ∇D(x)ᵀΔx < 0 • Descent iteration: at the best point found along the search direction, the gradient is orthogonal to that direction: ∇D(x + α*Δx)ᵀΔx = 0 The left-hand side is in fact the first derivative of G(α) = D(x + αΔx) at α = α*

  18. Feasible Directions for Optimal Routing (4) • Common algorithms satisfying the conditions above require Δxp to satisfy: 1. For the shortest (MFDL) path p̄ of each pair: Δxp̄ = −Σ_{p≠p̄} Δxp 2. For every non-shortest path p: Δxp ≤ 0 3. At least one Δxp < 0; otherwise the iteration terminates.

  19. Solving the Optimization Problem: Optimization Algorithms • Single Variable Problem • Multi-variate unconstrained minimization problem • Multi-variate constrained optimization problem • Optimality Condition • Frank-Wolfe (Flow Deviation) Algorithm

  20. Single Variable Problem • Problem: an optimization problem with a single continuous variable, i.e., x ∈ ℝ • Objective: build a framework, starting from the single-variable case, for the multi-variate problems that follow

  21. Convex Function • Definition (convex function): a function f(x), x ∈ ℝ, is said to be convex if for any x, y ∈ ℝ the following condition is satisfied: f(λx + (1−λ)y) ≤ λf(x) + (1−λ)f(y), for all λ ∈ [0,1] [Figure: the chord value λf(x) + (1−λ)f(y) lies above the graph of f at the point λx + (1−λ)y between x and y]

  22. Newton Method (1) • Objective: • Give an algorithmic solution for instances in which f(x) or f′(x) is not "easy" to solve in closed form • Ideas: • Assume f is twice differentiable • If x is an optimum, then f′(x) = 0 • Linearize the left-hand side around a point xk and set it to 0: f′(xk) + f″(xk)(x − xk) = 0 Rearranging: x = xk − f′(xk)/f″(xk); let xk+1 = x

  23. Newton Method (2) Step 0: Start with x0, set k = 0, choose a tolerance ε > 0 and a maximum iteration count Kmax Step 1: Compute xk+1 = xk − f′(xk)/f″(xk) Step 2: If |xk+1 − xk| < ε or k ≥ Kmax, stop; else set k ← k+1 and go to Step 1 Note: f″(x) ≠ 0 is required, and the function f(.) has to be twice differentiable
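These steps translate directly into a short Python routine; the function names and the quadratic test function are illustrative:

```python
def newton_1d(fp, fpp, x0, eps=1e-8, kmax=100):
    """Minimize a twice-differentiable f given its derivatives fp = f'
    and fpp = f''; iterate x_{k+1} = x_k - f'(x_k)/f''(x_k)."""
    x = x0
    for k in range(kmax):
        x_new = x - fp(x) / fpp(x)   # requires f''(x) != 0
        if abs(x_new - x) < eps:
            return x_new
        x = x_new
    return x

# Example: f(x) = x^2 - 4x, so f'(x) = 2x - 4, f''(x) = 2; minimum at x = 2.
print(newton_1d(lambda x: 2*x - 4, lambda x: 2.0, x0=10.0))
```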

  24. Golden Section Method (1) • Objective: minimize a unimodal function f on a given interval [a, b] • The problem is: minimize f(x) subject to x ∈ [a, b] • T = 0.61803399 (the golden ratio constant) [Figure: interval with points a = x0 < x1 < x2 < x3 = b and function values f0, f1, f2, f3]

  25. Golden Section Method (2)
  T = 0.61803399; choose ε > 0 as the stopping tolerance
  x0 = a; f0 = f(x0)
  x3 = b; f3 = f(x3)
  x1 = x0 + (1−T)(x3 − x0); f1 = f(x1)
  x2 = x0 + T(x3 − x0); f2 = f(x2)
  while |x3 − x0| > ε(|x1| + |x2|) do
      if f1 > f2 then            /* x3 remains the same */
          x0 = x1; f0 = f1
          x1 = x2; f1 = f2
          x2 = x0 + T(x3 − x0); f2 = f(x2)
      else                       /* x0 remains the same */
          x3 = x2; f3 = f2
          x2 = x1; f2 = f1
          x1 = x0 + (1−T)(x3 − x0); f1 = f(x1)
      endif
  endwhile
  if f1 < f2 then
      minvalue = f1; xmin = x1
  else
      minvalue = f2; xmin = x2
  endif
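The same procedure as runnable Python (the test function and interval are illustrative):

```python
def golden_section(f, a, b, eps=1e-8):
    """Minimize a unimodal f on [a, b] by golden-section search."""
    T = 0.61803399  # golden ratio constant from the slide
    x0, x3 = a, b
    x1 = x0 + (1 - T) * (x3 - x0); f1 = f(x1)
    x2 = x0 + T * (x3 - x0);       f2 = f(x2)
    while abs(x3 - x0) > eps * (abs(x1) + abs(x2)):
        if f1 > f2:              # minimum lies in [x1, x3]; drop x0
            x0, x1, f1 = x1, x2, f2
            x2 = x0 + T * (x3 - x0); f2 = f(x2)
        else:                    # minimum lies in [x0, x2]; drop x3
            x3, x2, f2 = x2, x1, f1
            x1 = x0 + (1 - T) * (x3 - x0); f1 = f(x1)
    return (x1, f1) if f1 < f2 else (x2, f2)

# Example: minimum of (x - 3)^2 on [0, 10] is at x = 3.
print(golden_section(lambda x: (x - 3) ** 2, 0.0, 10.0))
```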

  26. Multi-variate unconstrained minimization problem • Problem: an optimization problem with multiple continuous variables, i.e., x ∈ ℝⁿ • The general representation of the n-dimensional unconstrained optimization problem is: minimize f(x), x ∈ ℝⁿ

  27. Necessary and Sufficient Optimality Condition • A necessary and sufficient condition for x* ∈ ℝⁿ to be an optimal solution is: ∇f(x*) = 0 and, for all y ∈ ℝⁿ, yᵀ∇²f(x*)y ≥ 0 • A positive semi-definite matrix M satisfies the condition yᵀMy ≥ 0; a positive definite matrix M satisfies the stronger condition yᵀMy > 0

  28. Newton method for multi-variate optimization problem • Step 0: Start with x1, set k = 1, choose tolerances ε1, ε2 > 0 and a maximum iteration count Kmax • Step 1: Compute xk+1 = xk − [∇²f(xk)]⁻¹∇f(xk) • Step 2: If ||xk+1 − xk|| < ε1 or ||∇f(xk+1)|| < ε2 or k ≥ Kmax, stop; else set k ← k+1 and go to Step 1
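A minimal Python sketch of these steps; it solves the Newton linear system rather than inverting the Hessian, and the quadratic test function is illustrative:

```python
import numpy as np

def newton_nd(grad, hess, x0, eps1=1e-8, eps2=1e-8, kmax=100):
    """Multivariate Newton: x_{k+1} = x_k - [Hessian f(x_k)]^{-1} grad f(x_k)."""
    x = np.asarray(x0, dtype=float)
    for k in range(kmax):
        step = np.linalg.solve(hess(x), grad(x))  # solve instead of inverting
        x_new = x - step
        if np.linalg.norm(x_new - x) < eps1 or np.linalg.norm(grad(x_new)) < eps2:
            return x_new
        x = x_new
    return x

# Example: f(x) = (x1 - 1)^2 + 2*(x2 + 2)^2, minimum at (1, -2).
grad = lambda x: np.array([2 * (x[0] - 1), 4 * (x[1] + 2)])
hess = lambda x: np.array([[2.0, 0.0], [0.0, 4.0]])
print(newton_nd(grad, hess, [0.0, 0.0]))
```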

  29. Multi-variate constrained optimization problem • A constrained minimization problem is: minimize f(x) subject to: inequality constraints gj(x) ≤ 0, j = 1,…,m equality constraints hi(x) = 0, i = 1,…,p

  30. Multi-variate constrained optimization problem (Cont'd) • If the objective function and the constraints are all LINEAR, we have a linear programming (LP), or linear optimization, problem: minimize cᵀx subject to: Bx ≤ b, x ≥ 0

  31. Optimality Condition • The Lagrangian for the general non-linear programming (NLP) problem: L(x, u, v) = f(x) + Σj uj gj(x) + Σi vi hi(x) = f(x) + uᵀg(x) + vᵀh(x) • In the second line u denotes the vector u = (u1,…,um), g(x) denotes the vector g(x) = (g1(x),…,gm(x)), and similarly for v and h(x)

  32. Optimality Condition (Cont'd) • The optimality condition for the NLP is: there exist u* ≥ 0 and v* (note: vj is unrestricted in sign) such that: ∇f(x*) + Σj u*j ∇gj(x*) + Σi v*i ∇hi(x*) = 0 u*j gj(x*) = 0, j = 1,…,m gj(x*) ≤ 0, hi(x*) = 0

  33. Optimality Condition (Cont'd) • If there are only inequality constraints gj(x) ≤ 0 and NO equality constraints hi(x) = 0: there exists u* ≥ 0 such that: ∇f(x*) + Σj u*j ∇gj(x*) = 0 u*j gj(x*) = 0, gj(x*) ≤ 0

  34. Optimality Condition (Cont'd) • For the following problem: minimize f(x) s/t Ax = b the Lagrangian is: L(x, v) = f(x) + vᵀ(Ax − b) and the optimality condition is: ∇f(x*) + Aᵀv* = 0, Ax* = b
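As a tiny worked instance of this condition, the sketch below solves min ½‖x‖² subject to Ax = b by assembling the optimality system x + Aᵀv = 0, Ax = b into one linear solve; the data A and b are made-up values:

```python
import numpy as np

# Solve min (1/2)||x||^2  s.t.  Ax = b via the optimality condition:
# grad f(x) + A^T v = x + A^T v = 0  and  Ax = b, stacked as one linear system.
A = np.array([[1.0, 1.0, 1.0]])   # assumed constraint data
b = np.array([3.0])

n, m = A.shape[1], A.shape[0]
K = np.block([[np.eye(n), A.T],
              [A, np.zeros((m, m))]])
sol = np.linalg.solve(K, np.concatenate([np.zeros(n), b]))
x_star, v_star = sol[:n], sol[n:]
print("x* =", x_star, "v* =", v_star)   # x* = [1, 1, 1]
```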

  35. Frank-Wolfe (Flow Deviation) Algorithm • Problem: minimize f(x) s/t: Ax = b, x ≥ 0 where the objective function is non-linear and assumed to be convex, and the constraint set is linear

  36. Frank-Wolfe Algorithm (Cont'd) • Finding a direction that SATISFIES the constraints: consider a direction d ∈ ℝⁿ such that Ad = 0. Then, if x̄ is a feasible point, Ax̄ = b, the point x̄ + αd is also feasible: A(x̄ + αd) = Ax̄ + αAd = b

  37. Frank-Wolfe Algorithm (Cont'd) • Suppose the current point is xk; the linearized subproblem is: minimize ∇f(xk)ᵀy s/t: Ay = b, y ≥ 0 Here ∇f(xk) is the gradient of the function f(x) evaluated at xk.

  38. Frank-Wolfe Algorithm (Cont'd) • Suppose yk is the optimal solution of the LP; then set dk = yk − xk. We observe that: Adk = A(yk − xk) = b − b = 0 which satisfies the requirement on the direction we imposed above • Suppose the next iterate is xk+1 = xk + αk dk with step size αk ∈ [0,1]; since xk+1 = (1−αk)xk + αk yk is a convex combination of two feasible points, it remains feasible

  39. Frank-Wolfe Algorithm (Cont'd) Step 1: Start with a feasible point x1. Set k = 1. Choose a tolerance ε > 0 and a maximum iteration count Kmax Step 2: Solve the linearized subproblem: minimize ∇f(xk)ᵀy s/t Ay = b, y ≥ 0 to obtain the solution yk

  40. Frank-Wolfe Algorithm (Cont'd) Step 3: Set dk = yk − xk Step 4: Solve the line search problem: minimize f(xk + α dk) over α ∈ [0,1] to find the step size αk. Set xk+1 = xk + αk dk Step 5: Check whether the bound is 'small', i.e. (UB − LB)/(1 + |UB|) < ε, or k ≥ Kmax; then stop. Else set k = k+1 and go to Step 2.
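Putting Steps 1-5 together for the routing problem: a minimal Frank-Wolfe (flow deviation) sketch on the four-node arc-path example. The numerical data repeat the illustrative values used earlier; over a single-pair simplex constraint the linearized subproblem of Step 2 reduces to sending all demand down the minimum first-derivative-length (MFDL) path, and the line search of Step 4 is done by simple sampling to keep the sketch short.

```python
import numpy as np

# min D(x) s.t. x1 + x2 + x3 = r, x >= 0, on the four-node arc-path example.
A = np.array([[1, 1, 0], [0, 0, 1], [0, 1, 0], [1, 0, 0], [0, 1, 1]], float)
C = np.full(5, 20.0); d = np.full(5, 1e-3); r = 10.0   # illustrative data

def cost(x):
    F = A @ x
    return np.sum(F / (C - F) + d * F)

def grad(x):
    F = A @ x
    # dD/dx_p = sum over links on path p of D'_ij = C/(C-F)^2 + d
    return A.T @ (C / (C - F) ** 2 + d)

x = np.array([r, 0.0, 0.0])          # feasible starting flow, all on path 1
for k in range(100):
    # Step 2: linearized subproblem over the simplex = all of r on MFDL path.
    y = np.zeros(3); y[np.argmin(grad(x))] = r
    dk = y - x                        # Step 3
    gap = -grad(x) @ dk               # duality-gap bound UB - LB
    if gap / (1 + abs(cost(x))) < 1e-9:   # Step 5 stopping test
        break
    # Step 4: line search over [0, 1] by coarse sampling.
    alphas = np.linspace(0.0, 1.0, 1001)
    vals = [cost(x + a * dk) for a in alphas]
    x = x + alphas[int(np.argmin(vals))] * dk
print("optimal path flows:", x, "cost:", cost(x))
```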

  41. Frank-Wolfe Algorithm (Cont'd) Note: if the constraint set Ax = b is replaced by Ax ≤ b, x ≥ 0, the algorithmic approach remains the same
