
The Design of QoS Provisioning Mechanisms in WDM Ring Networks




  1. The Design of QoS Provisioning Mechanisms in WDM Ring Networks Presenter: Ching-Lung Tseng Adviser: Ho-Ting Wu Date: 2004/07/26 Department of Computer Science and Information Engineering, National Taipei University of Technology

  2. Outline • WDM Slotted Ring Network Architecture • Two Types of Inherited Fairness Problems • Intra-channel / Global fairness problem • Inter-channel / Local fairness problem • Proposed QoS provisioning mechanism (MI-FECCA) to support synchronous and asynchronous traffic • Proposed RGC algorithm to resolve the receiver collision problem • Conclusions

  3. WDM Slotted Ring Networks Architecture

  4. Slotted Ring (figure: a slotted ring carrying time slots, each with a header and a payload, on wavelengths λ1 and λ2)

  5. FT-TR WDM Ring Network (1) • Each node consists of a fixed transmitter and a tunable receiver, which allow it to transmit on a unique, specific wavelength and to receive data on any wavelength. • Each node has one FIFO buffer in which packets can be stored.

  6. FT-TR WDM Ring Network (2) (figure: an FT-TR full-duplex time-slotted WDM metro ring of four nodes, each with a fixed transmitter and a tunable receiver; λ1 and λ2 channels run on Ring A and Ring B for bidirectional transfer)

  7. Two Types of Inherited Fairness Problems

  8. Global Fairness Problem (figure: an eight-node ring, Node 0 through Node 7; node 2 uses the λ1 channel to transmit to node 7 while node 0 also uses the λ1 channel to transmit to node 7, illustrating transmission unfairness in the FT-TR WDM metro ring)

  9. Local Fairness Problem (figure: node 1 uses the λ2 channel and node 0 uses the λ1 channel, both transmitting to node 7, causing a receiver collision at node 7; the receiver collision problem in the FT-TR WDM metro ring)

  10. Global Fairness Algorithm • Cycle-based control scheme. • Each node is allowed to transmit a maximal number of packets within each cycle. • The duration of a cycle depends on the particular algorithm.

  11. M-ATMR Algorithm • Multi Asynchronous Transfer Mode Ring (M-ATMR). • A cycle-based control scheme. • The forward ring A is used for control purposes (data transmission is assumed to be on Ring A as well). • Each node is allowed to transmit a maximal number of packets within each fairness cycle. • This maximal number is called Wins. • A special busy address field is inserted in the header of each slot.

  12. ATMR Control Mechanism (1) • The M-ATMR access algorithm is independently applied to each wavelength of both rings. • Each node has two states: • An active node: • Has not used up its transmission quota. • Has packets stored in the queue. • An inactive node: • Has used up its transmission quota or has no packets stored in the queue.

  13. ATMR Control Mechanism (2) • Access algorithm: • Active node: • Waits for an idle slot to transmit. • Overwrites its own address into the busy address field of every incoming slot. • When a node has transmitted Wins packets or has no packets left to transmit, it becomes an inactive node. • Inactive node: • Stops writing its own address into the busy address field. • When an inactive node finds its own address in the busy address field of an incoming slot, it knows that all nodes have completed their packet transmissions. • It then issues a “Reset” signal to start a new fairness cycle. • The Reset signal circulates around the ring network to notify the other nodes that a new fairness cycle has been established.
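
A minimal sketch of the per-slot behaviour described above, in Python. The names (Slot, AtmrNode, WINS, on_slot) and the example quota value are illustrative assumptions, not the authors' implementation.

```python
from collections import deque
from dataclasses import dataclass

WINS = 8  # per-cycle transmission quota (an assumed example value)

@dataclass
class Slot:
    busy: bool = False        # B: slot carries a payload
    busy_address: int = -1    # BA: busy address field
    payload: object = None

class AtmrNode:
    def __init__(self, address: int):
        self.address = address
        self.queue = deque()  # FIFO transmit buffer
        self.sent = 0         # packets transmitted in the current fairness cycle

    def is_active(self) -> bool:
        # Active: quota not used up and packets waiting in the queue.
        return self.sent < WINS and bool(self.queue)

    def on_slot(self, slot: Slot) -> Slot:
        """Handle one incoming slot and forward it on the ring."""
        if self.is_active():
            if not slot.busy:                     # wait for an idle slot
                slot.busy = True
                slot.payload = self.queue.popleft()
                self.sent += 1
            slot.busy_address = self.address      # overwrite BA on every incoming slot
        elif slot.busy_address == self.address:
            # An inactive node seeing its own address back means all nodes
            # have finished; issue a Reset to start a new fairness cycle.
            self.start_new_cycle()
        return slot

    def start_new_cycle(self):
        # The Reset signal circulates the ring; here we only clear our own counter.
        self.sent = 0
```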

  14. ATMR Control Mechanism (3) (figure: a reset cycle as observed by an AU)

  15. M-FECCA Algorithm • Multi Fair and Efficient Cyclic Control Algorithm (M-FECCA). • An extended version of M-ATMR. • A cycle-based control scheme. • Uses different ring directions to transmit data and control messages: when data are transmitted on Ring A, the control messages are transmitted on Ring B. • A busy address field is included in the header of each time slot.

  16. FECCA Control Mechanism • The M-FECCA access algorithm is independently applied to each wavelength of both rings. • Each node has two states: • An active node, defined as in ATMR. • An inactive node, defined as in ATMR. • Each upstream node (on Ring A) can observe the ongoing activity status of its downstream node (on Ring A). • An inactive upstream node may still be allowed to transmit extra packets.
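
The slides do not spell out exactly when an inactive node may use extra slots, so the condition below is only a guess at one plausible rule: an inactive node uses an idle data slot only while the control information arriving on Ring B suggests no downstream node is still consuming its quota. Every name here is a hypothetical illustration, not the authors' code.

```python
def may_send_extra(is_active: bool, has_packets: bool,
                   observed_busy_address: int, my_address: int) -> bool:
    """Hypothetical rule for extra transmissions by an inactive FECCA node.

    observed_busy_address is the BA value read from the control slot on
    Ring B; -1 means no node has stamped it.
    """
    downstream_still_active = observed_busy_address not in (-1, my_address)
    return (not is_active) and has_packets and (not downstream_still_active)
```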

  17. (figure: M-FECCA access control on an eight-node ring, with the λ1 data channel on Ring A and the λ1 control channel on Ring B; an inactive node is denoted by the gray color)

  18. Proposed QoS Provisioning Mechanism - MI-FECCA

  19. MI-FECCA Algorithm (1) • Multi Integrated Fair and Efficient Cyclic Control Algorithm (MI-FECCA). • Integration of synchronous and asynchronous traffic. • Based upon the M-FECCA algorithm. • A frame approach is employed: • The duration of a single frame is fixed. • A frame is a fixed number of contiguous time slots. • Each frame consists of a “synchronous subframe” followed by an “asynchronous subframe”.

  20. MI-FECCA Algorithm (2) • A master station can be used to generate frames periodically. (figure: the frame structure as observed by a node; each frame on λ1 consists of a synchronous subframe followed by an asynchronous subframe)

  21. Slot Format • One slot consists of a header (fields B, D, BA) and an information field. • B (1 bit): Busy bit; 1 indicates a full slot, 0 indicates an empty slot. • D (1 bit): Data-type bit; 1 indicates the asynchronous subframe, 0 indicates the synchronous subframe. • BA (several bits): Busy Address; this field indicates the active node's address.
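
A small sketch of one way the slot header above could be represented. The 1-bit B and D fields follow the slide; the busy-address width and the pack() layout are assumptions made for illustration.

```python
from dataclasses import dataclass

@dataclass
class SlotHeader:
    b: int = 0   # Busy bit: 1 = full slot, 0 = empty slot
    d: int = 0   # Data-type bit: 1 = asynchronous subframe, 0 = synchronous subframe
    ba: int = 0  # Busy Address: address of the active node (field width is assumed)

    def pack(self, address_bits: int = 8) -> int:
        """Pack the header fields into one integer, laid out as [B][D][BA]."""
        return (self.b << (address_bits + 1)) | (self.d << address_bits) | self.ba
```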

  22. Synchronous Subframe Access Protocol (1) • Each node can prescribe a quota (Qs) of synchronous data which it is allowed to transmit in each frame. • At the start of each frame, each node starts to transmit its synchronous packets, up to its prescribed quota. • Each node has two states: • Synchronous-mode active: • Has not used up its synchronous transmission quota. • Has synchronous packets stored in the queue. • Synchronous-mode inactive: • Has used up its synchronous transmission quota or has no synchronous packets stored in the queue.

  23. Synchronous Subframe Access Protocol (2) • Access algorithm: • Synchronous-mode active: • Waits for an idle slot to transmit synchronous packets. • Performs the synchronous overwriting procedure: • Overwrites its own address into the busy address field. • Sets D = 0. • When a node has transmitted its synchronous quota or has no synchronous packets left to transmit, it becomes a synchronous-mode inactive node. • Synchronous-mode inactive: • Stops its synchronous overwriting procedure. • At this time, it is ready to transmit its asynchronous data (if any) during the remainder of the current frame (no synchronous overwriting procedure will be performed by this node).

  24. Synchronous Subframe Access Protocol (3) • When a synchronous-mode inactive node finds its own address in the busy address field of an incoming slot: • All the nodes have finished their permissible synchronous packet transmissions. • The “synchronous subframe” of the current frame is terminated, and the “asynchronous subframe” of the current frame is initiated. • The node starts performing its asynchronous overwriting procedure: • Overwrites its own address into the busy address field. • Sets D = 1.

  25. Asynchronous Subframe Access Protocol (1) • Each node is allowed to transmit at least its predefined quota (Qa) of asynchronous data in each asynchronous “cycle”. • The duration of each asynchronous cycle is dynamically determined and is usually much longer than a single frame duration. • Each node has two states: • Asynchronous-mode active, defined as in FECCA. • Asynchronous-mode inactive, defined as in FECCA.

  26. Asynchronous Subframe Access Protocol (2) • Access algorithm: • Asynchronous-mode active: • Waits for an idle slot to transmit asynchronous packets. • Performs the asynchronous overwriting procedure: • Overwrites its own address into the busy address field. • Sets D = 1. • When a node has transmitted its asynchronous quota or has no asynchronous packets left to transmit, it becomes an asynchronous-mode inactive node. • Asynchronous-mode inactive: • Stops its asynchronous overwriting procedure. • At this time, it is still ready to transmit its asynchronous data (if any) during the remainder of the current frame (no asynchronous overwriting procedure will be performed by this node).
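
A minimal sketch tying together the synchronous and asynchronous subframe rules from the last few slides. The slot layout, the quota names Qs/Qa, and the ordering of the checks are assumptions made for illustration, not the authors' implementation.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Slot:
    busy: bool = False       # B bit
    d: int = 0               # D bit: 0 = synchronous subframe, 1 = asynchronous
    busy_address: int = -1   # BA field
    payload: object = None

class MiFeccaNode:
    def __init__(self, address: int, qs: int, qa: int):
        self.address = address
        self.qs, self.qa = qs, qa        # synchronous / asynchronous quotas
        self.sync_q, self.async_q = deque(), deque()
        self.sync_sent = 0               # reset at the start of every frame
        self.async_sent = 0              # reset by the asynchronous cycle reset

    def sync_active(self) -> bool:
        return self.sync_sent < self.qs and bool(self.sync_q)

    def async_active(self) -> bool:
        return self.async_sent < self.qa and bool(self.async_q)

    def on_slot(self, slot: Slot) -> Slot:
        if self.sync_active():
            # Synchronous overwriting procedure: use idle slots, stamp BA, set D = 0.
            if not slot.busy:
                slot.busy, slot.payload = True, self.sync_q.popleft()
                self.sync_sent += 1
            slot.busy_address, slot.d = self.address, 0
        elif slot.busy_address == self.address and slot.d == 0:
            # Our own address returned while synchronous-mode inactive: all nodes
            # have finished their synchronous quota, so the asynchronous subframe
            # begins and we switch to the asynchronous overwriting procedure.
            slot.busy_address, slot.d = self.address, 1
        elif self.async_active():
            # Asynchronous overwriting procedure: use idle slots, stamp BA, set D = 1.
            if not slot.busy:
                slot.busy, slot.payload = True, self.async_q.popleft()
                self.async_sent += 1
            slot.busy_address, slot.d = self.address, 1
        return slot

    def new_frame(self):
        self.sync_sent = 0
```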

  27. Asynchronous Cycle Reset Scheme • The same as the FECCA reset scheme.

  28. Comparison of the Performance Results • MI-ATMR algorithm: • Each node transmits data across Ring A. • Each node overwrites its own address into the busy address fields across Ring A. • Nodes cannot transmit extra data after they have transmitted their Wins quota. • MI-FECCA algorithm: • Each node overwrites its own address into the busy address fields across Ring B. • Nodes may utilize the extra available slots to transmit data.

  29. Network Performance (1) • Topology: bidirectional ring • Number of stations: 208 • Slots per wavelength: 416 • Frame size: 1664 slots • Transmission bandwidth: 10 Gbps • Packet length: 1500 bytes (fixed length) • Ring length: 100 km • Propagation delay: 5 μs/km • Stations are equally spaced • Every station always has something to send

  30. Network Performance (2)

  31. Network Performance (3): network throughput and asynchronous throughput versus various synchronous quotas (average case).
      Qs                                  32       64       96       128      160        256
      MI-FECCA  Network Throughput        13.70    13.66    14.05    15.55    15.65      15.64
      MI-FECCA  Asynchronous Throughput    9.81     5.63     2.07     0.014    0.000043   0
      MI-ATMR   Network Throughput        12.46    12.67    13.09    15.51    15.60      15.61
      MI-ATMR   Asynchronous Throughput    8.46     4.54     1.12     0        0          0

  32. Network Performance (4): network throughput and asynchronous throughput versus various synchronous quotas (worst case).
      Qs                                  16       32       48       56       64       96
      MI-FECCA  Network Throughput        12.19    10.38     8.57     7.40     7.97     7.98
      MI-FECCA  Asynchronous Throughput   10.20     6.39     2.56     0.41     0        0
      MI-ATMR   Network Throughput        10.53     8.77     8.28     6.99     7.98     7.99
      MI-ATMR   Asynchronous Throughput    8.51     4.75     2.28     0        0        0

  33. Proposed RGC algorithm to resolve receiver collision problem

  34. Local Fairness Problem (figure: packets destined for node 7 from node 1 on the λ2 channel and from node 0 on the λ1 channel collide at node 7's receiver; the local fairness problem in the FT-TR WDM metro ring)

  35. RGC Algorithm (1) • Request and Grant Control algorithm (RGC). • A cycle-based control scheme. • Each node has a non-FIFO buffer in which packets can be stored. • Control messages: • REQ (Request): sent when the node is starved; sets up a congestion path. • GNT (Grant): terminates the congestion path. • SAT (Satisfied): acts like a token.

  36. RGC Algorithm (2) • Two basic modes: • Non-limited mode: • A node can transmit at any time, as long as the protocol permits it. • The Free Access (FA) state. • Limited mode: • A node can transmit only a predefined quota of data units destined for the heavy-sink node. • Triggered only when starvation occurs.

  37. RGC Algorithm (3) • Each node has two states in the limited mode: • Limited-mode active: • Has not used up its transmission quota for the heavy-sink node. • Has packets for the heavy-sink node stored in the queue. • Limited-mode inactive: • Has used up its transmission quota for the heavy-sink node or has no packets for the heavy-sink node stored in the queue.

  38. Slot Format • One slot consists of a header (fields B, BA, CA) and an information field. • B (1 bit): Busy bit; 1 indicates a full slot, 0 indicates an empty slot. • BA (several bits): Busy Address; this field indicates the active node's address. • CA (several bits): Collision Address; this field indicates the address of the node at which a receiver collision has occurred (control message + destination address).

  39. RGC Control Mechanism (1) • At initialization, all nodes are in the Free Access (FA) state. • Each node is allowed to tolerate a contiguous transmission delay up to the quota Qd. • When the transmission delay time C >= Qd, the node: • Writes the REQ message and the collision destination node address into the collision address field (CA). • Sets C = 0. • Enters the tail (T) state.
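
A minimal sketch of the starvation check described above. The delay-counter handling and all names (ControlSlot, RgcReceiverState) are assumptions made for illustration, not the authors' code.

```python
from dataclasses import dataclass

REQ = "REQ"  # request control message carried in the CA field

@dataclass
class ControlSlot:
    collision_address: object = None  # CA field: (control message, destination address)

class RgcReceiverState:
    def __init__(self, address: int, qd: int):
        self.address = address
        self.qd = qd            # tolerated contiguous transmission-delay quota
        self.c = 0              # contiguous transmission-delay counter
        self.state = "FA"       # all nodes start in the Free Access state

    def on_transmission_blocked(self):
        """Count one more slot of contiguous transmission delay."""
        self.c += 1

    def check_starvation(self, control_slot: ControlSlot):
        """When C >= Qd: write REQ plus our address into CA, reset C, enter the T state."""
        if self.c >= self.qd:
            control_slot.collision_address = (REQ, self.address)
            self.c = 0
            self.state = "T"    # tail of the congestion path
```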

  40. (figure: the RGC algorithm forwarding the control message REQ on the λ4 control channel of Ring B; several nodes, marked H, B, and T, hold packets destined for node 6 on the λ1 channel)

  41. RGC Control Mechanism (2) • Access algorithm: • Limited-mode active: • Transmits packets destined for the heavy-sink node. • When a node has transmitted its quota for the heavy-sink node or has no packets for the heavy-sink node left to transmit, it becomes a limited-mode inactive node. • Limited-mode inactive: • Transmits packets destined for any node except the heavy-sink node.
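
A minimal sketch of the limited-mode packet selection described above, using a plain Python list as the non-FIFO buffer. The names and the selection order are assumptions made for illustration, not the authors' implementation.

```python
class RgcSender:
    def __init__(self, quota: int, heavy_sink: int):
        self.quota = quota          # packets permitted for the heavy-sink node
        self.sent_to_sink = 0
        self.heavy_sink = heavy_sink
        self.buffer = []            # non-FIFO buffer of (destination, data) pairs

    def limited_mode_active(self) -> bool:
        has_sink_packet = any(dst == self.heavy_sink for dst, _ in self.buffer)
        return self.sent_to_sink < self.quota and has_sink_packet

    def pick_packet(self):
        """Select the next packet to transmit under the limited-mode rules."""
        if self.limited_mode_active():
            # Limited-mode active: transmit a packet for the heavy-sink node.
            wanted = lambda dst: dst == self.heavy_sink
        else:
            # Limited-mode inactive: transmit to any node except the heavy sink.
            wanted = lambda dst: dst != self.heavy_sink
        for i, (dst, _data) in enumerate(self.buffer):
            if wanted(dst):
                if dst == self.heavy_sink:
                    self.sent_to_sink += 1
                return self.buffer.pop(i)
        return None
```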

  42. (figure: the RGC algorithm forwarding the control message SAT on the λ4 control channel of Ring B; several nodes, marked H, B, and T, hold packets destined for node 6 on the λ1 channel)

  43. (figure: the RGC algorithm forwarding the control message GNT on the λ4 control channel of Ring B; several nodes, marked H, B, and T, hold packets destined for node 6 on the λ1 channel)

  44. Performance Results • M-FECCA combined with the RGC algorithm: • Each node's transmission channel is controlled by the M-FECCA global fairness algorithm. • Nodes on different transmission channels are additionally controlled by the RGC local fairness algorithm, when necessary. • M-FECCA combined with the RRC algorithm: • Each node's transmission channel is controlled by the M-FECCA global fairness algorithm. • Nodes on different transmission channels are additionally controlled by the RRC local fairness algorithm, when necessary.

  45. Per Node Throughput (figure: simulation of node throughput versus relative node position of light-traffic nodes on the drop channel, with window size = 256 in each cycle, in the FT-TR WDM ring network. In the RGC algorithm, Qd = 8. Asymmetric loading is assumed: nodes 8~11 are fully loaded, with 50% of their data destined for node 24 and the rest uniformly distributed; nodes 32~35 are fully loaded, with 50% of their data destined for node 48 and the rest uniformly distributed; each of the other nodes has an input rate of 0.05 packets/slot with a uniform destination distribution; 64 nodes on the ring and 256 slots on each channel. (a) M-FECCA with RGC. (b) M-FECCA with RRC.)

  46. Per Node Transmission Delay (figure: simulation of mean delay versus relative node position of light-traffic nodes on the drop channel, with window size = 256 in each cycle, in the FT-TR WDM ring network. In the RGC algorithm, Qd = 8. Asymmetric loading is assumed: nodes 8~11 are fully loaded, with 50% of their data destined for node 24 and the rest uniformly distributed; nodes 32~35 are fully loaded, with 50% of their data destined for node 48 and the rest uniformly distributed; each of the other nodes has an input rate of 0.05 packets/slot with a uniform destination distribution; 64 nodes on the ring and 256 slots on each channel. (a) M-FECCA with RGC. (b) M-FECCA with RRC.)

  47. Conclusions • The QoS access protocol, MI-FECCA: • Maintains a guaranteed bandwidth reservation for synchronous data. • Yields high asynchronous data throughput. • The local fairness access protocol, RGC: • Yields local fairness. • Decreases the delay time of data transmission.
