
Buffer Management for Shared-Memory ATM Switches



Presentation Transcript


  1. Buffer Management for Shared-Memory ATM Switches Written by: Mutlu Arpaci, John A. Copeland, Georgia Institute of Technology Presented by: Yan Huang

  2. Outline • Describe several buffer management policies and their strengths and weaknesses • Evaluate the performance of the various policies using computer simulations • Compare the most important schemes

  3. Some basic definitions • The prime purpose of an ATM switch is to route incoming cells arriving on a particular input link to the correct output link (switching) • Three basic techniques are used • space-division: crossbar switch • shared-medium: based on a common high-speed bus • shared-memory • Switches also provide queuing • Input queuing, output queuing, shared memory

  4. Shared-Memory Switch • Consists of a single dual-ported memory shared by all input and output lines • Combines both switching and queuing • Does not suffer from the throughput degradation caused by head-of-line (HOL) blocking • The main focus is buffer allocation • determines how the total buffer space (memory) will be used by the individual output ports of the switch (Cont’d)

  5. Shared-Memory Switch (Cont’d) • The selection and implementation of the buffer allocation policy is referred to as buffer management • Model of the SM switch • N output ports • M buffer spaces • Performance • cell loss: occurs when a cell arrives at a switch node and finds the buffer full

  6. Buffer Allocation Policies • Stochastic assumptions • Poisson arrivals • Exponential service times • Static Thresholds • Complete Partition (CP) • The entire buffer space is permanently partitioned among the N servers • Does not provide any sharing • Complete Sharing (CS) • An arriving packet is accepted if any space is available in the switch memory • Independent of the server to which the packet is directed
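As a sketch (not from the slides), the two static admission rules can be written as small predicates; the function names and the flat-list queue representation are illustrative assumptions:

```python
# Sketch of the two static policies for a switch with N ports and an
# M-cell shared memory. `queues[i]` is the current length of port i's queue.

def cp_admit(queues, port, M, N):
    """Complete Partition: accept only if the port's own fixed slice has room."""
    return queues[port] < M // N

def cs_admit(queues, port, M):
    """Complete Sharing: accept if any space is left anywhere in the memory."""
    return sum(queues) < M
```

With the paper's simulation sizes (N = 2, M = 300), a port that fills its 150-cell slice is blocked under CP even while the other slice sits empty, whereas CS keeps accepting until the whole memory is full.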

  7. Comparison of CP and CS • CP policy • the buffer allocated to a port is wasted if that port is inactive, since it cannot be used by other, possibly active, lines • CS policy • one of the ports may monopolize most of the storage space if it is highly utilized • In the CS policy, a packet is lost when the common memory is full. In CP, a packet is lost when its corresponding queue has already reached its maximum allocation. • The assumptions on the traffic arrival process enable us to model the switch as a Markov process (Fig 3)

  8. Simulation • The assumption of exponentially distributed interarrival and service times is not realistic for ATM systems • The traffic in ATM networks is bursty in nature • To model it, use an ON/OFF source • Simulation parameters • mean duration of ON state = 240 • mean duration of OFF state = 720 • cell interarrival time = 5 • Switch model has two output ports (N = 2) • The size of the shared memory is 300 cells (M = 300) • Performance metric is the cell loss ratio (CLR) at the port
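A minimal sketch of the ON/OFF source with the slide's parameters; the exponential ON/OFF durations are an assumption (the slide gives only the means), and the function name is hypothetical:

```python
import random

def on_off_source(mean_on=240, mean_off=720, interarrival=5,
                  horizon=100_000, seed=1):
    """Generate cell arrival times from a two-state ON/OFF source.
    ON and OFF durations are drawn from exponential distributions with
    the given means; during an ON period a cell is emitted every
    `interarrival` time units, and OFF periods emit nothing."""
    rng = random.Random(seed)
    t, arrivals = 0.0, []
    while t < horizon:
        on_end = t + rng.expovariate(1 / mean_on)   # burst of back-to-back cells
        while t < min(on_end, horizon):
            arrivals.append(t)
            t += interarrival
        t = on_end + rng.expovariate(1 / mean_off)  # silent period
    return arrivals
```

Feeding such arrival streams into the admission predicates for each policy, and counting rejected cells over total cells, gives the CLR curves the slides compare.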

  9. Performance of CS and CP • Balanced traffic: the loads at the ports are equal • For medium traffic loads, CS achieves a lower CLR

  10. Performance of CS and CP (cont’d) • Imbalanced traffic • The load at one port is varied while it remains constant at the other port • CS: both ports have the same CLR • CP: the port buffers are isolated. The CLR at port 1 increases with its traffic load

  11. Sharing with Maximum Queue Length • SMXQ: a limit is imposed on the number of buffers that can be allocated at any time to any server • There is one global threshold for all the queues • The advantage of SMXQ: it achieves a lower CLR than CP and manages to isolate the “good” port from the “bad” port. The better CLR performance is obtained through buffer sharing; the isolation is obtained by restricting the queue length.
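SMXQ's admission rule combines the CS check with the single global per-queue cap; a sketch (hypothetical function name):

```python
def smxq_admit(queues, port, M, threshold):
    """SMXQ: accept only if the shared memory has room AND the target
    port's queue is below the one global threshold."""
    return sum(queues) < M and queues[port] < threshold
```

Setting `threshold = M` reduces SMXQ to CS, and `threshold = M // N` makes it at least as restrictive as CP, so the threshold interpolates between the two static extremes.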

  12. SMA and SMQMA • Two variations of SMXQ • SMA (sharing with a minimum allocation) • A minimum number of buffers is always reserved for each port • SMQMA (sharing with a maximum queue and minimum allocation) • each port always has access to a minimum allocated space, but no port can have an arbitrarily long queue • SMQMA has the following advantage over SMXQ • A minimum space is allocated for each port, which simplifies serving high-priority traffic in a buffer-sharing environment
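One way to sketch the two variations, assuming each port's reservation is r buffers and the remaining M − N·r buffers form a shared pool (names and pool accounting are illustrative assumptions, not from the slides):

```python
def sma_admit(queues, port, M, r):
    """SMA: each port always has r reserved buffers; everything above a
    port's reservation is drawn from the shared pool of M - N*r buffers."""
    shared_used = sum(max(q - r, 0) for q in queues)
    return queues[port] < r or shared_used < M - r * len(queues)

def smqma_admit(queues, port, M, r, threshold):
    """SMQMA: SMA's per-port reservation plus SMXQ's cap on queue length."""
    return queues[port] < threshold and sma_admit(queues, port, M, r)
</imports>```

The reservation guarantees a lightly loaded (e.g. high-priority) port can always accept up to r cells, no matter how greedy the other ports are, while the threshold keeps any one queue from growing without bound.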

  13. Push-Out • Push-out (PO): drop-on-demand (DoD) • When the buffer is full, a previously accepted cell can be dropped from the longest queue in the switch to make room for the new arrival • Advantages • Fair • Efficient • Naturally adaptive • Achieves a lower CLR than the optimal SMXQ setting • Drawback • Difficult to implement
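A sketch of the push-out rule, mutating the queue-length list in place (the function name and the choice to treat a push-out from the arriver's own queue as a plain rejection are simplifying assumptions):

```python
def po_admit(queues, port, M):
    """Push-out / drop-on-demand: if the memory is full, drop one cell
    from the longest queue to admit the new arrival. Mutates `queues`
    and returns True if the arriving cell is stored."""
    if sum(queues) < M:
        queues[port] += 1
        return True
    longest = max(range(len(queues)), key=lambda i: queues[i])
    if longest == port:
        return False              # own queue is longest: net effect is a drop
    queues[longest] -= 1          # push out a cell from the longest queue
    queues[port] += 1
    return True
```

The policy is adaptive because the victim queue is chosen at drop time from the current state, with no tuned threshold; the implementation cost comes from having to locate and remove a cell from the middle of the shared memory.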

  14. Push-Out with Threshold • In ATM networks, different ports carrying different traffic types might have different priorities • A modification to PO, CSVP (complete sharing with virtual partition), achieves priorities among ports. A similar idea is called POT (push-out with threshold) • CSVP has the following attributes • N users share the total available buffer space M, which is virtually partitioned into N segments corresponding to the N ports • When the buffer is full, there are two possibilities: • If the arriving cell’s type, i, occupies less space than its allocation Ki, then at least one other type must be occupying more than its own allocation, say Kj. The admission policy will admit the newly arriving type-i cell by pushing out a type-j cell. • If the arriving cell’s queue exceeds its allocation at the time of arrival, the cell will be rejected • When the buffer is not full • CSVP operates as CS • Under heavy traffic loads, the system tends toward CP management
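The two full-buffer cases above can be sketched directly; `K` is the list of virtual allocations Ki (function name and the rule for picking the over-allocated victim are illustrative assumptions):

```python
def csvp_admit(queues, port, K, M):
    """CSVP: memory virtually partitioned into allocations K[i].
    Not full -> behave as CS. Full -> a type-i arrival under its
    allocation pushes out a cell of some over-allocated type j;
    an arrival already at/over K[i] is rejected. Mutates `queues`."""
    if sum(queues) < M:
        queues[port] += 1                 # buffer not full: pure CS
        return True
    if queues[port] >= K[port]:
        return False                      # arriver is over its allocation
    # Since the memory is full and port i is under K[i], some type j
    # must be over K[j]; push out one of its cells.
    j = max(range(len(queues)), key=lambda i: queues[i] - K[i])
    queues[j] -= 1
    queues[port] += 1
    return True
```

This shows why the slide says CSVP drifts toward CP under heavy load: when every queue presses against its Ki, push-outs stop succeeding and each port is effectively confined to its virtual partition.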

  15. Dynamic Policies • The analyses of the buffer allocation problem above assume static environments • Dynamic Threshold (DT) can be used to adapt to changes in traffic conditions • The queue length threshold of each port is proportional to the current amount of unused buffering in the switch: T(t) = α(M − Q(t)) • Cell arrivals for an output port are blocked whenever the output port’s queue length equals or exceeds the current threshold value • The major advantage of DT is its robustness to traffic load changes, a feature not present in the static threshold (ST) policies
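The DT rule is just the SMXQ check with the fixed threshold replaced by T(t) = α(M − Q(t)), recomputed from the current total occupancy Q(t); a sketch (hypothetical function name):

```python
def dt_admit(queues, port, M, alpha):
    """Dynamic Threshold: block the arrival if the port's queue length
    equals or exceeds T(t) = alpha * (M - Q(t)), where Q(t) is the
    current total buffer occupancy."""
    Q = sum(queues)
    T = alpha * (M - Q)
    return Q < M and queues[port] < T
```

As the memory fills, T(t) shrinks automatically, so DT tightens its own cap under congestion and relaxes it when the buffer drains; no per-load retuning of a static threshold is needed.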

  16. Comparison

  17. The End
