LBE Tests between INFN & UCL

Contents
  • Purpose/Scenario
  • Equipment/Topology
  • UDP TESTS
    • Per frame-size 1:1 proportion (2 flows)
    • Per frame-size 5:5, 7:3, 9:1 proportion
    • Visual summary of results
  • TCP TESTS (ongoing…..)
  • Conclusions
  • Future work
2 Possible test Scenarios
  • Bottleneck(s) in the core network [Activity 2 in Project GN1 (GEANT); D9.9].
  • Bottleneck(s) in the edge networks (NRENs) only:

This is supported by the idea that a network can deploy LBE incrementally, at the points where congestion is more likely to occur (currently in the NRENs only).

Objectives
  • Assess
    • BE protection under congestion due to the LBE attack, in terms of:
      • Throughput (IP and TCP layer)
      • Packet loss
      • Jitter (OWD standard deviation)
        • The OWD distribution buckets were engineered as a compromise between loss of accuracy (buckets too large) and vulnerability to network noise and clock non-synchronisation (buckets too small); see the sketch after this list
      • Reordering (mainly for BE)… ongoing
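A minimal sketch (in Python) of how the jitter figure can be recovered from OWD histogram buckets. The bucket edges and counts below are purely illustrative, not the ones used in these tests; the bucket-width trade-off mentioned above shows up as the coarseness of the midpoint approximation.

```python
import math

# Hypothetical OWD histogram: bucket edges (ms) and per-bucket packet counts.
# Wider buckets lose accuracy; narrower buckets become sensitive to network
# noise and to the clocks not being synchronised.
edges = [0.0, 0.5, 1.0, 2.0, 4.0, 8.0]   # ms (illustrative only)
counts = [120, 950, 400, 60, 10]         # packets per bucket (illustrative only)

mids = [(lo + hi) / 2 for lo, hi in zip(edges, edges[1:])]
n = sum(counts)
mean = sum(m * c for m, c in zip(mids, counts)) / n
var = sum(c * (m - mean) ** 2 for m, c in zip(mids, counts)) / n
print(f"mean OWD ~ {mean:.2f} ms, jitter (std dev) ~ {math.sqrt(var):.2f} ms")
```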
Edges Equipment

[Diagram: Cisco 7200 edge router with 1 Gbps and 100 Mb interfaces (2 x 100 Mb towards the traffic generators); the BE/LBE policy is applied at this edge router.]
  • 1 x Cisco 7200 router (1 x GigE + 3 x FastEth + 1 x Eth)
  • 1 x SmartBits 6000B chassis (with 12 x 10/100BaseT and 2 x GigE) in each site
  • Italian site: 2 dual-processor Linux boxes, kernel 2.4.9 (Red Hat 7.3)
  • UK site: 1 Supermicro super server 6002P6, dual Intel processors, 1 GB memory, Intel card

NRENs + GEANT (“Overprovisioned”…some noise to filter out)

Slide 6: End-to-end topology

[Diagram: the INFN hosts (131.154.99.0/24, 1 Gb links, SmartBits ports 1A1/1A2) reach GARR, cross GEANT and JANET, and enter the UCL edge Cisco 7200 (ge0/0 128.40.169.217/29 at 1 Gb, fe1/0 128.40.255.34/30 at 100 Mb), then Cabletron SS8600 / Smart Switch Router 2000 and a 6500 switch towards the test hosts (mbng1/pc58, MBNG2, Sunlab2f/3f). The 100 Mbps Fast Ethernet hop is the bottleneck; the policy applied there is 99% BE / 1% LBE.]
Slide 8: Test Layout
  • 1 BE flow at 100 Mbps + 1 LBE flow at 100 Mbps
  • Frame sizes set: 64, 128, 256, 384, 512, 1180, 1500, 1518 bytes (theoretical ceilings per size: see the sketch below)
  • Load set: from 10 Mbps to 100 Mbps per port
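The theoretical ceilings quoted on the following slides come straight from Ethernet overhead. A minimal sketch, assuming 18 bytes of Ethernet header + FCS and 20 bytes of preamble + inter-frame gap (the same overheads used in the slide formulas):

```python
# Maximum IP-level throughput on a 100 Mbps Ethernet link for each frame size
# in the test set.
LINK_MBPS = 100

for frame in (64, 128, 256, 384, 512, 1180, 1500, 1518):
    ip_bytes = frame - 18      # IP packet carried inside the Ethernet frame
    wire_bytes = frame + 20    # frame plus preamble and inter-frame gap
    max_ip = LINK_MBPS * ip_bytes / wire_bytes
    print(f"{frame:4d} B -> {max_ip:5.1f} Mbps")

# Roughly 54.8, 74.3, 86.2, 90.6, 92.9, 96.8, 97.5 and 97.5 Mbps, matching the
# 54.7, 74.3, 86.2 and 92.x Mbps ceilings quoted on the slides below.
```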

Slide 9: 64-byte frames

  • 0 packets dropped in either the BE or the LBE output queue of the router
  • 0 packets dropped at the router input interface
  • 0 packets ignored
  • 8184514 - 4194010 = 3990504 frames lost in the network: 48.7% of the sent packets
  • 35 Mbps would be the maximum IP throughput of roughly a 40 Mbps link, certainly not of a 100 Mbps one. Clearly we cannot go above the associated input packet rate, which is
  • [35.22 Mbps] / [8 * (64 - 18)] = 95706 packets/sec


NO DIFFERENTIATION: traffic cannot arrive at the router fast enough to congest the Diff-Serv output queues.

Traffic is seriously throttled by the network: the maximum IP-level throughput for a 64-byte frame-size flow on a 100 Mbps link would be 54.7 Mbps, and we only reach 35 Mbps.

Similar routers along the path (we saw the same back-to-back) reached their maximum input rate (95706 packets/sec); see the sketch below.
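The input-rate figure above can be reproduced directly; a minimal sketch:

```python
# Router input packet rate implied by 35.22 Mbps of IP-level throughput with
# 64-byte frames (46 bytes of IP packet per frame).
ip_mbps = 35.22
ip_bytes_per_frame = 64 - 18

pps = ip_mbps * 1e6 / (8 * ip_bytes_per_frame)
print(f"{pps:,.0f} packets/sec")   # ~95,706 packets/sec, as quoted above
```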

Slide 10: 128-byte frames


  • 0 packets dropped in either the BE or the LBE output queue of the router
  • 0 packets dropped at the router input interface
  • 179884 packets ignored: 5.3% of the all-rate packets that entered the input interface (higher % for the high-rate packets)
  • 4645260 - 3380105 = 1265155 frames lost in the network: 27% of the frames sent
  • At least 74.3 Mbps IS the maximum IP throughput possible for a 128-byte frame size on a 100 Mbps link

NO DIFFERENTIATION: traffic cannot arrive at the router fast enough to congest the Diff-Serv output queues.

Traffic is partially blocked by the network; the remainder is then blocked at the input interface, where this frame size provokes a lot of ignored packets.

If the 64-byte packets had arrived at the router we would have seen both ignored packets and flushes; the same would have happened for 128-byte packets.

Slide 11: 256-byte frames

  • 0 packets dropped in either the BE or the LBE output queue of the router
  • 0 packets dropped at the router input interface
  • 48767 packets ignored: 2.6% of the all-rate packets that entered the input interface (higher % for high-rate packets)
  • 2490930 - 1813031 = 677899 frames lost in the network: 27% of the packets sent
  • 86.2 Mbps is the maximum IP throughput possible for a 256-byte frame size on a 100 Mbps link


NO DIFFERENTIATION: traffic cannot arrive at the router fast enough to congest the Diff-Serv output queues.

Traffic is partially blocked by the network; the remainder is then blocked at the input interface, where this frame size provokes ignored packets.

Slide 12: 384-byte frames

  • 0 packets dropped in the BE queue, BUT 232844 packets dropped in the LBE output queue of the router
  • 0 packets dropped at the router input interface
  • 2374 packets ignored at the input interface: 1.6% of the packets that entered the interface (higher % for high-rate packets)
  • 1701722 - 1471758 = 229964 frames lost in the network: 13.5%
  • 89.6 Mbps would be the IP throughput BE should get at 200 Mbps offered load, according to the 384-byte frame size and the policy applied (99%)

PARTIAL DIFFERENTIATION: traffic arrives at the router fast enough to congest the Diff-Serv output queues, but it is still dropped in the network and ignored at the input interface as well.

Slide 13: 512-byte frames

  • 0 packets dropped in the BE queue, BUT 339579 packets dropped in the LBE output queue of the router
  • 0 packets dropped at the router input interface
  • 0 packets ignored at the input interface
  • 1292284 - 1045468 = 246816 frames lost in the network: 19% of the sent frames
  • 91.8 Mbps would be the IP throughput BE should get at 200 Mbps offered load, according to the 512-byte frame size and the policy applied (99%); the measured value of 88.07 Mbps falls about 3.7 Mbps short of this target (see the sketch below)
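The "IP throughput BE should get" figures on slides 12-20 all follow the same rule: Ethernet efficiency for the frame size, times the 100 Mbps bottleneck, times the 99% BE share of the applied policy. A minimal sketch:

```python
# Target BE IP throughput on the 100 Mbps bottleneck under the 99%/1% policy.
def be_target_mbps(frame_bytes, link_mbps=100, be_share=0.99):
    efficiency = (frame_bytes - 18) / (frame_bytes + 20)
    return be_share * link_mbps * efficiency

for frame in (384, 512, 1180, 1500, 1518):
    print(f"{frame:4d} B -> {be_target_mbps(frame):5.2f} Mbps")

# ~89.7, 91.9, 95.9, 96.5 and 96.6 Mbps; the slides quote 89.6, 91.8, 95.8,
# 96.52 and 97.02 Mbps, the small differences coming from rounding and from
# how the 1518-byte overhead is counted.
```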
Slide 14: 1180-byte frames

  • 464 packets dropped in the BE queue! Traffic finally arrives at the router
  • 0 packets dropped at the router input interface
  • 0 packets ignored at the input interface
  • 0 frames lost in the network
  • 95.8 Mbps would be the IP throughput BE should get at 200 Mbps offered load, according to the 1180-byte frame size and the policy applied (99%) [it also equals what 100M BE alone gets]: it's OK!
Slide 15: 1500-byte frames

  • 357 packets dropped in the BE queue
  • 0 packets dropped at the router input interface
  • 0 packets ignored at the input interface
  • 0 frames lost in the network
  • 96.52 Mbps would be the IP throughput BE should get at 200 Mbps offered load, according to the 1500-byte frame size and the policy applied (99%) [it also equals what 100M BE alone gets]: it's OK!
Slide 16: 1518-byte frames

  • 351 packets dropped in the BE queue
  • 0 packets dropped at the router input interface
  • 0 packets ignored at the input interface
  • 0 frames lost in the network
  • 97.02 Mbps would be the IP throughput BE should get at 200 Mbps offered load, according to the 1518-byte frame size and the policy applied (99%) [it also equals what 100M BE alone gets]: nearly 0.3 Mbps missing!
Slide 17: Zoom 512

  • 91.8 Mbps would be the IP throughput BE should get at 200 Mbps offered load, according to the 512-byte frame size and the policy applied (99%) [it also equals what 100M BE alone gets]: nearly 4 Mbps missing
Slide 18: Zoom 1180

  • 464 packets dropped in the BE queue! Traffic finally arrives at the router
  • 0 packets dropped at the router input interface
  • 0 packets ignored at the input interface
  • 0 frames lost in the network
  • 95.8 Mbps would be the IP throughput BE should get at 200 Mbps offered load, according to the 1180-byte frame size and the policy applied (99%) [it also equals what 100M BE alone gets]: it's OK!

Slide 19: Zoom 1500

  • 357 packets dropped in the BE queue
  • 0 packets dropped at the router input interface
  • 0 packets ignored at the input interface
  • 0 frames lost in the network
  • 96.52 Mbps would be the IP throughput BE should get at 200 Mbps offered load, according to the 1500-byte frame size and the policy applied (99%) [it also equals what 100M BE alone gets]: it's OK!
Slide 20: Zoom 1518

  • 351 packets dropped in the BE queue
  • 0 packets dropped at the router input interface
  • 0 packets ignored at the input interface
  • 0 frames lost in the network
  • 97.02 Mbps would be the IP throughput BE should get at 200 Mbps offered load, according to the 1518-byte frame size and the policy applied (99%) [it also equals what 100M BE alone gets]: nearly 0.3 Mbps missing
Slide 21: Interleaving (1)

  • Interleaving frame sizes up to 512 with frame size 1500
  • 64: NO for both combinations
  • 128: NO for both combinations
  • 256: "YES" (great improvement!), but only if BE is 1500 and LBE is 256; see next slide
  • 384: PERFECT for both combinations
  • 512: PERFECT for both combinations
Slide 23: Tests with disproportion

  • 10 flows per SmartBits port (20 flows overall): X BE + Y LBE flows per port
  • Case 1:1 (again): 5 BE + 5 LBE flows per port (i.e. X=Y=5)
  • Case 7:3: 7 LBE + 3 BE flows per port (i.e. X=3; Y=7)
  • Case 9:1: 9 LBE + 1 BE flows per port (i.e. X=1; Y=9)

Slide 24: 1500 f.s. 5:5 (5 LBE + 5 BE per port sharing the bandwidth)

[5/(5+5) = 0.5] * 200 Mbps * [(1500-18)/(1500+20) = 0.975] = 97.5 Mbps (*)

(*) This cannot be achieved; 99% of 97.5 = 96.52 Mbps can be achieved. It is fine! (See the sketch below.)
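The per-proportion targets on slides 24-32 generalise the same rule to X BE + Y LBE flows per port. A minimal sketch; the min() cap models the 100 Mbps bottleneck and the 99% BE share of the policy:

```python
# Theoretical BE target at IP level for X BE + Y LBE flows per port.
def target_be_mbps(x_be, y_lbe, offered_mbps, frame_bytes, be_share=0.99):
    efficiency = (frame_bytes - 18) / (frame_bytes + 20)
    share_of_offer = (x_be / (x_be + y_lbe)) * offered_mbps * efficiency
    bottleneck_cap = be_share * efficiency * 100
    return min(share_of_offer, bottleneck_cap)

print(target_be_mbps(5, 5, 200, 1500))   # ~96.5 Mbps (5:5, 1500-byte frames)
print(target_be_mbps(3, 7, 200, 1500))   # ~58.5 Mbps (7:3)
print(target_be_mbps(1, 9, 200, 512))    # ~18.6 Mbps (9:1, 512-byte frames)
```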

Slide 25: 1500 f.s. 7:3 (7 LBE + 3 BE per port sharing the bandwidth)

A: 200 Mbps = 140 Mbps LBE + 60 Mbps BE offered load

[3/(3+7) = 0.3] * 200 Mbps * [(1500-18)/(1500+20) = 0.975] = 58.5 Mbps. It is fine (obvious: there is no loss).

Slide 26: 1500 f.s. 9:1 (9 LBE + 1 BE per port sharing the bandwidth)

200 Mbps = 180 Mbps LBE + 20 Mbps BE offered load

[1/(9+1) = 0.1] * 200 Mbps * [(1500-18)/(1500+20) = 0.975] = 19.5 Mbps. It is fine (obvious: there is no loss).

It is even better than the 7:3 case…

Slide 27: 1180 f.s. 5:5 (5 LBE + 5 BE per port sharing the bandwidth)

[5/(5+5) = 0.5] * 200 Mbps * [(1180-18)/(1180+20) = 0.968] = 96.8 Mbps (*)

(*) This cannot be achieved; 99% of 96.8 = 95.83 Mbps can be achieved. It is fine!

Slide 28: 1180 f.s. 7:3 (7 LBE + 3 BE per port sharing the bandwidth)

200 Mbps = 140 Mbps LBE + 60 Mbps BE offered load

[3/(3+7) = 0.3] * 200 Mbps * [(1180-18)/(1180+20) = 0.968] = 58.08 Mbps. It is fine (obvious: there is no loss).

Slide 29: 1180 f.s. 9:1 (9 LBE + 1 BE per port sharing the bandwidth)

200 Mbps = 180 Mbps LBE + 20 Mbps BE offered load

[1/(9+1) = 0.1] * 200 Mbps * [(1180-18)/(1180+20) = 0.968] = 19.36 Mbps. It is fine (obvious: there is no loss).

It is even better than the 7:3 case…

Slide 30: 512 f.s. 5:5 (5 LBE + 5 BE per port sharing the bandwidth)

[5/(5+5) = 0.5] * 200 Mbps = 100 Mbps BE; * [(512-18)/(512+20) = 0.928] = 92.8 Mbps (*)

(*) This cannot be achieved; 99% of 92.8 = 91.8 Mbps can be achieved.

The measured throughput is 91.8 - 89.3 = 2.5 Mbps short of the target, but this is not news: see the 512 1:1 case.

Slide 31: 512 f.s. 7:3 (7 LBE + 3 BE per port sharing the bandwidth)

A: 200 Mbps = 140 Mbps LBE + 60 Mbps BE offered load

[3/(3+7) = 0.3] * 200 Mbps = 60 Mbps BE; * [(512-18)/(512+20) = 0.928] = 55.68 Mbps. The measured throughput is 55.68 - 52.63 = 3.05 Mbps (5% loss) short of the target, due to the presence of 140 Mbps of LBE traffic; SEE THE BLUE ARROW IN SLIDE 13.

B: 190 Mbps = 133 Mbps LBE + 57 Mbps BE offered load

[3/(3+7) = 0.3] * 190 Mbps = 57 Mbps BE; * [(512-18)/(512+20) = 0.928] = 52.896 Mbps. The measured throughput is 52.896 - 52.780 = 116 kbps (0.27% loss) short, due to the presence of 133 Mbps of LBE traffic; INTERPOLATE THE VALUES IN SLIDE 13 FOR 57 Mbps BE STANDALONE.

Slide 32: 512 f.s. 9:1 (9 LBE + 1 BE per port sharing the bandwidth)

A: 200 Mbps = 180 Mbps LBE + 20 Mbps BE offered load

[1/(9+1) = 0.1] * 200 Mbps = 20 Mbps BE wire rate; * [(512-18)/(512+20) = 0.928] = 18.56 Mbps. The measured throughput is 18.56 - 17.05 = 1.51 Mbps (8% loss) short of the target, due to the presence of 180 Mbps of LBE traffic; SEE THE RED ARROW IN SLIDE 13.

B: 190 Mbps = 171 Mbps LBE + 19 Mbps BE offered load

[1/(9+1) = 0.1] * 190 Mbps = 19 Mbps BE wire rate; * [(512-18)/(512+20) = 0.928] = 17.632 Mbps. The measured throughput is 17.632 - 17.102 = 0.53 Mbps (3% loss) short, due to the presence of 171 Mbps of LBE traffic; INTERPOLATE THE VALUES IN SLIDE 13 FOR 19 Mbps BE STANDALONE (sketched below).
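Where the slides say "interpolate the values in slide 13", the standalone BE reference at the reduced offered load (19 or 57 Mbps) is read off the slide-13 curve. A minimal sketch of that step; the data points below are hypothetical placeholders, not the actual slide-13 measurements:

```python
import numpy as np

# HYPOTHETICAL (offered load, standalone BE IP throughput) points for 512-byte
# frames; substitute the measured slide-13 values here.
offered = np.array([10.0, 20.0, 40.0, 60.0, 100.0])   # Mbps
measured = np.array([9.3, 18.6, 37.1, 55.7, 88.1])    # Mbps

reference_at_19 = np.interp(19.0, offered, measured)
print(f"standalone BE reference at 19 Mbps offered: {reference_at_19:.2f} Mbps")
```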

Slide 33

The network noise for the 512-byte frame size does not affect the results significantly, because it is absent in point 1 and not dominant in point 2.
  • Slides 7 and 13 demonstrate that a 512-byte frame-size flow can reach the theoretical target throughput when the offered wire rate is 60, 57, 20, 19 and 100 Mbps respectively.
  • The network noise provokes an equal drop probability for both classes (GEANT transparency), so if it were dominant we should have seen a smaller drop % for BE when its load was smaller; however, we saw quite the reverse, that is, a higher drop % when the BE load was smaller.
Slide 35: Visual summary of results

[Chart: BE protection rated from "very very good" down to "very very bad" per frame size (1500/1180 vs 512) and per proportion (1:1, 5:5, 7:3, 9:1).]

Slide 36

As regards BE, an aggregate with fewer flows loses more, because each of its TCP flows has to adapt more. The BE aggregate gets (ratios computed in the sketch after this list):
  • 1BE+1LBE: 79.02/85.98 = 91%
  • 2BE+2LBE: 86/92.26 = 93%
  • 4BE+4LBE: 88.3/93.72 = 94%
  • 8BE+8LBE: 90.94/93.86 = 96.8%
  • 16BE+16LBE: 93.54/94.77 = 98%
  • 32BE+32LBE: 94.2/95 = 99%
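The percentages in the list are just the ratio of the BE aggregate throughput under the LBE attack to its reference value (assumed here to be the same aggregate without LBE traffic). A minimal sketch reproducing them:

```python
# (flows, BE aggregate with LBE attack, reference) -- values from this slide.
cases = [
    ("1BE+1LBE",   79.02, 85.98),
    ("2BE+2LBE",   86.00, 92.26),
    ("4BE+4LBE",   88.30, 93.72),
    ("8BE+8LBE",   90.94, 93.86),
    ("16BE+16LBE", 93.54, 94.77),
    ("32BE+32LBE", 94.20, 95.00),
]
for name, with_lbe, reference in cases:
    print(f"{name:12s} {with_lbe / reference:.1%}")
# Rises from ~92% to ~99% as flows are added (the slide rounds to 91%...99%).
```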
Slide 37: 32 BE TCP flows + 1 LBE UDP flow (512 & 1500)

It seems that, for the 512-byte frame size, beyond 80 x 2 = 160 Mbps of LBE offered load the input interface suffers again, and while BE TCP backs off, LBE UDP fills the gaps created by TCP; this is why LBE "steals" bandwidth from BE.

I think this is the situation where the LBE bandwidth percentage assigned should be 0%, but Cisco does not support it.

Bear in mind that when LBE is 160 Mbps, the load at the input interface also includes the TCP load, which is around 94 Mbps.

Slide 38: 64 TCP flows instead of 32

Better reaction for the 64-flow aggregate with respect to the 32-flow one: the higher the number of flows, the higher the reactivity.

Slide 39: Conclusions

UDP

  • Router input interfaces suffer badly at high input packet rates; this prevents traffic from reaching the Diff-Serv output interfaces.

…so small-frame-size LBE traffic has to be carefully rate-limited, when necessary, at the router input interface.

  • For small frame sizes, BE is not protected from the LBE attack; the bigger the disproportion, the worse the situation. This is likely to become a problem at Gbps speeds.

…so such rate-limiting also has to be a function of the traffic disproportion present in the system.

TCP

  • The policy works better when the number of flows increases
Slide 40: Future work

  • TCP + UDP
  • Try N TCP BE flows + UDP LBE, with and without burstiness, and check whether BE TCP is protected
  • Apply a 0% bandwidth reservation to LBE UDP traffic
  • TCP
  • Investigate whether, with a small number of TCP flows, the poor performance of the policy is actually due to the unreliability of measurements obtained with very few flows, rather than to the policy itself not working with a small number of flows.