
Proactive Surge Protection: A Defense Mechanism for Bandwidth-Based Attacks



1. Proactive Surge Protection: A Defense Mechanism for Bandwidth-Based Attacks
Jerry Chou, Bill Lin (University of California, San Diego)
Subhabrata Sen, Oliver Spatscheck (AT&T Labs-Research)

2. Outline
• Problem
• Approach
• Experimental Results
• Summary

3. Motivation
• Large-scale bandwidth-based DDoS attacks can quickly knock out substantial parts of a network before reactive defenses can respond
• All traffic that shares route links with attack traffic suffers collateral damage, even if it is not under direct attack
[Figure: US backbone topology map with nodes Seattle, New York, Chicago, Sunnyvale, Denver, Indianapolis, Los Angeles, Washington, Kansas City, Atlanta, Houston]

4. Motivation
• The potential for large-scale bandwidth-based DDoS attacks exists
• e.g., botnets with more than 100,000 bots exist today that, combined with the prevalence of high-speed Internet access, can give attackers multiple tens of Gb/s of attack capacity
• Moreover, core networks are oversubscribed (e.g., some core routers in Abilene have more than 30 Gb/s of incoming traffic from access networks, but only 20 Gb/s of outgoing capacity to the core)

5. Example Scenario
• Suppose, under normal conditions, Seattle/NY traffic is 3 Gb/s and Sunnyvale/NY traffic is 3 Gb/s
• Combined Seattle/NY + Sunnyvale/NY traffic stays under 10 Gb/s
[Figure: backbone topology with 10G links among Seattle, New York, Kansas City, Sunnyvale, Indianapolis, Houston, and Atlanta]

6. Example Scenario
• Suppose a sudden 10 Gb/s attack starts between Houston/Atlanta (normal traffic still Seattle/NY: 3 Gb/s, Sunnyvale/NY: 3 Gb/s)
• Congested links suffer a high rate of packet loss
• Serious collateral damage on crossfire OD pairs
[Figure: same topology, with the Houston/Atlanta attack congesting the shared 10G links]

7. Impact on Collateral Damage
• OD pairs are classified into 3 types with respect to the attack traffic:
  • Attacked: OD pairs carrying attack traffic
  • Crossfire: OD pairs sharing route links with attack traffic
  • Non-crossfire: OD pairs not sharing route links with attack traffic
• Collateral damage occurs on crossfire OD pairs
• Even a small percentage of attack flows can affect substantial parts of the network
[Figure: extent of collateral damage in the US and Europe backbones]

8. Related Work
• Most existing DDoS defense solutions are reactive in nature
• However, large-scale bandwidth-based DDoS attacks can quickly knock out substantial parts of a network before reactive defenses can respond
• Therefore, we need a proactive defense mechanism that takes effect immediately when an attack occurs

9. Related Work (cont'd)
• Router-based defenses like Random Early Detection (RED, RED-PD, etc.) can prevent congestion by dropping packets early, before congestion builds up
  • But they may drop normal traffic indiscriminately, causing responsive TCP flows to degrade severely
• Approximate fair dropping schemes aim to provide fair sharing between flows
  • But attackers can launch many seemingly legitimate TCP connections with spoofed IP addresses and port numbers
• Both aggregate-based and flow-based router defense mechanisms can be defeated

10. Related Work (cont'd)
• In general, defenses based on unauthenticated header information, such as IP addresses and port numbers, may not be reliable

11. Outline
• Problem
• Approach
• Experimental Results
• Summary

12. Our Solution
• Provide bandwidth isolation between OD pairs, independent of IP spoofing or the number of TCP/UDP connections
• We call this method Proactive Surge Protection (PSP), as it aims to proactively limit the damage that can be caused by sudden demand surges, e.g., sudden bandwidth-based DDoS attacks

13. Basic Idea: Bandwidth Isolation
• Meter and tag packets on ingress as HIGH or LOW priority, based on historical traffic demands and network capacity (a metering sketch follows this slide)
• Drop LOW packets under congestion inside the network
• Example: Seattle/NY (limit 3.5 Gb/s, actual 3 Gb/s) and Sunnyvale/NY (limit 3.5 Gb/s, actual 3 Gb/s) are admitted entirely as HIGH; Houston/Atlanta (limit 3 Gb/s) is all HIGH at its normal 2 Gb/s, but under a 10 Gb/s attack only 3 Gb/s is tagged HIGH and 7 Gb/s LOW
• New York still receives 3 Gb/s from Seattle and 3 Gb/s from Sunnyvale: the proposed mechanism proactively drops attack traffic immediately when attacks occur
[Figure: topology with 10G links (Seattle, New York, Kansas City, Sunnyvale, Indianapolis, Houston, Atlanta), annotated with per-OD-pair limits and HIGH/LOW tagging]
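To make the ingress step concrete, here is a minimal Python sketch of a token-bucket two-color marker. The paper does not prescribe a particular metering algorithm; the class name, burst size, and packet representation below are our assumptions for illustration.

import time

class TwoColorMarker:
    """Token-bucket meter that tags packets HIGH while an OD pair stays
    within its allocated rate, and LOW once the allocation is exceeded.
    (Hypothetical sketch; PSP only requires some two-priority marker.)"""

    def __init__(self, rate_bps, burst_bits):
        self.rate = rate_bps      # allocated bandwidth for this OD pair
        self.burst = burst_bits   # headroom for short bursts
        self.tokens = burst_bits
        self.last = time.monotonic()

    def tag(self, packet_bits):
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bits:
            self.tokens -= packet_bits
            return "HIGH"   # within allocation: protected traffic
        return "LOW"        # above allocation: first to be dropped

# Example: Houston->Atlanta allocated 3 Gb/s; a sustained 10 Gb/s surge
# would see roughly 7/10 of its packets tagged LOW.
marker = TwoColorMarker(rate_bps=3e9, burst_bits=1e6)
print(marker.tag(12_000))   # one 1500-byte packet -> "HIGH"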

14. Architecture
• Policy plane: a Traffic Data Collector gathers traffic measurements, and a Bandwidth Allocator computes the Bandwidth Allocation Matrix from them
• Data plane, deployed at the network perimeter: Differential Tagging marks arriving packets as high or low priority
• Data plane, deployed at network routers: Preferential Dropping forwards tagged packets, dropping low-priority packets under congestion (a toy model follows this slide)
• The proposed mechanism uses features readily available in modern routers
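As a rough illustration of the preferential-dropping box, the sketch below models a bounded queue that sheds LOW-priority packets first under congestion. This is an assumption-laden toy, not the routers' actual mechanism; real deployments would rely on existing priority-aware drop policies.

from collections import deque

class PreferentialDropQueue:
    """Bounded queue that drops LOW packets before HIGH ones under
    congestion. (Hypothetical model of 'Preferential Dropping'.)"""

    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = deque()

    def enqueue(self, packet, priority):
        if len(self.queue) < self.capacity:
            self.queue.append((packet, priority))
            return True
        # Queue full: evict a LOW packet to make room for a HIGH arrival.
        if priority == "HIGH":
            for i, (_, p) in enumerate(self.queue):
                if p == "LOW":
                    del self.queue[i]
                    self.queue.append((packet, priority))
                    return True
        return False   # congestion: this packet is dropped

q = PreferentialDropQueue(capacity=2)
q.enqueue("pkt1", "LOW")
q.enqueue("pkt2", "HIGH")
print(q.enqueue("pkt3", "HIGH"))   # True: displaces pkt1, which was LOW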

15. Allocation Algorithms
• Aggregate traffic at the core is very smooth, and variations are predictable (source: Roughan '03, on a Tier-1 US backbone)
• Compute a bandwidth allocation matrix for each hour based on historical traffic measurements
• e.g., the allocation for 3pm is computed from traffic measurements taken during 3-4pm over the past 2 months
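As a sketch of how the hourly inputs might be assembled, the snippet below averages per-OD-pair demand samples by hour of day. The tuple layout and the idea of feeding the result directly to the allocator are our assumptions, not details from the paper.

from collections import defaultdict

def hourly_mean_demand(samples):
    """samples: iterable of (hour_of_day, od_pair, demand_gbps)
    measurements, e.g., collected over the past two months.
    Returns the mean demand per (hour, OD pair). (Illustrative.)"""
    totals = defaultdict(float)
    counts = defaultdict(int)
    for hour, od, demand in samples:
        totals[(hour, od)] += demand
        counts[(hour, od)] += 1
    return {key: totals[key] / counts[key] for key in totals}

history = [(15, ("Seattle", "NY"), 2.9), (15, ("Seattle", "NY"), 3.1)]
print(hourly_mean_demand(history))   # {(15, ('Seattle', 'NY')): 3.0}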

16. Allocation Algorithms
• To account for measurement inaccuracies and provide headroom for traffic burstiness, we fully allocate the entire network capacity as a utility max-min fair allocation problem
• Mean-PSP: based on the mean of traffic demands
• CDF-PSP: based on the Cumulative Distribution Function (CDF) of traffic demands
• Utility max-min fair allocation (sketched in code below):
  • Iteratively allocate bandwidth in a "water-filling" manner
  • Each iteration maximizes the common utility of all flows
  • After each iteration, remove the flows whose paths have no residual capacity
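Here is a minimal Python sketch of the water-filling loop, under the assumption that a flow's utility is its allocation divided by a per-flow weight (so Mean-PSP corresponds to using mean demands as weights). The function and variable names are ours, not the paper's.

def utility_max_min(routes, capacity, weight):
    """Water-filling allocation that equalizes a common utility
    u = allocation / weight across flows.
    routes: {flow: [links on its path]}
    capacity: {link: Gb/s}, weight: {flow: positive weight}."""
    residual = dict(capacity)
    active = set(routes)
    alloc = {f: 0.0 for f in routes}
    while active:
        # Raise the common utility until the tightest link saturates.
        du = min(
            residual[l] / sum(weight[f] for f in active if l in routes[f])
            for l in residual
            if any(l in routes[f] for f in active)
        )
        for f in active:
            alloc[f] += du * weight[f]
            for l in routes[f]:
                residual[l] -= du * weight[f]
        # Flows crossing a saturated link keep their current allocation.
        active = {f for f in active
                  if all(residual[l] > 1e-9 for l in routes[f])}
    return alloc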

17. Utility Max-min Fair Bandwidth Allocation
[Figure: worked example on a three-node network (A, B, C) showing per-flow utility functions for flows AB, BC, and AC, the 1st and 2nd water-filling rounds, and the resulting allocation on links AB and BC]

18. Mean-PSP (Mean-based Max-min)
• Use the mean traffic demand as the utility function
• Iteratively allocate bandwidth in a "water-filling" manner (worked example below)

Mean demand (Gb/s), from\to:
        A      B      C
  A     -      1.5    1
  B     0.5    -      0.5
  C     1.5    1      -

BW allocation Bij (Gb/s) on 10G links, from\to:
        A      B      C
  A     -      6      4
  B     4      -      6
  C     6      4      -

[Figure: link loads on AB, BC, CB, BA after the 1st and 2nd water-filling rounds]
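Under the assumption that the three nodes form a directed chain (so A-to-C traffic transits B via links AB and BC, and C-to-A traffic via CB and BA), running the water-filling sketch from slide 16 with mean demands as weights reproduces the allocation matrix above:

# Reuses utility_max_min from the sketch after slide 16.
routes = {
    ("A", "B"): ["AB"], ("B", "C"): ["BC"],
    ("B", "A"): ["BA"], ("C", "B"): ["CB"],
    ("A", "C"): ["AB", "BC"], ("C", "A"): ["CB", "BA"],
}
capacity = {"AB": 10, "BC": 10, "CB": 10, "BA": 10}
mean_demand = {("A", "B"): 1.5, ("A", "C"): 1.0, ("B", "A"): 0.5,
               ("B", "C"): 0.5, ("C", "A"): 1.5, ("C", "B"): 1.0}
print(utility_max_min(routes, capacity, mean_demand))
# {('A','B'): 6.0, ('A','C'): 4.0, ('B','A'): 4.0,
#  ('B','C'): 6.0, ('C','A'): 6.0, ('C','B'): 4.0}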

19. CDF-PSP (CDF-based Max-min)
• Explicitly capture the traffic variance by using a Cumulative Distribution Function (CDF) model as the utility function
• Maximizing utility is equivalent to minimizing the drop probabilities for all flows in a max-min fair manner
• e.g., when 3 units of bandwidth are allocated, the drop probability is 20% (see the sketch below)
[Figure: CDF utility curve, utility (%) vs. allocated bandwidth]
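A hedged sketch of the CDF-as-utility idea: with an empirical CDF of a flow's historical demand, the utility of allocating bandwidth b is P(demand <= b), so the residual drop probability is 1 - CDF(b). The demand samples below are invented so that 3 units of bandwidth match the slide's 20% drop probability.

import bisect

def empirical_cdf(samples):
    """Return a function u(b) = fraction of demand samples <= b,
    i.e., the CDF-PSP utility of allocating bandwidth b. (Illustrative.)"""
    xs = sorted(samples)
    def cdf(b):
        return bisect.bisect_right(xs, b) / len(xs)
    return cdf

# Hypothetical demand samples for one OD pair (units of bandwidth).
utility = empirical_cdf([1, 1, 2, 2, 2, 3, 3, 3, 4, 5])
print(utility(3))        # 0.8 -> 80% utility at 3 units
print(1 - utility(3))    # 0.2 -> 20% drop probability at 3 units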

20. Outline
• Problem
• Approach
• Experimental Results
• Summary

21. Networks
• US Backbone
  • Large Tier-1 backbone network in the US
  • ~700 nodes, ~2000 links (1.5 Mb/s - 10 Gb/s)
  • 1-minute traffic traces: 07/01/07-09/03/07
• Europe Backbone
  • Large Tier-1 backbone network in Europe
  • ~900 nodes, ~3000 links (1.5 Mb/s - 10 Gb/s)
  • 1-minute traffic traces: 07/01/07-09/03/07

22. Evaluation Methodology
• NS2 simulation
• Normal traffic: based on actual traffic demands over a 24-hour period for each backbone
• Attack traffic:
  • US Backbone: highly distributed attack scenario
    • Based on commercial anomaly detection systems
    • From 40% of ingress routers to 25% of egress routers
  • Europe Backbone: targeted attack scenario
    • Created by a synthetic attack flow generator
    • From 40% of ingress routers to only 2% of egress routers

23. Packet Loss Rate Comparison
• Both PSP schemes greatly reduced packet loss rates
• Peak hours have higher packet loss rates
[Figure: packet loss rates over 24 hours for the US and Europe backbones]

24. Relative Loss Rate Comparison
• PSP reduced packet loss rates by more than 75%
[Figure: relative loss rate reduction for the US and Europe backbones]

25. Behavior Under Scaled Attacks
• Packet drop rate measured with attack demand scaled by a factor of up to 3x
• Under PSP, the loss remains small throughout the range
[Figure: drop rate vs. attack scaling factor for the US and Europe backbones]

26. Summary of Contributions
• A proactive solution for protecting networks that provides a first line of defense when sudden DDoS attacks occur
• Very effective in protecting network traffic from collateral damage
• Not dependent on unauthenticated header information, and thus robust to IP spoofing
• Readily deployable using existing router mechanisms

  27. Questions?
