
Quantifying Network Denial of Service: A Location Service Case Study


Presentation Transcript


  1. Quantifying Network Denial of Service: A Location Service Case Study Yan Chen, Adam Bargteil, David Bindel, Randy H. Katz and John Kubiatowicz Computer Science Division University of California at Berkeley USA

  2. Outline • Motivation • Network DoS Attack Benchmarking • Object Location Services • Simulation Setup and Results • Conclusions

  3. Motivation • Network DoS attacks are increasing in frequency, severity and sophistication • 32% of respondents detected DoS attacks (1999 CSI/FBI survey) • Yahoo, Amazon, eBay and Microsoft have been hit by DDoS attacks • About 4,000 attacks per week in 2000 • Security metrics are urgently needed • Mission-critical applications are built on products claiming various, unverified DoS-resilience properties • No good benchmark exists for measuring security assurance • Desired: a general methodology for quantifying the resilience of an arbitrary system/service to network DoS attacks

  4. Outline • Motivation • Network DoS Attack Benchmarking • Object Location Services • Simulation Setup and Results • Conclusions

  5. Network DoS Benchmarking • QoS metrics • DoS attacks degrade resource availability, so resilience is a spectrum metric rather than a binary one • General vs. app-specific metrics • General: end-to-end latency, throughput and time to recover • Multi-dimensional resilience quantification • Dimensions ranked by importance, frequency, severity and sophistication • Application/system specific, hard to generalize • Solution: be specific in the threat-model definition and quantify resilience only within that model

  6. Network DoS Benchmarking (Cont’d) • Simulation vs. experiment • Standard and realistic simulation environment specification • Network configuration • Workload generation • Threat model (taxonomy from CERT) • Covered: consumption of network connectivity and/or bandwidth • Covered: consumption of other resources, e.g. queues, CPU • Covered: destruction or alteration of configuration information • Not covered: physical destruction or alteration of network components

  7. Two General Classes of Attacks • Flooding attacks • Point-to-point attacks: TCP/UDP/ICMP flooding, Smurf attacks • Distributed attacks: hierarchical structures • Corruption attacks • Application specific • Impossible to test them all; choose typical examples for benchmarking

  8. Outline • Motivation • Network DoS Attack Benchmarking • Object Location Services • Simulation Setup and Results • Conclusions

  9. Object Location Services (OLS) • Centralized directory services (CDS) are vulnerable to DoS attacks • e.g. SLP, LDAP • Replicated directory services (RDS) suffer consistency overhead and still present a limited number of targets • Distributed directory services (DDS) • Combined routing and location on an overlay network • Guaranteed success and locality • Failure and attack isolation • Examples: Tapestry, CAN, Chord, Pastry

  10. Tapestry Routing and Location • Namespace (nodes and objects) • Each object has its own hierarchy rooted at a Root node: f(ObjectID) = RootID, via a dynamic mapping function • Suffix routing from A to B • At the h-th hop, arrive at the nearest node hop(h) such that hop(h) shares a suffix of length h digits with B • Example: 5324 routes to 0629 via 5324 → 2349 → 1429 → 7629 → 0629 • Object location • The root is responsible for storing the object’s location • Publish and search requests both route incrementally toward the root • http://www.cs.berkeley.edu/~ravenben/tapestry
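
To make this concrete, here is a minimal Python sketch of the two ingredients the slide names: the shared-suffix length used by the routing rule, and a stand-in for the dynamic mapping function f(ObjectID) = RootID. The longest-suffix rule and the hash tiebreak are assumptions for illustration; real Tapestry computes the root dynamically via surrogate routing.

```python
import hashlib

def suffix_match_len(a: str, b: str) -> int:
    """Number of trailing digits two equal-length IDs share."""
    n = 0
    while n < len(a) and a[-1 - n] == b[-1 - n]:
        n += 1
    return n

def object_root(object_id: str, node_ids: list) -> str:
    """Illustrative stand-in for f(ObjectID) = RootID: pick the node whose
    ID shares the longest suffix with the object's ID, breaking ties with
    a hash so the choice is deterministic given the live node set."""
    return max(node_ids,
               key=lambda n: (suffix_match_len(object_id, n),
                              hashlib.sha1((object_id + n).encode()).hexdigest()))

print(object_root("0629", ["5324", "2349", "1429", "7629"]))  # -> 7629
```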

  11. Tapestry Mesh: Incremental Suffix-Based Routing • [Figure: a Tapestry mesh centered on node 0x43FE, with edges labeled 1–4 for the routing level at which each hop fixes one more suffix digit; surrounding nodes include 0x79FE, 0x23FE, 0x993E, 0x73FE, 0x44FE, 0xF990, 0x035E, 0x04FE, 0x13FE, 0x555E, 0xABFE, 0x9990, 0x239E, 0x73FF, 0x1290 and 0x423E]

  12. Routing in Detail • Example: octal digits, 2^12 namespace, 5712 routes to 7510 via 5712 → 0880 → 3210 → 4510 → 7510 • Neighbor map for node “5712” (octal): the level-h entry for digit d points to a node that shares 5712’s last h−1 digits and has d as its h-th digit from the right; entries equal to 5712 are slots the node fills itself:

      Digit   Level 4   Level 3   Level 2   Level 1
        0      0712      x012      xx02      xxx0
        1      1712      x112      5712      xxx1
        2      2712      x212      xx22      5712
        3      3712      x312      xx32      xxx3
        4      4712      x412      xx42      xxx4
        5      5712      x512      xx52      xxx5
        6      6712      x612      xx62      xxx6
        7      7712      5712      xx72      xxx7
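
The hop rule can be checked mechanically. Below is a minimal Python sketch of suffix routing over a toy network containing just the five nodes of the example; the neighbor tables are hand-filled, whereas real Tapestry fills each slot with the nearest qualifying node.

```python
def route(source: str, dest: str, neighbors: dict) -> list:
    """At hop h, resolve digit dest[-h] at routing level h, i.e. jump to a
    node sharing an h-digit suffix with the destination."""
    path, current = [source], source
    for level in range(1, len(dest) + 1):
        if current[-level:] == dest[-level:]:
            continue  # suffix already matches at this level
        current = neighbors[current][(level, dest[-level])]
        path.append(current)
    return path

# Hand-filled tables covering the slide's example route, 5712 -> 7510:
neighbors = {
    "5712": {(1, "0"): "0880"},
    "0880": {(2, "1"): "3210"},
    "3210": {(3, "5"): "4510"},
    "4510": {(4, "7"): "7510"},
}
print(route("5712", "7510", neighbors))  # ['5712', '0880', '3210', '4510', '7510']
```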

  13. Object Location: Randomization and Locality

  14. Outline • Motivation • Network DoS Attack Benchmarking • Object Location Services • Simulation Setup and Results • Conclusions

  15. Simulation Setup • Distributed information retrieval system built on top of ns • 1000-node transit-stub topology from GT-ITM • Extended with common link bandwidths (the topology figure labels links Fast Ethernet, T3 and T1) • Synthetic workload • Zipf’s law and hot-cold patterns • 500 objects, 3 replicas each, placed on 3 random nodes • Sizes chosen at random in the range typical of web content: 5 KB – 50 KB
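
The synthetic workload can be sketched in a few lines of Python. This is a minimal sketch, not the authors' generator: the Zipf exponent (s = 1.0) and the request count are assumptions, while the 500-object count and the 5 KB – 50 KB size range come from the slide.

```python
import random

def zipf_weights(n: int, s: float = 1.0) -> list:
    """Zipf popularity: the object of rank k gets weight 1 / k**s, so a
    few hot objects dominate the request stream (hot-cold pattern)."""
    return [1.0 / (k ** s) for k in range(1, n + 1)]

NUM_OBJECTS = 500  # from the slide; exponent and request count are assumed
sizes = [random.randint(5_000, 50_000) for _ in range(NUM_OBJECTS)]  # 5-50 KB
requests = random.choices(range(NUM_OBJECTS),
                          weights=zipf_weights(NUM_OBJECTS), k=10_000)
```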

  16. Directory Servers Simulation • CDS returning a random replica (CDSr) • CDS returning the closest replica (CDSo) • RDS: 4 widely distributed random nodes, always returning a random replica • With a random directory server (RDSr) • With the closest directory server (RDSo) • DDS: a simplified version of Tapestry (DDS) • Tapestry mesh statically built with full topology knowledge • Hop count used as the distance metric
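
The difference between the “r” and “o” variants is just the selection policy. A minimal sketch, assuming a hypothetical dist(a, b) hop-count function:

```python
import random

def choose_replica(replicas: list, client: str, dist, policy: str) -> str:
    """'r' returns a uniformly random replica (CDSr/RDSr); 'o' returns the
    replica closest to the client by hop count (CDSo/RDSo), matching the
    distance metric the slide names for the DDS simulation."""
    if policy == "r":
        return random.choice(replicas)
    return min(replicas, key=lambda rep: dist(client, rep))
```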

  17. Attacks Simulation • Flooding attacks • 200-second simulations • Vary the number of attack agents: 1 – 16 • Each agent injects a constant bit stream at its target: 25 KB/s – 500 KB/s • Targets: • CDS, RDS: the directory server(s) • DDS: the root(s) of hot object(s) • Corruption attacks • Corrupted application-level routing tables on target nodes: CDS, RDS, DDS • Forged replica advertisements through node spoofing: DDS

  18. Results of Flooding Attacks • Does only the amount of flood traffic matter? No: • Bottleneck bandwidth restricts the attackers’ power • Clients and attackers share paths • Multiple attackers are hard to identify and eliminate simultaneously • CDS vs. Tapestry • 1 × 100 KB/s, 4 × 25 KB/s – 4 × 100 KB/s • Tapestry shows resistance to DoS attacks • [Figures: average response latency; request throughput]

  19. Dynamics of Flooding Attacks • CDS vs. Tapestry (most severe case: 4 × 100 KB/s) • Attacks start at the 40th second and end at the 110th • Time to recover • CDS (both policies): 40 seconds • Tapestry: negligible • [Figures: average response latency; request throughput]

  20. Results of Flooding Attacks (Cont’d) • RDS vs. Tapestry • 4 × 100 KB/s – 16 × 500 KB/s • Both RDS and Tapestry are far more resilient than CDS • Performance: RDSo > Tapestry > RDSr • What counters DoS attacks: decentralization and topology-aware locality • [Figures: average response latency; request throughput]

  21. Results of Corruption Attacks • Distance corruption • One false edge on the directory/root server • Only CDSo (85%) and Tapestry (2.2%) are affected • App-specific attacks • One Tapestry node spoofs as the root of every object (the black square node in the graph) • 24% of nodes are affected (enclosed by round-cornered rectangles)

  22. Resiliency Ranking • Combine the multi-dimensional resiliency quantification into a single ranking • For multiple flooding attacks, weights are assigned in proportion to the amount of flood traffic • Results are normalized by the corresponding performance without attacks
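
The slide does not spell out the formula, so the following sketch is only one consistent reading of it: each attack scenario's performance is normalized by the no-attack baseline, and scenarios are weighted by their share of the total flood traffic.

```python
def resilience_score(perf_under_attack: list, perf_baseline: float,
                     flood_rates: list) -> float:
    """Weighted, normalized resilience across flooding scenarios (an
    assumed combination; the slide states only the traffic-proportional
    weighting and the baseline normalization)."""
    total = sum(flood_rates)
    return sum((rate / total) * (perf / perf_baseline)
               for rate, perf in zip(flood_rates, perf_under_attack))
```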

  23. Outline • Motivation • Network DoS Attack Benchmarking • Object Location Services • Simulation Setup and Results • Conclusions

  24. Conclusions • A first attempt at network DoS benchmarking • Applied to quantify various directory services • Replicated/distributed services are more resilient than centralized ones • Future work: expand to more comprehensive attacks and more dynamic simulations • Future work: use the benchmark to study other services such as web hosting and content distribution

  25. Queuing Theory Analysis (backup slide) • Assume M/M/1 queuing • Predicts the trends and guides the choice of simulation parameters to cover enough of the spectrum • [Figures: average response latency (s) and legitimate throughput, plotted against the ratio of attack traffic to legitimate traffic]
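
Under M/M/1, the mean response time is T = 1 / (mu - lambda), so attack traffic that is a multiple of the legitimate load pushes the total arrival rate toward the service rate and latency toward infinity, which is the trend this backup slide predicts. A small sketch (the service and arrival rates here are made-up values):

```python
def mm1_latency(mu: float, lam_legit: float, ratio: float) -> float:
    """M/M/1 mean response time T = 1 / (mu - lambda_total), with
    lambda_total = lam_legit * (1 + ratio); `ratio` is the slide's x axis,
    attack traffic divided by legitimate traffic."""
    lam_total = lam_legit * (1.0 + ratio)
    return float("inf") if lam_total >= mu else 1.0 / (mu - lam_total)

for ratio in (0.0, 1.0, 4.0, 9.0):  # latency diverges as load nears capacity
    print(ratio, mm1_latency(mu=100.0, lam_legit=10.0, ratio=ratio))
```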
