
Network Support For IP Traceback - Sigcomm ‘00


Presentation Transcript


  1. Network Support For IP Traceback - Sigcomm ‘00 Stefan Savage, David Wetherall, Anna Karlin and Tom Anderson University of Washington - Seattle, WA Presented by Mohammad Hajjat - Purdue University Slides courtesy of Teng Fei - UMass April, 2002

  2. The Problem • Denial of Service (DoS) attacks • Remotely consume the resources of a server or network • Increasing in number and frequency • Simple to implement • DoS attacks are difficult to trace: • Indirection • Attack packets are sent from slave machines, which are under the control of a remote master machine • Spoofed IP source addresses • Attackers disguise their location with forged source addresses, so the true origin is lost

  3. Packet Marking Traceback • Mark packets with router addresses • deterministically or probabilistically • Trace the attack using the marked packets • Pros • Requires no cooperation from ISPs • Does not impose heavy network overhead • Can trace an attack “post mortem”

  4. Multiple Attackers • [Diagram: attackers A1, A2, A3 (attack origin) send packets through routers R1–R7 toward the victim V]

  5. Exact Traceback Problem • [Diagram: an attack path through R6, R3, R2, R1 to the victim V; exact traceback recovers the ordered router list R6, R3, R2, R1]

  6. Approximate Traceback Problem • [Diagram: same topology; approximate traceback returns R5, R6, R3, R2, R1, which contains the true attack path as a suffix]

  7. Methodology • I. Marking procedure • performed by routers • adds information to packets • II. Path reconstruction procedure • performed by the victim • uses the information in marked packets • Convergence time: the number of packets needed to reconstruct the attack path

  8. Basic Marking Algorithms • I. Node Append • II. Node Sampling • III. Edge Sampling

  9. I. Node Append • Append the address of each router to the end of the packet • Yields a complete, ordered list of the routers on the attack path • [Diagram: original packet followed by the appended router list]

  10. I. Node Append • Pros • complete, ordered attack path • converges quickly (a single packet suffices) • Cons • infeasibly high per-packet router overhead • attackers can append false path information
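
To make the append-and-read-back behavior concrete, here is a minimal Python sketch; the Packet class and router names are hypothetical stand-ins, not the paper's code.

# Node append sketch: each router appends its own address to the packet.
class Packet:
    def __init__(self, payload):
        self.payload = payload
        self.router_list = []          # grows by one address per hop

def node_append(router_addr, pkt):
    pkt.router_list.append(router_addr)

# A single packet traversing R1 -> R2 -> R3 carries the full ordered path,
# but every router must enlarge every packet it forwards.
pkt = Packet("attack data")
for router in ["R1", "R2", "R3"]:
    node_append(router, pkt)
print(pkt.router_list)                 # ['R1', 'R2', 'R3']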

  11. II. Node Sampling • Reserve a single node field in the packet header • Each router writes its address into the node field with probability p • The victim reconstructs the path from the relative frequency of node samples • Requires only one additional write and a checksum update per packet (a sketch follows the walkthrough below)

  12.–15. II. Node Sampling (walkthrough) • [Diagrams: a packet travels R1 → R2 → R3; R1’s mark survives through several hops until R3 samples the packet and overwrites the node field with R3]

  16. II. Node Sampling • Cons: • Slow convergence • needs many packets • usually on the order of 10,000 – 100,000 • Cannot trace multiple attackers
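
A minimal simulation of node sampling, assuming an illustrative p of 0.2 and toy router names; it shows why the victim can rank routers by sample frequency but needs many packets to do so.

import random
from collections import Counter

P = 0.2  # marking probability p (illustrative value)

def node_sample(router_addr, node_field):
    """With probability p, overwrite the single node field with this router's address."""
    return router_addr if random.random() < P else node_field

def send_packet(path_to_victim):
    node_field = None                      # attacker-supplied initial value
    for router in path_to_victim:          # routers visited in order toward the victim
        node_field = node_sample(router, node_field)
    return node_field

# Routers closer to the victim overwrite earlier marks, so they show up more often;
# sorting by frequency recovers the path order, but only after many packets.
path = ["R6", "R3", "R2", "R1"]            # R1 is the router nearest the victim
samples = Counter(send_packet(path) for _ in range(100_000))
print([r for r, _ in samples.most_common() if r])   # expected order: R1, R2, R3, R6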

  17. III. Edge Sampling • An edge represents the routers at each end of a link • Store edges instead of single nodes • the start and end addresses of the edge’s routers • plus the distance from the edge to the victim • [Diagram: edge between R1 and R2]

  18. III. Edge Sampling • With probability p, a router writes its own address into the start field and 0 into the distance field • A distance of 0 means the packet was marked by the previous router, so the current router writes its own address into the end field • Whenever a router does not start a new mark, it increments the distance field • Routers further downstream may later overwrite these fields with a fresh mark
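
The marking step can be written almost directly from this description; the dict-based packet below is a stand-in for the real header fields, and p = 1/25 follows the value used later in the experiments.

import random

P = 1 / 25   # marking probability p (value used in the paper's experiments)

def edge_sample(router_addr, pkt):
    """pkt is a dict with 'start', 'end' and 'distance' fields standing in for header space."""
    if random.random() < P:
        pkt["start"] = router_addr      # begin a new mark at this router
        pkt["distance"] = 0
    else:
        if pkt["distance"] == 0:        # the previous router just marked the packet,
            pkt["end"] = router_addr    # so this router is the far end of that edge
        pkt["distance"] += 1            # always incremented when not marking

Because the distance is always incremented when a router does not mark, an attacker cannot forge a mark that appears closer to the victim than the nearest real router.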

  19.–22. III. Edge Sampling (walkthrough) • [Diagrams: R1 marks the packet with start = R1, distance = 0; R2 sees distance 0, writes end = R2, and increments the distance to 1; R3 increments the distance to 2, so the victim receives the edge (R1, R2) at distance 2]

  23. Path Reconstruction • Let G be a tree rooted at the victim v • Insert the received tuples (start, end, distance) into G • Remove any edge (x, y, d) where d does not equal the distance from x to v in G • Extract the attack paths from G
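
A simplified reconstruction for a single attack path, assuming the victim has already extracted (start, end, distance) tuples from the marks; the router names mirror the earlier diagrams, and the full procedure builds a tree to handle many attackers.

def reconstruct_path(edges):
    """edges: (start, end, distance) tuples collected by the victim.
       Single-path sketch; the paper's procedure builds a tree rooted at the victim."""
    by_dist = {d: (s, e) for s, e, d in edges}
    path = []
    for d in sorted(by_dist):
        start, end = by_dist[d]
        # The 'end' of each edge must match the router recovered one hop closer;
        # edges that fail this check are discarded as inconsistent (possibly spoofed).
        if path and end != path[-1]:
            continue
        path.append(start)
    return path                                    # routers ordered from the victim outward

# Path from the earlier diagrams: attacker -> R6 -> R3 -> R2 -> R1 -> victim
edges = {("R1", None, 0), ("R2", "R1", 1), ("R3", "R2", 2), ("R6", "R3", 3)}
print(reconstruct_path(edges))                     # ['R1', 'R2', 'R3', 'R6']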

  24. III. Edge Sampling • Pros • Converges much faster than node sampling • Efficiently distinguishes multiple attackers • Cons • Space: requires 72 bits of additional space in every IP packet (two 32-bit IP addresses plus an 8-bit distance) • Backwards compatibility

  25. Encoding Issue • Overload the IP identification field • normally used for fragmentation • Decrease the space requirement • store the XOR of the two edge addresses as the edge-id: since B XOR (A XOR B) = A, the victim can peel the addresses back off during reconstruction • Pros: • Reduced space • Cons: • Increased reconstruction time

  26. Marking With XOR • [Diagram: attack path a → b → c → d → v; the resulting edge-ids, from farthest to nearest, are a XOR b, b XOR c, c XOR d, and d (the last router’s address is carried alone)]

  27. Reconstructing With XOR • [Diagram: starting from d, the victim XORs each successive edge-id (c XOR d, b XOR c, a XOR b) with the address already recovered to obtain c, b, and a, rebuilding the path]
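
A worked example of the XOR trick from the two diagrams above, using small integers in place of 32-bit router addresses.

# Stand-ins for router addresses along the path a -> b -> c -> d -> victim
a, b, c, d = 0xA, 0xB, 0xC, 0xD

# Edge-ids the victim collects, keyed by distance (slide 26): the nearest router's
# address d arrives un-XORed, every other mark is the XOR of two neighbors.
edge_ids = {0: d, 1: c ^ d, 2: b ^ c, 3: a ^ b}

# Reconstruction (slide 27): XOR each edge-id with the address recovered one hop closer.
addr = edge_ids[0]
path = [addr]
for dist in range(1, len(edge_ids)):
    addr = edge_ids[dist] ^ addr        # e.g. (c ^ d) ^ d == c
    path.append(addr)
print([hex(x) for x in path])           # ['0xd', '0xc', '0xb', '0xa']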

  28. Subdividing the Edge-id • Reduce per-packet space further by dividing the edge-id (the XORed address) into k non-overlapping fragments and storing only one fragment per packet • Each mark must also carry the fragment’s offset so the victim can reassemble the edge-id
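
A sketch of the fragmentation step, assuming a 32-bit edge-id split into k = 4 equal chunks purely for illustration (the paper's final scheme splits a larger interleaved value into 8-bit fragments).

K = 4                     # number of fragments (illustrative choice)
BITS = 32                 # edge-id width in this sketch
CHUNK = BITS // K         # bits carried per marked packet

def fragment(edge_id):
    """Split an edge-id into (offset, fragment) pairs, one pair per marked packet."""
    return [(i, (edge_id >> (i * CHUNK)) & ((1 << CHUNK) - 1)) for i in range(K)]

def reassemble(pieces):
    """Rebuild the edge-id from fragments with disjoint offsets."""
    value = 0
    for offset, frag in pieces:
        value |= frag << (offset * CHUNK)
    return value

edge_id = 0xC0A80101                       # arbitrary 32-bit value for illustration
assert reassemble(fragment(edge_id)) == edge_id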

  29. Creating Unique Edge-ids • Problem: edge-id fragments are not unique • with multiple attackers, several edge fragments share the same offset and distance • Solution: bit-interleave a hash code with the IP address

  30. Creating Unique Edge-ids • [Diagram: the router’s address and Hash(Address) are bit-interleaved into a single value, which is split into k fragments (offsets 0 … k−1) sent into the network]

  31. Candidate Edge-ids • Combine the fragments at each distance in all ways that use disjoint offset values • Check that the embedded hash matches the hash of the reassembled address

  32. Constructing Candidate Edges • [Diagram: a reassembled candidate is de-interleaved into a candidate address and an embedded hash; if Hash(candidate address) equals the embedded hash the address is accepted, otherwise it is rejected]
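
Slides 29–32 can be sketched together: interleave the address with a hash of itself, split the result, and accept a reassembled candidate only when the hashes agree. The hash32 function below (CRC-32) is a stand-in; the paper does not mandate a particular hash.

import zlib

ADDR_BITS = 32

def hash32(addr):
    """Illustrative 32-bit hash of a router address (the actual hash is an implementation choice)."""
    return zlib.crc32(addr.to_bytes(4, "big")) & 0xFFFFFFFF

def interleave(addr):
    """Alternate address bits (even positions) with hash bits (odd positions): 64 bits total."""
    h, out = hash32(addr), 0
    for i in range(ADDR_BITS):
        out |= ((addr >> i) & 1) << (2 * i)
        out |= ((h >> i) & 1) << (2 * i + 1)
    return out

def deinterleave(value):
    addr = h = 0
    for i in range(ADDR_BITS):
        addr |= ((value >> (2 * i)) & 1) << i
        h |= ((value >> (2 * i + 1)) & 1) << i
    return addr, h

def is_valid_candidate(candidate_value):
    """Accept a reassembled candidate only if the embedded hash matches the address."""
    addr, embedded_hash = deinterleave(candidate_value)
    return hash32(addr) == embedded_hash

router_addr = 0x0A000001                                       # hypothetical router address
assert is_valid_candidate(interleave(router_addr))             # correct combination accepted
assert not is_valid_candidate(interleave(router_addr) ^ 0b101) # a wrong mix of fragments is rejected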

  33. Encoding Edge Fragments • Overload the 16-bit IP identification field • normally used to differentiate IP fragments
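
The paper's compressed marking fits a 3-bit fragment offset, a 5-bit distance, and an 8-bit edge-id fragment into those 16 bits; the bit layout chosen below is only for illustration.

def pack_id_field(offset, distance, fragment):
    """Pack offset (3 bits), distance (5 bits) and edge fragment (8 bits) into 16 bits."""
    assert 0 <= offset < 8 and 0 <= distance < 32 and 0 <= fragment < 256
    return (offset << 13) | (distance << 8) | fragment

def unpack_id_field(ident):
    return (ident >> 13) & 0x7, (ident >> 8) & 0x1F, ident & 0xFF

# Round-trip check with arbitrary values
assert unpack_id_field(pack_id_field(5, 17, 0xAB)) == (5, 17, 0xAB)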

  34. Testing the Algorithm • Simulator • creates random paths • originates attacks • Marking probability p = 1/25 • 1,000 random test runs • varying path lengths

  35. Experimental Results • [Graph: number of packets required to reconstruct attack paths across the test runs]

  36. Thanks for listening • Questions?

  37. Backup Slides: Future Work • Suffix validation • attackers can spoof the end edges • include a router “secret” • Attack origin (host) • Finding the attacker (person)

  38. Related Research • Steven M. Bellovin, ICMP Traceback Messages, AT&T. http://www.research.att.com/~smb/papers/draft-bellovin-itrace-00.txt • Alex Snoeren, Hash-Based IP Traceback, BBN, SIGCOMM. http://www.acm.org/sigcomm/sigcomm2001/p1-snoeren.pdf

  39. References • Stefan Savage, Practical Network Support for IP Traceback. http://www.cs.washington.edu/homes/savage/papers/UW-CSE-00-02-01.pdf • Sara Sprenkle, Practical Network Support, Duke University. http://www.duke.edu/~ses12/presentations/nerdSavage.ppt • Hal Burch, IP Traceback, Carnegie Mellon University. http://axp.missouri.edu/~cecs481/Talks/rrp83a.ppt

  40. DoS Counter Measures • Ingress filtering • Link testing • input debugging • controlled flooding • Logging

  41. Ingress Filtering • Block packets with invalid source addresses • Pros • Moderate management/network overhead • Cons • requires widespread deployment • hard to apply in backbone/transit networks

  42. Link Testing • Start from the victim and test upstream links • Recursively repeat until the source is located • Assumes the attack remains active until the trace is complete

  43. Input Debugging • The victim recognizes an attack signature • A filter is installed on the upstream router • Pros • Software can help coordinate the process • Cons • Requires cooperation between ISPs • Considerable management overhead

  44. Controlled Flooding • Flood links with large bursts of traffic during the attack • Observe how the attack packet rate changes to determine the source • Pros • Ingenious • Cons • Is itself a denial of service, possibly a worse one

  45. Logging • Key routers log packets • Data mining is used to analyze the logs • Pros • Works post mortem • Cons • High resource demand

  46. ICMP Traceback • Sample packets with low probability • Copy the data and path information into a new ICMP packet • Pros • can reconstruct the path given a large number of packets • Cons • ICMP may be filtered

  47. DoS Attack Assumptions • Attacker may generate any packet • Multiple attackers may conspire • Attackers may be aware they are being traced • packets may be lost or reordered

  48. Design Assumptions • Attackers send numerous packets • Route between attacker and victim is fairly stable • Routers have limited CPU and memory • Routers are not widely compromised

  49. IP Header Encoding • Backwards compatibility • Two problems • Writing the same value into the id fields of fragments from different datagrams • Writing different values into the id fields of fragments of the same datagram

  50. Fragmentation Issues • Copy data into ICMP packet • Check the checksum at higher level • etc
