
Denial-of-Service Attacks and Defenses



  1. Denial-of-Service Attacks and Defenses Jinyang Li

  2. DoS in a nutshell • Goal: overwhelm a victim site with a huge workload • How? • Workload amplification • Exploit protocol design oversights • Small work for attackers → lots of work at victims • Brute-force flooding • Command botnets

  3. Lecture overview • A “tour” of attacks on protocol bugs at many layers • Link, IP, TCP/UDP, DNS, applications • In-network DoS mitigations • IP Traceback [Savage et al. SIGCOMM’00] • TVA [Yang et al. SIGCOMM’05]

  4. Link layer: attacks on 802.11 [Bellardo, Savage, USENIX Security '03] • De-authentication attack • De-auth packets are not authenticated • Attackers forge de-auth packets to the AP • De-authenticated clients lose communication • NAV attack: trick a victim into setting a large NAV (max = 2^15) • NAV is meant for (honest) nodes to reserve the channel (in RTS/CTS mode) • Small work for the attacker, lots of overhead at the victim

  5. Smurf ping attack • Attacker sends a ping with dst = broadcast address, src = victim • Every host on the broadcast subnet replies to the victim

  6. DNS traffic amplification • Attackers send DNS queries with a forged source address (the victim's) • DNS servers send responses to the victim • ~50X traffic amplification: a DNS query is a 60-byte UDP packet, while a DNS reply (with extensions) can reach 3000 bytes

  7. TCP SYN flood: the 3-way handshake • Client → server: SYN (SN_C) • Server → client: SYN-ACK (SN_C, SN_S) • Client → server: ACK (SN_S) • Server keeps state per connection: saddr, sport, daddr, dport, SN_C, SN_S

  8. TCP SYN flood: DoS via state exhaustion • Attackers flood SYNs (SYN_C1, SYN_C2, SYN_C3, ...) with forged source addresses • The victim keeps connection state while awaiting the final ACK • Legitimate connections are rejected once the state table is full

  9. TCP SYN floods • Backlog entries linger (~3 minutes): victims must keep re-sending SYN-ACKs • Real-world attack: the Blaster worm (2003) launched a SYN flood against windowsupdate.com

  10. Defense: SYN cookies • Keep no state until the connection is fully set up • Encode state in SN_S during the 3-way handshake: SN_S = T || L, where T is a 5-bit counter and L (24 bits) = F_key(saddr, sport, daddr, dport, T) + SN_C • Server sends the SYN-ACK without allocating state • On the final ACK, the server checks SN_S for validity; if valid, it allocates state
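
  A minimal sketch of this encoding in Python, with HMAC standing in for F_key; the key, the 64-second counter granularity, and the exact field layout are illustrative assumptions (real SYN-cookie implementations also squeeze in the MSS):

      import hmac, hashlib, time

      KEY = b"server-secret"   # assumed server-local secret

      def f_key(saddr, sport, daddr, dport, t):
          # 24-bit keyed hash of the connection 4-tuple and counter T
          msg = f"{saddr}:{sport}:{daddr}:{dport}:{t}".encode()
          return int.from_bytes(hmac.new(KEY, msg, hashlib.sha1).digest()[:3], "big")

      def make_cookie(saddr, sport, daddr, dport, snc):
          t = (int(time.time()) // 64) & 0x1F                 # 5-bit counter T
          l = (f_key(saddr, sport, daddr, dport, t) + snc) & 0xFFFFFF
          return (t << 24) | l                                # SN_S = T || L

      def check_cookie(saddr, sport, daddr, dport, snc, sns):
          t, l = sns >> 24, sns & 0xFFFFFF
          fresh = ((int(time.time()) // 64) - t) & 0x1F <= 1  # reject stale T
          return fresh and l == (f_key(saddr, sport, daddr, dport, t) + snc) & 0xFFFFFF

  The server calls make_cookie() when sending the SYN-ACK and remembers nothing; when the final ACK arrives, check_cookie() recomputes the hash from the packet's own headers, so only clients that completed the handshake ever cause state to be allocated.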

  11. Inferring SYN flood activity • Backscatter: victim servers send SYN-ACKs to the forged, effectively random source IPs • Monitor unused IP address space: unsolicited SYN-ACKs there are likely backscatter from SYN floods • Found ~400 SYN attacks/week [MVS USENIX Security '01]
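
  The extrapolation behind such measurements is simple: if forged sources are uniform over the 32-bit address space and the monitor covers n addresses, scale the observed rate up by 2^32 / n. A sketch, where the /8-sized telescope and the observed rate are made-up example numbers:

      def estimate_attack_rate(observed_pps, monitored_addrs):
          # forged sources uniform over 2^32 addresses => scale up linearly
          return observed_pps * (2 ** 32) / monitored_addrs

      # a monitor covering a /8 (2^24 addresses) that sees 100 SYN-ACKs/s
      # implies the victim is absorbing about 25,600 packets/s
      print(estimate_attack_rate(100, 2 ** 24))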

  12. Higher-level DoS • Flood the victim's DNS server • Send HTTP requests for large files • Make victims perform expensive operations • SSL servers must decrypt the first message • Requests for expensive DB operations

  13. End-system solutions • Idea: increase clients' workloads • Computational client puzzles • CAPTCHAs

  14. Client puzzles • Make clients consume CPU before getting service • Example puzzle: find x s.t. SHA-1(C || x) = 0 in the rightmost n bits • Clients take O(2^n) expected time to find an answer • Servers verify a solution with a single hash • Servers check solutions before doing real work • SSL: the server checks the solution before decrypting • Tunable: n can grow with attack volume • Make clients solve puzzles only during an attack
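
  A minimal sketch of this puzzle in Python; the 8-byte encoding of x and the example difficulty n = 16 are illustrative assumptions:

      import hashlib, os
      from itertools import count

      def solve(challenge, n):
          # brute force: ~2^n hashes in expectation
          mask = (1 << n) - 1
          for x in count():
              h = hashlib.sha1(challenge + x.to_bytes(8, "big")).digest()
              if int.from_bytes(h, "big") & mask == 0:
                  return x

      def verify(challenge, n, x):
          # one hash, then compare the rightmost n bits against zero
          h = hashlib.sha1(challenge + x.to_bytes(8, "big")).digest()
          return int.from_bytes(h, "big") & ((1 << n) - 1) == 0

      c = os.urandom(16)          # fresh challenge per client
      x = solve(c, 16)            # ~65,000 hashes for the client
      assert verify(c, 16, x)     # a single hash for the server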

  15. CAPTCHAs • Make clients devote "human" resources • Make clients solve CAPTCHAs before performing DB ops during an attack [Killbots NSDI '05]

  16. DoS mitigations inside the network

  17. Network-level solution: source identification • Goal: block attacks at the source • Problem: attack traffic forges source IPs • Possible solution: ingress filtering • ISPs should only forward packets with legitimate source IPs • Requires all ISPs (in all countries) to perform filtering

  18. IP traceback [Savage et al. SIGCOMM '00] • Goal: determine the paths to attack sources from the attack packets themselves • Insights: • Routers record info in packets • The victim assembles that info from large numbers of attack packets • Assumptions: • Attackers can generate any packet (including forged markings) • There may be ≥ 1 attack paths • Routers are not compromised • Paths are stable during the attack

  19. Caveat: traceback is approximate • [Diagram: attackers A1, A2, A3 reach victim V through routers R2–R7] • Ideal traceback: A2, R6, R3, R2, V • Approximate but robust traceback: may report extra or repeated routers around the true path

  20. Potential solution #1: node append • Each router appends its address to every packet • Each packet then carries the entire attack path • Problems: not enough space in the packet; can be expensive to implement

  21. Potential solution #2: node sampling • Each router records its address in a single field with probability p • The victim orders the recorded routers into a path • Routers with fewer samples are placed farther away • Easy to implement • p must be > 0.5 so that forged markings cannot change the path order • Converges slowly • Not robust against multiple attackers

  22. Traceback solution: edge sampling • Record an edge with probability p in each packet:

      // pkt.start, pkt.end encode an edge
      // pkt.distance is the hop count from that edge to the victim
      r = random(0, 1)
      if (r < p)
          pkt.start = self
          pkt.distance = 0
      else
          if (pkt.distance == 0)
              pkt.end = self
          pkt.distance++   // always increments, so the victim can order edges

  23. Reduce space usage • Record edge-id = start ⊕ end • The victim works backwards to reconstruct the path: the distance-0 marking is the last router's address itself, and each earlier router is recovered by XOR-ing with the router already known (e.g., markings d, c⊕d, b⊕c, a⊕b yield the path a, b, c, d, V) • Record one of k fragments of the edge-id at a time • Include hash(edge-id) to verify the edge-id's correctness after reconstruction • Result: overload the 16-bit IP identification field to mark packets
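
  A small sketch of that backwards reconstruction in Python; the toy router addresses and the markings dict (distance → edge-id, as collected by the victim) are illustrative assumptions:

      def reconstruct(markings):
          # markings: dict distance -> edge_id, where edge_id = start ^ end,
          # except at distance 0, which holds the last router's address alone
          path, prev = [], 0
          for dist in sorted(markings):
              router = markings[dist] ^ prev   # peel off the already-known neighbor
              path.append(router)
              prev = router
          return path[::-1]                    # router nearest the attacker first

      a, b, c, d = 0xA, 0xB, 0xC, 0xD
      marks = {0: d, 1: c ^ d, 2: b ^ c, 3: a ^ b}
      print([hex(r) for r in reconstruct(marks)])   # ['0xa', '0xb', '0xc', '0xd']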

  24. Limitations: reflectors • Reflectors are nodes that respond to traffic • Attackers forge the source so that responses are sent to victims • Examples: DNS servers, web servers, ... • Traceback follows packets, not higher-level request/response causality, so it stops at the reflector rather than the attacker

  25. Limitations: DDoS • Numerous sources and reflectors attack simultaneously • With m attackers and k fragments per edge-id, traceback must consider O(m^k) combinations to find each valid edge-id

  26. Capability-based solutions • Traceback: detects sources after the attack has happened • Capabilities: prevent nodes from sending unwanted traffic in the first place • Let receivers explicitly specify what traffic they want • [Yang et al. SIGCOMM '05] [Yaar et al. IEEE S&P '04] [Anderson et al. SIGCOMM '04]

  27. Sketch of network capabilities • The source requests permission to send • The destination authorizes the source for a limited transfer, e.g., 32KB in 10 secs • A capability is the proof of the destination's authorization • The source places capabilities on its packets and sends them • The network filters packets based on their capabilities

  28. TVA challenges • Counter flooding attacks: floods of initial requests, and floods of (mistakenly) permitted packets • Design unforgeable capabilities • Make capability verification efficient

  29. Challenge #1, problem: request floods • Requests do not carry capabilities, so capability checks cannot filter them

  30. Solution: rate-limit requests • Remaining problem: attackers' requests can still crowd out good requests

  31. Solution: fair-queue requests based on path id • Routers insert path-identifier tags [Yaar '03] • Requests are fair-queued using the most recent tags, one queue per path id

  32. Problem: flooding with permitted packets • Attackers holding (mistakenly granted) capabilities can still flood the victim

  33. Solution: fair-queue permitted packets w.r.t. destinations • Per-destination queues • TVA bounds the number of queues (later)

  34. Challenge #2: capability design • Routers stamp pre-capabilities on request packets: (timestamp, hash(src, dst, key, timestamp)) • Destinations return fine-grained capabilities: (N, T, timestamp, hash(pre-cap, N, T)) • Meaning: send N bytes in the next T seconds, e.g., 32KB in 10 seconds

  35. Validating capabilities • A data packet carries (N, T, timestamp, hash(pre-cap, N, T)) • Each router verifies the hash's correctness • Checks for expiration: expired if timestamp + T < now • Checks the byte bound: sent_pkts * pkt_len < N
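
  A hedged sketch of these checks in Python, with HMAC standing in for the keyed hash; the field encodings and 8-byte truncation are simplifications, not TVA's wire format:

      import hmac, hashlib, time

      def pre_capability(key, src, dst, ts):
          # router-stamped pre-capability: hash(src, dst, key, timestamp)
          return hmac.new(key, f"{src}|{dst}|{ts}".encode(), hashlib.sha1).digest()[:8]

      def capability(pre, n, t):
          # destination's fine-grained capability: hash(pre-cap, N, T)
          return hmac.new(pre, f"{n}|{t}".encode(), hashlib.sha1).digest()[:8]

      def router_permits(key, src, dst, ts, n, t, cap, bytes_sent):
          pre = pre_capability(key, src, dst, ts)
          return (hmac.compare_digest(cap, capability(pre, n, t))   # hash is correct
                  and time.time() < ts + t                          # not yet expired
                  and bytes_sent < n)                               # byte bound holds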

  36. Challenge #3: efficiently count with bounded state • Create counting state only for fast flows • Fast flows: capabilities claiming rate > N/T • A link with capacity C carries fewer than C / (N/T) fast flows • With min N/T = 3.2 Kbps, a 1 Gbps link needs at most 312,500 records • Implementation: expire state at rate N/T, reuse expired state
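
  The bound is just arithmetic; checking it against the slide's numbers:

      C = 1_000_000_000     # 1 Gbps link capacity
      min_rate = 3_200      # minimum claimed rate N/T = 3.2 Kbps
      print(C // min_rate)  # 312500 counter records suffice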

  37. Efficiency: bound the number of queues • [Flowchart: requests are queued on their most recent tags (path-identifier queues); regular packets have their capability validated, and go to per-destination queues if valid or to the low-priority legacy queue if not] • A destination keeps its own queue only if it receives faster than a threshold rate R • The tag space bounds the number of request queues • The number of destination queues is bounded by C/R

  38. Efficiency: reduce capabilities' per-packet overhead • A sender associates a nonce with a capability • Routers cache the nonce → <src,dst> mapping • Nonce found in the cache ⇒ permitted packet, with no full revalidation • Caveat: if the nonce is evicted, packets are treated as legacy and put on the slow path
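
  A minimal sketch of that fast path; the LRU structure and its capacity are assumptions, not TVA's actual cache design:

      from collections import OrderedDict

      class NonceCache:
          def __init__(self, capacity=1024):
              self.entries = OrderedDict()          # nonce -> (src, dst)
              self.capacity = capacity

          def insert(self, nonce, src, dst):
              self.entries[nonce] = (src, dst)
              if len(self.entries) > self.capacity:
                  self.entries.popitem(last=False)  # evict the oldest entry

          def permitted(self, nonce, src, dst):
              # hit => skip full capability validation; miss => slow/legacy path
              return self.entries.get(nonce) == (src, dst)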

  39. Other DoS defenses • Pushback filtering • Iteratively push traffic filters back toward the attack sources along the attack paths • [Mahajan et al. CCR '02, Ioannidis et al. NDSS '02, Argyraki USENIX '05] • Overlay filtering • Offline authenticators determine who can send • The overlay performs admission control using those authenticators • [Keromytis SIGCOMM '02, Andersen USITS '03]

  40. Conclusion • One must design protocols with DoS attacks in mind • The current Internet is ill-suited to coping with DDoS attacks • There are many good proposals for detection, mitigation, and prevention

  41. Project administrivia • Dec 10: in-class presentation/demo • Dec 11: CS department poster/demo • Dec 17: final report

  42. Project presentation • 8 groups • 10 min presentation + 5 min Q&A • A demo is preferred • 10 minutes → roughly 10 content slides

  43. System-based projects • (2) Explain the motivation • What is the problem you are tackling? • Why is it interesting or important? • What are the existing designs/systems? • Why are they not good enough? • (4) Explain your design • Give a strawman • Specify the key challenges • Explain your solutions • (4) Convince with results • Did your system tackle the stated problem? • Did your design prove to be essential?

  44. Measurement-based projects • Explain the goal • What problem? Why are similar studies not sufficient? • Your study will be useful for …? • Designing new protocols • Debunking old myths • Explain the measurement methodology • A list of experiments and hypotheses • How each experiment proves/disproves its hypothesis • Discuss results

  45. Bottom line • What is your proudest technical nugget? • Convince others it's cool • What did you learn from your project? • Share your lessons with others • A demo helps • Seeing is believing

  46. How can you do a good job? • Prepare, prepare, prepare • Give practice talks to non-group members • Did they understand your problem and solution? • Discuss your slides with me

  47. Project poster demo • Why do it? • Reach a wider audience; publicize your work • Get diverse feedback • Check out what others have done • Mingle with people • CS-department-wide: graphics, vision, etc. • Reuse your talk slides for the poster • A demo helps

  48. Project report • 8 pages maximum • Same flow as the talk, but with room for details • Email me by Dec 17 (Mon): • The PDF report • A bundle of your source code
