
Insider Attacker Detection


Presentation Transcript


  1. Insider Attacker Detection Presented by Fang Liu fliu@gwu.edu

  2. Outline • Introduction • Detection of Faulty Sensors • Detection of Routing Misbehaviors • A General Solution – Insider Attacker Detection in Wireless Sensor Networks

  3. Secure the Sensor Networks • Protecting confidentiality, integrity, and availability of the communications and computations • Sensor networks are vulnerable to security attacks due to the broadcast nature of transmission • Jamming, eavesdropping, etc. • Sensor nodes can be physically captured or destroyed • All information will be released if not tamper-resistant.

  4. Compromised Sensors • Sensors are vulnerable • Subject to physical attacks • Not tamper-resistant • Compromised nodes can launch insider attacks • False information • False readings, data alteration, etc. • Routing misbehaviors • Message negligence, selective forwarding, jamming, etc.

  5. Challenges in Detecting Insider Attackers • Compromised nodes know all the information! • Cannot be detected with classical cryptographic security mechanisms • Authentication, Integrity protection, etc • Difficult to study the normal/abnormal node activities • Dynamic attacks • No centralized server to perform analysis and correlation

  6. Existing Solutions • Detection of False Information • Detection of Routing Misbehaviors • Our Work – A General Solution to Insider Attacker Detection in Wireless Sensor Networks

  7. Detection of False Information • Detecting and tolerating false information inserted by • Faulty sensors • Compromised sensors • Methods: • Centralized solution: the base station collects the data and checks its correctness [Shen ICC’01, Koushanfar et al. Sensors’03] • Secure data aggregation [Cao et al. Mobihoc’06] • Fault-tolerant event detection: disambiguate events from noise-related errors and faulty sensors • 0/1 predicate • Comparison with neighborhood activities [Cheng et al. Infocom’05]

  8. Detection of Routing Misbehaviors • Routing misbehaviors: • Selective forwarding, packet dropping, etc. • One contemporary solution: • Forward packets only through nodes that share an a priori trust relationship. But: • It requires key distribution • Trusted nodes may still be overloaded, broken, or compromised • Untrusted nodes may be well behaved

  9. Detection of Routing Misbehaviors • Method: • Detect with the help of base station • “Location-centric Isolation of Misbehavior and Trust Routing in Energy-constrained Sensor Networks” • Detect by monitoring the neighborhood • “Mitigating Routing Misbehavior in Mobile Ad Hoc Networks”

  10. Location-centric Isolation of Misbehavior and Trust Routing in Energy-constrained Sensor Networks • Misbehavior model • Dropping of queries and data packets • Assumes the availability of location information and the ability to perform geographic routing • Main procedure • Base stations send marked packets to probe sensors and rely on the responses to identify and isolate insecure locations • Sensors route packets to trusted neighbors

  11. TRANS Components • Authentication • Periodic beaconing

  12. Trust Routing Protocol • Send packets only toward trusted neighbors • Trust table based security mechanism

  13. TRANS Scenario

  14. TRANS Scenario

  15. Isolating Insecure Location (1/2) • Finding the malicious node (probing; a sketch of the binary strategy follows below) • E-TTL • Send probe packets with increasing hop count • Binary • Send probe packets in a binary-search fashion • One-Shot • Send a probe packet along the path and each node replies with its location
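
A minimal sketch of the binary probing strategy, assuming the sink knows the route toward the unresponsive sensor and has a probe(node_id) primitive that returns whether a reply arrives before a timeout (both names are illustrative, not part of the paper):

```python
# Hypothetical sketch of the "Binary" probing strategy: the sink binary-searches
# along a known route to locate the first hop whose probe goes unanswered.
# `probe(node_id)` is an assumed primitive, not part of any real TRANS implementation.

def locate_misbehaving_hop(route, probe):
    """route: list of node ids from the sink toward the destination."""
    lo, hi = 0, len(route) - 1          # hops still under suspicion
    first_bad = None
    while lo <= hi:
        mid = (lo + hi) // 2
        if probe(route[mid]):           # reply received: nodes up to mid forward correctly
            lo = mid + 1
        else:                           # no reply: the faulty hop is at mid or earlier
            first_bad = route[mid]
            hi = mid - 1
    return first_bad                    # None if every probe was answered
```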

  16. Isolating Insecure Location (2/2) • Isolating method • The sink finds the misbehaving node and generates a black list • Black List Geocast • Broadcast the black list • Remove isolated nodes from the neighbor list • Broadcasting overhead • Embedded Black List • Embed the black list in the packet header • Detour around isolated nodes using geographic routing

  17. Summary: Location-centric Isolation of Misbehavior and Trust Routing in Energy-constrained Sensor Networks • Routing misbehavior detection and isolation • Centralized detection • Isolating misbehaving nodes using a black list • Trust routing protocol design • Trust evaluation may not work for insider attackers • Based on authentication

  18. Mitigating Routing Misbehavior in Mobile Ad Hoc Networks • Ad hoc networks maximize total network throughput by using all available nodes for routing and forwarding • A node may misbehave by agreeing to forward a packet and then failing to do so because it is • Overloaded, selfish, malicious, or broken • A few misbehaving nodes can have a severe impact

  19. Proposed Solutions • Install extra facilities in the network to detect and mitigate routing misbehavior. • Make only minimal changes to the underlying routing algorithm. • Two extensions to DSR - “Watchdog” and “Pathrater” • Watchdog identifies misbehaving nodes by overhearing transmissions • Pathrater avoids routing packets through these nodes

  20. Assumptions • Some assumptions are: • Links between the nodes are bi-directional • Nodes operate in promiscuous mode • Malicious nodes do not work in groups

  21. Watchdog • The watchdog is implemented by maintaining a buffer of recently sent packets • Each overheard packet is matched against the packets in the buffer • In case of a match, the packet in the buffer is removed • By overhearing, tampering with the payload or header can also be detected • If, however, a packet remains in the buffer for longer than a certain timeout • The watchdog increases the failure tally for the node responsible for forwarding the packet • If the tally exceeds the threshold value, it determines that the node is misbehaving (a sketch of this bookkeeping follows below)
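
A minimal sketch of the watchdog bookkeeping described above, under the assumption of promiscuous-mode overhearing; the constants and the notify_source callback are illustrative, not taken from the paper:

```python
# Minimal sketch of the watchdog: buffer sent packets, clear them when the
# neighbor's retransmission is overheard, and tally timeouts per next hop.
import time
from collections import defaultdict

WATCHDOG_TIMEOUT = 2.0       # seconds a packet may sit unconfirmed (assumed value)
FAILURE_THRESHOLD = 5        # tally above which a neighbor is flagged (assumed value)

class Watchdog:
    def __init__(self):
        self.buffer = {}                     # packet_id -> (next_hop, time sent)
        self.failure_tally = defaultdict(int)

    def packet_sent(self, packet_id, next_hop):
        self.buffer[packet_id] = (next_hop, time.time())

    def packet_overheard(self, packet_id):
        # Overheard the neighbor forwarding the packet: remove it from the buffer.
        self.buffer.pop(packet_id, None)

    def check_timeouts(self, notify_source):
        now = time.time()
        for packet_id, (next_hop, sent_at) in list(self.buffer.items()):
            if now - sent_at > WATCHDOG_TIMEOUT:
                del self.buffer[packet_id]
                self.failure_tally[next_hop] += 1
                if self.failure_tally[next_hop] > FAILURE_THRESHOLD:
                    notify_source(next_hop)   # report the misbehaving next hop
```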

  22. Watchdog (Contd) • Advantages • It can detect misbehavior at the forwarding level • Disadvantages are • Might not detect packet drops due to collisions • Ambiguous collisions • Receiver collisions • Limited transmission power • Others

  23. Ambiguous Collisions • The ambiguous collision problem prevents node A from overhearing transmissions from B (Figure: A cannot overhear B forwarding Packet #1)

  24. Limited Transmission Power • A misbehaving node can control its transmission power to circumvent the watchdog (Figure: A cannot overhear B forwarding Packet #1)

  25. False Misbehavior • Nodes may falsely report other nodes as misbehaving • A reports that B is not forwarding packets when in fact it is (Figure: A applies "Failure Tally++; if (Failure Tally > Threshold) notify source;" against B)

  26. Collusion • Multiple nodes in collusion can mount a more sophisticated attack • A forwards to B but does not report when B drops the packet

  27. Partial Dropping • A node can circumvent the watchdog by dropping packets at a rate lower than the watchdog's configured misbehavior threshold (Figure: B forwards Packet #1 but drops Packet #2, keeping A's tally below the threshold in "Failure Tally++; if (Failure Tally > Threshold) notify source;")

  28. Pathrater • Each node maintains a rating for every other node it knows about in the network • The path metric is the average of the node ratings along the path (see the sketch below) • The metric gives a comparison of the overall reliability of different paths • If there are multiple paths to the same destination, the path with the highest metric is chosen
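
A minimal sketch of the pathrater's path selection under the rule above; the rating table and default rating are illustrative only:

```python
# The path metric is the average of the node ratings along the path; the
# highest-metric path to the destination is chosen.

def path_metric(path, ratings, default_rating=0.5):
    """Average rating of the nodes on a path."""
    return sum(ratings.get(node, default_rating) for node in path) / len(path)

def choose_path(paths, ratings):
    """Among multiple paths to the same destination, pick the highest metric."""
    return max(paths, key=lambda p: path_metric(p, ratings))

# Example with hypothetical ratings:
ratings = {"A": 0.8, "B": 0.2, "C": 0.9, "D": 0.7}
paths = [["A", "B"], ["C", "D"]]
print(choose_path(paths, ratings))      # -> ['C', 'D']
```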

  29. Summary: Mitigating Routing Misbehavior in Mobile Ad Hoc Networks • Enables nodes to avoid malicious nodes (overloaded, malicious, selfish, broken) in their routes • Watchdog – identifies misbehaving nodes by listening to the next node's transmission • Pathrater – helps routing protocols avoid these nodes • Allows nodes to use better paths and thus increase their throughput • The watchdog determines a malicious node through threshold comparison • How the threshold value is calculated is one of the important factors in detecting malicious nodes

  30. A Framework for Identifying Compromised Nodes in Sensor Networks • Identifying compromised nodes? • Use the alert information! • But, compromised nodes may … • Raise false alerts • Form a local majority and collude • Behave arbitrarily • An application-independent framework to identify compromised nodes based on alert reasoning

  31. Assumptions • Application-specific detection mechanisms • Beacon probing, watchdog … • Static sensor networks • Fixed observability relationship • Message confidentiality and integrity • Secure comm. with base stations • Trustable base stations • Centralized

  32. An Example • The base station should: • Know the monitoring relationship • Consider the possibility of false alerts • Probe beacon nodes regularly (Figure: the sensor network and its observability graph, with sensor and beacon nodes)

  33. The Framework • Sensor behavior model: • Reliability r_m: the percentage of normal activities conducted by an uncompromised node • Observer model: • Observability rate r_b: s1 may not observe each activity of s2 • Positive accuracy r_p: s1 may not detect an abnormal activity of s2 • Negative accuracy r_n: s1 raises an alert against s2 even though s2 is normal • Security estimation K • The maximum # of compromised nodes under which the network can still work

  34. Identification of compromised nodes • Step 1: Label alerts as abnormal/normal • Observe the alert pattern • Get the expected # of alerts R_ij(t) raised by s_i against s_j, assuming both are uncompromised • Compare with the actual # of alerts • If actual > expected: abnormal; otherwise: normal • Notation: Pb(alert: i against j) is determined by the observability rate, reliability, positive accuracy, and negative accuracy; f_j(x): the distribution of the # of events that can be sensed by j; R_ij(t): expected # of alerts raised by i against j when both i and j are uncompromised
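
The slide defines the parameters but not how they combine, so the following is a hedged reconstruction of R_ij(t): each event sensed by s_j is observed by s_i with probability r_b, and an observed event triggers an alert with probability r_n if it was a normal activity (probability r_m) or r_p if it was abnormal (probability 1 − r_m):

```python
# Hedged reconstruction of R_ij(t): expected number of alerts i raises against j
# over window t when both nodes are uncompromised. The way the four rates are
# combined below is an assumption; the slides define the rates but not the formula.

def per_event_alert_prob(r_b, r_m, r_p, r_n):
    # i must observe the event (r_b); the event is either one of j's normal
    # activities (prob r_m, falsely flagged with prob r_n) or an abnormal one
    # (prob 1 - r_m, correctly flagged with prob r_p).
    return r_b * (r_m * r_n + (1.0 - r_m) * r_p)

def expected_alerts(expected_events_in_t, r_b, r_m, r_p, r_n):
    """R_ij(t) = E[# events sensed by j in t] * per-event alert probability."""
    return expected_events_in_t * per_event_alert_prob(r_b, r_m, r_p, r_n)

# Example with illustrative parameter values only:
print(expected_alerts(100, r_b=0.9, r_m=0.95, r_p=0.8, r_n=0.05))
```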

  35. Identification of compromised nodes • Step 2: Derive suspicious node pairs • Labelled observability graph G'(V, Ea + En), with abnormal edges Ea and normal edges En • Nodes si and sj form a suspicious pair if: • (si,sj) or (sj,si) is in Ea, or • there exists a node s' such that (si,s') is in Ea and (sj,s') is in En, or (si,s') is in En and (sj,s') is in Ea • At least one node of each suspicious pair is compromised!

  36. Identification of compromised nodes • Step 3: Find the compromised nodes • Definition: valid assignment • CompromisedCore: the common nodes across all possible valid assignments • Identifies the largest set of truly compromised nodes without raising false alarms

  37. Alert reasoning algorithm • Lemma 3.1: Given an inferred graph I(V,E), let V_I be a minimum vertex cover of I. Then the number of compromised nodes is no less than |V_I|. • Theorem 3.1: Given an inferred graph I and a security estimation K, for any node s in I, s is in CompromisedCore(I;K) if and only if |N_s| + C_{I'_s} > K, where C_{I'_s} is the minimum vertex cover size of the residual graph I'_s • Computing a minimum vertex cover is NP-complete

  38. Alert reasoning algorithm • Corollary 3.1: Given an inferred graph I and a security estimation K, for any node s in I, if |N_s| + M_{I'_s} > K, then s is in CompromisedCore(I;K). • A maximal matching bounds the minimum vertex cover: M_G ≤ C_G ≤ 2M_G • Hence |N_s| + C_{I'_s} ≥ |N_s| + M_{I'_s} > K • Maximal matching is computable in polynomial time (a sketch of this test follows below)
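
A minimal sketch of the polynomial-time test from Corollary 3.1, assuming I'_s denotes the inferred graph with s and its neighborhood removed (the slide does not spell this out) and using networkx's greedy maximal matching:

```python
# Hedged sketch of the sufficient test |N_s| + M_{I'_s} > K from Corollary 3.1.
# Assumption: I'_s is the inferred graph with s and its neighborhood removed.
import networkx as nx

def in_compromised_core(I, s, K):
    neighbors = set(I.neighbors(s))
    residual = I.copy()
    residual.remove_nodes_from(neighbors | {s})     # build I'_s
    M = len(nx.maximal_matching(residual))          # greedy maximal matching size
    return len(neighbors) + M > K                   # sufficient condition only

# Example on a toy inferred graph (edges = suspicious pairs):
I = nx.Graph([("a", "b"), ("a", "c"), ("d", "e")])
print(in_compromised_core(I, "a", K=1))             # -> True
```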

  39. Simulation • Compared schemes: • General+mm: the general AppCompromisedCore algorithm + maximum matching • EigenRep and PeerTrust: reputation-based trust functions for P2P systems • Majority voting

  40. Impact of the concentration of compromised nodes

  41. Summary: A Framework for Identifying Compromised Nodes in Sensor Networks • Detection algorithm with maximum accuracy without false alarms • Effective with local majority • However, • A priori knowledge about: • sensor behavior model • observer model: the accuracy of the alert • Observability rate, positive accuracy, negative accuracy, etc. • Centralized: the base station does the detection!

  42. The Common Methodology • Suspicious behavior detection • Watchdog-based, 0/1 predicate • Information collection • Localized/centralized • Diagnosis and notification of the detection result • Reputation evaluation, threshold comparison, etc.  Requires application-specific knowledge!

  43. Our Work – A General Solution to Insider Attacker Detection • Insider attackers • Compromised nodes under the control of the adversary • Data alteration, Message negligence, Selective forwarding, etc • Challenges: • The insider attacker knows all the secret information! • The detection scheme must be efficient, flexible, and localized • Cannot use cryptography-based techniques • Localized statistical analysis?

  44. The Basic Idea • Observation: neighboring nodes exhibit similar networking behaviors • Can insider attackers be detected with a lightweight, flexible, and localized algorithm? • Measure the networking behaviors of neighboring nodes • E.g. packet dropping rate, packet sending rate, forwarding delay, etc. • Detect whether any abnormal activities exist  Exploit the spatial correlation among neighboring sensors!

  45. The Basic Algorithm • Information Collection • Node x gets f(x_i) for each neighbor x_i in N(x) • Outlier Detection • Assume f(x_i) ~ N_q(μ, Σ); then the Mahalanobis squared distance d²(x_i) = (f(x_i) − μ)^T Σ^{-1} (f(x_i) − μ) ~ χ²_q, so Prob(d²(x_i) > χ²_q(α)) = α • x_i could be an outlier if d²(x_i) is sufficiently large (see the sketch below) • Majority Vote
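
A minimal sketch of the outlier test, assuming each neighbor's behavior is summarized as a q-dimensional feature vector and that (μ, Σ) for normal sensors are given (the next slide discusses estimating them robustly); the feature names and parameter values below are illustrative only:

```python
# Mahalanobis-distance outlier test against a chi-square threshold.
import numpy as np
from scipy.stats import chi2

def flag_outliers(features, mu, sigma, alpha=0.01):
    """features: (n, q) array, one row per neighbor. Returns a boolean mask."""
    sigma_inv = np.linalg.inv(sigma)
    diffs = features - mu
    # d^2(x_i) = (f(x_i) - mu)^T Sigma^{-1} (f(x_i) - mu)
    d2 = np.einsum("ij,jk,ik->i", diffs, sigma_inv, diffs)
    threshold = chi2.ppf(1.0 - alpha, df=features.shape[1])   # chi^2_q(alpha)
    return d2 > threshold

# Example with assumed parameters: dropping rate, sending rate, forwarding delay.
mu = np.array([0.05, 10.0, 3.0])
sigma = np.diag([0.01**2, 0.5**2, 0.2**2])
f = np.array([[0.05, 10.2, 3.1],
              [0.04, 9.8, 2.9],
              [0.60, 4.0, 9.0]])       # last neighbor drops most packets
print(flag_outliers(f, mu, sigma))     # -> [False False  True]
```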

  46. Two Extensions • How to estimate (μ, Σ) from the data set {f(x_i)} in the presence of outliers? • If f(x_i) ~ N_q(μ, Σ), then d²(x_i) = (f(x_i) − μ)^T Σ^{-1} (f(x_i) − μ) ~ χ²_q • (μ, Σ) describes the population of normal sensors • Cannot use the sample mean and sample covariance to estimate (μ, Σ)  Robust statistics: Orthogonalized Gnanadesikan-Kettenring (OGK) estimator (a simplified sketch follows below)
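
As a rough illustration of the robust-statistics idea (not the full OGK estimator: the orthogonalization step is omitted), location can be taken as the coordinatewise median and scatter built from MAD scales via the Gnanadesikan-Kettenring pairwise identity:

```python
# Simplified robust (mu, Sigma) sketch: median location, MAD scales, and the
# Gnanadesikan-Kettenring pairwise covariance. Without the orthogonalization
# step of OGK, the resulting matrix is not guaranteed positive definite.
import numpy as np

def mad_scale(v):
    # Median absolute deviation, scaled for consistency with the normal std dev.
    return 1.4826 * np.median(np.abs(v - np.median(v)))

def robust_location_scatter(X):
    """X: (n, q) data possibly containing outliers. Returns (mu, Sigma)."""
    n, q = X.shape
    mu = np.median(X, axis=0)
    sigma = np.array([mad_scale(X[:, j]) for j in range(q)])
    S = np.empty((q, q))
    for j in range(q):
        for k in range(q):
            if j == k:
                S[j, k] = sigma[j] ** 2
            else:
                # GK identity: cov(u, v) = (scale(u+v)^2 - scale(u-v)^2) / 4,
                # applied to variables standardized by their robust scales.
                u = X[:, j] / sigma[j]
                v = X[:, k] / sigma[k]
                S[j, k] = sigma[j] * sigma[k] * (
                    mad_scale(u + v) ** 2 - mad_scale(u - v) ** 2
                ) / 4.0
    return mu, S
```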

  47. Two Extensions • For a sparse network, information is collected from the multi-hop neighborhood, into which false data may be inserted  Trust-based false information filtering (Figure: A receives reports about B via different relays, e.g. B: (20,30,39), B: (21,42,39), B: (18,31,37))

  48. Trust-based false information filtering • Sensor A should select a reliable relay node (D or F?) based on its own observations • A's monitoring results are standardized ((y − μ)/σ), the maximum standardized deviation per node is taken (x), and the trust value is min/x:
     • C: (19,32,40) → (0.83,0.63,0.15) → x = 0.83 → trust 1
     • D: (22,11,42) → (1.17,1.49,1.31) → x = 1.49 → trust 0.56
     • E: (21,29,38) → (0.50,0.33,1.02) → x = 1.02 → trust 0.81
     • F: (19,31,39) → (0.83,0.53,0.44) → x = 0.83 → trust 1
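
A minimal sketch of the trust computation shown above, using the slide's report vectors; μ and σ here are assumed values for A's own observation statistics, so the resulting trust values are only illustrative:

```python
# Standardize each relay's report against A's own statistics, take the worst
# (largest) standardized deviation per relay, and scale so the best relay gets 1.
import numpy as np

def trust_values(reports, mu, sigma):
    """reports: dict of relay -> observation vector. Returns relay -> trust in (0,1]."""
    deviation = {
        relay: np.max(np.abs((np.asarray(obs) - mu) / sigma))  # worst standardized deviation
        for relay, obs in reports.items()
    }
    best = min(deviation.values())
    return {relay: best / d for relay, d in deviation.items()}  # trust = min / x

# A's monitoring results from the slide; mu/sigma are illustrative assumptions.
reports = {"C": [19, 32, 40], "D": [22, 11, 42], "E": [21, 29, 38], "F": [19, 31, 39]}
mu = np.array([20.0, 30.5, 38.5])
sigma = np.array([1.2, 13.1, 3.5])
print(trust_values(reports, mu, sigma))
```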

  49. Performance Evaluation (1/3) • Evaluation metrics • Detection accuracy • False alarm rate • D: identified outliers, O: real outliers • Simulation settings • Sparse or dense networks • Compromised relay nodes

  50. Performance Evaluation (2/3) • Dense networks
