  1. Insider Attacker Detection in Wireless Sensor Networks Presentation by: Robert Stewart Sirak Gizaw Original material courtesy of Liu, Cheng, and Chen. http://www.cspri.seas.gwu.edu/Mazz.%20Papers/Insider-INFOCOM2007.pdf

  2. Introduction
  • Security provisioning is a critical requirement for many sensor network applications
  • The constrained capabilities of smart sensors and the harsh deployment environment make this problem difficult
  • Most existing work relies on traditional cryptography and authentication techniques to establish a trustworthy relationship among the collaborating sensors
  • Unreliable wireless channels and unattended operation make it easy to compromise sensors and break that trust
  • Compromising a sensor gives the attacker all of its security information
  • This allows the attacker to launch insider attacks

  3. Insider Attacks
  • Data alteration, message negligence, selective forwarding, jamming, etc.
  • An insider can fabricate false events to mislead decision makers, or keep injecting bogus data to cause network outages
  • Cannot be solved by classical cryptography alone, so an insider attacker detection scheme is needed

  4. Insider Attack Detection
  • Cannot reuse intrusion detection techniques from fixed wired networks
  • A typical low-cost sensor has limited memory and restricted computational power, making it impossible to study a detection log to identify internal attacks
  • The base station cannot collect audit data from the network due to the network size and lack of infrastructure
  • Instead, the correlation among neighboring sensors can be used to detect insider attacks
  • When a significant change takes place in the networking behavior of a single sensor, that sensor is likely to be faulty or malicious

  5. Our Solution
  • Each sensor monitors the networking behavior of its immediate neighbors, tracking multiple aspects of node behavior
  • In sparse networks, each sensor may also use the monitoring results of its neighbors’ neighbors for reference, with the data source selected by a trust-based node evaluation scheme
  • A neighbor is suspected to be an internal adversary if its behavior is “extreme” compared with that of others in the same neighborhood
  • The final decision is based on detection results from the neighborhood, combined through a majority vote

  6. Comparison to Other Solutions
  • Explores spatial correlation in neighborhood activities and requires no prior knowledge about normal or malicious sensors
  • Is generic: can monitor many aspects of sensor networking behavior
  • Compared with a 0/1 decision predicate, our algorithm is more precise and more robust, since the original measurements are used without any second-round approximation
  • Is localized: information exchange is restricted to a limited neighborhood. Achieves high detection accuracy with low false alarm rates for up to 25% of nodes misbehaving

  7. Network Model and Assumptions
  • Network:
  - Homogeneous sensor network with N sensors
  - Region: a square field located in the two-dimensional Euclidean plane R2
  • Nodes:
  - All have bidirectional links and the same capabilities
  - Nodes in proximity are burdened with similar workloads, and thus are expected to behave similarly under normal conditions
  - An insider attacker, or outlier, is a sensor under the control of an adversary: it has the same resources as normal sensors but behaves differently
  - Each sensor intermittently listens on the channel for its neighbors' activities
  - Activities are monitored among neighbors, and each node's activities are modeled by a q-component attribute vector
  - q different attributes, e.g. number of packets dropped or broadcast

  8. Localized Insider Attacker Detection – Algorithm
  • Attribute representation and properties:
  - For node x, the attribute vector of its neighbor xi is
  - f(xi) = (f1(xi), f2(xi), …, fq(xi))T, where fj(xi) is the actual monitoring result in one aspect
  - Each component can be continuous or discrete
  - Within a localized area, the attribute vectors f(xi) of all normal nodes follow the same multivariate normal distribution
  • Four phases are involved in detecting insider attackers whose behavior is “abnormal” with respect to normal sensors:
  - Collecting 'local' information
  - Filtering the collected data
  - Identifying initial outliers
  - Majority vote to obtain the final list of outliers

  9. A. Information Collection
  • Let x be a member of the network, and let N1(x) denote a bounded closed set of R2 that contains the nodes being monitored by x
  • N(x) denotes another closed set of R2 that contains x and its n-1 nearest sensors
  • N(x) can be seen as a neighborhood and N1(x) as the one-hop neighbors of x
  • N1(x) ⊆ N(x)
  • N1(x) is the set of nodes x monitors (one-hop neighbors), whereas N(x) is the neighborhood where x broadcasts the observed data
  • For a dense network, we can choose N(x) = N1(x), while for sparse networks, N(x) may include two-hop neighbors of x
  • Sensor x also obtains a set F(x) of attribute vectors:
  • F(x) = { f(xi) = (f1(xi), f2(xi), …, fq(xi))T | xi Є N(x) }
  • Evaluation metric examples, for q = 4:
  - Packet dropping: the monitoring sensor forwards a packet to the neighbor, then 'overhears' whether the receiver forwards it. The sender maintains a buffer of recently sent packets and matches them with the overheard packets. The buffer is cleared over time, and for messages with no match the neighbor's attribute is updated
  - Packet sending rate: among the overheard packets, count the number of packets sent in one unit of time
  - Forwarding delay time: measure the difference between the time a packet is sent and the time the neighbor is overheard forwarding it
  - Sensor readings: information received from the monitored nodes about other nodes
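
The buffer-and-match scheme for the packet-dropping and forwarding-delay metrics can be sketched as a small monitor. This is an illustrative sketch only; the class, method names, and timeout value are assumptions, not from the paper:

```python
class NeighborMonitor:
    """Sketch of the packet-dropping metric: buffer packets forwarded to a
    neighbor, match them against overheard forwards, and count unmatched
    entries as drops once they expire. All names here are illustrative."""

    def __init__(self, timeout=2.0):
        self.timeout = timeout   # assumed expiry window for buffered packets
        self.pending = {}        # packet_id -> time it was sent to the neighbor
        self.dropped = 0         # attribute component: packets never forwarded
        self.delays = []         # attribute component: forwarding delay times

    def sent_to_neighbor(self, packet_id, now):
        # Remember a packet handed to the monitored neighbor.
        self.pending[packet_id] = now

    def overheard_forward(self, packet_id, now):
        # The neighbor was overheard forwarding: record the delay, clear the entry.
        sent_at = self.pending.pop(packet_id, None)
        if sent_at is not None:
            self.delays.append(now - sent_at)

    def expire(self, now):
        # Packets with no matching overheard forward count as drops.
        for pid, sent_at in list(self.pending.items()):
            if now - sent_at > self.timeout:
                del self.pending[pid]
                self.dropped += 1
```

For example, a packet overheard 0.3 time units after it was sent contributes 0.3 to the delay attribute, while a packet that is never overheard increments the drop counter once its buffer entry expires.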

  10. B. Information Filtering
  • Need for filtering: when N1(x) ⊂ N(x)
  • x broadcasts its monitoring results within N(x), which contains more nodes than those it monitors directly
  • This means every node in N(x), including x, will receive monitoring results about nodes to which it is not directly connected
  • If an outlier exists in N1(x), it may forward a false vector about its neighbor in N(x) - N1(x)
  • Proposed solution: Trust-Based False Information Filtering Protocol
  • Based on direct monitoring results, x assigns a trust value to each neighbor in N1(x)
  • T(xi) Є [0,1]; values closer to 1 indicate a higher probability that xi is normal
  • T(xi) is computed according to the degree of xi's deviation from the neighborhood activities:
  - Attribute vectors of N1(x): F1(x) = { f(xi) = (f1(xi), f2(xi), …, fq(xi))T | xi Є N1(x) }
  - Sensor x computes the sample mean μj and standard deviation σj of each attribute component in F1(x), i.e. fj(xi)
  - Using these values, the sensor standardizes each data set F1,j(x) (1 ≤ j ≤ q, for each attribute component)
  - It then takes absolute values to obtain F’1,j(x) = { f’j(xi) | xi Є N1(x) }
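
The standardization step above can be sketched for one attribute component; a minimal illustration using Python's statistics module, with a hypothetical function name:

```python
from statistics import mean, pstdev

def standardize_abs(values):
    """Standardize one attribute component over N1(x) and take absolute
    values, giving f'_j(x_i): each node's deviation from the neighborhood
    mean, measured in standard deviations."""
    mu = mean(values)
    sigma = pstdev(values)
    if sigma == 0:
        # All neighbors reported the same value: no deviation to measure.
        return [0.0 for _ in values]
    return [abs((v - mu) / sigma) for v in values]
```

A node whose reading sits far from the neighborhood mean ends up with a large f'_j value, which feeds into the "extremeness" score used for trust assignment on the next slide.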

  11. Trust-Based False Information Filtering Protocol (continued)
  • For each neighbor, x computes the maximum attribute component f’M(xi) = max{ f’j(xi) | 1 ≤ j ≤ q }, indicating the “extremeness” of xi's deviation
  • The trust value is then computed as:
  • T(xi) = f’mM / f’M(xi), where
  • f’mM = min { f’M(xi) | xi Є N1(x) } (the maximum attribute component value of the least extreme neighbor)
  • x may receive t different attribute vectors regarding a sensor xj Є N(x) - N1(x) from t direct neighbors between x and xj
  • A node xi between x and xj is said to be a reliable relay node for xj if
  • T(xi) = max { T(xs) | xs between x and xj } and
  • Tmin ≤ T(xi), where Tmin = f’mM / 2
  • Using this scheme, sensor x filters out all attribute vectors received about nodes for which it has no reliable relay node
  • From this point, node x only considers information from direct neighbors and from nodes reachable through a reliable relay node
  • We call the set containing these nodes N’(x), and the filtered information set F’(x)
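
The trust computation and the reliable-relay test can be sketched as follows; the data-structure choices (dicts keyed by neighbor id) are assumptions for illustration:

```python
def trust_values(fprime):
    """Compute T(x_i) = f'_mM / f'_M(x_i) from the standardized absolute
    attribute vectors of x's direct neighbors. `fprime` maps a neighbor id
    to its list of f'_j values (1 <= j <= q)."""
    f_M = {xi: max(v) for xi, v in fprime.items()}  # per-neighbor extremeness
    f_mM = min(f_M.values())                        # least extreme neighbor's score
    return {xi: (f_mM / m if m > 0 else 1.0) for xi, m in f_M.items()}

def reliable_relay(trust, relays, t_min):
    """Among candidate relay nodes toward some x_j in N(x) - N1(x), pick the
    most trusted one; accept it only if its trust clears T_min = f'_mM / 2.
    Returns the relay id, or None if no candidate is reliable."""
    best = max(relays, key=lambda xi: trust[xi])
    return best if trust[best] >= t_min else None
```

With this in place, x keeps a reported attribute vector about a two-hop node only when it arrived through a relay whose trust value passes the threshold.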

  12. C. Outlier Detection
  • Detection is done by studying the filtered data
  • Compute the distance between each sensor in N’(x) and the “center” of the data set F’(x)
  • A sensor is said to be an outlier if the distance is larger than a predefined threshold θ
  • From our assumption about the workload distribution, all f(xi) (for xi Є N’(x)) form a sample of a multivariate normal distribution with mean vector μ and variance-covariance matrix ∑
  • The Mahalanobis squared distance is used to determine the nodes with the highest deviation and eventually classify them as insider attackers
  • The squared distance is computed by: d2(xi) = (f(xi)-μ)T ∑-1 (f(xi)-μ)
  • If d2(xi) > θ, then xi is classified as an outlier
  • Simple estimators of μ and ∑ fail to account for the outliers' own contribution to the sample
  • The Orthogonalized Gnanadesikan-Kettenring (OGK) estimators are used instead
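
The distance test can be sketched in a few lines. Note the simplifications: this sketch uses plain sample estimates with a diagonal covariance approximation (attributes treated as independent), not the robust OGK estimators the paper actually uses; the function names and threshold are illustrative:

```python
from statistics import mean, pvariance

def mahalanobis_sq_diag(vectors):
    """Squared Mahalanobis-style distance of each q-component attribute
    vector from the sample mean, under a diagonal covariance approximation.
    A simplified stand-in for d^2(x_i) = (f - mu)^T Sigma^-1 (f - mu)."""
    q = len(vectors[0])
    mu = [mean(v[j] for v in vectors) for j in range(q)]
    # Per-component variance; guard against zero variance with a 1.0 fallback.
    var = [pvariance([v[j] for v in vectors], mu[j]) or 1.0 for j in range(q)]
    return [sum((v[j] - mu[j]) ** 2 / var[j] for j in range(q)) for v in vectors]

def detect_outliers(node_ids, vectors, theta):
    """Classify x_i as an outlier if d^2(x_i) > theta."""
    d2 = mahalanobis_sq_diag(vectors)
    return [xi for xi, d in zip(node_ids, d2) if d > theta]
```

A full implementation would invert the estimated variance-covariance matrix so that correlated attributes are weighted jointly, which is exactly where the choice of robust estimators matters.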

  13. D. Majority Vote
  • Each sensor announces D(x): a set of outlying neighbors
  • Broadcast to a bigger set N*(x) ⊇ N(x) ⊇ N1(x)
  • This involves more sensors in the voting
  • Sensor x receives the announcements and records all votes regarding its neighbors
  • For a neighbor xi, x counts the proportion pi of all received announcements claiming that xi is an outlier
  • If pi > 0.5, then xi is declared an insider attacker
  • This identification, together with the routing protocol, aids in selecting a reliable next-hop forwarder
  - The broadcast message contains an entry for every node in N’(x), with a 1/0 flag
  • The majority vote combines decisions from multiple sources to accurately determine the malicious sensors
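
The vote-counting rule can be sketched directly; representing each announcement D(x) as a set of flagged node ids is an assumption for illustration:

```python
def majority_vote(announcements, neighbors):
    """Combine outlier announcements broadcast within N*(x): a neighbor x_i
    is declared an insider attacker if the proportion p_i of received
    announcements flagging it exceeds 0.5.

    announcements: list of sets, each the D(x) of one announcing sensor.
    neighbors: ids of the neighbors x is deciding about."""
    verdict = []
    for xi in neighbors:
        votes = sum(1 for d in announcements if xi in d)
        if votes / len(announcements) > 0.5:  # strict majority, p_i > 0.5
            verdict.append(xi)
    return verdict
```

For example, with three announcements {a}, {a, b}, and {c}, only node a clears the p_i > 0.5 bar (2 of 3 votes) and is declared an insider attacker.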

  14. Simulation Analysis
  • Setting:
  - Network: 64 x 64 square region
  - N = 4096 sensors uniformly distributed in the region
  - Behavior modeled by a q-component (q = 3) attribute vector
  - Two cases: dense and sparse networks
  - N1(x): the set of x’s one-hop neighbors
  - N(x): x’s two-hop neighborhood (which includes N1(x)) for the sparse network; N(x) = N1(x) for the dense network
  • Evaluation metrics:
  - Detection accuracy: ratio of the number of outliers detected to the total number of attackers
  - False alarm rate: ratio of the number of sensors identified as insider attackers but actually normal, to the total number of normal sensors
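
The two evaluation metrics translate directly into code; the set-based interface here is an illustrative choice:

```python
def evaluation_metrics(flagged, attackers, normals):
    """Detection accuracy and false alarm rate as defined on the slide.

    flagged: set of sensor ids the algorithm labeled as insider attackers.
    attackers: set of true insider attackers.
    normals: set of truly normal sensors."""
    detection_accuracy = len(flagged & attackers) / len(attackers)
    false_alarm = len(flagged & normals) / len(normals)
    return detection_accuracy, false_alarm
```

So flagging sensors {1, 2, 5} when the true attackers are {1, 2, 3} and the normal sensors are {4, 5, 6, 7} gives a detection accuracy of 2/3 and a false alarm rate of 1/4.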

  15. Results
  • The first row shows results for detection accuracy; the second row shows the false alarm rate
  • Each column represents results obtained by using a different correlation coefficient matrix to define the variance-covariance matrix ∑ (in d2(xi) = (f(xi)-μ)T ∑-1 (f(xi)-μ))
  • ‘d’ in the graph represents the number of direct neighbors (network density)

  16. Conclusion
  • By exploiting the spatial correlation among the networking behaviors of sensors in close proximity, we can achieve high detection accuracy and a low false alarm rate
  • The algorithm requires no prior knowledge about normal or malicious sensors
  • It can be employed to inspect any aspect of networking activity, with multiple attributes evaluated simultaneously
  • The algorithm can be specialized by exploring the degree of correlation among different aspects of sensor networking behavior
