An Iterative Algorithm for Trust Management and Adversary Detection for Delay-Tolerant Networks



  1. An Iterative Algorithm for Trust Management and Adversary Detection for Delay-Tolerant Networks. Department of Computer Science, Virginia Polytechnic Institute and State University, Northern Virginia Center, USA. Authors: Erman Ayday, Faramarz Fekri. Presented by: Mehmet Saglam

  2. Outline • Introduction • Iterative Trust and Reputation Management Mechanism (ITRM) • Trust Management and Adversary Detection in DTNs • Conclusion

  3. Introduction • Delay Tolerant Networks (DTNs) • Sparseness and delay are particularly high • Characterized by intermittent contacts between nodes, leading to a space-time evolution of the multihop paths (routes) that carry packets to the destination • i.e., the links on an end-to-end path in a DTN do not exist contemporaneously • Hence, intermediate nodes may need to store, carry, and wait for opportunities to transfer data packets toward their destinations

  4. Introduction • Delay Tolerant Networks (DTNs) • Application areas: • Emergency response • Wildlife surveying • Vehicle-to-vehicle communications • Healthcare • Military • Tactical sensing • …

  5. Introduction • Mobile Ad hoc Networks (MANETs) • The existence of end-to-end paths via contemporaneous links is assumed in spite of node mobility • If a path is disrupted due to mobility, the disruption is temporary, and either the same path or an alternative one is restored very quickly • MANETs are special types of DTNs

  6. Introduction • DTNs vs. MANETs • The problem DTNs pose for packet communication: • Routing, unicasting, broadcasting, and multicasting become significantly harder, even with no packet erasures on the communication links • Reason: the lack of knowledge of the network topology and the lack of end-to-end paths

  7. Introduction • Byzantine Adversary attacks against DTNs (1/3) • Byzantine attack: one or more legitimate nodes have been compromised and are fully controlled by the adversary. A Byzantine malicious node may mount the following attacks: • Packet drop, in which the malicious node drops legitimate packets to disrupt data availability • Bogus packet injection, in which the Byzantine node injects bogus packets to consume the resources of the network • Noise injection, in which the malicious node violates the integrity of legitimate packets

  8. Introduction • Byzantine Adversary attacks against DTNs (2/3) • Routing attacks, in which the adversary tampers with the routing by misleading the nodes • Flooding attacks, in which the adversary keeps the communication channel busy to prevent legitimate traffic from reaching its destination • Impersonation attacks, in which the adversary impersonates legitimate nodes to mislead the network • Routing attacks are not significant threats for DTNs because of the lack of an end-to-end path from a source to its destination • Attacks on packet integrity may be prevented using a robust authentication mechanism in DTNs

  9. Introduction • Byzantine Adversary attacks against DTNs (3/3) • However, packet drop is harder to contain, because the nodes' cooperation is fundamental to the operation of DTNs • This paper focuses on the packet drop attack, which causes serious damage to the network in terms of data availability, latency, and throughput • Finally, Byzantine nodes may, individually or in collaboration, attack the security mechanism itself (e.g., the trust management and malicious-node detection schemes)

  10. Introduction • Reputation-based trust management systems in MANETs • In MANETs, reputation-based trust management systems have been shown to be an effective way to cope with adversaries • Trust plays a pivotal role when a node chooses which nodes to cooperate with, improving data availability in the network • Examining trust values has been shown to lead to the detection of malicious nodes in MANETs • Achieving the same for DTNs raises additional challenges • The constraints posed by DTNs make existing security protocols inefficient or impractical

  11. Introduction • Main objective of the paper • Develop a security mechanism for DTNs • To evaluate the nodes based on their behavior during their past interactions • To detect misbehavior due to Byzantine adversaries, selfish nodes, and faulty nodes • The paper develops the Iterative Trust and Reputation Management mechanism (ITRM) and explores its application to DTNs • It proposes a distributed malicious-node detection mechanism for DTNs using ITRM • ITRM enables every node to evaluate other nodes based on their past behavior, without requiring a central authority

  12. Introduction • Related Work (1/4) • In MANETs, a node evaluates another by using either direct or indirect measurements • Building reputation values from direct measurements is achieved either by using the watchdog mechanism or by using ACKs from the destination • Some schemes also allow indirect measurements to build reputation values, while the watchdog mechanism is used to obtain the direct measurements • In other schemes, reputation values are constructed using the ACK messages sent by the destination node

  13. Introduction • Related Work (2/4) • The techniques used in MANETs are not applicable to DTNs • The watchdog mechanism cannot be used to monitor another node after forwarding the packets, because the links on an end-to-end path do not exist contemporaneously and the node loses its connection with the intermediate node it wishes to monitor • Relying on ACK packets would also fail, because of the lack of a fixed common multihop path • Using indirect measurements is possible; however, it is unclear how these measurements can be obtained

  14. Introduction • Related Work (3/4) • Reputation systems for P2P networks are either not applicable to DTNs or require excessive time to build the reputation values of the peers • The EigenTrust algorithm (the most popular one) is constrained by the fact that the trustworthiness of a peer (in its feedback) is taken to be equivalent to its reputation value • However, trusting a peer's feedback and trusting a peer's service quality are two different concepts • A malicious peer can attack the network protocol or the reputation management system independently; therefore, the EigenTrust algorithm is not practical for DTNs

  15. Introduction • Related Work (4/4) • The Cluster Filtering method for reputation management introduces quadratic complexity, while the computational complexity of ITRM is linear in the number of users in the network • Hence, the ITRM scheme is more scalable and better suited to large-scale reputation systems • Several other works have focused on securing DTNs using identity-based cryptography and packet replication, which provide confidentiality and authentication • ITRM, on the other hand, provides malicious-node detection and high data availability with low packet latency

  16. ITRM Mechanism • The Goals of ITRM • Computing the service quality (reputation) of the peers who provide a service (the Service Providers, SPs), using the feedbacks from the peers who used the service (referred to as the raters) • Determining the trustworthiness of the raters by analyzing their feedbacks about the SPs

  17. ITRM Mechanism • Attacks considered against trust and reputation management systems • Bad mouthing, in which malicious raters collude and attack the SPs with the highest reputation by giving them low ratings in order to undermine them • Ballot stuffing, in which malicious raters collude to increase the reputation values of peers with low reputations • Sophisticated attacks: (a) bad mouthing or ballot stuffing combined with a strategy such as RepTrap; (b) malicious raters providing both reliable and malicious ratings to mislead the algorithm

  18. ITRM Mechanism • If a new rating arrives from the ith rater about the jth SP, the scheme updates the value of the edge {i,j} by averaging the new rating with the old edge value multiplied by the fading factor
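
As a minimal sketch of this update, assuming "averaging" means a λ-weighted mean of the new rating and the faded old value (the slide does not spell out the exact weighting):

```latex
% Hedged reading of the slide's rule: r_ij is the newly arrived rating,
% TR_ij the stored value of edge {i,j}, and lambda the fading factor.
TR_{ij} \leftarrow \frac{r_{ij} + \lambda \, TR_{ij}}{1 + \lambda}
```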

  19. ITRM Mechanism

  20. ITRM Mechanism • Initial Iteration (v = 0) • $GR_j^{(v)}$ and $TR_{ij}^{(v)}$ denote the value of the jth SP and of the {i,j}th edge at iteration v of the ITRM algorithm • The initial SP values $GR_j^{(0)}$ are computed from the ratings of $\mathcal{N}_j$, the set of all raters connected to SP j • The list of malicious raters (the blacklist) is initially empty

  21. ITRM Mechanism • First Iteration (1/2), v = 1 • Compute the average inconsistency factor $C_i$ of each rater i using the current values of the SPs: $C_i = \frac{1}{|\mathcal{S}_i|} \sum_{j \in \mathcal{S}_i} d\left(TR_{ij}, GR_j^{(v)}\right)$ • $\mathcal{S}_i$ is the set of SPs connected to rater i • $d(\cdot,\cdot)$ is the distance metric used to measure the inconsistency

  22. ITRM Mechanism • First Iteration (2/2) • List the inconsistency factors of all raters in ascending order • Select and blacklist the rater i with the highest inconsistency if it is greater than or equal to a threshold τ • Delete the ratings of the blacklisted rater for all SPs • If there is no rater to blacklist, stop the algorithm; otherwise, recompute the SP values and iterate
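
The two first-iteration slides above describe one round of ITRM. Below is a compact Python sketch of that round under illustrative assumptions (SP values are plain averages of the surviving edges, and d(·,·) is the absolute difference); it is a sketch of the described procedure, not the paper's exact formulation:

```python
def itrm_round(ratings, tau):
    """ratings: {rater i: {SP j: rating}}; tau: blacklisting threshold."""
    blacklist = set()
    while True:
        # Recompute each SP's value from the surviving (non-blacklisted) edges.
        totals, counts = {}, {}
        for i, edges in ratings.items():
            if i in blacklist:
                continue
            for j, r in edges.items():
                totals[j] = totals.get(j, 0.0) + r
                counts[j] = counts.get(j, 0) + 1
        sp_vals = {j: totals[j] / counts[j] for j in totals}
        # Average inconsistency factor of each rater, with d = absolute difference.
        worst, worst_c = None, tau
        for i, edges in ratings.items():
            if i in blacklist or not edges:
                continue
            c = sum(abs(r - sp_vals[j]) for j, r in edges.items()) / len(edges)
            if c >= worst_c:                 # highest inconsistency >= tau
                worst, worst_c = i, c
        if worst is None:
            return sp_vals, blacklist        # no rater left to blacklist: stop
        blacklist.add(worst)                 # drop its ratings and iterate
```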

  23. ITRM Mechanism • ITRM example • The actual reputations are equal to 5 • τ = 0.7 • The raters' trustworthiness values are all equal to 1 • The edge values are all equal • Raters {1,2,3,4,5} are honest; raters {6,7} are malicious

  24. ITRM Mechanism • Raters' Trustworthiness • The trustworthiness values are updated using the set of all past blacklists together, via a Beta distribution; initially, prior to the first time slot, the value is set to 0.5 for each rater peer i • Then, if rater peer i is blacklisted, its value is decreased (by an update involving λ and δ); otherwise, its value is increased • λ is the fading parameter and δ denotes the penalty factor for blacklisted raters • Updating the values via the Beta distribution has one major disadvantage: an existing malicious rater with a low value could cancel its account and sign up with a new ID
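
The exact update expressions are lost from the transcript; the sketch below shows one plausible Beta-based update consistent with the slide's description (fading λ, penalty δ, trustworthiness starting at 0.5). It is an illustration, not the paper's formulas:

```python
# Assumed Beta update: trustworthiness is the Beta mean alpha/(alpha+beta),
# which equals 0.5 when alpha == beta (the stated initial value).
def update_trustworthiness(alpha, beta, blacklisted, lam=0.9, delta=5.0):
    alpha, beta = lam * alpha, lam * beta    # fade old evidence
    if blacklisted:
        beta += delta                        # penalty for a blacklisted rater
    else:
        alpha += 1.0                         # reward consistent behavior
    return alpha, beta, alpha / (alpha + beta)
```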

  25. ITRM Mechanism • Security Evaluation of ITRM • To show that the general ITRM framework is a robust trust and reputation management mechanism, its security is briefly evaluated both analytically and via computer simulations • Then, the security of ITRM is evaluated in a realistic DTN environment

  26. ITRM Mechanism • Frequently used notations

  27. ITRM Mechanism • Analytical Security Evaluation (1/3) • Assumptions: • the quality of the SPs remains unchanged during the time slots • the fading parameter is taken as 1 (for simplicity) • The evaluation is for the bad-mouthing attack only (the others yield similar results) • Ratings generated by the non-malicious raters are distributed uniformly among the SPs • d is a random variable with a Yule-Simon distribution, which resembles the power-law distribution used in modeling online systems

  28. ITRM Mechanism • Analytical Security Evaluation (2/3) • Lemma 1: Let the number of unique raters for the jth SP and the total number of outgoing edges from an honest rater in t elapsed time slots be given, and let Q be a random variable denoting the exponent of the fading parameter λ at the tth time slot. Then, ITRM is a τ-eliminate-optimal scheme if conditions (6a) and (6b) are satisfied at the tth time slot, where Λ is the index set of the set Γ

  29. ITRM Mechanism • Analytical Security Evaluation (3/3) • The design parameter τ should be selected based on the highest fraction of malicious raters to be tolerated • A waiting time t is used such that (6a) and (6b) are satisfied with high probability • Then, among all τ values satisfying the conditions, the highest τ is selected, to minimize the probability of blacklisting a reliable rater

  30. ITRM Mechanism • Simulations (1/4) • Assume there were already 200 raters and 50 SPs, and that 50 time slots had passed since the launch of the system • After this initialization, 50 more SPs were introduced • A fraction of the existing raters changed behavior (became malicious) • By providing reliable ratings during the initialization period, the malicious raters increased their trustworthiness values • Eventually, there are D + H = 200 raters and N = 100 SPs • The performance of ITRM is reported, for each time slot, as the Mean Absolute Error (MAE) between the estimated and the actual reputation values
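
For reference, the standard MAE definition, under the assumption that the error is averaged over the N SPs (with the estimated reputation written $\widehat{GR}_j$ and the actual reputation $GR_j$):

```latex
\mathrm{MAE} = \frac{1}{N} \sum_{j=1}^{N} \left| \widehat{GR}_j - GR_j \right|
```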

  31. ITRM Mechanism • Simulations (2/4) • Performance was evaluated in the presence of bad mouthing • The victims are chosen among the newcomer SPs in order to have the most adverse effect • The malicious raters do not deviate much from the actual values, in order to remain under cover • The malicious raters apply a low-intensity attack (the RepTrap attack) by choosing the same set of SPs and rating them as n = 4 • Assuming that the ratings of the reliable raters also deviate from the actual reputation values, this attack scenario becomes even harder to detect than RepTrap • Δ = 1

  32. ITRM Mechanism • Simulations (3/4)

  33. ITRM Mechanism • Simulations (4/4) • Although the malicious raters stay under cover when they attack with very few edges, this type of attack limits their ability to make a serious impact (they can only attack a small number of SPs)

  34. Trust Management and Adversary Detection • Adversary Models and Security Threats • Attack Types • Attack on the network communication protocol • Attack on the security mechanism • Packet drop and packet injection (type 1) • An insider adversary drops legitimate packets it has received • A malicious node may also generate its own flow to deliver to another node via the legitimate nodes • Bad mouthing (ballot stuffing) on trust management (type 2) • A malicious node may give incorrect feedback in order to undermine the trust management system • Bad-mouthing attacks attempt to reduce the trust in a victim node • Ballot-stuffing attacks boost the trust value of a malicious ally

  35. Trust Management and Adversary Detection • Adversary Models and Security Threats • Random attack on trust management (type 2) • A Byzantine node may adjust its packet drop rate (on a scale of zero to one) to stay under cover • Bad mouthing (ballot stuffing) on the detection scheme (type 2) • Every legitimate node creates its own trust entries in a table (the rating table) for the subset of network nodes for which it has collected sufficient feedbacks • Each node also collects rating tables from other nodes • When the Byzantine nodes transfer their tables to a legitimate node, they may victimize legitimate nodes or help their malicious allies • This effectively reduces the detection performance of the system
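
A minimal illustration of the rating tables just described, assuming binary verdicts keyed by suspect ID (all names are hypothetical):

```python
# Each node keeps its own 0/1 verdicts and merges tables received from
# contacts; ITRM then weighs the table owners (raters) against each other,
# which is what limits bad mouthing and ballot stuffing on the tables.
own_table = {"B": 1, "C": 0}                # this node's own verdicts
collected_tables = {
    "K": {"B": 1, "D": 1},                  # rating table received from K
    "V": {"C": 0, "D": 0},                  # rating table received from V
}
```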

  36. Trust Management and Adversary Detection • Network/Communication Model and Technical Background • Mobility Models (1/2) • The Random Waypoint (RWP) model produces exponentially decaying intercontact-time distributions for the network nodes, making the mobility analysis tractable • Each node is assigned an initial location in the field • Nodes travel at a constant speed to a randomly chosen destination; the speed is randomly chosen between a minimum and a maximum value • After reaching the destination, the node may pause for a random amount of time before the next destination and speed are chosen randomly for the next movement
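
A minimal Python sketch of one RWP movement leg as described above; the field size, speed range, and pause time are illustrative assumptions, not the paper's parameters:

```python
import math
import random

def rwp_leg(pos, field=4500.0, vmin=1.0, vmax=20.0, max_pause=30.0):
    """Pick a random waypoint, travel at a constant random speed, then pause."""
    dest = (random.uniform(0.0, field), random.uniform(0.0, field))
    speed = random.uniform(vmin, vmax)           # constant during the leg
    travel_time = math.dist(pos, dest) / speed
    pause = random.uniform(0.0, max_pause)       # optional pause at waypoint
    return dest, travel_time + pause
```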

  37. Trust Management and Adversary Detection • Network/Communication Model and Technical Background • Mobility Models (2/2) • The Levy-walk (LW) model is known to produce the power-law distributions that have been studied extensively for animal movement patterns, and it has recently been shown to be a promising model for human mobility • The movement-length and pause-time distributions closely match truncated power-law distributions • The angles of movement are drawn from a uniform distribution
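
A matching sketch of one Levy-walk step, drawing the step length from a truncated power law (Pareto) by inverse-transform sampling and the angle from a uniform distribution; the exponent and cutoffs are assumptions for illustration:

```python
import math
import random

def levy_step(pos, alpha=1.5, l_min=10.0, l_max=3000.0):
    """One LW step: truncated power-law length, uniform direction."""
    a, b = l_min ** -alpha, l_max ** -alpha
    u = random.random()
    length = (a - u * (a - b)) ** (-1.0 / alpha)   # inverse truncated-Pareto CDF
    theta = random.uniform(0.0, 2.0 * math.pi)
    return (pos[0] + length * math.cos(theta),
            pos[1] + length * math.sin(theta))
```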

  38. Trust Management and Adversary Detection • Network/Communication Model and Technical Background • Packet Format • Each packet carries its two-hop history in its header • When node B receives a packet from node A, it learns from which node A received that packet • This mechanism is useful for the feedback mechanism • Routing and packet exchange protocol • The source node never transmits multiple copies of the same packet • The exchange of packets between two nodes follows a back-pressure policy • Assume nodes A and B have x and y packets belonging to the same flow f (where x > y); then, if the contact duration permits, node A transfers (x - y)/2 packets of flow f to node B
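
A small sketch of the back-pressure exchange in the last bullet: when A holds x and B holds y packets of flow f with x > y, A hands over (x - y)/2 of them; the buffer layout and names are assumptions:

```python
def backpressure_transfer(buf_a, buf_b, flow):
    """Move (x - y) // 2 packets of `flow` from the fuller buffer A to B."""
    x, y = len(buf_a[flow]), len(buf_b[flow])
    if x <= y:
        return 0
    k = (x - y) // 2                       # balances the two buffers
    for _ in range(k):
        buf_b[flow].append(buf_a[flow].pop())
    return k
```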

  39. Trust Management and Adversary Detection • Iterative Detection for DTNs • In DTNs, due to intermittent contacts, a judge node would have to wait a very long time to issue its own ratings for all the nodes in the network • However, it is desirable to have a fresh estimate of the reputations in a timely manner, mitigating the effects of malicious nodes immediately • Feedback ratings are binary: 0 (malicious) or 1 (honest)

  40. Trust Management and Adversary Detection • Iterative Detection for DTNs

  41. Trust Management and Adversary Detection • Trust Management Scheme for DTNs (1/5) • The authentication mechanism for the packets generated by a specific source is provided by a Bloom filter and an identity-based signature (IBS) • When a source node sends some packets, it creates a Bloom filter output and signs it using IBS • When an intermediate node forwards packets to its contact, it also forwards the signed Bloom filter output for authentication • The feedback mechanism that determines the entries in the rating table is based on a 3-hop loop
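
A self-contained Bloom-filter sketch for the per-source packet authentication described above; the IBS step is abstracted away (a real deployment would sign the filter bits, e.g. `bytes(bf.bits)`, with the source's IBS key), and the filter size and hash count are illustrative assumptions:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter over packet payloads (m bits, k hashes)."""
    def __init__(self, m=1024, k=4):
        self.m, self.k, self.bits = m, k, bytearray(m)

    def _positions(self, packet: bytes):
        for i in range(self.k):
            h = hashlib.sha256(i.to_bytes(2, "big") + packet).digest()
            yield int.from_bytes(h[:4], "big") % self.m

    def add(self, packet: bytes):                # source inserts each sent packet
        for p in self._positions(packet):
            self.bits[p] = 1

    def contains(self, packet: bytes) -> bool:   # relays check membership
        return all(self.bits[p] for p in self._positions(packet))
```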

  42. Trust Management and Adversary Detection • Trust Management Scheme for DTNs (2/5) • When B and C meet, they first exchange signed time stamps • B sends the packets in its buffer • Node B also transfers the receipts it has received thus far to C; those receipts include the proofs of node B's deliveries • C then gives a signed receipt to B • When the judge A and the witness C meet, they initially exchange their contact histories; A learns that C has met B and requests the feedback

  43. Trust Management and Adversary Detection • Trust Management Scheme for DTNs (3/5) • The feedback consists of two parts: the receipts of B and the hashes of those packets, for evaluation • The feedbacks from the witnesses are not trustable, because of bad-mouthing (ballot-stuffing) and random attacks • A judge node therefore waits for a definite number of feedbacks before giving its verdict • Each judge node uses the Beta distribution to aggregate multiple evaluations; if the result is greater than 0.5, the suspect is rated as "1", otherwise as "0"
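
A short sketch of the verdict rule in the last bullet: binary evaluations are aggregated with a Beta posterior, and the suspect is rated "1" when the posterior mean exceeds 0.5 (the prior and the minimum feedback count are assumed parameters):

```python
def verdict(evaluations, min_feedbacks=5):
    """evaluations: list of 0/1 witness evaluations of one suspect."""
    if len(evaluations) < min_feedbacks:
        return None                        # keep waiting for feedbacks
    ones = sum(evaluations)
    alpha, beta = 1 + ones, 1 + len(evaluations) - ones   # Beta(1,1) prior
    return 1 if alpha / (alpha + beta) > 0.5 else 0
```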

  44. Trust Management and Adversary Detection • Trust Management Scheme for DTNs (4/5) • The number of feedbacks required to give a verdict with high confidence depends on the packet drop rate and the detection level • The judge node applies ITRM at the lowest possible detection level, depending on the entries both in its own rating table and in the tables collected from other nodes • Assume a judge node M has collected rating tables from two other nodes, K and V • The rating-table entries with the largest detection level have detection levels m, k, and v for M's, K's, and V's rating tables, respectively

  45. Trust Management and Adversary Detection • Trust Management Scheme for DTNs (5/5) • M performs ITRM at the detection level max(m, k, v) • The malicious nodes may try to evade the detection mechanism by setting their packet drop rates to lower values • The proposed detection mechanism eventually detects all the malicious nodes, since the judge node can wait longer and apply ITRM at a lower detection level

  46. Trust Management and Adversary Detection • Security Evaluations (1/9) • The performance of ITRM is compared with well-known reputation management schemes (a Bayesian framework and EigenTrust) in a realistic DTN environment • The RWP and LW mobility models are used to evaluate the performance of the proposed scheme • The simulation area is fixed to 4.5 km × 4.5 km and includes N = 100 nodes, each with a transmission range of 250 m • The intercontact time between two particular nodes is modeled as a random variable • Random variables x, y, and z represent the number of feedbacks received at judge node A, the total number of contacts that node B established after meeting A, and the number of distinct contacts of B after meeting A, respectively

  47. Trust Management and Adversary Detection • Security Evaluations (2/9) • Lemma 2: Let t_0 be the time at which a transaction occurred between a particular judge-suspect pair, and consider the feedbacks received by the judge for that particular suspect node since t = t_0. Then, the probability that the judge node has at least M feedbacks about the suspect node, from M distinct witnesses, at time t_0 + T is given by the closed-form expression in the paper

  48. Trust Management and Adversary Detection • Security Evaluations (3/9)

  49. Trust Management and Adversary Detection • Security Evaluations (4/9) • Lemma 3: Let a particular judge node start collecting feedbacks and generating its rating table at time t = t_0, and consider the number of entries in that rating table. Then, the probability that the judge node has at least s entries at time t_0 + T is given by the closed-form expression in the paper

  50. Trust Management and Adversary Detection • Security Evaluations (5/9) • ITRM is compared with the Bayesian reputation management framework and the EigenTrust algorithm • However, neither the original Bayesian framework nor EigenTrust is directly applicable to DTNs, since both protocols rely on direct measurements, which are not practical in DTNs • ITRM performs better than the Bayesian framework because Bayesian approaches assume that the reputation values of the nodes are independent • Hence, in those schemes, each reputation value is computed independently of the other nodes' reputations
