
Building A Trustworthy, Secure, And Privacy Preserving Network



Presentation Transcript


  1. Building A Trustworthy, Secure, And Privacy Preserving Network Bharat Bhargava CERIAS Security Center CWSA Wireless Center Department of CS and ECE Purdue University Supported by NSF IIS 0209059, NSF IIS 0242840, NSF ANI 0219110, Cisco, Motorola, IBM

  2. Research Team • Faculty collaborators • Dongyan Xu, middleware and privacy • Mike Zoltowski, smart antennas, wireless security • Sonia Fahmy, Internet security • Ninghui Li, trust • Cristina Nita-Rotaru, Internet security • Postdocs • Leszek Lilien, privacy and vulnerability • Xiaoxin Wu, wireless security • Jun Wen, QoS • Mamata Jenamani, privacy • Ph.D. students • Ahsan Habib, Internet security • Mohamed Hefeeda, peer-to-peer • Yi Lu, wireless security and congestion control • Yuhui Zhong, trust management and fraud • Weichao Wang, security of ad hoc networks • More information at http://www.cs.purdue.edu/people/bb

  3. Motivation • Lack of trust, privacy, security, and reliability impedes information sharing among distributed entities • The San Diego Supercomputer Center detected 13,000 DoS attacks in a three-week period [eWeek, 2003] • Internet attacks in February 2004 caused an estimated $68 billion to $83 billion in damages worldwide [British Computer Security Report] • Businesses lose revenue due to privacy violations: online consumers worry about revealing personal data, and this fear held back an estimated $15 billion in online revenue in 2001 • 52,658 reported system crashes were caused by software vulnerabilities in 2002 [Express Computers, 2002]

  4. Research is required for the creation of knowledge and learning in secure networking, systems, and applications.

  5. Goal • Enable the deployment of security-sensitive applications in pervasive computing and communication environments

  6. Problem Statement • A trustworthy, secure, and privacy-preserving network platform must be established for trusted collaboration. The fundamental research problems include: • Trust management • Privacy-preserving interactions • Dealing with a variety of attacks and frauds in networks • Intruder identification in ad hoc networks (the focus of this seminar)

  7. Applications/Broad Impacts • Guidelines for the design and deployment of security-sensitive applications in next-generation networks • Data sharing for medical research and treatment • Collaboration among government agencies for homeland security • Transportation systems (security checks during travel, hazardous material disposal) • Collaboration among government officials, law enforcement and security personnel, and health care facilities during bio-terrorism and other emergencies

  8. A. Trust Formalization • Problem • Dynamically establish and update trust among entities in an open environment • Research directions • Handling uncertain evidence • Modeling dynamic trust • Formalization and detection of fraud • Challenges • Uncertain information complicates the inference procedure • Subjectivity leads to varying interpretations of the same information • The multi-faceted and context-dependent nature of trust requires a tradeoff between the representational comprehensiveness and the computational simplicity of the trust model

  9. Trust Enhanced Role Assignment (TERA) Prototype • The trust-enhanced role mapping (TERM) server assigns roles to users based on • Uncertain & subjective evidence • Dynamic trust • Reputation server • Dynamic trust information repository • Evaluates reputation from trust information using algorithms specified by the TERM server • Prototype and demo are available at http://www.cs.purdue.edu/homes/bb/NSFtrust/

  10. TERA Architecture

  11. Trust Enhanced Role Mapping (TERM) Server • Evidence rewriting • Role assignment • Policy parser • Request processor & inference engine • Constraint enforcement • Policy base • Trust information management • User behavior modeling • Trust production

  12. TERM Server

  13. Fraud Formalization and Detection • Model fraud intention • Uncovered deceiving intention • Trapping intention • Illusive intention • Fraud detection • Profile-based anomaly detection • Monitor suspicious actions based upon the established patterns of an entity • State transition analysis • Build an automaton to identify activities that lead towards a fraudulent state
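As a hedged sketch of the profile-based anomaly detection idea above, the following Python function flags a new rating that deviates sharply from an entity's established pattern. The function name, rating scale, and the k-sigma threshold are illustrative assumptions, not the prototype's actual API:

```python
import statistics

def is_anomalous(history, new_rating, k=2.0):
    """Profile-based anomaly detection sketch: flag a rating that
    deviates more than k standard deviations from the entity's
    established profile (k is an assumed threshold)."""
    if len(history) < 2:
        return False  # not enough evidence to form a profile yet
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        # a perfectly stable profile: any different rating is suspicious
        return new_rating != mean
    return abs(new_rating - mean) > k * stdev
```

State transition analysis, the second technique listed, would complement this by tracking sequences of such flags through an automaton rather than judging each action in isolation.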

  14. Model Fraud Intentions • Uncovered deceiving intention • Satisfaction ratings are stably low. • Ratings vary in a small range over time.

  15. Model Fraud Intentions • Trapping intention • Rating sequence can be divided into two phases: preparing and trapping. • A swindler behaves well to achieve a trustworthy image before he conducts frauds.

  16. Model Fraud Intentions • Illusive intention • A smart swindler attempts to cover up the bad effects of misbehavior by intentionally doing something good afterwards • The process of preparing and trapping is repeated
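The three intention patterns described on slides 14–16 can be sketched as a heuristic classifier over a satisfaction-rating sequence. The thresholds and label strings below are illustrative assumptions, not the paper's model:

```python
def classify_intention(ratings, low=0.3, high=0.7):
    """Heuristic sketch of the three fraud-intention patterns.
    Returns 'uncovered', 'trapping', 'illusive', or 'benign'.
    The low/high thresholds are assumed values."""
    if all(r <= low for r in ratings):
        return "uncovered"          # ratings stably low, small range
    phases = []                     # compress into runs of high/low behavior
    for r in ratings:
        label = "high" if r >= high else "low"
        if not phases or phases[-1] != label:
            phases.append(label)
    if phases == ["high", "low"]:
        return "trapping"           # prepare (good), then trap (bad)
    if len(phases) > 2 and phases[0] == "high":
        return "illusive"           # preparing/trapping repeated
    return "benign"
```

The phase compression mirrors the two-phase (preparing/trapping) and repeated-phase (illusive) structure described above.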

  17. B. Private and Trusted Interactions • Problem • Preserve privacy, gain trust, and control dissemination of data • Research directions • Dissemination of private data • Privacy and trust tradeoff • Privacy metrics • Challenges • Specify policies through metadata and establish guards as procedures • Efficient implementation • Estimating privacy depends on who will get the information, its possible uses, and information disclosed in the past • Privacy metrics are usually ad hoc and customized • Detail slides at http://www.cs.purdue.edu/homes/bb/priv_trust_cerias.ppt

  18. Preserving Privacy in Data Dissemination • Design self-descriptive private objects • Construct a mechanism for apoptosis of private objects (apoptosis = clean self-destruction) • Develop proximity-based evaporation of private objects • Develop schemes for data distortion
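A minimal sketch of a self-descriptive private object with apoptosis, assuming a simple allowed-parties policy. The class and method names are hypothetical, not the prototype's design:

```python
class PrivateObject:
    """Sketch: a private object that bundles data with its own
    disclosure policy and can apoptose (cleanly self-destruct)."""

    def __init__(self, data, allowed_parties):
        self.data = data
        self.allowed_parties = set(allowed_parties)

    def disclose(self, party):
        if self.data is None:
            raise RuntimeError("object has already self-destructed")
        if party not in self.allowed_parties:
            self.apoptosis()   # unauthorized access triggers self-destruction
            raise PermissionError("disclosure not permitted")
        return self.data

    def apoptosis(self):
        self.data = None       # clean self-destruction of the private data
```

Proximity-based evaporation would fit the same shape, gradually distorting `data` instead of destroying it outright.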

  19. Privacy Metrics • Determine the degree of data privacy • Size of anonymity set metrics • Entropy-based metrics • Privacy metrics should account for: • Dynamics of legitimate users • Dynamics of violators • Associated costs

  20. Size of Anonymity Set Metrics • The larger the set of indistinguishable entities, the lower the probability of identifying any one of them ("hiding in a crowd") • Can be used to "anonymize" a selected private attribute value within the domain of all its possible values • [Figure: a small anonymity set of 4 entities is "less" anonymous (identification probability 1/4); a larger set of n entities is "more" anonymous (probability 1/n)]
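Both the anonymity-set-size metric and the entropy-based metric reduce to short formulas; a sketch with illustrative function names:

```python
import math

def identification_probability(anonymity_set_size):
    """'Hiding in a crowd': with n indistinguishable entities, the
    probability of pinpointing any one of them is 1/n."""
    return 1.0 / anonymity_set_size

def anonymity_entropy(probabilities):
    """Entropy-based metric: H = -sum(p * log2(p)). A uniform
    distribution over n entities gives the maximum, log2(n) bits."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)
```

For the 4-entity set in the figure, identification probability is 1/4 and the entropy is 2 bits; skewed probabilities (a partly identified target) lower the entropy below that maximum.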

  21. Dynamics of Entropy • Decrease of system entropy with attribute disclosures (capturing dynamics) • When entropy reaches a threshold (b), data evaporation can be invoked to increase entropy by controlled data distortions • When entropy drops to a very low level (c), apoptosis can be triggered to destroy private data • Entropy increases (d) if the set of attributes grows or the disclosed attributes become less valuable – e.g., obsolete or more data now available • [Figure: entropy level over time, relative to the maximum H*, for all attributes vs. disclosed attributes, with points (a)–(d) marked]
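The threshold logic above can be sketched as a simple policy function; the threshold values and labels are assumed for illustration:

```python
def privacy_action(entropy, evaporation_threshold=2.0, apoptosis_threshold=0.5):
    """Map the current entropy of the disclosed-attribute set to the
    countermeasures described above. Threshold values are assumptions."""
    if entropy <= apoptosis_threshold:
        return "apoptosis"      # entropy very low (c): destroy the data
    if entropy <= evaporation_threshold:
        return "evaporation"    # threshold reached (b): distort data
    return "none"               # entropy still comfortably high
```

In a running system this check would be re-evaluated after each disclosure, since every disclosed attribute lowers the entropy.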

  22. Private and Trusted System (PRETTY) Prototype • [Figure: PRETTY architecture showing interactions (1)–(4) between the user/role and the system components, with sub-steps [2a], [2b], [2c1], [2c2], [2d]] • TERA = Trust-Enhanced Role Assignment

  23. C. Tomography Research • Problem • Defend against denial-of-service attacks • Optimize the selection of data providers in peer-to-peer systems • Research directions • Stripe-based probing to infer individual link loss from edge-to-edge measurements • Overlay-based monitoring to identify congested links by end-to-end path measurement • Topology inference to estimate available bandwidth by path segment measurements

  24. Defeating DoS Attacks in the Internet

  25. Overlay-based Monitoring • Individual link loss rates are not needed to identify all congested links • Edge routers form an overlay network for probing; each edge router probes part of the network • Problem statement • Given the topology of a network domain, identify which links are congested and possibly under attack
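One simple way to realize this idea is boolean tomography over the overlay probes: any link that lies on a loss-free probe path is cleared, and the remaining links on lossy paths are the congestion suspects. A sketch under that assumption (the probe representation is hypothetical):

```python
def congested_links(probes):
    """Boolean-tomography sketch. Each probe is a (path, lossy) pair,
    where path is a set of link identifiers and lossy is a bool.
    Returns the links that cannot be cleared by any good probe."""
    good = set()
    for path, lossy in probes:
        if not lossy:
            good |= set(path)   # every link on a loss-free path is fine
    suspects = set()
    for path, lossy in probes:
        if lossy:
            suspects |= set(path) - good  # only unexplained links remain
    return suspects
```

With enough overlapping probe paths, the suspect set shrinks toward the truly congested links; with too few probes it over-approximates them.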

  26. Attack Scenarios • [Figure: (a) delay (ms) vs. time (sec), showing the delay pattern changing due to attack; (b) loss ratio vs. time (sec), showing the loss pattern changing due to attack]

  27. Identified Congested Links • [Figure: loss ratio vs. time (sec) for (a) counter-clockwise probing and (b) clockwise probing] • Probe46 in graph (a) and Probe76 in graph (b) observe high losses, which means link C4 → E6 is congested

  28. Probing: Simple Method • [Figure: (a) topology, (b) overlay, (c) internal links, with the congested link highlighted]

  29. D. Intruder Identification in Ad hoc On-demand Distance Vector (AODV) Routing • Problem • AODV is vulnerable to various attacks such as false distance vector, false destination sequence, and wormhole attacks • Detecting attacks without identifying and isolating the malicious hosts leaves the security mechanisms in a passive mode • Challenges • Locate the sources of attacks in a self-organized infrastructure • Combine local decisions with knowledge from other hosts to reach consistent conclusions about the malicious hosts

  30. Related Work • Vulnerability model of ad hoc routing protocols [Yang et al., SASN '03] • A generic multi-layer integrated IDS structure [Zhang and Lee, MobiCom '00] • IDS combined with trust [Albert et al., ICEIS '02] • Information-theoretic measures using entropy [Okazaki et al., SAINT '02] • SAODV adopts both hash chains and digital signatures to protect routing information [Zapata et al., WiSe '03] • Security-aware ad hoc routing [Kravets et al., MobiHOC '01]

  31. Ideas • Monitor the sequence numbers in the route request packets to detect abnormal conditions • Apply reverse labeling restriction to identify and isolate attackers • Combine local decisions with knowledge from other hosts to achieve consistent conclusions • Combine with trust assessment methods to improve robustness

  32. Introduction to AODV • Introduced in 1997 by Perkins (Nokia) and Royer (UCSB) • 12 versions of the IETF draft in 4 years, 4 academic implementations, 2 simulations • Combines on-demand route discovery with distance-vector routing • Broadcast route query, unicast route reply • Quick adaptation to dynamic link conditions and scalability to large networks • Supports multicast

  33. Route Discovery in AODV (An Example) • [Figure: source S, intermediate hosts S1–S4, and destination D; request propagation builds the route to the source, and the reply builds the route to the destination]

  34. Attacks on AODV • Route request flooding: query a non-existing host (the RREQ will flood throughout the network) • False distance vector: reply "one hop to destination" to every request and select a large enough sequence number • False destination sequence number: select a large number (even beating the reply from the real destination) • Wormhole attacks: tunnel route requests through the wormhole and attract data traffic to it • Coordinated attacks: the malicious hosts establish trust to frame other hosts, or conduct attacks alternately to avoid being identified

  35. False Destination Sequence Attack • [Figure: S floods RREQ(D, 3); the real destination D answers RREP(D, 5), but malicious host M answers with a forged RREP(D, 20) and wins] • Packets from S to D sink at M • Node movement breaks the path from S to M, triggering route rediscovery

  36. During Route Rediscovery, the False Destination Sequence Attack Is Detected • (1) S broadcasts a request carrying the old sequence + 1 = 21 • (2) D receives the RREQ. Its local sequence is 5, but the sequence in the RREQ is 21, so D detects the false destination sequence attack • [Figure: propagation of RREQ(D, 21) from S through S1–S4 to D]
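The destination's check in this example reduces to a single comparison; a simplified sketch (not the full AODV state machine, and the function name is illustrative):

```python
def destination_detects_attack(local_seq, rreq_dest_seq):
    """Destination-side check from the example above: D's own sequence
    is 5, but the rediscovery RREQ asks for 21 (the forged 20, plus one),
    so a false destination sequence must have been injected earlier.
    A request for at most local_seq + 1 is treated as normal rediscovery."""
    return rreq_dest_seq > local_seq + 1
```

The gap between the requested and local sequence numbers is exactly the inflation the attacker introduced, which is why the destination can detect it without any extra state.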

  37. Reverse Labeling Restriction (RLR) • Blacklists are updated after an attack is detected • Basic ideas • Every host maintains a blacklist to record suspicious hosts; suspicious hosts can be released from the blacklist • The destination host broadcasts an INVALID packet with its signature when it finds that the system is under a sequence-number attack. The packet carries the host's identification, current sequence, new sequence, and its own blacklist • Every host receiving this packet examines its route entry to the destination host. If the entry's sequence number is larger than the current sequence in the INVALID packet, the presence of an attack is noted, and the previous host that provided the false route is added to this host's blacklist

  38. [Figure: INVALID(D, 5, 21, {}, SIGN) propagating from D; resulting blacklists: S1 → {S2}, S2 → {M}, S → {S1}, all others empty] • D broadcasts an INVALID packet with current sequence = 5, new sequence = 21 • S3 examines its route table; its entry to D is not false. S3 forwards the packet to S1 • S1 finds that its route entry to D has sequence 20, which is > 5, so it knows the route is false. The hop that provided this false route to S1 was S2, so S2 is put into S1's blacklist. S1 forwards the packet to S2 and S • S2 adds M to its blacklist. S adds S1 to its blacklist and forwards the packet to S4 • S4 does not change its blacklist since it is not involved in this route • The correct destination sequence number is broadcast, and the blacklist at each host on the path is determined
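The per-host step in this walk-through can be sketched as follows; the (dest_seq, next_hop) route-entry tuple is a simplification of the real routing table, and the names are illustrative:

```python
def process_invalid(route_entry, invalid_current_seq, blacklist):
    """One RLR step from the walk-through above: a host compares its
    route entry's destination sequence with the sequence the destination
    signed into the INVALID packet. A larger stored sequence means the
    entry is forged, so the hop that supplied it is blacklisted.
    route_entry is a hypothetical (dest_seq, next_hop) pair."""
    dest_seq, next_hop = route_entry
    if dest_seq > invalid_current_seq:
        blacklist.add(next_hop)   # the previous hop provided the false route
        return True               # this host's route was false
    return False
```

Applied at S1 with route entry (20, S2) and the signed current sequence 5, this reproduces the step where S2 lands in S1's blacklist.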

  39. [Figure: M attacks four routes, S1–D1, S2–D2, S3–D3, and S4–D4; M ends up in four blacklists] • M attacks 4 routes (S1-D1, S2-D2, S3-D3, and S4-D4). When the first two false routes are detected, D3 and D4 add M to their blacklists. When D3 and D4 later become victim destinations, they broadcast their blacklists, and every host gets two votes that M is a malicious host • Hosts closer to the malicious site appear in the blacklists of multiple hosts; in the figure above, M is in four blacklists

  40. Combine Local Decisions with Knowledge from Other Hosts • When a host is the destination of a route and is victimized by a malicious host, it broadcasts its blacklist • Each host obtains blacklists from victim hosts • If M appears in multiple blacklists, M is classified as a malicious host based on a certain threshold • The intruder is identified • Trust values can be assigned to other hosts based on past information
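A sketch of the threshold-based vote counting described above; the threshold value and function name are assumptions:

```python
from collections import Counter

def identify_intruders(broadcast_blacklists, threshold=2):
    """Combine blacklists broadcast by victim destinations: a host named
    in at least `threshold` independent blacklists is classified as
    malicious (the threshold value is an illustrative assumption)."""
    votes = Counter()
    for bl in broadcast_blacklists:
        votes.update(set(bl))   # one vote per victim, even if listed twice
    return {host for host, n in votes.items() if n >= threshold}
```

The threshold trades false positives against detection speed: a higher value resists framing by a malicious voter, a lower one identifies intruders after fewer attacks.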

  41. Acceleration in Intruder Identification • [Figure: routing topology and reverse labeling procedure for sources S1–S3, destinations D1–D3, and multiple attackers M1–M3] • When multiple attackers exist in the network, more routes are under attack • When the false routes are detected, more blacklists will be broadcast

  42. Reverse Labeling Restriction: Update Blacklist by INVALID Packet • The next hop on the invalid route is put into the local blacklist, a timer starts, and a counter is incremented • The time a host stays in the blacklist is exponential in the counter value • The labeling process is conducted in the reverse direction of the false route • When the timer expires, the suspicious host is released from the blacklist and routing information from it is accepted again
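The exponential back-off can be sketched in one line; the base duration is an assumed parameter, not a value from the paper:

```python
def blacklist_duration(counter, base_seconds=10):
    """Sketch of the exponential back-off described above: each repeated
    offence doubles the time a suspicious host stays in the blacklist
    (base_seconds is an assumed parameter)."""
    return base_seconds * (2 ** counter)
```

A first offence thus costs a short exclusion, while a repeat offender is excluded for exponentially longer, which bounds the damage a host can do by alternating between attacking and behaving.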

  43. Dealing With Hosts in the Blacklist • Packets from hosts in the blacklist • Route request: if the request is from a suspicious host, ignore it • Route reply: if the previous hop is suspicious and the query destination is not the previous hop, the reply is ignored • Route error: processed as usual; an RERR activates rediscovery, which helps to detect attacks on the destination sequence • INVALID: if the sender is suspicious, the packet is processed but its blacklist is ignored
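The four rules can be written as a small dispatch function; the return labels are illustrative, and the route-reply rule is simplified to its common case:

```python
def handle_packet(pkt_type, sender_suspicious):
    """Dispatch sketch for traffic from blacklisted hosts, following the
    four rules above. Return values are illustrative labels."""
    if not sender_suspicious:
        return "process"
    if pkt_type == "RREQ":
        return "ignore"          # drop route requests from suspects
    if pkt_type == "RREP":
        return "ignore"          # drop route replies supplied by suspects
        # (simplified: the rule only drops replies whose destination
        #  is not the suspicious previous hop itself)
    if pkt_type == "RERR":
        return "process"         # errors trigger useful rediscovery
    if pkt_type == "INVALID":
        return "process-without-blacklist"  # accept packet, discard its votes
    return "process"
```

Processing RERR and INVALID even from suspects is deliberate: both packet types can only accelerate attack detection, not inject false routes.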

  44. Attacks of Malicious Hosts on RLR • Attack 1: malicious host M sends a false INVALID packet • Because INVALID packets are signed, M cannot send packets in other hosts' names • If M sends an INVALID in its own name: • If the reported sequence number is greater than the real sequence number, every host ignores the attack • If the reported sequence number is less than the real sequence number, RLR converges at the malicious host: M is included in the blacklists of more hosts, so M has only accelerated intruder identification directed toward itself

  45. Attack 2: Malicious host M frames other innocent hosts by sending a false blacklist • If the malicious host has already been identified, its blacklist is ignored • If it has not been identified, the false votes effectively lower the threshold for the framed hosts; if the threshold is selected properly, this will not impact the identification results • Combining trust can further limit the impact of this attack

  46. Attack 3: Malicious host M sends false destination sequence numbers only about one particular host • That host will detect the attack and send INVALID packets • Other hosts can establish new routes to the destination upon receiving the INVALID packets

  47. Experimental Studies of RLR • The experiments are conducted using ns2. • Various network scenarios are formed by varying the number of independent attackers, number of connections, and host mobility. • The examined parameters include: • Packet delivery ratio • Identification accuracy: false positive and false negative ratio • Communication and computation overhead

  48. Simulation Parameters

  49. Experiment 1: Measure the Changes in Packet Delivery Ratio • Purpose: investigate the impact of host mobility, number of attackers, and number of connections on the performance improvement brought by RLR • Input parameters: host pause time, number of independent attackers, number of connections • Output parameter: packet delivery ratio • Observation: when only one attacker exists in the network, RLR brings a 30% increase in the packet delivery ratio; when multiple attackers exist, the delivery ratio does not recover until all attackers are identified

  50. Increase in Packet Delivery Ratio: Single Attacker • [Figure: delivery ratio (y-axis) vs. host pause time (x-axis, a measure of host mobility), for 25 and 50 connections] • RLR brings a 30% increase in delivery ratio • 100% delivery is difficult to achieve due to network partitions, route discovery delay, and buffering limits
