
Building A Trustworthy, Secure, And Privacy Preserving Network



Presentation Transcript


  1. Building A Trustworthy, Secure, And Privacy Preserving Network Bharat Bhargava CERIAS Security Center CWSA Wireless Center Department of CS and ECE Purdue University Supported by NSF IIS 0209059, NSF IIS 0242840 , NSF ANI 0219110, CISCO, Motorola, IBM

  2. Research Team • Faculty Collaborators • Dongyan Xu, middleware and privacy • Mike Zoltowski, smart antennas, wireless security • Sonia Fahmy, Internet security • Ninghui Li, trust • Cristina Nita-Rotaru, Internet security • Postdoc • Leszek Lilien, privacy and vulnerability • Xiaoxin Wu, wireless security • Jun Wen, QoS • Mamata Jenamani, privacy • Ph.D. students • Ahsan Habib, Internet security • Mohamed Hefeeda, peer-to-peer • Yi Lu, wireless security and congestion control • Yuhui Zhong, trust management and fraud • Weichao Wang, security of ad hoc networks More information at http://www.cs.purdue.edu/people/bb

  3. Motivation • Lack of trust, privacy, security, and reliability impedes information sharing among distributed entities. • The San Diego Supercomputer Center detected 13,000 DoS attacks in a three-week period [eWeek, 2003] • Internet attacks in February 2004 caused an estimated $68 billion to $83 billion in damages worldwide [British Computer Security Report] • Business losses due to privacy violations • Online consumers worry about revealing personal data • This fear held back $15 billion in online revenue in 2001 • 52,658 reported system crashes caused by software vulnerabilities in 2002 [Express Computers 2002]

  4. Research is required for the creation of knowledge and learning in secure networking, systems, and applications.

  5. Goal • Enable the deployment of security-sensitive applications in pervasive computing and communication environments.

  6. Problem Statement • A trustworthy, secure, and privacy-preserving network platform must be established for trusted collaboration. The fundamental research problems include: • Trust management • Privacy-preserving interactions • Dealing with a variety of attacks and frauds in networks • Intruder identification in ad hoc networks (focus of this seminar)

  7. Applications/Broad Impacts • Guidelines for the design and deployment of security-sensitive applications in next-generation networks • Data sharing for medical research and treatment • Collaboration among government agencies for homeland security • Transportation systems (security checks during travel, hazardous material disposal) • Collaboration among government officials, law enforcement and security personnel, and health care facilities during bio-terrorism and other emergencies

  8. Scientific Contributions • Trust formalization • Privacy preservation in interactions • Network tomography techniques for DoS attacks • Intrusion detection and intruder identification in ad hoc networks • Vulnerability analysis and threat assessment

  9. A. Trust Formalization • Problem • Dynamically establish and update trust among entities in an open environment. • Research directions • Handling uncertain evidence • Modeling dynamic trust • Formalization and detection of fraud • Challenges • Uncertain information complicates the inference procedure. • Subjectivity leads to various interpretations of the same information. • The multi-faceted and context-dependent characteristics of trust require a tradeoff between the representational comprehensiveness and computational simplicity of the trust model.

  10. Uncertain Evidence • Probability-based approach to evaluate the uncertainty of a logic expression given a set of uncertain evidence • Atomic formula: Bayes network + causal inference + conditional probability interpretation of opinion • AND/OR expressions: rules defined by Jøsang [Jøsang ’01] • Subjectivity is realized using the discounting operator proposed by Shafer [Shafer ’76]
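The operators this slide references can be sketched with the standard subjective-logic opinion triple (belief, disbelief, uncertainty). The exact rules below follow Jøsang's published AND/OR operators and Shafer-style discounting as we read them; this is an illustration, not code from the project:

```python
def conjunction(x, y):
    """Jøsang's AND of two opinions (b, d, u), each summing to 1."""
    bx, dx, ux = x
    by, dy, uy = y
    return (bx * by, dx + dy - dx * dy, bx * uy + ux * by + ux * uy)

def disjunction(x, y):
    """Jøsang's OR (comultiplication) of two opinions."""
    bx, dx, ux = x
    by, dy, uy = y
    return (bx + by - bx * by, dx * dy, dx * uy + ux * dy + ux * uy)

def discount(trust, op):
    """Shafer-style discounting: weigh an advisor's opinion
    by the trust opinion held about the advisor."""
    bt, dt, ut = trust
    b, d, u = op
    return (bt * b, bt * d, dt + ut + bt * u)
```

A useful sanity check is that each operator maps valid opinions to valid opinions: the three components of every result still sum to 1.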

  11. Dynamic Trust • Trust production based on direct interaction • Identify behavior patterns and their characteristic features • Determine which pattern is the best match of an interaction sequence • Develop personalized trust production algorithms considering behavior patterns • Reputation aggregation • Global reputation vs. personalized reputation • Personalized reputation aggregation • Determine the subset of trust information useful for a specific trustor by using collaborative filters • Translate trust information into the scale of a specific trustor
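The personalized reputation aggregation step above can be made concrete with a small similarity-weighted collaborative filter: a peer's advice about a target is weighted by how closely that peer's past ratings agree with the trustor's own. The similarity measure and data layout here are illustrative assumptions, not the project's algorithm:

```python
def personalized_reputation(trustor_ratings, peers):
    """Build a personalized reputation function for one trustor.

    trustor_ratings: the trustor's own ratings, {entity: score in [0, 1]}.
    peers: {peer_id: {entity: score}} -- ratings reported by other hosts.
    """
    def similarity(own, other):
        # 1 - mean absolute difference over co-rated entities
        common = set(own) & set(other)
        if not common:
            return 0.0
        return 1.0 - sum(abs(own[e] - other[e]) for e in common) / len(common)

    def reputation(target):
        num = den = 0.0
        for ratings in peers.values():
            if target in ratings:
                w = similarity(trustor_ratings, ratings)
                num += w * ratings[target]
                den += w
        return num / den if den else None  # None: no usable advice
    return reputation
```

This realizes the "translate trust information into the scale of a specific trustor" idea only loosely: dissimilar peers are simply down-weighted rather than re-scaled.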

  12. Trust Enhanced Role Assignment (TERA) Prototype • Trust enhanced role mapping (TERM) server assigns roles to users based on • Uncertain & subjective evidence • Dynamic trust • Reputation server • Dynamic trust information repository • Evaluate reputation from trust information by using algorithms specified by TERM server Prototype and demo are available at http://www.cs.purdue.edu/homes/bb/NSFtrust/

  13. TERA Architecture

  14. Trust Enhanced Role Mapping (TERM) Server • Evidence rewriting • Role assignment • Policy parser • Request processor & inference engine • Constraint enforcement • Policy base • Trust information management • User behavior modeling • Trust production

  15. TERM Server

  16. Fraud Formalization and Detection • Model fraud intention • Uncovered deceiving intention • Trapping intention • Illusive intention • Fraud detection • Profile-based anomaly detection • Monitor suspicious actions based upon the established patterns of an entity • State transition analysis • Build an automaton to identify activities that lead towards a fraudulent state

  17. Model Fraud Intentions • Uncovered deceiving intention • Satisfaction ratings are stably low. • Ratings vary in a small range over time.

  18. Model Fraud Intentions • Trapping intention • Rating sequence can be divided into two phases: preparing and trapping. • A swindler behaves well to achieve a trustworthy image before he conducts frauds.

  19. Model Fraud Intentions • Illusive intention • A smart swindler attempts to cover the bad effects by intentionally doing something good after misbehaviors. • Process of preparing and trapping is repeated.

  20. B. Private and Trusted Interactions • Problem • Preserve privacy, gain trust, and control dissemination of data • Research directions • Dissemination of private data • Privacy and trust tradeoff • Privacy metrics • Challenges • Specify policies through metadata and establish guards as procedures • Efficient implementation • Estimate privacy depending on who will get this information, possible uses of this information, and information disclosed in the past • Privacy metrics are usually ad hoc and customized Detail slides at http://www.cs.purdue.edu/homes/bb/priv_trust_cerias.ppt

  21. Preserving Privacy in Data Dissemination • Design self-descriptive private objects • Construct a mechanism for apoptosis of private objects (apoptosis = clean self-destruction) • Develop proximity-based evaporation of private objects • Develop schemes for data distortions

  22. Privacy-Trust Tradeoff • Gain a certain level of trust with the least loss of privacy • Build trust based on digital credentials of users that contain private information • Formulate the privacy-trust tradeoff problem • Estimate privacy loss due to disclosing a set of credentials • Estimate trust gain due to disclosing a set of credentials • Develop algorithms that minimize privacy loss for required trust gain
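The tradeoff problem on this slide can be phrased as a small combinatorial optimization: reach the required trust gain while minimizing total privacy loss. Below is a brute-force sketch under the simplifying (and our own) assumption that gains and losses are additive per credential; the slides do not specify the actual loss model:

```python
from itertools import combinations

def min_privacy_loss(credentials, required_gain):
    """Exhaustively find the credential subset that meets `required_gain`
    in trust with the least total privacy loss.

    credentials: list of (name, trust_gain, privacy_loss) tuples.
    Returns (total_loss, chosen_names) or None if the gain is unreachable.
    """
    best = None
    for r in range(1, len(credentials) + 1):
        for subset in combinations(credentials, r):
            gain = sum(c[1] for c in subset)
            loss = sum(c[2] for c in subset)
            if gain >= required_gain and (best is None or loss < best[0]):
                best = (loss, [c[0] for c in subset])
    return best
```

Exhaustive search is fine for the handful of credentials a user typically holds; larger credential sets would call for the probability- or lattice-based evaluation methods mentioned on the experimental-studies slide.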

  23. Privacy Metrics • Determine the degree of data privacy • Size of anonymity set metrics • Entropy-based metrics • Privacy metrics should account for: • Dynamics of legitimate users • Dynamics of violators • Associated costs

  24. Size of Anonymity Set Metrics • The larger the set of indistinguishable entities, the lower the probability of identifying any one of them (“hiding in a crowd”) • Can be used to “anonymize” a selected private attribute value within the domain of all its possible values [Figure: anonymity sets, from “less” anonymous (1/4) to “more” anonymous (1/n)]

  25. Dynamics of Entropy • Decrease of system entropy with attribute disclosures (capturing dynamics) • When entropy reaches a threshold (b), data evaporation can be invoked to increase entropy by controlled data distortions • When entropy drops to a very low level (c), apoptosis can be triggered to destroy private data • Entropy increases (d) if the set of attributes grows or the disclosed attributes become less valuable – e.g., obsolete or more data now available [Figure: entropy level H* vs. all attributes and disclosed attributes, with stages (a)–(d)]
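The threshold behavior described on this slide follows directly from the Shannon entropy of the attacker's belief over candidate attribute values. The sketch below maps entropy levels to the slide's responses; the threshold values are illustrative assumptions, not constants from the prototype:

```python
import math

def entropy(probs):
    """Shannon entropy (bits) of the attacker's belief distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def privacy_action(probs, evaporate_at=2.0, apoptosis_at=0.5):
    """Map current entropy to a response: thresholds are hypothetical."""
    h = entropy(probs)
    if h <= apoptosis_at:
        return h, "apoptosis"   # entropy very low: destroy private data
    if h <= evaporate_at:
        return h, "evaporate"   # distort data to raise entropy
    return h, "ok"
```

For example, a uniform belief over 16 candidate values gives 4 bits of entropy (safe), while a belief concentrated 90% on one value falls below half a bit and would trigger apoptosis under these thresholds.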

  26. Private and Trusted System (PRETTY) Prototype [Figure: PRETTY architecture — user and server applications interacting with the TERA server through numbered steps (1)–(4)] TERA = Trust-Enhanced Role Assignment

  27. Information Flow for PRETTY • User application sends a query to the server application. • Server application sends user information to the TERA server for trust evaluation and role assignment. • If a higher trust level is required for the query, the TERA server sends a request for additional user credentials to the privacy negotiator. • Based on the server’s privacy policies and the credential requirements, the server’s privacy negotiator interacts with the user’s privacy negotiator to build a higher level of trust. • The trust gain and privacy loss evaluator selects credentials that will increase trust to the required level with the least privacy loss. The calculation considers credential requirements and credentials disclosed in previous interactions. • According to the privacy policies and the calculated privacy loss, the user’s privacy negotiator decides whether or not to supply credentials to the server. • Once the trust level meets the minimum requirements, appropriate roles are assigned to the user for execution of his query. • Based on the query results, the user’s trust level, and privacy policies, the data disseminator determines: (i) whether to distort data and, if so, to what degree, and (ii) what privacy enforcement metadata should be associated with it.

  28. Experimental Studies • Private object implementation • Validate and evaluate the cost, efficiency, and the impacts on the dissemination of objects • Study the apoptosis and evaporation mechanisms for private objects • Tradeoff between privacy and trust • Study the effectiveness and efficiency of the probability-based and lattice-based privacy loss evaluation methods • Assess the usability of the evaluator of trust gain and privacy loss • Location-based routing and services • Evaluate the dynamic mappings between trust levels and distortion levels

  29. C. Tomography Research • Problem • Defend against denial of service attacks • Optimize the selection of data providers in peer-to-peer systems • Research Directions • Stripe-based probing to infer individual link loss by edge-to-edge measurements • Overlay-based monitoring to identify congested links by end-to-end path measurement • Topology inference to estimate available bandwidth by path segment measurements

  30. Defeating DoS Attacks in the Internet

  31. Overlay-based Monitoring • Individual link loss is not needed to identify all congested links • Edge routers form an overlay network for probing; each edge router probes part of the network • Problem statement • Given the topology of a network domain, identify which links are congested and possibly under attack

  32. Attack Scenarios [Figure: (a) changing delay pattern due to attack (delay in ms vs. time in sec); (b) changing loss pattern due to attack (loss ratio vs. time in sec)]

  33. Identified Congested Links [Figure: loss ratio vs. time for (a) counter-clockwise probing and (b) clockwise probing] Probe46 in graph (a) and Probe76 in graph (b) observe high losses, which means link C4 → E6 is congested.

  34. Probing: Simple Method [Figure: (a) topology, (b) overlay, (c) internal links; the congested link is highlighted]

  35. Analyzing Simple Method • Lemma 1. If P and P′ are probe paths in the first and the second round of probing respectively, |P ∩ P′| ≤ 1 • Theorem 1. If only one probe path P is shown to be congested in any round of probing, the simple method successfully identifies the status of each link in P • Performs better if edge-to-edge paths are congested • The average length of the probe paths in the simple method is ≤ 4

  36. Performance: Simple Method Theorem 2. Let p be the probability of a link being congested in any arbitrary overlay network. The simple method determines the status of any link of the topology with probability at least 2(1-p)^4 - (1-p)^7 + p(1-p)^12 [Figure: detection probability vs. fraction of actual congested links]

  37. Advanced Method
  AdvancedMethod()
  begin
    Conduct Simple Method. E is the unsolved equation set
    for each undecided variable Xij of E do
      node1 = FindNode(Tree T, vi, IN)
      node2 = FindNode(Tree T, vj, OUT)
      if node1 ≠ NULL AND node2 ≠ NULL then
        Probe(node1, node2). Update equation set E
      end if
      Stop if no more probes exist
    end for
  end

  38. Analyzing Advanced Method • Lemma 2. For an arbitrary overlay network with n edge routers, on the average a link lies on b = edge-to-edge paths • Lemma 3. For an arbitrary overlay network with n edge routers, the average length of all edge-to-edge paths is d = • Theorem 3. Let p be the probability of a link being congested. The advanced method can detect the status of a link with probability at least 1-(1-(1-p)^d)^b
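The detection-probability bounds from Theorems 2 and 3 are easy to evaluate numerically. Since the slides' closed-form expressions for b and d did not survive extraction, the sketch below simply treats them as parameters:

```python
def simple_bound(p):
    """Theorem 2 lower bound on detection probability (simple method),
    where p is the probability that a link is congested."""
    q = 1 - p
    return 2 * q**4 - q**7 + p * q**12

def advanced_bound(p, d, b):
    """Theorem 3 lower bound (advanced method): d is the average
    edge-to-edge path length, b the average number of edge-to-edge
    paths through a link (their formulas are elided in the slides)."""
    return 1 - (1 - (1 - p)**d)**b
```

Both bounds equal 1 when no link is congested (p = 0), and the advanced bound improves as more edge-to-edge paths cross each link (larger b), which matches the intuition that extra probes give more equations to solve.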

  39. D. Intruder Identification in Ad hoc On-Demand Distance Vector (AODV) Routing • Problem • AODV is vulnerable to various attacks such as false distance vector, false destination sequence, and wormhole attacks • Detecting attacks without identifying and isolating the malicious hosts leaves the security mechanisms in a passive mode • Challenges • Locate the sources of attacks in a self-organized infrastructure • Combine local decisions with knowledge from other hosts to reach consistent conclusions about the malicious hosts

  40. Attacks on Routing in Mobile Ad Hoc Networks [Diagram: taxonomy of attacks on routing — active and passive attacks on the routing procedure, including silent packet discard, routing information hiding, network flooding, route request and route broken message manipulation, false replies, and wormhole attacks]

  41. Related Work • Vulnerability model of ad hoc routing protocols [Yang et al., SASN ’03] • A generic multi-layer integrated IDS structure [Zhang and Lee, MobiCom ’00] • IDS combined with trust [Albert et al., ICEIS ’02] • Information theoretic measures using entropy [Okazaki et al., SAINT ’02] • SAODV adopts both hash chains and digital signatures to protect routing information [Zapata et al., WiSe ’03] • Security-aware ad hoc routing [Kravets et al., MobiHoc ’01]

  42. Ideas • Monitor the sequence numbers in the route request packets to detect abnormal conditions • Apply reverse labeling restriction to identify and isolate attackers • Combine local decisions with knowledge from other hosts to achieve consistent conclusions • Combine with trust assessment methods to improve robustness

  43. Introduction to AODV • Introduced in ’97 by Perkins at Nokia and Royer at UCSB • 12 versions of the IETF draft in 4 years, 4 academic implementations, 2 simulations • Combines on-demand routing with distance vectors • Broadcast route query, unicast route reply • Quick adaptation to dynamic link conditions and scalability to large networks • Supports multicast

  44. Route Discovery in AODV (An Example) [Figure: route discovery among source S, intermediate nodes S1–S4, and destination D, showing reverse routes to the source and forward routes to the destination]

  45. Attacks on AODV • Route request flooding: query a non-existing host (the RREQ will flood throughout the network) • False distance vector: reply “one hop to destination” to every request and select a large enough sequence number • False destination sequence number: select a large number (even beating the reply from the real destination) • Wormhole attack: tunnel route requests through the wormhole and attract data traffic to it • Coordinated attacks: malicious hosts establish trust to frame other hosts, or conduct attacks alternately to avoid being identified

  46. Impacts of Attacks on AODV We simulate the attacks and measure their impacts on packet delivery ratios and protocol overhead

  47. False Destination Sequence Attack [Figure: S broadcasts RREQ(D, 3); the real destination D replies RREP(D, 5) while malicious host M replies RREP(D, 20)] Packets from S to D sink at M. Node movement breaks the path from S to M (triggering route rediscovery).

  48. During Route Rediscovery, the False Destination Sequence Attack Is Detected (1) S broadcasts a request that carries the old sequence + 1 = 21. (2) D receives the RREQ. Its local sequence is 5, but the sequence in the RREQ is 21, so D detects the false destination sequence attack. [Figure: propagation of RREQ(D, 21) from S through S1–S4 toward D]

  49. Reverse Labeling Restriction (RLR) • Basic Ideas • Every host maintains a blacklist to record suspicious hosts; blacklists are updated after an attack is detected, and suspicious hosts can later be released from the blacklist. • The destination host broadcasts an INVALID packet with its signature when it finds that the system is under a sequence-number attack. The packet carries the host’s identification, current sequence, new sequence, and its own blacklist. • Every host receiving this packet examines its route entry to the destination host. If the entry’s sequence number is larger than the current sequence in the INVALID packet, the presence of an attack is noted, and the previous hop that provided the false route is added to this host’s blacklist.
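A single host's RLR update on receiving an INVALID packet might look like the following sketch; the route-table layout (a dict of per-destination entries) is an illustrative assumption:

```python
def process_invalid(route_table, blacklist, dest, current_seq):
    """One host's Reverse Labeling Restriction step (simplified sketch).

    On a signed INVALID(dest, current_seq, new_seq, ...) packet, a host
    whose cached destination sequence exceeds the destination's true
    current sequence learned that route from an attacker: it blacklists
    the hop that supplied the false route and drops the poisoned entry.
    """
    entry = route_table.get(dest)
    if entry is not None and entry["seq"] > current_seq:
        blacklist.add(entry["next_hop"])  # previous hop provided the false route
        del route_table[dest]             # discard the poisoned entry
    return blacklist
```

Replaying slide 50's example: S1 holds a route to D with sequence 20 learned from S2, so on INVALID(D, 5, 21, ...) it blacklists S2, while a host with a legitimate entry (sequence ≤ 5) changes nothing.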

  50. [Figure: INVALID(D, 5, 21, {}, SIGN) propagates from D toward S; resulting blacklists: S1’s BL = {S2}, S’s BL = {S1}, S2’s BL = {M}] D broadcasts the INVALID packet with current sequence = 5, new sequence = 21. S3 examines its route table; its entry to D is not false, so S3 forwards the packet to S1. S1 finds that its route entry to D has sequence 20, which is > 5, so it knows the route is false. The hop that provided this false route to S1 was S2, so S2 is put into S1’s blacklist. S1 forwards the packet to S2 and S. S2 adds M to its blacklist. S adds S1 to its blacklist. S forwards the packet to S4. S4 does not change its blacklist since it is not involved in this route. The correct destination sequence number is broadcast, and the blacklist at each host on the path is determined.
