
Agent approaches to Security, Trust and Privacy in Pervasive Computing




  1. Agent approaches to Security, Trust and Privacy in Pervasive Computing Anupam Joshi joshi@cs.umbc.edu http://www.cs.umbc.edu/~joshi/

  2. The Vision • Pervasive Computing: a natural extension of the present human computing lifestyle • Using computing technologies will be as natural as using other, non-computing technologies (e.g., pen, paper, and cups) • Computing services will be available anytime and anywhere.

  3. Pervasive Computing “The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it.” – Mark Weiser Think: writing, central heating, electric lighting, … Not: taking your laptop to the beach, or immersing yourself in a virtual reality

  4. Today: Life is Good.

  5. Tomorrow: We Got Problems!

  6. The Brave New World • Devices are increasingly more powerful, smaller, and cheaper • People interact daily with hundreds of computing devices (many of them mobile): • Cars • Desktops/Laptops • Cell phones • PDAs • MP3 players • Transportation passes ⇒ Computing is becoming pervasive

  7. Securing Data & Services • Security is critical because in many pervasive applications, we interact with agents that are not in our “home” or “office” environment. • Much of the work in security for distributed systems is not directly applicable to pervasive environments • Need to build analogs to trust and reputation relationships in human societies • Need to worry about privacy!

  8. An early policy for agents 1 A robot may not injure a human being, or, through inaction, allow a human being to come to harm. 2 A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. 3 A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. -- Handbook of Robotics, 56th Edition, 2058 A.D.

  9. On policies, rules and laws • The interesting thing about Asimov’s laws was that robots did not always strictly follow them. • This is a point of departure from more traditional “hard-coded” rules like DB access control and OS file permissions • For autonomous agents, we need policies that describe “norms of behavior” that they should follow to be good citizens. • So, it’s natural to worry about issues like • When an agent is governed by multiple policies, how does it resolve conflicts among them? • How can we define penalties when agents don’t fulfill their obligations? • How can we relate notions of trust and reputation to policies?
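The conflict-resolution question raised above can be sketched with a simple precedence scheme: when several policies govern the same action, the highest-priority policy that expresses a verdict wins. This is a hypothetical minimal model for illustration, not the policy framework discussed later in the talk; the `Policy` fields and the default-permit choice are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Policy:
    name: str
    priority: int   # lower number = higher precedence (First Law = 1)
    verdict: str    # "permit" or "forbid"

def resolve(policies):
    """Resolve conflicting verdicts by precedence: the highest-priority
    policy wins; with no applicable policy, default to permit."""
    ordered = sorted(policies, key=lambda p: p.priority)
    return ordered[0].verdict if ordered else "permit"

# Asimov-style ordering: "no harm" outranks "obey orders", so an order
# that would cause harm is forbidden.
laws = [Policy("obey-order", 2, "permit"),
        Policy("no-harm", 1, "forbid")]
print(resolve(laws))  # -> forbid
```

A real agent framework would need richer machinery (applicability conditions, obligations, delegation), but the precedence idea is the same.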

  10. The Role of Ontologies We will require shared ontologies to support this framework • A common ontology to represent basic concepts: agents, actions, permissions, obligations, prohibitions, delegations, credentials, etc. • Appropriate shared ontologies to describe classes, properties and roles of people and agents, e.g., • “any device owned by TimFinin” • “any request from a faculty member at ETZ” • Ontologies to encode policy rules

  11. ad-hoc networking technologies • Ad-hoc networking technologies (e.g. Bluetooth) • Main characteristics: • Short range • Spontaneous connectivity • Free, at least for now • Mobile devices • Aware of their neighborhood • Can discover others in their vicinity • Interact with peers in their neighborhood • Inter-operate and cooperate as needed and as desired • Both information consumers and providers ⇒ Ad-hoc mobile technology challenges the traditional client/server information access model

  12. pervasive environment paradigm • Pervasive Computing Environment • Ad-hoc mobile connectivity • Spontaneous interaction • Peers • Service/Information consumers and providers • Autonomous, adaptive, and proactive • “Data intensive,” “deeply networked” environment • Everyone can exchange information • Data-centric model • Some sources generate “streams” of data, e.g. sensors ⇒ Pervasive Computing Environments

  13. motivation – conference scenario • Smart-room infrastructure and personal devices can assist an ongoing meeting: data exchange, schedulers, etc.

  14. imperfect world • In a perfect world • everything available and done automatically • In the real world • Limited resources • Battery, memory, computation, connection, bandwidth • Must live with less than perfect results • Dumb devices • Must explicitly be told What, When, and How • “Foreign” entities and unknown peers • So, we really want smart, autonomous, dynamic, adaptive, and proactive methods to handle data and services…

  15. Securing Ad-Hoc Networks • MANETs underlie much of pervasive computing • They bring to the fore interesting problems related to • Open • Dynamic • Distributed systems • Each node is an “independent, autonomous” router • Has to interact with other nodes, some never seen before • How do you detect bad guys?

  16. “Network Level: Good Neighbor” • Ad hoc network • Node A sends a packet destined for E through B. • B and C make a snoop entry (A,E,Ck,B,D,E). • B and C check for the snoop entry. • Perform misroute [diagram: nodes A, B, C, D, E]
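The snooping idea on this slide can be sketched as follows: neighbors that overhear a packet cache the source route it should follow, then verify that the forwarding node really sends it to the prescribed next hop. Node names follow the slide's A–E example; the data structures and function names are illustrative, not the actual protocol.

```python
# Snoop table: (src, dst) -> overheard source route.
snoop_table = {}

def snoop(src, dst, route):
    """Cache the overheard source route for the (src, dst) flow."""
    snoop_table[(src, dst)] = route

def check_forward(sender, next_hop, src, dst):
    """True if `sender` forwarded toward the hop the snooped route
    prescribes; False signals a possible misroute."""
    route = snoop_table.get((src, dst))
    if route is None or sender not in route[:-1]:
        return True  # nothing snooped for this flow; cannot judge
    expected = route[route.index(sender) + 1]
    return next_hop == expected

# B and C overhear A's packet for E and record the route A-B-D-E.
snoop("A", "E", ["A", "B", "D", "E"])
print(check_forward("B", "D", "A", "E"))  # honest forward -> True
print(check_forward("B", "C", "A", "E"))  # misroute -> False
```

In the actual scheme the snoop entry also carries a checksum (the `Ck` field on the slide) so neighbors can detect packet modification as well as misrouting.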

  17. “Good Neighbor” [diagram: nodes A, B, C, D, E] • No Broadcast • Hidden terminal • Exposed terminal • DSR vs. AODV • GLOMOSIM

  18. Intrusion Detection • Behaviors • Selfish • Malicious • Detection vs. Reactions • Shunning bad nodes • Cluster Voting • Incentives (Game Theoretic) • Colluding nodes • Forgiveness
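The "cluster voting" reaction mentioned above can be sketched as a simple majority rule: a suspect node is shunned when more than a quorum of its cluster votes it malicious. This is a hypothetical minimal version; the quorum value and vote format are assumptions, not the scheme evaluated in the GlomoSim experiments.

```python
from collections import Counter

def cluster_vote(votes, quorum=0.5):
    """Shun a suspect iff more than `quorum` of the cluster's votes
    label it malicious. `votes` maps voter id -> "malicious"/"benign"."""
    tally = Counter(votes.values())
    return tally["malicious"] / len(votes) > quorum

# Two of three neighbors accuse the suspect: the cluster shuns it.
votes = {"B": "malicious", "C": "malicious", "D": "benign"}
print(cluster_vote(votes))  # -> True
```

A fuller scheme would also weight votes by voter reputation (to resist colluding nodes) and allow forgiveness, i.e. decaying old accusations over time.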

  19. Simulation in GlomoSim • Passive Intrusion Detection • Individual determination • No results forwarding • Active Intrusion Detection • Cluster Scheme • Voting • Result flooding

  20. GlomoSim Setup • 16 communicating nodes • 4 nodes as sources for 2 CBR streams • 2 node pairs per CBR stream • Mobility 0–20 meters/sec • Pause time 0–15 s • No bad nodes

  21. Simulation Results

  22. Preliminary Results • Passive • False alarm rate > 50% • Throughput rate decrease < 3% additional • Active • False alarm rate < 30% • Throughput rate decrease ~ 25% additional

  23. challenges – is that all? (1) • Spatio-temporal variation of data and data sources • All devices in the neighborhood are potential information providers • Nothing is fixed • No global catalog • No global routing table • No centralized control • However, each entity can interact with its neighbors • By advertising / registering its service • By collecting / registering services of others

  24. challenges – is that all? (2) • Query may be explicit or implicit, but is often known up-front • Users sometimes ask explicitly • e.g. tell me the nearest restaurant that has vegetarian menu items • The system can “guess” likely queries based on declarative information or past behavior • e.g. the user always wants to know the price of IBM stock

  25. challenges – is that all? (3) • Since information sources are not known a priori, schema translations cannot be done beforehand • Resource-limited devices, so we hope for common, domain-specific ontologies • Different modes: • Device could interact with only those providers whose schemas it understands • Device could interact with anyone, and cache the information in hopes of a translation in the future • Device could always try to translate itself • Prior work in schema translation; ongoing work in ontology mapping

  26. challenges – is that all? (4) • Cooperation amongst information sources cannot be guaranteed • A device may have reliable information but make it inaccessible • A device may provide information that is unreliable • Once a device shares information, it needs the capability to protect future propagation of, and changes to, that information

  27. challenges – is that all? (5) • Need to avoid humans in the loop • Devices must dynamically "predict" data importance and utility based on the current context • The key insight: declarative (or inferred) descriptions help • Information needs • Information capability • Constraints • Resources • Data • Answer fidelity • Expressive Profiles can capture such descriptions

  28. 4. our data management architecture MoGATU • Design and implementation consists of • Data • Metadata • Profiles • Entities • Communication interfaces • Information Providers • Information Consumers • Information Managers

  29. MoGATU – metadata • Metadata representation • To provide information about • Information providers and consumers, • Data objects, and • Queries and answers • To describe relationships • To describe restrictions • To reason over the information ⇒ Semantic language • DAML+OIL / DAML-S • http://mogatu.umbc.edu/ont/

  30. MoGATU – profile • Profile • User – preferences, schedule, requirements • Device – constraints, providers, consumers • Data – ownership, restriction, requirements, process model • Profiles based on BDI models • Beliefs are “facts” • about user or environment/context • Desires and Intentions • higher level expressions of beliefs and goals • Devices “reason” over the BDI profiles • Generate domains of interest and utility functions • Change domains and utility functions based on context
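The BDI profile idea above can be sketched with plain data structures: beliefs are facts about the user and context, desires and intentions are higher-level goals, and the device derives domains of interest from them. The field names, example values, and derivation rules below are all hypothetical; MoGATU's actual profiles are encoded in a semantic language, not Python dictionaries.

```python
# Illustrative BDI-style profile (field names are assumptions).
profile = {
    "beliefs": {"location": "conference room", "battery": 0.4},
    "desires": ["stay reachable", "minimize battery use"],
    "intentions": ["fetch meeting schedule"],
}

def domains_of_interest(p):
    """Derive query domains by reasoning over beliefs and intentions,
    as the slide describes devices doing over BDI profiles."""
    domains = set()
    if p["beliefs"]["location"] == "conference room":
        domains.add("meeting data")
    if "fetch meeting schedule" in p["intentions"]:
        domains.add("schedules")
    return domains

print(domains_of_interest(profile))
```

Context change (say, a new `location` belief) would re-trigger the derivation, which is how the profile lets domains and utility functions adapt to context.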

  31. MoGATU – information manager (8) • Problems • Not all sources and data are correct/accurate/reliable • No common sense • A person can evaluate a web site based on how it looks; a computer cannot • No centralized party that could verify peer reliability or the reliability of its data • A device may be reliable, malicious, ignorant, or uncooperative • Distributed Belief • Need to depend on other peers • Evaluate integrity of peers and data based on distributed peer belief • Detect which peer and what data is accurate • Detect malicious peers • Incentive model: if A is malicious, it will be excluded from the network…

  32. MoGATU – information manager (9) • Distributed Belief Model • Device sends a query to multiple peers • Asks its vicinity for the reputation of untrusted peers that responded to the query • Trust a device only if it was trusted before or if enough trusted peers trust it… • Use answers from (recommended-to-be) trusted peers to determine the answer • Update reputation/trust level for all devices that responded • Trust level increases for devices that responded in accordance with the final answer • Trust level decreases for devices that responded in a conflicting way • Each device builds a ring of trust…
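The query-and-update loop above can be sketched as a trust-weighted vote: each peer's answer is weighted by its trust level, the heaviest answer wins, and trust is then raised for agreeing peers and lowered for conflicting ones. The initial trust values, fixed step size, and clamping below are illustrative assumptions, not MoGATU's actual parameters.

```python
from collections import defaultdict

trust = {"B": 0.2, "C": 0.5, "D": 0.8}  # prior trust in peers
STEP = 0.1                               # illustrative learning step

def resolve_and_update(answers):
    """Pick the answer with the highest trust-weighted support, then
    reward agreeing peers and penalize conflicting ones."""
    weight = defaultdict(float)
    for peer, ans in answers.items():
        weight[ans] += trust.get(peer, 0.0)
    final = max(weight, key=weight.get)
    for peer, ans in answers.items():
        delta = STEP if ans == final else -STEP
        trust[peer] = min(1.0, max(0.0, trust.get(peer, 0.0) + delta))
    return final

# The "where is Bob?" example from the next slides: B and D say home,
# C says work; B plus D outweigh C, so trust in C drops.
answers = {"B": "home", "C": "work", "D": "home"}
final = resolve_and_update(answers)
print(final)  # -> home
```

This mirrors the slide's outcome: trust in B and D increases, trust in C decreases, and repeated rounds build the "ring of trust."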

  33. A: D, where is Bob? A: B, where is Bob? A: C, where is Bob?

  34. C: A, Bob is at work. B: A, Bob is home. D: A, Bob is home.

  35. A: B says Bob is at home, C says Bob is at work, D says Bob is at home. A: I have enough trust in D. What about B and C?

  36. A: Do you trust C? B: I am not sure. C: I always do. D: I don’t. E: I don’t. F: I do. A: I don’t care what C says. I don’t know enough about B, but I trust D, E, and F. Together, they don’t trust C, so neither will I.

  37. A: Do you trust B? B: I do. C: I never do. D: I am not sure. E: I do. F: I am not sure. A: I don’t care what B says. I don’t trust C, but I trust D, E, and F. Together, they trust B a little, so I will too.

  38. A: I trust B and D; both say Bob is home… A: Bob is home! A: Increase trust in B. A: Increase trust in D. A: Decrease trust in C.

  39. MoGATU – information manager (10) • Distributed Belief Model • Initial Trust Function • Positive, negative, undecided • Trust Learning Function • Blindly +, Blindly -, F+/S-, S+/F-, F+/F-, S+/S-, Exp • Trust Weighting Function • Multiplication, cosine • Accuracy Merging Function • Max, min, average
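The weighting and merging functions named above can be illustrated concretely: an answer's accuracy estimate is weighted by the responder's trust (multiplication), and several weighted estimates for the same answer are merged by max, min, or average. The numeric examples below are invented; the exact MoGATU definitions (and the cosine weighting variant) may differ.

```python
# Accuracy-merging functions from the slide: combine several
# trust-weighted accuracy estimates for one candidate answer.
def merge_max(scores): return max(scores)
def merge_min(scores): return min(scores)
def merge_avg(scores): return sum(scores) / len(scores)

# Trust weighting by multiplication: a responder's claimed accuracy
# scaled by how much we trust that responder.
def weighted(accuracy, trust):
    return accuracy * trust

# Three peers report the same answer with different accuracies/trust.
scores = [weighted(0.9, 0.8), weighted(0.7, 0.5), weighted(1.0, 0.3)]
print(merge_max(scores), merge_min(scores), merge_avg(scores))
```

Note how the choice matters: MAX lets a single highly trusted peer dominate, MIN is pessimistic, and AVG smooths outliers, which is why the experiments later compare all three.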

  40. experiments • Primary goal of distributed belief • Improve query-processing accuracy by using trusted sources and trusted data • Problems • Not all sources and data are correct/accurate/reliable • No centralized party that could verify peer reliability or the reliability of its data • Need to depend on other peers • No common sense • A person can evaluate a web site based on how it looks; a computer cannot • Solution • Evaluate integrity of peers and data based on distributed peer belief • Detect which peer and what data is accurate • Detect malicious peers • Incentive model: if A is malicious, it will be excluded from the network…

  41. experiments • Devices • Reliable (share reliable data only) • Malicious (try to share unreliable data as reliable) • Ignorant (have unreliable data but believe it is reliable) • Uncooperative (have reliable data, will not share) • Model • Device sends a query to multiple peers • Asks its vicinity for the reputation of untrusted peers that responded to the query • Trust a device only if it was trusted before or if enough trusted peers trust it… • Use answers from (recommended-to-be) trusted peers to determine the answer • Update reputation/trust level for all devices that responded • Trust level increases for devices that responded in accordance with the final answer • Trust level decreases for devices that responded in a conflicting way

  42. experimental environment • HOW: • MoGATU and GloMoSim • Spatio-temporal environment: • 150 × 150 m² field • 50 nodes • Random way-point mobility • AODV • Cache to hold 50% of global knowledge • Trust-based LRU • 50 minutes per simulation run • 800 question tuples • Each device: 100 random unique questions • Each device: 100 random unique answers not matching its questions • Each device initially trusts 3–5 other devices

  43. experimental environment (2) • Level of Dishonesty • 0 – 100% • Dishonest device • Never provides an honest answer • Honest device • Best effort • Initial Trust Function • Positive, negative, undecided • Trust Learning Function • Blindly +, Blindly -, F+/S-, S+/F-, F+/F-, S+/S-, Exp • Trust Weighting Function • Multiplication, cosine • Accuracy Merging Function • Max, min, avg • Trust and Distrust Convergence • How soon are dishonest devices detected

  44. results • Answer Accuracy vs. Trust Learning Functions • Answer Accuracy vs. Accuracy Merging Functions • Distrust Convergence vs. Dishonesty Level

  45. Answer Accuracy vs. Trust Learning Functions • The effects of trust learning functions with an initial optimistic trust for environments with varying levels of dishonesty. • The results are shown for ∆++, ∆--, ∆s, ∆f, ∆f+, ∆f-, and ∆exp learning functions.

  46. Answer Accuracy vs. Trust Learning Functions (2) • The effects of trust learning functions with an initial pessimistic trust for environments with varying levels of dishonesty. • The results are shown for ∆++, ∆--, ∆s, ∆f, ∆f+, ∆f-, and ∆exp learning functions.

  47. Answer Accuracy vs. Accuracy Merging Functions • The effects of accuracy merging functions for environments with varying levels of dishonesty. The results are shown for • (a) MIN using only-one (OO) final answer approach • (b) MIN using highest-one (HO) final answer approach • (c) MAX + OO, (d) MAX + HO, (e) AVG + OO, and (f) AVG + HO.

  48. Distrust Convergence vs. Dishonesty Level • Average distrust convergence period in seconds for environments with varying levels of dishonesty. • The results are shown for ∆++, ∆--, ∆s, and ∆f trust learning functions with an initial optimistic trust strategy, and for the same functions using an undecided initial trust strategy for results (e-h), respectively.

  49. http://ebiquity.umbc.edu/
