
Data-Centric Storage in Sensornets with GHT, A Geographic Hash Table




  1. Data-Centric Storage in Sensornets with GHT, A Geographic Hash Table Presented by: Yan Lu Date: 04/03/2003 CS691-WMU

  2. Outline • Background • Existing Schemes • Data-Centric Storage • Performance • Conclusion • References

  3. Background • Sensornet ♦ A distributed sensing network composed of a large number of small sensing devices, each equipped with • processor • memory • radio ♦ Generates a great volume of data • Data Dissemination Algorithm ♦ Scalable ♦ Self-organizing ♦ Energy efficient

  4. Observations/Events/Queries • Observation ♦ Low-level output from sensors ♦ E.g. detailed temperature and pressure readings • Event ♦ Constellations of low-level observations ♦ E.g. elephant-sighting, fire, intruder • Query ♦ Used to elicit event information from sensornets ♦ E.g. locations of fires in the network, images of detected intruders

  5. Existing Schemes • External Storage (ES) • Local Storage (LS) • Data-Centric Storage (DCS)

  6. External Storage (ES)

  7. ES Problems

  8. Local Storage (LS)

  9. Local Storage (LS)

  10. Data-Centric Storage (DCS) • Events are named with keys • DCS stores data as (key, value) pairs • DCS supports two operations: ♦ Put(k, v) stores v (the observed data) according to the key k, the name of the data ♦ Get(k) retrieves whatever value is stored in association with key k • Hash function ♦ Hashes a key k into geographic coordinates ♦ Put() and Get() operations on the same key k hash k to the same location
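The two operations above can be sketched as follows. This is a minimal Python sketch, assuming a hypothetical 100×100 field, SHA-1 standing in for the (unspecified) hash function, and a plain dict standing in for the node nearest the hashed location:

```python
import hashlib

# Assumed field dimensions; a real deployment would use the sensornet's bounds.
FIELD_W, FIELD_H = 100.0, 100.0

def geographic_hash(key: str) -> tuple:
    """Hash an event name into (x, y) coordinates inside the field."""
    digest = hashlib.sha1(key.encode()).digest()
    x = int.from_bytes(digest[:4], "big") / 2**32 * FIELD_W
    y = int.from_bytes(digest[4:8], "big") / 2**32 * FIELD_H
    return (x, y)

store = {}  # stands in for storage at the node nearest the hashed location

def put(key, value):
    # Put() and Get() hash the same key to the same location,
    # so they rendezvous at the same node without any directory.
    loc = geographic_hash(key)
    store.setdefault(loc, []).append(value)
    return loc

def get(key):
    return store.get(geographic_hash(key), [])
```

The point of the sketch is the rendezvous property: any node calling `get("elephant")` recomputes the same coordinates that `put("elephant", ...)` used, so no node needs to know where the data actually lives.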

  11. DCS – Example Put(“elephant”, data) is routed to (11, 28) = Hash(“elephant”)

  12. DCS – Example Get(“elephant”) is routed to (11, 28) = Hash(“elephant”)

  13. DCS – Example – contd.. (figure: “elephant” and “fire” events stored at their respective hashed locations)

  14. Geographic Hash Table (GHT) • Builds on ♦ Peer-to-peer lookup systems ♦ Greedy Perimeter Stateless Routing (GPSR) (figure: GHT layered over GPSR and a peer-to-peer lookup system)

  15. Peer-to-peer Lookup System • P2P Model ♦ Each user stores a subset of the data ♦ Each user can access the data of all other users in the system ♦ More scalable than centralized / hierarchical networks • Scheme based on hashing techniques ♦ Content Addressable Networks (CAN)

  16. Content Addressable Networks (CAN) • Dynamically partitions the entire coordinate space; every node owns its own distinct zone • Each node stores the portion of the hash table that maps into its zone, plus information about adjacent zones • Data Storage: a (k, v) pair is stored at the node whose zone contains Hash(k) • Data Retrieval: a request for key k is routed zone by zone toward Hash(k)
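CAN's zone ownership can be illustrated with a flat sketch. The four-node partition of the unit square below is hypothetical; real CAN zones split and merge dynamically as nodes join and leave:

```python
# Hypothetical static partition of the unit square into four zones;
# each node owns one rectangle, given as (x0, y0, x1, y1).
zones = {
    "n1": (0.0, 0.0, 0.5, 0.5),
    "n2": (0.5, 0.0, 1.0, 0.5),
    "n3": (0.0, 0.5, 0.5, 1.0),
    "n4": (0.5, 0.5, 1.0, 1.0),
}

def zone_owner(point):
    """Return the node whose zone contains the hashed point."""
    x, y = point
    for node, (x0, y0, x1, y1) in zones.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return node
    return None
```

A (k, v) pair is stored at `zone_owner(Hash(k))`; retrieval hashes k again and routes to the same zone, mirroring the rendezvous idea GHT reuses in geographic space.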

  17. Greedy Perimeter Stateless Routing • The position of the destination is known • Nodes know their neighbors' positions • The routing decision is made based on the positions of the neighbors and the packet's destination • Greedy Forwarding • Perimeter Forwarding

  18. GPSR – Greedy Forwarding
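Greedy forwarding is a single next-hop decision: hand the packet to the neighbor geographically closest to the destination, and give up (fall back to perimeter mode) when no neighbor is closer than the current node. A sketch, with illustrative coordinates rather than anything from GPSR's actual implementation:

```python
import math

def dist(a, b):
    """Euclidean distance between two (x, y) positions."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def greedy_next_hop(current, neighbors, dest):
    """Return the neighbor strictly closer to dest than current,
    or None when the packet has reached a void (local maximum)."""
    best = min(neighbors, key=lambda n: dist(n, dest), default=None)
    if best is not None and dist(best, dest) < dist(current, dest):
        return best
    return None  # no progress possible: switch to perimeter forwarding
```

Returning `None` is exactly the "void" situation on the next slide: every neighbor is farther from the destination than the current node, even though a route exists around the void.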

  19. GPSR - void

  20. GPSR – Perimeter Forwarding • Right-hand rule: each node receiving a packet forwards it along the next link counterclockwise about itself from the ingress link
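The right-hand rule amounts to choosing the neighbor at the smallest counterclockwise angle from the ingress link. A geometric sketch of just that edge-selection step (not full GPSR face routing, which also requires a planarized neighbor graph):

```python
import math

def right_hand_next(node, ingress, neighbors):
    """Pick the first neighbor counterclockwise about `node`,
    starting from the link the packet arrived on (`ingress`)."""
    base = math.atan2(ingress[1] - node[1], ingress[0] - node[0])

    def ccw_angle(nb):
        a = math.atan2(nb[1] - node[1], nb[0] - node[0]) - base
        # Normalize into (0, 2*pi]; the ingress link itself maps to
        # 2*pi so it is chosen only when there is no other neighbor.
        return a % (2 * math.pi) or 2 * math.pi

    return min(neighbors, key=ccw_angle)
```

For example, a packet arriving from the west at a node with neighbors to the north, east, and south is forwarded south: that is the first link swept counterclockwise from the westward ingress link.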

  21. Home Node and Home Perimeter • The hash function is ignorant of the placement of individual nodes in the topology • Home Node ♦ The node geographically nearest the hashed destination coordinates • Home Perimeter ♦ Uses GPSR perimeter mode ♦ A packet traverses the entire perimeter that encloses the destination before returning to the home node

  22. Problems • Not robust enough ♦ Nodes could move (who becomes the new home node?) ♦ Home nodes could fail • Not scalable ♦ Home nodes could become communication bottlenecks ♦ Home nodes could exhaust their storage capacity

  23. Solutions • Perimeter Refresh Protocol ♦ Extension for robustness ♦ Handles node failures and topology changes • Structured Replication ♦ Extension for scalability ♦ Load balancing

  24. Perimeter Refresh Protocol ♦ Key stored at location L ♦ Home node: A ♦ Replicas D and E on the home perimeter (figure: perimeter nodes A (home), B, C, D (replica), E (replica), F around L)

  25. PRP – contd.. • Consistency ♦ Every Th seconds, the home node generates a refresh packet that takes a tour of the current home perimeter ♦ If the receiver is closer to the destination, it consumes that refresh packet and initiates its own ♦ If not, it forwards the refresh packet in perimeter mode ♦ Ensures that the node closest to a key's hashed location becomes the home node

  26. PRP – contd.. • Persistency ♦ When a replica node receives a refresh packet, it • caches the data in the packet • sets a takeover timer Tt for that key ♦ When the timer expires, the replica node initiates a refresh for that key and its data, addressed to the key's hashed location ♦ When the home node fails, its replica nodes thus step forward to initiate refreshes
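The interplay of the two timers can be sketched in discrete time. The interval values and the `Replica` class below are hypothetical; the one constraint the protocol implies is that Tt must exceed Th, so a live home node's refreshes keep resetting the takeover timer:

```python
T_H = 10  # home node refresh interval Th (seconds, assumed)
T_T = 30  # replica takeover timer Tt; must exceed T_H

class Replica:
    """Replica-side view of PRP: cache refreshed data, and take over
    refreshing if the home node goes silent."""

    def __init__(self):
        self.takeover_deadline = None
        self.cached = None

    def on_refresh(self, now, data):
        self.cached = data                  # cache the key's data
        self.takeover_deadline = now + T_T  # re-arm the takeover timer

    def tick(self, now):
        """Called as time advances; returns a refresh to send, if any."""
        if self.takeover_deadline is not None and now >= self.takeover_deadline:
            self.takeover_deadline = now + T_T
            # Addressed to the key's hashed location; per the consistency
            # rule, whichever node is now closest to it consumes this
            # refresh and becomes the new home node.
            return ("refresh", self.cached)
        return None
```

As long as refreshes keep arriving every Th seconds, `tick` never fires; once the home node fails, the replica's own refresh restores a home node within roughly Tt seconds.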

  27. PRP – contd.. ♦ Some time after home node A fails, replica D initiates a refresh for L

  28. PRP – contd.. ♦ Node F becomes the new home node ♦ Node F recruits replicas B, C, D and E

  29. Structured Replication (SR) • If too many events with the same key are detected, the key's home node could become a hotspot • Hierarchical decomposition of the key's hashed location ♦ d: hierarchy depth ♦ mirrors: 4^d − 1 (e.g. d = 2 gives a root point, level-1 mirror points, and level-2 mirror points)

  30. SR – contd.. • Storage cost decreases ♦ A node stores a detected event at the mirror closest to its location • Query cost increases ♦ Queries are routed to all mirror nodes recursively ♦ Responses traverse the same path in the reverse direction
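The hierarchical decomposition can be sketched by computing a key's mirror points directly: a depth-d decomposition splits the field into 4^d cells, and the root point maps to the same relative offset in every cell, yielding 4^d − 1 mirrors plus the root itself. Field size and the helper name are illustrative:

```python
def mirror_points(root, depth, width, height):
    """Return the root point plus its mirror images, one per cell of
    the depth-`depth` decomposition of a width x height field."""
    cells = 2 ** depth                    # cells per axis; 4**depth total
    cw, ch = width / cells, height / cells
    ox, oy = root[0] % cw, root[1] % ch   # offset of the root within its cell
    return [(i * cw + ox, j * ch + oy)
            for i in range(cells) for j in range(cells)]
```

A node storing an event sends it to the nearest of these points, which bounds the storage path length; a query must visit all of them recursively, which is why query cost grows with d.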

  31. Comparison Study • Metrics ♦ Total Messages • total packets sent in the sensor network ♦ Hotspot Messages • maximum number of packets sent by any particular node

  32. Comparison Study – contd.. • Assume ♦ n is the number of nodes ♦ Asymptotic costs: O(n) for floods, O(n^1/2) for point-to-point routing

  33. Comparison Study – contd.. • Dtotal, the total number of events detected • Q, the number of event types queried for • Dq, the number of detected events of the queried event types • No more than one query per event type, so there are Q queries in total • Assume the hotspot occurs on packets sent to the access point

  34. Comparison Study – contd.. DCS is preferable if • Sensor network is large • Dtotal >> max[Dq, Q]
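A back-of-the-envelope cost model makes the preference condition concrete. The formulas below are my reading of the paper's asymptotic comparison (constants dropped), not measured results: ES ships every event to the external access point, LS floods each query and routes answers back, and DCS routes events, queries, and answers point-to-point to the hashed location:

```python
import math

def costs(n, d_total, q, d_q):
    """Approximate message counts for ES, LS, and DCS
    (asymptotic model, constants dropped)."""
    r = math.sqrt(n)  # point-to-point routing cost; a flood costs n
    es = {"total": d_total * r,
          "hotspot": d_total}            # every event crosses the access point
    ls = {"total": q * n + d_q * r,      # Q floods plus Dq routed answers
          "hotspot": q + d_q}
    dcs = {"total": (d_total + q + d_q) * r,  # everything routed point-to-point
           "hotspot": q + d_q}
    return es, ls, dcs
```

Plugging in a large network where Dtotal >> max[Dq, Q] shows the slide's claim: DCS beats ES on hotspot messages (q + d_q versus d_total) and beats LS on total messages once query floods dominate.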

  35. Performance Total Messages, varying queries

  36. Performance – contd.. Hotspot Messages, varying queries

  37. Performance – contd.. Total Messages, varying network size

  38. Performance – contd.. Hotspot Messages, varying network size

  39. Conclusion • In DCS, relevant data are stored by name at nodes within the sensornet. • GHT hashes a key k into geographic coordinates; the key-value pair is stored at a node in the vicinity of the location to which its key hashes. • To ensure robustness and scalability, DCS uses the Perimeter Refresh Protocol (PRP) and Structured Replication (SR). • Compared with ES and LS, DCS is preferable in large sensornets.

  40. References • [1] Sylvia Ratnasamy, Brad Karp, Scott Shenker, Deborah Estrin, Ramesh Govindan, Li Yin and Fang Yu, Data-Centric Storage in Sensornets with GHT, A Geographic Hash Table • [2] Sylvia Ratnasamy, Paul Francis, Mark Handley, Richard Karp, Scott Shenker, A Scalable Content-Addressable Network • [3] Ion Stoica, Robert Morris, David Karger, M. Frans Kaashoek, Hari Balakrishnan, Chord: A Scalable Peer-to-peer Lookup Service for Internet Applications • [4] C. Intanagonwiwat, R. Govindan, and D. Estrin, Directed Diffusion: A Scalable and Robust Communication Paradigm for Sensor Networks • [5] Philippe Bonnet, Johannes Gehrke, Praveen Seshadri, Towards Sensor Database Systems

  41. References • [6] John Heidemann, Fabio Silva, Chalermek Intanagonwiwat, Ramesh Govindan, Deborah Estrin, Deepak Ganesan, Building Efficient Wireless Sensor Networks with Low-Level Naming • [7] Sri Kumar, David Shepherd, and Feng Zhao, Collaborative Signal and Information Processing in Micro-Sensor Networks • [8] Brad Karp, H. T. Kung, GPSR: Greedy Perimeter Stateless Routing for Wireless Networks • [9] Li Yin, Fang Yu, presentation slides for A Scalable Routing Schema Based on Hashing Technique for P2P Wireless Ad Hoc Networks • [10] Chengdu Huang, presentation slides for Data Storage Schemas in Sensor Networks

  42. Questions and Comments Thank you!
