
Wireless Sensor Networks Review


  1. Wireless Sensor Networks Review • Professor Jack Stankovic, University of Virginia

  2. Review Outline • Architecture – Where all this fits • Clock Sync • Power Management • Database View of WSN • Programming Abstractions • Security and Privacy (questions?)

  3. WSN Architecture Example

  4. Clock Sync Protocols • NTP (Network Time Protocol) – for Internet • RBS (Reference Broadcast Sync) • TPSN (Time-sync Protocol for Sensor Networks) • FTSP (Flooding Time Sync Protocol)

  5. NTP • Network Time Protocol (NTP) on Internet • Included in OS code • Simple NTP (SNTP) exists for PCs • Runs as background process • Get time from “best” servers • Finds those with lowest jitter/latency • Depends on wired and fast connections • Use statistical analysis of round trip time to clock servers • Clock servers get time from GPS
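
The round-trip idea on this slide can be made concrete with a small sketch. This is not ntpd; it only shows the standard NTP offset/delay arithmetic plus a "prefer the server with the lowest delay and jitter" selection step. The server samples and the scoring rule are illustrative assumptions.

```python
# Minimal NTP-style sketch: compute clock offset and round-trip delay from the four
# timestamps of a request/reply exchange, then prefer the server with low delay/jitter.
from statistics import mean, pstdev

def offset_and_delay(t1, t2, t3, t4):
    """t1: request sent (local), t2: request received (server),
    t3: reply sent (server), t4: reply received (local)."""
    offset = ((t2 - t1) + (t3 - t4)) / 2.0   # server clock minus local clock
    delay = (t4 - t1) - (t3 - t2)            # round-trip network delay
    return offset, delay

def pick_best_server(samples_by_server):
    """samples_by_server: {server_name: [(t1, t2, t3, t4), ...]} -- hypothetical data."""
    best, best_score = None, float("inf")
    for name, samples in samples_by_server.items():
        delays = [offset_and_delay(*s)[1] for s in samples]
        score = mean(delays) + pstdev(delays)   # favor low latency and low jitter
        if score < best_score:
            best, best_score = name, score
    return best
```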

  6. NTP • Not (directly) usable for WSN • Complex – see 50 page document • Continuous cost • Large code size • Expensive in messages/energy

  7. Clock Sync Delays • Uncertainties of radio message delivery

  8. RBS • Reference message is broadcast • Receivers record their local time when message is received • Timestamps are ONLY on the receiver side • This eliminates access and send times • Nodes exchange their recorded times with each other • No transmitter side non-determinism

  9. RBS Local Time Exchange • Three receivers record local times 5, 6, and 7 for the same reference broadcast and exchange them (learning 6,7 / 5,7 / 5,6 respectively) • To align, the node at 5 adjusts by plus 2, the node at 6 by plus 1, and the node at 7 is OK • The sender needs no clock; propagation time is assumed to be 0
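
A minimal sketch of the receiver-receiver exchange described on slides 8-9, assuming zero propagation time as in the example; the node names are made up.

```python
# Sketch of RBS: receivers timestamp the same reference broadcast locally, exchange
# their timestamps, and compute pairwise offsets (sender-side delays never appear).
def pairwise_offsets(local_times):
    """local_times: {node: local receive time of the same reference broadcast}.
    Returns offsets[i][j] = amount node i adds to its clock to match node j."""
    return {i: {j: tj - ti for j, tj in local_times.items() if j != i}
            for i, ti in local_times.items()}

# The slide's example: three receivers record 5, 6, and 7 for one broadcast.
print(pairwise_offsets({"n1": 5, "n2": 6, "n3": 7}))
# n1 adds 2 and n2 adds 1 to match n3, which is already "OK".
```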

  10. TPSN • Creates a spanning tree • Perform pair-wise sync along edges of the tree • Must be symmetric links • Recall radio irregularity paper • Considers “entire” system • RBS looks at one hop • Exchange two sync messages with parent node

  11. Synchronization Phase • Delta = clock offset (drift), P = propagation delay • Node A sends a sync message stamped T1; node B receives it at T2 = T1 + P + Delta and replies at T3 with (T1, T2, T3); node A receives the reply at T4 • P = ((T2 - T1) + (T4 - T3))/2 • Delta = ((T2 - T1) - (T4 - T3))/2 • Node A corrects its clock by Delta • Note: sender A corrects to the clock of receiver B

  12. Example • T1 = 5.0 and T4 = 5.75 are read on node A's clock; T2 = 5.35 and T3 = 6.0 on node B's clock • P = ((T2-T1)+(T4-T3))/2 = ((.35)+(-.25))/2 = .1/2 = .05 • Delta = ((T2-T1)-(T4-T3))/2 = ((.35)-(-.25))/2 = .6/2 = .3 • So A adds .3 to 5.75 to get 6.05 • Only Delta is needed to adjust the clock
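
The two formulas on slides 11-12 translate directly into code; this sketch simply reproduces the worked example (function and variable names are ours).

```python
# TPSN pairwise sync: T1 and T4 are read on node A's clock, T2 and T3 on node B's.
def tpsn_sync(t1, t2, t3, t4):
    p = ((t2 - t1) + (t4 - t3)) / 2.0       # propagation delay
    delta = ((t2 - t1) - (t4 - t3)) / 2.0   # offset of B's clock relative to A's
    return p, delta

p, delta = tpsn_sync(t1=5.0, t2=5.35, t3=6.0, t4=5.75)
print(round(p, 2), round(delta, 2))   # 0.05 0.3, matching the example
print(round(5.75 + delta, 2))         # A corrects its clock: 5.75 -> 6.05
```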

  13. Read Clock in MAC Layer • Uncertainties of radio message delivery

  14. Flooding Time Sync Protocol (FTSP) • ~1 Microsec accuracy • MAC-layer timestamp • Skew compensation with linear regression (accounts for drift) • Handles large scale networks

  15. Remove Uncertainties • Eliminate Send Uncertainty • Get time in the MAC layer • Eliminate Access Time • Get time after the message has access to the channel • Eliminate Receive Time • Record local time message received at the MAC layer

  16. Remaining • (Mostly) Deterministic Times • Transmit • Propagation • Reception

  17. Basic Idea • When to timestamp the message: the timestamp is set on the sender side and read on the receiver side as the message passes through the radio stack (see figure) • Mica2 uncertainties: interrupt 5 us (30 us in <2% of cases), encode + decode 110 us - 112 us, byte align 0 us - 365 us, propagation 1 us

  18. Basic Idea • When to timestamp the message: in the radio layer, after the second SYNC byte has been sent out • Six timestamps are taken in a row (one per byte of data), normalized, and averaged, and only the single averaged timestamp is sent
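
A sketch of the normalize-and-average step, assuming we already have the six raw MAC-layer timestamps taken at consecutive byte boundaries; the per-byte duration constant is an assumed nominal value, not taken from the slides.

```python
# Normalize six per-byte MAC-layer timestamps back to the same reference byte, then
# average them to suppress interrupt/encoding jitter; only this one value is sent.
BYTE_TIME_US = 417  # assumed nominal time to transmit one byte on the radio

def averaged_timestamp(raw_timestamps_us):
    """raw_timestamps_us: local times recorded as each of six consecutive bytes goes out."""
    normalized = [t - i * BYTE_TIME_US for i, t in enumerate(raw_timestamps_us)]
    return sum(normalized) / len(normalized)
```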

  19. Why take 6 samples? • Because of all the uncertainties as described in a previous slide

  20. FTSP • Root maintains global time for system • All others sync to the root • Nodes form an ad hoc structure rather than a spanning tree

  21. Clock Drift - Basic Idea • 8-entry linear regression table to estimate clock skew (each entry derived from 1 clock sync protocol execution) • Example: offsets of 5, 5, 7, 4, 4, 5, 7, 5 us, i.e., roughly 5 us over a 10-second resync period, so compensate by .5 us each second
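
The 8-entry table on this slide feeds a least-squares line fit. The sketch below estimates the drift rate from cumulative (local time, offset) pairs built from the slide's example entries; numpy's polyfit would do the same fit, but a hand-rolled slope keeps the sketch dependency-free.

```python
# Least-squares slope of offset vs. local time = clock skew (us of drift per second),
# used to compensate the clock between resyncs.
def skew_rate(times, offsets):
    n = len(times)
    mean_t, mean_o = sum(times) / n, sum(offsets) / n
    num = sum((t - mean_t) * (o - mean_o) for t, o in zip(times, offsets))
    den = sum((t - mean_t) ** 2 for t in times)
    return num / den

# Cumulative offsets built from the slide's per-round entries (5, 5, 7, 4, 4, 5, 7, 5 us),
# with one sync round every 10 seconds:
times = [10, 20, 30, 40, 50, 60, 70, 80]
offsets = [5, 10, 17, 21, 25, 30, 37, 42]
print(round(skew_rate(times, offsets), 2))   # ~0.52 us/s, i.e. compensate ~0.5 us each second
```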

  22. Multi-Hop • Sync spreads outward from the root/reference point hop by hop (steps 1 and 2 in the figure) • Issue: how to handle sync messages from multiple nodes • 8 sync messages are needed to perform the linear regression

  23. Root Election Process • If no sync message is heard for time T, a node declares itself root • There may be multiple roots • Smallest ID wins: a root gives up its root status when it eventually gets a sync message from another root with a lower ID
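
A minimal state-machine sketch of the election rule on this slide; the timeout value and field names are assumptions.

```python
# Root election sketch: become root after hearing no sync message for ROOT_TIMEOUT;
# give up the root role when a sync message from a lower-ID root arrives.
ROOT_TIMEOUT = 30.0  # seconds, assumed value for "time T" on the slide

class FtspNode:
    def __init__(self, node_id, now):
        self.node_id = node_id
        self.is_root = False
        self.last_sync_heard = now

    def on_timer(self, now):
        if not self.is_root and now - self.last_sync_heard > ROOT_TIMEOUT:
            self.is_root = True                     # nobody is syncing us: declare root

    def on_sync_message(self, sender_root_id, now):
        self.last_sync_heard = now
        if self.is_root and sender_root_id < self.node_id:
            self.is_root = False                    # smaller ID wins: yield root status
```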

  24. Power Management • Hardware layer • MAC layer - review • Routing layer – review • Localization and Clock Sync - review • Overarching power management schemes • Sentry service • Tripwire service • Duty cycle • Differentiated surveillance

  25. Power Management - Hardware Layer • Turn components off/on: CPU, memory, sensors, radio (most expensive) • A spectrum of states from fully awake to deep sleep • Dynamic voltage scaling is also possible • Software ensures a node and its components are awake when needed
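
A toy sketch of the software side of this slide: keep a per-component on/off state and only power what is currently needed. The component names and power numbers are illustrative, not measured Mica2 figures.

```python
# Toy component power manager: software wakes only the components needed right now.
POWER_MW = {"cpu": 8.0, "memory": 1.0, "sensors": 2.0, "radio": 21.0}  # radio costs most

class NodePower:
    def __init__(self):
        self.on = {name: False for name in POWER_MW}    # start in deep sleep

    def wake(self, *components):
        for c in components:
            self.on[c] = True

    def sleep(self, *components):
        for c in components:
            self.on[c] = False

    def draw_mw(self):
        return sum(p for name, p in POWER_MW.items() if self.on[name])

node = NodePower()
node.wake("cpu", "sensors")       # sample sensors with the radio off
print(node.draw_mw())             # 10.0
node.wake("radio")                # power the radio only when a message must go out
print(node.draw_mw())             # 31.0
```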

  26. Architecture

  27. Sentry-Based Power Management (SBPM) • Two classes of nodes: sentries (awake) and non-sentries (can sleep) • Sentries provide coarse monitoring and a backbone communication network, and "wake up" non-sentries for finer sensing • Sentry rotation gives even energy distribution and prolongs system lifetime • Decentralized algorithm (see photo)
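
A rough, illustrative sketch of a decentralized sentry election with rotation. This is not the actual SBPM algorithm; it is just one way the idea "nodes with more remaining energy claim the sentry role, neighbors of a sentry sleep, and re-election rotates the role" could look.

```python
import random

# One election round: nodes with more remaining energy effectively claim the sentry
# role first; a node backs off if a neighbor has already claimed it. Re-running the
# election each period rotates the role as energy levels change.
def elect_sentries(neighbors, energy):
    """neighbors: {node: set of neighboring nodes}; energy: {node: remaining energy}."""
    sentries = set()
    order = sorted(neighbors, key=lambda n: (-energy[n], random.random()))
    for node in order:
        if not (neighbors[node] & sentries):   # no neighboring sentry yet -> claim role
            sentries.add(node)
    return sentries
```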

  28. Tripwire Service – Scaling to 1000s • Network partitioning: 2 tripwire sections, 8 dormant sections, 100 motes and 1 relay per section; size and number of sections reconfigurable; rotate sections • Sentries: N% of nodes in a tripwire section are sentries; rotate sentries

  29. Sensing Coverage = 100%

  30. Basic design with 100% sensing coverage • Solution – 100% grid point sensing coverage • Divide the whole network into virtual grids • For each grid point x, guarantee that x is covered by at least one node's sensing range (radius r) at ANY time

  31. Decide Working Schedule • Schedule example: point x lies in the sensing range of nodes A, B, and C, so to provide sensing coverage for x we can have either A or B or C awake • (Figure: a scheduling example for A, B, and C over a period of 100, with waking and sleeping intervals marked around 30-70 for A, 10-60 for B, and 5-45 for C)

  32. Decide Working Schedule • Challenge: For each node, how to coordinate with other nodes and decide its own schedule? • Solution - Random Reference Point Scheduling Algorithm

  33. Decide Working Schedule • Concepts • A node’s working schedule is determined by a four parameter tuple – (T, Ref, Tfront, Tend)

  34. Decide Working Schedule • Solution – Random Reference Point Scheduling Algorithm 1) Each node N chooses a "Reference Point (Ref)" randomly from [0, 100) and broadcasts its Ref and position, e.g. T = 100, RefA = 40, RefB = 90, RefC = 20 2) For each grid point P in its own sensing area, N sorts all the Refs from nodes (including N) which can also sense P in ascending order; for A and point P1 we have Ref(1) = RefC = 20, Ref(2) = RefA = 40, Ref(3) = RefB = 90

  35. Decide Working Schedule 3) Assuming RefN is the (i)th Ref, N's four-parameter tuple is computed as follows: • TfrontN = (Ref(i) - Ref(i-1))/2, 1 < i < M • TendN = (Ref(i+1) - Ref(i))/2, 1 < i < M • TfrontA = (Ref(2) - Ref(1))/2 = (40 - 20)/2 = 10, TendA = (Ref(3) - Ref(2))/2 = (90 - 40)/2 = 25, so (T, RefA, TfrontA, TendA) = (100, 40, 10, 25) 4) N's working period for point P, TwN(P), is [T*j + RefN - TfrontN, T*j + RefN + TendN), j = 0, 1, 2, … • TwA(P1) = [100*j + 40 - 10, 100*j + 40 + 25) = [100*j + 30, 100*j + 65)

  36. Decide Working Schedule 5) Calculate the union of TwN(Px) over all grid points Px within N's sensing area and choose this union as the final working period of N (TwN); see the sketch below. (Figure: TwA(P1), TwA(P2), …, TwA(Pn) combined into TwA.)
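
A sketch of steps 1-5 for one node, using the slide's numbers. The function names are ours, wrap-around cases (i = 1 or i = M) are skipped for brevity, and the extra per-point intervals in the union example are made up.

```python
T = 100  # period length used throughout the example

def working_interval(refs, my_ref):
    """Steps 2-4 for one grid point: refs are the reference points of every node
    (including this one) that covers the grid point; interval is within one period."""
    ordered = sorted(refs)
    i = ordered.index(my_ref)
    t_front = (ordered[i] - ordered[i - 1]) / 2    # assumes 1 < i (no wrap-around)
    t_end = (ordered[i + 1] - ordered[i]) / 2      # assumes i < M (no wrap-around)
    return my_ref - t_front, my_ref + t_end        # [start, end), repeated every T

def union_schedule(intervals):
    """Step 5: the node's final working period is the union of its per-point intervals."""
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    return [tuple(iv) for iv in merged]

# Grid point P1 is covered by A, B, C with RefA = 40, RefB = 90, RefC = 20:
print(working_interval([40, 90, 20], my_ref=40))        # (30.0, 65.0) -> [100j+30, 100j+65)
print(union_schedule([(30, 65), (5, 40), (60, 80)]))    # e.g. [(5, 80)] over three points
```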

  37. Enhanced Design with Differentiation • Goal: provide sensing coverage with a degree of coverage DOC = a, where a may be > 1 or < 1 • Solution: extend the 4-parameter tuple to a 5-parameter tuple (T, Ref, Tfront, Tend, a) and determine a node's working period as [T*j + Ref - Tfront*a, T*j + Ref + Tend*a)

  38. An Example Schedule for Grid Point P1 (a = 2) • (T, RefA, TfrontA, TendA, a) = (100, 40, 10, 25, 2), (T, RefB, TfrontB, TendB, a) = (100, 90, 25, 15, 2), (T, RefC, TfrontC, TendC, a) = (100, 20, 15, 10, 2) • TwA = [T*j + RefA - TfrontA*2, T*j + RefA + TendA*2) = [100*j + 20, 100*j + 90) • TwB = [100*j + 40, 100*j + 120) • TwC = [100*j - 10, 100*j + 40)
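
A quick numeric check of the a = 2 intervals on this slide (the helper name is ours).

```python
# Differentiated coverage: scale the awake window by the degree of coverage a.
def scaled_interval(ref, t_front, t_end, a, T=100, j=0):
    return (T * j + ref - t_front * a, T * j + ref + t_end * a)

print(scaled_interval(40, 10, 25, 2))   # A: (20, 90)   -> [100j + 20, 100j + 90)
print(scaled_interval(90, 25, 15, 2))   # B: (40, 120)  -> [100j + 40, 100j + 120)
print(scaled_interval(20, 15, 10, 2))   # C: (-10, 40)  -> [100j - 10, 100j + 40)
```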

  39. Database View • Architectures • GHT • TinyDB • SEAD

  40. Architecture (1) • All data is sent to the base station, where it is stored and queries are performed

  41. Architecture (2) • Queries flood from the base station • Data is stored decentralized at each node

  42. Architecture (3) • Hierarchical network (base station, Stargates/log motes) • Queries go to rendezvous points

  43. Architecture (4) • Disconnected system with a distant workstation • Data is stored decentralized at each node and collected by data mules

  44. Geographic Hash Table (GHT) • Translate from an attribute to a storage location • Distribute data evenly over the network • Example: GHT system (A Geographic Hash Table for Data-Centric Storage – see Ch. 6.6 in the text)
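
A sketch of the hashing half of GHT: every node maps the same attribute name to the same (x, y) point in the deployment area, so producers and queriers meet at one storage location. The real GHT then uses geographic routing (GPSR) to reach the node closest to that point; the area dimensions here are assumed.

```python
import hashlib

AREA_W, AREA_H = 1000.0, 1000.0   # assumed deployment-area size in meters

def hash_to_location(attribute):
    """Deterministically map an attribute (e.g. an event type) to a point in the area."""
    digest = hashlib.sha1(attribute.encode()).digest()
    x = int.from_bytes(digest[:4], "big") / 2**32 * AREA_W
    y = int.from_bytes(digest[4:8], "big") / 2**32 * AREA_H
    return x, y

# Producer and base station both compute the same storage point for "tank" data:
print(hash_to_location("tank"))
```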

  45. GHT • (Figure: the attribute "tank" hashes to a location in the network where tank info is stored; the base station's query is routed to that same location)

  46. TAG of TinyDB • 2 phases (sleep when possible): disseminate the periodic query, then collect data on a schedule • (Figure: base station, epochs, and pipelining of results up the tree)
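
A toy sketch of TAG-style in-network aggregation for an AVG query: during the collection phase, each node merges its children's partial-state records (sum, count) with its own reading and forwards a single record per epoch. The routing tree and readings are made up.

```python
# In-network AVG: one (sum, count) partial-state record per node per epoch.
def aggregate(node, children, readings):
    s = readings.get(node, 0.0)
    c = 1 if node in readings else 0
    for child in children.get(node, ()):
        cs, cc = aggregate(child, children, readings)
        s, c = s + cs, c + cc
    return s, c

children = {"base": ["n1", "n2"], "n1": ["n3", "n4"]}            # collection tree, root = base
readings = {"n1": 21.0, "n2": 23.5, "n3": 22.0, "n4": 20.5}      # this epoch's sensor readings
total, count = aggregate("base", children, readings)
print(total / count)   # 21.75 -- the base station gets the average with one message per node
```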

  47. SEAD: Scalable Energy Efficient Asynchronous Dissemination Protocol • An asynchronous content distribution multicast tree is maintained • The tree is modified when a sink joins, a sink leaves, or a sink moves beyond some threshold • The cost of building the tree is minimized • (Figure: dissemination to mobile sinks via access nodes, a forwarding chain, caching, and Steiner points)

  48. Subscription Query (1) • (Figure: Sink 1 and Sink 2 each attach to an access node and send subscription queries toward the source)

  49. 4 Phases • Subscription query (first phase): the mobile node attaches to the nearest node as its access node, and the access node sends a join query to the source • Gate replica search (second phase): attach the new node to the current tree at the best gate replica

  50. Gate Replica Search (2) • (Figure: Sink 1's access node searches the gate replicas, assumed to exist for some current tree (not shown), for the best point at which to attach toward the source)
