Presentation Transcript


  1. Theory Meets Practice • …it's about TIME!

  2. Map of Computer Science [www.confsearch.org] (figure: map of computer science conferences; theory area highlighted)

  3. Zooming in on Theory [www.confsearch.org] (figure: zoom on the theory conferences; SOFSEM highlighted)

  4. Theory Meets Practice? (figure: conference map between theory and practice, with FC and SC marked)

  5. Why is there so little interaction? • Theory thinks: "Practice is trivial…" • Practice thinks: "Theory is useless…"

  6. Systems people don’t read theory papers • Sometimes for good reasons… • unreadable • don’t matter that much (only getting out the last %) • wrong models • theory is lagging behind • bad theory merchandising/branding • systems papers provide easy to remember acronyms • “On the Locality of Bounded Growth” vs. “Smart Dust” • good theory comes from surprising places • difficult to keep up with • having hundreds of workshops does not help • If systems people don’t read theory papers, maybe theory people should build systems themselves?

  7. Systems Perspective: Dozer

  8. Today, we look much cuter! And we're usually carefully deployed (figure: sensor node components: power, radio, processor, sensors, memory)

  9. A Sensor Network After Deployment (figure: deployed nodes using multi-hop communication)

  10. A Typical Sensor Node: TinyNode 584 [Shockfish SA, The Sensor Network Museum] • TI MSP430F1611 microcontroller @ 8 MHz • 10 kB SRAM, 48 kB flash (code), 512 kB serial storage • 868 MHz Xemics XE1205 multi-channel radio • Up to 115 kbps data rate, 200 m outdoor range

  11. Environmental Monitoring (PermaSense) • Understand global warming in an alpine environment • Harsh environmental conditions • Swiss made (Basel, Zurich)

  12. Example: Dozer • Up to 10 years of network lifetime • Mean energy consumption: 0.066 mW • Operational network in use for > 2 years • High availability, reliability (99.999%) [Burri et al., IPSN 2007]
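
A rough back-of-the-envelope check of the lifetime claim (the battery figures below are assumptions for illustration, not from the talk): at a mean draw of 0.066 mW, a pair of AA cells lasts on the order of a decade.

```python
# Back-of-the-envelope lifetime estimate for a Dozer-class node.
# Battery figures are assumptions for illustration, not from the talk.
battery_capacity_ah = 2.6       # two AA cells in series, ~2600 mAh
battery_voltage_v = 3.0         # ~1.5 V per cell
mean_power_w = 0.066e-3         # 0.066 mW mean consumption (from the slide)

energy_wh = battery_capacity_ah * battery_voltage_v       # ~7.8 Wh
lifetime_years = energy_wh / mean_power_w / (24 * 365)
print(f"Estimated lifetime: {lifetime_years:.1f} years")   # ~13 years, ignoring self-discharge
```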

  13. Is Dozer a theory-meets-practice success story? • Good news: • Theory people can develop good systems! • Dozer is, to the best of my knowledge, more energy-efficient and reliable than all other published systems protocols… and has been for many years! • Sensor network (systems) people write that Dozer is one of the "best sensor network systems papers", or: "In some sense this is the first paper I'd give someone working on communication in sensor nets, since it nails down how to do it right." • Bad news: Dozer does not have an awful lot of theory inside • Ugly news: Dozer v2 has even less theory than Dozer v1 • Hope: Still some subliminal theory ideas in Dozer?

  14. Energy-Efficient Protocol Design • Communication subsystem is the main energy consumer • Power down the radio as much as possible • The issue is tackled at various layers: MAC, topology control / clustering, routing • Orchestration of the whole network stack achieves duty cycles of ~0.1%

  15. Dozer System • Tree-based routing towards the data sink • No energy wastage due to multiple paths • Current strategy: shortest-path tree (SPT) • TDMA-based link scheduling • Each node has two independent schedules • No global time synchronization • The parent initiates each TDMA round with a beacon • Enables integration of disconnected nodes • Children tune in to their parent's schedule (figure: parent and child schedules with activation frame, beacons, contention window, and time axis)

  16. Dozer System • The parent decides on its children's data upload times • Each interval is divided into upload slots of equal length • Upon connecting, each child gets its own slot • Data transmissions are always ack'ed • No traditional MAC layer • Transmissions happen at exactly predetermined points in time • Collisions (due to clock drift, queuing, bootstrap, etc.) are explicitly accepted • Random jitter resolves schedule collisions (figure: data transfers with jitter in slots 1 … n on the time axis)
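
To make the slot mechanism concrete, here is a minimal sketch in Python (names, slot length, and jitter values are illustrative assumptions, not Dozer's actual implementation): the parent hands out one equal-length upload slot per child, and each child derives its transmission time from the parent's beacon plus a small random jitter.

```python
import random

SLOT_LEN = 0.5      # seconds per upload slot (illustrative value)
JITTER = 0.005      # max random jitter to resolve schedule collisions (illustrative)

class Parent:
    def __init__(self):
        self.children = {}              # child id -> slot index

    def register(self, child_id):
        """Upon connecting, each child gets its own dedicated upload slot."""
        slot = len(self.children)
        self.children[child_id] = slot
        return slot

def child_upload_time(beacon_time, slot):
    """Child transmits at its predetermined slot, plus a small random jitter."""
    return beacon_time + slot * SLOT_LEN + random.uniform(0, JITTER)

parent = Parent()
for cid in ("A", "B", "C"):
    s = parent.register(cid)
    print(cid, "-> slot", s, "upload at", round(child_upload_time(100.0, s), 3))
```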

  17. Dozer in Action

  18. Energy Consumption (figure: duty-cycle breakdown into scanning, overhearing, and updating #children) • Relay node: 0.32% duty cycle • Leaf node: 0.28% duty cycle; no scanning, few neighbors, short disruptions

  19. Theory Meets Practice • Example: Clock Synchronization • …it's about TIME!

  20. Clock Synchronization in Practice • Many different approaches for clock synchronization: Global Positioning System (GPS), radio clock signal, AC power line radiation, synchronization messages

  21. Clock Devices in Sensor Nodes • Structure: external oscillator with a nominal frequency (e.g. 32 kHz or 7.37 MHz) • Counter register which is incremented with oscillator pulses • Works also when the CPU is in a sleep state (figure: Mica2 with 7.37 MHz and 32 kHz quartz, TinyNode with 32 kHz quartz)

  22. Clock Drift • Accuracy • Clock drift: random deviation from the nominal rate, dependent on power supply, temperature, etc. • E.g. TinyNodes have a maximum drift of 30–50 ppm at room temperature • This is a drift of up to 50 μs per second, or 0.18 s per hour (figure: clock rate between 1−ε and 1+ε over time t)
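
As a quick check of the numbers on this slide, the worst-case drift arithmetic can be written out directly (a tiny sketch; 50 ppm is the figure quoted above):

```python
# Worst-case drift of a 50 ppm oscillator (figure from the slide).
drift_ppm = 50
rate_error = drift_ppm * 1e-6                        # dimensionless deviation from nominal rate
print(f"{rate_error * 1e6:.0f} us of drift per second")   # 50 us per second
print(f"{rate_error * 3600:.2f} s of drift per hour")     # 0.18 s per hour
```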

  23. Sender/Receiver Synchronization • Round-trip time (RTT) based synchronization • Receiver synchronizes to the sender's clock • Propagation delay δ and clock offset θ can be calculated from the four timestamps (figure: A sends a request at t_1, B receives it at t_2, B answers at t_3, A receives the answer at t_4; time according to A vs. time according to B)
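
The RTT calculation behind this slide can be sketched as follows (standard NTP-style formulas with illustrative variable names; it assumes the propagation delay is symmetric in both directions):

```python
def rtt_sync(t1, t2, t3, t4):
    """t1: A sends request, t2: B receives it, t3: B answers, t4: A receives the answer.
    Assumes the same propagation delay in both directions."""
    delay = ((t2 - t1) + (t4 - t3)) / 2    # one-way propagation delay (delta)
    offset = ((t2 - t1) - (t4 - t3)) / 2   # clock offset of B relative to A (theta)
    return delay, offset

# Example: B's clock is 5 units ahead of A's, true one-way delay is 2 units.
print(rtt_sync(t1=10, t2=17, t3=18, t4=15))   # -> (2.0, 5.0)
```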

  24. Messages Experience Jitter in the Delay • Problem: Jitter in the message delay • Various sources of errors (deterministic and non-deterministic) • Solution: Timestamping packets at the MAC layer [Maróti et al.] • → Jitter in the message delay is reduced to a few clock ticks (figure: delay components: send command, channel access, transmission, reception, callback, with per-component delays between about 1–10 ms and 0–500 ms; timestamps taken at transmission and reception)

  25. Clock Synchronization in Networks? • Time, Clocks, and the Ordering of Events in a Distributed System. L. Lamport, Communications of the ACM, 1978 • Internet Time Synchronization: The Network Time Protocol (NTP). D. Mills, IEEE Transactions on Communications, 1991 • Reference Broadcast Synchronization (RBS). J. Elson, L. Girod and D. Estrin, OSDI 2002 • Timing-sync Protocol for Sensor Networks (TPSN). S. Ganeriwal, R. Kumar and M. Srivastava, SenSys 2003 • Flooding Time Synchronization Protocol (FTSP). M. Maróti, B. Kusy, G. Simon and Á. Lédeczi, SenSys 2004 • and many more… • FTSP: state-of-the-art clock sync protocol for networks

  26. Flooding Time Synchronization Protocol (FTSP) • Each node maintains both a local and a global time • Global time is synchronized to the local time of a reference node • The node with the smallest id is elected as the reference node • Reference time is flooded through the network periodically • Timestamping at the MAC layer is used to compensate for deterministic message delays • Compensation for clock drift between synchronization messages using a linear regression table (figure: network with reference node 0 and nodes 1–7)
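
A simplified sketch of the FTSP mechanics described above (structure and names are illustrative, not the actual TinyOS implementation): each node stores a few (local time, global time) samples from received beacons and uses a least-squares fit to translate its local time into global time between beacons.

```python
from collections import deque

class FtspNode:
    def __init__(self, table_size=8):
        self.table = deque(maxlen=table_size)   # (local time, global time) samples

    def on_beacon(self, local_time, global_time):
        """Called when a (MAC-layer timestamped) synchronization beacon is received."""
        self.table.append((local_time, global_time))

    def global_time(self, local_time):
        """Linear regression over the table: estimate global time from local time."""
        if len(self.table) < 2:
            return local_time
        n = len(self.table)
        mx = sum(l for l, _ in self.table) / n
        my = sum(g for _, g in self.table) / n
        num = sum((l - mx) * (g - my) for l, g in self.table)
        den = sum((l - mx) ** 2 for l, _ in self.table)
        slope = num / den if den else 1.0       # estimated relative clock rate
        return my + slope * (local_time - mx)

node = FtspNode()
for l, g in [(100, 205), (200, 305.01), (300, 405.02)]:
    node.on_beacon(l, g)
print(round(node.global_time(400), 2))          # extrapolated global time (~505.03)
```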

  27. Best tree for tree-based clock synchronization? • Finding a good tree for clock synchronization is a tough problem • We want a spanning tree with small (maximum or average) stretch • Example: grid network with n = m² nodes • No matter what tree you use, the maximum stretch of the spanning tree will always be at least m (just try it on the grid figure on the right…) • In general, finding the minimum max-stretch spanning tree is a hard problem; however, approximation algorithms exist [Emek, Peleg, 2004]

  28. Variants of Clock Synchronization Algorithms • Tree-like algorithms (e.g. FTSP): bad local skew • Distributed algorithms (e.g. GTSP [Sommer et al., IPSN 2009]): all nodes consistently average errors to all neighbors
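
A minimal sketch of the averaging idea behind the distributed variant (an illustrative caricature, not GTSP's actual update rule, which also averages clock rates): each node repeatedly moves its clock value toward the average of its neighbors' values, so errors are spread evenly instead of accumulating along a tree.

```python
def averaging_step(clocks, neighbors, alpha=0.5):
    """One synchronous round: each node moves its clock value toward
    the average of its neighbors' values (alpha controls the step size)."""
    new = {}
    for v, value in clocks.items():
        avg = sum(clocks[u] for u in neighbors[v]) / len(neighbors[v])
        new[v] = value + alpha * (avg - value)
    return new

# Path network 0 - 1 - 2 - 3 with initially badly skewed clocks (drift ignored here).
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
clocks = {0: 0.0, 1: 10.0, 2: 20.0, 3: 30.0}
for _ in range(20):
    clocks = averaging_step(clocks, neighbors)
print({v: round(c, 2) for v, c in clocks.items()})   # neighboring values end up close together
```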

  29. FTSP vs. GTSP: Global Skew • Network synchronization error (global skew): pair-wise synchronization error between any two nodes in the network • Measured: FTSP avg 7.7 μs, GTSP avg 14.0 μs

  30. FTSP vs. GTSP: Local Skew • Neighbor synchronization error (local skew): pair-wise synchronization error between neighboring nodes • Synchronization error between two direct neighbors: FTSP avg 15.0 μs, GTSP avg 2.8 μs

  31. Time in (Sensor) Networks • Synchronized clocks are essential for many applications: localization, sensing, TDMA, duty-cycling; some need local, some need global synchronization (figure: applications on top of a clock synchronization protocol on top of the hardware clock)

  32. Clock Synchronization in Theory? • Given a communication network • Each node is equipped with a hardware clock with drift (worst-case, but constant) • Message delays with jitter (worst-case, but constant) • Goal: synchronize clocks ("logical clocks") • Both global and local synchronization!

  33. Time Must Behave! • Time (logical clocks) should not be allowed to stand still or jump • Let’s be more careful (and ambitious): • Logical clocks should always move forward • Sometimes faster, sometimes slower is OK. • But there should be a minimum and a maximum speed. • As close to correct time as possible!

  34. Formal Model • Hardware clock H_v(t) = ∫_0^t h_v(τ) dτ with clock rate h_v(t) ∈ [1−ε, 1+ε] • Clock drift ε is typically small, e.g. ε ≈ 10^-4 for a cheap quartz oscillator • Logical clock L_v(·) which increases at a rate of at least 1 and at most β • Logical clocks with rate much less than 1 behave differently... • Message delays ∈ [0,1] (neglect the fixed share of the delay, normalize the jitter) • Employ a synchronization algorithm to update the logical clock according to the hardware clock and messages from neighbors (figure: node with hardware clock H_v reading 152 while neighbors report times 140 and 150; what should L_v be?)
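
The formal model can be turned into a small simulation sketch (parameters and structure are chosen here for illustration): the hardware clock integrates a rate drawn from [1−ε, 1+ε], and the algorithm may run the logical clock at any rate in [1, β] on top of it.

```python
import random

EPS, BETA, DT = 1e-4, 2.0, 0.01   # drift bound, max logical rate, simulation time step

class Node:
    def __init__(self):
        self.H = 0.0   # hardware clock H_v(t), integral of the hardware rate
        self.L = 0.0   # logical clock L_v(t)

    def step(self, logical_rate):
        """Advance one time step: the environment picks the hardware rate,
        the algorithm picks the logical rate, clamped to [1, beta]."""
        h = random.uniform(1 - EPS, 1 + EPS)       # h_v(t) in [1-eps, 1+eps]
        rate = min(max(logical_rate, 1.0), BETA)   # algorithm's choice, clamped
        self.H += h * DT
        self.L += rate * h * DT                    # logical clock driven by the hardware clock
        return self.H, self.L

n = Node()
for _ in range(1000):
    n.step(logical_rate=1.0)
print(round(n.H, 4), round(n.L, 4))
```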

  35. Variants of Clock Synchronization Algorithms (recap) • Tree-like algorithms (e.g. FTSP): bad local skew • Distributed algorithms (e.g. GTSP)

  36. Synchronization Algorithms: An Example ("A_max") • Question: How to update the logical clock based on the messages from the neighbors? • Idea: minimize the skew to the fastest neighbor • Set the clock to the maximum clock value received from any neighbor (if larger than the local clock value) • Forward new values immediately • Allow β = ∞, i.e. the logical clock may jump forward • Optimal global skew of about D • Poor local property: first all messages take 1 time unit… then we have a fast message! → skew D between two neighbors (figure: path where the node with the fastest hardware clock has reached time D+x; after the fast message its neighbor jumps from x to D+x, while the old clock values along the path were x, x+1, …, D+x−1)
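
A sketch of the A_max rule in code (illustrative and simplified to synchronous rounds; the names are not from the talk): a node simply adopts the maximum clock value it hears. This gives a global skew of about D, but a single fast message after a period of slow messages creates a skew of D between two neighbors, as the small example below shows.

```python
def amax_round(clocks, neighbors, delivered):
    """One round of the 'set clock to the maximum received value' rule.
    delivered[(u, v)] says whether u's latest message has already reached v."""
    new = dict(clocks)
    for v in clocks:
        for u in neighbors[v]:
            if delivered.get((u, v), False):
                new[v] = max(new[v], clocks[u])   # jump to the fastest neighbor's value
    return new

# Path 0 - 1 - 2 - 3 (D = 3): node 0 has raced ahead by D while all messages
# were slow; now a single fast message reaches node 1.
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
clocks = {0: 13.0, 1: 10.0, 2: 10.0, 3: 10.0}
delivered = {(0, 1): True}        # only this message arrives quickly
clocks = amax_round(clocks, neighbors, delivered)
print(clocks)                     # node 1 jumps to 13.0 -> skew 3 = D to node 2
```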

  37. Local Skew: Overview of Results (figure: results on an axis from Θ(1) over Θ(log D) and Θ(√D) to Θ(D)) • Θ(1): everybody's expectation, 10 years ago ("solved") • Θ(D): all natural algorithms [Locher et al., DISC 2006] • Θ(√D): blocking algorithm • Lower bound of log D / log log D [Fan & Lynch, PODC 2004] • Θ(log D): Kappa algorithm [Lenzen et al., FOCS 2008] and tight lower bound [Lenzen et al., PODC 2009], together in [JACM 2010] • Also: dynamic networks! [Kuhn et al., SPAA 2009]

  38. Enforcing Clock Skew • Messages between two neighboring nodes may be fast in one direction and slow in the other, or vice versa • A constant skew between neighbors may be "hidden" • In a path, the global skew may be in the order of D/2 (figure: path from u to v with clock values 2 3 4 5 6 7; two executions the nodes cannot distinguish)

  39. Local Skew: Lower Bound (Single-Slide Proof!) (figure: path of length ℓ_0 = D; higher clock rates on one side, h_v = 1+ε vs. h_w = 1, drive L_v(t) from x up to x + ℓ_0/2 while L_w(t) stays behind) • Add ℓ_0/2 skew in ℓ_0/(2ε) time, messing with clock rates and messages • Afterwards: continue the execution for ℓ_0/(4(β−1)) time (all h_x = 1) • ⇒ skew reduces by at most ℓ_0/4, so at least ℓ_0/4 skew remains • Consider a subpath of length ℓ_1 = ℓ_0·ε/(2(β−1)) with at least ℓ_1/4 skew • Add ℓ_1/2 skew in ℓ_1/(2ε) = ℓ_0/(4(β−1)) time ⇒ at least (3/4)·ℓ_1 skew in the subpath • Repeat this trick (+½, −¼, +½, −¼, …) log_{2(β−1)/ε} D times • Theorem: Ω(log_{(β−1)/ε} D) skew between neighbors

  40. Local Skew: Upper Bound • Surprisingly, up to small constants, the Ω(log_{(β−1)/ε} D) lower bound can be matched with clock rates ∈ [1, β] (tough part, not in this talk) • We get the following picture [Lenzen et al., PODC 2009] (figure: local skew as a function of the logical clock rate bound β) • Clock rates should not be too large, because too large clock rates will amplify the clock drift ε • We can have both smooth and accurate clocks!

  41. Local Skew: Upper Bound (same slide, continued) • In practice, we usually have 1/ε ≈ 10^4 > D • In other words, our initial intuition of a constant local skew was not entirely wrong!

  42. Clock Synchronization vs. Car Coordination • In the future cars may travel at high speed despite a tiny safety distance, thanks to advanced sensors and communication

  43. Clock Synchronization vs. Car Coordination • In the future cars may travel at high speed despite a tiny safety distance, thanks to advanced sensors and communication • How fast & close can you drive? • Answer possibly related to clock synchronization • clock drift ↔ cars cannot control speed perfectly • message jitter ↔ sensors or communication between cars not perfect

  44. Example: Clock Synchronization • …it's about TIME! Is Our Theory Practical?!?

  45. One Big Difference Between Theory and Practice, Usually! • Theory: worst-case analysis! • Practice: physical reality…

  46. "Industry Standard" FTSP in Practice • As we have seen, FTSP does have a local skew problem (measured avg: 15.0 μs) • But it's not all that bad… • However, tests revealed another (severe!) problem: • FTSP does not scale: global skew grows exponentially with network size…

  47. Why? • How does the network diameter affect synchronization errors? • Examples of sensor networks with large diameter: bridge, road or pipeline monitoring • Deployment at the Golden Gate Bridge with 46 hops [Kim et al., IPSN 07] (figure: chain topology of nodes 0, 1, 2, …, d)

  48. Multi-hop Clock Synchronization • Nodes forward their current estimate of the reference clock • Each synchronization beacon is affected by a random jitter J • The sum of the jitter grows with the square root of the distance: stddev(J_1 + J_2 + … + J_d) = √d · stddev(J) • This is bad, but does not explain the exponential behavior of FTSP… • In addition, FTSP uses linear regression to compensate for clock drift • Jitter is amplified before it is sent to the next hop! • Amplification leads to exponential behavior… (figure: chain of nodes 0, 1, 2, …, d)
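
Both effects can be seen in a small simulation sketch (the amplification factor is a stand-in parameter, not measured FTSP behavior): simply adding independent per-hop jitter makes the error grow like √d, while multiplying the accumulated error by even a modest per-hop factor makes it grow exponentially in d.

```python
import random, statistics

def error_at_hop_d(d, amplification=1.0, trials=2000, jitter=1.0):
    """Propagate a time estimate over d hops; each hop adds Gaussian jitter
    and multiplies the accumulated error by `amplification`."""
    finals = []
    for _ in range(trials):
        err = 0.0
        for _ in range(d):
            err = amplification * err + random.gauss(0, jitter)
        finals.append(err)
    return statistics.pstdev(finals)

for d in (4, 16, 64):
    print(d,
          round(error_at_hop_d(d, amplification=1.0), 2),   # grows roughly like sqrt(d)
          round(error_at_hop_d(d, amplification=1.1), 2))   # grows exponentially in d
```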

  49. Linear Regression (FTSP) • Simulation of FTSP with regression tables of different sizes (k = 2, 8, 32) (figure: global skew vs. network size, log scale)

  50. The PulseSync Protocol [Lenzen et al., SenSys 2009] • 1) Remove the self-amplification of the synchronization error • 2) Send fast synchronization pulses through the network • Speeds up the initialization phase • Faster adaptation to changes in temperature or network topology (figure: propagation along nodes 0–4; FTSP: expected time = D·B/2; PulseSync: expected time = D·t_pulse)
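
A quick arithmetic sketch of the difference between the two expected propagation times (the beacon period B and the per-hop pulse time t_pulse below are illustrative values, not from the slides):

```python
# Expected time for new reference-time information to cross the network.
D = 46            # hops (Golden Gate Bridge deployment, from slide 47)
B = 30.0          # FTSP beacon period in seconds (illustrative value)
t_pulse = 0.05    # per-hop forwarding time of a PulseSync pulse (illustrative value)

ftsp_time = D * B / 2          # each hop waits ~B/2 on average for the next beacon
pulsesync_time = D * t_pulse   # pulses are forwarded immediately hop by hop
print(f"FTSP: ~{ftsp_time / 60:.0f} min, PulseSync: ~{pulsesync_time:.1f} s")
# -> FTSP on the order of ten minutes, PulseSync a few seconds
```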
