
Rechargeable Sensor Activation under Temporally Correlated Events








  1. Rechargeable Sensor Activation under Temporally Correlated Events Neeraj Jaggi, Assistant Professor, Dept. of Electrical Engineering and Computer Science, Wichita State University

  2. Outline • Sensor Networks • Rechargeable Sensor System • Design of energy-efficient algorithms • Activation question – Single sensor scenario • Temporally correlated event occurrence • Perfect state information • Structure of optimal policy • Imperfect state information • Practical algorithm with performance guarantees

  3. Sensor Networks • Sensor Nodes • Tiny, low-cost devices • Prone to failures • Redundant deployment • Rechargeable sensor nodes • Range of applications • Important Issues • Energy Management • Quality of Coverage

  4. Rechargeable Sensor System [System diagram: random, spatio-temporally correlated event phenomena are sensed by rechargeable sensors, which discharge while sensing and recharge from a renewable energy source; the activation policy is the control that determines the resulting quality of coverage]

  5. Research Question • How should a sensor be activated (“switched on”) dynamically so that the quality of coverage is maximized over time? • A sensor becomes ready. What should it do? • Activate itself now: • Gain some utility in the short term • Activate itself later: • No utility in the short term • Activate when the system “needs it more”

  6. Temporal Correlations • Event Process (e.g. forest fire) • On period (HOT) • Off period (COLD) • Correlation probabilities 0.5 < pOn, pOff < 1 (e.g., pOn = pOff = 0.8), where pOn (pOff) is the probability that an On (Off) period persists into the next time slot • Performance Criterion – Single Sensor Node • Fraction of events detected over time
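As a concrete illustration of this correlated On/Off event process, here is a minimal Python sketch; the function and parameter names (p_on, p_off) are mine, not from the talk, and values in (0.5, 1) match the correlated regime on the slide:

```python
import random

def simulate_event_process(p_on=0.8, p_off=0.8, T=20, seed=0):
    """Simulate the two-state (On/Off) Markov event process.

    p_on  - probability an On period persists into the next slot
    p_off - probability an Off period persists into the next slot
    Values in (0.5, 1) give temporally correlated (bursty) events.
    """
    rng = random.Random(seed)
    state = 0                      # start in an Off period
    trace = []
    for _ in range(T):
        trace.append(state)
        stay = p_on if state == 1 else p_off
        if rng.random() >= stay:
            state = 1 - state      # period ends; flip On <-> Off
    return trace

print(simulate_event_process())
```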

  7. Sensor Energy Consumption Model [Diagram: battery of capacity K; an activated sensor discharges δ1 + δ2 per slot during On periods and δ1 during Off periods; the battery recharges at average rate qc; a sensor that is not activated does not discharge; the activation policy decides when to activate] • Discrete Time Energy Model • Operational Cost (δ1) • Detection Cost (δ2) • Recharge Rate (qc) • Probability (q) • Amount (c)
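The discrete-time energy model can be written one slot at a time. The helper below is a sketch only; its name, signature, and default values are assumptions drawn from the examples on the slides:

```python
import random

def energy_step(L, active, event_on, K=2400, d1=1, d2=2, q=0.1, c=1,
                rng=random):
    """One slot of the discrete-time energy model: discharge if active,
    then recharge by c with probability q (average rate qc), capped at K."""
    if active:
        L -= d1                    # operational cost delta1 every active slot
        if event_on:
            L -= d2                # detection cost delta2 during On periods
    if rng.random() < q:
        L += c                     # stochastic recharge
    return max(0, min(L, K))
```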

  8. System Observability • Perfect State Information • Sensor can always observe the state of the event process (even while inactive) • Imperfect State Information • Inactive sensor cannot observe the event process

  9. Approach/Methodology • Perfect State Information • Formulate Markov Decision Problem (MDP) • Structure of Optimal Policy • Imperfect State Information • Formulate Partially Observable MDP (POMDP) • Transform POMDP to equivalent MDP (Known techniques) • Structure of Optimal Policy • Near-optimal practical Algorithms

  10. Perfect State Information • Markov Decision Process • State Space = {(L, E); 0 ≤ L ≤ K, E ∈ {0, 1}} • L – current energy level, E – On/Off period • Reward r – one if event detected; zero otherwise • Action u ∈ [0, 1]; transition probabilities p • Optimality equation (average reward criterion): λ* + h*(s) = max_u { r(s, u) + Σ_s′ p(s′ | s, u) h*(s′) } • h* – differential value of each state • λ* – optimal average reward

  11. Perfect State Information (contd.) • Approximate Solution • Closed-form solution for h* does not seem to exist • Value Iteration (a numerical sketch follows below) • Activation Algorithm • L << K • Sensitive to system parameters when L ~ K
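To make the value-iteration bullet concrete, here is a small relative value iteration sketch for the average-reward criterion. The E = 1 transitions follow slide 21; the E = 0 case (detection cost and unit reward whenever the event is On while the sensor is active) and the single correlation probability p_con are simplifying assumptions of mine, and the parameter values are toy choices:

```python
import numpy as np

K, d1, d2, c, q, p_con = 50, 1, 2, 1, 0.1, 0.8  # toy parameters

def step(L, E, u):
    """Enumerate (L2, E2, prob, reward) for action u in {0, 1}."""
    out = []
    for gain, pq in ((c, q), (0, 1 - q)):          # recharge / no recharge
        for E2 in (0, 1):
            pe = p_con if E2 == E else 1 - p_con   # correlated event process
            detect = (u == 1 and E2 == 1)          # active sensor sees event
            cost = (d1 + d2 if detect else d1) if u == 1 else 0
            L2 = max(0, min(K, L + gain - cost))
            out.append((L2, E2, pq * pe, 1.0 if detect else 0.0))
    return out

# Relative value iteration: h <- T h, renormalized at a reference state.
h = np.zeros((K + 1, 2))
for _ in range(2000):
    T = np.empty_like(h)
    for L in range(K + 1):
        for E in (0, 1):
            acts = (0, 1) if L >= d1 + d2 else (0,)  # need energy to activate
            T[L, E] = max(
                sum(p * (r + h[L2, E2]) for L2, E2, p, r in step(L, E, u))
                for u in acts
            )
    lam = T[0, 0] - h[0, 0]        # estimate of the optimal reward lambda*
    h = T - T[0, 0]                # renormalize to keep values bounded
print("lambda* ~", round(lam, 4))
```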

  12. Perfect State Information (contd.) • Optimal Policy Structure • Randomized algorithm • P* is directly proportional to the recharge rate • Energy balance • Average recharge rate equals average discharge rate in steady state [Flowchart: Sufficient energy? and On period? both yes → activate if a random draw falls at or below P*; otherwise do not activate]
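A minimal sketch of the randomized decision, under my reading of the flowchart above (activate during On periods, with sufficient energy, with probability P*; how P* is set proportional to the recharge rate qc is not spelled out here):

```python
import random

def eb_activate(L, event_on, p_star, d1=1, d2=2, rng=random):
    """Randomized energy-balancing decision sketched from the flowchart:
    activate only with sufficient energy during an On period, and then
    only with probability P*, so discharge balances recharge on average."""
    if L < d1 + d2 or not event_on:
        return False
    return rng.random() <= p_star
```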

  13. Imperfect State Information • Partially Observable Markov Decision Process • State Space • Observation Space • Optimal actions depend on current and past observations (y) and on past actions (u) • Transformation to equivalent MDP • State – information vector Zt of length |X| • Zt+1 is recursively computable given Zt, ut and yt+1 • Zt forms a completely observable MDP • Equivalent rewards and actions

  14. Equivalent MDP Structure • Active Sensor – Observation = (L, 1) or (L, 0) • State is the same as observation • Zt has only one non-zero component • Inactive Sensor – Observation = (L, Φ) • Let state last observed = E, number of time slots inactive = i • Zt has only two non-zero components • Let pi = prob. that the event process changed state from E to 1 – E in i time slots • State = (L, E) with prob. 1 – pi • State = (L, 1 – E) with prob. pi • Zt is a function of (L, E, i) (a belief-update sketch follows below)
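The two non-zero belief components reduce to the single flip probability pi, which can be computed by a short recursion over the unobserved slots. The helper below is illustrative; the self-transition probabilities in p_stay are assumed example values:

```python
def flip_probability(E, i, p_stay=(0.6, 0.9)):
    """p_i: probability the event process, last observed in state E,
    is in state 1 - E after i unobserved slots. p_stay[s] is the
    self-transition probability of state s."""
    p = 0.0                                  # i = 0: state known exactly
    for _ in range(i):
        # either an earlier flip persists, or a fresh flip occurs now
        p = p * p_stay[1 - E] + (1 - p) * (1 - p_stay[E])
    return p
```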

  15. Imperfect State Information (contd.) • Transformed MDP State Space – (L, E, t) • L – current energy level • E – state of event process last observed • t – number of time slots spent in the inactive state • Optimal Policy Structure [Plot: wakeup threshold curves f0 – (L, 0, t) and f1 – (L, 1, t); δ1 = c = 1, δ2 = 2, correlation probabilities 0.6 and 0.9, q = 0.1] • On period – aggressive wakeup • Off period – reluctant wakeup

  16. Practical Algorithm • Correlation-dependent Wakeup (CW) • Activate during On periods; deactivate during Off • Sleep Interval (SI*) • Derived using energy balance during a renewal interval • ε-optimal, with ε ~ O(1/β); β = δ2/δ1 [Timeline figure: sensor activity (A – active, I – inactive) versus event state (Y – On, N – Off) over time t, with sleep duration SI and renewal instants t1, t2]
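A toy simulation of the CW policy under a given sleep interval SI: stay active through On periods, go to sleep when an Off period is observed, then wake to re-observe. Energy dynamics and the energy-balance derivation of SI* are omitted for brevity, and all names and parameter values here are illustrative assumptions:

```python
import random

def simulate_cw(SI, T=100000, p_stay=(0.6, 0.9), seed=1):
    """Fraction of event slots detected by the CW policy (toy model)."""
    rng = random.Random(seed)
    event, active, sleep_left = 0, True, 0
    detected = events = 0
    for _ in range(T):
        events += event
        if active:
            detected += event
            if event == 0:                   # Off period observed: sleep
                active, sleep_left = False, SI
        else:
            sleep_left -= 1
            if sleep_left <= 0:
                active = True                # wake up and re-observe
        if rng.random() >= p_stay[event]:    # correlated event process
            event = 1 - event
    return detected / max(events, 1)

print("fraction detected:", simulate_cw(SI=7))
```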

  17. Simulation Results [Plots: quality of coverage versus sleep interval, with the energy-balancing sleep interval SI* marked; δ1 = c = 1, δ2 = 6, q = 0.5, K = 2400; correlation probabilities 0.6 and 0.9 give SI* = 7, and 0.7 and 0.8 give SI* = 18]

  18. Contributions • Structure of Optimal Policy • EB policy is optimal under perfect state information • EB policy is near-optimal under imperfect state information • Coauthors • Prof. Koushik Kar, Rensselaer Polytechnic Institute • Prof. Ananth Krishnamurthy, University of Wisconsin–Madison • 5th International Symposium on Modeling and Optimization in Mobile Ad hoc and Wireless Networks (WiOpt), April 2007 • ACM/Kluwer Wireless Networks, 2008 (accepted)

  19. Q & A THANK YOU !!

  20. Policies – AW, CW • AW (Aggressive Wakeup) Policy • Activate whenever L ≥ δ1 + δ2 (a one-line sketch follows below) • Ignores temporal correlations • Optimal if no temporal correlations • CW (Correlation-dependent Wakeup) Policies • Activate during On periods; deactivate during Off • Upper Bound (U*CW) • State unobservable while inactive • Performance depends upon sleep duration • How long should the sensor sleep?
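The AW rule is a one-line predicate; this sketch uses the slides' example costs as assumed defaults:

```python
def aw_activate(L, d1=1, d2=2):
    """AW: activate whenever the battery covers one worst-case active
    slot (operational plus detection cost), ignoring correlations."""
    return L >= d1 + d2
```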

  21. MDP – State Transitions • State (L, 1): L ≥ δ1 + δ2 • Action u = 1 (activate) • Next state: • (L + qc – δ1 – δ2, 1) with probability q·pcon • (L + qc – δ1, 0) with probability q·(1 – pcon) • (L – δ1 – δ2, 1) with probability (1 – q)·pcon • (L – δ1, 0) with probability (1 – q)·(1 – pcon) • Reward r = 1 with probability pcon; 0 otherwise • Action u = 0 (deactivate) • Next state: • (L + qc, 1) with probability q·pcon • (L + qc, 0) with probability q·(1 – pcon) • (L, 1) with probability (1 – q)·pcon • (L, 0) with probability (1 – q)·(1 – pcon) • Reward r = 0
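These transition lists translate directly into code. This sketch reads the slide's recharged level L + qc as "amount c arrives with probability q" (consistent with slide 7), and returns the next-state distribution together with the expected reward:

```python
def transitions_from_on(L, u, d1=1, d2=2, c=1, q=0.1, p_con=0.8):
    """Slide 21's transitions from state (L, 1), for action u in {0, 1}.
    Returns ([(next_state, probability), ...], expected_reward)."""
    if u == 1:                                # activate
        nxt = [((L + c - d1 - d2, 1), q * p_con),
               ((L + c - d1, 0), q * (1 - p_con)),
               ((L - d1 - d2, 1), (1 - q) * p_con),
               ((L - d1, 0), (1 - q) * (1 - p_con))]
        return nxt, p_con                     # r = 1 with probability p_con
    nxt = [((L + c, 1), q * p_con),           # deactivate: no discharge
           ((L + c, 0), q * (1 - p_con)),
           ((L, 1), (1 - q) * p_con),
           ((L, 0), (1 - q) * (1 - p_con))]
    return nxt, 0.0
```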
