
Tracking and Collaborative Signal Processing



  1. Tracking and Collaborative Signal Processing Wireless Ad-hoc Sensor Networks EE 206A Louane Kuang Jonathan Hui

  2. Outline • Basics of Ad-hoc Sensor Networks • Relatively immobile • Severely power constrained • Large scale • Embedded processing capabilities • Sensors • Acoustic/seismic • Infrared, magnetic, imaging

  3. Topics of Presentation • Tracking and Collaborative Signal Processing • Applications: • Battlefield tactical • Environmental monitoring

  4. Paper Topics • Source Localization & Beamforming • Information-Driven Dynamic Sensor Collaboration • Detection & Classification • Tracking and Reasoning with Relations

  5. Detection, Classification and Tracking of Targets • Detection and tracking of a single target requires participation and handoff by several nodes • Target classification is needed for simultaneous detection and tracking of multiple targets • Inter-node cooperation is inherent in detection, tracking and classification algorithms and can be achieved through collaborative signal processing (CSP)

  6. CSP • A single node covers a limited field, so multiple sensor nodes must cooperate to process space-time signals together and obtain a global view • Distributed processing: raw signals are processed locally, and only requested higher-level information is transmitted • Goal-oriented, on-demand processing: information is forwarded and processing takes place only upon request; otherwise, nodes enter an energy-conserving standby mode • Information fusion: data exchanged with distant nodes are of lower bandwidth than data exchanged with closer neighbors • Multi-resolution processing: the sampling resolution depends on the required CSP task

  7. Detection • The output of a detector is sampled periodically • An output higher than a false-alarm threshold signals an event • The threshold is calculated from the detector's noise output and adapts dynamically as new noise readings arrive • Upon detection of an event, data about the event is sent to manager nodes, including: the time when the threshold was first exceeded, the time at which the closest point of approach (CPA) is reached, the signal detected at the CPA, and the total duration for which the detector output remained above the threshold
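A minimal sketch of such an adaptive-threshold detector; the running noise estimate, the threshold multiplier k, and the smoothing factor are illustrative choices, not values from the paper:

```python
# Adaptive-threshold event detector sketch (constants are illustrative).
def detect_events(outputs, times, k=4.0, noise_init=1.0, smoothing=0.05):
    """Return (start_time, cpa_time, cpa_value, duration) for each detected event."""
    noise, events, in_event = noise_init, [], False
    for t, y in zip(times, outputs):
        threshold = k * noise                 # dynamic false-alarm threshold
        if y > threshold:
            if not in_event:                  # threshold first exceeded: event starts
                in_event, start, cpa_t, cpa_y = True, t, t, y
            elif y > cpa_y:                   # strongest output so far = closest point of approach
                cpa_t, cpa_y = t, y
        else:
            noise = (1 - smoothing) * noise + smoothing * abs(y)  # adapt to new noise readings
            if in_event:                      # output dropped below threshold: event ends
                events.append((start, cpa_t, cpa_y, t - start))
                in_event = False
    return events
```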

  8. Localization • Energy measurements from multiple (4 or more) nodes are used • More accurate localization requires time synchronization, which is costly in low-power sensor nets • Assumptions made by beamforming and other coherent localization algorithms may not hold in field environments • Energy-based localization eliminates the need to exchange time-series data, which may consume too much energy • Notation: y_i(t) = energy reading of the i-th sensor, r(t) = unknown coordinates of the source, r_i = coordinates of the i-th sensor, s(t) = unknown target signal energy, α = decay exponent
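Using the notation above, the energy-decay model that this approach relies on can be written as follows (a reconstruction from the definitions on the slide; sensor gains and additive noise are omitted):

$$ y_i(t) \approx \frac{s(t)}{\lVert r(t) - r_i \rVert^{\alpha}} $$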

  9. Localization cont’d • Compute the ratio y_i(t)/y_j(t) for all pairs i and j • This eliminates s(t) and defines a circle on which r(t) must lie • Estimate r(t) by fitting the target coordinate to the resulting circles, where (x, y) is the target coordinate, (o_{i,x}, o_{i,y}) is the center of the i-th circle, and ρ_i is its radius (a sketch of this estimator follows below)
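A minimal sketch of this energy-ratio localization, assuming unit sensor gains, a known decay exponent α, and noiseless readings; the least-squares fit to the circles is one reasonable estimator, not necessarily the exact one used in the paper:

```python
# Energy-ratio localization sketch (assumes unit gains and a known decay exponent).
import numpy as np
from itertools import combinations
from scipy.optimize import least_squares

def ratio_circles(positions, energies, alpha=2.0):
    """For each sensor pair (i, j), the energy ratio y_i/y_j eliminates s(t)
    and constrains the source to a circle; return its center and radius."""
    positions = np.asarray(positions, dtype=float)
    centers, radii = [], []
    for i, j in combinations(range(len(positions)), 2):
        k2 = (energies[i] / energies[j]) ** (2.0 / alpha)   # squared distance ratio
        if abs(1.0 - k2) < 1e-6:
            continue                      # ratio ~1: the locus degenerates to a line
        ri, rj = positions[i], positions[j]
        centers.append((rj - k2 * ri) / (1.0 - k2))                           # circle center
        radii.append(np.sqrt(k2) * np.linalg.norm(ri - rj) / abs(1.0 - k2))   # circle radius
    return np.array(centers), np.array(radii)

def localize(positions, energies, alpha=2.0):
    """Estimate the target coordinate (x, y) by a least-squares fit to the circles."""
    positions = np.asarray(positions, dtype=float)
    centers, radii = ratio_circles(positions, energies, alpha)
    residuals = lambda p: np.linalg.norm(centers - p, axis=1) - radii
    return least_squares(residuals, positions.mean(axis=0)).x
```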

  10. Tracking • Geographic positions of nodes matter more than high-level addresses (e.g., a simple way to approximate the location of a target is the position of the node that detected the strongest signal from it) • A geographic region is divided into cells, and manager nodes selected from the nodes in each cell coordinate sensing within that cell: (a) Nodes that detect a target are called active nodes, and the cell they are in is the active cell; active nodes report their sensing data to their manager nodes (b) Manager nodes use localization algorithms to find the target position (c) Manager nodes use past target positions to predict future target locations (d) Based on the predicted positions, new cells are formed in regions the target is likely to enter, and some of them are activated (e) If a new cell detects the target, a handoff occurs between the new active cell and the previous one, and steps (a) to (e) are repeated by the new cell (a toy sketch of steps (c)-(d) follows below)
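A toy sketch of steps (c) and (d); the constant-velocity prediction and the square grid of cells are illustrative choices, not details from the slide:

```python
# Prediction and cell-activation sketch (constant-velocity model, square cells assumed).
import numpy as np

def predict_next(positions, dt=1.0):
    """Predict the next target position from the last two position estimates."""
    p_prev, p_last = np.asarray(positions[-2]), np.asarray(positions[-1])
    velocity = (p_last - p_prev) / dt
    return p_last + velocity * dt

def cells_to_activate(predicted, cell_size=100.0, margin=1):
    """Grid cells (row, col) around the predicted position that should be activated."""
    row, col = int(predicted[1] // cell_size), int(predicted[0] // cell_size)
    return {(row + dr, col + dc)
            for dr in range(-margin, margin + 1)
            for dc in range(-margin, margin + 1)}
```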

  11. Classification • Single node classification algorithms • k-nearest neighbor (kNN) • maximum likelihood (ML) • Support Vector Machine (SVM) • Classifiers chosen to maximize differences between target classes • Power spectral density (PSD) of time series data • Data generated from seismic and acoustic readings for binary classification of tracked and wheeled vehicles

  12. Classification cont’d • {x : x ∈ R^N} is the set of feature vectors • Ω = {ω_1, ω_2, ..., ω_m} is the set of m target classes; ω_c is a class • p(ω_c) is the prior probability that x ∈ ω_c • p(ω_c|x) is the posterior probability of ω_c given x • Decide x ∈ ω_i if p(ω_i|x) > p(ω_j|x) for all j ≠ i • Approximate this rule with discriminant functions: choose g_i(x) such that g_i(x) > g_j(x) whenever p(ω_i|x) > p(ω_j|x) for j ≠ i
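The decision rule above can be written compactly; a minimal sketch, where each g_i is whatever discriminant (k-NN score, likelihood, SVM output) a node happens to use:

```python
# Discriminant-based decision rule: assign x to the class with the largest g_i(x).
def classify(x, discriminants):
    """discriminants: list of callables g_i(x) approximating p(omega_i | x)."""
    return max(range(len(discriminants)), key=lambda i: discriminants[i](x))
```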

  13. k-NN • {p_k} is a set of prototypes • Find the distance from the test vector to every prototype • Identify the k prototypes closest to the test vector • Combine their labels to generate the class label for the test vector • Does not scale well as the number of prototypes grows
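A short sketch of the k-NN rule above; Euclidean distance and majority voting are assumptions made here:

```python
# k-nearest-neighbor sketch: distance to every prototype, then a majority vote.
import numpy as np
from collections import Counter

def knn_classify(x, prototypes, labels, k=3):
    """prototypes: (M, N) array of prototype vectors p_k; labels: their class labels."""
    distances = np.linalg.norm(np.asarray(prototypes) - x, axis=1)  # distance to every prototype
    nearest = np.argsort(distances)[:k]                             # k closest prototypes
    return Counter(labels[i] for i in nearest).most_common(1)[0][0] # combine by majority vote
```

Every query touches every prototype, which is the scalability limit noted in the last bullet.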

  14. Maximum Likelihood • Likelihood function: G_i(x|θ_i), where θ_i = [m_i1, ..., m_iP, Σ_i1, ..., Σ_iP] collects the mean and covariance parameters of the P Gaussian mixture densities for class ω_i • Discriminant function: g_i(x) = G_i(x|θ_i) p(ω_i) • p(ω_i) can be approximated by the number of training vectors for class ω_i
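A sketch of the maximum-likelihood discriminant g_i(x) = G_i(x|θ_i) p(ω_i); equal weights across the P mixture components are an assumption made here for brevity:

```python
# ML classification sketch: Gaussian-mixture likelihood times the class prior.
import numpy as np
from scipy.stats import multivariate_normal

def gmm_likelihood(x, means, covs):
    """G_i(x | theta_i): P-component Gaussian mixture with equal weights (assumed)."""
    return np.mean([multivariate_normal.pdf(x, mean=m, cov=c)
                    for m, c in zip(means, covs)])

def ml_classify(x, class_params, priors):
    """class_params: one (means, covs) pair per class; priors: p(omega_i),
    approximated by the fraction of training vectors in each class."""
    scores = [gmm_likelihood(x, m, c) * p for (m, c), p in zip(class_params, priors)]
    return int(np.argmax(scores))
```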

  15. SVM • A linear classifier is applied in the space induced by a symmetric SVM kernel, which represents a set of nonlinear transformations mapping the N-dimensional input vector to an M-dimensional feature space (M > N) • Each class uses its own separately trained SVM, whose output gives an approximation of p(ω_i|x) for class ω_i
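An illustration of the one-SVM-per-class idea using scikit-learn; the library, the RBF kernel, and the probability calibration are choices made here, not details from the slide:

```python
# One SVM per class; the calibrated output approximates p(omega_i | x).
import numpy as np
from sklearn.svm import SVC

def train_per_class_svms(X, y, classes):
    """Train one binary (class-vs-rest) kernel SVM per class."""
    y = np.asarray(y)
    return {c: SVC(kernel="rbf", probability=True).fit(X, (y == c).astype(int))
            for c in classes}

def svm_classify(x, svms):
    """Assign x to the class whose SVM gives the highest membership probability."""
    probs = {c: clf.predict_proba([x])[0][1] for c, clf in svms.items()}
    return max(probs, key=probs.get)
```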

  16. Classification Data • Binary classification between wheeled and tracked vehicles using • Low-bandwidth seismic data • Wideband acoustic data

| Classifier | Correct: Tracked | Correct: Wheeled | Tracked as Wheeled | Wheeled as Tracked |
|---|---|---|---|---|
| k-NN | 87.8% / 89.55% | 94.26% / 95.2% | 12.2% / 10.45% | 5.74% / 4.8% |
| ML | 81.23% / 92.27% | 77.6% / 88.97% | 18.77% / 7.73% | 22.4% / 11.03% |
| SVM | 94.09% / 92.5% | 97% / 96.45% | 5.91% / 7.5% | 3% / 3.55% |

(Each cell lists the two reported values, one per data set.)

  17. Sensing, Tracking, and Reasoning with Relations • Relations refer to spatial or temporal connections between objects and/or environmental features • Relations allow high-level user queries to be mapped to low-level signal processing that minimizes the use of resources • Large-scale behaviors and relations of objects relative to their environment or to other objects may be easier to ascertain than exact object positions or motion • Simple global queries can be answered without active data collection by aggregating the partial information that each node has stored locally

  18. Example Uses of Relational Sensing • Who is the leader? (positional relation) • Am I surrounded? (geometric relation) • s1: f, e2 and e3 form a counterclockwise (CCW) triangle • s2: f, e3 and e1 form a counterclockwise (CCW) triangle • s3: f, e1 and e2 form a counterclockwise (CCW) triangle • Therefore e1, e2 and e3 form a CCW triangle enclosing f, so f is indeed surrounded (see the orientation-test sketch below)
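The enclosure test in this example reduces to three orientation (cross-product) tests; a minimal sketch:

```python
# "Am I surrounded?" via orientation tests: f lies inside the triangle e1 e2 e3
# exactly when (f, e2, e3), (f, e3, e1) and (f, e1, e2) are all counterclockwise.
def ccw(a, b, c):
    """True if points a, b, c (2-D tuples) are in counterclockwise order."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]) > 0

def surrounded(f, e1, e2, e3):
    """Combine the three local observations s1, s2, s3 from the slide."""
    return ccw(f, e2, e3) and ccw(f, e3, e1) and ccw(f, e1, e2)
```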

  19. Kinetic Data Structure (KDS) • Incremental update is a more efficient way to track the attribute values of a target • Objects are allowed to move as long as the relations among them stay valid • Certificates: elementary relations that certify the value of an attribute • KDS: a data structure for maintaining data about moving objects incrementally, using a set of supporting certificates • A KDS algorithm finds alternate certificates (relations) to support an attribute when sensors cannot support the current ones or the current relations have failed • The goal is to find certificates that change incrementally and locally, exploiting the coherence of motion • More certificates means quicker computation of attribute values, but also a greater likelihood of certificate failures, which take more processing to repair (a toy illustration follows below)
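A toy illustration of the certificate idea (not the paper's algorithm): the attribute "closest sensor to the target" is supported by pairwise distance certificates, and work is done only when a certificate fails:

```python
# Toy KDS-style maintenance of "closest sensor": repair only on certificate failure.
import numpy as np

class ClosestSensorTracker:
    def __init__(self, sensor_positions):
        self.sensors = np.asarray(sensor_positions, dtype=float)
        self.best = None                      # index of the current closest sensor

    def update(self, target):
        """Check the certificates d(best) <= d(i); recompute the attribute only
        when one of them fails (the target has crossed a boundary)."""
        d = np.linalg.norm(self.sensors - np.asarray(target), axis=1)
        if self.best is None or np.any(d < d[self.best]):   # a certificate failed
            self.best = int(np.argmin(d))                    # incremental repair
        return self.best
```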

  20. KDS Cont’d • The KDS model incorporates the costs for sensing and communication in sensor nodes • KDS is useful for • Coordinating groups of sensors during target tracking • Motion prediction of the target to facilitate formation of tracking groups • Creating and maintaining clusters of moving nodes • Directing communication routes throughout the sensor net either as a relay for outside user nodes or for sensor nodes within the net

  21. Probabilistic Relational Reasoning • KDS needs to be enhanced with tolerance for uncertainty in sensing the target location • The system's belief state about the target location is represented as a probability density function, which is translated into a set of weighted particles; each particle represents a position and the corresponding weight gives its probability • The distribution has to take into account not only the target location but also other attributes associated with the target • The distribution is factored into parts with independent uncertainties, each represented by its own particles
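A sketch of the weighted-particle representation of the belief state described above; the likelihood function used to reweight the particles is left abstract:

```python
# Belief over target position as weighted particles: (position, weight) pairs.
import numpy as np

def belief_mean(particles, weights):
    """Point estimate of the target position: the weighted mean of the particles."""
    return np.average(np.asarray(particles, dtype=float), axis=0, weights=weights)

def reweight(particles, weights, likelihood):
    """Fold a new observation into the belief: scale each particle's weight by the
    likelihood of the observation at that particle's position, then renormalize."""
    w = np.array([wi * likelihood(p) for p, wi in zip(particles, weights)], dtype=float)
    return w / w.sum()
```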

  22. RDBN • Dynamic Bayesian Networks (DBNs) can be used to model dependencies among the states an object goes through during motion, adapted here to the sensor environment • Relational Dynamic Bayesian Networks (RDBNs) deal with uncertainty and change in the relations between objects, in the identities of the objects, and in the number of participating objects • RDBNs can be integrated with KDS • The KDS algorithm finds certificates with confidence levels specified by the RDBN model • The RDBN belief-state representation can be modified by the KDS so that the belief state is more easily matched to good certificates, improving its accuracy

  23. Issues • Variability in data • Sufficiently accurate time synchronization and position of sensors are difficult to obtain even with GPS • Doppler shifts due to motion may create spectral variations that inhibit accurate classification of targets • Data used to train classifiers may not resemble actual data obtained in the field

  24. Keys to Tracking • Leverage the distributed computing environment with respect to: • Sensor networks enable dense spatial sampling • Asynchronous • Optimization • Information Gain • Resource Cost

  25. Information-Driven Dynamic Sensor Collaboration • Collaborative Signal & Information Processing (CSIP) • dynamically determine: • who should sense • what needs to be sensed • who the information must be passed on to

  26. Assumptions about Sensors • Have local sensing & communication range • Physical phenomenon of interest • Can locally estimate cost of sensing/processing/communicating • Monitor power usage

  27. Tracking Scenario • Moving vehicle in two-dimensional space • No road constraint • No prior knowledge can be exploited • Vehicle accelerates/decelerates between sensors • Many sensors potentially make simultaneous observations • This can potentially flood the network with information

  28. Sensor Selection • We wish to incrementally update the belief by incorporating measurements from other nearby sensors • Not all sensors provide useful information that improves the estimate • The task is to select an optimal subset of the available sensors and an optimal order in which to incorporate their measurements

  29. Collaboration • Detection quality • Track Quality • Scalability • Survivability • Resource Usage

  30. Information Driven Sensor Querying (IDSQ) • Bayesian estimation problem • x - the target state we wish to estimate • z_i - sensor measurement at location i • p(x | z_1, …, z_{j-1}) - current estimate (belief) • p(x | z_1, …, z_{j-1}, z_j) - new estimate after incorporating the latest measurement z_j • Select the sensor j that provides the greatest improvement at the lowest cost
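A sketch of the underlying sequential Bayesian update, with the belief kept on a discrete grid of candidate positions; the grid and the sensor likelihood model are assumptions for illustration:

```python
# Sequential Bayesian update: p(x | z_1..z_j) is proportional to p(z_j | x) * p(x | z_1..z_{j-1}).
import numpy as np

def update_belief(prior, grid, z_j, likelihood):
    """prior: probabilities over the grid points; likelihood(z, x): sensor model p(z | x)."""
    posterior = prior * np.array([likelihood(z_j, x) for x in grid])
    return posterior / posterior.sum()
```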

  31. IDSQ Continued… • Optimization problem: M(p(x | z_1, …, z_j)) = α · Utility(p(x | z_1, …, z_j)) - (1 - α) · Cost(z_j) • Utility(·) - information utility measure • characterizes the usefulness of the data provided • Cost(·) - cost of resources • cost of obtaining the information (link bandwidth, transmission latency, power reserve) • α - relative weight of utility versus cost
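A sketch of sensor selection under this objective; utility_of, cost_of, and simulate_update are placeholders standing in for the measures discussed on the following slides:

```python
# Pick the sensor j maximizing M = alpha * Utility(updated belief) - (1 - alpha) * Cost(z_j).
def select_sensor(belief, candidates, utility_of, cost_of, simulate_update, alpha=0.7):
    def objective(j):
        predicted = simulate_update(belief, j)   # belief after incorporating sensor j's data
        return alpha * utility_of(predicted) - (1 - alpha) * cost_of(j)
    return max(candidates, key=objective)
```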

  32. Information Utility • Examples of how the utility measure can be defined: • Information-theoretic measure: entropy • Mahalanobis distance measure • Measures on the expected posterior • Apply one of the above to a simulated measurement incorporated into the belief state

  33. Information Utility: Entropy • A natural choice for Utility(·) is statistical entropy (a measure of the randomness of a random variable) • The smaller the entropy, the more certain we are about the random variable • For example: Utility(p) = -H(p(x)), the negative entropy of the belief
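For a discretized belief p, this utility is simply the negative entropy; a minimal sketch:

```python
# Entropy-based utility: Utility(p) = -H(p); a more peaked belief scores higher.
import numpy as np

def entropy_utility(p, eps=1e-12):
    p = np.asarray(p, dtype=float)
    return float(np.sum(p * np.log(p + eps)))   # equals -H(p)
```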

  34. Information Utility: Mahalanobis Distance • Works well when the current belief state is well approximated by a Gaussian distribution: score sensor j by the Mahalanobis distance (x_j - x̄)^T Σ^{-1} (x_j - x̄) • x_j is the position of sensor j • x̄ is the mean of the belief (target position estimate) • Σ is the covariance of the belief
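A sketch of the Mahalanobis-distance utility under the Gaussian approximation; taking the negative of the squared distance (so that nearer sensors score higher) is an assumption about the sign convention:

```python
# Mahalanobis-distance utility under a Gaussian belief N(x_bar, sigma).
import numpy as np

def mahalanobis_utility(x_j, x_bar, sigma):
    d = np.asarray(x_j, dtype=float) - np.asarray(x_bar, dtype=float)
    return -float(d @ np.linalg.solve(sigma, d))   # -(x_j - x_bar)^T sigma^-1 (x_j - x_bar)
```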

  35. Sensor Selection continued

  36. [Figures: estimation error for nearest-neighbor selection, for Mahalanobis-distance selection, and for entropy-minimizing selection]

  37. Additional Considerations • Sequential versus Concurrent Information exchange • node-to-node versus leader-to-leader • Parallel information exchange • Tracking Robustness • sensor placement density • sensing range • communication range

  38. Sources • J.C. Chen, Kung Yao, and R.E. Hudson, "Source localization and beamforming," IEEE Signal Processing Magazine, vol. 19, no. 2, pp. 30-39, March 2002. • Dan Li, K.D. Wong, Yu Hen Hu, and A.M. Sayeed, "Detection, classification, and tracking of targets," IEEE Signal Processing Magazine, vol. 19, no. 2, pp. 17-29, March 2002. • Feng Zhao, Jaewon Shin, and J. Reich, "Information-driven dynamic sensor collaboration," IEEE Signal Processing Magazine, vol. 19, no. 2, pp. 61-72, March 2002. • L.J. Guibas, "Sensing, tracking and reasoning with relations," IEEE Signal Processing Magazine, vol. 19, no. 2, pp. 73-85, March 2002. • S. Kumar, F. Zhao, and D. Shepherd, "Collaborative signal and information processing in microsensor networks," IEEE Signal Processing Magazine, vol. 19, no. 2, pp. 13-14, March 2002.
