Distributed Target Localization via Spatial Sparsity -or- Array Processing Meets Compressive Sensing. Volkan Cevher, Marco Duarte, Richard Baraniuk
Array Processing • Goal: Localize targets by fusing measurements from an array of sensors • collect time signal data • requires potentially high-rate (Nyquist) sampling • communicate signals to a central fusion center • potentially large communication burden • solve an optimization problem • e.g., maximum-likelihood estimation (MLE)
Digital Data Acquisition • Foundation: Shannon sampling theorem: “if you sample densely enough (at the Nyquist rate), you can perfectly reconstruct the original analog data” (figures: uniform sampling in time and in space)
Sensing by Sampling • Long-established paradigm for digital data acquisition • uniformly sample data at Nyquist rate (2x Fourier bandwidth) • too much data!
Sensing by Sampling • Long-established paradigm for digital data acquisition • uniformly sample data at Nyquist rate (2x Fourier bandwidth) • compress data (JPEG, JPEG2000, …) • pipeline: sample → compress → transmit/store → receive → decompress
Sparsity / Compressibility (figures: image pixels → large wavelet coefficients (blue = 0); wideband signal samples in time → large Gabor (time-frequency) coefficients)
Sample / Compress • Long-established paradigm for digital data acquisition • uniformly sample data at Nyquist rate • compress data via a sparse / compressible wavelet transform
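As a miniature illustration of the sample-then-compress paradigm (not from the slides), the sketch below samples a smooth signal, keeps only its K largest transform coefficients, and discards the rest; an FFT stands in for the wavelet transform, and all sizes are illustrative:

```python
import numpy as np

N, K = 256, 8                          # illustrative: N samples, keep K coefficients
t = np.linspace(0, 1, N, endpoint=False)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)

X = np.fft.rfft(x)                     # stand-in for the wavelet transform
keep = np.argsort(np.abs(X))[-K:]      # indices of the K largest coefficients
Xc = np.zeros_like(X)
Xc[keep] = X[keep]                     # discard everything else
x_hat = np.fft.irfft(Xc, n=N)          # decompress

rel_err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
```

Because this toy signal is exactly sparse in frequency, the K retained coefficients reconstruct it almost perfectly, even though N - K values were thrown away.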
What’s Wrong with this Picture? • Why go to all the work to acquire N samples only to discard all but K pieces of data?
What’s Wrong with this Picture? • sampling: linear processing, linear signal model (bandlimited subspace) • compression: nonlinear processing, nonlinear signal model (union of subspaces)
Compressive Sensing • Directly acquire “compressed” data • Replace samples by more general “measurements” • pipeline: compressive sensing → transmit/store → receive → reconstruct
Sampling • Signal x is K-sparse in basis/dictionary Ψ • WLOG assume sparse in the space domain (figure: sparse signal x with K nonzero entries)
Sampling • Signal x is K-sparse in basis/dictionary Ψ • WLOG assume sparse in the space domain • Samples y = Φx (figure: K-sparse signal x, measurement matrix Φ, M measurements y)
Compressive Sampling • When data is sparse/compressible, can directly acquire a condensed representation with no/little information loss through linear dimensionality reduction y = Φx (figure: sparse signal x with K nonzero entries, M measurements y)
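A minimal numpy sketch of such a linear dimensionality reduction (illustrative sizes, not from the slides): a K-sparse signal of length N is condensed to M << N measurements in one matrix multiply.

```python
import numpy as np

N, K, M = 256, 5, 40                  # illustrative sizes, M << N
rng = np.random.default_rng(0)

x = np.zeros(N)                       # K-sparse signal in the space domain
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)

Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # measurement matrix
y = Phi @ x                           # M condensed measurements, not N samples
```

Only the M entries of y are ever acquired or stored; the N-sample signal is never explicitly formed in a true compressive sensor.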
How Can It Work? • Projection Φ not full rank… and so loses information in general • Ex: infinitely many x’s map to the same y
How Can It Work? • Projection Φ not full rank… and so loses information in general • But we are only interested in sparse vectors
How Can It Work? • Projection Φ not full rank… and so loses information in general • But we are only interested in sparse vectors • Φ is effectively M × K
How Can It Work? • Projection Φ not full rank… and so loses information in general • But we are only interested in sparse vectors • Design Φ so that each of its M × K submatrices is full rank
How Can It Work? • Goal: Design Φ so that each of its M × 2K submatrices is full rank • the difference between two K-sparse vectors is 2K-sparse in general • preserves information in K-sparse signals • Restricted Isometry Property (RIP) of order 2K
Unfortunately… • Goal: Design Φ so that each of its M × 2K submatrices is full rank (Restricted Isometry Property, RIP) • Unfortunately, this is a combinatorial, NP-complete design problem
Insight from the 80’s [Kashin, Gluskin] • Draw Φ at random • iid Gaussian • iid Bernoulli … • Then Φ has the RIP with high probability as long as M = O(K log(N/K)) • M × 2K submatrices are full rank • stable embedding for sparse signals • extends to compressible signals in ℓp balls
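The random-matrix recipe can be spot-checked numerically. In the sketch below (the sizes and the constant in M = O(K log(N/K)) are illustrative choices, not from the slides), an iid Gaussian Φ is drawn and random M × 2K submatrices are verified to stay far from rank deficient:

```python
import numpy as np

N, K = 256, 5
M = int(np.ceil(2 * K * np.log(N / K)))   # M = O(K log(N/K)); constant 2 is a guess
rng = np.random.default_rng(1)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)

# Spot-check: random M x 2K submatrices should be far from rank deficient
smin = np.inf
for _ in range(200):
    cols = rng.choice(N, 2 * K, replace=False)
    s = np.linalg.svd(Phi[:, cols], compute_uv=False)
    smin = min(smin, s[-1])               # track the worst smallest singular value
```

A strictly positive smallest singular value across the sampled submatrices is exactly the "full rank" condition; for a true RIP certificate all submatrices would need checking, which is the NP-complete part.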
Compressive Data Acquisition • Measurements y = random linear combinations of the entries of x • WHP Φ does not distort the structure of sparse signals • no information loss (figure: sparse signal x with K nonzero entries, measurements y)
CS Signal Recovery • Goal: Recover signal x from measurements y • Problem: Random projection Φ not full rank (ill-posed inverse problem) • Solution: Exploit the sparse/compressible geometry of the acquired signal
CS Signal Recovery • Recovery: given y = Φx (ill-posed inverse problem), find x (sparse) • minimize ||x||_2: fast, but wrong • minimize ||x||_0: correct, but slow (combinatorial) • minimize ||x||_1: correct and efficient, a linear program, with only mild oversampling M = O(K log(N/K)) measurements required [Candes, Romberg, Tao; Donoho]
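The ℓ1 recovery can be posed as a linear program by splitting x into positive and negative parts (x = u - v with u, v ≥ 0). The sketch below is one textbook formulation using scipy.optimize.linprog, with all sizes illustrative:

```python
import numpy as np
from scipy.optimize import linprog

N, K, M = 64, 3, 24                   # illustrative sizes
rng = np.random.default_rng(2)
x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
y = Phi @ x

# minimize ||x||_1 s.t. Phi x = y, rewritten as an LP via x = u - v, u, v >= 0:
#   minimize sum(u) + sum(v)  s.t.  [Phi, -Phi] [u; v] = y
c = np.ones(2 * N)
res = linprog(c, A_eq=np.hstack([Phi, -Phi]), b_eq=y, bounds=(0, None))
x_hat = res.x[:N] - res.x[N:]
```

The solution is feasible (it reproduces y exactly) and has ℓ1 norm no larger than the true sparse signal's; with M comfortably above K log(N/K), it typically coincides with x.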
Universality • Random measurements can be used for signals sparse in any basis
Universality • Random measurements can be used for signals sparse in any basis: x = Ψα, where the sparse coefficient vector α has K nonzero entries, so y = Φx = ΦΨα
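A sketch of universality, using an orthonormal DCT as the (assumed, illustrative) sparsity basis Ψ: the same random Φ works for any such basis, because the effective sensing matrix ΦΨ is still random.

```python
import numpy as np
from scipy.fft import idct

N, K, M = 128, 4, 32                            # illustrative sizes
rng = np.random.default_rng(3)

Psi = idct(np.eye(N), axis=0, norm='ortho')     # orthonormal DCT basis matrix
alpha = np.zeros(N)                             # sparse coefficient vector
alpha[rng.choice(N, K, replace=False)] = 1.0
x = Psi @ alpha                                 # x is dense, alpha is K-sparse

Phi = rng.standard_normal((M, N)) / np.sqrt(M)  # same generic random Phi
y = Phi @ x                                     # = (Phi @ Psi) @ alpha
A = Phi @ Psi                                   # effective sensing matrix
```

Recovery then proceeds exactly as before, but solves for the sparse α using A in place of Φ.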
Compressive Sensing • Directly acquire “compressed” data • Replace N samples by M random projections • pipeline: random measurements → transmit/store → receive → recover via linear program
Problem Setup • Discretize space into a localization grid with N grid points • fixes localization resolution • P sensors do not have to be on grid points
Problem Setup • Discretize space into a localization grid with N grid points • Construct localization dictionary • reference sensor i measures signal • for all grid positions n = 1, …, N: assume that the target is at grid position n • for all sensors j = 1, …, P: use the Green’s function to estimate the signal sensor j would measure if the target were at position n
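A toy version of this dictionary construction; the 1/r-attenuated, integer-delay propagation model below is a hypothetical stand-in for the Green's function, and the geometry, rates, and sizes are made up for illustration:

```python
import numpy as np

c, fs, T = 340.0, 1000.0, 64          # wave speed (m/s), sample rate (Hz), record length
rng = np.random.default_rng(4)
s = rng.standard_normal(T)            # signal at the reference sensor

grid = np.array([(gx, gy) for gx in range(5) for gy in range(5)], float)  # N = 25
sensors = np.array([(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)])                  # P = 3

def predicted(sensor, g):
    """Signal this sensor would measure if the target were at grid point g."""
    r = np.linalg.norm(sensor - g) + 1e-3
    d = int(round(fs * r / c))        # propagation delay in samples
    sig = np.zeros(T)
    sig[d:] = s[:T - d] / r           # delayed, 1/r-attenuated copy of s
    return sig

# Column n stacks the predictions of all P sensors for grid position n
Psi = np.column_stack([np.concatenate([predicted(p, g) for p in sensors])
                       for g in grid])
```

Each of the N columns is the full multi-sensor observation predicted for one candidate target position, so a one-target measurement should match (approximately) one column.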
Signal Observation Model (figure: localization grid, localization dictionary, actual sensor measurements)
Localization as Sparse Approximation (figure: localization grid, actual sensor measurements, true target location) • given y, can recover the target location via a linear program
Valid Dictionaries • Sparse approximation works when the columns of the dictionary are mutually incoherent [Vandergheynst, Gribonval, et al.] • True when the target signal has a fast-decaying autocorrelation • Extends to multiple targets with small cross-correlations • Correlations control localization resolution
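Mutual incoherence can be quantified as the largest normalized inner product between distinct dictionary columns; when the columns are shifted copies of a target signature, this is exactly the largest autocorrelation side lobe. A minimal sketch on a random stand-in dictionary (not the actual target-signature dictionary):

```python
import numpy as np

rng = np.random.default_rng(5)
D = rng.standard_normal((100, 30))      # stand-in dictionary, 30 columns
D /= np.linalg.norm(D, axis=0)          # normalize columns to unit norm

G = np.abs(D.T @ D)                     # pairwise column correlations
np.fill_diagonal(G, 0.0)                # ignore each column against itself
mu = G.max()                            # mutual coherence: smaller is better
```

A coherence well below 1 means no two candidate positions produce near-identical predicted observations, which is what makes the sparse approximation well posed.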
Multiple Targets (figure: localization grid, actual sensor measurements, 2 true target locations)
Typical Correlation Functions (figures: autocorrelations of Toyota Prius, Isuzu Rodeo, and Chevy Camaro signatures; cross-correlations of Rodeo vs. Prius, Rodeo vs. Camaro, and Camaro vs. Prius)
Enter CS: ELVIS • Since the localization vector is sparse, acquire and transmit just a few compressive measurements to the fusion center • ELVIS: Enhanced Localization Via Incoherence and Sparsity
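A sketch of the resulting communication pattern, with hypothetical sizes: each sensor ships M_p compressive measurements instead of its full T-sample Nyquist record, so the fusion center receives P·M_p numbers rather than P·T.

```python
import numpy as np

P, T, M_p = 3, 64, 8                    # sensors, record length, measurements each
rng = np.random.default_rng(6)
records = rng.standard_normal((P, T))   # per-sensor Nyquist-rate records

# Each sensor applies its own random matrix and transmits only M_p numbers
Phis = [rng.standard_normal((M_p, T)) / np.sqrt(M_p) for _ in range(P)]
packets = [Phi_p @ r for Phi_p, r in zip(Phis, records)]
y = np.concatenate(packets)             # all the fusion center ever receives
```

Here the communication load drops from P·T = 192 values to P·M_p = 24, an 8x reduction in this toy setting; the fusion center then runs the sparse recovery against the (correspondingly projected) localization dictionary.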
ELVIS • Goal: Localize targets by fusing measurements from an array of sensors • collect time signal data • requires potentially high-rate (Nyquist) sampling • communicate signals to a central fusion center • potentially large communication burden • solve an optimization problem
Field Data Results • Field example: 5-vehicle convoy, 2 HMMWVs and 3 commercial SUVs.
Summary • Compressive sensing • integrates sensing, compression, processing • exploits signal sparsity/compressibility information • enables new sensing modalities, architectures, systems • ELVIS: CS for array processing and localization • array signals can form an incoherent dictionary • sub-Nyquist sampling at each sensor • communication bandwidth usage scales logarithmically with the number of sensors and/or desired resolution • democratic compressive measurements robust to quantization (even 1-bit), noise, and packet loss • universality of compressive measurements enables design/deployment of inexpensive generic sensing hardware