
A Single-letter Characterization of Optimal Noisy Compressed Sensing

Replace samples by more general measurements based on linear projections of a sparse signal with few non-zeros. The measurement process is typically analog and adds Gaussian noise. The talk gives a single-letter characterization of optimal noisy compressed sensing and shows that it is achieved by a belief propagation (BP) decoder.



Presentation Transcript


  1. A Single-letter Characterization of Optimal Noisy Compressed Sensing • Dongning Guo, Dror Baron, Shlomo Shamai

  2. Setting • Replace samples by more general measurements based on a few linear projections (inner products): y = Φx, where x is the sparse signal of length N, y holds the M measurements, and K counts the non-zeros

  3. Signal Model • Signal entry Xn = Bn Un • iid Bn ∼ Bernoulli(ε) → sparse • iid Un ∼ PU • [Diagram: Bernoulli(ε) feeds a multiplier with PU to produce the signal distribution PX]
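To make the two-part signal model concrete, here is a minimal Python sketch. The talk leaves PU general; a standard normal is assumed here, and all variable names are mine:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 1000     # signal length
eps = 0.1    # sparsity rate: P(Bn = 1)

B = rng.random(N) < eps        # iid Bn ~ Bernoulli(eps): support indicators
U = rng.standard_normal(N)     # iid Un ~ PU (assumed N(0,1) for illustration)
x = B * U                      # Xn = Bn * Un: sparse, about eps*N non-zeros
print(f"non-zeros: {B.sum()} (expected about {eps * N:.0f})")
```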

  4. Measurement Noise • Measurement process is typically analog • Analog systems add noise, non-linearities, etc. • Assume Gaussian noise for ease of analysis • Can be generalized to non-Gaussian noise

  5. Noise Model • Noiseless measurements denoted y0 = Φx • Noise z ∼ N(0, σ²I) • Noisy measurements y = y0 + z = Φx + z • Unit-norm columns → SNR equals the noiseless SNR
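A sketch of the measurement model as stated on the slide: unit-norm columns of Φ plus additive Gaussian noise. The dense Gaussian Φ and the specific numbers are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 1000, 300
eps, sigma = 0.1, 0.1                 # sparsity rate, noise std deviation

Phi = rng.standard_normal((M, N))
Phi /= np.linalg.norm(Phi, axis=0)    # normalize columns to unit norm

x = (rng.random(N) < eps) * rng.standard_normal(N)
y0 = Phi @ x                          # noiseless measurements
z = sigma * rng.standard_normal(M)    # Gaussian measurement noise
y = y0 + z                            # noisy measurements
# With unit-norm columns, the measured SNR tracks the noiseless SNR:
print("empirical SNR:", np.sum(y0**2) / np.sum(z**2))
```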

  6. Allerton 2006 [Sarvotham, Baron, & Baraniuk] • Model the measurement process as a channel • Measurements provide information! • [Diagram: source encoder → channel encoder → channel → channel decoder → source decoder, with CS measurement playing the encoder roles and CS decoding the decoder roles]

  7. Single-Letter Bounds • Theorem [Sarvotham, Baron, & Baraniuk 2006]: for a sparse signal with rate-distortion function R(D), there is a lower bound on the measurement rate needed to achieve distortion D at SNR γ • Numerous single-letter bounds • [Aeron, Zhao, & Saligrama] • [Akcakaya & Tarokh] • [Rangan, Fletcher, & Goyal] • [Gastpar & Reeves] • [Wang, Wainwright, & Ramchandran] • [Tune, Bhaskaran, & Hanly] • …
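The slide's theorem formula was rendered as an image and is not recoverable here. As a loudly flagged assumption, the sketch below evaluates the natural separation-style reading, measurement rate ≥ R(D) / C(γ) with C the AWGN capacity per measurement; treat both the form and the numbers as illustrative:

```python
import numpy as np

def rate_lower_bound(R_D, gamma):
    """Hypothetical separation-style bound: each noisy measurement can
    carry at most C = 0.5*log2(1 + gamma) bits, so describing the source
    at R(D) bits per entry needs at least R(D)/C measurements per entry."""
    C = 0.5 * np.log2(1.0 + gamma)
    return R_D / C

# Illustrative numbers: R(D) = 0.5 bits/entry at SNR gamma = 10
print(rate_lower_bound(0.5, 10.0))   # about 0.289 measurements per entry
```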

  8. Goal: Precise Single-letter Characterization of Optimal CS

  9. What Single-letter Characterization? • Ultimately, what can one say about Xn given (Y, Φ)? • The channel posterior p(Xn | Y, Φ) is a sufficient statistic, but very complicated • Want a simple characterization of its quality • Large-system limit: N → ∞ with the measurement rate M/N held constant

  10. Main Result: Single-letter Characterization • Result 1: Conditioned on Xn = xn, the observations (Y, Φ) are statistically equivalent to a scalar Gaussian observation of xn whose SNR is degraded by a factor η → easy to compute… • Estimation quality from (Y, Φ) is just as good as from this noisier scalar observation; the channel posterior suffers only the scalar degradation η
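The "easy to compute" scalar-channel estimate can be sketched for the Bernoulli-Gaussian prior above. Here s2 stands for the degraded scalar noise variance; its exact dependence on η and the SNR γ is the paper's result, so the mapping is an assumption for illustration:

```python
import numpy as np

def scalar_posterior_mean(v, eps, s2):
    """Posterior mean E[X | V=v] for X = B*U, B~Bern(eps), U~N(0,1),
    observed through the scalar channel V = X + W, W~N(0, s2).
    s2 plays the role of the degraded noise variance (assumed mapping)."""
    p1 = np.exp(-v**2 / (2 * (1 + s2))) / np.sqrt(1 + s2)  # likelihood if B=1
    p0 = np.exp(-v**2 / (2 * s2)) / np.sqrt(s2)            # likelihood if B=0
    pi = eps * p1 / (eps * p1 + (1 - eps) * p0)            # P(B=1 | v)
    return pi * v / (1 + s2)          # support probability times Wiener shrinkage

print(scalar_posterior_mean(np.array([0.0, 0.5, 2.0]), eps=0.1, s2=0.1))
```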

  11. Details • η ∈ (0,1) is the fixed point of a single-letter equation coupling η to the prior, the SNR, and the measurement rate • Take-home point: degraded scalar channel • Non-rigorous owing to the replica method with a symmetry assumption • used in CDMA detection [Tanaka 2002, Guo & Verdú 2005] • Related analysis [Rangan, Fletcher, & Goyal 2009] • MMSE estimate (not posterior) using [Guo & Verdú 2005] • extended to several CS algorithms, particularly LASSO
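The slide's fixed-point equation was an image and cannot be recovered from the transcript. Purely as an assumed stand-in, the sketch below iterates a Tanaka/Guo-Verdú-style fixed point of the form η = 1 / (1 + (γ/μ)·mmse(ηγ)), with μ = M/N and the scalar MMSE evaluated by Monte Carlo under the Bernoulli-Gaussian prior:

```python
import numpy as np

rng = np.random.default_rng(2)

def mmse_scalar(snr, eps, n=100_000):
    """Monte Carlo MMSE for X = B*U (B~Bern(eps), U~N(0,1)) observed
    through V = sqrt(snr)*X + W, W~N(0,1)."""
    x = (rng.random(n) < eps) * rng.standard_normal(n)
    u = x + rng.standard_normal(n) / np.sqrt(snr)   # equivalent: X + N(0, s2)
    s2 = 1.0 / snr
    p1 = np.exp(-u**2 / (2 * (1 + s2))) / np.sqrt(1 + s2)
    p0 = np.exp(-u**2 / (2 * s2)) / np.sqrt(s2)
    pi = eps * p1 / (eps * p1 + (1 - eps) * p0)     # P(B=1 | observation)
    xhat = pi * u / (1 + s2)                        # posterior mean
    return np.mean((x - xhat)**2)

def solve_eta(gamma, mu, eps, iters=20):
    """Fixed-point iteration for the degradation eta (assumed form)."""
    eta = 0.5
    for _ in range(iters):
        eta = 1.0 / (1.0 + (gamma / mu) * mmse_scalar(eta * gamma, eps))
    return eta

print(solve_eta(gamma=10.0, mu=0.5, eps=0.1))   # eta in (0,1)
```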

  12. Decoupling

  13. Decoupling Result • Result 2: In the large-system limit, any arbitrary (constant-size) set of L input elements decouples: the joint posterior of Xn1, …, XnL factors into the product of the individual scalar-channel posteriors • Take-home point: "interference" from each individual signal entry vanishes
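A rough numerical illustration of decoupling, using a plain LMMSE estimator as a stand-in for the posterior (not the talk's decoder): the error correlation between two fixed entries should shrink as the system grows. All parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

def error_correlation(N, mu=0.5, eps=0.1, sigma=0.3, trials=4000):
    """Correlation between the LMMSE errors of entries 0 and 1."""
    M = int(mu * N)
    Phi = rng.standard_normal((M, N))
    Phi /= np.linalg.norm(Phi, axis=0)
    var_x = eps                      # Var(Xn) for Bernoulli(eps) * N(0,1)
    # LMMSE filter: G = var_x * Phi^T (var_x * Phi Phi^T + sigma^2 I)^{-1}
    G = var_x * Phi.T @ np.linalg.inv(var_x * Phi @ Phi.T
                                      + sigma**2 * np.eye(M))
    X = (rng.random((trials, N)) < eps) * rng.standard_normal((trials, N))
    Y = X @ Phi.T + sigma * rng.standard_normal((trials, M))
    Err = Y @ G.T - X
    return np.corrcoef(Err[:, 0], Err[:, 1])[0, 1]

for N in (50, 200, 800):
    print(N, error_correlation(N))   # correlation shrinks with N
```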

  14. Sparse Measurement Matrices

  15. Sparse Measurement Matrices [Baron, Sarvotham, & Baraniuk] • LDPC measurement matrix (sparse) • Mostly zeros in Φ; non-zeros ∼ PΦ • Each row contains ≈ Nq randomly placed non-zeros • Fast matrix-vector multiplication → fast encoding / decoding • [Diagram: sparse matrix]
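A minimal sketch of such an LDPC-like sparse matrix using scipy; the ±1 non-zero values stand in for the unspecified PΦ:

```python
import numpy as np
from scipy import sparse

rng = np.random.default_rng(4)

def sparse_measurement_matrix(M, N, q):
    """LDPC-like sparse measurement matrix: each row gets ~N*q randomly
    placed non-zeros (drawn +/-1 here; the talk leaves PΦ general)."""
    rows, cols, vals = [], [], []
    k = max(1, int(round(N * q)))              # non-zeros per row
    for m in range(M):
        idx = rng.choice(N, size=k, replace=False)
        rows.extend([m] * k)
        cols.extend(idx)
        vals.extend(rng.choice([-1.0, 1.0], size=k))
    return sparse.csr_matrix((vals, (rows, cols)), shape=(M, N))

Phi = sparse_measurement_matrix(M=250, N=500, q=0.02)
x = (rng.random(500) < 0.1) * rng.standard_normal(500)
y = Phi @ x                                    # fast sparse mat-vec product
print(Phi.nnz, "non-zeros; density", Phi.nnz / (250 * 500))
```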

  16. CS Decoding Using BP [Baron, Sarvotham, & Baraniuk] • Measurement matrix represented by a bipartite graph between signal entries x and measurements y • Estimate the input iteratively • Implemented via nonparametric BP [Bickson, Sommer, …]
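The authors' decoder is nonparametric BP on the measurement graph; as a compact stand-in (not their algorithm), the sketch below runs an AMP-flavored iteration that alternates a residual step with the Bernoulli-Gaussian posterior-mean denoiser from the scalar sketch above. It assumes roughly unit-norm columns in Phi:

```python
import numpy as np

def iterative_decode(Phi, y, eps, iters=30):
    """AMP-style stand-in for CS-BP; works with dense or scipy-sparse Phi
    whose columns are (approximately) unit norm."""
    M, N = Phi.shape
    xhat = np.zeros(N)
    z = y.copy()
    for _ in range(iters):
        tau2 = np.sum(z**2) / M           # effective noise variance on r
        r = xhat + Phi.T @ z              # pseudo-data: roughly Xn + N(0, tau2)
        # Bernoulli-Gaussian posterior-mean denoiser:
        p1 = np.exp(-r**2 / (2 * (1 + tau2))) / np.sqrt(1 + tau2)
        p0 = np.exp(-r**2 / (2 * tau2)) / np.sqrt(tau2)
        pi = eps * p1 / (eps * p1 + (1 - eps) * p0)
        xnew = pi * r / (1 + tau2)
        # Onsager correction; the denoiser derivative is approximated by
        # its average shrinkage factor for brevity.
        z = y - Phi @ xnew + (N / M) * z * np.mean(pi / (1 + tau2))
        xhat = xnew
    return xhat
```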

  17. Identical Single-letter Characterization w/ BP • Result 3: Conditioned on Xn = xn, the observations (Y, Φ) are statistically equivalent to the same degraded scalar Gaussian channel as before (identical degradation η) • Sparse matrices are just as good • BP is asymptotically optimal!

  18. Decoupling Between Two Input Entries (N=500, M=250, ε=0.1, SNR γ=10) • [Figure: joint density of two decoded entries, illustrating that they decouple]

  19. CS-BP vs Other CS Methods (N=1000, ε=0.1, q=0.02) • [Figure: error versus number of measurements M; the CS-BP curve approaches the MMSE curve]

  20. Conclusion • Single-letter characterization of CS • Decoupling • Sparse matrices just as good • Asymptotically optimal CS-BP algorithm
