Compressive Sensing Lecture notes by Richard G. Baraniuk


Presentation Transcript


  1. Compressive Sensing. Lecture notes by Richard G. Baraniuk, IEEE Signal Processing Magazine, July 2007. Compressive Sensing Tutorial, Part 1. Svetlana Avramov-Zamurovic, January 15, 2009.

  2. Motivation
  • In the vast majority of practical applications, data acquisition is based on the Shannon/Nyquist sampling theorem, which requires a sampling rate of at least twice the message signal bandwidth in order to achieve exact recovery.
  • Source frequency characteristic: This requirement is not practical for the video industry, since the signal bandwidth is very wide and current technology cannot achieve the processing rates needed to satisfy the Shannon/Nyquist sampling theorem. The practical solution bandlimits the signals and prevents aliasing.
  • Channel frequency characteristic: There is a significant class of signals (pictures, for example) that are compressible: not all of the data needs to be transmitted in order to get a 'good enough' representation of the original message. The practical solution introduces lossy compression at the source level.
  • Compressed sensing is a new method to capture and represent compressible signals at a rate well below the Nyquist rate. It
  • employs nonadaptive linear projections (a random measurement matrix),
  • preserves the signal structure (the length of the sparse vectors is conserved), and
  • reconstructs the signal from the projections using an optimization process (the L1 norm).

  3. Classical Approach: Transform Coding vs. Compressed Sensing
  • Transform coding: PICTURE → ORTHOGONALIZATION → SORTING → CODING, with straightforward decoding. All N samples (measurements) are taken, the full set of projections is found, the K largest coefficients are selected by EXHAUSTIVE SEARCH, the remaining N − K coefficients are dumped, and only the K coefficients are coded (K << N).
  • Compressed sensing: PICTURE → CODING, followed by signal reconstruction. Only the significant components are captured: only M (K ≈ M) samples (measurements) are taken. (a) The measurements must be carefully designed, and (b) the original signal (picture) must be sparse. Reconstruction: (1) the system is underdetermined (M < N), (2) the reconstructed signal must have N components, and (3) the L1 norm is used to find the sparse representation. The two pipelines are contrasted in the sketch below.
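A minimal, hedged sketch (not from the slides; all names and sizes are illustrative) contrasting the two pipelines above in Python: transform coding computes all N DCT coefficients and keeps the K largest, while compressed sensing records only M random projections and defers the work to reconstruction.

```python
# Illustrative sketch only: transform coding vs. compressed sensing acquisition.
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(0)
N, K, M = 256, 8, 64                          # assumed sizes, K << M < N

# Test signal that is K-sparse in the DCT domain.
s = np.zeros(N)
s[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)
x = idct(s, norm='ortho')                     # x = Psi s

# Classical transform coding: acquire ALL N samples, compute ALL N projections,
# sort, keep the K largest, and code magnitudes AND locations.
coeffs = dct(x, norm='ortho')
keep = np.argsort(np.abs(coeffs))[-K:]
coded = [(int(i), float(coeffs[i])) for i in keep]

# Compressed sensing: record only M nonadaptive random projections; no search.
Phi = rng.standard_normal((M, N)) / np.sqrt(N)
y = Phi @ x

print(len(coded), "coded (location, value) pairs vs.", y.size, "raw CS measurements")
```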

  4. Inefficiencies of Transform Coding (Classical)
  • The sampling rate required by the Nyquist criterion is high, producing a huge number of samples N.
  • Finding the signal representation in an orthonormal basis (Fourier, etc.) is computationally intensive, since ALL coefficients si must be computed and sorted in order to find the K significant ones.
  • Along with the magnitudes of the K coefficients, their locations need to be coded, introducing overhead.

  5. Definitions

  6. [Figure: measurement model y = Φx = ΦΨs = Θs, with Φ of size M × N and a K-sparse coefficient vector s]
  • Φ (phi): measurement matrix. Ψ (psi): orthonormal basis. Θ (theta): compressed sensing reconstruction matrix.
  • (a) Compressive sensing measurement process with a random Gaussian measurement matrix Φ and a discrete cosine transform (DCT) matrix Ψ. The vector of coefficients s is sparse with K = 4.
  • (b) Measurement process with Θ = ΦΨ. There are four columns that correspond to nonzero si coefficients; the measurement vector y is a linear combination of these columns.
  From IEEE Signal Processing Magazine, July 2007, R. G. Baraniuk, Compressive Sensing.
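As an illustration of the measurement model in this figure, here is a small hedged sketch (dimensions are assumed, not taken from the slide) that builds Φ, a DCT basis Ψ, and Θ = ΦΨ, and checks that y = Φx = Θs for a K-sparse s.

```python
# Illustrative sketch of y = Phi x = Phi Psi s = Theta s (assumed sizes).
import numpy as np
from scipy.fft import idct

rng = np.random.default_rng(1)
N, M, K = 64, 16, 4

Psi = idct(np.eye(N), axis=0, norm='ortho')    # columns = orthonormal DCT basis vectors
Phi = rng.standard_normal((M, N)) / np.sqrt(N) # M x N random Gaussian measurement matrix
Theta = Phi @ Psi                              # compressed sensing reconstruction matrix

s = np.zeros(N)                                # K-sparse coefficient vector (K = 4)
s[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)
x = Psi @ s                                    # signal in the original domain

y = Phi @ x                                    # M measurements
assert np.allclose(y, Theta @ s)               # y is a combination of the K active columns of Theta
```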

  7. Compressive Sensing Solution
  • The measurement matrix must be stable and allow reconstruction of the original signal x (length N) from M measurements.
  • Steps: (1) an orthonormal basis Ψ must be selected, (2) a stable measurement matrix Φ must be designed (CS measures PROJECTIONS of the signal onto a basis), and (3) the signal x has to be reconstructed from the underdetermined system using optimization (the L1 norm). Since only linear combinations (projections) are measured, the sparse components are extracted with an optimization algorithm (L1) under two restrictions: RIP and incoherence.
  • The following conditions must hold in order to find a unique sparse solution.
  • Restricted isometry property (RIP): If x is K-sparse and the K magnitudes and locations are known, then for M > K we can find the solution provided Θ = ΦΨ preserves the length of vectors sharing the same K nonzero coefficients as s: 1 − ε ≤ ||Θv||2 / ||v||2 ≤ 1 + ε, where ε is a small number and v is any vector sharing the same K nonzero coefficients as s.
  • Incoherence: The rows of Φ (measurement matrix) CANNOT sparsely represent the columns of Ψ (orthonormal basis), and vice versa. The measurement matrix MUST be DENSE!
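The RIP condition above can be probed numerically. The sketch below is an informal check, not a proof; for the check the Gaussian entries are scaled to variance 1/M so that the expected ratio is 1. It draws many random vectors v with K nonzero coefficients and verifies that ||Θv||2/||v||2 stays close to 1.

```python
# Informal numerical probe of the RIP-style ratio ||Theta v|| / ||v|| for K-sparse v.
import numpy as np
from scipy.fft import idct

rng = np.random.default_rng(2)
N, M, K = 256, 96, 4

Psi = idct(np.eye(N), axis=0, norm='ortho')     # orthonormal DCT basis
Phi = rng.standard_normal((M, N)) / np.sqrt(M)  # variance 1/M so the expected ratio is 1
Theta = Phi @ Psi

ratios = []
for _ in range(2000):
    v = np.zeros(N)
    v[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)
    ratios.append(np.linalg.norm(Theta @ v) / np.linalg.norm(v))

print(f"ratio range: [{min(ratios):.3f}, {max(ratios):.3f}]")  # both ends close to 1
```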

  8. Direct Construction of the Measurement Matrix (Φ)
  • Verifying the RIP directly requires checking all possible combinations of K nonzero entries of length N.
  • However, BOTH the RIP and incoherence can be achieved by simply selecting Φ randomly.
  • The entries φij are independent and identically distributed (iid) random variables drawn from a Gaussian probability density with zero mean and variance 1/N.
  • The measurements y = Φx are then randomly weighted linear combinations of the elements of x.
  • Properties of an iid Gaussian Φ (M × N):
  • Φ is incoherent with the basis Ψ = I (so Θ = ΦΨ = Φ) with high probability if M ≥ cK log(N/K), where c is a small constant (≈ log2(N/K + 1)), so M << N.
  • Φ is universal: Θ = ΦΨ will be iid Gaussian and have the RIP with high probability regardless of the choice of the orthonormal basis Ψ.
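A short hedged sketch of this recipe (the constant c below is an assumed value for illustration): draw Φ with iid zero-mean Gaussian entries of variance 1/N and size M using the M ≥ cK log(N/K) rule of thumb.

```python
# Illustrative construction of a random Gaussian measurement matrix.
import numpy as np

rng = np.random.default_rng(3)
N, K = 1024, 20
c = 4                                              # assumed small constant, not from the slide
M = int(np.ceil(c * K * np.log(N / K)))            # rule of thumb M >= c K log(N/K)

Phi = rng.normal(loc=0.0, scale=np.sqrt(1.0 / N), size=(M, N))  # zero mean, variance 1/N
print(f"M = {M} measurements for N = {N} samples (M << N)")
```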

  9. Designing a Signal Reconstruction Algorithm
  • The M measurements y, the random measurement matrix Φ, and the basis Ψ are used to reconstruct the compressible signal x (length N), or equivalently its sparse coefficient vector s.
  • Since M < N there are infinitely many solutions, because Θ(s + r) = Θs = y for any vector r in the null space N(Θ). The restriction is that the solution must be sparse.
  • The signal reconstruction algorithm aims to find the signal's sparse coefficient vector in the (N − M)-dimensional translated null space H = N(Θ) + s.
  • L2 norm (energy) minimization: the pseudoinverse gives the solution in closed form, but it DOES NOT find the sparse solution.
  • L0 norm: counts the number of nonzero elements of s. This optimization can recover a K-sparse signal exactly with high probability using only M = K + 1 measurements, but solving it is unstable and NP-complete, requiring exhaustive enumeration of all (N choose K) possible locations of the nonzero entries in s.
  • L1 norm (the sum of the absolute values of all elements): can exactly recover K-sparse signals, and closely approximate compressible signals, with high probability using only M ≥ cK log(N/K) iid Gaussian measurements.
  • This is a convex optimization that reduces to a linear program, known as basis pursuit, with computational complexity about O(N³).
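The contrast between the L2 and L1 solutions can be seen in a few lines. The sketch below is illustrative only, using SciPy's linear-programming solver rather than a dedicated basis-pursuit package: it computes the pseudoinverse solution and then basis pursuit as a linear program by splitting s into its positive and negative parts.

```python
# Illustrative comparison: minimum-L2 (pseudoinverse) vs. L1 basis pursuit via an LP.
import numpy as np
from scipy.fft import idct
from scipy.optimize import linprog

rng = np.random.default_rng(4)
N, M, K = 128, 48, 5

Psi = idct(np.eye(N), axis=0, norm='ortho')
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
Theta = Phi @ Psi

s_true = np.zeros(N)
s_true[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)
y = Theta @ s_true

# L2: closed-form minimum-energy solution -- generally NOT sparse.
s_l2 = np.linalg.pinv(Theta) @ y

# L1 basis pursuit: min ||s||_1 s.t. Theta s = y, as an LP with s = u - w, u, w >= 0.
cost = np.ones(2 * N)
A_eq = np.hstack([Theta, -Theta])
res = linprog(cost, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * N), method='highs')
s_l1 = res.x[:N] - res.x[N:]

print("nonzeros:", int(np.sum(np.abs(s_l2) > 1e-3)), "(L2) vs.",
      int(np.sum(np.abs(s_l1) > 1e-3)), "(L1, close to K =", K, ")")
```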

  10. [Figure: geometry of sparse recovery in R³]
  (a) The subspaces containing two-sparse vectors in R³ lie close to the coordinate axes. (b) Visualization of the L2 minimization, which finds the non-sparse point of contact s between the L2 ball (hypersphere, in red) and the translated measurement-matrix null space (in green). (c) Visualization of the L1 minimization solution, which finds the sparse point of contact s with high probability thanks to the pointiness of the L1 ball.
  From IEEE Signal Processing Magazine, July 2007, R. G. Baraniuk, Compressive Sensing.

  11. Compressive Imaging Camera Architecture
  • Single detector: By time-multiplexing a single detector, we can use a less expensive and yet more sensitive photon detector. A single-detector camera can also be adapted to image at wavelengths that are currently impossible with conventional CCD and CMOS imagers (for example IR, security applications).
  • Universality: Random and pseudorandom measurement bases are universal in the sense that they can be paired with any sparse basis. This allows exactly the same encoding strategy to be applied in a variety of different sensing environments; knowledge of the nuances of the environment is needed only at the decoder. Random measurements are also future-proof: if future research in image processing yields a better sparsity-inducing basis, then the same set of random measurements can be used to reconstruct an even better-quality image.
  • Encryption: A pseudorandom basis can be generated using a simple algorithm according to a random seed. Such encoding effectively implements a form of encryption: the randomized measurements will themselves resemble noise and be meaningless to an observer who does not know the associated seed.
  • Robustness and progressivity: Random coding is robust in that the randomized measurements have equal priority, unlike the Fourier or wavelet coefficients in current transform coders. Thus they allow a progressively better reconstruction of the data as more measurements are obtained; one or more measurements can also be lost without corrupting the entire reconstruction.
  • Scalability: We can adaptively select how many measurements to compute in order to trade off the amount of compression of the acquired image against acquisition time; in contrast, conventional cameras trade off resolution against the number of pixel sensors.
  • Computational asymmetry: Finally, CI places most of its computational complexity in the decoder, which will often have more substantial computational resources than the encoder/imager. The encoder is very simple; it merely computes incoherent projections and makes no decisions.
  From D. Takhar et al., A New Compressive Imaging Camera Architecture Using Optical-Domain Compression.
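To make the encryption and universality points above concrete, here is a tiny hedged sketch (the seed and sizes are made up): the pseudorandom ±1 measurement patterns are fully determined by a seed, so a decoder holding the same seed can regenerate Φ exactly, while the measurements alone look like noise to anyone else.

```python
# Illustrative seeded pseudorandom measurement patterns (encryption / universality).
import numpy as np

def measurement_patterns(seed, M, N):
    """Regenerate the same M pseudorandom +/-1 measurement patterns from a seed."""
    rng = np.random.default_rng(seed)
    return rng.choice([-1.0, 1.0], size=(M, N))

shared_seed = 42                                   # known to both encoder and decoder (assumed)
Phi_encoder = measurement_patterns(shared_seed, M=200, N=1024)
Phi_decoder = measurement_patterns(shared_seed, M=200, N=1024)
assert np.array_equal(Phi_encoder, Phi_decoder)    # decoder rebuilds Phi exactly from the seed
```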

  12. Compressive Imaging Test Bed
  • Micro-actuated mirrors: a commercially viable MEMS technology for the video/projector display market as well as laser systems and telescope optics; here, the Texas Instruments (TI) digital micromirror device (DMD).
  • TI DMD developer's kit and accessory light modulator package (ALP).
  • The DMD consists of an array of electrostatically actuated micro-mirrors, where each mirror in the array is suspended above an individual SRAM cell. The DMD micro-mirrors form a pixel array of size 1024 × 768.
  • Each mirror rotates about a hinge and can be positioned in one of two states (+12 degrees and −12 degrees from horizontal); thus light falling on the DMD may be reflected in two directions depending on the orientation of the mirrors.
  • The desired image is formed on the DMD plane with the help of a biconvex lens; this image acts as an object for the second biconvex lens, which focuses the image onto the photodiode.
  • The light is collected from one of the two directions in which it is reflected (e.g., the light reflected by mirrors in the +12-degree state). The light from a given configuration of the DMD mirrors is summed at the photodiode to yield an absolute voltage, which gives one coefficient y(m) for that configuration. The output is amplified through an op-amp circuit and then digitized by a 12-bit ADC. The user decides how many measurements to take (M).
  • The object is illuminated by an LED light source at 1 kHz.
  • Compressed sensing aspects:
  • The system directly acquires a reduced set of M incoherent projections of an N-pixel image x without first acquiring the N pixel values.
  • The mirror positions (+12 or −12 degrees) are random but programmable, so we can decide on the pattern (e.g., Rademacher).
  • Mirrors can be programmed to stay in a position longer, producing better resolution. Assumption: the object is stationary!
  • Because the light is summed at the photodiode, each measurement multiplexes several pixel values (the DMD is activated in blocks); the CS reconstruction algorithm extracts them. A small simulation of this measurement loop follows.
  From D. Takhar et al., A New Compressive Imaging Camera Architecture Using Optical-Domain Compression.
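A minimal simulation of the measurement loop described on this slide (pixel count, pattern statistics, and ADC model are all simplified assumptions): each DMD configuration tilts a pseudorandom subset of mirrors toward the photodiode, which sums the reflected light into a single value y(m), and M such configurations replace N individual pixel readings.

```python
# Simplified simulation of single-pixel DMD measurements (not the authors' code).
import numpy as np

rng = np.random.default_rng(5)
n_pixels = 64 * 64                                   # image resolution used here (real DMD is 1024 x 768)
M = 1600                                             # number of mirror configurations chosen by the user

x = rng.random(n_pixels)                             # stand-in for the (stationary) scene intensities
patterns = rng.integers(0, 2, size=(M, n_pixels))    # 1 = mirror at +12 deg (toward photodiode), 0 = -12 deg

y = patterns @ x                                     # photodiode sums the selected pixels per configuration
y_digitized = np.round(y / y.max() * (2**12 - 1))    # crude stand-in for the 12-bit ADC
print(y_digitized.shape)                             # M measurements instead of n_pixels pixel values
```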

  13. [Block diagram: single-pixel compressive sensing camera. Scene → digital micromirror device (DMD) array, driven by a random number generator → photodiode → A/D → bitstream → reconstruction → image.]
  Major challenges: (1) acquisition speed, (2) DSP processing speed.
  (a) Conventional digital camera image of a soccer ball. (b) 64 × 64 black-and-white image x of the same ball (N = 4,096 pixels) recovered from M = 1,600 random measurements taken by the single-pixel camera. The images in (a) and (b) are not meant to be aligned.
  From IEEE Signal Processing Magazine, July 2007, R. G. Baraniuk, Compressive Sensing.

  14. Experimental Results
  • Sources of noise: (1) nonlinearities in the photodiode, (2) non-uniform reflectance of the mirrors through the lens focused onto the photodiode (changing the weighting of the pattern blocks), and (3) non-uniform mirror positions.
  • Robustness of the CS reconstruction algorithm: it suppresses quantization noise from the ADC and photodiode circuit noise.

  15. Notes
  • RB, on the TI hardware: a 10-megapixel camera can be used to get the resolution of a 25-megapixel picture! CS is scalable!
  • The classical approach takes the whole picture at intervals (or as a stream in video). How fast is signal reconstruction in CS? Can it keep up with a moving target? In the classical approach the records are taken and, after the event, zooming in remains possible. CS never takes ALL of the information and relies on a random approach that statistically gives good results, but this may not be acceptable in some applications.
  • However, in D. Takhar et al. there is the possibility of zooming in on a part of the image, as opposed to acquiring the whole image and then extracting the piece of interest: the scene can be adaptively sectioned to highlight a region.
  • RB: just choose the measurement matrix to have a random iid Gaussian distribution.
  • Michael Lustig: just choosing samples at random is not a good idea, because the distances between samples are not preserved. Globally, such a sampling scheme has uniform density, but locally you will get high-density areas and 'holes', and these holes really mess up the receiver array as a source of redundancy. Using a Poisson-disk sampling distribution provides locally uniform sampling so that the redundancy is maximally exploited.
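The Poisson-disk remark can be illustrated with a few lines of dart throwing (a simplified 1-D stand-in, not Lustig's actual sampling-design code): candidate sample locations are accepted only if they keep a minimum spacing r_min, which rules out the local clusters and holes of plain uniform random sampling.

```python
# Simplified 1-D dart-throwing sketch of Poisson-disk sampling.
import numpy as np

def poisson_disk_1d(n_samples, r_min, seed=6, max_tries=100_000):
    """Pick locations in [0, 1) that are at least r_min apart (simple rejection sampling)."""
    rng = np.random.default_rng(seed)
    points = []
    for _ in range(max_tries):
        cand = rng.random()
        if all(abs(cand - p) >= r_min for p in points):
            points.append(cand)
            if len(points) == n_samples:
                break
    return np.sort(np.array(points))

samples = poisson_disk_1d(n_samples=50, r_min=0.01)
gaps = np.diff(samples)
print(f"min gap {gaps.min():.4f} (>= r_min by construction), max gap {gaps.max():.4f}")
```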
