Sparse Signals Reconstruction Via Adaptive Iterative Greedy Algorithm


Sparse Signals Reconstruction Via Adaptive Iterative Greedy Algorithm
Ahmed Aziz, Ahmed Salim, Walid Osamy

Presenter: 張庭豪

International Journal of Computer Applications

March 2014



Outline

  • INTRODUCTION

  • RELATED RESEARCH

  • PROBLEM DESCRIPTIONS

  • ENHANCED ORTHOGONAL MATCHING PURSUIT (E-OMP)

  • SIMULATION RESULTS

  • CONCLUSION



INTRODUCTION

  • The CS technique directly acquires a compressed signal representation of length M for a length-N signal, with M < N.

  • Consider a one-dimensional, real-valued, discrete-time signal x ∈ R^N of length N with S << N, where S is the number of nonzero entries (the sparsity level) of the signal x.



INTRODUCTION

  • The measurements are taken as y = Φx, where y is an M × 1 vector with M < N, and Φ represents the M × N sampling matrix.

  • If x is sparse enough, in the sense that there are very few (S) non-zero components in x, then exact reconstruction is possible; a minimal sketch of this acquisition model follows.
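As a concrete illustration of this measurement model, here is a minimal Python sketch; the sizes N, M, S and the Gaussian sampling matrix are illustrative assumptions, not values taken from the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative sizes: signal length, number of measurements, sparsity
    N, M, S = 256, 64, 8

    # S-sparse signal: S nonzero entries at random positions
    x = np.zeros(N)
    support = rng.choice(N, size=S, replace=False)
    x[support] = rng.standard_normal(S)

    # Random Gaussian sampling matrix, a common choice in CS
    Phi = rng.standard_normal((M, N)) / np.sqrt(M)

    # Compressed measurements: an M x 1 vector with M < N
    y = Phi @ x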



RELATED RESEARCH

  • Historically, Matching Pursuit (MP) is the first greedy pursuit.

  • MP expands the support set T (the set of columns selected from Φ for representing y) of x by one dictionary atom per iteration, i.e., the column of the dictionary that has the highest inner product with the residue at each iteration (see the sketch below).

  • A major drawback of MP is that it does not take into account the non-orthogonality of the dictionary, which results in suboptimal choices of the nonzero coefficients.
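A minimal sketch of the MP iteration just described, assuming the standard recursion in which the selected atom's contribution is subtracted from the residue without any orthogonalization (exactly the weakness noted above):

    import numpy as np

    def mp(Phi, y, n_iter):
        # Matching Pursuit: at each iteration pick the atom with the
        # highest inner product with the residue and subtract its
        # single-atom contribution; no orthogonalization is performed.
        M, N = Phi.shape
        x_hat = np.zeros(N)
        r = y.copy()
        for _ in range(n_iter):
            c = Phi.T @ r                          # correlations with residue
            j = int(np.argmax(np.abs(c)))          # best-matching atom
            coef = c[j] / np.dot(Phi[:, j], Phi[:, j])
            x_hat[j] += coef                       # the same atom may recur
            r = r - coef * Phi[:, j]
        return x_hat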



RELATED RESEARCH

  • The non-orthogonality of dictionary atoms is taken into account by Orthogonal Matching Pursuit (OMP).

  • OMP applies the greedy algorithm to pick the columns of Φ by finding the largest correlation between Φ and the residual of y; this process is called the forward step.

  • ROMP groups inner products with similar magnitudes into sets at each iteration and selects the set with maximum energy, as sketched below.
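A sketch of one ROMP selection step as described in the last bullet. The factor-of-2 comparability rule follows the usual ROMP formulation in the literature; treat the details as an illustration rather than as this paper's exact procedure.

    import numpy as np

    def romp_select(c, S):
        # Among the S largest-magnitude correlations, form groups whose
        # magnitudes are within a factor of 2 of each other and return
        # the group with the largest energy.
        idx = np.argsort(np.abs(c))[::-1][:S]      # S largest magnitudes
        mags = np.abs(c[idx])                      # sorted descending
        best, best_energy = idx[:1], 0.0
        for i in range(len(idx)):
            j = i
            while j < len(idx) and mags[i] <= 2 * mags[j]:
                j += 1                             # extend comparable run
            energy = float(np.sum(mags[i:j] ** 2))
            if energy > best_energy:
                best, best_energy = idx[i:j], energy
        return best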



PROBLEM DESCRIPTIONS

  • OMP is a forward algorithm that builds up an approximation one step at a time by making locally optimal choices at each step.

  • At iteration k, OMP appends to T the index of the dictionary atom closest to r_{k-1}, i.e., it selects the index of the largest-magnitude entry of c_k = Φ^T r_{k-1}.

  • The projection coefficients are computed by the orthogonal projection: W = (Φ_T^T Φ_T)^{-1} Φ_T^T y, where Φ_T denotes the columns of Φ indexed by T.

  • At the end of each iteration, the residue is updated as r_k = y - Φ_T W. These steps are sketched in code below.
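Putting the three steps together, a compact OMP sketch, with numpy's least-squares solver playing the role of the orthogonal projection:

    import numpy as np

    def omp(Phi, y, S):
        # Orthogonal Matching Pursuit: S forward-selection iterations.
        T = []                                     # support estimate
        r = y.copy()
        for _ in range(S):
            c = Phi.T @ r                          # c_k = Phi^T r_{k-1}
            T.append(int(np.argmax(np.abs(c))))    # forward step
            # Orthogonal projection onto the selected columns
            W, *_ = np.linalg.lstsq(Phi[:, T], y, rcond=None)
            r = y - Phi[:, T] @ W                  # r_k = y - Phi_T W
        x_hat = np.zeros(Phi.shape[1])
        x_hat[T] = W                               # nonzero entries of x
        return x_hat, T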



PROBLEM DESCRIPTIONS

  • These steps are repeated until a halting criterion is satisfied (the number of iterations = S). After termination, T and W contain the support and the corresponding nonzero entries of x, respectively.

  • Simple example: a sensing matrix Φ, an observation y, and a signal with S non-zero components (the slide's concrete numbers appear only as a figure).

  • Given a sensing matrix Φ and an observation vector y, find the best solution to y = Φx with at most S non-zero components, formalized below.
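Formally, this is the standard sparse recovery problem, stated here in LaTeX as a paraphrase rather than a quotation from the paper:

    \min_{x \in \mathbb{R}^{N}} \lVert y - \Phi x \rVert_{2}
    \quad \text{subject to} \quad \lVert x \rVert_{0} \le S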



PROBLEM DESCRIPTIONS

  • Apply the OMP algorithm with the usual initialization parameters (T = ∅, r_0 = y) to solve y = Φx.

  • At iteration k = 1, OMP appends to T the index of the dictionary atom closest to r_{k-1}, i.e., it selects the index of the largest-magnitude entry of C = Φ^T r_0. (Annotation: find the maximum.)



PROBLEM DESCRIPTIONS

  • Therefore, the index of the first column will be added to the set T (a false column selection by OMP), since it is the index of the largest-magnitude entry of C. A hypothetical numerical illustration follows this list.

  • Then OMP forms a new signal approximation by solving a least-squares problem: W = (Φ_T^T Φ_T)^{-1} Φ_T^T y.

  • Finally, OMP updates the residual: r_1 = y - Φ_T W.
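The slide's concrete matrix survives only as a figure, so the following is a hypothetical 2 × 3 example of the same failure mode: column 0 is the normalized sum of columns 1 and 2, so OMP's first correlation step selects it even though the true support is {1, 2}.

    import numpy as np

    # Hypothetical dictionary: column 0 "covers" columns 1 and 2
    Phi = np.array([[1 / np.sqrt(2), 1.0, 0.0],
                    [1 / np.sqrt(2), 0.0, 1.0]])
    x_true = np.array([0.0, 1.0, 1.0])     # S = 2, support {1, 2}
    y = Phi @ x_true                        # y = (1, 1)

    C = Phi.T @ y                           # first-iteration correlations
    print(np.abs(C))                        # [1.414  1.0  1.0]
    print(int(np.argmax(np.abs(C))))        # 0 -> false column selected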



PROBLEM DESCRIPTIONS

  • During the next iterations, the OMP algorithm will add the second column index to the set T.

  • All forward greedy algorithms, such as OMP and other MP variants, enlarge the support estimate iteratively via forward selection steps.

  • As a result, the fundamental problem of all forward greedy algorithms is that they possess no backward removal mechanism: any index that is inserted into the support estimate cannot be removed. So, one or more incorrect elements remaining in the support until termination may cause the reconstruction to fail.



PROBLEM DESCRIPTIONS

  • To avoid this drawback during the forward step, E-OMP selects the columns from Φ by solving a least-squares problem, which increases the probability of selecting the correct columns from Φ.

  • This gives a better approximate solution than using the absolute value of the inner product, as all OMP-type algorithms do. Also, the E-OMP algorithm adds a backtracking step to give itself the ability to cover up the errors made by the forward step.



ENHANCED ORTHOGONAL MATCHING PURSUIT (E-OMP)

  • The E-OMP algorithm is an iterative two-stage greedy algorithm, sketched in code after this list.

  • The first stage of E-OMP is the forward step, which solves a least-squares problem to expand the estimated support set T by columns from Φ at each time.

  • Then, E-OMP computes the orthogonal projection of the observed vector onto the subspace defined by the support estimate set T.

  • Next, the backward step prunes the support estimate by removing columns chosen wrongly in the previous processing, identifying the true support set more accurately.
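The transcript does not preserve the paper's pseudocode, so the following two-stage loop is only a sketch consistent with the description above; the exact forward scoring rule and the prune-to-S backward rule are assumptions, not the authors' algorithm.

    import numpy as np

    def e_omp_sketch(Phi, y, S, Kmax=None):
        M, N = Phi.shape
        Kmax = M if Kmax is None else Kmax         # slide notes Kmax = M
        T, r = [], y.copy()
        for _ in range(Kmax):
            # Forward step (assumed rule): score each unused column by the
            # residue reduction of a one-column least-squares fit, rather
            # than by the absolute inner product alone.
            scores = np.full(N, -np.inf)
            for j in range(N):
                if j in T:
                    continue
                coef = (Phi[:, j] @ r) / (Phi[:, j] @ Phi[:, j])
                scores[j] = np.linalg.norm(r) - np.linalg.norm(r - coef * Phi[:, j])
            T.append(int(np.argmax(scores)))

            # Orthogonal projection onto the current support estimate
            W, *_ = np.linalg.lstsq(Phi[:, T], y, rcond=None)

            # Backward step (assumed rule): prune unreliable columns,
            # i.e. those with the smallest projection coefficients.
            if len(T) > S:
                keep = np.argsort(np.abs(W))[::-1][:S]
                T = [T[i] for i in keep]
                W, *_ = np.linalg.lstsq(Phi[:, T], y, rcond=None)

            r = y - Phi[:, T] @ W
            if np.linalg.norm(r) < 1e-10:          # early exit on exact fit
                break
        x_hat = np.zeros(N)
        x_hat[T] = W
        return x_hat, T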


ENHANCED ORTHOGONAL MATCHING PURSUIT (E-OMP)

(E-OMP pseudocode shown as a figure; the maximum number of iterations is Kmax = M.)



SIMULATION RESULTS

  • In this section the performance of the proposed technique is evaluated

    in comparison to ROMP and OMP.

  • The experiments cover different nonzero coefficient distributions,

    including uniform and Gaussian distributions as well as binary nonzero

    coefficients.

  • Reconstruction accuracy is given in terms of the Average Normalized Mean Squared Error (ANMSE), computed as sketched below.
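ANMSE is commonly computed as the mean over trials of the squared error normalized by the signal energy; the exact normalization used in the paper is not shown in the transcript, so treat this helper as a plausible reading.

    import numpy as np

    def anmse(x_true_list, x_hat_list):
        # Average Normalized Mean Squared Error over a set of trials:
        # mean of ||x - x_hat||^2 / ||x||^2
        errs = [np.linalg.norm(x - xh) ** 2 / np.linalg.norm(x) ** 2
                for x, xh in zip(x_true_list, x_hat_list)]
        return float(np.mean(errs))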



SIMULATION RESULTS

(This set of simulations employs sparse binary vectors, where the nonzero coefficients were set to 1; results shown as a figure.)





CONCLUSION

  • By solving the least-squares problem instead of using the absolute value of the inner product, E-OMP increases the probability of selecting the correct columns from Φ and therefore can give much better approximation performance than all the other tested OMP-type algorithms.

  • E-OMP uses a simple backtracking step to check the reliability of the previously chosen columns and then remove the unreliable columns at each time.

  • To conclude, the demonstrated reconstruction performance of E-OMP indicates that it is a promising approach that is capable of reducing the reconstruction errors significantly.

