Sparse Signals Reconstruction Via Adaptive Iterative Greedy Algorithm
Ahmed Aziz, Ahmed Salim, Walid Osamy
Presenter: 張庭豪
International Journal of Computer Applications, March 2014

Outline
INTRODUCTION
RELATED RESEARCH
PROBLEM DESCRIPTIONS
Compressed sensing acquires a measurement vector of length M for a length-N signal with M < N. The signal x has length N with S << N, where S is the number of nonzero entries, i.e. the sparsity level of the signal x. If the number of measurements is sufficient relative to the number of non-zero components in x, then exact reconstruction is possible.
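The measurement model above can be sketched as follows; the dimensions N, M, S and the Gaussian measurement matrix are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, S = 256, 64, 8          # signal length, measurements, sparsity (assumed values)

# Build an S-sparse signal x of length N: S nonzero entries at random positions.
x = np.zeros(N)
support = rng.choice(N, size=S, replace=False)
x[support] = rng.standard_normal(S)

# Measurement matrix Phi (M x N) with M < N; y = Phi @ x is the
# length-M compressed representation of the length-N signal x.
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
y = Phi @ x
```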
Matching Pursuit (MP) iteratively builds an approximation (a sparse representation of y) of x by selecting, at each iteration, the dictionary atom, i.e. the column of the dictionary, that has the highest inner product with the residue. Its main drawback is the non-orthogonality of the dictionary, which results in suboptimal choices of the nonzero coefficients.
This weakness is addressed by Orthogonal Matching Pursuit (OMP). At each iteration, OMP selects the column having the largest correlation with the residual of y; this process is called the forward step.
ROMP groups candidate columns at each iteration and selects the set with maximum energy.
Greedy algorithms build the solution one step at a time by making locally optimal choices at each step. At iteration k, OMP selects the atom closest to r_{k-1}, i.e. it selects the index of the largest-magnitude correlation, and adds this column index to the support set T. OMP runs for a fixed number of iterations (number of iterations = S), so the sparsity level S is one of the parameters it needs to solve the problem. After termination, T and W contain the support and the corresponding nonzero entries of x, respectively.
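The OMP iteration described above can be sketched as below; the function and variable names are my own, not the paper's notation beyond r, T, W, and S.

```python
import numpy as np

def omp(Phi, y, S):
    """Sketch of Orthogonal Matching Pursuit.

    Runs exactly S iterations: the forward step picks the column of Phi
    with the largest-magnitude correlation with the current residual,
    then the coefficients W are refit by least squares on the support T.
    """
    r = y.copy()                      # residual r_0 = y
    T = []                            # support estimate
    W = np.zeros(0)
    for _ in range(S):
        # Forward step: index of the largest |<phi_j, r_{k-1}>|.
        j = int(np.argmax(np.abs(Phi.T @ r)))
        if j not in T:
            T.append(j)
        # Orthogonal projection: least-squares fit of y on the columns in T.
        W, *_ = np.linalg.lstsq(Phi[:, T], y, rcond=None)
        r = y - Phi[:, T] @ W
    return T, W
```

With an identity measurement matrix the forward step simply picks the largest entries of y, which makes the behavior easy to check by hand.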
is that they possess no backward removal mechanism: any index that is inserted into the support estimate cannot be removed. Hence one or more incorrect elements may remain in the support until termination, which can cause the reconstruction to fail.
E-OMP selects columns from the measurement matrix by solving a least squares problem, which increases the probability of selecting the correct columns, rather than relying on the absolute value of the inner product as all OMP-type algorithms do. Also, the E-OMP algorithm adds a backtracking step to give itself the ability to correct the errors made by the forward step.
E-OMP selects multiple columns from the measurement matrix at each time, then projects the measurement vector onto the subspace defined by the support estimate set T. The backward step removes columns chosen wrongly in previous iterations, so the true support set is identified more accurately.
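A backward (backtracking) pruning step of the general shape described above can be sketched as follows; this is an assumed illustration of the idea, and the exact selection rule used by E-OMP may differ.

```python
import numpy as np

def backward_prune(Phi, y, T, S):
    """Sketch of a backward pruning step on an enlarged support T (|T| >= S).

    Projects y onto the subspace spanned by the columns in T via least
    squares, keeps the S indices with the largest coefficient magnitudes,
    and refits, discarding columns chosen wrongly in earlier iterations.
    """
    T = list(T)
    w, *_ = np.linalg.lstsq(Phi[:, T], y, rcond=None)   # project y onto span{Phi[:, T]}
    keep = np.argsort(np.abs(w))[-S:]                   # S largest-magnitude entries
    T_new = [T[i] for i in sorted(keep)]
    w_new, *_ = np.linalg.lstsq(Phi[:, T_new], y, rcond=None)
    return T_new, w_new
```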
The reconstruction performance of E-OMP is evaluated in comparison to ROMP and OMP over sparse signals with different nonzero coefficient distributions, including uniform and Gaussian distributions as well as binary nonzero entries, using the Average Normalized Mean Squared Error (ANMSE). (One set of simulations employs sparse binary vectors, where the nonzero coefficients were set to 1.)
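The ANMSE metric can be computed as sketched below, using the common definition (mean over trials of the squared reconstruction error normalized by the signal energy); the paper may normalize slightly differently.

```python
import numpy as np

def anmse(x_true, x_est):
    """Average Normalized Mean Squared Error over a batch of trials.

    Each row is one trial: mean over trials of ||x - x_hat||^2 / ||x||^2.
    """
    x_true = np.atleast_2d(np.asarray(x_true, dtype=float))
    x_est = np.atleast_2d(np.asarray(x_est, dtype=float))
    num = np.sum((x_true - x_est) ** 2, axis=1)   # per-trial squared error
    den = np.sum(x_true ** 2, axis=1)             # per-trial signal energy
    return float(np.mean(num / den))
```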
By selecting columns through least squares rather than the absolute value of the inner product, E-OMP increases the probability of selecting the correct columns from the measurement matrix and can therefore achieve much better approximation performance than all the other tested OMP-type algorithms.