We consider the minimization of the number of non-zero coefficients (the ℓ0 “norm”) of the representation of a data set in a general dictionary under a fidelity constraint. This (nonconvex) optimization problem leads to the sparsest approximation. The average performance of the model is measured by the probability (over the data) of obtaining a K-sparse solution (involving at most K nonzero components) from data uniformly distributed on a domain. These probabilities are expressed in terms of the parameters of the model and the accuracy of the approximation. We comment on the obtained formulas and give a simulation.
There is also the presentation Average performance of the sparsest approximation in terms of an orthogonal basis by Mila Nikolova, a joint work with Francois Malgouyres.
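In the orthonormal case, the sparsest approximation under a fidelity constraint reduces to hard thresholding of the analysis coefficients: keep the largest coefficients until the residual error drops below the tolerance. Here is a small Python sketch of that computation (my own illustration with a random basis and a tolerance tau, not the authors' code):

```python
import numpy as np

# Hypothetical illustration: in an orthonormal basis, the sparsest
# approximation of d under ||d - approx||_2 <= tau keeps the largest
# analysis coefficients until the residual energy falls below tau^2.
def sparsest_approximation(d, basis, tau):
    coeffs = basis.T @ d                       # analysis coefficients
    order = np.argsort(np.abs(coeffs))[::-1]   # largest magnitudes first
    x = np.zeros_like(coeffs)
    residual_energy = float(np.sum(coeffs ** 2))
    for k in order:
        if residual_energy <= tau ** 2:        # fidelity constraint met
            break
        x[k] = coeffs[k]
        residual_energy -= coeffs[k] ** 2
    return x

rng = np.random.default_rng(0)
basis, _ = np.linalg.qr(rng.standard_normal((64, 64)))  # random orthonormal basis
d = rng.standard_normal(64)
x = sparsest_approximation(d, basis, tau=0.5)
print("K =", np.count_nonzero(x), "error =", np.linalg.norm(d - basis @ x))
```

The count of nonzeros returned by such a run is exactly the K whose probability distribution the paper characterizes.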
Here are some accepted papers for ICASSP 09. The first one is Thresholded Smoothed-L0 (SL0) Dictionary Learning for Sparse Representations by Hadi Zayyani, Massoud Babaie-Zadeh and Remi Gribonval. The abstract reads:
In this paper, we suggest using a modified version of the Smoothed-ℓ0 (SL0) algorithm in the sparse representation step of iterative dictionary learning algorithms. In addition, we use a steepest descent for updating the non-unit-column-norm dictionary instead of a unit-column-norm dictionary. Moreover, to perform the dictionary learning task more blindly, we estimate the average number of active atoms in the sparse representation of the training signals, whereas previous algorithms assumed that it is known in advance. Our simulation results show the advantages of our method over K-SVD in terms of complexity and performance.
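For readers unfamiliar with SL0, here is a rough Python sketch of the core smoothed-ℓ0 iteration of Mohimani, Babaie-Zadeh and Jutten: a gradient step on a Gaussian surrogate of the ℓ0 norm, a projection back onto the constraint set, and an annealing of sigma. It does not implement the thresholded variant of this paper, and the step sizes and schedule below are my own guesses:

```python
import numpy as np

# Sketch of plain SL0 (not the thresholded variant): approximate the l0 norm
# by n - sum(exp(-x^2 / (2 sigma^2))), take gradient steps on the surrogate,
# project back onto {x : A x = b}, and decrease sigma.
def sl0(A, b, sigma_decrease=0.5, mu=2.0, n_sigma=10, n_inner=3):
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ b                       # minimum-l2-norm feasible start
    sigma = 2.0 * np.max(np.abs(x))
    for _ in range(n_sigma):
        for _ in range(n_inner):
            delta = x * np.exp(-x ** 2 / (2 * sigma ** 2))  # surrogate gradient
            x = x - mu * delta
            x = x - A_pinv @ (A @ x - b)                    # project onto A x = b
        sigma *= sigma_decrease          # anneal toward the true l0 objective
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 50))
x_true = np.zeros(50)
x_true[rng.choice(50, 3, replace=False)] = rng.standard_normal(3)
x_hat = sl0(A, A @ x_true)
print("recovery error:", np.linalg.norm(x_hat - x_true))
```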
I look forward to the implementation of ATH-SL0 (Amplitude THresholded SL0) and KTH-SL0 (KTHresholded SL0). I am adding this paper to the list of sparse dictionaries in the big picture.
From the Rice Compressive Sensing repository, we have a new paper entitled Reconstruction in Compressive Sensing Using Affine Scaling Transformations with Variable-p Diversity Measure by Rufino J. Domínguez, Sergio D. Cabrera, J. Gerardo Rosiles and Javier Vega-Pineda. The abstract reads:
The Affine Scaling Transformation (AST) family of algorithms can solve the minimization of the ℓ_p (p ≤ 1) norm-like diversity measure for an underdetermined linear inverse problem. The AST algorithms can therefore be used to solve the sparse signal recovery problem that arises in Compressive Sensing. In this paper, we continue to investigate the application of the iterative AST family of algorithms with a dynamical adjustment of the p parameter to improve convergence speed and signal recovery accuracy. In our previous work we experimentally determined that any p in [0, 1] can give the sparse solution when exact recovery is possible; however, the behavior of the algorithm is highly dependent on this parameter. In addition, the best-approximation error, for those cases where exact recovery is not possible, is also highly dependent on p. Using various criteria, we propose and evaluate some strategies to vary the value of p as a function of the iteration in the AST algorithm. The goal in these strategies for a variable-p AST algorithm is to capture the benefits of the p=0 and the p=1 fixed-p approaches simultaneously.
This method treats the index p of the semi-norm ℓ_p as a variable. This is very new and very astute. Here is a figure showing the distribution of the p values used during a computation; as one can see, many iterations occur in the region close to p = 0.
This graph is from a previous paper: Sergio D. Cabrera, Rufino Dominguez, J. Gerardo Rosiles, and Javier Vega-Pineda, Variable-p affine scaling transformation algorithms for improved compressive sensing.
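To give an idea of what a variable-p scheme can look like in practice, here is a hedged Python sketch that uses iteratively reweighted least squares (IRLS) as a stand-in for the authors' AST iteration, which I do not reproduce: the weights approximate the ℓ_p diversity measure, and the schedule simply decreases p from 1 toward 0 over the iterations (the schedule and constants are my own choices):

```python
import numpy as np

# IRLS stand-in for a variable-p diversity minimizer: the weighted
# least-norm step with weights (x_i^2 + eps)^(1 - p/2) approximates
# minimizing sum |x_i|^p subject to A x = b; p is annealed from 1 to 0.
def variable_p_irls(A, b, n_iter=30, eps=1e-8):
    x = np.linalg.pinv(A) @ b
    for k in range(n_iter):
        p = max(0.0, 1.0 - k / (n_iter - 1))           # schedule: p goes 1 -> 0
        D = np.diag((x ** 2 + eps) ** (1.0 - p / 2.0)) # IRLS weights for l_p
        x = D @ A.T @ np.linalg.solve(A @ D @ A.T, b)  # weighted least-norm step
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((15, 40))
x_true = np.zeros(40)
x_true[rng.choice(40, 4, replace=False)] = 1.0
print("error:", np.linalg.norm(variable_p_irls(A, A @ x_true) - x_true))
```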
On a different note, there is another ICASSP paper entitled: A Low Complexity Orthogonal Matching Pursuit for Sparse Signal Approximation with Shift-Invariant Dictionaries, by Boris Mailhé, Remi Gribonval, Frederic Bimbot, and Pierre Vandergheynst. The abstract reads:
We propose a variant of Orthogonal Matching Pursuit (OMP), called LoCOMP, for scalable sparse signal approximation. The algorithm is designed for shift-invariant signal dictionaries with localized atoms, such as time-frequency dictionaries, and achieves approximation performance comparable to OMP at a computational cost similar to Matching Pursuit. Numerical experiments with a large audio signal show that, compared to OMP and Gradient Pursuit, the proposed algorithm runs more than 500 times faster while leaving the approximation error almost unchanged.
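For reference, here is plain-vanilla OMP in Python; LoCOMP's savings come from exploiting the shift-invariant, localized structure of the dictionary, which this generic sketch does not attempt:

```python
import numpy as np

# Baseline OMP: greedily pick the atom most correlated with the residual,
# then re-fit all selected atoms by least squares (orthogonal projection).
def omp(D, y, k):
    """Select k atoms from dictionary D (unit-norm columns) to approximate y."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))  # best atom
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs   # orthogonal re-projection
    x = np.zeros(D.shape[1])
    x[support] = coeffs
    return x

rng = np.random.default_rng(3)
D = rng.standard_normal((30, 100))
D /= np.linalg.norm(D, axis=0)                  # unit-norm atoms
x_true = np.zeros(100)
x_true[rng.choice(100, 4, replace=False)] = 1.0
print("error:", np.linalg.norm(omp(D, D @ x_true, 4) - x_true))
```

The least-squares re-fit at every step is exactly what makes OMP expensive; LoCOMP's contribution is to get comparable accuracy while keeping the cost near that of plain Matching Pursuit.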
Pierre Vandergheynst also has a presentation on compressed sensing from his perspective.
Laurent Duval mentioned to me the presentation by Thomas Capricelli entitled Compressed Sensing and Tomography. The author also recently completed his thesis entitled: Generalized convex projection algorithms and applications to medical imaging. The abstract of the thesis reads:
This thesis focuses on the use of generalized projection operators in convex optimization algorithms and on their applications to medical imaging. We describe various extensions of the notion of a projection onto a closed convex set in a Hilbert space. These include subgradient projectors, proximity operators, and Bregman projectors. We propose a new generalized projection algorithm for projecting onto a countable intersection of closed convex sets and prove its convergence. This contribution unifies several existing results on the convergence of projection methods in Hilbert spaces. We then study original applications of projection methods in medical imaging. We propose a new strategy to incorporate Poisson noise in continuous tomography, as well as a new approach to discrete tomography via convex programming and total variation. We also discuss the connections between total variation, compressive sensing, and tomographic reconstruction. Finally, we present what seem to be the first numerical results on the use of Bregman distances in projection algorithms. The software tools that have been developed to carry out this work have been designed so as to be made available to the scientific community.
The thesis is in French; Chapter 5 discusses the compressed sensing aspects of tomography. I note the following at the end of that chapter, and a similar statement at the end of the presentation:
The experiment from Candes, Romberg and Tao must be understood as a reconstruction from a sub-sampling of the Fourier domain, without noise and using a mask adapted to simple signals.
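To give a flavor of the projection methods the thesis generalizes, here is a toy POCS (projections onto convex sets) sketch in Python, alternating projections onto a hyperplane and the nonnegative orthant; the thesis treats far more general subgradient, proximity and Bregman projectors, none of which appear in this minimal example:

```python
import numpy as np

# Toy POCS: alternate exact projections onto two closed convex sets,
# a hyperplane {x : <a, x> = c} and the nonnegative orthant. If the
# intersection is nonempty, the iterates converge to a point in it.
def project_hyperplane(x, a, c):
    return x - (a @ x - c) / (a @ a) * a

def pocs(a, c, x0, n_iter=100):
    x = x0
    for _ in range(n_iter):
        x = project_hyperplane(x, a, c)  # projection onto the hyperplane
        x = np.maximum(x, 0.0)           # projection onto the orthant
    return x

a = np.array([1.0, -2.0, 3.0])
c = 4.0
x = pocs(a, c, np.array([-1.0, 5.0, 0.0]))
print(x, "constraint residual:", a @ x - c)
```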
Eventually, this really raises the problem of having good benchmark cases against which people can judge their reconstruction algorithms and implementations. As a community, we probably need to add more problem cases to SPARCO. I haven't talked to Michael Friedlander or Ewout van den Berg about this issue, but I am sure they are looking forward to good benchmarks to add to the problems section. In particular, there is a unique opportunity for hardware developers to provide some of these benchmarks...wink, wink :-) Also, let me restate this again: SPARCO is solver agnostic.
While we are on the subject of tomography, here is another paper on quantum tomography; I am not quite sure what it entails, but it sounds fascinating:
For an initially well designed but imperfect quantum information system, the process matrix is almost sparse in an appropriate basis. Existing theory and associated computational methods (ℓ1-norm minimization) for reconstructing sparse signals establish conditions under which the sparse signal can be perfectly reconstructed from a very limited number of measurements (resources). Although a direct extension to quantum process tomography of the ℓ1-norm minimization theory has not yet emerged, the numerical examples presented here, which apply ℓ1-norm minimization to quantum process tomography, show a significant reduction in resources to achieve a desired estimation accuracy over existing methods.
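The ℓ1-norm minimization step mentioned in the abstract, basis pursuit (min ||x||_1 subject to Ax = b), can be cast as a linear program. Here is a small Python sketch of that standard reformulation (x = u - v with u, v ≥ 0); it says nothing about the quantum-specific basis and measurement model of the paper:

```python
import numpy as np
from scipy.optimize import linprog

# Basis pursuit as a linear program: split x into nonnegative parts u, v
# so that ||x||_1 = sum(u) + sum(v) and A x = b becomes A u - A v = b.
def basis_pursuit(A, b):
    m, n = A.shape
    cost = np.ones(2 * n)                # objective: sum(u) + sum(v)
    A_eq = np.hstack([A, -A])            # equality constraint A u - A v = b
    res = linprog(cost, A_eq=A_eq, b_eq=b, bounds=(0, None))
    return res.x[:n] - res.x[n:]

rng = np.random.default_rng(4)
A = rng.standard_normal((25, 60))
x_true = np.zeros(60)
x_true[rng.choice(60, 3, replace=False)] = 1.0
print("error:", np.linalg.norm(basis_pursuit(A, A @ x_true) - x_true))
```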
On a totally unrelated note, the University of Waterloo is seeking an assistant professor in the sub-area of sparse sampling, compressed sensing. It is listed in Compressive Sensing Jobs section.