Tuesday, November 13, 2007

Sparsity: What is it good for?

The solution set of an underdetermined system of linear equations is generally too large (it is an entire affine subspace rather than a unique solution). When you have too many unknowns and not enough equations, experience tells you to wait for additional equations or constraints before you can hope for a unique solution. In Compressed Sensing, sparsity of the solution is the additional constraint that makes it possible to single out a unique solution.
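
To make this concrete, here is a minimal sketch (my own toy example, not from the paper below) of how l1 minimization (basis pursuit) singles out the sparse solution of an underdetermined system; the matrix sizes and the solver choice are just illustrative assumptions:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n = 30, 100                      # 30 equations, 100 unknowns: underdetermined
A = rng.standard_normal((m, n))     # random measurement matrix
x_true = np.zeros(n)
x_true[rng.choice(n, 4, replace=False)] = rng.standard_normal(4)   # 4-sparse solution
b = A @ x_true                      # infinitely many x satisfy A x = b ...

# ... but basis pursuit, min ||x||_1 subject to A x = b, picks out the sparse one.
# Written as a linear program in the split variables (x+, x-) >= 0:
c = np.ones(2 * n)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b,
              bounds=[(0, None)] * (2 * n), method="highs")
x_hat = res.x[:n] - res.x[n:]

print("largest recovery error:", np.max(np.abs(x_hat - x_true)))   # essentially zero
```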

But sparsity can also be used for other purposes, as shown in this excellent study by Jerome Bobin, Jean-Luc Starck, Jalal Fadili and Yassir Moudden, Sparsity and Morphological Diversity in Blind Source Separation, where their approach "takes advantage of this “morphological diversity” to differentiate between the sources with accuracy". Evidently, the reconstruction techniques parallel those found in Compressed Sensing. The abstract reads:

Over the last few years, the development of multi-channel sensors motivated interest in methods for the coherent processing of multivariate data. Some specific issues have already been addressed as testified by the wide literature on the so-called blind source separation (BSS) problem. In this context, as clearly emphasized by previous work, it is fundamental that the sources to be retrieved present some quantitatively measurable diversity. Recently, sparsity and morphological diversity have emerged as a novel and effective source of diversity for BSS. We give here some new and essential insights into the use of sparsity in source separation and we outline the essential role of morphological diversity as being a source of diversity or contrast between the sources. This paper introduces a new BSS method coined Generalized Morphological Component Analysis (GMCA) that takes advantages of both morphological diversity and sparsity, using recent sparse overcomplete or redundant signal representations. GMCA is a fast and efficient blind source separation method. We present arguments and a discussion supporting the convergence of the GMCA algorithm. Numerical results in multivariate image and signal processing are given illustrating the good performance of GMCA and its robustness to noise.

The GMCALab matlab code for the implementation of this algorithm can be found here.
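
As a rough illustration of the morphological diversity idea (a single-channel toy of my own, not the GMCA algorithm of the paper and unrelated to GMCALab), one can separate a spiky component from an oscillatory one by alternately thresholding in the dictionaries where each component is sparse; the sizes and thresholds below are arbitrary assumptions:

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(0)
n = 256

# Component 1: a few large spikes (sparse in the Dirac/identity dictionary)
spikes = np.zeros(n)
spikes[rng.choice(n, 5, replace=False)] = rng.choice([-1, 1], 5) * rng.uniform(5, 10, 5)

# Component 2: a few cosines (sparse in the DCT dictionary)
coeffs = np.zeros(n)
coeffs[[3, 7, 12]] = [4.0, -3.0, 2.0]
smooth = idct(coeffs, norm="ortho")

x = spikes + smooth                        # the observed mixture

# Alternate hard thresholding in each dictionary with a decreasing threshold
est_spikes, est_smooth = np.zeros(n), np.zeros(n)
for lam in np.linspace(4.0, 0.5, 60):
    r = x - est_smooth                     # what the spiky part should explain
    est_spikes = np.where(np.abs(r) > lam, r, 0.0)
    r = dct(x - est_spikes, norm="ortho")  # what the smooth part should explain
    est_smooth = idct(np.where(np.abs(r) > lam, r, 0.0), norm="ortho")

# Both relative errors should come out small on this toy example
print("relative error, spiky part :", np.linalg.norm(est_spikes - spikes) / np.linalg.norm(spikes))
print("relative error, smooth part:", np.linalg.norm(est_smooth - smooth) / np.linalg.norm(smooth))
```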

5 comments:

Anonymous said...

What is the relation between the coherence and the sampling method used? From the definition, coherence is a measure of similarity between the measurement and sparsity bases (e.g., Fourier as the measurement basis and wavelets as the sparsity basis). Does coherence incorporate the sampling pattern we use in our measurements (random, lattice)? From the definition, it is just a similarity measure between two bases (nothing to do with the sampling pattern).

Igor said...

Anonymous,

Coherence is just a similarity measure between the two bases, but it does affect the "sampling pattern".

When the sparsity basis and the measurement basis are as incoherent as possible, the sampling pattern, or to be precise the number of measurements needed, depends only on the sparsity of the signal.

If they are less than maximally incoherent, then one needs more measurements.

If the two bases are not incoherent with each other (say Fourier as measurement and Fourier as sparsity), then you are back to sampling at the Nyquist rate.
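
As a quick numerical check (a toy sketch using the discrete Fourier and Dirac bases as an example), the coherence mu(Phi, Psi) = sqrt(n) * max |<phi_k, psi_j>| ranges from 1 (maximally incoherent, fewest measurements) to sqrt(n) (maximally coherent, Nyquist-rate sampling):

```python
import numpy as np

n = 64
F = np.fft.fft(np.eye(n)) / np.sqrt(n)   # unitary discrete Fourier basis (columns)
I = np.eye(n)                             # Dirac / identity basis

def mutual_coherence(Phi, Psi):
    # mu = sqrt(n) * max_{k,j} |<phi_k, psi_j>|, between 1 and sqrt(n)
    return np.sqrt(Phi.shape[0]) * np.max(np.abs(Phi.conj().T @ Psi))

print(mutual_coherence(F, I))   # 1.0           -> maximally incoherent pair
print(mutual_coherence(F, F))   # 8.0 = sqrt(n) -> back to Nyquist-rate sampling
```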

Igor.

Anonymous said...

If I use wavelets as the sparsity basis and Fourier as the measurement basis, I will get some fixed value of the coherence. But with Fourier as the measurement basis, I will get different results if I use a random or a lattice sampling pattern. In the formula for the required number of measurements M (as given in the paper by Candes) for a signal of length N with sparsity S, we have

M > C (mu)^2 S log(N)

where mu is the mutual coherence.
If I use different sampling patterns (lattice and random), the coherence is the same in both cases, which suggests that the number of required measurements M is also the same in both cases.

Igor said...

Anonymous,

I am not sure, but I think you misspoke when you said "...But using fourier as measurement basis...".
Also, it seems to me that you may be confused about lattice versus irregular sampling. Please note that measurements in compressed sensing are linear combinations involving random evaluations of the functions in the measurement basis, not one evaluation of these functions at a time. Therefore, irregular or lattice sampling does not make much sense in that setting. Please let me know if I misunderstood your question.
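
To make the point about sampling patterns concrete, here is a small sketch (my own toy, for the partial-Fourier case you describe): with a regular lattice of frequencies, two different one-spike signals can produce exactly the same measurements, an ambiguity that a random choice of frequencies avoids, even though the coherence mu is the same in both cases.

```python
import numpy as np

n, k = 64, 4
lattice = np.arange(0, n, k)                                # every k-th frequency
rng = np.random.default_rng(1)
random_rows = rng.choice(n, lattice.size, replace=False)    # same count, random frequencies

x1 = np.zeros(n); x1[5] = 1.0                # a single spike at t = 5
x2 = np.zeros(n); x2[5 + n // k] = 1.0       # a single spike at t = 5 + n/k

X1, X2 = np.fft.fft(x1), np.fft.fft(x2)
print(np.allclose(X1[lattice], X2[lattice]))          # True: lattice samples cannot tell them apart
print(np.allclose(X1[random_rows], X2[random_rows]))  # False (with overwhelming probability)
```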

Igor.

Anonymous said...

Using Fourier as the measurement basis means I am acquiring the data in the Fourier domain. The way I acquire the data depends on the sampling pattern used in the Fourier domain. Lattice sampling does not have the randomness which is part and parcel of compressed sensing.
In the case of measurements taken in the Fourier domain with the sparsity transform being the identity, the coherence is the maximum entry of the Fourier matrix. In this case, I am not looking at which samples I am taking or at the under-sampling factor. Does coherence depend on the under-sampling factor?
Please correct me if I am wrong.

Thanks
