Thursday, July 23, 2009

CS: Various thresholds for $\ell_1$-optimization in CS, Radial K-t FOCUSS MRI, BBCS algorithm, block-sparse CS, Non Convex CS, OCT

Early this week, a reader of this blog sent me the following:
Dear Igor,

Recently, I encountered one problem about CS. Someone asked me whether the measurement matrix \Phi should be normalized, and I said yes. But the interesting thing is that when I went back and re-ran the program without normalizing \Phi, it still worked!

In this specific unnormalized case, the RIC of \Phi is definitely larger than 1, so why does it work?

I am a little confused about this; could you provide me with some clues? ....

To which I responded:
I have seen this too. However, if you try different matrices \Phi, recovery will succeed less often without the normalization than with it. Recall also that the RIP is a sufficient condition (not a necessary one), i.e. there are matrices that do not satisfy the RIP yet still allow recovery using LP techniques.
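To make the point concrete, here is a minimal sketch (my own toy example, assuming a Gaussian \Phi and basis pursuit recast as a linear program via SciPy, not anyone's production code) comparing recovery with and without column normalization. Note that for an i.i.d. Gaussian matrix the columns already have nearly equal norms, which is one reason the unnormalized version often works anyway:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, N, k = 40, 100, 5                          # measurements, length, sparsity

Phi = rng.standard_normal((n, N))             # unnormalized Gaussian matrix
Phi_unit = Phi / np.linalg.norm(Phi, axis=0)  # unit-norm columns

x = np.zeros(N)
x[rng.choice(N, size=k, replace=False)] = rng.standard_normal(k)

def basis_pursuit(A, y):
    # min ||x||_1 s.t. Ax = y, as an LP: split x = u - v with u, v >= 0.
    m = A.shape[1]
    res = linprog(np.ones(2 * m), A_eq=np.hstack([A, -A]), b_eq=y,
                  bounds=[(0, None)] * (2 * m))
    return res.x[:m] - res.x[m:]

for name, A in (("unnormalized", Phi), ("normalized", Phi_unit)):
    err = np.linalg.norm(basis_pursuit(A, A @ x) - x)
    print(f"{name}: recovery error = {err:.2e}")
```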
I realize that this may be a detail to some of you, but it is an issue that comes back often. The situation is further compounded by the overuse of the Restricted Isometry Property argument in many engineering papers. A necessary and sufficient condition such as the null space property can be checked computationally for a specific measurement matrix, and this should open the door to other types of investigation. Take as examples the paper featured yesterday by David Donoho, Arian Maleki, and Andrea Montanari entitled Message passing algorithms for compressed sensing, or the following new paper, Various thresholds for $\ell_1$-optimization in compressed sensing by Mihailo Stojnic. The abstract reads:
Recently, \cite{CRT,DonohoPol} theoretically analyzed the success of a polynomial $\ell_1$-optimization algorithm in solving an under-determined system of linear equations. In a large dimensional and statistical context \cite{CRT,DonohoPol} proved that if the number of equations (measurements in the compressed sensing terminology) in the system is proportional to the length of the unknown vector then there is a sparsity (number of non-zero elements of the unknown vector) also proportional to the length of the unknown vector such that $\ell_1$-optimization succeeds in solving the system. In this paper, we provide an alternative performance analysis of $\ell_1$-optimization and obtain the proportionality constants that in certain cases match or improve on the best currently known ones from \cite{DonohoPol,DT}.
A must read, as the techniques developed in this paper have the potential of being applied to the compressible case and the non-convex $\ell_q$-optimization case, unlike the work of David Donoho and Jared Tanner, for instance (see Phase transitions phenomenon in Compressed Sensing).

In a different direction, Jong Chul Ye let me know that the following paper has been accepted: Radial k-t FOCUSS for High-Resolution Cardiac Cine Magnetic Resonance Imaging by Hong Jung, Jaeseok Park, Jaeheung Yoo, and Jong Chul Ye. The abstract reads:

A compressed sensing dynamic magnetic resonance (MR) technique called k-t FOCUSS has been recently proposed. It outperforms the conventional k-t BLAST/SENSE technique by exploiting the sparsity of x-f signals. This paper applies this idea to radial trajectories for high-resolution cardiac cine imaging. Radial trajectories are more suitable for high resolution dynamic MR imaging than Cartesian trajectories since there is a smaller trade-off between spatial resolution and number of views if streaking artifacts due to limited views can be resolved. As shown for Cartesian trajectories, k-t FOCUSS algorithm efficiently removes artifacts while preserving high temporal resolution. k-t FOCUSS algorithm applied to radial trajectories is expected to enhance dynamic MR imaging quality. Rather than using an explicit gridding method, which transforms radial k-space sampling data to Cartesian grid prior to applying k-t FOCUSS algorithms, we use implicit gridding during FOCUSS iterations to prevent k-space sampling errors from being propagated. In addition, motion estimation and motion compensation (ME/MC) after the first FOCUSS iteration were used to further sparsify the residual image. By applying an additional k-t FOCUSS step to the residual image, improved resolution was achieved. In vivo experimental results show that the new method can provide high spatio-temporal resolution even from a very limited radial data set.
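For readers unfamiliar with the FOCUSS iteration at the heart of this work, here is a minimal sketch of the basic Gorodnitsky-Rao reweighted minimum-norm iteration (a toy static version of my own, not the k-t FOCUSS implementation):

```python
import numpy as np

def focuss(A, y, n_iter=30, eps=1e-8):
    # Basic FOCUSS (Gorodnitsky-Rao): repeatedly solve a weighted
    # minimum-norm problem with weights taken from the previous estimate,
    # so large entries survive and small ones are driven to zero.
    x = np.linalg.pinv(A) @ y           # minimum-norm start
    for _ in range(n_iter):
        w = np.abs(x) + eps             # diagonal weighting W
        q = np.linalg.pinv(A * w) @ y   # min ||q|| s.t. (A W) q = y
        x = w * q                       # map back: x = W q
    return x
```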

But more importantly, Jong also mentioned:
Also, we have recently opened a website for our compressed sensing dynamic MRI, from which people can download source code.

Thank you Jong.

We also have a new reconstruction algorithm, the BBCS algorithm, that seems to be as fast as GPSR. It is described in A Barzilai-Borwein $l_1$-Regularized Least Squares Algorithm for Compressed Sensing by Robert Broughton, Ian Coope, Peter Renaud, and Rachael Tappenden. The abstract reads:

Problems in signal processing and medical imaging often lead to calculating sparse solutions to under-determined linear systems. Methodologies for solving this problem are presented as background to the method used in this work where the problem is reformulated as an unconstrained convex optimization problem. The least squares approach is modified by an $l_1$-regularization term. A sparse solution is sought using a Barzilai-Borwein type projection algorithm with an adaptive step length. New insight into the choice of step length is provided through a study of the special structure of the underlying problem. Numerical experiments are conducted and results given, comparing this algorithm with a number of other current algorithms.
[Update: Rachael tells me the code will be up in a month or so.]
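
While waiting for the code, here is a minimal sketch of the general idea (my own toy version using a plain proximal-gradient step with a Barzilai-Borwein step length, not the authors' projection algorithm, and without the safeguards a production solver would need):

```python
import numpy as np

def soft(z, t):
    # Soft-thresholding, the proximal operator of t*||.||_1.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def bb_ista(A, y, lam=0.1, n_iter=200):
    # Proximal gradient for min 0.5*||Ax - y||^2 + lam*||x||_1,
    # with a Barzilai-Borwein (spectral) choice of step length.
    x = np.zeros(A.shape[1])
    g = A.T @ (A @ x - y)                    # gradient of the smooth part
    alpha = 1.0 / np.linalg.norm(A, 2) ** 2  # safe first step
    for _ in range(n_iter):
        x_new = soft(x - alpha * g, alpha * lam)
        g_new = A.T @ (A @ x_new - y)
        s, d = x_new - x, g_new - g
        if s @ d > 1e-12:
            alpha = (s @ s) / (s @ d)        # BB1 step length
        x, g = x_new, g_new
    return x
```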

We also have another paper by Mihailo Stojnic, entitled Block-length dependent thresholds in block-sparse compressed sensing. The abstract reads:
One of the most basic problems in compressed sensing is solving an under-determined system of linear equations. Although this problem seems rather hard, a certain $\ell_1$-optimization algorithm appears to be very successful in solving it. The recent work of \cite{CRT,DonohoPol} rigorously proved (in a large dimensional and statistical context) that if the number of equations (measurements in the compressed sensing terminology) in the system is proportional to the length of the unknown vector then there is a sparsity (number of non-zero elements of the unknown vector) also proportional to the length of the unknown vector such that the $\ell_1$-optimization algorithm succeeds in solving the system. In more recent papers \cite{StojnicICASSP09block,StojnicJSTSP09} we considered the setup of the so-called \textbf{block}-sparse unknown vectors. In a large dimensional and statistical context, we determined sharp lower bounds on the values of allowable sparsity for any given number (proportional to the length of the unknown vector) of equations such that an $\ell_2/\ell_1$-optimization algorithm succeeds in solving the system. The results established in \cite{StojnicICASSP09block,StojnicJSTSP09} assumed a fairly large block-length of the block-sparse vectors. In this paper we consider the block-length to be a parameter of the system. Consequently, we then establish sharp lower bounds on the values of the allowable block-sparsity as functions of the block-length.

Angshul Majumdar released a Matlab toolbox entitled Non Convex Compressed Sensing for Non Gaussian Noise, which solves an optimization of the form min ||x||_p subject to ||y - Ax||_q ≤ ε.
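
As a rough illustration of the non-convex case (my own sketch, not Angshul's toolbox), here is ||x||_p minimization with p < 1 via iteratively reweighted least squares on the noiseless problem min ||x||_p subject to y = Ax:

```python
import numpy as np

def irls_lp(A, y, p=0.7, n_iter=50):
    # Iteratively reweighted least squares for min ||x||_p s.t. y = Ax.
    # Each pass solves a weighted minimum-norm problem in closed form:
    # x = W^{-1} A^T (A W^{-1} A^T)^{-1} y, with W = diag(|x_i|^(p-2)).
    x = A.T @ np.linalg.solve(A @ A.T, y)  # minimum-norm start
    eps = 1.0                              # smoothing, annealed below
    for _ in range(n_iter):
        w = (x**2 + eps) ** (p / 2 - 1)    # smoothed |x_i|^(p-2) weights
        Q = A * (1.0 / w)                  # A @ W^{-1}
        x = (1.0 / w) * (A.T @ np.linalg.solve(Q @ A.T, y))
        eps = max(eps / 10.0, 1e-9)        # anneal toward the true |x|^p
    return x
```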

Finally, Sean O'Connor let me know that he has implemented an algorithm by the name of OCT:

The n-point Walsh-Hadamard transform only requires n*LogBase2(n) additions and subtractions. Since it only uses addition and subtraction, and since the basis vectors are orthogonal, the central limit theorem applies to all its outputs.
You can combine it with random permutations and random sign flipping to do random projections very quickly.

http://code.google.com/p/lemontree/downloads/list
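
As a rough illustration of what Sean describes (my own minimal sketch, not his OCT code), here is a fast Walsh-Hadamard transform combined with random sign flips and a random permutation to form a fast random projection:

```python
import numpy as np

def fwht(x):
    # Fast Walsh-Hadamard transform: n*log2(n) additions and subtractions.
    x = x.copy()
    n = len(x)
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            a = x[i:i + h].copy()
            b = x[i + h:i + 2 * h].copy()
            x[i:i + h] = a + b
            x[i + h:i + 2 * h] = a - b
        h *= 2
    return x

rng = np.random.default_rng(0)
n = 1024                                      # must be a power of two
signs = rng.choice([-1.0, 1.0], size=n)       # random sign flips
perm = rng.permutation(n)                     # random permutation
rows = rng.choice(n, size=64, replace=False)  # random subset of outputs

def fast_random_projection(x):
    # Sign-flip, permute, transform, subsample; the 1/sqrt(n) factor
    # keeps the projection roughly norm-preserving.
    z = fwht((signs * x)[perm]) / np.sqrt(n)
    return z[rows]
```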
Thanks Sean!
