## Monday, November 15, 2010

### CS: Threshold-ISD, Weak Compressed Sensing, Deterministic Matrices, Compressive auto-indexing in femtosecond nanocrystallography, randomized Hadamard transform, Compressive Sensing Microscope and more...

Yilun Wang sent me the following:
Dear Igor,

Wotao and I have developed an algorithm called "Threshold-ISD", which was once mentioned in a Nuit Blanche post.

The algorithm was published in the SIAM Journal on Imaging Sciences, 2010, and the code is available online at http://www.caam.rice.edu/~optimization/L1/ISD/

We hope that more people will become aware of it, test it, and give us feedback so we can further improve it.

Thank you very much.

Yilun

Thanks Yilun. I have added Threshold-ISD to the reconstruction section of the big picture, as I should have done a long time ago. Since the last version was released in June 2010, I hope more people will look at it, as it solves the interesting problem of performing a better reconstruction with the measurements you already have.

In a different direction, do you recall the entry about weak compressive sensing? Well, it looks like I was, to a certain extent, borrowing the idea from other people, in particular from here and here.
They both look at [A I], which seems to suggest that they follow the analysis of the Donoho-Tanner phase transition. An analysis now needs to be performed on how \epsilon changes all that in [\epsilon A   I].
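To experiment with that question numerically, here is a minimal sketch that solves basis pursuit on the augmented matrix [\epsilon A  I] with scipy's LP solver. The scaling \epsilon = 0.5, the dimensions, and the sparse-signal-plus-sparse-error setup are my own illustrative assumptions, not taken from the papers above.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n = 20, 40
eps = 0.5                                     # hypothetical scaling, for illustration
A = rng.standard_normal((m, n)) / np.sqrt(m)

x = np.zeros(n)                               # sparse signal
x[rng.choice(n, 3, replace=False)] = rng.standard_normal(3)
e = np.zeros(m)                               # sparse measurement corruption
e[rng.choice(m, 2, replace=False)] = rng.standard_normal(2)
b = eps * (A @ x) + e

M = np.hstack([eps * A, np.eye(m)])           # the augmented matrix [eps*A  I]

# basis pursuit: min ||z||_1  s.t.  M z = b, via the standard LP split z = u - v
c = np.ones(2 * (n + m))
res = linprog(c, A_eq=np.hstack([M, -M]), b_eq=b, bounds=(0, None))
z = res.x[: n + m] - res.x[n + m:]
x_hat, e_hat = z[:n], z[n:]
print("equality residual:", np.linalg.norm(M @ z - b))
```

The split z = u - v with u, v >= 0 is the standard reduction of l1 minimization to a linear program; how the Donoho-Tanner phase transition moves as \epsilon shrinks is exactly the open question raised above.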

There is a new version of Kronecker Compressive Sensing by Marco Duarte and Richard Baraniuk.

I also found this new presentation entitled Large-Scale Structured Sparse Learning by Jieping Ye.

Compressive auto-indexing in femtosecond nanocrystallography by Filipe R. N. C. Maia, Chao Yang and Stefano Marchesini. The abstract reads:
Ultrafast nanocrystallography has the potential to revolutionize biology by enabling structural elucidation of proteins for which it is possible to grow crystals with 10 or fewer unit cells on the side. The success of nanocrystallography depends on robust orientation-determination procedures that allow us to average diffraction data from multiple nanocrystals to produce a three-dimensional (3D) diffraction data volume with a high signal-to-noise ratio. Such a 3D diffraction volume can then be phased using standard crystallographic techniques. “Indexing” algorithms used in crystallography enable orientation determination of diffraction data from a single crystal when a relatively large number of reflections are recorded. Here we show that it is possible to obtain the exact lattice geometry from a smaller number of measurements than standard approaches using a basis pursuit solver.
Improved analysis of the subsampled randomized Hadamard transform by Joel Tropp. The abstract reads:
This paper presents an improved analysis of a structured dimension-reduction map called the subsampled randomized Hadamard transform. This argument demonstrates that the map preserves the Euclidean geometry of an entire subspace of vectors. The new proof is much simpler than previous approaches, and it offers---for the first time---optimal constants in the estimate on the number of dimensions required for the embedding.
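As a quick illustration of the map the abstract describes, here is a sketch of the subsampled randomized Hadamard transform applied to an orthonormal basis of a subspace. The dimensions and the final check on the singular values are my own illustrative choices, not Tropp's.

```python
import numpy as np
from scipy.linalg import hadamard

def srht(X, k, rng):
    """Subsampled randomized Hadamard transform applied to the rows of X (n -> k)."""
    n = X.shape[0]                                # n must be a power of two
    D = rng.choice([-1.0, 1.0], size=n)           # random sign flips
    H = hadamard(n) / np.sqrt(n)                  # orthonormal Walsh-Hadamard matrix
    rows = rng.choice(n, size=k, replace=False)   # uniform subsampling of k rows
    return np.sqrt(n / k) * (H[rows] @ (D[:, None] * X))

rng = np.random.default_rng(1)
n, d, k = 256, 8, 64
X, _ = np.linalg.qr(rng.standard_normal((n, d)))  # orthonormal basis of a d-dim subspace
Y = srht(X, k, rng)
# if the embedding preserves the subspace geometry, all singular values are near 1
print("singular values:", np.linalg.svd(Y, compute_uv=False))
```

The point of the improved analysis is precisely how small k can be (as a function of d and n) while keeping those singular values close to 1.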
I mentioned that paper before, had some people comment on it on the blog, and Bob explained it a day later. There is now a new version with an added author, so here it is again: Compressed Sensing with Coherent and Redundant Dictionaries by Emmanuel Candes, Yonina Eldar, Deanna Needell, and Paige Randall. The abstract reads:
This article presents novel results concerning the recovery of signals from undersampled data in the common situation where such signals are not sparse in an orthonormal basis or incoherent dictionary, but in a truly redundant dictionary. This work thus bridges a gap in the literature and shows not only that compressed sensing is viable in this context, but also that accurate recovery is possible via an ℓ1-analysis optimization problem. We introduce a condition on the measurement/sensing matrix, which is a natural generalization of the now well-known restricted isometry property, and which guarantees accurate recovery of signals that are nearly sparse in (possibly) highly overcomplete and coherent dictionaries. This condition imposes no incoherence restriction on the dictionary and our results may be the first of this kind. We discuss practical examples and the implications of our results on those applications, and complement our study by demonstrating the potential of ℓ1-analysis for such problems.
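For readers new to the distinction, the ℓ1-analysis problem the abstract refers to minimizes the ℓ1 norm of the analysis coefficients $D^*x$ rather than of a synthesis coefficient vector:

```latex
\hat{x} \;=\; \operatorname*{arg\,min}_{x} \ \|D^{*}x\|_{1}
\quad \text{subject to} \quad \|Ax - b\|_{2} \le \varepsilon,
```

as opposed to the synthesis formulation $\min_\alpha \|\alpha\|_1$ subject to $\|AD\alpha - b\|_2 \le \varepsilon$. For an orthonormal $D$ the two coincide, but for a redundant dictionary they can behave quite differently.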

The max-norm was proposed as a convex matrix regularizer in [1] and was shown to be empirically superior to the trace-norm for collaborative filtering problems. Although the max-norm can be computed in polynomial time, there are currently no practical algorithms for solving large-scale optimization problems that incorporate the max-norm. The present work uses a factorization technique of Burer and Monteiro [2] to devise scalable first-order algorithms for convex programs involving the max-norm. These algorithms are applied to solve huge collaborative filtering, graph cut, and clustering problems. Empirically, the new methods outperform mature techniques from all three areas.
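For reference, the max-norm admits a factorization characterization (with $\|U\|_{2,\infty}$ denoting the largest row $\ell_2$ norm) that the Burer-Monteiro approach exploits by optimizing over the factors directly:

```latex
\|X\|_{\max} \;=\; \min_{X = UV^{T}} \;\|U\|_{2,\infty}\,\|V\|_{2,\infty}.
```

Working with $U$ and $V$ instead of $X$ trades convexity for scalability, which is what makes the huge collaborative filtering and clustering problems mentioned above tractable.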

Abstract. We present and analyze an efficient implementation of an iteratively reweighted least squares algorithm for recovering a matrix from a small number of linear measurements. The algorithm is designed for the simultaneous promotion of both a minimal nuclear norm and an approximately low-rank solution. Under the assumption that the linear measurements fulfill a suitable generalization of the Null Space Property known in the context of compressed sensing, the algorithm is guaranteed to recover iteratively any matrix with an error of the order of the best k-rank approximation. In certain relevant cases, for instance for the matrix completion problem, our version of this algorithm can take advantage of the Woodbury matrix identity, which allows one to expedite the solution of the least squares problems required at each iteration. We present numerical experiments which confirm the robustness of the algorithm for the solution of matrix completion problems, and demonstrate its competitiveness with respect to other techniques proposed recently in the literature.
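The Woodbury matrix identity the abstract mentions is

```latex
(A + UCV)^{-1} \;=\; A^{-1} - A^{-1}U\left(C^{-1} + VA^{-1}U\right)^{-1}VA^{-1},
```

which lets each reweighted least squares step invert a small matrix (of the size of the low-rank update) instead of a large one.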

I have put together a list of preprints and papers on deterministic measurement matrices for compressed sensing. It is here in the Deterministic Compressed Sensing page. There is a new contribution in this area: Deterministic Compressed Sensing Matrices from Multiplicative Character Sequences by Nam Yul Yu. The abstract reads:
Compressed sensing is a novel technique where one can recover sparse signals from undersampled measurements. In this paper, a $K \times N$ measurement matrix for compressed sensing is deterministically constructed via multiplicative character sequences. Precisely, a constant multiple of a cyclic shift of an $M$-ary power residue or Sidelnikov sequence is arranged as a column vector of the matrix, through modulating a primitive $M$-th root of unity. The Weil bound is then used to show that the matrix has asymptotically optimal coherence for large $K$ and $M$, and to present a sufficient condition on the sparsity level for unique sparse solution. Also, the restricted isometry property (RIP) is statistically studied for the deterministic matrix. Numerical results show that the deterministic compressed sensing matrix guarantees reliable matching pursuit recovery performance for both noiseless and noisy measurements.
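As a toy relative of such constructions (not the paper's exact matrix), here is a deterministic matrix built from cyclic shifts of a Legendre (quadratic character) sequence. The prime p and the convention chi(0) = 1 are my own illustrative choices; the coherence printed at the end displays the O(1/sqrt(p)) behavior that Weil-type bounds predict.

```python
import numpy as np

p = 31                                   # a small prime, chosen for illustration
residues = {(k * k) % p for k in range(1, p)}
# quadratic character chi(k): +1 on residues, -1 on non-residues, with chi(0) := 1
chi = np.array([1.0 if (k in residues or k == 0) else -1.0 for k in range(p)])

# columns: normalized cyclic shifts of the character sequence
Phi = np.stack([np.roll(chi, s) for s in range(p)], axis=1) / np.sqrt(p)

G = np.abs(Phi.T @ Phi)                  # pairwise column correlations
np.fill_diagonal(G, 0.0)
print("coherence:", G.max())             # Weil-type bounds give O(1/sqrt(p)) decay
```

Low coherence is what feeds the standard sufficient condition for unique sparse recovery via matching pursuit, which is the route the abstract describes for the full $M$-ary construction.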

In the Matlab Central library, here is a new benchmark for MRI: Compressed Sensing Phantom (v1.0) by David Smith:
Description

CSPHANTOM is a test phantom tailored to compressed sensing MRI algorithm development. It is designed to be non-sparse under a gradient transform and to contain features difficult to reproduce with partial Fourier sampling. We hope that this phantom can be used to evaluate the quality and accuracy of compressed sensing MRI reconstruction algorithms in the noise-free domain so that real-world applications of CS MRI may be improved.

In this paper we present the design and implementation of a Compressive Sensing Microscopy (CSM) imaging system, which uses the Compressive Sensing (CS) method to realize optical-sectioning imaging. The theoretical aspect of the proposed system is investigated using the mathematical model of the CS method and an experimental prototype is constructed to verify the CSM design. Compared to conventional optical-sectioning microscopes (such as Laser Scanning Confocal Microscopes (LSCMs) or Programmable Array Microscopes (PAMs)), the CSM system realizes optical-sectioning imaging using a single-pixel photo detector and without any mechanical scanning process. The complete information of the imaging scene is reconstructed from the CS measurements numerically.
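The single-pixel measurement model the abstract describes can be caricatured in a few lines: each measurement is one photodetector reading of the scene under a programmable pattern, and the scene is recovered numerically from fewer readings than pixels. Everything below (the ±1 patterns, the pixel-sparse toy scene, and the bare-bones orthogonal matching pursuit) is an illustrative assumption, not the authors' design.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, k = 64, 32, 3                      # scene pixels, single-pixel readings, sparsity

scene = np.zeros(n)                      # toy scene, sparse in the pixel basis
scene[rng.choice(n, k, replace=False)] = rng.uniform(1.0, 2.0, k)

patterns = rng.choice([-1.0, 1.0], size=(m, n))   # programmable +/-1 patterns
y = patterns @ scene                     # one photodetector reading per pattern

# bare-bones orthogonal matching pursuit
residual, support = y.copy(), []
for _ in range(k):
    j = int(np.argmax(np.abs(patterns.T @ residual)))   # most correlated pixel
    support.append(j)
    coef, *_ = np.linalg.lstsq(patterns[:, support], y, rcond=None)
    residual = y - patterns[:, support] @ coef

x_hat = np.zeros(n)
x_hat[support] = coef
print("max reconstruction error:", np.abs(x_hat - scene).max())
```

The appeal for microscopy, as the abstract notes, is that the detector side stays trivially simple: all the complexity moves into the patterns and the reconstruction.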

Compressive sensing principles and iterative sparse recovery for inverse and ill-posed problems by Evelyn Herrholz and Gerd Teschke. The abstract reads:
In this paper, we are concerned with compressive sampling strategies and sparse recovery principles for linear inverse and ill-posed problems. As the main result, we provide compressed measurement models for ill-posed problems and recovery accuracy estimates for sparse approximations of the solution of the underlying inverse problem. The main ingredients are variational formulations that allow the treatment of ill-posed operator equations in the context of compressively sampled data. In particular, we rely on Tikhonov variational and constrained optimization formulations. One essential difference to the classical compressed sensing framework is the incorporation of joint sparsity measures allowing the treatment of infinite-dimensional reconstruction spaces. The theoretical results are furnished with a number of numerical experiments.

I just found the following in an extensive search on arXiv; CS is mentioned: An inverse problem for the wave equation with one measurement and the pseudorandom noise by Tapio Helin, Matti Lassas, and Lauri Oksanen. The abstract reads:
We consider the wave equation $(\partial_t^2-\Delta_g)u(t,x)=f(t,x)$, in $\mathbb{R}^n$, $u|_{\mathbb{R}_-\times \mathbb{R}^n}=0$, where the metric $g=(g_{jk}(x))_{j,k=1}^n$ is known outside an open and bounded set $M\subset \mathbb{R}^n$ with smooth boundary $\partial M$. We define a deterministic source $f(t,x)$ called the pseudorandom noise as a sum of point sources, $f(t,x)=\sum_{j=1}^\infty a_j\delta_{x_j}(x)\delta(t)$, where the points $x_j,\ j\in\mathbb{Z}_+$, form a dense set on $\partial M$. We show that when the weights $a_j$ are chosen appropriately, $u|_{\mathbb{R}\times \partial M}$ determines the scattering relation on $\partial M$, that is, it determines for all geodesics which pass through $M$ the travel times together with the entering and exit points and directions. The wave $u(t,x)$ contains the singularities produced by all point sources, but when $a_j=\lambda^{-\lambda^{j}}$ for some $\lambda>1$, we can trace back the point source that produced a given singularity in the data. This gives us the distance in $(\mathbb{R}^n, g)$ between a source point $x_j$ and an arbitrary point $y \in \partial M$. In particular, if $(\bar M,g)$ is a simple Riemannian manifold and $g$ is conformally Euclidean in $\bar M$, these distances are known to determine the metric $g$ in $M$. In the case when $(\bar M,g)$ is non-simple we present a more detailed analysis of the wave fronts yielding the scattering relation on $\partial M$.