Looks like we have an improvement over a previous contribution: Matrix Completion from a Few Entries by Raghunandan Keshavan, Sewoong Oh and Andrea Montanari. The abstract reads:
Let M be a random nα × n matrix of rank r, and assume that a uniformly random subset E of its entries is observed. We describe an efficient algorithm that reconstructs M from |E| = O(rn) observed entries. More precisely, for any δ > 0, there exists C(δ, α) < ∞ such that, if |E| ≥ C(δ, α) rn, then M can be reconstructed with root mean square error smaller than δ. Further, if |E| ≥ C'(α) rn log n, M can be reconstructed exactly with high probability. This settles a question left open by Candès and Recht and improves over the guarantees for their reconstruction algorithm. The complexity of our algorithm is O(|E| r log n), which opens the way to its use for massive data sets. In the process of proving these statements, we obtain a generalization of a celebrated result by Friedman-Kahn-Szemerédi and Feige-Ofek on the spectrum of sparse random matrices.
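To get a feel for the spectral part of such an approach, here is a minimal numpy sketch of my own (not the authors' OptSpace algorithm, which additionally trims over-represented rows and columns and refines the estimate by gradient descent on the Grassmann manifold):

```python
import numpy as np

def spectral_complete(M_obs, mask, r):
    """Rank-r spectral estimate from partially observed entries.

    Minimal sketch: rescale the observed entries so the masked matrix
    is an unbiased estimate of M, then project onto rank r via SVD.
    """
    p = mask.mean()                          # fraction of observed entries
    X = (M_obs * mask) / p
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]    # keep the top-r singular triplets

# Toy usage: a 50 x 40 rank-2 matrix, 30% of entries observed.
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 40))
mask = rng.random(M.shape) < 0.3
M_hat = spectral_complete(M * mask, mask, r=2)
print(np.sqrt(np.mean((M_hat - M) ** 2)))    # RMSE of the spectral estimate
```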
Of related interest are On the Calculation of the l2 → l1 Induced Matrix Norm by Konstantinos Drakakis and Barak A. Pearlmutter, and Nuclear norm minimization for the planted clique and biclique problems by Brendan Ames and Stephen Vavasis.
Here is a paper that complements well the joint manifolds we saw earlier from the Rice folks. While the joint manifold framework is very interesting, in the end image reconstruction is a requirement for some applications. This is done in Joint Reconstruction of Compressed Multi-view Images by Xu Chen and Pascal Frossard. The abstract reads:
This paper proposes a distributed representation algorithm for multi-view images that are jointly reconstructed at the decoder. Compressed versions of each image are first obtained independently with random projections. The multiple images are then jointly reconstructed by the decoder, under the assumption that the correlation between images can be represented by local geometric transformations. We build on the compressed sensing framework and formulate the joint reconstruction as an l2-l1 optimization problem. It tends to minimize the MSE distortion of the decoded images, under the constraint that these images have sparse and correlated representations over a structured dictionary of atoms. Simulation results with multi-view images demonstrate that our approach achieves better reconstruction results than independent decoding. Moreover, we show the advantage of structured dictionaries for capturing the geometrical correlation between multi-view images.
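For readers who want to see the generic machinery behind such an l2-l1 formulation, here is a minimal iterative soft-thresholding (ISTA) sketch for a single signal; the paper's actual objective couples the views through local geometric transformations, which this toy version omits:

```python
import numpy as np

def ista(A, y, lam, iters=500):
    """Solve min_x 0.5*||A x - y||_2^2 + lam*||x||_1
    by iterative soft-thresholding (ISTA)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1/L with L = ||A||_2^2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - step * (A.T @ (A @ x - y))          # gradient step on the l2 term
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # l1 prox
    return x
```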
As an echo to yesterday's l_0 papers, here is Sparse reconstructions via quasiconvex homotopic regularization by Ali Bilgin and Kevin LaTourette (also check the attendant presentation). The abstract reads:
The use of homotopic approximations to the L0 `norm' has been shown to allow for accurate image reconstructions from sparse data at much higher levels of undersampling than previous techniques have allowed. In this work, I introduce the wavelet transform as a sparsity prior in this context and demonstrate that it can be used to accurately reconstruct images acquired in the MRI setting. Further, I present a proof that for the chosen class of quasiconvex homotopic functions we are guaranteed to converge to a global minimum given that a local minimum exists. Finally, I investigate the application of a continuation scheme in the regularization parameter and its effects on image reconstruction quality.
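Here is a rough numpy sketch of what a homotopic continuation on a quasiconvex surrogate of the L0 `norm' can look like; the surrogate rho_sigma(t) = |t|/(|t| + sigma) and the reweighted soft-thresholding inner loop are my own illustrative choices, not necessarily the authors' scheme:

```python
import numpy as np

def homotopic_l0(A, y, sigmas=(1.0, 0.3, 0.1, 0.03), lam=0.05, inner=200):
    """Continuation on the quasiconvex surrogate rho_sigma(t) = |t|/(|t|+sigma),
    which tends to the l0 'norm' as sigma -> 0.  Each stage is handled by
    reweighted soft-thresholding, warm-started from the previous stage."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for sigma in sigmas:                      # gradually tighten the surrogate
        for _ in range(inner):
            w = sigma / (np.abs(x) + sigma) ** 2      # rho_sigma'(|x_i|)
            g = x - step * (A.T @ (A @ x - y))
            x = np.sign(g) * np.maximum(np.abs(g) - step * lam * w, 0.0)
    return x
```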
In a different area, Bit Precision Analysis for Compressed Sensing by Ehsan Ardestanizadeh, Mahdi Cheraghchi, Amin Shokrollahi. The abstract reads:
This paper studies the stability of some reconstruction algorithms for compressed sensing in terms of the bit precision. Considering the fact that practical digital systems deal with discretized signals, we motivate the importance of the total number of accurate bits needed from the measurement outcomes in addition to the number of measurements. It is shown that if one uses a $2k \times n$ Vandermonde matrix with roots on the unit circle as the measurement matrix, $O(\ell + k \log(n/k))$ bits of precision per measurement are sufficient to reconstruct a $k$-sparse signal $x \in \mathbb{R}^n$ with dynamic range (i.e., the absolute ratio between the largest and the smallest nonzero coefficients) at most $2^\ell$ within $\ell$ bits of precision, hence identifying its correct support. Finally, we obtain an upper bound on the total number of required bits when the measurement matrix satisfies a restricted isometry property, which is in particular the case for random Fourier and Gaussian matrices. For very sparse signals, the upper bound on the number of required bits for Vandermonde matrices is shown to be better than this general upper bound.
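For the curious, the Vandermonde measurement matrix mentioned in the abstract is easy to write down: with nodes z_i = exp(2*pi*1j*i/n) it amounts to the first 2k rows of the DFT matrix. The sketch below only builds the matrix; the Prony/Reed-Solomon style decoding is not shown:

```python
import numpy as np

def vandermonde_unit_circle(k, n):
    """2k x n Vandermonde measurement matrix with distinct nodes on the
    unit circle, z_i = exp(2*pi*1j*i/n); entry (j, i) equals z_i**j.
    These are exactly the first 2k rows of the n-point DFT matrix."""
    z = np.exp(2j * np.pi * np.arange(n) / n)
    j = np.arange(2 * k)[:, None]
    return z[None, :] ** j

Phi = vandermonde_unit_circle(k=5, n=64)     # a 10 x 64 measurement matrix
print(Phi.shape)
```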
and The statistical restricted isometry property and the Wigner semicircle distribution of incoherent dictionaries by Shamgar Gurevich and Ronny Hadani. The abstract reads:

In this article we present a statistical version of the Candès-Tao restricted isometry property (SRIP for short) which holds in general for any incoherent dictionary which is a disjoint union of orthonormal bases. In addition, we show that, under appropriate normalization, the eigenvalues of the associated Gram matrix fluctuate around 1 according to the Wigner semicircle distribution. The result is then applied to various dictionaries that arise naturally in the setting of finite harmonic analysis, giving, in particular, a better understanding of a remark of Applebaum-Howard-Searle-Calderbank concerning the RIP for the Heisenberg dictionary of chirp-like functions.

Also by the same authors: Incoherent dictionaries and the statistical restricted isometry property.
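A quick numerical illustration of the SRIP statement: take a dictionary that is a disjoint union of two orthonormal bases (identity plus normalized DFT, a standard incoherent pair), pick a few columns at random, and watch the Gram eigenvalues cluster around 1. The sizes below are arbitrary:

```python
import numpy as np

n, k = 256, 40
# Dictionary = disjoint union of two orthonormal bases: identity + unitary DFT.
F = np.exp(-2j * np.pi * np.outer(np.arange(n), np.arange(n)) / n) / np.sqrt(n)
D = np.hstack([np.eye(n), F])

rng = np.random.default_rng(1)
cols = rng.choice(2 * n, size=k, replace=False)   # random k-column subdictionary
G = D[:, cols].conj().T @ D[:, cols]              # k x k Gram matrix
eig = np.linalg.eigvalsh(G)
print(eig.min(), eig.max())   # eigenvalues fluctuate around 1 for most draws
```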
Linear Transformations and Restricted Isometry Property by Leslie Ying, Yi Ming Zou. The abstract reads:
The Restricted Isometry Property (RIP) introduced by Candes and Tao is a fundamental property in compressed sensing theory. It says that if a sampling matrix satisfies the RIP of a certain order proportional to the sparsity of the signal, then the original signal can be reconstructed even though the sampling matrix yields a sample vector much smaller in size than the original signal. This short note addresses the problem of how a linear transformation affects the RIP. This problem arises from the consideration of extending the sensing matrix and the use of compressed sensing in different bases. As an application, the result is applied to the redundant dictionary setting in compressed sensing.
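Since the RIP constant is defined through the extremal eigenvalues of submatrix Grams, one can brute-force it at toy sizes and check directly how a column transformation A -> A T changes it. A small sketch (the sizes, the Gaussian ensemble and the diagonal transform are just for illustration):

```python
import numpy as np
from itertools import combinations

def rip_constant(A, s):
    """Brute-force RIP constant of order s: the worst deviation of the
    extremal eigenvalues of A_S^T A_S from 1 over all supports |S| = s.
    Exponential in n, so toy sizes only."""
    delta = 0.0
    for S in combinations(range(A.shape[1]), s):
        eig = np.linalg.eigvalsh(A[:, list(S)].T @ A[:, list(S)])
        delta = max(delta, abs(eig[0] - 1.0), abs(eig[-1] - 1.0))
    return delta

rng = np.random.default_rng(2)
A = rng.standard_normal((8, 12)) / np.sqrt(8)      # roughly unit-norm columns
T = np.diag(rng.uniform(0.5, 2.0, size=12))        # a simple column scaling
print(rip_constant(A, 2), rip_constant(A @ T, 2))  # how the transform moves delta
```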
Distortion-Rate Functions for Quantized Compressive Sensing (version 1) by Wei Dai, Hoa Vinh Pham, and Olgica Milenkovic. The abstract reads:
We study the average distortion introduced by quantizing compressive sensing measurements. Both uniform quantization and non-uniform quantization are considered. The asymptotic distortion-rate functions are obtained when the measurement matrix belongs to certain random ensembles. A new modification of a greedy reconstruction algorithm that accommodates quantization errors is proposed and its performance is evaluated through extensive computer simulations.
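As a back-of-the-envelope companion, here is a sketch of a uniform scalar quantizer applied to Gaussian stand-ins for CS measurements; it only illustrates the classical roughly-6-dB-per-bit distortion decay, not the paper's distortion-rate analysis:

```python
import numpy as np

def uniform_quantize(y, bits):
    """Uniform scalar quantizer over the empirical range of y,
    with mid-point reconstruction."""
    lo, hi = y.min(), y.max()
    width = (hi - lo) / 2 ** bits
    idx = np.clip(np.floor((y - lo) / width), 0, 2 ** bits - 1)
    return lo + (idx + 0.5) * width

rng = np.random.default_rng(3)
y = rng.standard_normal(10_000)              # stand-in for CS measurements
for b in (2, 4, 6, 8):
    print(b, np.mean((y - uniform_quantize(y, b)) ** 2))   # MSE vs. rate
```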
This is version 3 of Subspace Pursuit for Compressive Sensing Signal Reconstruction by Wei Dai and Olgica Milenkovic. The abstract reads:
We propose a new method for reconstruction of sparse signals with and without noisy perturbations, termed the subspace pursuit algorithm. The algorithm has two important characteristics: low computational complexity, comparable to that of orthogonal matching pursuit techniques when applied to very sparse signals, and reconstruction accuracy of the same order as that of LP optimization methods. The presented analysis shows that in the noiseless setting, the proposed algorithm can exactly reconstruct arbitrary sparse signals provided that the sensing matrix satisfies the restricted isometry property with a constant parameter. In the noisy setting and in the case that the signal is not exactly sparse, it can be shown that the mean squared error of the reconstruction is upper bounded by constant multiples of the measurement and signal perturbation energies.
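For reference, the subspace pursuit iteration is compact enough to sketch in a few lines of numpy. This follows my reading of the algorithm (expand the support by the K strongest residual correlations, solve least squares, prune back to K, stop when the residual stops shrinking) and is not the authors' code:

```python
import numpy as np

def subspace_pursuit(A, y, K, max_iter=20):
    """Subspace pursuit sketch: expand, least-squares, prune, repeat."""
    def ls(S):
        coef, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
        return coef

    T = np.argsort(np.abs(A.T @ y))[-K:]          # initial support estimate
    r = y - A[:, T] @ ls(T)
    for _ in range(max_iter):
        cand = np.union1d(T, np.argsort(np.abs(A.T @ r))[-K:])
        coef = ls(cand)
        T_new = cand[np.argsort(np.abs(coef))[-K:]]  # keep K largest coefficients
        r_new = y - A[:, T_new] @ ls(T_new)
        if np.linalg.norm(r_new) >= np.linalg.norm(r):
            break                                  # residual stopped decreasing
        T, r = T_new, r_new
    x = np.zeros(A.shape[1])
    x[T] = ls(T)
    return x
```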
An interesting presentation by Yair Weiss, Hyun Sung Chang and William Freeman entitled Computational Photography, Computational Neuroscience, and Random Projections. They give a different answer to the idea I mentioned a while back, which is great. In particular, they talk about their Uncertain Component Analysis as featured in their previous paper entitled: Learning Compressed Sensing. I note their use of George Costanza in the slides (like I did earlier).
Sparse Representation for the Classification of Tumors Using Gene Expression Data by Xiyi Hang and Fang-Xiang Wu. The abstract reads:
Personalized drug design requires the classification of cancer patients to be as accurate as possible. With advances in genome sequencing and microarray technology, a large amount of gene expression data has been and will continuously be produced from various cancer patients. Such cancer-related gene expression data allows us to classify tumors at the genome-wide level. However, cancer-related gene expression datasets typically have many more genes (features) than samples (patients), which imposes a challenge for the classification of tumors. In this paper, a new method is proposed for cancer diagnosis using gene expression data by casting the classification problem as finding sparse representations of test samples with respect to training samples. The sparse representation is computed by the l1-regularized least squares method. To investigate its performance, the proposed method is applied to six tumor gene expression datasets and compared with various support vector machine (SVM) methods. The experimental results have shown that the performance of the proposed method is comparable with or better than those of SVMs. In addition, the proposed method is more efficient than SVMs as it needs no model selection.
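The classification rule itself is short; here is a hedged numpy sketch of the idea (represent the test sample sparsely over the training columns via l1-regularized least squares, then assign the class whose training columns explain it best). The ISTA solver and the parameters are my own illustrative choices, not the authors' implementation:

```python
import numpy as np

def src_classify(D, labels, t, lam=0.01, iters=300):
    """Sparse-representation classification.

    D      : matrix whose columns are training samples
    labels : array of class ids, one per training column
    t      : test sample to classify
    """
    step = 1.0 / np.linalg.norm(D, 2) ** 2
    x = np.zeros(D.shape[1])
    for _ in range(iters):                  # l1-regularized least squares (ISTA)
        g = x - step * (D.T @ (D @ x - t))
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)
    classes = np.unique(labels)
    # Pick the class whose columns give the smallest reconstruction residual.
    resid = [np.linalg.norm(t - D[:, labels == c] @ x[labels == c]) for c in classes]
    return classes[int(np.argmin(resid))]
```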
Of interest for the Compressive Sensing Hardware page, which I have revamped a little bit, is the following older paper: An architecture for the efficient implementation of compressive sampling reconstruction algorithms in reconfigurable hardware by Fernando E. Ortiz, Eric J. Kelmelis and Gonzalo R. Arce. The abstract reads:

According to the Shannon-Nyquist theory, the number of samples required to reconstruct a signal is proportional to its bandwidth. Recently, it has been shown that acceptable reconstructions are possible from a reduced number of random samples, a process known as compressive sampling. Taking advantage of this realization has a radical impact on power consumption and communication bandwidth, crucial in applications based on small/mobile/unattended platforms such as UAVs and distributed sensor networks. Although the benefits of these compression techniques are self-evident, the reconstruction process requires the solution of nonlinear signal processing algorithms, which limits applicability in portable and real-time systems. In particular, (1) the power consumption associated with the difficult computations offsets the power savings afforded by compressive sampling, and (2) limited computational power prevents these algorithms from keeping pace with the data-capturing sensors, resulting in undesirable data loss. FPGA-based computers offer low power consumption and high computational capacity, providing a solution to both problems simultaneously. In this paper, we present an architecture that implements the algorithms central to compressive sampling in an FPGA environment. We start by studying the computational profile of the convex optimization algorithms used in compressive sampling. Then we present the design of a pixel pipeline suitable for FPGA implementation, able to execute these algorithms.
On his blog, Paul Deignan wonders what wording should be used for compressive sensing. Send him your suggestions.
IPAM has a short introduction to the Bregman scheme used for reconstruction in CS here. Yin Zhang will give a talk at Rice on Monday (it is on the CS calendar).
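For those who want a taste of the Bregman machinery before reading the IPAM notes, here is a minimal sketch of the linearized Bregman variant (the two-line iteration popularized by Osher, Yin and co-workers); the parameters are illustrative and the step size is chosen conservatively:

```python
import numpy as np

def linearized_bregman(A, y, mu=1.0, iters=2000):
    """Linearized Bregman iteration; with a small enough step it solves
    min mu*||x||_1 + ||x||_2^2/(2*delta)  subject to  A x = y,
    a slightly regularized stand-in for basis pursuit."""
    delta = 1.0 / np.linalg.norm(A, 2) ** 2       # conservative step size
    v = np.zeros(A.shape[1])
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        v += A.T @ (y - A @ x)                    # accumulate the residual
        x = delta * np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)  # shrink
    return x
```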
Finally, here is a homework assignment on CS by Subhabrata Bhattacharya at UCF.
A quick comment on the post about quasiconvex homotopic \ell_{0} minimization. I have to point out that in the published version of our referenced report (IEEE Trans. Medical Imaging 28(1):106-121), the use of wavelets is discussed and an example reconstruction using curvelets is presented.
Nonetheless, nice results, I look forward to seeing more!
Josh T.
One comment about the quasiconvex homotopic paper. I think Claim 1 is not true: it is not necessary that a local minimum of a quasiconvex function is a global minimum. As a check, just consider a function which is linear with slope 1 on [0,1] and constant equal to 1 on [1,2]. Any local minimum on [1,2] gives the same value, but it is not the global minimum.
As a response to the previous comment, I agree that the claim you are referring to is false as written. If we restrict ourselves to strictly quasiconvex functions - which are the form of the functions we are interested in for this paper - then the claim holds true.
I'd like to point out that this paper is far from finished, and still has a fair amount of grooming to do.
To Josh - thank you for the kind words, I will be certain to look over your updated version.
Kevin L.