
Tuesday, March 31, 2009

CS: A question in DSN, a talk, FPCA code, Near-Oracle Performance of Basis Pursuit Under Random Noise


It looks like the stats of the blog are down this week; it may be a combination of workshop attendance in faraway places and/or Spring Break.

In yesterday's entry, one commenter pointed out that one of the arXiv papers seemed to lack some references to prior work. As a general rule, if you provide me with fair comments, I can forward them to the authors of the papers while keeping your name anonymous.

Angshul Majumdar asked the following question:

I do not really understand how Distributed Sensor Networks benefit from CS. In general the main constraint in DSN is the battery life of the sensors. Now the battery is consumed for communications. Therefore the final aim is to reduce the communication cost of each sensor. Now, in general if there are n sensors each one will send one piece of data to the fusion center. But by CS techniques each sensor needs to multiply its value by a k dimensional vector and send out k pieces of data. What CS argues is that all the sensors need to send out k pieces of data, and they are added in the air by the superposition of radio waves, and the fusion center decodes the n dimensional vector from the k dimensional vector. The problem is that each sensor is sending out k pieces instead of one. Therefore it is likely that the battery burns out faster. Maybe I haven't understood the problem. Can you explain? Or who is the right person to enquire about this?
Anybody care to answer Angshul?
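For anyone who wants to experiment with the setup Angshul describes, here is a minimal sketch (my own, in Python, with made-up sizes): each of the n sensors modulates its scalar reading onto k transmission slots using its own pseudo-random signature, the slot-wise sums received at the fusion center are exactly y = Phi x, and a sparse x is then recovered by any CS solver (a bare-bones orthogonal matching pursuit below).

    import numpy as np

    np.random.seed(0)
    n, k, s = 100, 20, 3  # sensors, transmission slots, active sensors

    # Sparse vector of sensor readings: only s sensors see something.
    x = np.zeros(n)
    x[np.random.choice(n, s, replace=False)] = np.random.randn(s)

    # Sensor i would transmit x[i] * Phi[:, i] over the k slots; Phi is a
    # pseudo-random signature matrix known to the fusion center.
    Phi = np.random.randn(k, n) / np.sqrt(k)

    # Superposition in the air: the fusion center receives the slot-wise sum.
    y = sum(Phi[:, i] * x[i] for i in range(n))  # identical to Phi @ x

    # Bare-bones orthogonal matching pursuit at the fusion center.
    residual, support = y.copy(), []
    for _ in range(s):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef

    x_hat = np.zeros(n)
    x_hat[support] = coef
    print("recovery error:", np.linalg.norm(x_hat - x))

Whether this saves energy then hinges on whether the k slots really come for free through analog superposition; if each sensor must make k separate transmissions, Angshul's objection stands.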

Gabriel Peyre let me know that he will be presenting a seminar on Wednesday April 1st at IHP in Paris. His talk is entitled From Compressive Sampling to Compressive Computing, and it is joint work with Laurent Demanet.

Also, Shiqian Ma just let me know that he has set up a website for his FPCA code (Fixed Point Continuation with Approximate SVD), which is designed for solving matrix rank minimization and matrix completion problems. From the site:

FPCA is short for Fixed Point Continuation with Approximate SVD for solving matrix rank minimization and matrix completion problems.

The affinely constrained matrix rank minimization (ACMRM) problem is:

min rank(X) subject to A(X) = b.

A special case of ACMRM is the matrix completion (MC) problem:

min rank(X) subject to X_ij = M_ij, (i,j) in Ω.

That is, a subset of entries of a low rank matrix M is known and we want to fill in the unknown entries.

The tightest convex relaxation of ACMRM is its nuclear norm relaxation:

min ‖X‖_* subject to A(X) = b.

Note that if b is contaminated by noise, we should relax this problem to:

min ‖X‖_* subject to ‖A(X) − b‖_2 ≤ θ.

FPCA solves the Lagrangian version of the above problem:

min μ‖X‖_* + ½‖A(X) − b‖_2².

This problem is also called the nuclear norm regularized least squares problem.
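To give a flavor of what a solver for this last problem does, here is my own toy sketch of the fixed point continuation idea (not Shiqian Ma's FPCA code; it omits the approximate SVD and the continuation strategy, and the problem sizes and parameters are made up): alternate a gradient step on the least squares term with a singular value shrinkage step.

    import numpy as np

    def svd_shrink(Y, t):
        # Singular value shrinkage: the prox operator of t * (nuclear norm).
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        return U @ np.diag(np.maximum(s - t, 0.0)) @ Vt

    np.random.seed(1)
    m, n, r = 60, 50, 2
    M = np.random.randn(m, r) @ np.random.randn(r, n)  # low rank target
    mask = np.random.rand(m, n) < 0.5                  # observed entries

    mu, tau = 1e-2, 1.0  # regularization weight and step size (hand-tuned)
    X = np.zeros((m, n))
    for _ in range(300):
        grad = (X - M) * mask  # gradient of 0.5 * ||P_Omega(X - M)||_F^2
        X = svd_shrink(X - tau * grad, tau * mu)

    print("relative error:", np.linalg.norm(X - M) / np.linalg.norm(M))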

The website is at http://www.columbia.edu/~sm2756/FPCA.htm. Thank you Shiqian!

With regards to new papers, I just found this one on arxiv:

Zvika Ben-Haim, Yonina Eldar and Michael Elad, Near-Oracle Performance of Basis Pursuit Under Random Noise. The abstract reads:

We consider the problem of estimating a deterministic sparse vector x0 from underdetermined noisy measurements, in which the noise is a Gaussian random vector. Two techniques which are commonly used in this setting are the Dantzig selector and basis pursuit denoising (BPDN). It has previously been shown that, with high probability, the Dantzig selector comes close to the performance of the oracle estimator which knows the locations of the nonzero components of x0, when the performance is measured by the L2 distance between x0 and its estimate. In this paper, we demonstrate that BPDN achieves analogous results, but that the constants involved in the BPDN analysis are significantly better.
Image Credit: NASA/JPL/Space Science Institute, Epimetheus.

Monday, March 30, 2009

CS: An answer to a question, Statistical RIP and Semi-Circle Distribution of Incoherent Dictionaries, Learning with Structured Sparsity


In response to a previous query in this entry, Matthew Herman responded with:

You posted a question recently from Deming Yuan (Vincent) about an upper bound on the number of missing entries in a measurement matrix. If that matrix is amenable to analysis with the RIP, then it seems that my new paper with Thomas Strohmer can be used to model this problem. Let

\hat{A} = A + E

be a complete measurement matrix (without any entries missing), where A is an incomplete matrix (with a few missing entries), and E is a perturbation to A consisting mostly of zeros except for a few nonzero entries corresponding to the locations of A which are "missing."

It would take some extra work, but in theory, one could then use this model and our results on perturbations to a measurement matrix to find an upper bound on the number of missing entries. In particular, further assumptions would have to be made on the nature of these entries (e.g., how they are grouped, if they have some kind of special structure, etc.). This is discussed briefly in the appendix of our paper "General Deviants: An Analysis of Perturbations in Compressed Sensing."


Thanks Matthew!
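As a small numerical companion to Matthew's note (my own sketch, not from his paper), one can build E explicitly from the locations of the missing entries and measure its relative size, which is the kind of quantity that enters the perturbation bounds of General Deviants:

    import numpy as np

    np.random.seed(2)
    m, n, n_missing = 40, 100, 5

    # "Complete" Gaussian measurement matrix A_hat.
    A_hat = np.random.randn(m, n) / np.sqrt(m)

    # Zero out a few entries to get the incomplete matrix A actually used.
    idx = (np.random.randint(0, m, n_missing),
           np.random.randint(0, n, n_missing))
    A = A_hat.copy()
    A[idx] = 0.0

    # E holds the missing values, so that A_hat = A + E as in the note.
    E = A_hat - A
    print("relative perturbation:",
          np.linalg.norm(E, 2) / np.linalg.norm(A_hat, 2))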

From arXiv, Statistical RIP and Semi-Circle Distribution of Incoherent Dictionaries by Shamgar Gurevich and Ronny Hadani. The abstract reads:
In this paper we formulate and prove a statistical version of the Candes-Tao restricted isometry property (SRIP for short) which holds in general for any incoherent dictionary which is a disjoint union of orthonormal bases. In addition, we prove that, under appropriate normalization, the eigenvalues of the associated Gram matrix fluctuate around 1 according to the Wigner semicircle distribution. The result is then applied to various dictionaries that arise naturally in the setting of finite harmonic analysis, giving, in particular, a better understanding of a remark of Applebaum-Howard-Searle-Calderbank concerning RIP for the Heisenberg dictionary of chirp like functions.

and Learning with Structured Sparsity by Junzhou Huang, Tong Zhang, Dimitris Metaxas. The abstract reads:
This paper investigates a new learning formulation called structured sparsity, which is a natural extension of the standard sparsity concept in statistical learning and compressive sensing. By allowing arbitrary structures on the feature set, this concept generalizes the group sparsity idea that has become popular in recent years. A general theory is developed for learning with structured sparsity, based on the notion of coding complexity associated with the structure. It is shown that if the coding complexity of the target signal is small, then one can achieve improved performance by using coding complexity regularization methods, which generalize the standard sparse regularization. Moreover, a structured greedy algorithm is proposed to efficiently solve the structured sparsity problem. It is shown that the greedy algorithm approximately solves the coding complexity optimization problem under appropriate conditions. Experiments are included to demonstrate the advantage of structured sparsity over standard sparsity on some real applications.


Credit photo: NASA, ISS as seen from STS-119

Saturday, March 28, 2009

CS: Top ten Lists

If you had the possibility of asking a question about compressive sensing in some anonymous fashion, which one would that be? What do you hear most often about compressive sensing that is not true? Can you give me an argument as to why compressive sensing is not relevant to a certain area of expertise? If you want to contribute to some of the top ten lists I am writing, please go ahead and add your input in the comment section or directly by e-mail. Go crazy. Thanks!


P.S: If you send me an e-mail, please also notify me if you want to remain anonymous.




Credit: NASA, The International Space Station fully assembled as seen by the Space Shuttle (STS-119).

Friday, March 27, 2009

CS: Matrix completion, Compressive Sensing Tutorial Videos, Non-convex algorithms

The image on the side represents Uranus as seen by the Hubble telescope after stars were used in a calibration exercise. Each star image provides a point spread function for the camera. Would it be a good thing to perform the same imaging calibration for the Cassini wide angle camera, now that we know it has dust on one of its filters? If one could aim at the right star, the camera could provide some superresolution.


Continuing the matrix completion discussion quoted in Monday's entry below, Terry Tao replied to Greg Kuperberg:

Dear Greg,

The noncommutative circulant matrices are interesting, although they begin to fall outside of the scope of matrix completion as it becomes quite hard for such matrices to be low rank (the noncommutative Fourier transform, in addition to being sparse, has to avoid all the high-dimensional irreducible representations). The proofs of the matrix completion results mimic in many places the compressed sensing results (and now I have a better understanding as to why, thanks to your observation), but they are more complicated and give worse bounds. (For instance, applying the result of this paper directly to the problem of measuring a function with only r Fourier coefficients using as few samples as possible, we need nr log^6 n or so random samples, whereas our original result in that case gave r log n.) (The factor of n discrepancy is due to the fact that in the circulant case, measuring one matrix entry automatically gives you another n-1 matrix entries for free, so that should be ignored.)

At an early stage in the project with Emmanuel we tried using various non-commutative tools, notably the non-commutative Khintchine inequality, but the high moment expressions we ended up facing were just too entangled together in a non-commutative way for such tools to simplify the expressions we were facing. It seems that the moment method is actually one of the more robust methods for doing things like controlling operator norms of products of random matrices, though the combinatorial tasks needed to execute the method can be really quite sizeable.

while Greg Kuperberg replied:

Yeah I sort of figured that the log factors would be worse. Of course it raises the question of what’s better about circulant matrices, actual convergence/completion or merely the methods for obtaining bounds.

As for non-commutative circulant matrices, it is true that there is a relevant negative result concerning the structure of finite groups. Namely, if all irreps have dimension at most k, then the group G has an abelian subgroup of index at most f(k). However, I think that f may grow quickly enough that there is a middle ground, where the rank is not all that much larger than sparseness and yet G is not all that close to abelian.

Then too, when you have a non-commutative circulant matrix M, or indeed in any circumstance when M a priori commutes with some group action of a group G, then the spectrum of M is colored by the irreps of G. So in this case you have many natural variations of the nuclear norm obtained by assigning different weights to the irreps. Should each irrep be weighted by its dimension, as you will obtain by direct restriction of the ordinary nuclear norm? Or should it be weighted by 1, or by something in between? Well, maybe it’s as simple as that the sparseness promise is described with a weighting, and the convex relaxation should simply be chosen with the same weighting.


While we are on the subject of matrix completion, version 3 of Matrix Completion from a Few Entries by Raghunandan H. Keshavan, Andrea Montanari, Sewoong Oh is now available. The abstract reads:
Let M be a random (alpha n) x n matrix of rank r << n^(1/2), and assume that a uniformly random subset E of its entries is observed. We describe an efficient algorithm that reconstructs M from |E| = O(rn) observed entries with relative root mean square error RMSE <= C(rn/|E|)^0.5. Further, if r=O(1), M can be reconstructed exactly from |E| = O(n log(n)) entries. These results apply beyond random matrices to general low-rank incoherent matrices. This settles (in the case of bounded rank) a question left open by Candes and Recht and improves over the guarantees for their reconstruction algorithm. The complexity of our algorithm is O(|E|r log(n)), which opens the way to its use for massive data sets. In the process of proving these statements, we obtain a generalization of a celebrated result by Friedman-Kahn-Szemeredi and Feige-Ofek on the spectrum of sparse random matrices.
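To see why spectral methods have any traction here, a much cruder baseline than the Keshavan-Montanari-Oh algorithm (which additionally trims over-represented rows and columns and then refines the estimate by local optimization) is to rescale the zero-filled observation matrix and project it onto rank r; this is only my illustrative sketch:

    import numpy as np

    np.random.seed(3)
    n, r, p = 200, 2, 0.2  # size, rank, sampling probability
    M = np.random.randn(n, r) @ np.random.randn(r, n)  # rank-r target
    mask = np.random.rand(n, n) < p

    # Rescaled zero-filled matrix: its expectation is M entrywise.
    M_obs = np.where(mask, M, 0.0) / p

    # Rank-r spectral projection via truncated SVD.
    U, s, Vt = np.linalg.svd(M_obs, full_matrices=False)
    M_hat = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]

    print("relative error:", np.linalg.norm(M_hat - M) / np.linalg.norm(M))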

Svetlana Avramov-Zamurovic has some new videos of the Compressive Sensing tutorial she is giving at USNA.


I have added them to the Compressive Sensing Video page.

Angshul Majumdar has contributed a new suite of software to Matlab Central entitled Non Convex Algorithms for Group Sparse Optimization, which features a reweighted ℓ2,p algorithm.
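I have not inspected Angshul's code, but a reweighted ℓ2,p scheme is typically a short loop: solve a sequence of weighted ridge problems whose group weights come from the previous iterate, so that the quadratic penalty locally mimics the nonconvex ℓ2,p group penalty with p < 1. A hypothetical sketch (all names and parameters are mine):

    import numpy as np

    def irls_group_l2p(A, y, groups, p=0.5, lam=0.1, iters=50, eps=1e-6):
        # IRLS sketch for min ||A x - y||^2 + lam * sum_g ||x_g||_2^p.
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            # Group weights computed from the previous iterate.
            w = np.ones_like(x)
            for g in groups:
                w[g] = (np.linalg.norm(x[g]) ** 2 + eps) ** (p / 2 - 1)
            # Weighted ridge regression, solved in closed form.
            x = np.linalg.solve(A.T @ A + lam * np.diag(w), A.T @ y)
        return x

    np.random.seed(4)
    A = np.random.randn(30, 60)
    groups = [list(range(i, i + 5)) for i in range(0, 60, 5)]  # 12 groups
    x_true = np.zeros(60)
    x_true[:5] = 1.0  # a single active group
    x_hat = irls_group_l2p(A, A @ x_true, groups)
    print("energy in active group:", np.linalg.norm(x_hat[:5]),
          " elsewhere:", np.linalg.norm(x_hat[5:]))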

Thursday, March 26, 2009

CS: This week's long post. Some Talks, a competition, SPARS'09 and other papers.


A small foreword...

For those of you who want to get entries of this blog directly in your mailbox, you probably want to add your email address in the right side section of this blog. If you are seeing this through a feed reader, you may want to come to the site directly. If you sign up, please note that there is no way for me to prevent any of these e-mails from reaching you when I blog on subject areas other than compressive sensing. This is one of the reasons I have decreased the blogging on other topics a little, but I feel bad about this as the initial intent of the blog was not CS in particular. So to summarize, if the title of the e-mail or the entry in the feed reader does not start with CS, then you may want to disregard it altogether. I am also generally avoiding what I would consider sensitive subjects as the readership of this blog is very widespread.
Yet I cannot guarantee an ability to know what those subjects are in the different places this blog is being read. For what it's worth, we now have 139 members in the LinkedIn Compressive Sensing Group; this is more than the number of contacts I have on LinkedIn and there is very little overlap. We also have 278 readers of the blog using different feed readers, while 72 people receive every entry by e-mail. As for people coming directly to the blog daily, that number is about 370.

With the rapid escalation in the number of papers and publications, I have become limited in the number of comments I can make on each of them; therefore I am going to try to give more context at a higher level in the future.

First, Fredo Durand will give a talk on computational photography at Ecole Normale Superieure, Paris on Friday (Thanks Laurent!). While we are on the subject of compressive sensing hardware, for those of you interested in using a GPU for some reconstruction or even acquisition, here is GPU Programming For The Rest Of Us.

Hoping to make $10,000 again, one may want to check the prize for the competition on Quantitative Single-Neuron Modeling 2009. The introduction says the following:
How well are single-cell properties reproduced by the present-day neuronal models? Recently, several labs have approached this question by assessing the quality of neuron models with respect to spike timing prediction or characteristic features of the voltage trace. So far, every modeler used his own preferred performance measure on his own data set. The Quantitative Single-Neuron Modeling Competition offers a coherent framework to compare neuronal models and fitting methods.
Participation
Participants can submit their prediction to one or more of the challenges A, B, C or D. Anyone can participate and any type of model is accepted.

Goal
This competition is an opportunity to bridge the gap between experimentalists and modelers. The Quantitative Single-Neuron Modeling Competition is an invitation to compare your methods and models to those of other people in the field. Past competitions remain available on this website and will serve as benchmarks (Jolivet et al 2008a, Jolivet et al 2008b). Prizes are given to outstanding contributions.
Prizes
The prizes are given according to the “Rules and Conditions” specified below.

The INCF Prize (10 000 CHF) is given to the participant(s) providing a significant win in at least two of the four challenges.
The FACETS Award (500 CHF) is shared between all participants providing a shared win in a single challenge.
Important Dates
Submissions via this website will open June 25, 2009.
The submission deadline is August 25, 2009.
The results will be presented at the INCF Congress of Neuroinformatics in Pilsen, September 6-8, 2009.


For this entry, we have several papers from SPARS'09 as well as other papers collected from the web (they are not in a particular order). Some of them may look like they are not about CS but they fit in the larger set of papers devoted to manifold based sensing and attendant classification tasks:


A presentation by Matthias Seeger entitled Approximate Bayesian Inference and Measurement Design Optimization for Low-Level Computer Vision and Imaging

And the papers are:

Computational Methods for Sparse Solution of Linear Inverse Problems by Joel Tropp and Stephen Wright. The abstract reads:
In sparse approximation problems, the goal is to find an approximate representation of a target signal using a linear combination of a few elementary signals drawn from a fixed collection. This paper surveys the major algorithms that are used for solving sparse approximation problems in practice. Specific attention is paid to computational issues, to the circumstances in which individual methods tend to perform well, and to the theoretical guarantees available. Many fundamental questions in electrical engineering, statistics, and applied mathematics can be posed as sparse approximation problems, which makes the algorithms discussed in this paper versatile tools with a wealth of applications.

Multi-Manifold Semi-Supervised Learning by Andrew Goldberg, Xiaojin Zhu, Aarti Singh, Zhiting Xu, Robert Nowak. The abstract reads:
We study semi-supervised learning when the data consists of multiple intersecting manifolds. We give a finite sample analysis to quantify the potential gain of using unlabeled data in this multi-manifold setting. We then propose a semi-supervised learning algorithm that separates different manifolds into decision sets, and performs supervised learning within each set. Our algorithm involves a novel application of Hellinger distance and size-constrained spectral clustering. Experiments demonstrate the benefit of our multimanifold semi-supervised learning approach.

SPIR-iT: Iterative Self-consistent Parallel Imaging Reconstruction from Arbitrary k-Space by Michael Lustig and John Pauly. The abstract reads:
A new approach to autocalibrating, coil-by-coil parallel imaging reconstruction is presented. It is a generalized reconstruction framework based on self consistency. The reconstruction problem is formulated as an optimization that yields the most consistent solution with the calibration and acquisition data. The approach is general and can accurately reconstruct images from arbitrary k-space sampling patterns. The formulation can flexibly incorporate additional image priors such as off resonance correction and regularization terms. Several iterative strategies to solve the posed reconstruction problem in both image and k-space domain are presented. These are based on projection over convex sets (POCS) and conjugate gradient (CG) algorithms. Phantom and in-vivo studies demonstrate efficient reconstructions from undersampled Cartesian and spiral trajectories. Reconstructions that include off-resonance correction and non-linear regularization are also demonstrated.



Restricted Isometry Property and lp sparse recovery failure by Mike Davies, Rémi Gribonval. The abstract reads:
This paper considers conditions based on the restricted isometry constant (RIC) under which the solution of an underdetermined linear system with minimal ℓp norm, 0 < p ≤ 1, is guaranteed to be also the sparsest one. Specifically, matrices are identified that have RIC δ_{2m} arbitrarily close to 1/√2 ≈ 0.707 where sparse recovery with p = 1 fails for at least one m-sparse vector. This indicates that there is limited room for improvement over the best known positive results of Foucart and Lai, which guarantee that ℓ1-minimisation recovers all m-sparse vectors for any matrix with δ_{2m} < 2(3 − √2)/7 ≈ 0.4531. We also present results that show, compared to ℓ1 minimisation, ℓp minimisation recovery failure is only slightly delayed in terms of the RIC values. Furthermore, when ℓp optimisation is attempted using an iterative reweighted ℓp scheme, failure can still occur for δ_{2m} arbitrarily close to 1/√2.

I note the argument given in the third section, entitled "III. RIP REST IN PEACE?" (which echoes several entries such as this one). While the KGG argument of Wotao Yin and Yin Zhang (ref. 18) seems interesting yet is NP-hard, I note that there are now some tools to take a stab at the nullspace property; both programs are prominently listed in the measurement matrix section of the Big Picture.

Here is an important paper for any practitioner: Basis Identification from Random Sparse Samples by Rémi Gribonval and Karin Schnass. The abstract reads:
This article treats the problem of learning a dictionary providing sparse representations for a given signal class, via ℓ1-minimisation. The problem is to identify a dictionary Φ from a set of training samples Y knowing that Y = ΦX for some coefficient matrix X. Using a characterisation of coefficient matrices X that allow to recover any basis as a local minimum of an ℓ1-minimisation problem, it is shown that certain types of sparse random coefficient matrices will ensure local identifiability of the basis with high probability. The typically sufficient number of training samples grows up to a logarithmic factor linearly with the signal dimension.

In this paper we introduce the multiple-description ℓ1-compression problem: minimize ‖z1‖1 + λ‖z2‖1 subject to the three distortion constraints ‖A1 z1 − y‖2 ≤ δ1, ‖A2 z2 − y‖2 ≤ δ2, and ‖(1/2)(A1 z1 + A2 z2) − y‖2 ≤ γ. This problem formulation is interesting in, e.g., ad-hoc networks where packets can be lost. If a description (z2) is lost in the network and only one description is received (z1), it is still possible to decode and obtain a signal quality equal to or better than described by the parameter δ1 (and vice versa). If both descriptions are received, the quality is determined by the parameter γ. This problem is difficult to solve using first-order projection methods due to the intersecting second-order cones. However, we show that by recasting the problem into its dual form, one can avoid the difficulties due to conflicting fidelity constraints. We then show that efficient first-order ℓ1-compression methods are applicable, which makes it possible to solve large scale problems, e.g., multiple-description ℓ1-compression of video.

Distributed Compressed Sensing for Sensor Networks, Using p-thresholding by Mohammad Golbabaee, Pierre Vandergheynst. The abstract reads:
Distributed compressed sensing is the extension of compressed sampling (CS) to sensor networks. The idea is to design a CS joint decoding scheme at the base station which exploits the inter-sensor correlations, in order to recover the whole set of observations from a very small number of random measurements per node. Here, the questions are about modeling the correlations, the design of the joint recovery algorithms, the analysis of those algorithms, the comparison between the performance of the joint and separate decoders, and finally determining how optimal they are.

Average performance of the sparsest approximation in a dictionary by François Malgouyres and Mila Nikolova. The abstract reads:
Given data d ∈ R^N, we consider its representation u* involving the least number of non-zero elements (denoted by ℓ0(u*)) using a dictionary A (represented by a matrix) under the constraint ‖Au − d‖ ≤ τ, for τ > 0 and a norm ‖.‖. This (nonconvex) optimization problem leads to the sparsest approximation of d. We assume that data d are uniformly distributed in θB_fd(1) where θ > 0 and B_fd(1) is the unit ball for a norm fd. Our main result is to estimate the probability that the data d give rise to a K-sparse solution u*: we prove that P(ℓ0(u*) ≤ K) = C_K (τ/θ)^(N−K) + o((τ/θ)^(N−K)), where u* is the sparsest approximation of the data d and C_K > 0. The constants C_K are an explicit function of ‖.‖, A, fd and K which allows us to analyze the role of these parameters for the obtention of a sparsest K-sparse approximation. Consequently, given fd and θ, we have a tool to build A and ‖.‖ in such a way that C_K (and hence P(ℓ0(u*) ≤ K)) are as large as possible for K small. In order to obtain the above estimate, we give a precise characterization of the set Σ^τ_K of all data leading to a K-sparse result. The main difficulty is to estimate accurately the Lebesgue measure of the sets Σ^τ_K ∩ B_fd(θ). We sketch a comparative analysis between our Average Performance in Approximation (APA) methodology and the well known Nonlinear Approximation (NA) which also assesses the performance in approximation.

Magnetic Resonance Spectrum Separation Using Sparse Representations and Wavelet Filters by Yu Guo, Su Ruan, Jérôme Landré, Jean-Marc Constans. The abstract reads:
Magnetic Resonance spectroscopy (MRS) provides a “frequency-signal intensity” spectrum of multiple peaks that reflect the biochemical composition of a localized region in the body. The peak intensity or the area under each peak is proportional to the concentration of that assigned metabolite. Accurate quantification of in vivo MRS (measuring peak intensity or area) is very important to diagnose certain metabolic disorders. However, strongly overlapping metabolite peaks, poor knowledge about background component (the baseline), and low signal-to-noise ratio (SNR) make the task difficult. In this paper, a novel spectrum separation method using sparse representations and wavelet filters is proposed to separate baseline and spectra of different metabolites and finally achieves an accurate MRS quantification. With the proposed method, the accuracy and the robustness of MRS quantification are improved, from simulation data, compared with a commonly used frequency-domain MRS quantification method. The quantification on tumor metabolism with in vivo brain MR spectra is also demonstrated.

Compressed Sensing of Analog Signals in Shift-Invariant Spaces by Yonina Eldar. The abstract reads:
A traditional assumption underlying most data converters is that the signal should be sampled at a rate exceeding twice the highest frequency. This statement is based on a worst-case scenario in which the signal occupies the entire available bandwidth. In practice, many signals are sparse so that only part of the bandwidth is used. In this paper, we develop methods for low-rate sampling of continuous-time sparse signals in shift-invariant (SI) spaces generated by m kernels with period T. We model sparsity by treating the case in which only k out of the m generators are active, however, we do not know which k are chosen. We show how to sample such signals at a rate much lower than m/T, which is the minimal sampling rate without exploiting sparsity. Our approach combines ideas from analog sampling in a subspace with a recently developed block diagram that converts an infinite set of sparse equations to a finite counterpart. Using these two components we formulate our problem within the framework of finite compressed sensing (CS) and then rely on algorithms developed in that context. The distinguishing feature of our results is that in contrast to standard CS, which treats finite-length vectors, we consider sampling of analog signals for which no underlying finite-dimensional model exists. The proposed framework allows to extend much of the recent literature on CS to the analog domain.

Jose Bioucas-Dias and Mario Figueiredo, Total variation restoration of speckled images using a split-Bregman algorithm. The abstract reads:
Multiplicative noise models occur in the study of several coherent imaging systems, such as synthetic aperture radar and sonar, and ultrasound and laser imaging. This type of noise is also commonly referred to as speckle. Multiplicative noise introduces two additional layers of difficulties with respect to the popular Gaussian additive noise model: (1) the noise is multiplied by (rather than added to) the original image, and (2) the noise is not Gaussian, with Rayleigh and Gamma being commonly used densities. These two features of the multiplicative noise model preclude the direct application of state-of-the-art restoration methods, such as those based on the combination of total variation or wavelet-based regularization with a quadratic observation term. In this paper, we tackle these difficulties by: (1) using the common trick of converting the multiplicative model into an additive one by taking logarithms, and (2) adopting the recently proposed split Bregman approach to estimate the underlying image under total variation regularization. This approach is based on formulating a constrained problem equivalent to the original unconstrained one, which is then solved using Bregman iterations (equivalently, an augmented Lagrangian method). A set of experiments show that the proposed method yields state-of-the-art results.

A Manifold Lifting Algorithm for Multi-View Compressive Imaging by Mike Wakin. The abstract reads:
We consider a multi-view imaging scenario where a number of cameras observe overlapping, translated subimages of a larger scene. To simplify the acquisition and encoding of these images, we propose a non-collaborative compressive sensing protocol at each camera. We discuss a prototype algorithm for joint reconstruction of the images from the ensemble of random measurements, based on the geometric manifold structure that arises from the varying camera positions. Even when the camera positions are unknown, we demonstrate that it is possible to simultaneously resolve the images and register their positions using only the random measurements.
A tiny little problem with the example is that this is real satellite imagery (i.e., this image was taken one row at a time in a pushbroom fashion) and so any movement in the image cannot be seen. The physics of the image acquisition won't allow it. To get more interesting results, one might be interested in using any of the photos we took from a stratospheric balloon here (this 4GB dataset is free to use). The reality of image taking in this context (no pushbroom) becomes a little more challenging as the clouds move as well as the camera. From the conclusion:

It is worth emphasizing the apparent benefit of random measurements that we are observing in our experiments. Suppose for each satellite that we were to permit a form of “transform coding”, where instead of transmitting 96 random measurements, we request each satellite to transmit its 96 largest wavelet coefficients. On an image-by-image basis, this typically is a better way to compress images to minimize PSNR. However, we see in Fig. 1(d) that even if we have perfect knowledge of the camera positions and fuse the available wavelet-based measurements at the collection point using a global ℓ1 minimization akin to (1), the ultimate reconstruction is inferior to our result in Fig. 4(b).
Two noiselet transform codes written by Laurent Duval and Laurent Jacques can be found here.


A Multiscale Framework for Compressive Sensing of Video by J.Y. Park and Mike Wakin. The abstract reads:
Compressive Sensing (CS) allows the highly efficient acquisition of many signals that could be difficult to capture or encode using conventional methods. From a relatively small number of random measurements, a high-dimensional signal can be recovered if it has a sparse or near-sparse representation in a basis known to the decoder. In this paper, we consider the application of CS to video signals in order to lessen the sensing and compression burdens in single- and multi-camera imaging systems. In standard video compression, motion compensation and estimation techniques have led to improved sparse representations that are more easily compressible; we adapt these techniques for the problem of CS recovery. Using a coarse-to-fine reconstruction algorithm, we alternate between the tasks of motion estimation and motion-compensated wavelet-domain signal recovery. We demonstrate that our algorithm allows the recovery of video sequences from fewer measurements than either frame-by-frame or inter-frame difference recovery methods.
From the Conclusion section:
For single-view imaging, our CS-based algorithm will not necessarily provide bit-rate savings when compared with traditional MPEG compression. However, the ability to capture a video using few measurements may be useful in applications where it is difficult or expensive to obtain raw samples. Another interesting application of the proposed algorithm is in multi-view imaging. In such a system, multi-camera arrays are used to capture a scene from several different angles. Several limitations—in transmission bandwidth, power, storage, and so on—could be overcome with the application of CS [9]. The proposed algorithm could be applied by treating each signal captured by each camera as a frame of a video sequence. This remains a subject of ongoing work and experimentation.
So as expected, it looks like we cannot do better than MPEG once the imagery has already been acquired but there seems to be some good news for CS only acquisition devices.


Image Denoising Using Sparse Representations by SeyyedMajid Valiollahzadeh, Hamed Firouzi, Massoud Babaie-Zadeh, and Christian Jutten. The abstract reads:
The problem of removing white zero-mean Gaussian noise from an image is an interesting inverse problem to be investigated in this paper through sparse and redundant representations. However, finding the sparsest possible solution in the noise scenario was of great debate among the researchers. In this paper we make use of a new approach to solve this problem and show that it is comparable with the state-of-the-art denoising approaches.

and finally,

Robust Counting Via Counter Braids: An Error-Resilient Network Measurement Architecture by Yi Lu, Balaji Prabhakar. The abstract reads:
A novel counter architecture, called Counter Braids, has recently been proposed for accurate per-flow measurement on high-speed links. Inspired by sparse random graph codes, Counter Braids solves two central problems of per-flow measurement: one-to-one flow-to-counter association and large amount of unused counter space. It eliminates the one-to-one association by randomly hashing a flow label to multiple counters and minimizes counter space by incrementally compressing counts as they accumulate. The random hash values are reproduced offline from a list of flow labels, with which flow sizes are decoded using a fast message passing algorithm. The decoding of Counter Braids introduces the problem of collecting flow labels active in a measurement epoch. An exact solution to this problem is expensive. This paper complements the previous proposal with an approximate flow label collection scheme and a novel error-resilient decoder that decodes despite missing flow labels. The approximate flow label collection detects new flows with variable-length signature counting Bloom filters in SRAM, and stores flow labels in high-density DRAM. It provides a good trade-off between space and accuracy: more than 99 percent of the flows are captured with very little SRAM space. The decoding challenge posed by missing flow labels calls for a new algorithm as the original message passing decoder becomes error-prone. In terms of sparse random graph codes, the problem is equivalent to decoding with graph deficiency, a scenario beyond coding theory. The error-resilient decoder employs a new message passing algorithm that recovers most flow sizes exactly despite graph deficiency. Together, our solution achieves a 10-fold reduction in SRAM space compared to hash-table based implementations, as demonstrated with Internet trace evaluations.
The following has just an abstract:

Space-optimal heavy hitters with strong error bounds by Radu Berinde, Graham Cormode, Piotr Indyk, and Martin Strauss. The abstract reads:
The problem of finding heavy-hitters and approximating the frequencies of items is at the heart of many problems in data stream analysis. It has been observed that several proposed solutions to this problem can outperform their worst-case guarantees on real data. This leads to the question of whether some stronger bounds can be guaranteed. We answer this in the positive by showing that a class of “counter-based algorithms” (including the popular and very space-efficient Frequent and SpaceSaving algorithms) provide much stronger approximation guarantees than previously known. Specifically, we show that errors in the approximation of individual elements do not depend on the frequencies of the most frequent elements, but only on the frequency of the remaining “tail”. This shows that counter-based methods are the most space-efficient (in fact, space-optimal) algorithms having this strong error bound. This tail guarantee allows these algorithms to solve the “sparse recovery” problem. Here, the goal is to recover a faithful representation of the vector of frequencies, f. We prove that using space O(k), the algorithms construct an approximation f* to the frequency vector f so that the ℓ1 error ‖f − f*‖1 is close to the best possible error min_{f'} ‖f' − f‖1, where f' ranges over all vectors with at most k non-zero entries. This improves the previously best known space bound of about O(k log n) for streams without element deletions. Other consequences of the tail guarantees are results for skewed (Zipfian) data, and guarantees for accuracy of merging multiple summarized streams.

Wednesday, March 25, 2009

CS: Bayesian Compressive Sensing using Laplace Priors, Distilled Sensing

I do not know if it is a clue of things to come but over the course of two days, two graduate students ( Derin Babacan and Jarvis Haupt ) have set up pages for their projects and wanted to make an announcement on Nuit Blanche. I am gladly obliging with the added word that I am very impressed to see students taking these initiatives. It used to be only Assistant Professors who would go for some self promotion but I am glad students are doing it as it shows their interest in providing the full strength of their ideas to the community.

First, Derin Babacan let me know that his research team has made available two papers and attendant source code on their Bayesian approach to compressive sensing. The two papers are:
Bayesian Compressive Sensing using Laplace Priors by S. Derin Babacan, Rafael Molina, and Aggelos Katsaggelos. The abstract reads:
In this paper we model the components of the compressive sensing (CS) problem, i.e., the signal acquisition process, the unknown signal coefficients and the model parameters for the signal and noise using the Bayesian framework. We utilize a hierarchical form of the Laplace prior to model sparsity of the unknown signal. We describe the relationship among a number of sparsity priors proposed in the literature, and show the advantages of the proposed model including its high degree of sparsity. Moreover, we show that some of the existing models are special cases of the proposed model. We present two algorithms resulting from our model; one global optimization algorithm and one constructive (greedy) algorithm designed for fast reconstruction useful in practical settings. Unlike most existing CS reconstruction methods, both algorithms are fully-automated, i.e., the unknown signal coefficients and all necessary parameters are estimated solely from the observation and therefore no user-intervention is needed. Additionally, the proposed algorithms provide estimates of the uncertainty of the reconstructions. We provide experimental results with synthetic 1D signals and images, and compare with the state-of-the-art CS reconstruction algorithms demonstrating the superior performance of the proposed approach.

and Fast Bayesian Compressive Sensing using Laplace Priors. by S. Derin Babacan, Rafael Molina, and Aggelos Katsaggelos. The abstract reads:
In this paper we model the components of the compressive sensing (CS) problem using the Bayesian framework by utilizing a hierarchical form of the Laplace prior to model sparsity of the unknown signal. This signal prior includes some of the existing models as special cases and achieves a high degree of sparsity. We develop a constructive (greedy) algorithm resulting from this formulation where necessary parameters are estimated solely from the observation and therefore no user-intervention is needed. We provide experimental results with synthetic 1D signals and images, and compare with the state-of-the-art CS reconstruction algorithms demonstrating the superior performance of the proposed approach.
The source code can be found on the project webpage.



The algorithm is compared with BCS, BP, OMP, StOMP with CFAR thresholding (denoted by FAR) and GPSR.


Jarvis Haupt put together a page describing (at a relatively high-level) what Distilled Sensing is and how it works:

Jarvis also let me know that the latest DS paper is Distilled Sensing: Selective Sampling for Sparse Signal Recovery by Jarvis Haupt, Rui Castro and Robert Nowak. The abstract reads:
A selective sampling procedure called distilled sensing (DS) is proposed, and shown to be an effective method for recovering sparse signals in noise. Based on the notion that it is often easier to rule out locations that do not contain signal than it is to directly identify non-zero signal components, DS is a sequential method that systematically focuses sensing resources towards the signal subspace. This adaptivity in sensing results in rather surprising gains in sparse signal recovery: dramatically weaker sparse signals can be recovered using DS compared with conventional non-adaptive sensing procedures.

Jarvis also told me that some toy model illustrating the DS algorithm will be available soon.
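In the meantime, here is my own rough caricature of the idea for nonnegative sparse signals (the actual DS budget allocation differs): repeatedly measure the surviving locations and discard those whose noisy observation is negative, so that the sensing budget concentrates on the locations that might carry signal.

    import numpy as np

    np.random.seed(5)
    n, s, steps, budget = 10000, 20, 5, 10000.0
    x = np.zeros(n)
    x[:s] = 3.0  # a few weak positive spikes

    alive = np.arange(n)
    for _ in range(steps):
        # Spend an equal share of the budget on each surviving location;
        # more budget per location means lower effective noise.
        per_loc = (budget / steps) / len(alive)
        y = x[alive] + np.random.randn(len(alive)) / np.sqrt(per_loc)
        alive = alive[y > 0]  # rule out locations that look empty

    print("locations kept:", len(alive),
          " true signal locations kept:", int(np.sum(alive < s)))

Most null locations are discarded after a few rounds while nearly all true spikes survive, which is exactly the distillation the abstract describes.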

Thank you Jarvis and Derin.

Tuesday, March 24, 2009

CS: Some open questions, Talk slides, Tutorials, Split Bregman Methods, Different coded apertures


Andriyan Suksmono asked me why, in a previous post, the image of Cassini showing Saturn and its rings had these Airy-like circles. So I went on Twitter and asked Phil Plait, who eventually pointed to an entry that Emily Lakdawalla wrote at the Planetary Society Blog on the 20th. So it looks like it is an artifact of dust on one of the filters of the camera. While one could look at it as an inconvenience, let us look at it as an opportunity: since the dust does not move, could we obtain some sort of superresolution, given that we also have plenty of images to do some sort of calibration? Any thoughts?

Other images from the same camera do not show these "defects"; how can this be explained?



Recently Deming Yuan (Vincent), a reader of this blog, wrote me the following:

Recently, a problem puzzled me: when some elements of the measuring matrix A are missing, can we still recover the sparse vector? If the answer is positive, does there exist an upper bound on the number of missing measurements? Is there any paper about this topic?
To which I replied:

I think finding all the elements is something difficult (see below). Finding some elements, I don't know. You may want to check the work of Dror Baron and Yaron Rachlin on the secrecy of cs measurements:

http://nuit-blanche.blogspot.com/2008/10/cs-secrecy-of-cs-measurements-counting.html

But in fact, I think I was somewhat off the mark giving that answer, since Dror and Yaron are not addressing that problem. Do any of you have a better answer for Deming/Vincent?

Here are two presentations of papers mentioned previously:
Also found a tutorial by Matthias Seeger entitled: Bayesian Modelling in Machine Learning: A Tutorial Review

Gabriel Peyre has compiled a long list of "numerical tours". Two of these tours involve Compressive Sensing, namely:

And finally two papers:

Split Bregman Methods and Frame Based Image Restoration by Jian-Feng Cai, Stanley Osher, and Zuowei Shen. The abstract reads:
Split Bregman methods introduced in [47] have been demonstrated as efficient tools to solve various problems arising in total variation (TV) norm minimizations of partial differential equation models for image restoration, such as image denoising and magnetic resonance imaging (MRI) reconstruction from sparse samples. In this paper, we prove that the split Bregman iterations, where the number of inner iterations is fixed to be one, converge. Furthermore, we show that these split Bregman iterations can be used to solve minimization problems arising from the analysis based approach for image restoration in the literature. Finally, we apply these split Bregman iterations to the analysis based image restoration approach whose analysis operator is derived from tight framelets constructed in [59]. This gives a set of new frame based image restoration algorithms that cover several topics in image restorations, such as image denoising, deblurring, inpainting and cartoon-texture image decomposition. Several numerical simulation results are provided.
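To make the flavor of these iterations concrete, here is a toy split Bregman loop for 1D total variation denoising, min_u 0.5‖u − f‖² + λ‖Du‖₁, with the number of inner iterations fixed to one as in the paper's convergence result. This is my own sketch, not the authors' code, and the parameters are arbitrary.

    import numpy as np

    def shrink(v, t):
        # Soft thresholding: the prox operator of t * |.|.
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def tv_denoise_split_bregman(f, lam=0.5, mu=1.0, iters=100):
        # Split Bregman for min_u 0.5||u - f||^2 + lam*||D u||_1 (1D).
        n = len(f)
        D = np.diff(np.eye(n), axis=0)   # forward difference operator
        Asys = np.eye(n) + mu * D.T @ D  # fixed u-step system matrix
        u, d, b = f.copy(), np.zeros(n - 1), np.zeros(n - 1)
        for _ in range(iters):
            u = np.linalg.solve(Asys, f + mu * D.T @ (d - b))  # u-step
            d = shrink(D @ u + b, lam / mu)                    # l1 shrinkage
            b = b + D @ u - d                                  # Bregman update
        return u

    np.random.seed(6)
    clean = np.repeat([0.0, 1.0, -0.5], 50)  # piecewise constant signal
    noisy = clean + 0.2 * np.random.randn(len(clean))
    denoised = tv_denoise_split_bregman(noisy)
    print("error std before/after:",
          np.std(noisy - clean), np.std(denoised - clean))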




What are Good Apertures for Defocus Deblurring? by Changyin Zhou and Shree Nayar. The abstract reads:
In recent years, with camera pixels shrinking in size, images are more likely to include defocused regions. In order to recover scene details from defocused regions, deblurring techniques must be applied. It is well known that the quality of a deblurred image is closely related to the defocus kernel, which is determined by the pattern of the aperture. The design of aperture patterns has been studied for decades in several fields, including optics, astronomy, computer vision, and computer graphics. However, previous attempts at designing apertures have been based on intuitive criteria related to the shape of the power spectrum of the aperture pattern. In this paper, we present a comprehensive framework for evaluating an aperture pattern based on the quality of deblurring. Our criterion explicitly accounts for the effects of image noise and the statistics of natural images. Based on our criterion, we have developed a genetic algorithm that converges very quickly to near-optimal aperture patterns. We have conducted extensive simulations and experiments to compare our apertures with previously proposed ones.

This is not compressive sensing per se; however, it is a coded aperture approach that uses many different configurations. The authors use a linear reconstruction algorithm (a very fast one at that) and there are some interesting notes about the PSF being location dependent. In the conclusion, one can read:

Diffraction is another important issue that requires further investigation. Our work, as well as previous works on coded apertures, have avoided having to deal with diffraction by simply using low-resolution aperture patterns. By explicitly modeling diffraction effects, we may be able to find even better aperture patterns for defocus deblurring.
This is interesting as it gets us back to the beginning of this entry.


The LinkedIn group on Compressive Sensing now has 135 members.


In a different area, forget about hyperspectral imagery: it looks like the human eye and the panchromatic Geo-Eye images displayed on Google Earth are enough, according to this Cnet article:

A thief in the UK used Google Earth to search the roofs of churches and schools to find lead roof tiles that he could then steal. A friend of the thief said, "He could tell the lead roofs apart on Google Earth, as they were slightly darker than normal."


Image Credit: NASA/JPL/Space Science Institute.

Monday, March 23, 2009

CS: Matrix completion and Compressive Sensing, SVT, IHT, GI, Astronomical Data Analysis and Sparsity, ML conf.

Seen in the comments section of Terry Tao's blog on the entry about Matrix completion we covered before:

Greg Kuperberg remarked :
The papers describe this matrix completion problem and algorithm as analogous to compressed sensing. But could compressed sensing be interpreted as a special case of matrix completion, namely for circulant matrices? (Or maybe generalized circulant, for some other abelian or even non-abelian group.) ....The point is that first, a circulant matrix is an expanded form of its first row, which is an arbitrary vector; and second, that you obtain the singular values of the circulant matrix by taking the Fourier transform of that vector. I don’t know enough details of compressed sensing to say this for sure, but this construction leads me to ask whether matrix completion is not just analogous to compressed sensing, but rather a generalization.
To what Terry Tao replied:
...Hmm, that’s a good observation. The compressed sensing problem of recovering a function with few Fourier coefficients from a sparse sample is indeed formally equivalent to recovering a circulant matrix with low rank from a sparse sample, and the recovery algorithm used in the former (L^1 minimisation of the Fourier transform) is a special case of the recovery algorithm used in the latter (minimisation of nuclear norm).
Then Greg Kuperberg follows up with:
There are many special cases of the matrix completion problem that could be called compressed sensing. As I said more briefly in the parentheses, suppose that the rows and columns of a matrix are indexed by elements of a group, and suppose that the entry M_{ab} only depends on a^{-1}b. Then this is a generalized kind of circulant matrix, particularly if the group is non-commutative, and the recovery algorithm is another kind of compressed sensing.

For this reason, matrix completion could just as well be called non-commutative compressed sensing. This is surely a useful viewpoint. Also this phrase “nuclear norm” makes this slightly less clear, since that norm is of course just the spectral ell^1 norm.

The next question of course is what theorems about compressed sensing are subsumed by the matrix completion version. If a result has been subsumed, that’s interesting for one reason; and if it hasn’t been, it’s also interesting because you can ask why not.

—-

Taking things in a different direction, when I just skim the moment estimates in the matrix completion paper, it reminds me of the moment version of the non-commutative central limit theorem. (I don’t mean the free probability version, although that one is also a moment story, but rather the standard quantum probability version.) This is known as the Giri-von-Waldenfels theorem. I think that it’s an interesting problem to work out non-moment versions of this result, i.e., versions that more resemble convergence in distribution. A key problem is to define what convergence in distribution should mean in the quantum case. (Again, in the case of free probability it’s easier, because the limiting distribution is bounded and thus determined by its moments.) In my arXiv paper “A tracial quantum central limit theorem”, I obtained a partial result in this direction.
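Greg's starting point is easy to check numerically: a circulant matrix is normal, its eigenvalues are the DFT of its generating vector, and therefore its singular values are the magnitudes of that DFT, so few active Fourier coefficients means low rank. A quick sanity check (my own):

    import numpy as np

    np.random.seed(7)
    n, k = 64, 3
    # A vector with few active Fourier coefficients (made Hermitian-symmetric
    # so that the time-domain vector is real): 2k nonzero coefficients total.
    spectrum = np.zeros(n, dtype=complex)
    idx = np.random.choice(np.arange(1, n // 2), k, replace=False)
    spectrum[idx] = np.random.randn(k) + 1j * np.random.randn(k)
    spectrum[-idx] = np.conj(spectrum[idx])
    row = np.fft.ifft(spectrum).real

    # Circulant matrix whose rows are the cyclic shifts of `row`.
    C = np.array([np.roll(row, i) for i in range(n)])

    sv = np.linalg.svd(C, compute_uv=False)
    print("nonzero Fourier coefficients:", int(np.sum(np.abs(spectrum) > 0)))
    print("rank of C:", int(np.sum(sv > 1e-10)))  # matches: 2k
    print("singular values = |FFT(row)|:",
          np.allclose(np.sort(sv), np.sort(np.abs(np.fft.fft(row)))))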


Interesting discussion! Since we are on the subject of Matrix Completion, the Singular Value Thresholding website is at:

The introduction reads:

Singular Value Thresholding (SVT) is an algorithm to minimize the nuclear norm of a matrix, subject to certain types of constraints. It has been successfully used in many matrix-completion problems (for more on the matrix completion problem, see Exact matrix completion via convex optimization by E.J. Candes and B. Recht). The SVT algorithm is described in the paper A singular value thresholding algorithm for matrix completion by J-F. Cai, E.J. Candes and Z. Shen.

The version of SVT available at this website is restricted to equality constraints of the form:

X_ij = M_ij, (i,j) in Ω.

However, the user can easily modify it to handle inequality constraints of the form:

|X_ij − M_ij| ≤ E_ij, (i,j) in Ω.

SVT can also work with more general constraints (e.g. any Lipschitz-continuous convex function), but for simplicity, this is not implemented in the code.

The SVT algorithm solves the following problem:

(P3) min τ‖X‖_* + ½‖X‖_F² subject to P_Ω(X) = P_Ω(M),

where the first norm is the nuclear norm (the sum of singular values) and the second norm is the Frobenius norm. Problem (P3), for large values of τ, is closely related to problem (P2):

(P2) min ‖X‖_* subject to P_Ω(X) = P_Ω(M).

Problem (P2) is often used as a proxy for problem (P1), since (P1) is not convex:

(P1) min rank(X) subject to P_Ω(X) = P_Ω(M).
The code is available from the site.
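For intuition, the core of the SVT iteration fits in a few lines. The following is a bare-bones sketch of the algorithm from the Cai-Candes-Shen paper, with parameter choices along the lines the paper suggests; it is not the optimized code from the site.

    import numpy as np

    def svt_complete(M, mask, tau=None, delta=None, iters=200):
        # Bare-bones Singular Value Thresholding for matrix completion.
        m, n = M.shape
        tau = tau or 5.0 * np.sqrt(m * n)
        delta = delta or 1.2 * M.size / mask.sum()  # step size ~ 1.2/p
        Y = np.zeros_like(M)
        for _ in range(iters):
            U, s, Vt = np.linalg.svd(Y, full_matrices=False)
            X = U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt  # shrink step
            Y = Y + delta * mask * (M - X)  # dual update on observed set
        return X

    np.random.seed(8)
    A = np.random.randn(100, 3) @ np.random.randn(3, 100)
    mask = np.random.rand(100, 100) < 0.4
    X = svt_complete(A, mask)
    print("relative error:", np.linalg.norm(X - A) / np.linalg.norm(A))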

While we are on the subject of Mathematics and Compressed Sensing, I nearly missed this most recent presentation/paper by Yves Meyer (with Basarab Matei as co-author) at the College de France entitled Simple Quasicrystals are Sets of Stable Sampling, a continuation of his very interesting A variant on the compressed sensing of Emmanuel Candes (I talked about it here, here and here in reference to MRI). I note that there doesn't seem to be a restriction on the positivity of the function.

In a different area, here is an issue of general interest: what are the parameters needed to make sure your reconstruction solver converges to the right solution? It looks like there is not an obvious choice for the Bregman solvers, as pointed out by Michael Friedlander when he noted an issue with the choice of parameters in this entry. More recently, an unknown blogger seems to have had similar issues with a Bregman algorithm as well. And so I am not overly surprised to see two papers at SPARS'09 (lists of papers are here and here) trying to deal with a similar issue in the IHT algorithm:

Following up on a previous post, I went back to the Technology Review arXiv blog and found a comment by KFC (the author of the blog) who cut and pasted an e-mail from Ori Katz, one of the investigators at Weizmann performing the ghost imaging study. I have seen those reconstructions with GPSR and they are indeed very nice. As I said in the previous post, I will feature the preprint when it comes out.


Found on Arxiv:

From some of the folks involved in getting compressive sensing into one of its first uses in a space mission coding scheme (Herschel), here is Astronomical Data Analysis and Sparsity: from Wavelets to Compressed Sensing by Jean-Luc Starck and Jerome Bobin. The abstract reads:

Wavelets have been used extensively for several years now in astronomy for many purposes, ranging from data filtering and deconvolution, to star and galaxy detection or cosmic ray removal. More recent sparse representations such as ridgelets or curvelets have also been proposed for the detection of anisotropic features such as cosmic strings in the cosmic microwave background. We review in this paper a range of methods based on sparsity that have been proposed for astronomical data analysis. We also discuss the impact of Compressed Sensing, the new sampling theory, in astronomy for collecting the data, transferring them to the earth or reconstructing an image from incomplete measurements.

Finally, there is a Call for participation for a Workshop on Sparse Methods for Music Audio that will be held in conjunction with the 26th International Conference on Machine Learning in Montreal, Quebec, June 14 - 18, 2009.




Credit photo: NASA, photograph of Paris taken by an astronaut on board the International Space Station on January 7th, 2008. See also astronaut photography of earth.

Friday, March 20, 2009

CS: More SPARS'09 papers.


Here are some other papers that will be presented at SPARS'09. Kudos to the organizers for making them available before the meeting. It looks like Saint-Malo is going to be the place to be April 6th through 9th. Enjoy the reading over the week-end or, for some of you, on the 10-hour flight to France.


Contextual Image Compression from Adaptive Sparse Data Representations by Laurent Demaret, Armin Iske, Wahid Khachabi. The abstract reads:

Natural images contain crucial information in sharp geometrical boundaries between objects. Therefore, their description by smooth isotropic function spaces (e.g. Sobolev or Besov spaces) is not sufficiently accurate. Moreover, methods known to be optimal for such isotropic spaces (tensor product wavelet decompositions) do not provide optimal nonlinear approximations for piecewise smooth bivariate functions. Among the geometry-based alternatives that were proposed during the last few years, adaptive thinning methods work with continuous piecewise affine functions on anisotropic triangulations to construct sparse representations for piecewise smooth bivariate functions. In this article, a customized compression method for coding the sparse data information, as output by adaptive thinning, is proposed. The compression method is based on contextual encoding of both the sparse data positions and their attached luminance values. To this end, the structural properties of the sparse data representation are essentially exploited. The resulting contextual image compression method of this article outperforms our previous methods (all relying on adaptive thinning) quite significantly. Moreover, our proposed compression method also outperforms JPEG2000 for selected natural images, at both low and middle bitrates, as this is supported by numerical examples in this article.




Compressive Domain Interference Cancellation by Mark A. Davenport, Petros T. Boufounos, Richard Baraniuk. The abstract reads:

In this paper we consider the scenario where a compressive sensing system acquires a signal of interest corrupted by an interfering signal. Under mild sparsity and orthogonality conditions on the signal and interference, we demonstrate that it is possible to efficiently filter out the interference from the compressive measurements in a manner that preserves our ability to recover the signal of interest. Specifically, we develop a filtering method that nulls out the interference while maintaining the restricted isometry property (RIP) on the set of potential signals of interest. The construction operates completely in the compressive domain and has computational complexity that is polynomial in the number of measurements.
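
For readers who want a feel for what such a nulling operator can look like, here is a minimal numpy sketch under my own assumptions (a Gaussian measurement matrix Phi and a known basis B for the interference subspace); none of the names or dimensions come from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 256, 64, 4   # ambient dimension, measurements, interference subspace dimension

Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # random measurement matrix
B = rng.standard_normal((N, K))                  # basis of the interference subspace

# Orthonormal basis for the image of the interference subspace under Phi.
Q, _ = np.linalg.qr(Phi @ B)

def null_interference(y):
    """Project the measurements onto the orthogonal complement of Phi * range(B)."""
    return y - Q @ (Q.T @ y)

# Sanity check: measurements of a pure interferer are annihilated (up to round-off).
interferer = B @ rng.standard_normal(K)
print(np.linalg.norm(null_interference(Phi @ interferer)))  # ~ 1e-14
```

The point of the construction, reflected in the final check, is that the filtering happens entirely on the compressive measurements, never in the ambient domain.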



Fast Algorithm for Sparse Signal Approximation using Multiple Additive Dictionaries by Ray Maleh, Daehyun Yoon, Anna C. Gilbert. The abstract reads:

There are several models for sparse approximation: one where a signal is a sparse linear combination of vectors over a redundant dictionary, and a second in which a collection of signals is a simultaneous sparse linear combination over a single dictionary. In this work, we interpolate between these two models to synthesize a single signal of interest from K highly incoherent dictionaries while enforcing simultaneous sparsity on the K resulting coefficient vectors. We define this as the parallel approximation problem, which arises quite naturally in many applications such as MRI parallel excitation using multiple transmission coils. We present an efficient algorithm to solve the parallel approximation problem called Parallel Orthogonal Matching Pursuit (POMP). We prove its correctness in a general setting and then discuss adaptations needed to make it suitable for use in an MRI parallel excitation setting. We then discuss parallel excitation in more detail and demonstrate how POMP solves the problem as accurately as, but much faster than, previously proposed convex optimization methods.
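
This is not the authors' POMP, but a toy sketch of one plausible reading of the simultaneous selection step: pick the atom index whose copies across the K dictionaries jointly correlate best with the residual, then re-fit by least squares over all selected atoms. Everything here is hypothetical:

```python
import numpy as np

def toy_pomp(x, dicts, T):
    """Greedy parallel approximation of a single signal x over K dictionaries,
    enforcing a shared support of size T across the K coefficient vectors."""
    support, residual, coef = [], x.copy(), None
    for _ in range(T):
        # Joint score of atom j: summed squared correlations across dictionaries.
        scores = sum(np.abs(D.T @ residual) ** 2 for D in dicts)
        scores[support] = -np.inf          # do not reselect an index
        support.append(int(np.argmax(scores)))
        # Orthogonal re-fit over the selected atoms from every dictionary.
        A = np.hstack([D[:, support] for D in dicts])
        coef, *_ = np.linalg.lstsq(A, x, rcond=None)
        residual = x - A @ coef
    return support, coef
```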



An upper bound on the estimation error of the sparsest solution of underdetermined linear systems by Massoud Babaie-Zadeh, G. Hosein Mohimani, Christian Jutten. The abstract reads:

Let A be an n X m matrix with m \gt n, and suppose the underdetermined linear system As = x admits a unique sparse solution s0 (i.e. it has a solution s0 for which ks0k0 \lt 1 2 spark(A)). Suppose that we have somehow a solution (sparse or non-sparse) ^s of this system as an estimation of the true sparsest solution s0. Is it possible to construct an upper bound on the estimation error k^s - s0k2 without knowing s0? The answer is positive, and in this paper we construct such a bound which, in the case A has Unique Representation Property (URP), depends on the smallest singular value of all n X n submatrices of A.



Algorithms for Multiple Basis Pursuit Denoising by Alain Rakotomamonjy. The abstract reads:

We address the problem of learning a joint sparse approximation of several signals over a dictionary. We pose the problem as a matrix approximation problem with a row-sparsity inducing penalization on the coefficient matrix. We propose a simple algorithm based on iterative shrinking for solving the problem. At the present time, such a problem is solved either by using Second-Order Cone programming or by means of the M-FOCUSS algorithm. While the former is computationally expensive, the latter is efficient but presents some pitfalls, such as the presence of fixed points, which are undesirable when solving a convex problem. By analyzing the optimality conditions of the problem, we derive a simple algorithm. The algorithm we propose is efficient and is guaranteed to converge to the optimal solution, up to a given tolerance. Furthermore, by means of a reweighted scheme, we are able to improve the sparsity of the solution.
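
The row-sparsity prox the abstract alludes to is simple enough to write down. Here is the standard iterative-shrinking template for min 0.5*||AX - B||_F^2 + lam*||X||_{2,1}; a generic sketch of the technique, not necessarily the paper's exact iteration:

```python
import numpy as np

def row_shrink(X, t):
    """Proximal operator of t * (sum of row l2 norms): rows are shrunk jointly."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    return X * np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)

def multiple_bpdn(A, B, lam, n_iter=500):
    """Iterative shrinkage for the multiple basis pursuit denoising problem."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    X = np.zeros((A.shape[1], B.shape[1]))
    for _ in range(n_iter):
        X = row_shrink(X - A.T @ (A @ X - B) / L, lam / L)
    return X
```

Because the prox acts on whole rows, an atom is either used by all signals or by none, which is exactly the joint-sparsity behavior the penalty is meant to induce.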


The Two Stage l1 Approach to the Compressed Sensing Problem by Stéphane Chrétien. The abstract reads:

This paper gives new results on the recovery of sparse signals using l1-norm minimization. We introduce a two-stage l1 algorithm equivalent to the first two iterations of the alternating l1 relaxation introduced in [1].


Local orthogonal greedy pursuits for scalable sparse approximation of large signals with shift-invariant dictionaries by Boris Mailhé, Rémi Gribonval, Frédéric Bimbot, Pierre Vandergheynst. The abstract reads:

We propose a way to increase the speed of greedy pursuit algorithms for scalable sparse signal approximation. It is designed for dictionaries with localized atoms, such as time-frequency dictionaries. When applied to OMP, our modification leads to an approximation as good as OMP while keeping the computation time close to that of MP. Numerical experiments with a large audio signal show that, compared to OMP and Gradient Pursuit, the proposed algorithm runs in over 500 times less time while leaving the approximation error almost unchanged.


A Shift Tolerant Dictionary Training Method by B. Vikrham Gowreesunker, Ahmed H. Tewfik. The abstract reads:

Traditional dictionary learning methods work by vectorizing long signals and training on frames of the data, thereby restricting the learning to time-localized atoms. We study a shift-tolerant approach to learning dictionaries, whereby the features are learned by training on shifted versions of the signal of interest. We propose an optimized Subspace Clustering learning method to accommodate the larger training set required for shift-tolerant training. We illustrate up to 50% improvement in sparsity on training data for the Subspace Clustering method, and for the KSVD method [1], with only a few integer shifts. We demonstrate improvement in sparsity for data outside the training set, and show that the improved sparsity translates into improved source separation of instantaneous audio mixtures.




General Perturbations in Compressed Sensing by Matthew A. Herman, Thomas Strohmer. The abstract reads:

We analyze the Basis Pursuit recovery of signals when observing K-sparse data with general perturbations (i.e., additive, as well as multiplicative noise). This completely perturbed model extends the previous work of Candès, Romberg and Tao on stable signal recovery from incomplete and inaccurate measurements. Our results show that, under suitable conditions, the stability of the recovered signal is limited by the noise level in the observation. Moreover, this accuracy is within a constant multiple of the best-case reconstruction using the technique of least squares.



Dictionary learning with spatio-spectral sparsity constraints by Yassir Moudden, Jérome Bobin, Jean-Luc Starck, Jalal Fadili. The abstract reads:

Devising efficient sparse decomposition algorithms in large redundant dictionaries has attracted much attention recently. However, choosing the right dictionary for a given data set remains an issue. An interesting approach is to learn the best dictionary from the data itself. The purpose of this contribution is to describe a new dictionary learning algorithm for multichannel data analysis purposes under specific assumptions. We assume a large number of contiguous channels as in so-called hyperspectral data. In this case it makes sense to consider a priori that the collected data exhibits sparse spectral signatures and sparse spatial morphologies in specified dictionaries of spectral and spatial waveforms. Building on GMCA, the proposed algorithm gives a practical way to enforce the additional a priori spectral sparsity constraint on the dictionary space. Numerical experiments with synthetic and real hyperspectral data illustrate the efficiency of the proposed algorithm.




Compressive Sensing Recovery of Spike Trains Using A Structured Sparsity Model by Chinmay Hegde, Marco F. Duarte, Volkan Cevher. The abstract reads:

The theory of Compressive Sensing (CS) exploits a well-known concept used in signal compression - sparsity - to design new, efficient techniques for signal acquisition. CS theory states that for a length-N signal x with sparsity level K, M = O(K log(N/K)) random linear projections of x are sufficient to robustly recover x in polynomial time. However, richer models are often applicable in real-world settings that impose additional structure on the sparse nonzero coefficients of x. Many such models can be succinctly described as a union of K-dimensional subspaces. In recent work, we have developed a general approach for the design and analysis of robust, efficient CS recovery algorithms that exploit such signal models with structured sparsity. We apply our framework to a new signal model which is motivated by neuronal spike trains. We model the firing process of a single Poisson neuron with absolute refractoriness using a union of subspaces. We then derive a bound on the number of random projections M needed for stable embedding of this signal model, and develop an algorithm that provably recovers any neuronal spike train from M measurements. Numerical experimental results demonstrate the benefits of our model-based approach compared to conventional CS recovery techniques.
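
As a stand-in illustration (mine, not the paper's algorithm), the structured approximation step for a refractory spike train could look like the greedy projection below, which one would substitute for the plain hard-thresholding step of an IHT- or CoSaMP-type recovery loop:

```python
import numpy as np

def refractory_threshold(x, K, delta):
    """Keep at most K spikes, no two closer than delta samples
    (a toy model of absolute refractoriness)."""
    out = np.zeros_like(x)
    kept = []
    for idx in np.argsort(-np.abs(x)):     # visit entries by decreasing magnitude
        if all(abs(int(idx) - j) >= delta for j in kept):
            kept.append(int(idx))
            if len(kept) == K:
                break
    out[kept] = x[kept]
    return out
```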




Reconstruction of Undersampled Cardiac Cine MRI data Using Compressive Sensing Principles by Pooria Zamani, Hamid Soltanian-Zadeh. The abstract reads:

Reduction of MRI data acquisition time is an important goal in the MRI field. Undersampling k-t space is a solution to reduce acquisition time. MR images may have sparse or compressible presentations in appropriate transform domains, such as wavelets. According to the Compressive Sensing (CS) theory, they can be recovered from randomly undersampled k-t space data that guarantees incoherency between the sparsifying transform and the sampling operator. However, pure random k-space sampling can be more time-consuming than full k-space sampling because of MRI principles. In this paper, we propose a new method based on hidden Markov models (HMM) to undersample k-space along the phase-encoding direction. To this end, we cluster extracted features of each k-space line by the fuzzy c-means (FCM) method and consider the resulting class labels as the states of a Markov chain. Then we train an HMM and find the transition matrix associated with each k-space line. We choose the lines having the most non-diagonal transition matrices and sample data along them. We reconstruct the image by minimizing the L1 norm subject to data consistency using a conjugate-gradient method, and use simulation to set the parameters of the proposed method (e.g., iteration number). We successfully apply our method to reconstruct undersampled cardiac cine MRI data, with and without a sparsifying transform. The use of fuzzy clustering as an intermediate tool for studying complicated phenomena with HMMs, the applicability to non-dynamic MRI data, and simplicity are the main features of the proposed method.


Greedy Deconvolution of Point-like Objects by Dirk A. Lorenz, Dennis Trede. The abstract reads:


The orthogonal matching pursuit (OMP) is an algorithm to solve sparse approximation problems. In [1] a sufficient condition for exact recovery is derived; in [2] the authors transfer it to noisy signals. We will use OMP for the reconstruction of an inverse problem, namely the deconvolution problem. In sparse approximation problems one often has to deal with the redundancy of a dictionary, i.e. the atoms are not linearly independent. However, one expects them to be approximately orthogonal, and this is quantified by incoherence. This idea cannot be transferred to ill-posed inverse problems since here the atoms are typically far from orthogonal: the ill-posedness of the (typically compact) operator causes the correlation of two distinct atoms to become large, i.e. two atoms can look much alike. Therefore in [3], [4] the authors derive a recovery condition which uses the kind of structure one assumes on the signal and works without the concept of coherence. In this paper we will transfer these results to noisy signals. For our source we assume that it consists of a superposition of point-like objects with an a-priori known distance. We will apply it exemplarily to Dirac peaks convolved with a Gaussian kernel, as used in mass spectrometry.
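
For intuition about the setting, here is plain OMP run on a dictionary of shifted Gaussian atoms, my own minimal rendition of the deconvolution dictionary the paper considers; the construction and parameter names are mine:

```python
import numpy as np

def gaussian_dictionary(n, sigma):
    """Columns are a Gaussian kernel centered at each of the n shifts."""
    t = np.arange(n)
    D = np.exp(-0.5 * ((t[:, None] - t[None, :]) / sigma) ** 2)
    return D / np.linalg.norm(D, axis=0)

def omp(y, D, k):
    """Plain orthogonal matching pursuit: k greedy selections with re-fitting."""
    support, residual, coef = [], y.copy(), None
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    return support, coef
```

With atoms this correlated, success hinges on the peaks being well separated, which is exactly the a-priori distance assumption in the abstract.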




Compressed sensing based compression of SAR raw data by Gabriel Rilling, Mike Davies, Bernard Mulgrew. The abstract reads:

Due to their noise-like features, SAR images are difficult to acquire with compressed sensing techniques. However, some parts of the images, typically associated to man-made structures, are compressible and we investigate two techniques exploiting that information to allow a compressive acquisition of the whole image. These techniques result in a significant enhancement of the image quality compared to classical compressed sensing. Moreover, compared to classical sampling and quantisation of the SAR raw data, they allow a significant reduction of bitrate with a limited increase of the distortion. However, their efficiency depends strongly on the presence of compressible parts in the image.



Circulant and Toeplitz Matrices in Compressed Sensing by Holger Rauhut. The abstract reads:
Compressed sensing seeks to recover a sparse vector from a small number of linear and non-adaptive measurements. While most work so far focuses on Gaussian or Bernoulli random measurements, we investigate the use of partial random circulant and Toeplitz matrices in connection with recovery by ℓ1-minimization. In contrast to recent work in this direction, we allow the use of an arbitrary subset of rows of a circulant or Toeplitz matrix. Our recovery result predicts that the necessary number of measurements to ensure sparse reconstruction by ℓ1-minimization with random partial circulant or Toeplitz matrices scales linearly in the sparsity up to a log-factor in the ambient dimension. This represents a significant improvement over previous recovery results for such matrices. As a main tool for the proofs we use a new version of the non-commutative Khintchine inequality.
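
A practical attraction of circulant measurements is that the matrix never needs to be formed: the action of a partial random circulant matrix is two FFTs and a row selection. A minimal sketch, with a Rademacher generating vector and a random row subset as my own arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 512, 128

c = rng.choice([-1.0, 1.0], size=n)          # generating vector of the circulant matrix
rows = rng.choice(n, size=m, replace=False)  # an arbitrary subset of m rows

def partial_circulant(x):
    """Apply the m-by-n partial circulant matrix in O(n log n) via the FFT."""
    full = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)).real  # circular convolution c * x
    return full[rows]

# Example: compressively measure a 10-sparse vector.
x = np.zeros(n)
x[rng.choice(n, size=10, replace=False)] = rng.standard_normal(10)
y = partial_circulant(x)   # hand y to an l1 solver for recovery
```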





A quantitative criterion for selecting the optimal sparse representation of dynamic cardiac data in compressed MRI by Muhammad Usman, Philipp G. Batchelor. The abstract reads:
One of the important performance factors in compressed sensing (CS) reconstructions is the level of sparsity in the sparse representation of the signal. The goal in CS is to find the sparsest representation of the underlying signal or image. However, for compressible or nearly sparse signals such as dynamic cardiac MR data, the quantification of sparsity is quite subjective due to issues such as reduced SNR or low contrast-to-noise ratio (CNR) in sparse domains such as x-f space or temporal difference domains. Hence, we need a criterion to compare different sparse representations of compressible signals. In this paper, we define a model that can fit the decay of practical compressible signals and, as an application, we verify that this model can be used as a basis for selecting the optimal sparse representation of dynamic cardiac MR data.
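
The decay model the abstract mentions suggests the generic power-law fit below; this is my illustration of the idea, not the authors' actual criterion:

```python
import numpy as np

def decay_exponent(coeffs):
    """Fit |c|_(k) ~ C * k**(-p) to the sorted transform coefficients.
    A larger p means faster decay, i.e. a more compressible representation."""
    mags = np.sort(np.abs(np.ravel(coeffs)))[::-1]
    mags = mags[mags > 0]
    k = np.arange(1, mags.size + 1)
    slope, _ = np.polyfit(np.log(k), np.log(mags), 1)
    return -slope
```

One could then compare this exponent across candidate sparse domains (x-f space, temporal differences, wavelets) as a rough proxy for which representation is effectively sparsest.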



Structured Sparsity: from Mixed Norms to Structured Shrinkage by Matthieu Kowalski, Bruno Torrésani. The abstract reads:

Sparse and structured signal expansions on dictionaries can be obtained through explicit modeling in the coefficient domain. The originality of the present contribution lies in the construction and the study of generalized shrinkage operators, whose goal is to identify structured significance maps. These generalize the Group LASSO and the previously introduced Elitist LASSO by introducing more flexibility in the coefficient domain modeling. We study experimentally the performance of the corresponding shrinkage operators in terms of significance map estimation in the orthogonal basis case. We also study their performance in the overcomplete situation, using iterative thresholding.




Sparse filter models for solving permutation indeterminacy in convolutive blind source separation by Prasad Sudhakar, Rémi Gribonval. The abstract reads:

Frequency-domain methods for estimating mixing filters in convolutive blind source separation (BSS) suffer from permutation and scaling indeterminacies in sub-bands. Solving these indeterminacies is critical to such BSS systems. In this paper, we propose to use sparse filter models to tackle the permutation problem. It will be shown that the ℓ1-norm of the filter matrix increases with permutations and, with this motivation, an algorithm is then presented which aims to solve the permutations in the absence of any scaling. Experimental evidence showing the behaviour of the ℓ1-norm of the filter matrix under sub-band permutations is presented. Then, the performance of our proposed algorithm is presented, in both noiseless and noisy cases.



Minimization of a sparsity promoting criterion for the recovery of complex-valued signals
by Lotfi Chaâri, Jean-Christophe Pesquet, Amel Benazza-Benyahia, Philippe Ciuciu. The abstract reads:

Ill-conditioned inverse problems are often encountered in signal/image processing. In this respect, convex objective functions including a sparsity promoting penalty term can be used. However, most of the existing optimization algorithms were developed for real-valued signals. In this paper, we are interested in complex-valued data. More precisely, we consider a class of penalty functions for which the associated regularized minimization problem can be solved numerically by a forward-backward algorithm. Functions within this class can be used to promote the sparsity of the solution. An application to parallel Magnetic Resonance Imaging (pMRI) reconstruction where complex-valued images are reconstructed is considered.
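
The best-known member of such a penalty class is the complex ℓ1 norm, whose proximal operator shrinks the modulus of each entry while preserving its phase. A minimal forward-backward sketch under that particular choice (the paper treats a broader class of penalties):

```python
import numpy as np

def complex_soft_threshold(z, t):
    """Prox of t*||z||_1 for complex z: shrink the modulus, keep the phase."""
    mag = np.abs(z)
    return np.where(mag > t, (1.0 - t / np.maximum(mag, 1e-12)) * z, 0.0)

def forward_backward(A, y, lam, n_iter=300):
    """Forward-backward iterations for min 0.5*||A z - y||^2 + lam*||z||_1, z complex."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # step below 2/L guarantees convergence
    z = np.zeros(A.shape[1], dtype=complex)
    for _ in range(n_iter):
        z = complex_soft_threshold(z - step * (A.conj().T @ (A @ z - y)), step * lam)
    return z
```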



A Compressed Sensing Approach for Biological Microscopy Image Denoising
by Marcio M. Marim, Elsa D. Angelini, Jean-Christophe Olivo-Marin. The abstract reads:


Compressed Sensing (CS) provides a new framework for signal sampling, exploiting redundancy and sparsity in incoherent bases. For images with homogeneous objects and background, CS provides an optimal reconstruction framework from a set of random projections in the Fourier domain, while constraining bounded variations in the spatial domain. In this paper, we propose a CS-based method to simultaneously acquire and denoise data, based on statistical properties of the CS optimality, signal modeling and characteristics of noise reconstruction. Our approach has several advantages over traditional denoising methods, since it can under-sample, recover and denoise images simultaneously. We demonstrate with simulated and practical experiments on fluorescence images that we obtain images with similar or increased SNR even with reduced exposure times. Such results open the door to new mathematical imaging protocols, offering the opportunity to reduce exposure time along with photo-toxicity and photo-bleaching, and to assist biological applications relying on fluorescence microscopy.



Phase Transitions for Restricted Isometry Properties by Jeffrey D. Blanchard, Coralia Cartis, Jared Tanner. The abstract reads:

Currently there is no framework for the transparent comparison of sparse approximation recoverability results derived using different methods of analysis. We cast some of the most recent recoverability results for ℓ1-regularization in terms of the phase transition framework advocated by Donoho. To allow for quantitative comparisons across different methods of analysis, a particular random matrix ensemble must be selected; here we focus on Gaussian random matrices. Methods of analysis considered include the Restricted Isometry Property of Candès and Tao, the geometric covering arguments of Rudelson and Vershynin, and the convex polytope formulations of Donoho.



Sparsity Measure and the Detection of Significant Data by Abdourrahmane Atto, Dominique Pastor, Grégoire Mercier. The abstract reads:

The paper provides a formal description of the sparsity of a representation via the detection thresholds. The formalism proposed derives from theoretical results about the detection of significant coefficients when data are observed in presence of additive white Gaussian noise. The detection thresholds depend on two parameters describing the sparsity degree for the representation of a signal. The standard universal and minimax thresholds correspond to detection thresholds associated with different sparsity degrees.




Sparsity hypotheses for robust estimation of the noise standard deviation in various signal processing applications by Dominique Pastor, Francois-Xavier Socheleau, Abdeldjalil Aïssa-El-Bey. The abstract reads:

This paper concerns the problem of estimating the noise standard deviation in different signal processing applications. The presented estimator derives from recent results in robust statistics based on sparsity hypotheses. More specifically, these theoretical results make the link between a standard problem in robust statistics (the estimation of the noise standard deviation in presence of outliers) and sparsity hypotheses. The estimator derived from these theoretical results can be applied to different signal processing applications where estimation of the noise standard deviation is crucial. In the present paper, we address speech denoising and Orthogonal Frequency Division Multiple Access (OFDMA). A relevant application should also be Communication Electronic Support (CES). For such applications, the algorithm proposed is a relevant alternative to the median absolute deviation (MAD) estimator.
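
For reference, the MAD baseline mentioned at the end is a one-liner; a sketch assuming the observations are a sparse signal plus Gaussian noise, so that the median is dominated by the noise:

```python
import numpy as np

def mad_sigma(x):
    """Median-absolute-deviation estimate of the noise standard deviation.
    0.6745 is the MAD of a standard Gaussian, which makes the estimator consistent."""
    x = np.ravel(x)
    return np.median(np.abs(x - np.median(x))) / 0.6745
```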



Sparse Super-Resolution with Space Matching Pursuits by Guoshen Yu, Stéphane Mallat. The abstract reads:

Super-resolution image zooming is possible when the image has some geometric regularity. Directional interpolation algorithms are used in industry, with ad-hoc regularity measurements. Sparse signal decompositions in dictionaries of curvelets or bandlets find indirectly the directions of regularity by optimizing the sparsity. However, super-resolution interpolations in such dictionaries do not outperform cubic spline interpolations. It is necessary to further constrain the sparse representation, which is done through projections onto structured vector spaces. A space matching pursuit algorithm is introduced to compute image decompositions over spaces of bandlets, from which a super-resolution image zooming is derived. Numerical experiments illustrate the efficiency of this super-resolution procedure compared to cubic spline interpolations.


How to use the iterative hard thresholding algorithm by Thomas Blumensath, Michael E Davies. The abstract reads:

Several computationally efficient algorithms have been shown to offer near optimal recovery of sparse signals from a small number of linear measurements. However, whilst many of the methods have similar guarantees whenever the measurements satisfy the so called restricted isometry property, empirical performance of the methods can vary significantly in a regime in which this condition is not satisfied. We here modify the Iterative Hard Thresholding algorithm by including an automatic step-size calculation. This makes the method independent from an arbitrary scaling of the measurement system and leads to a method that shows state of the art empirical performance. What is more, theoretical guarantees derived for the unmodified algorithm carry over to the new method with only minor changes.
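
As I read the abstract, the automatic step size is chosen, at each iteration, as the exact line-search step for the gradient restricted to the current support. A bare-bones sketch of that idea, omitting the acceptance/backtracking safeguard the full method would need:

```python
import numpy as np

def hard_threshold(x, K):
    """Keep the K largest-magnitude entries, zero out the rest."""
    out = np.zeros_like(x)
    idx = np.argsort(-np.abs(x))[:K]
    out[idx] = x[idx]
    return out

def normalized_iht(Phi, y, K, n_iter=100):
    """Iterative hard thresholding with a data-driven step size, so the method
    is invariant to an arbitrary rescaling of the measurement system."""
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        g = Phi.T @ (y - Phi @ x)
        # Restrict the gradient to the current support (or its best-K guess).
        S = np.flatnonzero(x) if np.any(x) else np.argsort(-np.abs(g))[:K]
        gS = np.zeros_like(g)
        gS[S] = g[S]
        v = Phi @ gS
        mu = (gS @ gS) / max(v @ v, 1e-12)   # optimal step on the support
        x = hard_threshold(x + mu * g, K)
    return x
```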



Freely Available, Optimally Tuned Iterative Thresholding Algorithms for Compressed Sensing by Arian Maleki, David L. Donoho. The abstract reads:

We conducted an extensive computational experiment, lasting multiple CPU-years, to optimally select parameters for important classes of algorithms for finding sparse solutions of underdetermined systems of linear equations. We make the optimally tuned implementations freely available at sparselab.stanford.edu; they can be used 'out of the box' with no user input: it is not necessary to select thresholds or know the likely degree of sparsity. Our class of algorithms includes iterative hard and soft thresholding with or without relaxation, as well as CoSaMP, Subspace Pursuit and some natural extensions. As a result, our optimally tuned algorithms dominate such proposals. Our notion of optimality is defined in terms of phase transitions, i.e. we maximize the number of nonzeros at which the algorithm can successfully operate. We show that the phase transition is a well-defined quantity with our suite of random underdetermined linear systems. Our tuning gives the highest transition possible within each given class of algorithms. We verify by extensive computation the robustness of our recommendations to the amplitude distribution of the nonzero coefficients as well as the matrix ensemble defining the underdetermined system. Several specific findings are established. (a) For all algorithms, the worst amplitude distribution for nonzeros is generally the constant-amplitude random-sign distribution, where all nonzeros are the same size. (b) Various random matrix ensembles give the same phase transitions; random partial isometries give different transitions and require different tuning. (c) Optimally tuned Subspace Pursuit dominates optimally tuned CoSaMP, particularly so when the system is almost square. (d) For randomly decimated partial Fourier transform sampling, our recommended Iterative Soft Thresholding gives extremely good performance, making more complex algorithms like CoSaMP and Subspace Pursuit relatively pointless.




Exploiting the Sparsity of the Sinusoidal Model Using Compressed Sensing for Audio Coding
by Anthony Griffin, Christos Tzagkarakis, Toni Hirvonen, Athanasios Mouchtaris, Panagiotis Tsakalides. The abstract reads:

Audio signals are represented via the sinusoidal model as a summation of a small number of sinusoids. This approach introduces sparsity to the audio signals in the frequency domain, which is exploited in this paper by applying Compressed Sensing (CS) to this sparse representation. CS allows sampling of signals at a much lower rate than the Nyquist rate if they are sparse in some basis. In this manner, a novel sinusoidal audio coding approach is proposed, which differs in philosophy from current state-of-the-art methods that encode the sinusoidal parameters (amplitude, frequency, phase) directly. It is shown here that encouraging results can be obtained by this approach, although they remain at this point inferior to the state-of-the-art. Several practical implementation issues are discussed, such as quantization of the CS samples, frequency resolution vs. coding gain, error checking, etc., and directions for future research in this framework are proposed.




Sparse representations versus the matched filter by F. Martinelli, J.L. Sanz. The abstract reads:

We have considered the problem of detection and estimation of compact sources immersed in a background plus instrumental noise. Sparse approximation of signals deals with the problem of finding a representation of a signal as a linear combination of a small number of elements from a set of signals called a dictionary. The estimation of the signal leads to a minimization problem for the amplitudes associated with the sources. We have developed a methodology that minimizes the ℓp-norm with a constraint on the goodness-of-fit, and we have compared different norms against the matched filter.
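
For context, the matched filter they benchmark against reduces to a (noise-whitened) correlation with the source template; a minimal sketch with hypothetical names:

```python
import numpy as np

def matched_filter_amplitude(x, template, noise_cov=None):
    """Classical matched-filter estimate of a compact source's amplitude."""
    if noise_cov is None:
        w = template                               # white-noise case
    else:
        w = np.linalg.solve(noise_cov, template)   # whiten by the noise covariance
    return (w @ x) / (w @ template)
```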



Image Restoration with Compound Regularization Using a Bregman Iterative Algorithm
by Manya V. Afonso, José M. Bioucas-Dias, Mario A. T. Figueiredo. The abstract reads:

Some imaging inverse problems may require the solution to simultaneously exhibit properties that are not enforceable by a single regularizer. One way to attain this goal is to use a linear combination of regularizers, thus encouraging the solution to simultaneously exhibit the characteristics enforced by each individual regularizer. In this paper, we address the optimization problem resulting from this type of compound regularization using the split Bregman iterative method. The resulting algorithm only requires the ability to efficiently compute the denoising operator associated with each involved regularizer. Convergence is guaranteed by the theory behind the Bregman iterative approach to solving constrained optimization problems. In experiments with images that are simultaneously sparse and piecewise smooth, the proposed algorithm successfully solves the deconvolution problem with a compound regularizer that is the linear combination of the ℓ1 and total variation (TV) regularizers. The lowest MSE obtained with the (ℓ1+TV) regularizer is lower than that obtained with TV or ℓ1 alone, for any value of the corresponding regularization parameters.

Sparse Coding and Automatic Relevance Determination for Multi-way models by Morten Mørup, Lars Kai Hansen. The abstract reads:

Multi-way modeling has become an important tool in the analysis of large scale multi-modal data. An important class of multi-way models is given by the Tucker model, which decomposes the data into components pertaining to each modality as well as a core array indicating how the components of the various modalities interact. Unfortunately, the Tucker model is not unique. Furthermore, establishing the adequate model order is difficult as the number of components is specified for each mode separately. Previously, rotation criteria such as VARIMAX have been used to resolve the non-uniqueness of the Tucker representation [7]. Furthermore, all potential models have been exhaustively evaluated to estimate the adequate number of components of each mode. We demonstrate how sparse coding can prune excess components and resolve the non-uniqueness of the Tucker model, while Automatic Relevance Determination in Bayesian learning forms a framework to learn the adequate degree of sparsity to impose. On a wide range of multi-way data sets the proposed method is demonstrated to successfully prune excess components, thereby establishing the model order. Furthermore, the non-uniqueness of the Tucker model is resolved, since among potential models the model giving the sparsest representation, as measured by the sparse coding regularization, is attained. The approach readily generalizes to regular sparse coding as well as the CANDECOMP/PARAFAC model, as both are special cases of the Tucker model.


Credit Photo: NASA. View of the Earth from the International Space Station.