
Thursday, March 26, 2009

CS: This week's long post. Some Talks, a competition, SPARS'09 and other papers.


A small foreword...

For those of you who want to receive entries of this blog directly in your mailbox, you probably want to add your email address in the right-side section of this blog. If you are seeing this through a feed reader, you may want to come to the site directly. If you sign up, please note that there is no way for me to prevent these e-mails from reaching you when I blog on subject areas other than compressive sensing. This is one of the reasons I have decreased the blogging on other topics a little, though I feel bad about it since the initial intent of the blog was not CS in particular. So to summarize, if the title of the e-mail or of the entry in the feed reader does not start with CS, then you may want to disregard it altogether. I am also generally avoiding what I would consider sensitive subjects, as the readership of this blog is very widespread.
Yet I cannot guarantee that I know what those subjects are in the different places where this blog is read. For what it's worth, we now have 139 members in the LinkedIn Compressive Sensing Group; this is more than the number of contacts I have on LinkedIn, with very little overlap between the two. We also have 278 readers of the blog using different feed readers, while 72 people receive every entry by e-mail. As for people coming directly to the blog every day, that number is about 370.

With the rapid escalation in the number of papers and publications, I have become limited in the number of comments I can make on each of them; therefore, I am going to try to give more context at a higher level in the future.

First, Fredo Durand will give a talk on computational photography at Ecole Normale Superieure, Paris, on Friday (Thanks Laurent!). While we are on the subject of Compressive Sensing Hardware, for those of you interested in using a GPU for some reconstruction or even acquisition, here is GPU Programming For The Rest Of Us.

Hoping to make $10,000 again, one may want to check the prize for the competition on Quantitative Single-Neuron Modeling 2009. The introduction says the following:
How well are single-cell properties reproduced by the present-day neuronal models? Recently, several labs have approached this question by assessing the quality of neuron models with respect to spike timing prediction or characteristic features of the voltage trace. So far, every modeler used his own preferred performance measure on his own data set. The Quantitative Single-Neuron Modeling Competition offers a coherent framework to compare neuronal models and fitting methods.
Participation
Participants can submit their prediction to one or more of the challenges A, B, C or D. Anyone can participate and any type of model is accepted.

Goal
This competition is an opportunity to bridge the gap between experimentalists and modelers. The Quantitative Single-Neuron Modeling Competition is an invitation to compare your methods and models to those of other people in the field. Past competitions remain available on this website and will serve as benchmarks (Jolivet et al 2008a, Jolivet et al 2008b). Prizes are given to outstanding contributions.
Prizes
The prizes are given according to the “Rules and Conditions” specified below.

The INCF Prize (10 000 CHF) is given to the participant(s) providing a significant win in at least 2 of the four challenges.
The FACETS Award (500 CHF) is shared between all participants providing a shared win in a single challenge.
Important Dates
Submissions via this website will open June 25, 2009.
The submission deadline is August 25, 2009.
The results will be presented at the INCF Congress of Neuroinformatics in Pilsen, September 6-8, 2009.


For this entry, we have several papers from SPARS'09 as well as other papers collected from the web (they are not in any particular order). Some of them may look like they are not about CS, but they fit into the larger set of papers devoted to manifold-based sensing and the attendant classification tasks:


A presentation by Matthias Seeger entitled Approximate Bayesian Inference and Measurement Design Optimization for Low-Level Computer Vision and Imaging

And the papers are:

Computational Methods for Sparse Solution of Linear Inverse Problems by Joel Tropp and Stephen Wright. The abstract reads:
In sparse approximation problems, the goal is to find an approximate representation of a target signal using a linear combination of a few elementary signals drawn from a fixed collection. This paper surveys the major algorithms that are used for solving sparse approximation problems in practice. Specific attention is paid to computational issues, to the circumstances in which individual methods tend to perform well, and to the theoretical guarantees available. Many fundamental questions in electrical engineering, statistics, and applied mathematics can be posed as sparse approximation problems, which makes the algorithms discussed in this paper versatile tools with a wealth of applications.
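As a concrete illustration of one family of methods covered by such surveys, here is a minimal Orthogonal Matching Pursuit sketch in Python; the problem sizes and stopping rule are illustrative assumptions of mine, not taken from the paper.

```python
# A minimal sketch of Orthogonal Matching Pursuit (OMP), one of the greedy
# algorithms surveyed in the paper. Dimensions and sparsity level are
# illustrative assumptions.
import numpy as np

def omp(A, y, k):
    """Greedily select k columns of A to approximate y."""
    m, n = A.shape
    residual = y.copy()
    support = []
    x = np.zeros(n)
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        j = np.argmax(np.abs(A.T @ residual))
        if j not in support:
            support.append(j)
        # Re-fit the coefficients on the chosen support by least squares.
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs
    x[support] = coeffs
    return x

# Tiny synthetic test: a 3-sparse vector measured by a random Gaussian matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 100))
A /= np.linalg.norm(A, axis=0)
x_true = np.zeros(100)
x_true[[5, 17, 42]] = [1.0, -2.0, 0.5]
y = A @ x_true
x_hat = omp(A, y, k=3)
print("recovered support:", np.nonzero(x_hat)[0])
```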

Multi-Manifold Semi-Supervised Learning by Andrew Goldberg, Xiaojin Zhu, Aarti Singh, Zhiting Xu, Robert Nowak. The abstract reads:
We study semi-supervised learning when the data consists of multiple intersecting manifolds. We give a finite sample analysis to quantify the potential gain of using unlabeled data in this multi-manifold setting. We then propose a semi-supervised learning algorithm that separates different manifolds into decision sets, and performs supervised learning within each set. Our algorithm involves a novel application of Hellinger distance and size-constrained spectral clustering. Experiments demonstrate the benefit of our multimanifold semi-supervised learning approach.

SPIR-iT: Iterative Self-consistent Parallel Imaging Reconstruction from Arbitrary k-Space by Michael Lustig and John Pauly. The abstract reads:
A new approach to autocalibrating, coil-by-coil parallel imaging reconstruction is presented. It is a generalized reconstruction framework based on self consistency. The reconstruction problem is formulated as an optimization that yields the most consistent solution with the calibration and acquisition data. The approach is general and can accurately reconstruct images from arbitrary k-space sampling patterns. The formulation can flexibly incorporate additional image priors such as off resonance correction and regularization terms. Several iterative strategies to solve the posed reconstruction problem in both image and k-space domain are presented. These are based on projection over convex sets (POCS) and conjugate gradient (CG) algorithms. Phantom and in-vivo studies demonstrate efficient reconstructions from undersampled Cartesian and spiral trajectories. Reconstructions that include off-resonance correction and non-linear regularization are also demonstrated.
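For readers unfamiliar with POCS-type reconstructions, here is a toy single-coil sketch that alternates k-space data consistency with image-domain soft thresholding. It only illustrates the projection idea under an assumed image-domain sparsity; it is not Lustig and Pauly's SPIRiT, and the threshold and iteration count are placeholders.

```python
# A toy projection-onto-convex-sets (POCS) style loop for undersampled k-space
# reconstruction. Generic illustration only, not the SPIRiT algorithm.
import numpy as np

def pocs_recon(kspace, mask, n_iter=50, lam=0.01):
    """kspace: zero-filled measured k-space; mask: boolean sampling pattern."""
    img = np.fft.ifft2(kspace)
    for _ in range(n_iter):
        # Sparsity step: soft-threshold the image magnitude (assumed sparsity).
        mag = np.abs(img)
        img = np.maximum(mag - lam, 0.0) * np.exp(1j * np.angle(img))
        # Data-consistency step: re-insert the acquired k-space samples.
        k = np.fft.fft2(img)
        k[mask] = kspace[mask]
        img = np.fft.ifft2(k)
    return img
```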



Restricted Isometry Property and lp sparse recovery failure by Mike Davies, Rémi Gribonval. The abstract reads:
This paper considers conditions based on the restricted isometry constant (RIC) under which the solution of an underdetermined linear system with minimal ℓp norm, 0 < p ≤ 1, is guaranteed to be also the sparsest one. Specifically, matrices are identified that have RIC δ_2m arbitrarily close to 1/√2 ≈ 0.707 where sparse recovery with p = 1 fails for at least one m-sparse vector. This indicates that there is limited room for improvement over the best known positive results of Foucart and Lai, which guarantee that ℓ1-minimisation recovers all m-sparse vectors for any matrix with δ_2m < 2(3 − √2)/7 ≈ 0.4531. We also present results that show, compared to ℓ1 minimisation, ℓp minimisation recovery failure is only slightly delayed in terms of the RIC values. Furthermore, when ℓp optimisation is attempted using an iterative reweighted ℓp scheme, failure can still occur for δ_2m arbitrarily close to 1/√2.
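To make the "iterative reweighted ℓp scheme" mentioned at the end of the abstract concrete, here is a minimal iteratively reweighted least squares (IRLS) sketch for ℓp minimisation subject to Ax = y. The smoothing schedule and iteration count are my own illustrative assumptions, not the authors' setup.

```python
# A minimal IRLS sketch for lp minimisation (0 < p <= 1) subject to Ax = y.
# Illustrative only; parameters are assumptions, not taken from the paper.
import numpy as np

def irls_lp(A, y, p=0.5, n_iter=100, eps=1.0):
    x = np.linalg.pinv(A) @ y            # minimum-l2 starting point
    for _ in range(n_iter):
        # Weights approximating the lp penalty; eps keeps them finite.
        w = (x**2 + eps) ** (p / 2.0 - 1.0)
        Qi = 1.0 / w                      # Q = W^{-1}
        AQ = A * Qi                       # equals A @ diag(Qi)
        x = Qi * (A.T @ np.linalg.solve(AQ @ A.T, y))
        eps = max(eps / 10.0, 1e-8)       # gradually tighten the smoothing
    return x
```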

I note the argument given in the third section, entitled "III. RIP REST IN PEACE?"
(which echoes several entries such as this one). While the KGG argument of Wotao Yin and Yin Zhang (ref 18) seems interesting yet is NP-hard, there are now some tools to take a stab at the nullspace property:
both programs are prominently listed in the measurement matrix section of the Big Picture.
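Since exhaustive verification of the nullspace property is combinatorial (hence the NP-hardness mentioned above), a cheap and admittedly crude Monte Carlo probe can at least look for violations. The sketch below is only an assumed illustration of the idea, not one of the programs listed in the Big Picture.

```python
# Crude Monte Carlo probe of the null-space property of order k: for h in
# null(A), h != 0, the k largest-magnitude entries should carry less than half
# of ||h||_1. Random sampling can only detect violations, not certify the NSP.
import numpy as np
from scipy.linalg import null_space

def probe_nullspace_property(A, k, n_trials=10000, rng=None):
    rng = rng or np.random.default_rng(0)
    N = null_space(A)                    # orthonormal basis of null(A)
    worst = 0.0
    for _ in range(n_trials):
        h = N @ rng.standard_normal(N.shape[1])
        a = np.sort(np.abs(h))[::-1]
        ratio = a[:k].sum() / a.sum()    # worst size-k support = top-k entries
        worst = max(worst, ratio)
    return worst                         # NSP of order k needs this below 1/2
```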

Here is an important paper for any practitioner: Basis Identification from Random Sparse Samples by Rémi Gribonval and Karin Schnass. The abstract reads:
This article treats the problem of learning a dictionary providing sparse representations for a given signal class, via ℓ1-minimisation. The problem is to identify a dictionary Φ from a set of training samples Y knowing that Y = ΦX for some coefficient matrix X. Using a characterisation of coefficient matrices X that allow to recover any basis as a local minimum of an ℓ1-minimisation problem, it is shown that certain types of sparse random coefficient matrices will ensure local identifiability of the basis with high probability. The typically sufficient number of training samples grows up to a logarithmic factor linearly with the signal dimension.

In this paper we introduce the multiple-description ℓ1-compression problem: minimize ‖z1‖1 + λ‖z2‖1 subject to the three distortion constraints ‖A1z1 − y‖2 ≤ δ1, ‖A2z2 − y‖2 ≤ δ2, and ‖(1/2)(A1z1 + A2z2) − y‖2 ≤ γ. This problem formulation is interesting in, e.g., ad-hoc networks where packets can be lost. If a description (z2) is lost in the network and only one description is received (z1), it is still possible to decode and obtain a signal quality equal or better than described by the parameter δ1 (and vice versa). If both descriptions are received, the quality is determined by the parameter γ. This problem is difficult to solve using first-order projection methods due to the intersecting second-order cones. However, we show that by recasting the problem into its dual form, one can avoid the difficulties due to conflicting fidelity constraints. We then show that efficient first-order ℓ1-compression methods are applicable, which makes it possible to solve large scale problems, e.g., multiple-description ℓ1-compression of video.
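For small instances, the primal problem as stated can be written almost verbatim with a generic conic solver; the sketch below (using CVXPY, with placeholder sizes and parameter values of my own) is only meant to make the formulation concrete, whereas the paper's contribution is the dual first-order machinery that scales to large problems.

```python
# Direct transcription of the multiple-description l1-compression problem.
# Sizes and parameters (lam, d1, d2, gam) are placeholders.
import numpy as np
import cvxpy as cp

m, n = 64, 256
rng = np.random.default_rng(0)
A1, A2 = rng.standard_normal((m, n)), rng.standard_normal((m, n))
y = rng.standard_normal(m)
lam, d1, d2, gam = 1.0, 0.5, 0.5, 0.2

z1, z2 = cp.Variable(n), cp.Variable(n)
objective = cp.Minimize(cp.norm1(z1) + lam * cp.norm1(z2))
constraints = [
    cp.norm(A1 @ z1 - y, 2) <= d1,                     # quality if only z1 arrives
    cp.norm(A2 @ z2 - y, 2) <= d2,                     # quality if only z2 arrives
    cp.norm(0.5 * (A1 @ z1 + A2 @ z2) - y, 2) <= gam,  # quality if both arrive
]
cp.Problem(objective, constraints).solve()
```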

Distributed Compressed Sensing for Sensor Networks, Using p-thresholding by Mohammad Golbabaee, Pierre Vandergheynst. The abstract reads:
Distributed compressed sensing is the extension of compressed sampling (CS) to sensor networks. The idea is to design a CS joint decoding scheme at the base station which exploits the inter-sensor correlations, in order to recover the whole observations from very few number of random measurements per node. Here, the questions are about modeling the correlations, design of the joint recovery algorithms, analysis of those algorithms, the comparison between the performance of the joint and separate decoder and finally determining how optimal they are.

Average performance of the sparsest approximation in a dictionary by François Malgouyres and Mila Nikolova. The abstract reads:
Given data d ∈ R^N, we consider its representation u* involving the least number of non-zero elements (denoted by ℓ0(u*)) using a dictionary A (represented by a matrix) under the constraint ‖Au − d‖ ≤ τ, for τ > 0 and a norm ‖·‖. This (nonconvex) optimization problem leads to the sparsest approximation of d. We assume that data d are uniformly distributed in θB_fd(1) where θ > 0 and B_fd(1) is the unit ball for a norm fd. Our main result is to estimate the probability that the data d give rise to a K-sparse solution u*: we prove that P(ℓ0(u*) ≤ K) = C_K (τ/θ)^(N−K) + o((τ/θ)^(N−K)), where u* is the sparsest approximation of the data d and C_K > 0. The constants C_K are an explicit function of ‖·‖, A, fd and K which allows us to analyze the role of these parameters for the obtention of a sparsest K-sparse approximation. Consequently, given fd and θ, we have a tool to build A and ‖·‖ in such a way that C_K (and hence P(ℓ0(u*) ≤ K)) are as large as possible for K small. In order to obtain the above estimate, we give a precise characterization of the set Σ^τ_K of all data leading to a K-sparse result. The main difficulty is to estimate accurately the Lebesgue measure of the sets Σ^τ_K ∩ B_fd(θ). We sketch a comparative analysis between our Average Performance in Approximation (APA) methodology and the well known Nonlinear Approximation (NA) which also assess the performance in approximation.

Magnetic Resonance Spectrum Separation Using Sparse Representations and Wavelet Filters by Yu Guo, Su Ruan, Jérôme Landré, Jean-Marc Constants. The abstract reads:
Magnetic Resonance spectroscopy (MRS) provides a “frequency-signal intensity” spectrum of multiple peaks that reflect the biochemical composition of a localized region in the body. The peak intensity or the area under each peak is proportional to the concentration of that assigned metabolite. Accurate quantification of in vivo MRS (measuring peak intensity or area) is very important to diagnose certain metabolic disorders. However, strongly overlapping metabolite peaks, poor knowledge about background component (the baseline), and low signal-to-noise ratio (SNR) make the task difficult. In this paper, a novel spectrum separation method using sparse representations and wavelet filters is proposed to separate baseline and spectra of different metabolites and finally achieves an accurate MRS quantification. With the proposed method, the accuracy and the robustness of MRS quantification are improved, from simulation data, compared with a commonly used frequency-domain MRS quantification method. The quantification on tumor metabolism with in vivo brain MR spectra is also demonstrated.

Compressed Sensing of Analog Signals in Shift-Invariant Spaces by Yonina Eldar. The abstract reads:
A traditional assumption underlying most data converters is that the signal should be sampled at a rate exceeding twice the highest frequency. This statement is based on a worst-case scenario in which the signal occupies the entire available bandwidth. In practice, many signals are sparse so that only part of the bandwidth is used. In this paper, we develop methods for low-rate sampling of continuous-time sparse signals in shift-invariant (SI) spaces generated by m kernels with period T. We model sparsity by treating the case in which only k out of the m generators are active, however, we do not know which k are chosen. We show how to sample such signals at a rate much lower than m/T, which is the minimal sampling rate without exploiting sparsity. Our approach combines ideas from analog sampling in a subspace with a recently developed block diagram that converts an infinite set of sparse equations to a finite counterpart. Using these two components we formulate our problem within the framework of finite compressed sensing (CS) and then rely on algorithms developed in that context. The distinguishing feature of our results is that in contrast to standard CS, which treats finite-length vectors, we consider sampling of analog signals for which no underlying finite-dimensional model exists. The proposed framework allows to extend much of the recent literature on CS to the analog domain.

Jose Bioucas-Dias and Mario Figueiredo, Total variation restoration of speckled images using a split-Bregman algorithm. The abstract reads:
Multiplicative noise models occur in the study of several coherent imaging systems, such as synthetic aperture radar and sonar, and ultrasound and laser imaging. This type of noise is also commonly referred to as speckle. Multiplicative noise introduces two additional layers of difficulties with respect to the popular Gaussian additive noise model: (1) the noise is multiplied by (rather than added to) the original image, and (2) the noise is not Gaussian, with Rayleigh and Gamma being commonly used densities. These two features of the multiplicative noise model preclude the direct application of state-of-the-art restoration methods, such as those based on the combination of total variation or wavelet-based regularization with a quadratic observation term. In this paper, we tackle these difficulties by: (1) using the common trick of converting the multiplicative model into an additive one by taking logarithms, and (2) adopting the recently proposed split Bregman approach to estimate the underlying image under total variation regularization. This approach is based on formulating a constrained problem equivalent to the original unconstrained one, which is then solved using Bregman iterations (equivalently, an augmented Lagrangian method). A set of experiments show that the proposed method yields state-of-the-art results.
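The log-transform trick described in the abstract is easy to illustrate. The sketch below uses an off-the-shelf TV denoiser (scikit-image's Chambolle solver) rather than the authors' split-Bregman method, and the Gamma noise parameters and TV weight are assumptions of mine.

```python
# Multiplicative (speckle) noise becomes additive after taking logarithms;
# a TV denoiser is then applied in the log domain. Illustrative only.
import numpy as np
from skimage import data
from skimage.restoration import denoise_tv_chambolle

rng = np.random.default_rng(0)
clean = data.camera().astype(float) / 255.0 + 0.01   # keep strictly positive
speckle = rng.gamma(shape=4.0, scale=1.0 / 4.0, size=clean.shape)
noisy = clean * speckle                               # multiplicative Gamma noise

log_denoised = denoise_tv_chambolle(np.log(noisy), weight=0.2)
restored = np.exp(log_denoised)                       # back to the intensity domain
```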

A Manifold Lifting Algorithm for Multi-View Compressive Imaging by Mike Wakin. The abstract reads:
We consider a multi-view imaging scenario where a number of cameras observe overlapping, translated subimages of a larger scene. To simplify the acquisition and encoding of these images, we propose a non-collaborative compressive sensing protocol at each camera. We discuss a prototype algorithm for joint reconstruction of the images from the ensemble of random measurements, based on the geometric manifold structure that arises from the varying camera positions. Even when the camera positions are unknown, we demonstrate that it is possible to simultaneously resolve the images and register their positions using only the random measurements.
A tiny little problem with the example is that this is real satellite imagery, i.e. the image was taken one row at a time in a pushbroom fashion, and so any movement in the scene cannot be seen. The physics of the image taking won't allow it. To get more interesting results, one might be interested in using any of the photos we took from a stratospheric balloon here (this 4 GB dataset is free to use). The reality of image taking in this context (no pushbroom) becomes a little more challenging, as the clouds move as well as the camera. From the conclusion:

It is worth emphasizing the apparent benefit of random measurements that we are observing in our experiments. Suppose for each satellite that we were to permit a form of “transform coding”, where instead of transmitting 96 random measurements, we request each satellite to transmit its 96 largest wavelet coefficients. On an image-by-image basis, this typically is a better way to compress images to minimize PSNR. However, we see in Fig. 1(d) that even if we have perfect knowledge of the camera positions and fuse the available wavelet-based measurements at the collection point using a global ℓ1 minimization akin to (1), the ultimate reconstruction is inferior to our result in Fig. 4(b).
Two noiselet transform codes written by Laurent Duval and Laurent Jacques can be found here.


A Multiscale Framework for Compressive Sensing of Video by J.Y. Park and Mike Wakin. The abstract reads:
Compressive Sensing (CS) allows the highly efficient acquisition of many signals that could be difficult to capture or encode using conventional methods. From a relatively small number of random measurements, a high-dimensional signal can be recovered if it has a sparse or near-sparse representation in a basis known to the decoder. In this paper, we consider the application of CS to video signals in order to lessen the sensing and compression burdens in single- and multi-camera imaging systems. In standard video compression, motion compensation and estimation techniques have led to improved sparse representations that are more easily compressible; we adapt these techniques for the problem of CS recovery. Using a coarse-to-fine reconstruction algorithm, we alternate between the tasks of motion estimation and motion-compensated wavelet-domain signal recovery. We demonstrate that our algorithm allows the recovery of video sequences from fewer measurements than either frame-by-frame or inter-frame difference recovery methods.
From the Conclusion section:
For single-view imaging, our CS-based algorithm will not necessarily provide bit-rate savings when compared with traditional MPEG compression. However, the ability to capture a video using few measurements may be useful in applications where it is difficult or expensive to obtain raw samples. Another interesting application of the proposed algorithm is in multi-view imaging. In such a system, multi-camera arrays are used to capture a scene from several different angles. Several limitations—in transmission bandwidth, power, storage, and so on—could be overcome with the application of CS [9]. The proposed algorithm could be applied by treating each signal captured by each camera as a frame of a video sequence. This remains a subject of ongoing work and experimentation.
So as expected, it looks like we cannot do better than MPEG once the imagery has already been acquired, but there seems to be some good news for CS-only acquisition devices.
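To make the "inter-frame difference" baseline mentioned in the abstract concrete, here is a toy sketch that recovers a sparse frame difference from compressive measurements with a few ISTA iterations; sizes, step size and threshold are illustrative assumptions, and this is not the authors' coarse-to-fine, motion-compensated algorithm.

```python
# Toy inter-frame difference recovery: given compressive measurements of a new
# frame and a known previous frame, recover only the sparse difference.
import numpy as np

def ista(Phi, r, lam=0.05, n_iter=200):
    """Minimise 0.5*||Phi d - r||^2 + lam*||d||_1 by iterative soft thresholding."""
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2
    d = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        g = d - step * Phi.T @ (Phi @ d - r)
        d = np.sign(g) * np.maximum(np.abs(g) - lam * step, 0.0)
    return d

rng = np.random.default_rng(0)
n, m = 400, 120
prev_frame = rng.standard_normal(n)
diff = np.zeros(n)
diff[rng.choice(n, 10, replace=False)] = rng.standard_normal(10)
frame = prev_frame + diff                    # new frame = old frame + sparse change
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ frame                              # compressive measurements of the new frame
d_hat = ista(Phi, y - Phi @ prev_frame)      # subtract the predicted measurements
frame_hat = prev_frame + d_hat
```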


Image Denoising Using Sparse Representations by SeyyedMajid Valiollahzadeh, Hamed Firouzi, Massoud Babaie-Zadeh, and Christian Jutten. The abstract reads:
The problem of removing white zero-mean Gaussian noise from an image is an interesting inverse problem to be investigated in this paper through sparse and redundant representations. However, finding the sparsest possible solution in the noise scenario was of great debate among the researchers. In this paper we make use of a new approach to solve this problem and show that it is comparable with the state-of-the-art denoising approaches.

and finally,

Robust Counting Via Counter Braids: An Error-Resilient Network Measurement Architecture by Yi Lu, Balaji Prabhakar. The abstract reads:
A novel counter architecture, called Counter Braids, has recently been proposed for accurate per-flow measurement on high-speed links. Inspired by sparse random graph codes, Counter Braids solves two central problems of per-flow measurement: one-to-one flow-to-counter association and large amount of unused counter space. It eliminates the one-to-one association by randomly hashing a flow label to multiple counters and minimizes counter space by incrementally compressing counts as they accumulate. The random hash values are reproduced offline from a list of flow labels, with which flow sizes are decoded using a fast message passing algorithm. The decoding of Counter Braids introduces the problem of collecting flow labels active in a measurement epoch. An exact solution to this problem is expensive. This paper complements the previous proposal with an approximate flow label collection scheme and a novel error-resilient decoder that decodes despite missing flow labels. The approximate flow label collection detects new flows with variable-length signature counting Bloom filters in SRAM, and stores flow labels in high-density DRAM. It provides a good trade-off between space and accuracy: more than 99 percent of the flows are captured with very little SRAM space. The decoding challenge posed by missing flow labels calls for a new algorithm as the original message passing decoder becomes error-prone. In terms of sparse random graph codes, the problem is equivalent to decoding with graph deficiency, a scenario beyond coding theory. The error-resilient decoder employs a new message passing algorithm that recovers most flow sizes exactly despite graph deficiency. Together, our solution achieves a 10-fold reduction in SRAM space compared to hash-table based implementations, as demonstrated with Internet trace evaluations.
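The first Counter Braids idea, hashing each flow label to several shared counters instead of keeping a one-to-one flow-to-counter table, can be sketched in a few lines. The decoding (message passing) stage is omitted here, and the table size and number of hashes are assumptions of mine.

```python
# Toy multi-hash counter update in the spirit of Counter Braids' first layer.
# Decoding is not shown; sizes are placeholders.
import hashlib

NUM_COUNTERS = 1024
NUM_HASHES = 3
counters = [0] * NUM_COUNTERS

def counter_indices(flow_label: str):
    """Derive NUM_HASHES counter positions from a flow label."""
    return [
        int.from_bytes(hashlib.sha1(f"{i}:{flow_label}".encode()).digest()[:4], "big") % NUM_COUNTERS
        for i in range(NUM_HASHES)
    ]

def count_packet(flow_label: str):
    for idx in counter_indices(flow_label):
        counters[idx] += 1   # each flow increments all of its counters

for pkt in ["10.0.0.1->10.0.0.2"] * 5 + ["10.0.0.3->10.0.0.4"] * 2:
    count_packet(pkt)
```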
The following has just an abstract:

Space-optimal heavy hitters with strong error bounds by Radu Berinde, Graham Cormode, Piotr Indyk, and Martin Strauss. The abstract reads:
The problem of finding heavy-hitters and approximating the frequencies of items is at the heart of many problems in data stream analysis. It has been observed that several proposed solutions to this problem can outperform their worst-case guarantees on real data. This leads to the question of whether some stronger bounds can be guaranteed. We answer this in the positive by showing that a class of “counter-based algorithms” (including popular and very space-efficient algorithms) provide much stronger approximation guarantees than previously known. Specifically, we show that errors in the approximation of individual elements do not depend on the frequencies of the most frequent elements, but only on the frequency of the remaining “tail”. This shows that counter-based methods are the most space-efficient (in fact, space-optimal) algorithms having this strong error bound. This tail guarantee allows these algorithms to solve the “sparse recovery” problem. Here, the goal is to recover a faithful representation of the vector of frequencies, f. We prove that using space O(k), the algorithms construct an approximation f* to the frequency vector f so that the ℓ1 error ‖f − f*‖1 is close to the best possible error min_{f'} ‖f' − f‖1, where f' ranges over all vectors with at most k non-zero entries. This improves the previously best known space bound of about O(k log n) for streams without element deletions. Other consequences of the tail guarantees are results for skewed (Zipfian) data, and guarantees for accuracy of merging multiple summarized streams.
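For readers who have not met counter-based heavy-hitter summaries, here is a minimal Misra-Gries style sketch in the spirit of the algorithms the abstract refers to; it keeps at most k-1 counters, and its per-item error is bounded by the weight of the unmonitored tail.

```python
# Minimal counter-based heavy-hitters summary (Misra-Gries style).
def misra_gries(stream, k):
    counters = {}
    for item in stream:
        if item in counters:
            counters[item] += 1
        elif len(counters) < k - 1:
            counters[item] = 1
        else:
            # Decrement every counter; drop those that hit zero.
            counters = {x: c - 1 for x, c in counters.items() if c > 1}
    return counters

stream = list("aabacadaaaebfa")
print(misra_gries(stream, k=3))   # "a" dominates and survives in the summary
```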
