Monday, July 11, 2011

Call for Host of SPARS'13 and Compressive Sensing Literature This Week

Remi Gribonval sent me the following:

Dear Igor,

In the follow-up of the workshop SPARS'11 on Signal Processing with Adaptive/Sparse Representations, the SPARS Steering Committee is looking for hosts for the following SPARS meeting.

Would you mind relaying this on your blog ?

All the best,

Absolutely, here is the call for hosts of the next SPARS meeting:

Dear colleagues,

In the follow-up of the workshop SPARS'11 on Signal Processing with Adaptive/Sparse Representations, the SPARS Steering Committee is looking for hosts for the following SPARS meeting.
The SPARS workshop is expected to run roughly every 24 months, so the next workshop would most likely be held at the end of June/beginning of July 2013.
For more about SPARS11 and previous workshops, see the workshop websites.

(As a past host myself, I know that this is an exciting and rewarding opportunity to make a major contribution to the community!)

If you and/or colleagues would like to host the next workshop, or think this sounds interesting and would like to know more, please send me a brief email as an "Expression of Interest".

Full details are not required at this stage, but as much of the following information (if known) would be helpful:

* Proposed location / venue

* Names of organizers

* Suggested dates

* Funding strategy

* Other information if available (accommodation options, travel, costs, other special features)

Please send any initial "expressions of interest" to me by email by August 30.

The SPARS Steering Committee will then discuss these expressions of interest and we will get back to you.

In the meantime, if you have any queries, please let me know.

Best wishes,

Rémi Gribonval.

Note: the Steering Committee will take the following elements into particular consideration when assessing propositions:

* Format: SPARS aims to be a "human size" workshop: no more than 200 participants, one or at most two parallel tracks, and ample time for discussions.

* Student-friendliness: support to students, e.g., through reduced fee and/or travel grants, is expected.

* Abstract 'only': SPARS is primarily intended to be a forum for exchanges at the frontier between applied math and electrical engineering. To accommodate the different publishing cultures of these communities, SPARS11 required only a mandatory 1-page abstract; full papers were not required. The organizers of SPARS13 are free to consider optional proceedings for authors who wish to provide an extended paper.

* Funding: Bidders should show that they can provide / have applied for a grant to support the organization of the event.

* Accessibility & affordability: ease of access and affordability may be preferred to fancy locations.

* Timing: time conflicts with other sparsity-related workshops such as SampTA, SPIE Wavelets & Sparsity, etc. should be avoided if possible.

Thanks, Remi!

If you're in the Edinburgh area and want to make your body available for some compressive sensing testing, Mike Davies is performing CS MRI studies on 10 healthy subjects. Please note all the exclusion criteria.

While we are on the subject of Edinburgh, here are the presentation slides from SPARS11 for Analysis Operator Learning for Overcomplete Co-sparse Representations by Mehrdad Yaghoobi, Sangnam Nam, Remi Gribonval, and Mike E. Davies.

Also found on the interwebs:

Signal and Image Processing in Astrophysics by Sandrine Pires

Group Testing with Probabilistic Tests: Theory, Design and Application by Mahdi Cheraghchi, Ali Hormati, Amin Karbasi, and Martin Vetterli. The abstract reads:
Identification of defective members of large populations has been widely studied in the statistics community under the name of group testing. It involves grouping subsets of items into different pools and detecting defective members based on the set of test results obtained for each pool. In a classical noiseless group testing setup, it is assumed that the sampling procedure is fully known to the reconstruction algorithm, in the sense that the existence of a defective member in a pool results in the test outcome of that pool to be positive. However, this may not be always a valid assumption in some cases of interest. In particular, we consider the case where the defective items in a pool can become independently inactive with a certain probability. Hence, one may obtain a negative test result in a pool despite containing some defective items. As a result, any sampling and reconstruction method should be able to cope with two different types of uncertainty, i.e., the unknown set of defective items and the partially unknown, probabilistic testing procedure. In this work, motivated by the application of detecting infected people in viral epidemics, we design non-adaptive sampling procedures that allow successful identification of the defective items through a set of probabilistic tests. Our design requires only a small number of tests to single out the defective items. In particular, for a population of size N and at most K defective items with activation probability p, our results show that M = O(K² log(N/K)/p³) tests are sufficient if the sampling procedure should work for all possible sets of defective items, while M = O(K log(N)/p³) tests are enough to be successful.
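To make the probabilistic-test model concrete, here is a small numpy toy of my own (not the authors' construction): items are pooled at random, each defective independently goes inactive in a given pool with probability 1-p, and a simple counting decoder ranks items by the fraction of positive pools they appear in. The Bernoulli design and the counting decoder are illustrative assumptions, not the paper's scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, p = 100, 2, 0.7        # population size, defectives, activation probability
M, q = 300, 0.2              # number of pools, per-item pooling probability

defective = rng.choice(N, size=K, replace=False)
A = rng.random((M, N)) < q                  # non-adaptive Bernoulli pooling design
active = rng.random((M, N)) < p             # a defective is active independently in each pool
# A pool tests positive iff it contains at least one *active* defective
outcome = (A[:, defective] & active[:, defective]).any(axis=1)

# Counting decoder: fraction of positive pools among those each item joined
score = (A & outcome[:, None]).sum(axis=0) / np.maximum(A.sum(axis=0), 1)
recovered = np.argsort(score)[-K:]
print(sorted(map(int, recovered)), sorted(map(int, defective)))
```

With these (generous) parameters the score gap between defective and healthy items is wide enough that the top-K rule identifies the defectives despite the false-negative pools.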

Compressed sensing (CS) samples signals at a much lower rate than the Nyquist rate if they are sparse in some basis. In this paper, the CS methodology is applied to sinusoidally modeled audio signals. As this model is sparse by definition in the frequency domain (being equal to the sum of a small number of sinusoids), we investigate whether CS can be used to encode audio signals at low bitrates. In contrast to encoding the sinusoidal parameters (amplitude, frequency, phase) as current state-of-the-art methods do, we propose encoding few randomly selected samples of the time-domain description of the sinusoidal component (per signal segment). The potential of applying compressed sensing both to single-channel and multi-channel audio coding is examined. The listening test results are encouraging, indicating that the proposed approach can achieve comparable performance to that of state-of-the-art methods. Given that CS can lead to novel coding systems where the sampling and compression operations are combined into one low-complexity step, the proposed methodology can be considered as an important step towards applying the CS framework to audio coding applications.
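Since the abstract does not spell out the decoder, here is a hedged numpy toy of the idea for one signal segment: a signal made of a few on-grid sinusoids, a handful of randomly selected time-domain samples kept as the "encoding", and Orthogonal Matching Pursuit over a cosine dictionary as a stand-in sparse reconstruction. The parameters and the choice of OMP are my assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 256, 80, 3                    # segment length, kept samples, sinusoids
freqs = rng.choice(np.arange(1, n // 2), size=k, replace=False)
t = np.arange(n)
x = sum(np.cos(2 * np.pi * f * t / n) for f in freqs)   # exactly sparse in a cosine dictionary

D = np.cos(2 * np.pi * np.outer(t, np.arange(n // 2)) / n)
D /= np.linalg.norm(D, axis=0)          # unit-norm cosine atoms

idx = np.sort(rng.choice(n, size=m, replace=False))     # random time-domain samples
A, y = D[idx], x[idx]

# Orthogonal Matching Pursuit: greedily pick atoms, re-fit by least squares
support, r = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(A.T @ r))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    r = y - A[:, support] @ coef

x_hat = D[:, support] @ coef            # decoded segment from m << n samples
print(np.max(np.abs(x_hat - x)))
```

With 80 random samples and only 3 active atoms, the greedy decoder recovers the segment exactly, which is the mechanism the paper exploits for low-bitrate coding.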

Analysis Operator Learning for Overcomplete Co-sparse Representations by Mehrdad Yaghoobi, Sangnam Nam, Remi Gribonval, and Mike E. Davies. The abstract reads:
We consider the problem of learning a low-dimensional signal model from a collection of training samples. The mainstream approach would be to learn an overcomplete dictionary to provide good approximations of the training samples using sparse synthesis coefficients. This famous sparse model has a less well known counterpart, in analysis form, called the cosparse analysis model. In this new model, signals are characterized by their parsimony in a transformed domain using an overcomplete analysis operator. We propose to learn an analysis operator from a training corpus using a constrained optimization program based on L1 optimization. We derive a practical learning algorithm, based on projected subgradients, and demonstrate its ability to robustly recover a ground truth analysis operator, provided the training set is of sufficient size. A local optimality condition is derived, providing preliminary theoretical support for the well-posedness of the learning problem under appropriate conditions.
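For flavor, here is a minimal numpy sketch of the projected-subgradient idea on cosparse data, under my own simplifying assumption of a unit-row-norm constraint set (the paper's constraint and algorithm are richer). Each training sample is drawn orthogonal to two rows of a ground-truth operator, so it has two zeros in the analysis domain, and we descend on the l1 objective with row renormalization after each step.

```python
import numpy as np

rng = np.random.default_rng(2)
d, n, N = 4, 8, 500                     # signal dim, operator rows, training samples

# Ground-truth operator and cosparse training data: each sample is
# orthogonal to two randomly chosen rows of Omega0.
Omega0 = rng.standard_normal((n, d))
Omega0 /= np.linalg.norm(Omega0, axis=1, keepdims=True)
X = np.empty((d, N))
for j in range(N):
    B = Omega0[rng.choice(n, size=2, replace=False)]
    x = rng.standard_normal(d)
    X[:, j] = x - B.T @ np.linalg.solve(B @ B.T, B @ x)   # project onto nullspace of the two rows

# Projected subgradient on (1/N) * ||Omega @ X||_1 with unit-norm rows
obj = lambda W: np.abs(W @ X).sum() / N
Omega = rng.standard_normal((n, d))
Omega /= np.linalg.norm(Omega, axis=1, keepdims=True)
start = obj(Omega)
for it in range(500):
    G = np.sign(Omega @ X) @ X.T / N                      # subgradient of the l1 objective
    Omega -= 0.05 / np.sqrt(it + 1) * G                   # diminishing step size
    Omega /= np.linalg.norm(Omega, axis=1, keepdims=True) # project rows back to the unit sphere
print(start, obj(Omega))
```

This only illustrates the projected-subgradient mechanics; it is not a recovery experiment like the one in the paper, which also addresses well-posedness and local optimality.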

Estimation of Sparse MIMO Channels with Common Support by Yann Barbotin, Ali Hormati, Sundeep Rangan, Martin Vetterli. The abstract reads:
We consider the problem of estimating sparse communication channels in the MIMO context. In small to medium bandwidth communications, as in the current standards for OFDM and CDMA communication systems (with bandwidth up to 20 MHz), such channels are individually sparse and at the same time share a common support set. Since the underlying physical channels are inherently continuous-time, we propose a parametric sparse estimation technique based on finite rate of innovation (FRI) principles. Parametric estimation is especially relevant to MIMO communications as it allows for a robust estimation and concise description of the channels. The core of the algorithm is a generalization of conventional spectral estimation methods to multiple input signals with common support. We show the application of our technique for channel estimation in OFDM (uniformly/contiguous DFT pilots) and CDMA downlink (Walsh-Hadamard coded schemes). In the presence of additive white Gaussian noise, theoretical lower bounds on the estimation of SCS channel parameters in Rayleigh fading conditions are derived. Finally, an analytical spatial channel model is derived, and simulations on this model in the OFDM setting show the symbol error rate (SER) is reduced by a factor 2 (0 dB of SNR) to 5 (high SNR) compared to standard non-parametric methods - e.g. lowpass interpolation.
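In the single-channel, noiseless, on-grid toy case, the "generalization of conventional spectral estimation methods" reduces to a classical annihilating-filter (Prony) step: recover the tap delays from a few uniform frequency-domain samples of the channel. The integer delays and noiseless pilots below are my simplifications; the paper works with continuous-time delays, multiple channels with common support, and noise.

```python
import numpy as np

rng = np.random.default_rng(3)
K, L = 3, 64                            # number of channel taps, delay grid size
t_true = np.sort(rng.choice(np.arange(1, L), size=K, replace=False))  # distinct integer delays
c = rng.standard_normal(K) + 1j * rng.standard_normal(K)              # complex tap gains

# Uniform frequency-domain samples of the channel (e.g. contiguous DFT pilots):
# H[m] = sum_k c_k * u_k^m with u_k = exp(-2j*pi*t_k/L)
M = 2 * K + 1                           # minimal number of pilots for K taps
m = np.arange(M)
H = (c[None, :] * np.exp(-2j * np.pi * np.outer(m, t_true) / L)).sum(axis=1)

# Annihilating filter: length-(K+1) filter a with sum_l a[l] * H[m-l] = 0
T = np.array([H[i - np.arange(K + 1)] for i in range(K, M)])
_, _, Vh = np.linalg.svd(T)
a = Vh[-1].conj()                       # nullspace vector of the Toeplitz system

u = np.roots(a)                         # roots encode the delays
t_hat = np.sort(np.round(-np.angle(u) * L / (2 * np.pi)) % L)
print(t_hat, t_true)
```

From only 2K+1 pilots the K delays come back exactly, which is the parametric (finite rate of innovation) advantage over lowpass interpolation of the pilots.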

Weighted algorithms for compressed sensing and matrix completion by Stéphane Gaïffas, Guillaume Lecué. The abstract reads:
This paper is about iteratively reweighted basis-pursuit algorithms and matrix completion problems. In a first part, we give a theoretical explanation of the fact that reweighted basis pursuit can improve a lot upon basis pursuit for exact recovery in compressed sensing. We exhibit a condition that links the accuracy of the weights to the RIP and incoherency constants, which ensures exact recovery. In a second part, we introduce a new algorithm for matrix completion, based on the idea of iterative reweighting. Since a weighted nuclear "norm" is typically non-convex, it cannot be used easily as an objective function. So, we define a new estimator based on a fixed-point equation. We give empirical evidences of the fact that this new algorithm leads to strong improvements over nuclear norm minimization on simulated and real matrix completion problems.
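The reweighting loop from the first part is easy to sketch: solve a weighted basis pursuit as a linear program, then set the next weights from the previous solution. A toy with scipy's linprog follows; the problem sizes, the number of reweighting rounds, and the smoothing constant in the weights are my choices, not the paper's.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)
n, m, k = 40, 20, 4                     # ambient dim, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
b = A @ x_true

def weighted_bp(w):
    """min sum_i w_i |x_i|  s.t.  Ax = b, via the usual LP split x = xp - xm."""
    res = linprog(np.r_[w, w], A_eq=np.c_[A, -A], b_eq=b,
                  bounds=(0, None), method="highs")
    return res.x[:n] - res.x[n:]

w = np.ones(n)                          # first round = plain basis pursuit
for _ in range(4):
    x = weighted_bp(w)
    w = 1.0 / (np.abs(x) + 1e-3)        # reweight: small coefficients get penalized harder
print(np.max(np.abs(x - x_true)))
```

Each round sharpens the penalty around the current support, which is why reweighted basis pursuit can recover signals that plain basis pursuit misses at the same number of measurements.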

For comparison purposes, it would have been great if Figure 1 had been compared to the figure and variables of the Donoho-Tanner phase transition curve. I am just saying.
