Jeff Blanchard just sent me the following:
Dear Igor,
I wonder if you could advertise this symposium on sparse approximation and compressed sensing I am organizing at ICNAAM. Here's the link to the page for the symposium
and the link to the conference
If you don't advertise symposiums or sessions, I certainly understand, but you are absolutely the most effective method of advertising anything on compressed sensing.
From the website:
19–25 September 2010 • Rhodes, Greece

The ICNAAM symposium on sparse approximation and compressed sensing will bring together researchers at all stages of their careers to discuss the theory and applications of exploiting sparsity in signal processing, information theory, and statistics. Topics will include, but are not strictly limited to:
- Sparse coding, vector quantization and dictionary learning
- Sparse approximation algorithms
- Compressed sensing
- Analog/Continuous Compressed Sensing
- Sparse/structured signal representations
- Compression and coding
- Feature extraction, classification, detection
- Sparsity measures in approximation theory, information theory and statistics
- Applications
Speakers (as of 10 April 2010)
- Jeff Blanchard, Grinnell College and University of Edinburgh
- Charles Dossal, Université Bordeaux 1
- Miki Elad, Technion
- Yonina Eldar, Technion
- Anders Hansen, Cambridge University
- Blake Hunter, University of California, Davis
- Deanna Needell, Stanford University
- Ozgur Yilmaz, University of British Columbia
Call for abstracts
To submit an abstract, email an extended abstract of 2–3 pages to Jeff Blanchard (jeff@math.grinnell.edu). Abstracts will be reviewed on an ongoing basis. Abstracts submitted after 22 July 2010 will not be considered. You are encouraged to email your intention to submit an abstract as soon as possible. Preparation guidelines and templates are available.
Registration
When your abstract is accepted, please register for the conference. There are three registration deadlines with increasing registration fees:
Early (30 April), Regular (15 June), Late (29 July).
Please note that, according to the conference registration page, if you register and your abstract is not accepted for publication in the proceedings, the registration fee will be refunded.
Thanks Jeff.
On a different note, Marco Duarte and Richard Baraniuk have been looking at the use of Kronecker product matrices in compressive sensing. From the abstract:
Compressive sensing (CS) is an emerging approach for acquisition of signals having a sparse or compressible representation in some basis. While CS literature has mostly focused on problems involving 1-D and 2-D signals, many important applications involve signals that are multidimensional. We propose the use of Kronecker product matrices in CS for two purposes. First, we can use such matrices as sparsifying bases that jointly model the different types of structure present in the signal. Second, the measurement matrices used in distributed measurement settings can be easily expressed as Kronecker products. This new formulation enables the derivation of analytical bounds for sparse approximation and CS recovery of multidimensional signals.
How do you implement it? Marco was kind enough to answer:
Thanks for your email. It's a good question. The nice thing about the Kronecker product measurements is that they can be implemented "separately", as follows:
- first, apply one measurement matrix Phi1 on all sections of the multidimensional data x along one dimension;
- then, stack the measurements obtained for each section along the first dimension to obtain a multidimensional measurement y1;
- then, apply the next measurement matrix Phi2 on the sections of the previously obtained measurements y1 along the next dimension;
- then, stack the measurements obtained for each section along the second dimension to obtain a new multidimensional measurement y2;
- repeat until all dimensions have been processed, and vectorize the resulting y_D to obtain the final measurements y.
This procedure is equivalent to applying the Kronecker product of the measurement matrices to an appropriately vectorized version of the data.
For example, the hyperspectral single pixel camera applies the same projection matrix Phi1 on all the spectral bands of the hyperspectral datacube being acquired. Each band then provides a vector of measurements, which we stack into a matrix y1 (say, as rows). We then apply the second projection matrix Phi2 on each one of the columns of the matrix-stacked measurements y1, which effectively *mixes* the measurements coming from the different bands into a new matrix of measurements y2, and then vectorize y2 to get y.
Hopefully this makes things clear. The idea is the same for the sparsity bases (apply the transformations one dimension at a time, obtaining the coefficients in the Kronecker basis at the end).
Let me know if I can help further.
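To make this concrete, here is a minimal numpy sketch of the two-dimensional case (the sizes and the matrices Phi1, Phi2 are made up for illustration). It checks that the "one dimension at a time" procedure above produces the same measurements as applying the Kronecker product matrix to the column-major vectorization of the data, using the standard identity vec(Phi1 X Phi2') = (Phi2 ⊗ Phi1) vec(X):

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for a small 2-D example.
N1, N2 = 8, 6                          # data dimensions
M1, M2 = 5, 4                          # measurements along each dimension

Phi1 = rng.standard_normal((M1, N1))   # measurement matrix for dimension 1
Phi2 = rng.standard_normal((M2, N2))   # measurement matrix for dimension 2
X = rng.standard_normal((N1, N2))      # the multidimensional data

# "Separable" measurement, one dimension at a time:
Y1 = Phi1 @ X                          # apply Phi1 to every section along dimension 1
Y2 = Y1 @ Phi2.T                       # apply Phi2 to every section along dimension 2
y_separable = Y2.flatten(order='F')    # vectorize (column-major)

# Single global measurement with the Kronecker product matrix:
y_kronecker = np.kron(Phi2, Phi1) @ X.flatten(order='F')

print(np.allclose(y_separable, y_kronecker))   # True

The same recursion extends to higher-dimensional data by reshaping y2 and applying the next measurement matrix along the third dimension, and so on.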
The arXiv search produced three new papers and then some today: Sparse Reconstruction via the Reed-Muller Sieve by Robert Calderbank, Stephen Howard, and Sina Jafarpour. The abstract reads:
This paper introduces the Reed Muller Sieve, a deterministic measurement matrix for compressed sensing. The columns of this matrix are obtained by exponentiating codewords in the quaternary second order Reed Muller code of length $N$. For $k=O(N)$, the Reed Muller Sieve improves upon prior methods for identifying the support of a $k$-sparse vector by removing the requirement that the signal entries be independent. The Sieve also enables local detection; an algorithm is presented with complexity $N^2 \log N$ that detects the presence or absence of a signal at any given position in the data domain without explicitly reconstructing the entire signal. Reconstruction is shown to be resilient to noise in both the measurement and data domains; the $\ell_2 / \ell_2$ error bounds derived in this paper are tighter than the $\ell_2 / \ell_1$ bounds arising from random ensembles and the $\ell_1 /\ell_1$ bounds arising from expander-based ensembles.
The abstract of the second paper reads:
We propose a robust and efficient algorithm for the recovery of the jointly sparse support in compressed sensing with multiple measurement vectors (the MMV problem). When the unknown matrix of the jointly sparse signals has full rank, MUSIC is a guaranteed algorithm for this problem, achieving the fundamental algebraic bound on the minimum number of measurements. We focus instead on the unfavorable but practically significant case of rank deficiency or bad conditioning. This situation arises with a limited number of measurements, or with highly correlated signal components. In this case MUSIC fails, and in practice none of the existing MMV methods can consistently approach the algebraic bounds. We propose iMUSIC, which overcomes these limitations by combining the advantages of both existing methods and MUSIC. It is a computationally efficient algorithm with a performance guarantee.
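As an aside, the full-rank MUSIC guarantee mentioned in the abstract is easy to demonstrate. Here is a small numpy sketch of the basic MUSIC criterion for the MMV problem (this is not iMUSIC, and the sizes and variable names are made up): when the jointly sparse signal matrix has rank k, the column span of Y = A X equals the span of the k atoms on the support, so the support is recovered by keeping the k atoms with the smallest residual after projecting onto that subspace.

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical MMV setup: m measurements, n atoms, k-sparse support, L snapshots.
m, n, k, L = 20, 60, 5, 10

A = rng.standard_normal((m, n)) / np.sqrt(m)        # measurement matrix
support = np.sort(rng.choice(n, size=k, replace=False))
X = np.zeros((n, L))
X[support] = rng.standard_normal((k, L))            # jointly k-sparse, rank k
Y = A @ X                                           # multiple measurement vectors

# MUSIC criterion: the signal subspace is the column span of Y; an atom belongs
# to the support iff it lies (numerically) in that subspace.
U, _, _ = np.linalg.svd(Y, full_matrices=False)
Us = U[:, :k]                                       # k-dimensional signal subspace
residual = np.linalg.norm(A - Us @ (Us.T @ A), axis=0) / np.linalg.norm(A, axis=0)
estimated = np.sort(np.argsort(residual)[:k])       # k atoms closest to the subspace

print(np.array_equal(estimated, support))           # True in this full-rank example

When X is rank deficient (few snapshots or highly correlated signal components), the column span of Y no longer captures the full support span and this plain criterion breaks down, which is precisely the regime the paper targets.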
And the third:
Compressive Sensing (CS) is a new technique for the efficient acquisition of signals, images, and other data that have a sparse representation in some basis, frame, or dictionary. By sparse we mean that the N-dimensional basis representation has just K << N significant coefficients; in this case, the CS theory maintains that just M = K log N random linear signal measurements will both preserve all of the signal information and enable robust signal reconstruction in polynomial time. In this paper, we extend the CS theory to pulse stream data, which correspond to S-sparse signals/images that are convolved with an unknown F-sparse pulse shape. Ignoring their convolutional structure, a pulse stream signal is K=SF sparse. Such signals figure prominently in a number of applications, from neuroscience to astronomy. Our specific contributions are threefold. First, we propose a pulse stream signal model and show that it is equivalent to an infinite union of subspaces. Second, we derive a lower bound on the number of measurements M required to preserve the essential information present in pulse streams. The bound is linear in the total number of degrees of freedom S + F, which is significantly smaller than the naive bound based on the total signal sparsity K=SF. Third, we develop an efficient signal recovery algorithm that infers both the shape of the impulse response as well as the locations and amplitudes of the pulses. The algorithm alternately estimates the pulse locations and the pulse shape in a manner reminiscent of classical deconvolution algorithms. Numerical experiments on synthetic and real data demonstrate the advantages of our approach over standard CS.
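A quick way to see the S + F versus K = SF accounting is to generate such a pulse stream directly. The following small numpy snippet (lengths and values are arbitrary) builds an S-sparse spike train, convolves it with an F-sample pulse shape, and compares the number of nonzeros in the result with the two counts:

import numpy as np

rng = np.random.default_rng(2)

N, S, F = 256, 6, 8                     # signal length, number of pulses, pulse length

spikes = np.zeros(N)                    # S-sparse spike train (locations and amplitudes)
spikes[rng.choice(N - F, size=S, replace=False)] = rng.standard_normal(S)

pulse = rng.standard_normal(F)          # F-sample pulse shape

x = np.convolve(spikes, pulse)[:N]      # the observed pulse stream

K = np.count_nonzero(x)                 # naive sparsity of the stream
print(K, S * F, S + F)                  # K is close to SF, yet only S + F degrees of freedom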
Showing up on my radar screen also:
and of related interest from the same authors:
We consider two widely used notions in machine learning, namely sparsity and algorithmic stability. Both notions are deemed desirable in designing algorithms, and are believed to lead to good generalization ability. In this paper, we show that these two notions contradict each other. That is, a sparse algorithm cannot be stable and vice versa. Thus, one has to trade off sparsity and stability in designing a learning algorithm. In particular, our general result implies that ℓ1-regularized regression (Lasso) cannot be stable, while ℓ2-regularized regression is known to have strong stability properties.
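To illustrate (not prove) the point, here is a small scikit-learn experiment (the problem sizes and regularization weights are arbitrary): fit an l1-regularized and an l2-regularized regression, delete a single training sample, refit, and compare how far the coefficient vectors move.

import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(3)

n, p = 50, 100                          # fewer samples than features
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:5] = 1.0                          # 5-sparse ground truth
y = X @ beta + 0.1 * rng.standard_normal(n)

def coefficient_shift(model):
    """Fit on all samples, then on all but one, and measure the coefficient change."""
    full = model.fit(X, y).coef_.copy()
    loo = model.fit(X[1:], y[1:]).coef_.copy()
    return np.linalg.norm(full - loo)

print("l1 (Lasso) shift:", coefficient_shift(Lasso(alpha=0.1)))
print("l2 (Ridge) shift:", coefficient_shift(Ridge(alpha=0.1)))

How dramatic the difference is will depend on the draw; the result in the paper is a worst-case statement about stability, not a claim about any particular data set.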
We consider the dimensionality-reduction problem (finding a subspace approximation of observed data) for contaminated data in the high dimensional regime, where the number of observations is of the same magnitude as the number of variables of each observation, and the data set contains some (arbitrarily) corrupted observations. We propose a High-dimensional Robust Principal Component Analysis (HR-PCA) algorithm that is tractable, robust to contaminated points, and easily kernelizable. The resulting subspace has a bounded deviation from the desired one, achieves maximal robustness, and unlike ordinary PCA algorithms, achieves optimality in the limit case where the proportion of corrupted points goes to zero.