
Monday, November 29, 2010

CS: 5D lighting camera, Cyclic Matching Pursuit with a Time-frequency Dictionary, SMALL Workshop on Sparse Dictionary Learning

Today, we have an exciting post. A technology that could use some compressive sensing, a new reconstruction code released and an announcement for a workshop in London.

First, Ramesh Raskar is working on a 5D camera (3D in space and 2D in angle). Much of the story is on these pages:
       http://web.media.mit.edu/~raskar//femto/
       http://www.bbc.co.uk/news/technology-11544037


This is a camera that can look around corners. It presents clear opportunities for compressive sensing, given the inherent linear projections involved in the measurements. I am sure their team would like to hear from CS researchers.

Bob Sturm thinks I provided some motivation for his entry Paper of the Day (Po'D): Cyclic Matching Pursuit with a Time-frequency Dictionary Edition, where he releases the attendant code. The paper of interest is the following: Cyclic Matching Pursuits with Multiscale Time-frequency Dictionaries by Bob Sturm and Mads Christensen. The abstract reads:
We generalize cyclic matching pursuit (CMP), propose an orthogonal variant, and examine their performance using multiscale time-frequency dictionaries in the sparse approximation of signals. Overall, we find that the cyclic approach of CMP produces signal models that have a much lower approximation error than existing greedy iterative descent methods such as matching pursuit (MP), and are competitive with models found using orthogonal MP (OMP) and orthogonal least squares (OLS). This implies that CMP is a strong alternative to the more computationally complex approaches of OMP and OLS for modeling high-dimensional signals.
The code and the discussion on this paper can be found here.
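For readers who want the gist in code, here is a minimal sketch of the cyclic idea (my own illustrative Python, not Bob's released code): run plain MP, then repeatedly put each selected atom back into the residual and reselect it, which is what drives the lower approximation error.

```python
import numpy as np

def cyclic_mp(x, D, n_atoms, n_cycles=3):
    """Sketch of cyclic matching pursuit: a greedy MP pass followed by
    cyclic refinement passes that re-optimize each selected atom in turn.
    D is a (dim, n_dict) dictionary with unit-norm columns."""
    support, coeffs = [], []
    residual = x.copy()
    # Plain MP: repeatedly pick the atom most correlated with the residual.
    for _ in range(n_atoms):
        corr = D.T @ residual
        k = np.argmax(np.abs(corr))
        support.append(k)
        coeffs.append(corr[k])
        residual = residual - corr[k] * D[:, k]
    # Cyclic refinement: put each atom back into the residual and reselect.
    for _ in range(n_cycles):
        for i in range(n_atoms):
            residual = residual + coeffs[i] * D[:, support[i]]
            corr = D.T @ residual
            k = np.argmax(np.abs(corr))
            support[i], coeffs[i] = k, corr[k]
            residual = residual - corr[k] * D[:, k]
    return support, coeffs, residual

# Toy usage with a random overcomplete dictionary (illustrative only).
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)
x = D[:, :5] @ rng.standard_normal(5)
support, coeffs, r = cyclic_mp(x, D, n_atoms=5)
print(np.linalg.norm(r) / np.linalg.norm(x))
```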

Finally, Maria Jafari let me know of the SMALL Workshop on Sparse Dictionary Learning in London:

Dear Igor,

We would like to invite you to participate in a free two-day SMALL Workshop on Sparse Dictionary Learning (SMALL as in Sparse Models, Algorithms, and Learning for Large-scale data, the EU project behind the workshop), which will take place on 6 and 7 January 2011 at Queen Mary University of London.
The readers of Nuit Blanche might also be interested in participating, so we would be very grateful if you could post this on your blog.

The structure of the workshop will include presentations from eight distinguished invited speakers (see below), poster sessions during lunch and a discussion session focused on future directions of research at the end of the second day.

The workshop will be free to attend; however, we do not have funding available to cover travel expenses for attendees.

We still have space for a few posters that can be presented during the lunch/networking session. If you wish to submit a poster, please notify us by sending a title and a 100-word abstract.
Notification of acceptance will be after the registration deadline.

Please confirm by Friday 3 December 2010 whether you will be able to participate in our workshop.
Best wishes,

Maria Jafari and Mark Plumbley



Workshop: Sparse Dictionary Learning

Sparse signal models have been successfully applied by the signal processing community to a variety of applications. They rely on the selection of a sparsifying dictionary that matches the signal characteristics and thus yields a sparse representation. While the dictionary is often selected from a pool of pre-defined dictionaries, e.g. wavelets and the DCT, this does not always exploit the properties of the signal. An alternative is to learn the sparsifying dictionary from the data. This flexible approach to sparse modeling has the potential to yield efficient methods that scale well with dimension and incorporate knowledge about the nature and structure of the data.

In this workshop, we aim to present existing work on dictionary learning methods, discuss how they address the need to develop fast and structured algorithms, and consider future directions for this exciting and challenging field.
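As an aside for readers new to the topic, most dictionary learning methods alternate between a sparse coding step and a dictionary update step. Here is a minimal, generic sketch of that loop (a MOD-style least-squares update paired with a plain MP coder; the choices are my own illustrative assumptions, not any specific algorithm presented at the workshop):

```python
import numpy as np

def learn_dictionary(Y, n_atoms, sparsity, n_iters=20, seed=0):
    """Generic sketch of dictionary learning by alternating minimization:
    sparse-code the training set with plain matching pursuit, then update
    the dictionary by least squares (MOD-style) and renormalize."""
    rng = np.random.default_rng(seed)
    dim, n_sig = Y.shape
    D = rng.standard_normal((dim, n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_iters):
        # Sparse coding step: MP on each training vector.
        X = np.zeros((n_atoms, n_sig))
        for n in range(n_sig):
            r = Y[:, n].copy()
            for _ in range(sparsity):
                c = D.T @ r
                k = np.argmax(np.abs(c))
                X[k, n] += c[k]
                r -= c[k] * D[:, k]
        # Dictionary update step: least-squares fit, then renormalize atoms.
        D = Y @ np.linalg.pinv(X)
        D /= np.maximum(np.linalg.norm(D, axis=0), 1e-12)
    return D, X
```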


Invited speakers:

Francis Bach
Mike Davies
Michael Elad
Kjersti Engan
Holger Rauhut
Karin Schnass
Pierre Vandergheynst
John Wright
---------------------------------
The SMALL project, “Sparse models, algorithms and learning for large scale data (SMALL)”, is a collaboration between INRIA/IRISA (France), Queen Mary University of London (UK), University of Edinburgh (UK), EPFL (Switzerland), and the Technion (Israel).



Invited Speakers

John Wright (Microsoft Research Asia)
Dictionary Learning via L1 Minimization: Well-posedness and Correct Recovery
We discuss the problem of decomposing a given data matrix as a product of some unknown basis and a matrix of sparse coefficients. This problem finds applications in source separation, and is also attracting significant interest in the sparse representation community. We discuss circumstances under which this problem is well-posed, as well as situations under which it can be solved by efficient algorithms. In particular, we show that under natural assumptions, the problem is indeed globally well-posed even with fairly small sets of observations (size proportional to the size of the dictionary to be recovered times a logarithmic oversampling factor). We further show that under similar circumstances, L1 norm minimization locally recovers the desired solution. This extends results of Gribonval and Schnass to the case of nonsquare dictionaries. We discuss circumstances under which this recovery might actually be expected to be globally optimal, and discuss several example applications in hyperspectral imaging.
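For reference, the optimization problem behind this line of work is usually written as follows (standard notation, my sketch rather than the speaker's slides):

```latex
% Given data Y, recover a dictionary A with unit-norm columns and a sparse
% coefficient matrix X such that Y = AX, by solving (locally) the nonconvex
\min_{A,\,X} \ \|X\|_{1}
\quad \text{subject to} \quad
Y = A X, \qquad \|a_k\|_2 = 1 \ \ \text{for all } k .
```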

Holger Rauhut (University of Bonn)
Compressive Sensing and Structured Random Matrices
Compressive sensing is a recent paradigm in signal processing and sampling theory that predicts that sparse signals can be recovered from a small number of linear and non-adaptive measurements using convex optimization or greedy algorithms. Quite remarkably, all good constructions of the so-called measurement matrix known so far are based on randomness. While Gaussian random matrices provide optimal recovery guarantees, such unstructured matrices are of limited use in applications. Indeed, structure often allows for fast matrix-vector multiplies. This is crucial in order to speed up recovery algorithms and to deal with large-scale problems. The talk discusses models of structured random matrices that are useful in certain applications, and presents corresponding recovery guarantees. An important type of structured random matrix arises in connection with sampling sparse expansions in terms of bounded orthogonal systems (such as the Fourier system). The second type of structured random matrices to be discussed are partial random circulant matrices, that is, matrices arising from convolution. In particular, we present recent results with J. Romberg and J. Tropp on the restricted isometry property of such matrices.
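To see why structure buys speed, here is a sketch of a partial random circulant measurement, applied in O(n log n) via the FFT instead of an O(mn) dense multiply (illustrative Python; the sizes and filter are my own assumptions):

```python
import numpy as np

def partial_circulant_measure(x, h, sample_idx):
    """Apply a partial random circulant measurement: circularly convolve the
    signal with a random filter h (via the FFT), then keep only a random
    subset of the output samples. Illustrative sketch."""
    y_full = np.real(np.fft.ifft(np.fft.fft(h) * np.fft.fft(x)))
    return y_full[sample_idx]

rng = np.random.default_rng(0)
n, m = 1024, 128
h = rng.standard_normal(n)                   # random filter
idx = rng.choice(n, size=m, replace=False)   # random output subsample
x = np.zeros(n)
x[rng.choice(n, 10, replace=False)] = rng.standard_normal(10)  # sparse signal
y = partial_circulant_measure(x, h, idx)     # m structured measurements of x
```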

Kjersti Engan (University of Stavanger)
Dictionary Learning by RLS-DLA with Applications
The Recursive Least Squares Dictionary Learning Algorithm (RLS-DLA) is an online/adaptive approach, i.e. it updates the dictionary for every new training vector. By including a forgetting factor in the algorithm, we obtain a search-then-converge scheme that is much less dependent on the initial dictionary than some other dictionary learning algorithms. This talk will give a presentation of the algorithm, and some of its properties will be discussed.
Experiments on texture classification using the Frame Texture Classification Method (FTCM) with dictionaries learned by RLS-DLA will be presented. We will also show experiments from other applications, such as compression and denoising of images.
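A rough sketch of one RLS-DLA-style step, as I understand the published recursion (my illustrative Python, with a plain MP coder standing in for the sparse coding stage; lam is the forgetting factor that discounts older training vectors):

```python
import numpy as np

def mp_code(D, y, s=3):
    """Plain matching pursuit coder, used here only for illustration."""
    x = np.zeros(D.shape[1])
    r = y.copy()
    for _ in range(s):
        c = D.T @ r
        k = np.argmax(np.abs(c))
        x[k] += c[k]
        r -= c[k] * D[:, k]
    return x

def rls_dla_step(D, C, y, lam=0.99):
    """One RLS-DLA-style update (a sketch of the recursion as I understand
    it): code the new vector, then recursively update dictionary D and the
    running inverse-correlation matrix C (initialized to the identity)."""
    x = mp_code(D, y)
    r = y - D @ x                           # representation residual
    u = C @ x
    alpha = 1.0 / (lam + x @ u)
    D = D + alpha * np.outer(r, u)          # rank-one dictionary update
    C = (C - alpha * np.outer(u, u)) / lam  # forgetting factor discounts the past
    return D, C
```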

Francis Bach (INRIA-ENS)
Structured Sparsity-inducing Norms through Submodular Functions
Sparse methods for supervised learning aim at finding good linear predictors from as few variables as possible, i.e., with small cardinality of their supports. This combinatorial selection problem is often turned into a convex optimization problem by replacing the cardinality function by its convex envelope (tightest convex lower bound), in this case the L1-norm. In this paper, we investigate more general set-functions than the cardinality, that may incorporate prior knowledge or structural constraints which are common in many applications: namely, we show that for nondecreasing submodular set-functions, the corresponding convex envelope can be obtained from its Lovász extension, a common tool in submodular analysis. This defines a family of polyhedral norms, for which we provide generic algorithmic tools (subgradients and proximal operators) and theoretical results (conditions for support recovery or high-dimensional inference). By selecting specific submodular functions, we can give a new interpretation to known norms, such as those based on rank-statistics or grouped norms with potentially overlapping groups; we also define new norms, in particular ones that can be used as non-factorial priors for supervised learning.
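The key construction, sketched from the standard definitions rather than the talk itself: for a nondecreasing submodular F with F(∅) = 0, the Lovász extension f yields the convex envelope, and the norm is f applied to the absolute values.

```latex
% Lovász extension of F, evaluated at z >= 0 with z_{j_1} >= ... >= z_{j_p}:
f(z) = \sum_{k=1}^{p} z_{j_k}
       \bigl[\, F(\{j_1,\dots,j_k\}) - F(\{j_1,\dots,j_{k-1}\}) \,\bigr],
% and the induced structured sparsity norm (the convex envelope of
% w \mapsto F(\mathrm{supp}(w)) on the unit \ell_\infty ball):
\Omega(w) = f(|w|).
% Sanity check: F = cardinality gives f(z) = \sum_k z_k, so \Omega = \ell_1.
```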

Karin Schnass (RICAM)
Dictionary Identification - Sparse Matrix-Factorisation via L1-Minimisation
The talk will be about the problem of learning a dictionary providing sparse representations for a given signal class via $\ell_1$-minimisation. The problem can also be seen as factorising a $d \times N$ matrix $Y=(y_1 \ldots y_N)$, $y_n \in \mathbb{R}^d$, of training signals into a $d \times K$ dictionary matrix $\Phi$ and a $K \times N$ coefficient matrix $X=(x_1 \ldots x_N)$, $x_n \in \mathbb{R}^K$, which is sparse. The exact question studied here is when a dictionary-coefficient pair $(\Phi,X)$ can be recovered as a local minimum of a (nonconvex) $\ell_1$-criterion with input $Y=\Phi X$. First, for general dictionaries and coefficient matrices, algebraic conditions ensuring local identifiability are derived, which are then specialised to the case when the dictionary is a basis. Finally, assuming a random Bernoulli-Gaussian sparse model on the coefficient matrix, it is shown that sufficiently incoherent bases are locally identifiable with high probability. The perhaps surprising result is that the typically sufficient number of training samples $N$ grows, up to a logarithmic factor, only linearly with the signal dimension, i.e. $N \approx C K \log K$, in contrast to previous approaches requiring combinatorially many samples.
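For concreteness, here is how one might generate training data under the Bernoulli-Gaussian sparse model mentioned in the abstract (my own toy Python; the dimensions and activation probability are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
d, N, p = 32, 4000, 0.1             # signal dim, sample count, activation prob.
Phi = rng.standard_normal((d, d))   # a random basis (K = d atoms) ...
Phi /= np.linalg.norm(Phi, axis=0)  # ... normalized, incoherent w.h.p.
S = rng.random((d, N)) < p          # Bernoulli supports
X = S * rng.standard_normal((d, N)) # Gaussian amplitudes on the support
Y = Phi @ X                         # training signals fed to the l1 criterion
```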

Mike Davies (University of Edinburgh)
Rank Aware Algorithms for Joint Sparse Recovery
This talk will revisit the sparse multiple measurement vector (MMV) problem, where the aim is to recover a set of jointly sparse multichannel vectors from incomplete measurements. This problem has received increasing interest as an extension of single-channel sparse recovery, which lies at the heart of the emerging field of compressed sensing. However, MMV approximation also has links with work on direction-of-arrival estimation in the field of array signal processing. Inspired by these links, we consider a new family of MMV algorithms based on the well-known MUSIC method in array processing, highlighting the role of the rank of the unknown signal matrix in determining the difficulty of the recovery problem. We will show that the recovery conditions for our new algorithms take into account the observed rank of the signal matrix. This is in contrast to previously proposed techniques such as Simultaneous Orthogonal Matching Pursuit (SOMP) and mixed-norm minimization techniques, which we show to be effectively blind to the rank information. Numerical simulations demonstrate that our new rank-aware techniques are significantly better than existing methods in dealing with multiple measurements.
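The MUSIC connection is easy to sketch: when the coefficient matrix has full rank k, the range of the measurements equals the span of the k active atoms, so support identification reduces to a subspace test (my illustrative Python, not the speaker's algorithms):

```python
import numpy as np

def mmv_music_support(Y, A, k):
    """MUSIC-style support identification for the MMV problem (sketch):
    estimate the signal subspace from the measurements and keep the k atoms
    best aligned with it. Assumes the unknown coefficient matrix has rank k."""
    U = np.linalg.svd(Y, full_matrices=False)[0][:, :k]  # signal subspace
    # Alignment of each atom with the subspace (equals 1 on the true support).
    scores = np.linalg.norm(U.T @ A, axis=0) / np.linalg.norm(A, axis=0)
    return np.argsort(scores)[-k:]

# Toy check on a random joint-sparse instance (illustrative sizes).
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 120))
A /= np.linalg.norm(A, axis=0)
true = rng.choice(120, 5, replace=False)
X = np.zeros((120, 8))
X[true] = rng.standard_normal((5, 8))        # rank-5 jointly sparse signals
print(sorted(mmv_music_support(A @ X, A, 5)), sorted(true))
```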

Michael Elad (The Technion)
Exploiting Statistical Dependencies in Sparse Representations
In the commonly used sparse representation modeling, the atoms are assumed to be independent of each other when forming the signal. In this talk we shall introduce a statistical model called the Boltzmann Machine (BM) that enables such dependencies to be taken into account. Adopting a Bayesian point of view, we first treat the pursuit problem: given a signal, and assuming that the model parameters and the dictionary are known, find its sparse representation. We derive the exact MAP estimation, and show that just like in the independent case, this leads to an exponential search problem. We derive two algorithms for its evaluation: a greedy approximation approach for the general case, and an exact estimation that corresponds to a unitary dictionary and banded interaction matrix. We also consider the estimation of the model parameters, learning these parameters directly from training data. We show that given the signals' representations, this problem can be posed as a convex optimization task by using the Maximum Pseudo-Likelihood (MPL).
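A rough flavor of the greedy approximation, under my own assumptions about the setup: score each candidate atom by its residual correlation plus the change in the BM log-prior b^T s + s^T W s over support indicators s (an illustrative sketch, not the exact derivation from the talk):

```python
import numpy as np

def bm_greedy_pursuit(y, D, b, W, n_atoms):
    """Greedy pursuit sketch under a Boltzmann Machine support prior: pick
    atoms by residual correlation plus the log-prior gain of activating them,
    given the atoms already selected (W assumed symmetric, zero diagonal)."""
    K = D.shape[1]
    s = np.zeros(K)                 # support indicator in {0, 1}^K
    r = y.copy()
    for _ in range(n_atoms):
        corr = np.abs(D.T @ r)
        # Log-prior gain of flipping s_k from 0 to 1 given the current s.
        prior_gain = b + 2.0 * (W @ s)
        score = corr + prior_gain
        score[s == 1] = -np.inf     # do not reselect active atoms
        k = np.argmax(score)
        s[k] = 1.0
        c = D[:, k] @ r
        r -= c * D[:, k]
    return np.flatnonzero(s), r
```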

Pierre Vandergheynst (EPFL)
Transductive Learning with Spectral Graph Wavelets
In this talk I will quickly review the construction and properties of frames of spectral graph wavelets for functions defined on the vertices of a weighted undirected graph. I then leverage existing state-of-the-art algorithms in sparse recovery to formulate various versions of transductive learning on graphs. Comparing analysis- and synthesis-based algorithms reveals that the structure of the frame plays a very important role and that sparsity is nicely connected to a notion of smoothness on graphs. I take these lessons one step further to propose better optimization criteria for this problem and finish with a list of interesting open questions for the SMALL consortium.
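For readers unfamiliar with the construction, a spectral graph wavelet is a kernel g applied to the graph Laplacian and localized at a vertex. A minimal sketch follows (exact eigendecomposition on a toy graph; the original construction uses Chebyshev polynomial approximations to scale, and the kernel and graph below are my own illustrative choices):

```python
import numpy as np

def spectral_graph_wavelet(W, g, t, center):
    """Sketch of a spectral graph wavelet: filter a delta at a chosen vertex
    through g(t * L), where L is the combinatorial graph Laplacian, computed
    here by eigendecomposition (fine for small graphs only)."""
    L = np.diag(W.sum(axis=1)) - W           # combinatorial graph Laplacian
    lam, U = np.linalg.eigh(L)               # spectrum of the graph
    delta = np.zeros(W.shape[0])
    delta[center] = 1.0
    return U @ (g(t * lam) * (U.T @ delta))  # g(tL) applied to the delta

# Toy usage on a ring graph with a simple band-pass kernel.
n = 32
W = np.zeros((n, n))
for i in range(n):
    W[i, (i + 1) % n] = W[(i + 1) % n, i] = 1.0
g = lambda x: x * np.exp(-x)                 # band-pass kernel (illustrative)
psi = spectral_graph_wavelet(W, g, t=2.0, center=0)
```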
