Monday, September 13, 2010

CS: Job offers, TFOCS: Templates for Convex Cone Problems with Applications to Sparse Signal Recovery, a course on Compressed Sensing in Belgium

This is a post I have wanted to write for a long time, since it contains information spanning education, algorithm development, and actual jobs in hardware development. In the mix today: several job offers, an approach to reconstruction that is likely to yield a wide range of useful new solvers, and a course. I'll leave the post up for the next few days, enjoy!

The job offers first (both of them are on the CS jobs page):
  • September 10th, 2010: Job Openings at InView Technology Corporation, Austin, TX. InView is a manufacturer of CS-based cameras and hyperspectral imagers, and is commercializing the research results of Rich Baraniuk and Kevin Kelly at Rice University. We are looking to fill a number of positions, especially a CS Signal Processing expert. Please see http://www.inviewcorp.com/careers.html
  • September 10th, 2010: Compressed Sensing Scientist / 9-Month Research Project 2011
    We seek to collaborate with a scientist for a research project proposal. The Research Task is as follows:

    Contributors are sought to significantly advance the throughput and efficiency of collecting large amounts of useful data through novel, mission-prioritized data acquisition and compression capabilities, without loss of signal integrity.

    Potential topics for consideration include, but are not limited to, the following:
    - Algorithm development or improvement allowing for user-defined prioritization of lossless data compression or extraction
    - Computational methods that assist in mission prioritized data collection, transmission or transfer
    - Development of machine learning methods to reduce total data collection
    - Investigation into compressed sensing techniques applicable for very large data collections

    If granted, this project would be awarded as a 9-month paid task. Our response, in the form of a proposal, is due Sept 15. v/r, Eric

Now onto the new reconstruction approach: TFOCS

Templates for Convex Cone Problems with Applications to Sparse Signal Recovery by Stephen Becker, Emmanuel J. Candes and Michael Grant. The abstract reads:

This paper develops a general framework for solving a variety of convex cone problems that frequently arise in signal processing, machine learning, statistics, and other fields. The approach works as follows: first, determine a conic formulation of the problem; second, determine its dual; third, apply smoothing; and fourth, solve using an optimal first-order method. A merit of this approach is its flexibility: for example, all compressed sensing problems can be solved via this approach. These include models with objective functionals such as the total-variation norm, ‖Wx‖₁ where W is arbitrary, or a combination thereof. In addition, the paper introduces a number of technical contributions such as a novel continuation scheme, a novel approach for controlling the step size, and some new results showing that the smoothed and unsmoothed problems are sometimes formally equivalent. Combined with our framework, these lead to novel, stable and computationally efficient algorithms. For instance, our general implementation is competitive with state-of-the-art methods for solving intensively studied problems such as the LASSO. Further, numerical experiments show that one can solve the Dantzig selector problem, for which no efficient large-scale solvers exist, in a few hundred iterations. Finally, the paper is accompanied with a software release. This software is not a single, monolithic solver; rather, it is a suite of programs and routines designed to serve as building blocks for constructing complete algorithms.
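To make the four-step recipe concrete, here is a minimal sketch (mine, not the TFOCS release) of the dual-smoothing idea applied to basis pursuit, min ‖x‖₁ subject to Ax = b: adding a strongly convex term (μ/2)‖x − x0‖² makes the dual problem smooth, the inner minimization reduces to soft-thresholding in closed form, and an accelerated first-order method does the rest. Function names and parameter defaults below are illustrative assumptions.

    import numpy as np

    def shrink(z, t):
        # Soft-thresholding: proximal operator of t * ||.||_1
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    def smoothed_bp(A, b, mu=0.1, x0=None, iters=500):
        # Smoothed basis pursuit: min ||x||_1 + (mu/2)||x - x0||^2  s.t. Ax = b,
        # solved by Nesterov-accelerated gradient ascent on the (now smooth) dual.
        m, n = A.shape
        x0 = np.zeros(n) if x0 is None else x0
        L = np.linalg.norm(A, 2) ** 2 / mu       # Lipschitz constant of the dual gradient
        lam = np.zeros(m)
        lam_prev = lam.copy()
        x = x0
        for k in range(1, iters + 1):
            y = lam + (k - 1.0) / (k + 2.0) * (lam - lam_prev)  # momentum step
            x = shrink(x0 + A.T @ y / mu, 1.0 / mu)             # closed-form inner minimizer
            lam_prev, lam = lam, y + (b - A @ x) / L            # gradient ascent on the dual
        return x

The same template should cover the other objectives mentioned in the abstract by swapping in a different proximal operator for the soft-thresholding step.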
From the contribution section:

The formulation of compressed sensing models in conic form is not widely known. Yet the convex optimization modeling framework CVX [22] converts all models into conic form; and the compressed sensing package ℓ1-Magic [13] converts problems into second-order cone programs (SOCPs). Both systems utilize interior-point methods instead of first-order methods, however. As mentioned above, the smoothing step is inspired by [35], and is similar in structure to traditional Lagrangian augmentation. As we also noted, first-order methods have been a subject of considerable research. Taken separately, then, none of the components in this approach is new. However, their combination and application to solve compressed sensing problems leads to effective algorithms that have not previously been considered. For instance, applying our methodology to the Dantzig selector gives a novel and efficient algorithm (in fact, it gives several novel algorithms, depending on which conic form is used). Numerical experiments presented later in the paper show that one can solve the Dantzig selector problem with a reasonable number of applications of A and its adjoint; the exact number depends upon the desired level of accuracy. In the case of the LASSO, our approach leads to novel algorithms which are competitive with state-of-the-art methods such as SPGL1.

Aside from good empirical performance, we believe that the primary merit of our framework lies in its flexibility. To be sure, all the compressed sensing problems listed at the beginning of this paper, and of course many others, can be solved via this approach. These include total-variation norm problems, ℓ1-analysis problems involving objectives of the form ‖Wx‖₁ where W is neither orthogonal nor diagonal, and so on. In each case, our framework allows us to construct an effective algorithm, thus providing a computational solution to almost every problem arising in sparse signal or low-rank matrix recovery applications.
Furthermore, in the course of our investigation, we have developed a number of additional technical contributions. For example, we will show that certain models, including the Dantzig selector, exhibit an exact penalty property: the exact solution to the original problem is recovered even when some smoothing is applied. We have also developed a novel continuation scheme that allows us to employ more aggressive smoothing to improve solver performance while still recovering the exact solution to the unsmoothed problem. The flexibility of our template also provides opportunities to employ novel approaches for controlling the step size.
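Reusing the hypothetical smoothed_bp sketch above, the continuation idea can be expressed in a few lines: re-center the proximity point x0 at the latest solution each round, so that aggressive smoothing can still end up at the unsmoothed solution when the exact penalty property holds. This is only a sketch of the idea, not the TFOCS implementation.

    def smoothed_bp_continuation(A, b, mu=1.0, rounds=5, iters=200):
        # Continuation: re-center the smoothing point x0 at each round; with an
        # exact penalty property, a few rounds recover the unsmoothed solution
        # even under aggressive (large) smoothing.
        x0 = np.zeros(A.shape[1])
        for _ in range(rounds):
            x0 = smoothed_bp(A, b, mu=mu, x0=x0, iters=iters)
        return x0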


The TFOCS page is here.


Also, Jared Tanner asked me to announce a course on CS in Belgium:

Would you mind announcing a short course I am giving on compressed sensing in Belgium this November and December?
Details are at: http://sites.uclouvain.be/socn/Courses/Courses2010-1
It will assume no background in the topic and will go through many of the celebrated proofs and main topics during the 18 hours of lectures. I understand that although it is targeted at PhD students in Belgium, anyone is welcome to attend, but it isn't free.
The description of the course reads:

Description

A revolution is underway in how information is collected and analysed. Traditionally, vast amounts of data are compiled, the essential features are extracted, and the remaining data disposed of. For instance, the resolution of consumer-grade digital cameras is rated in terms of millions of pixels (points) measured, whereas the resulting images are typically smaller by a factor of hundreds. To move beyond this highly inefficient model, techniques have been developed that learn from their earlier measurements, and in doing so try to determine the essential features with fewer adaptive measurements. Unfortunately, adaptive measurements must be taken sequentially, which limits the usefulness of such techniques in our modern real-time distributed computing environment.
In 2004 the Compressed Sensing paradigm was introduced by Donoho as well as Candes and Tao. From completely non-adaptive measurements, this technique recovers the dominant information in data at the minimum possible sampling rate achievable by learning algorithms. This is possible by replacing traditional point samples with randomised sampling and a novel reconstruction algorithm. The underlying principles of this technique have since been extended to other notions where some underlying simplicity can be exploited. For instance, matrix completion is a new technique where unknown entries in a matrix can be predicted from a small subset of its entries, provided the matrix can be well approximated by a low rank matrix.
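As a toy illustration of the non-adaptive measurement model just described (my own example, not part of the course): sample a k-sparse signal with m ≪ n random Gaussian projections taken all at once, then recover it with a standard sparse decoder, here a small Orthogonal Matching Pursuit routine.

    import numpy as np

    def omp(A, b, k):
        # Orthogonal Matching Pursuit: greedily add the column most correlated
        # with the residual, then re-fit by least squares on the chosen support.
        residual, support = b.copy(), []
        coef = np.zeros(0)
        for _ in range(k):
            support.append(int(np.argmax(np.abs(A.T @ residual))))
            coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
            residual = b - A[:, support] @ coef
        x = np.zeros(A.shape[1])
        x[support] = coef
        return x

    rng = np.random.default_rng(0)
    n, m, k = 400, 100, 8                          # ambient dim, measurements, sparsity
    x_true = np.zeros(n)
    x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
    A = rng.standard_normal((m, n)) / np.sqrt(m)   # non-adaptive random measurements
    b = A @ x_true                                 # m << n samples, taken all at once
    x_hat = omp(A, b, k)
    print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))  # tiny recovery error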
This course will: review methods from numerical harmonic analysis and approximation theory that give sparse/simple representations, introduce compressed sensing and methods by which the fundamental results can be proven, and explore extensions of compressed sensing and sparse approximation to matrix completion.
Session 1: A tour of numerical harmonic analysis. Review classical approximation properties of Fourier series and orthogonal polynomials. Introduce multiscale techniques, wavelets, and higher dimensional extensions such as curvelets and shearlets.
Session 2: Basis Pursuit as a method to construct data adaptive representations. Introduction to generic methods of analysis from sparse approximation: coherence, cumulative coherence, and the restricted isometry property. Prove basic results for linear programming and one step thresholding.
Session 3: Introduction to compressed sensing and encoder/decoder pairs. Analyze greedy decoders Orthogonal Matching Pursuit, Iterated Hard Thresholding (a minimal IHT sketch appears after this session list), and two-step greedy methods.
Session 4: The phase transition framework for decoders. Establish bounds on the restricted isometry constants for matrix ensembles: Gaussian, Fourier, and sparse discrete matrices.
Session 5: Geometric models of sparsity. Analysis of the linear programming decoder, nonnegativity uniqueness, bound constraints, and nonconvex models.
Session 6: Introduction to matrix completion. Low rank approximation, sampling and subspace coherence, and applications.
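As promised in the Session 3 entry, here is a minimal sketch of Iterated Hard Thresholding, one of the greedy decoders the course will analyze; the routine and its parameters are my own illustration, not course material.

    import numpy as np

    def iht(A, b, k, iters=300):
        # Iterated Hard Thresholding: gradient step on (1/2)||Ax - b||^2,
        # then keep only the k largest-magnitude entries (hard threshold).
        step = 1.0 / np.linalg.norm(A, 2) ** 2     # conservative step size
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            x = x + step * (A.T @ (b - A @ x))     # gradient descent step
            small = np.argsort(np.abs(x))[:-k]     # indices of all but the k largest
            x[small] = 0.0                         # project onto k-sparse vectors
        return x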


Credit: NASA/JPL/Space Science Institute, Quartet and Crescent  September 10, 2010
