Pages

Tuesday, December 14, 2010

CS: PhD studentship, more matrices, adaptive compressive sensing with dictionaries, RPCA for CT, Optimization Algorithms in Machine Learning

Yves Wiaux just sent me the following:
Dec 13th, 2010. PhD Research, EPFL, BASP NODE ANNOUNCEMENT for a PhD POSITION:

In the context of new funding obtained from the Swiss National Science Foundation (SNSF, http://www.snf.ch/) by Dr Y. Wiaux (BASP, http://lts2www.epfl.ch/~ywiaux/index-baspnode) and collaborators, a Ph.D. position is available at the Swiss Federal Institute of Technology in Lausanne (EPFL) as soon as early 2011, in the field of advanced Magnetic Resonance (MR) imaging. The position will be hosted by the Center for Biomedical Imaging (CIBM, http://www.cibm.ch/), in strong collaboration with the signal processing laboratories. It is open to highly qualified candidates holding a Master of Science in physics, electrical engineering, bioengineering, or any closely related field. Strong competence in signal processing (inverse problems: denoising, deconvolution, etc.; sparsity: wavelets, compressed sensing, etc.) and programming (MATLAB, C) is required. Competence in biomedical imaging, in particular MR imaging, is an advantage.

Note that EPFL offers very attractive salaries.

All application documents, including a cover letter, a CV (with full details of marks for the Master), and three reference letters, should be sent directly to Y. Wiaux by email (yves.wiaux@epfl.ch).
Thanks Yves.

We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm SOFT-IMPUTE iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity of order linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices; for example, SOFT-IMPUTE takes a few hours to compute low-rank approximations of a 10^6 x 10^6 incomplete matrix with 10^7 observed entries, and fits a rank-95 approximation to the full Netflix training set in 3.3 hours. Our methods achieve good training and test errors and exhibit superior timings when compared to other competitive state-of-the-art techniques.
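For intuition, the core soft-thresholded-SVD iteration can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' scalable implementation: the function name and parameters are my own, and a real implementation would use a low-rank SVD with warm starts rather than the full dense SVD used here.

```python
import numpy as np

def soft_impute(X, mask, lam, n_iters=100, tol=1e-4):
    """One regularization level of a SOFT-IMPUTE-style iteration.

    X    : matrix with observed values (entries where mask is False are ignored)
    mask : boolean array, True where an entry is observed
    lam  : nuclear-norm regularization parameter
    """
    Z = np.zeros_like(X, dtype=float)
    for _ in range(n_iters):
        # Fill the missing entries with the current low-rank estimate.
        filled = np.where(mask, X, Z)
        # Soft-threshold the singular values of the filled matrix.
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        s_thr = np.maximum(s - lam, 0.0)
        Z_new = (U * s_thr) @ Vt
        if np.linalg.norm(Z_new - Z) <= tol * max(1.0, np.linalg.norm(Z)):
            return Z_new
        Z = Z_new
    return Z
```

On the regularization path described in the abstract, one would call this repeatedly for a decreasing grid of `lam` values, warm-starting each call from the previous solution.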
Thanks Olivier.

In recent years, the theory of Compressed Sensing has emerged as an alternative to the Shannon sampling theorem, suggesting that compressible signals can be reconstructed from far fewer samples than the Shannon sampling theorem requires. In fact, the theory advocates that non-adaptive, 'random' functionals are in some sense optimal for this task. However, in practice Compressed Sensing is very difficult to implement for large data sets, since the algorithms are exceptionally slow and have high memory consumption. In this work, we present a new alternative method for simultaneous image acquisition and compression called Adaptive Compressed Sampling. Our approach departs fundamentally from the (non-adaptive) Compressed Sensing mathematical framework by replacing the 'universal' acquisition of incoherent measurements with a direct and fast method for adaptive transform coefficient acquisition. The main advantages of this direct approach are that no complex recovery algorithm is needed and that it allows more control over the compressed image quality, in particular the sharpness of edges. Our experimental results show that our adaptive algorithms perform better than existing non-adaptive methods in terms of image quality and speed.
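The idea of acquiring the significant transform coefficients directly, rather than reconstructing from random projections, can be illustrated with a toy 1-D Haar example. This is my own illustrative sketch, not the authors' algorithm, which operates on 2-D images with a dedicated acquisition scheme; all function names here are hypothetical.

```python
import numpy as np

def haar_1d(x):
    """Full multilevel 1-D Haar transform (length must be a power of two)."""
    x = x.astype(float).copy()
    n = x.size
    while n > 1:
        half = n // 2
        even, odd = x[:n:2].copy(), x[1:n:2].copy()
        x[:half] = (even + odd) / np.sqrt(2.0)   # averages (coarser scale)
        x[half:n] = (even - odd) / np.sqrt(2.0)  # details at this scale
        n = half
    return x

def ihaar_1d(c):
    """Inverse of haar_1d."""
    c = c.astype(float).copy()
    size, n = c.size, 1
    while n < size:
        half = n
        n *= 2
        avg, dif = c[:half].copy(), c[half:n].copy()
        c[:n:2] = (avg + dif) / np.sqrt(2.0)
        c[1:n:2] = (avg - dif) / np.sqrt(2.0)
    return c

def adaptive_acquire(signal, k):
    """Keep only the k largest-magnitude Haar coefficients (adaptive sampling)."""
    c = haar_1d(signal)
    keep = np.argsort(np.abs(c))[-k:]
    c_sparse = np.zeros_like(c)
    c_sparse[keep] = c[keep]
    return ihaar_1d(c_sparse)
```

A piecewise-constant signal has only a handful of nonzero Haar coefficients, so a few adaptively chosen measurements reconstruct it exactly, with no iterative recovery algorithm involved.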

We pose transductive classification as a matrix completion problem. By assuming the underlying matrix has a low rank, our formulation is able to handle three problems simultaneously: i) multi-label learning, where each item has more than one label, ii) transduction, where most of these labels are unspecified, and iii) missing data, where a large number of features are missing. We obtained satisfactory results on several real-world tasks, suggesting that the low rank assumption may not be as restrictive as it seems. Our method allows for different loss functions to apply on the feature and label entries of the matrix. The resulting nuclear norm minimization problem is solved with a modified fixed-point continuation method that is guaranteed to find the global optimum.
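As a concrete picture of the formulation (a hypothetical toy setup with sizes and names of my own choosing; the actual losses and solver are described in the paper), the stacked feature-and-label matrix and its observation mask might be built as:

```python
import numpy as np

rng = np.random.default_rng(0)
d, c, n = 5, 3, 8                              # features, labels, items (toy sizes)
X = rng.standard_normal((d, n))                # feature matrix
Y = (rng.random((c, n)) < 0.5).astype(float)   # multi-label matrix

# Stack features and labels into one (d + c) x n matrix assumed to be low rank.
M = np.vstack([X, Y])

# Mark which entries are observed: some features are missing, and the labels
# of the second half of the items are what transduction must fill in.
observed = np.ones_like(M, dtype=bool)
observed[:d] = rng.random((d, n)) < 0.8        # ~20% of features missing
observed[d:, n // 2:] = False                  # unlabeled items

# A nuclear-norm matrix-completion solver would recover the False entries;
# the recovered label block yields the transductive predictions.
```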

The purpose of this article on four-dimensional (4D) computed tomography (CT) is threefold. (1) A new spatiotemporal model is presented from a matrix perspective, with the row dimension in space and the column dimension in time, namely the Robust PCA based 4DCT model (Robust Principal Component Analysis based 4D CT). That is, instead of viewing the 4D object as a temporal collection of three-dimensional (3D) images and looking for local coherence in time or space independently, we view it as a mixture of a low-rank matrix and a sparse matrix and exploit the maximum temporal coherence of spatial structure among phases. Here the low-rank matrix corresponds to the "stationary" background or reference state in space over time, while the sparse matrix stands for the "moving" or "changing" components (under some proper sparsifying transform), e.g., tumors or organs in motion. Furthermore, this Robust PCA based 4DCT model is applicable to other imaging problems for motion reduction and/or change detection. (2) A dynamic data acquisition procedure, i.e., a temporally spiral scheme, is proposed that can potentially maintain similar reconstruction accuracy while using fewer projections of the data. The key point of this dynamic scheme is to reduce the total number of measurements, and hence the radiation dose, by acquiring complementary data in different phases without redundant measurements of the common background structure. (3) An accurate, efficient, yet simple-to-implement algorithm based on the split Bregman method is developed for solving the model problem with the sparse representation in tight frames.
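For a flavor of the low-rank-plus-sparse decomposition at the heart of the model, here is a minimal alternating proximal sketch in NumPy. This is an illustrative toy solver of my own (alternating singular-value and entrywise soft-thresholding on the penalized objective), not the split Bregman algorithm of the paper, and it omits the sparsifying-transform and CT reconstruction machinery entirely.

```python
import numpy as np

def svd_shrink(X, tau):
    """Singular-value soft-thresholding: prox of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def soft(X, tau):
    """Entrywise soft-thresholding: prox of tau * l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca(M, lam=None, mu=None, n_iters=200):
    """Split M into low-rank L ("background") plus sparse S ("motion") by
    alternating minimization of ||L||_* + lam*||S||_1 + ||M - L - S||_F^2 / (2*mu)."""
    if lam is None:
        lam = 1.0 / np.sqrt(max(M.shape))   # common default weight
    if mu is None:
        mu = 0.25 * np.abs(M).mean()        # heuristic threshold scale
    L = np.zeros_like(M, dtype=float)
    S = np.zeros_like(M, dtype=float)
    for _ in range(n_iters):
        L = svd_shrink(M - S, mu)           # exact minimizer over L
        S = soft(M - L, lam * mu)           # exact minimizer over S
    return L, S
```

In the 4DCT picture, each column of `M` would be one temporal phase flattened in space, so `L` captures the shared anatomy across phases and `S` the phase-to-phase changes.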

Presentation on Optimization Algorithms in Machine Learning by Stephen Wright





Image Credit: NASA/JPL/Space Science Institute,
W00065917.jpg was taken on December 05, 2010 and received on Earth December 06, 2010. The camera was pointing toward SATURN at approximately 2,218,496 kilometers away, and the image was taken using the MT2 and CL2 filters.
