
Friday, May 31, 2013

A Postdoc and a Ph.D. in Medical Imaging, Machine Learning and Compressive Sensing at INRIA, Rennes, France

Remi Gribonval just sent me the following:
Dear Igor,

...Would you mind advertising the Ph.D. offer and Post-doc position below on Nuit Blanche?

Cheers,
Remi.
Sure I can, but if you (or anybody who wants to post a job announcement) want maximum exposure, you could also post that announcement in the following groups: the Google+ Community, the CompressiveSensing subreddit, the LinkedIn Compressive Sensing group or the Matrix Factorization group!

----
The VISAGES and PANAMA teams at Inria Rennes are seeking highly qualified candidates with a background in applied mathematics, machine learning, and image processing for a Ph.D. and a Postdoc research project in neuro-imaging.

  • 1 Postdoc in Computer Vision, Medical Image Analysis, Machine Learning and Compressive Sensing at INRIA, Rennes, France

Deadline for application: June 30th, 2013

Scientific Environment:
This work will be conducted in the context of the Inria National research project HEROES and the Labex CominLabs Hemisfer project. It will be conducted in collaboration between the Unit/Project VISAGES U746 (INSERM / INRIA / CNRS / University of Rennes I) whose research activities are directed towards neuroimaging and medical image processing, and the PANAMA Inria research Team whose research activities are directed towards machine learning and compressive sensing. This work will benefit from a new research 3T MRI system provided by the NeurInfo in-vivo neuroimaging platform on which these new research protocols will be set up.

Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Start-ups: GraphLab, Wise.io, InView, Centice, Aqueti

Nuit Blanche sits at the crossroads of several subject areas (compressive sensing, machine learning, graph computations) that all aim at taming and making sense of the tsunami of data that is slowly but surely overpowering us, so it is no wonder that some of the people familiar with the themes of Nuit Blanche are also among those deciding to take their ideas and concepts into the real world, i.e. into start-ups. Here are some:



If you want to know more about GraphLab, you really want to attend the workshop coming up on July 1st. They just recently released their list of speakers (i.e. it is not a preliminary agenda anymore). Danny Bickson has a blog.



If you recall, Dan Starr was one of the organizers of the widely successful Berkeley meeting "From Data to Knowledge: Machine-Learning with Real-time and Streaming Applications".

On the hardware side of things:


Inview Corp
Inview focuses, in part, on developing some of the intellectual property from Rice on the Single Pixel Camera for SWIR-type applications and more. Here is their recent press release, which I featured on the blog.
Bob Bridge and Kevin Kelly are part of the management team, while Rich Baraniuk is a director on the Board of Directors.
Centice

From Centice's Technology page:

What is coded aperture spectroscopy?
Spectroscopy has been around for years and historically scientists used this technology to understand the chemical structure of materials.  Coded apertures are grids, gratings or other patterns of materials opaque to various wavelengths of light.  Coded apertures are used in x-rays because their high energies pass through normal lenses and mirrors.  So by combining the two scientific measurement techniques, one is able to view the chemical structure of materials, such as narcotics, through lenses.
David Brady is part of the executive team.
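
To make the coded-aperture idea a little more concrete, here is a minimal, self-contained sketch in Python/NumPy: a sparse spectrum is measured through a handful of random 0/1 masks and then recovered with a basic iterative soft-thresholding solver. This is only an illustration of the principle, not Centice's actual system; the mask model, the sizes and the solver are all assumptions made for this example.

```python
# Toy coded-aperture spectroscopy sketch: NOT Centice's pipeline, just an
# illustration of recovering a sparse spectrum from random binary masks.
import numpy as np

rng = np.random.default_rng(0)

n = 256          # number of spectral bins
m = 64           # number of coded-aperture measurements (m << n)
k = 5            # number of active spectral lines (sparsity)

# Ground-truth spectrum: a few narrow lines with random positive amplitudes.
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.uniform(1.0, 3.0, k)

# Each row of A is one coded aperture: a random 0/1 mask over the spectral bins.
A = rng.integers(0, 2, size=(m, n)).astype(float)
y = A @ x_true + 0.01 * rng.standard_normal(m)   # noisy measurements

# Recover the spectrum with ISTA (iterative soft thresholding) for
# min_x 0.5*||A x - y||^2 + lam*||x||_1.
lam = 0.05
L = np.linalg.norm(A, 2) ** 2                    # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(500):
    grad = A.T @ (A @ x - y)
    x = x - grad / L
    x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)   # soft threshold

print("recovered support:", np.nonzero(x > 0.1)[0])
print("true support     :", np.sort(np.nonzero(x_true)[0]))
```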


Aqueti

From the technology page:
Aqueti builds Aware gigapixel cameras using MC2 microcameras, gigagon lenses and zoomcast technology.

Aqueti is dedicated to information efficiency. The world is awash in visible information, almost all of which is undetected. Many things that we would like to see remain unseen because eyes and cameras cannot look everywhere at once. Simple mysteries, such as how the beating of a butterfly wing affects the environment or where each needle lands when a tree falls in the forest, could be resolved if we knew where and when to look. Exciting stories, such as how every player on the field and every fan in the stands responds to a well hit baseball, could be told if only we could capture all the information in the light around us. With Aqueti cameras there is no need to know where to “point and shoot.” We capture everything that a camera could see looking everywhere with revolutionary efficiency.
Parallel processing, which has been the lynchpin of supercomputing for the past quarter century, is Aqueti’s key innovation. Just as supercomputers are built from arrays of microprocessors, Aqueti builds supercameras from arrays of microcameras. If we had 1000 people, our cameras and eyes could look everywhere at once. This is the power of parallel processing. Unfortunately, 1000 cameras would cost a lot and take a lot of space. Aqueti’s multiscale camera technology implements parallel processing on the microscale to build cameras that capture the field of view of 1000 conventional cameras with a cost and volume comparable to a single system.
David Brady is part of the executive team.

As one can see, the five start-ups are looking at different facets of the themes we discuss here, from actual hardware sensing (InView, Centice, Aqueti) all the way to machine learning (Wise.io) and graph computations (GraphLab). If you have a start-up with a theme covered here and you are a reader of the blog, please let me know and I'll add you to the list.


Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Thursday, May 30, 2013

SPARS13 Abstracts, ROKS 2013 List of Papers, SAHD2013 and other CS/MF related meetings

While ICASSP13 is in full swing (the list of accepted papers is here), let's see what other meetings are on the horizon.

For SPARS13, we now have the list of abstracts from the webpage:

Monday, July 8.

 
08:00  Registration opens
08:40 - 08:50  Opening remarks
08:50 - 09:50  Plenary talk - Stéphane Mallat
09:50 - 10:10  Scalable and accurate quantum tomography from fewer measurements
Stephen Becker, Volkan Cevher   abstract   video
10:10 - 10:30  Matrix completion algorithms with optimal phase transition
Jared Tanner, Ke Wei   abstract   video
10:30 - 11:00  Coffee break  (Provided)
11:00 - 11:20  The degrees of freedom of the group Lasso for a general design
Samuel Vaiter, Gabriel Peyré, Jalal Fadili, Charles-Alban Deledalle, Charles Dossal   abstract   video
11:20 - 11:40  SPARsity and Clustering Regularization for Regression
Xiangrong Zeng, Mario Figueiredo   abstract   video
11:40 - 12:00  Tractability of Interpretability via Selection of Group-Sparse Models
Nirav Bhan, Luca Baldassarre, Volkan Cevher   abstract   video
12:00 - 13:30  Lunch break
13:30 - 14:30  Plenary talk - Assigning statistical significance in high-dimensional problems,  Peter Bühlmann
14:30 - 16:30  Poster/DEMO sessions
16:00 - 16:30  Coffee Break (Provided)
16:30 - 17:00  Highlight talk - The Computational Complexity of Spark, RIP, and NSP
Andreas Tillmann, Marc Pfetsch   abstract   video
17:00 - 17:20  Generalized Null Space and Restricted Isometry Properties
Tomer Peleg, Remi Gribonval, Mike Davies   abstract   video
17:20 - 17:40  Unrecoverable subsets by OMP and Basis Pursuit
Charles Soussen, Cedric Herzet, Jerome Idier, Remi Gribonval   abstract   video
17:40 - 18:00  Theoretical Performance Guarantees of Analysis Thresholding
Tomer Peleg, Michael Elad   abstract   video
18:00  Reception at the Rolex Learning Center
 

Tuesday, July 9.

 
08:00  Registration opens
08:50 - 09:50  Plenary talk - Wavelet for Graphs and its Deployment to Image Processing,  Michael Elad
09:50 - 10:10  Bits of Images: Inverting Local Image Binary Descriptors
Emmanuel D'Angelo, Laurent Jacques, Alexandre Alahi, Pierre Vandergheynst  abstract   video
10:10 - 10:30  Anisotropic Foveated Self-Similarity
Alessandro Foi, Giacomo Boracchi   abstract   video
10:30 - 11:00  Coffee break  (Provided)
11:00 - 11:20  Compressive Gaussian Mixture Estimation
Anthony Bourrier, Remi Gribonval, Patrick Pérez   abstract   video
11:20 - 11:40  Not All ℓp-Norms are Compatible with Sparse Stochastic Processes
Arash Amini, Michael Unser   abstract   video
11:40 - 12:00  A Bayesian Estimation for the Co-Sparse Analysis Model
Javier Turek, Irad Yavneh, Michael Elad   abstract   video
12:00 - 13:30  Lunch break
13:30 - 14:30  Plenary talk - Richard Baraniuk
14:30 - 14:50  Learning Measurement Matrices for Redundant Dictionaries
Chinmay Hegde, Aswin Sankaranarayanan, Richard Baraniuk   abstract   video
14:50 - 15:10  On Dictionary Identification via K-SVD
Karin Schnass   abstract   video
15:10 - 15:30  Can We Allow Linear Dependencies in the Dictionary in the Synthesis Framework?
Raja Giryes, Michael Elad   abstract   video
15:30 - 16:00  Coffee Break (Provided)
16:00  Special Lecture by Ronald DeVore
Title: An abbreviated and very personal history of nonlinear approximation
Abstract: Nonlinear approximation plays a central role in signal/image processing as well as in numerical analysis. Many numerical algorithms are built on some form of nonlinear approximation. This talk will discuss the history of the developments of this subject. We will touch on notions such as adaptivity, sparsity, compressibility, and greedy algorithms, and trace their roots.
 

Wednesday, July 10.

 
08:00  Registration opens
08:50 - 09:50  Plenary talk - Rob Nowak
09:50 - 10:10  A Multipath Sparse Beamforming Method
Afsaneh Asaei, Bhiksha Raj, Volkan Cevher, Herve Bourlard   abstract   video
10:10 - 10:30  Compressive MIMO Radar with Random Sensor Array
Thomas Strohmer, Haichao Wang   abstract   video
10:30 - 11:00  Coffee break  (Provided)
11:00 - 11:20  From compression to compressed sensing: analog signals
Arian Maleki, Shirin Jalali   abstract   video
11:20 - 11:40  A stable and consistent approach to generalized sampling
Clarice Poon   abstract   video
11:40 - 12:00  1-Bit Matrix Completion
Mark Davenport, Yaniv Plan, Ewout van den Berg, Mary Wootters   abstract   video
12:00 - 13:30  Lunch break
13:30 - 14:30  Plenary talk - From Convex Feasibility to Optimization in Signal Recovery and Learning,  Patrick Combettes
14:30 - 16:30  Poster/DEMO sessions
16:00 - 16:30  Coffee Break (Provided)
16:30 - 17:00  Highlight talk - Compressive Harmonic Retrieval via Matrix Completion
17:00 - 17:20  GESPAR: Efficient Phase Retrieval of Sparse Signals
Yoav Shechtman, Amir Beck, Yonina Eldar   abstract   video
17:20 - 17:40  Modified Non-local Hard Thresholding for Super Resolution
Hassan Mansour, Yonina Eldar   abstract   video
17:40 - 18:00  Knowledge-enhanced compressive measurement designs for estimating sparse signals in clutter
Swayambhoo Jain, Akshay Soni, Jarvis Haupt, Nikhil Rao, Robert Nowak   abstract  video
19:00  Banquet at The Palace Hotel, Lausanne
 

Thursday, July 11.

 
08:00  Registration opens
08:50 - 09:50  Plenary talk - Inderjit Dhillon
09:50 - 10:10  On Sparse Representation in Fourier and Canonical Bases
Pier Luigi Dragotti, Yue Lu   abstract   video
10:10 - 10:30  Breaking the coherence barrier: asymptotic incoherence and asymptotic sparsity in compressed sensing
Anders Hansen   abstract   video
10:30 - 11:00  Coffee break  (Provided)
11:00 - 11:20  Beyond incoherence: stable and robust image recovery from variable density frequency samples
Rachel Ward, Felix Krahmer   abstract   video
11:20 - 11:40  Clutter Mitigation in Echocardiography using Sparse Signal Separation
Javier Turek, Michael Elad, Irad Yavneh   abstract   video
11:40 - 12:00  Solution of Inverse Problem in Diffuse Optical Tomography using Compression of Sensitivity Maps on n-Simplex Meshes
Marta Betcke, Simon Arridge   abstract   video
12:00 - 13:30  Lunch break
13:30 - 14:30  Plenary talk - Sparse stochastic processes: A continuous-domain statistical framework for compressed sensing,  Michael Unser
14:30 - 16:30  Poster/DEMO sessions
16:00 - 16:30  Coffee Break (Provided)
16:30 - 17:00  Highlight talk - Sketched SVD: Recovering Spectral Features from Compressive Measurements
17:00 - 17:20  Multichannel Blind Deconvolution Using Low-rank and Sparse Decomposition
Mahdad Hosseini Kamal, Pierre Vandergheynst   abstract   video
17:20 - 17:40  Blind Calibration for Phase Shifts in Compressive Systems
Cagdas Bilen, Remi Gribonval, Gilles Puy, Laurent Daudet   abstract   video
17:40 - 18:00  The phase diagram of dictionary learning and blind calibration
Florent Krzakala, Marc Mezard, Lenka Zdeborova   abstract   video


For ROKS 2013, we have a list of presentation titles:

Oral session 1: Feature selection and sparsity
(July 8, 15:10-16:40)


  • The graph-guided group lasso for genome-wide association studies
    Zi Wang and Giovanni Montana
    Affiliations: Imperial College London

  • Feature Selection via Detecting Ineffective Features
    Kris De Brabanter and Laszlo Gyorfi
    Affiliations: KU Leuven and Budapest University of Technology and Economics

  • Sparse network-based models for patient classification using fMRI
    Maria J. Rosa, Liana Portugal, John Shawe-Taylor and Janaina Mourao-Miranda
    Affiliations: Computer Science Department, University College London, UK

Oral session 2: Optimization algorithms
(July 9, 11:00-12:30)


  • Incremental Forward Stagewise Regression: Computational Complexity and Connections to LASSO
    Robert Freund, Paul Grigas and Rahul Mazumder
    Affiliations: MIT Sloan School of Management and MIT Operations Research Center

  • Convergence analysis of stochastic gradient descent on strongly convex objective functions
    Cheng Tang and Claire Monteleoni
    Affiliations: The George Washington University

  • Fixed-Size Pegasos for Large Scale Pinball Loss SVM
    Vilen Jumutc, Xiaolin Huang and Johan A.K. Suykens
    Affiliations: KULeuven

Oral session 3: Kernel methods and support vector machines
(July 9, 16:30-18:30)


  • Output Kernel Learning Methods
    Francesco Dinuzzo, Cheng Soon Ong and Kenji Fukumizu
    Affiliations: Max Planck Institute for Intelligent Systems Tuebingen

  • Deep Support Vector Machines for Regression Problems
    M.A. Wiering, M. Schutten, A. Millea, A. Meijster and L.R.B. Schomaker
    Affiliations: University of Groningen

  • Subspace Learning and Empirical Operator Estimation
    Alessandro Rudi, Guille D. Canas and Lorenzo Rosasco
    Affiliations: Istituto Italiano di Tecnologia and IIT-MIT and DIBRIS, Università di Genova

  • Kernel based identification of systems with multiple outputs using nuclear norm regularization
    Tillmann Falck, Bart De Moor and Johan A.K. Suykens
    Affiliations: KU Leuven

Oral session 4: Structured low-rank approximation
(July 10, 11:00-12:30)


  • First-order methods for low-rank matrix factorization applied to informed source separation
    Augustin Lefevre and Francois Glineur
    Affiliations: ICTEAM institute - Universite Catholique de Louvain-la-Neuve and CORE institute - Universite Catholique de Louvain-la-Neuve

  • Structured low-rank approximation as optimization on a Grassmann manifold
    Konstantin Usevich and Ivan Markovsky
    Affiliations: Department ELEC, Vrije Universiteit Brussel

  • Scalable Structured Low Rank Matrix Optimization Problems
    Marco Signoretto, Volkan Cevher and Johan A.K. Suykens
    Affiliations: ESAT-SCD/SISTA KULeuven, LIONS EPFL

Oral session 5: Robustness
(July 10, 16:30-18:00)


  • Learning with Marginalized Corrupted Features
    Laurens van der Maaten, Minmin Chen, Stephen Tyree and Kilian Weinberger
    Affiliations: Delft University of Technology and Washington University in St. Louis

  • Robust regularized M-estimators of regression parameters and covariance matrix
    Esa Ollila, Hyon-Jung Kim and Visa Koivunen
    Affiliations: Department of Signal Processing and Acoustics, Aalto University, Finland

  • Robust Near-Separable Nonnegative Matrix Factorization Using Linear Optimization
    Nicolas Gillis and Robert Luce 
    Affiliations: Universite catholique de Louvain and T.U. Berlin

Poster session 1
(July 9, 13:15-14:30)


  • Data-Driven and Problem-Oriented Multiple-Kernel Learning
    Valeriya Naumova and Sergei V. Pereverzyev
    Affiliations: Johann Radon Institute for Computational and Applied Mathematics (RICAM) Austrian Academy of Sciences

  • Support Vector Machine with spatial regularization for pixel classification
    Remi Flamary and Alain Rakotomamonjy
    Affiliations: Universite de Nice Sophia Antipolis, Laboratoire Lagrange, OCA, CNRS, LITIS, Universite de Rouen

  • Regularized structured low-rank approximation
    Mariya Ishteva and Konstantin Usevich and Ivan Markovsky
    Affiliations: Dept. ELEC, Vrije Universiteit Brussel

  • A Heuristic Approach to Model Selection for Online Support Vector Machines
    Davide Anguita, Alessandro Ghio, Isah A. Lawal and Luca Oneto
    Affiliations: DITEN - University of Genoa

  • Lasso and Adaptive Lasso with Convex Loss Functions
    Wojciech Rejchel
    Affiliations: Nicolaus Copernicus University, Torun, Poland

  • Conditional Gaussian Graphical Models for Multi-output Regression of Neuroimaging Data
    Andre F Marquand, Maria Joao Rosa and Orla Doyle
    Affiliations: King's College London and University College London and King's College London

  • High-dimensional convex optimization problems via optimal affine subgradient algorithms
    Masoud Ahookhosh and Arnold Neumaier
    Affiliations: Universitaet Wien

  • Joint Estimation of Modular Gaussian Graphical Models
    Jose Sanchez and Rebecka Jornsten
    Affiliations: Chalmers University of Technology and University of Gothenburg

  • Learning Rates of l1-regularized Kernel Regression
    Lei Shi, Xiaolin Huang and Johan A.K. Suykens
    Affiliations: Department of Electrical Engineering, Katholieke Universiteit Leuven

  • Reduced Fixed-Size LSSVM for Large Scale Data
    Raghvendra Mall and Johan A.K. Suykens
    Affiliations: KU Leuven

Poster session 2
(July 10, 13:15-14:30)


  • Pattern Recognition for Neuroimaging Toolbox
    Jessica Schrouff, Maria J. Rosa, Jane Rondina, Andre Marquand, Carlton Chu, John Ashburner, Jonas Richiardi, Christophe Phillips and Janaina Mourao-Miranda
    Affiliations: Cyclotron Research Centre, University of Liege, Belgium and Computer Science Department, University College London, UK and Computer Science Department, University College London, UK and Institute of Psychology, King's College, London, UK and NIMH, NIH, Bethesda, USA and Wellcome Trust Centre for Neuroimaging, University College London, UK and Stanford University, USA and Cyclotron Research Centre, University of Liege, Belgium and Computer Science Department, University College London, UK

  • Stable LASSO for High-Dimensional Feature Selection through Proximal Optimization
    Roman Zakharov and Pierre Dupont
    Affiliations: UCL

  • Regularization in topology optimization
    Atsushi Kawamoto, Tadayoshi Matsumori, Daisuke Murai and Tsuguo Kondoh
    Affiliations: Toyota Central R&D Labs., Inc.

  • Classification of MCI and AD patients combining PET data and psychological scores
    Fermin Segovia, Christine Bastin, Eric Salmon and Christophe Phillips
    Affiliations: Cyclotron Research Centre, University of Liege, Belgium

  • Kernels design for Internet traffic classification
    Emmanuel Herbert, Stephane Senecal and Stephane Canu
    Affiliations: Orange Labs, LITIS/INSA Rouen

  • Kernel Adaptive Filtering: Which Technique to Choose in Practice
    Steven Van Vaerenbergh and Ignacio Santamaria
    Affiliations: Department of Communications Engineering, University of Cantabria, Spain

  • Structured Machine Learning for Mapping Natural Language to Spatial ontologies
    Parisa Kordjamshidi and Marie-Francine Moens
    Affiliations: KU Leuven

  • Windowing strategies for on-line multiple kernel regression
    Manuel Herrera and Rajan Filomeno Coelho
    Affiliations: BATir Dep. - Universite libre de Bruxelles

  • Non-parallel semi-supervised classification
    Siamak Mehrkanoon and Johan A.K. Suykens
    Affiliations: KU Leuven

  • Visualisation of neural networks for model reduction
    Tamas Kenesei and Janos Abonyi
    Affiliations: University of Pannonia, Department of Process Engineering

Credit: NASA/ESA, SOHO

Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Wednesday, May 29, 2013

Hierarchical Tucker Tensor Optimization - Applications to Tensor Completion

Curt Da Silva just sent me the following:

Hey Igor,

... I found your Nuit Blanche article on "A Riemannian geometry for low-rank matrix completion" [note from Igor: it's not mine, it's Bamdev Mishra, K. Adithya Apuroop and Rodolphe Sepulchre's] and I would like to make you aware of an extension, in a similar vein to this work, to the multidimensional interpolation case in the Hierarchical Tucker format, which can be found at https://www.slim.eos.ubc.ca/content/hierarchical-tucker-tensor-optimization-applications-tensor-completion . We use a manifold-based approach to efficiently interpolate Hierarchical Tucker tensors with missing entries, specifically for seismic examples. We should be releasing a preprint of our full work in the near future as well. Thank you for your consideration.

Sincerely,
Curt Da Silva

In this work, we develop an optimization framework for problems whose solutions are well-approximated by Hierarchical Tucker tensors, an efficient structured tensor format based on recursive subspace factorizations. Using the differential geometric tools presented here, we construct standard optimization algorithms such as Steepest Descent and Conjugate Gradient, for interpolating tensors in HT format. We also empirically examine the importance of one's choice of data organization in the success of tensor recovery by drawing upon insights from the Matrix Completion literature. Using these algorithms, we recover various seismic data sets with randomly missing source pairs.
I wonder if the QTT format could be useful as well.
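
For readers who want a feel for the underlying interpolation problem, here is a toy sketch of completing a low-rank object from a fraction of its entries using plain alternating least squares on a matrix. This is emphatically not the authors' Riemannian method on Hierarchical Tucker tensors, just its simplest relative; the sizes, rank and sampling ratio below are made up for illustration.

```python
# Toy low-rank MATRIX completion by alternating least squares (ALS).
# Illustrative only; the paper above works with Hierarchical Tucker TENSORS
# and a manifold-based optimization framework.
import numpy as np

rng = np.random.default_rng(1)

n1, n2, r = 60, 80, 4                 # matrix size and true rank
X_true = rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2))

mask = rng.random((n1, n2)) < 0.3     # observe ~30% of the entries
Y = np.where(mask, X_true, 0.0)

# Alternating least squares: fix R, solve for L row by row, then vice versa.
L = rng.standard_normal((n1, r))
R = rng.standard_normal((r, n2))
for _ in range(50):
    for i in range(n1):
        cols = mask[i]                # columns observed in row i
        L[i] = np.linalg.lstsq(R[:, cols].T, Y[i, cols], rcond=None)[0]
    for j in range(n2):
        rows = mask[:, j]             # rows observed in column j
        R[:, j] = np.linalg.lstsq(L[rows], Y[rows, j], rcond=None)[0]

X_hat = L @ R
err = np.linalg.norm(X_hat - X_true) / np.linalg.norm(X_true)
print(f"relative completion error: {err:.2e}")
```

Roughly speaking, the Hierarchical Tucker setting replaces the two matrix factors with a tree of subspace factorizations and the unconstrained least-squares steps with updates along the corresponding manifold.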



Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Tuesday, May 28, 2013

Manopt: A Matlab toolbox for optimization on manifolds - implementation -

In the previous entry, Bamdev Mishra also mentioned the release of Manopt, but I wanted to dedicate an entry just to that implementation. Please note the tutorial page and the forum:





Manopt is a Matlab toolbox for optimization on manifolds. Optimization on manifolds is a powerful paradigm to address nonlinear optimization problems. With Manopt, it is easy to deal with various types of constraints that arise naturally in applications, such as orthonormality or low rank. It comes with a large library of manifolds and ready-to-use Riemannian optimization algorithms. It is well documented and includes diagnostics tools to help you get started quickly and make sure you make no mistakes along the way. It is designed to provide great flexibility in describing your cost function and incorporates an optional caching system for more efficiency.
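
To give a flavor of what "optimization on manifolds" means, here is a bare-bones NumPy sketch: Riemannian gradient ascent on the unit sphere, used to find the dominant eigenvector of a symmetric matrix. It only mimics the spirit of the approach and does not use Manopt's (Matlab) API; the step size and iteration count are arbitrary choices for the example.

```python
# Minimal Riemannian gradient ascent on the unit sphere (illustration only,
# not Manopt): maximize x^T A x subject to ||x|| = 1, i.e. find the
# dominant eigenvector of a symmetric matrix A.
import numpy as np

rng = np.random.default_rng(2)

n = 50
M = rng.standard_normal((n, n))
A = M @ M.T                                  # symmetric positive semidefinite

x = rng.standard_normal(n)
x /= np.linalg.norm(x)                       # start on the unit sphere

step = 0.1 / np.linalg.norm(A, 2)            # conservative step size
for _ in range(2000):
    egrad = 2 * A @ x                        # Euclidean gradient of x^T A x
    rgrad = egrad - (x @ egrad) * x          # project onto the tangent space at x
    x = x + step * rgrad                     # move along the tangent direction
    x /= np.linalg.norm(x)                   # retract back onto the sphere

print("Rayleigh quotient :", x @ A @ x)
print("largest eigenvalue:", np.linalg.eigvalsh(A)[-1])
```

In Manopt itself, the manifold object supplies the projection and retraction, so the solver code stays generic across manifolds.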


Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.