
Friday, May 31, 2019

"Generative models are the new sparsity": The spiked matrix model with generative priors - implementation -

** Nuit Blanche is now on Twitter: @NuitBlog **

Out of all the good things coming out of this paper, here is what I note: the Great Convergence continues, and phase transitions show up here as well. Also, even though the reconstruction is noisy, it is still perceptually acceptable, since human vision has always had to cope with all sorts of imprecision.




Using a low-dimensional parametrization of signals is a generic and powerful way to enhance performance in signal processing and statistical inference. A very popular and widely explored type of dimensionality reduction is sparsity; another type is generative modelling of signal distributions. Generative models based on neural networks, such as GANs or variational auto-encoders, are particularly performant and are gaining in applicability. In this paper we study spiked matrix models, where a low-rank matrix is observed through a noisy channel. The variant of this problem with a sparse structure of the spikes has attracted broad attention in the past literature. Here, we replace the sparsity assumption by generative modelling, and investigate the consequences on statistical and algorithmic properties. We analyze the Bayes-optimal performance under specific generative models for the spike. In contrast with the sparsity assumption, we do not observe regions of parameters where statistical performance is superior to the best known algorithmic performance. We show that in the analyzed cases the approximate message passing algorithm is able to reach optimal performance. We also design enhanced spectral algorithms and analyze their performance and thresholds using random matrix theory, showing their superiority over classical principal component analysis. We complement our theoretical results by illustrating the performance of the spectral algorithms when the spikes come from real datasets.
An implementation of the results can be found here: https://github.com/sphinxteam/StructuredPrior_demo
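To make the setting concrete, here is a minimal numpy sketch (mine, not code from the paper or the repository above) of the rank-one spiked Wigner model with vanilla PCA as the estimator; the paper's point is that AMP and their enhanced spectral methods improve on this baseline when the spike comes from a generative prior:

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam = 2000, 3.0  # dimension and signal-to-noise ratio

# Rank-one spiked Wigner model: Y = sqrt(lam/n) * x x^T + xi,
# with xi symmetric and entries of unit variance.
x = rng.standard_normal(n)                 # planted spike, ||x||^2 ~ n
xi = rng.standard_normal((n, n))
xi = (xi + xi.T) / np.sqrt(2)
Y = np.sqrt(lam / n) * np.outer(x, x) + xi

# Classical PCA estimate: leading eigenvector of Y.
_, eigvecs = np.linalg.eigh(Y)             # eigenvalues in ascending order
x_hat = eigvecs[:, -1]

# Squared cosine overlap with the planted spike. Above the BBP
# phase transition (lam > 1) it concentrates around 1 - 1/lam;
# below it, PCA carries no information about the spike.
overlap = (x @ x_hat) ** 2 / (x @ x)
print(f"squared overlap: {overlap:.3f}")   # ~0.67 for lam = 3
```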



Follow @NuitBlog or join the CompressiveSensing Reddit, the Facebook page, the Compressive Sensing group on LinkedIn, or the Advanced Matrix Factorization group on LinkedIn.

Liked this entry? Subscribe to Nuit Blanche's feed; there's more where that came from. You can also subscribe to Nuit Blanche by Email.

Other links:
Paris Machine Learning: Meetup.com || @Archives || LinkedIn || Facebook || @ParisMLGroup
About LightOn: Newsletter || @LightOnIO || on LinkedIn || on CrunchBase || our Blog
About myself: LightOn || Google Scholar || LinkedIn || @IgorCarron || Homepage || ArXiv

Thursday, May 30, 2019

Learning step sizes for unfolded sparse coding





Sparse coding is typically solved by iterative optimization techniques, such as the Iterative Shrinkage-Thresholding Algorithm (ISTA). Unfolding and learning weights of ISTA using neural networks is a practical way to accelerate estimation. In this paper, we study the selection of adapted step sizes for ISTA. We show that a simple step size strategy can improve the convergence rate of ISTA by leveraging the sparsity of the iterates. However, it is impractical in most large-scale applications. Therefore, we propose a network architecture where only the step sizes of ISTA are learned. We demonstrate that for a large class of unfolded algorithms, if the algorithm converges to the solution of the Lasso, its last layers correspond to ISTA with learned step sizes. Experiments show that our method is competitive with state-of-the-art networks when the solutions are sparse enough.
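For readers who want to see exactly which knob the paper proposes to learn, here is a plain numpy sketch of ISTA for the Lasso (my own toy version, not the authors' code); `step` below is the quantity that the unfolded network replaces with per-layer learned values:

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t * ||.||_1.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(D, y, reg, n_iter=100, step=None):
    # Minimizes 0.5 * ||y - D x||^2 + reg * ||x||_1.
    # The default step 1/L, with L the largest eigenvalue of D^T D,
    # is the worst-case constant that learned, sparsity-adapted
    # step sizes can beat in the last layers of the network.
    if step is None:
        step = 1.0 / np.linalg.norm(D, ord=2) ** 2
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - step * D.T @ (D @ x - y), step * reg)
    return x

# Toy usage: recover an 8-sparse code from a random dictionary.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256)) / np.sqrt(64)
x_true = np.zeros(256)
x_true[rng.choice(256, size=8, replace=False)] = 1.0
x_hat = ista(D, y=D @ x_true, reg=0.01)
print(np.count_nonzero(np.abs(x_hat) > 1e-3))
```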




Wednesday, May 29, 2019

Sparse-Plex library and a fast OMP implementation




Shailesh (also Shailesh1729) sent me the following a few months ago:
Hi Igor,

I have been a regular reader of your blog Nuit Blanche.


I would like to draw your attention to a fast C implementation of OMP with MATLAB interfaces that I wrote. Its documentation is available here. It is up to 4 times faster than the OMP implementation in OMPBOX. I hope you will find this information useful and worth sharing on your blog/Twitter.


I have written fast C implementations of other algorithms as well, whose documentation I am currently updating.




With regards,
- Shailesh
Thanks, Shailesh! Of obvious interest is his Sparse-Plex library for learning about Compressive Sensing. It is here.
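For context on what is being accelerated, here is a bare-bones numpy version of Orthogonal Matching Pursuit (a didactic sketch of the generic algorithm, nothing like the optimized C code being announced). Fast implementations typically avoid the repeated full least-squares solve below by maintaining an incremental Cholesky factorization of the Gram matrix of the selected atoms:

```python
import numpy as np

def omp(D, y, k):
    # Orthogonal Matching Pursuit: greedily select k atoms
    # (columns of D, assumed unit-norm) to approximate y.
    residual = y.copy()
    support = []
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        support.append(j)
        # Re-fit y on all selected atoms by least squares
        # (this orthogonalization is the "O" in OMP).
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x
```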





Tuesday, May 28, 2019

Differentially Private Compressive k-Means






This work addresses the problem of learning from large collections of data with privacy guarantees. The sketched learning framework proposes to deal with the large scale of datasets by compressing them into a single vector of generalized random moments, from which the learning task is then performed. We modify the standard sketching mechanism to provide differential privacy, using addition of Laplace noise combined with a subsampling mechanism (each moment is computed from a subset of the dataset). The data can be divided between several sensors, each applying the privacy-preserving mechanism locally, yielding a differentially private sketch of the whole dataset once the local sketches are combined. We apply this framework to the k-means clustering problem, for which we provide a measure of the utility of the mechanism in terms of a signal-to-noise ratio, and discuss the resulting privacy-utility tradeoff.
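As a rough illustration of the mechanism described in the abstract, here is a numpy sketch of private sketching: average random Fourier moments of the dataset, then add Laplace noise. The noise calibration below is a crude placeholder, not the paper's (which also involves the subsampling mechanism and a proper sensitivity analysis):

```python
import numpy as np

rng = np.random.default_rng(0)

def private_sketch(X, Omega, epsilon):
    # X: (n, d) dataset; Omega: (d, m) random frequencies.
    # Generalized random moments: complex exponentials averaged
    # over the dataset, as in compressive (sketched) learning.
    n = X.shape[0]
    sketch = np.exp(1j * X @ Omega).mean(axis=0)
    # Each moment has modulus <= 1, so one record shifts each
    # averaged component by at most 2/n in modulus. The scale
    # below is a placeholder; a real DP guarantee needs the full
    # sensitivity analysis across all m components.
    scale = 2.0 / (n * epsilon)
    noise = (rng.laplace(0.0, scale, size=sketch.shape)
             + 1j * rng.laplace(0.0, scale, size=sketch.shape))
    return sketch + noise

# Usage: sketch 10,000 two-dimensional points into 64 noisy moments;
# k-means centroids are then fit to the sketch, not to the data.
X = rng.standard_normal((10_000, 2))
Omega = 2.0 * rng.standard_normal((2, 64))
s = private_sketch(X, Omega, epsilon=1.0)
```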



Monday, May 27, 2019

Estimating the inverse trace using random forests on graphs



I really like the meta aspect of this paper: using machine learning techniques to perform machine learning computations. Estimating the inverse trace using random forests on graphs by Simon Barthelmé, Nicolas Tremblay, Alexandre Gaudillière, Luca Avena, Pierre-Olivier Amblard
Some data analysis problems require the computation of (regularised) inverse traces, i.e. quantities of the form $\operatorname{Tr}\,(q\mathbf{I} + \mathbf{L})^{-1}$. For large matrices, direct methods are infeasible and one must resort to approximations, for example using a conjugate gradient solver combined with Girard's trace estimator (also known as Hutchinson's trace estimator). Here we describe an unbiased estimator of the regularised inverse trace, based on Wilson's algorithm, an algorithm that was initially designed to draw uniform spanning trees in graphs. Our method is fast, easy to implement, and scales to very large matrices. Its main drawback is that it is limited to diagonally dominant matrices $\mathbf{L}$.
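For reference, the baseline the abstract contrasts with, Girard's/Hutchinson's estimator combined with a conjugate gradient solver, fits in a few lines of scipy; the paper's actual contribution, the forest estimator based on Wilson's algorithm, is not shown here:

```python
import numpy as np
from scipy.sparse import identity, random as sprandom
from scipy.sparse.csgraph import laplacian
from scipy.sparse.linalg import cg

def girard_inverse_trace(L, q, n_probes=50, rng=None):
    # Estimate Tr((q I + L)^{-1}) via Girard/Hutchinson probes:
    # E[z^T A^{-1} z] = Tr(A^{-1}) for z with i.i.d. +-1 entries.
    # Each probe costs one conjugate gradient solve.
    rng = rng or np.random.default_rng()
    n = L.shape[0]
    A = q * identity(n, format="csr") + L
    total = 0.0
    for _ in range(n_probes):
        z = rng.choice([-1.0, 1.0], size=n)
        x, _info = cg(A, z)
        total += z @ x
    return total / n_probes

# Usage: the graph Laplacian of a random sparse graph (Laplacians
# are diagonally dominant, which is the setting the paper targets).
adj = sprandom(500, 500, density=0.02, random_state=0)
adj = adj + adj.T
L = laplacian(adj)
print(girard_inverse_trace(L, q=1.0, rng=np.random.default_rng(0)))
```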




Saturday, May 25, 2019

Saturday Morning Videos: Imaging and Machine Learning Workshop, @IHP Paris, April 1st–5th, 2019




This is the third workshop, «Imaging and Machine Learning», within the Mathematics of Imaging series organized in Paris this semester (videos of Workshop 1 are here, videos of Workshop 2 are here).

Structured prediction via implicit embeddings - Alessandro Rudi - Workshop 3 - CEB T1 2019


A Kernel Perspective for Regularizing Deep Neural Networks - Julien Mairal - Workshop 3 - CEB T1 2019

Optimization meets machine learning for neuroimaging - Alexandre Gramfort - Workshop 3 - CEB T1 2019

Random Matrix Advances in Machine Learning - Romain Couillet - Workshop 3 - CEB T1 2019

Iterative regularization via dual diagonal descent - Silvia Villa - Workshop 3 - CEB T1 2019

Scalable hyperparameter transfer learning - Valerio Perrone - Workshop 3 - CEB T1 2019

Using structure to select features in high dimension. - Chloe-Agathe Azencott - Workshop 3 - CEB T1 2019

Predicting aesthetic appreciation of images. - Naila Murray - Workshop 3 - CEB T1 2019

Learning Representations for Information Obfuscation (...) - Guillermo Sapiro - Workshop 3 - CEB T1 2019

Convex unmixing and learning the effect of latent (...) - Guillaume Obozinski - Workshop 3 - CEB T1 2019

Revisiting non-linear PCA with progressively grown autoencoders. - José Lezama - Workshop 3 - CEB T1 2019

On the several ways to regularize optimal transport. - Marco Cuturi - Workshop 3 - CEB T1 2019

Combinatorial Solutions to Elastic Shape Matching. - Daniel Cremers - Workshop 3 - CEB T1 2019

Is Artificial Intelligence Logical or Geometric? - Stéphane Mallat - Public Lecture - CEB T1 2019

Rank optimality for the Burer-Monteiro factorization - Irène Waldspurger - Workshop 3 - CEB T1 2019

Bayesian inversion for tomography through machine learning. - Ozan Öktem - Workshop 3 - CEB T1 2019

Understanding geometric attributes with autoencoders. - Alasdair Newson - Workshop 3 - CEB T1 2019

Multigrain: a unified image embedding for classes (...) - Bertrand Thirion - Workshop 3 - CEB T1 2019

Deep Inversion, Autoencoders for Learned Regularization (...) - Christoph Brune - Workshop 3 - CEB T1 2019

Optimal machine learning with stochastic projections (...) - Lorenzo Rosasco - Workshop 3 - CEB T1 2019

Roto-Translation Covariant Convolutional Networks for (...) - Remco Duits - Workshop 3 - CEB T1 2019

Unsupervised domain adaptation with application to urban (...) - Patrick Pérez - Workshop 3 - CEB T1 2019

Designing multimodal deep architectures for Visual Question (...) - Matthieu Cord - Workshop 3 - CEB T1 2019

Towards demystifying over-parameterization in deep (...) - Mahdi Soltanolkotabi - Workshop 3 - CEB T1 2019

Nonnegative matrix factorisation with the beta-divergence (...) - Cédric Févotte - Workshop 3 - CEB T1 2019

Autoencoder Image Generation with Multiscale Sparse (...) - Stéphane Mallat - Workshop 3 - CEB T1 2019

Learning from permutations. - Jean-Philippe Vert - Workshop 3 - CEB T1 2019

Learned image reconstruction for high-resolution (...) - Marta Betcke - Workshop 3 - CEB T1 2019

Contextual Bandit: from Theory to Applications. - Claire Vernade - Workshop 3 - CEB T1 2019


On the Global Convergence of Gradient Descent for (...) - Francis Bach - Workshop 3 - CEB T1 2019




Saturday Morning Videos: IPAM Workshop IV: Deep Geometric Learning of Big Data and Applications (May 20–24, 2019)




Hat tip to Xavier for letting us know about these. Here are the videos and slides of Workshop IV: Deep Geometric Learning of Big Data and Applications, part of the Long Program Geometry and Learning from Data in 3D and Beyond at IPAM. The workshop took place May 20–24, 2019. And thank you to the organizing committee (Xavier Bresson, Yann LeCun, Stanley Osher, Rene Vidal, Rebecca Willett) for making this workshop happen!



Arthur Szlam (Facebook)

Soumith Chintala (Facebook AI Research)

Jeremias Sulam (Johns Hopkins University)

Marc Pollefeys (ETH Zurich) - Semantic 3D reconstruction

Bahram Jalali (University of California, Los Angeles) - Low Latency Deep Imaging Cytometry

Tom Goldstein (University of Maryland)

Hongyang Zhang (Carnegie Mellon University)

Roy Lederman (Yale University)

Xavier Bresson (Nanyang Technological University, Singapore)

Hamed Pirsiavash (University of Maryland, Baltimore County)

Jian Tang (HEC Montréal)

Thomas Kipf (Universiteit van Amsterdam)

Jure Leskovec (Stanford University) - Deep Generative Models for Graphs: Methods & Applications

Mathias Niepert (NEC Laboratories Europe)

Federico Monti (Università della Svizzera italiana)

Mikhail Belkin (Ohio State University)

Thiago Serra (Mitsubishi Electric Research Laboratories (MERL))

Rene Vidal (Johns Hopkins University)

Stanley Osher (University of California, Los Angeles)

Srikumar Ramalingam (University of Utah)

Luc Van Gool (ETH Zurich)

Taco Cohen (Universiteit van Amsterdam)

Kostas Daniilidis (University of Pennsylvania)

Ersin Yumer (Uber ATG)


