Wednesday, September 19, 2018

WiMLDS and Paris Machine Learning meetup Hors série #1: Scalable Automatic Machine Learning with H2O with Erin LeDell

We're back with Season 6 of the Paris Machine Learning meetup!
Tonight, the Women in Machine Learning & Data Science (WiMLDS) meetup and the Paris Machine Learning Group are hosting an exceptional “Hors Série” meetup featuring Erin LeDell and Jo-Fai Chow. We will be hosted and sponsored by Ingima !

The meetup will be live streamed for those who can’t be there. Slides are also available below:




19:30 – Introduction by Ingima and the Paris WiMLDS + Paris ML Group teams


19:40 – “Scalable Automatic Machine Learning with H2O” (keynote format) by Erin LeDell, Chief Machine Learning Scientist at H2O.ai.

Abstract:
This presentation will provide a history and overview of the field of Automatic Machine Learning (AutoML), followed by a detailed look inside H2O's AutoML algorithm. H2O AutoML provides an easy-to-use interface which automates data pre-processing, training and tuning a large selection of candidate models (including multiple stacked ensemble models for superior model performance). The result of the AutoML run is a "leaderboard" of H2O models which can be easily exported for use in production. AutoML is available in all H2O interfaces (R, Python, Scala, web GUI) and due to the distributed nature of the H2O platform, can scale to very large datasets. The presentation will end with a demo of H2O AutoML in R and Python, including a handful of code examples to get you started using automatic machine learning on your own projects.
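The "train many candidates, rank them on a leaderboard" workflow described above can be sketched in a few lines of plain Python. This is a conceptual toy, not the H2O API: the two candidate "models", their names, and the metric are invented for illustration only.

```python
# Toy sketch of an AutoML leaderboard: fit several candidate models on a
# training set, score them on a holdout set, and rank them by the metric.
# H2O AutoML automates exactly this loop (at scale, with real models).
import random

random.seed(0)

# Toy regression data: y = 3*x + noise
train = [(x, 3 * x + random.gauss(0, 0.5)) for x in range(50)]
valid = [(x, 3 * x + random.gauss(0, 0.5)) for x in range(50, 70)]

def fit_mean(data):
    # Baseline: always predict the training mean.
    m = sum(y for _, y in data) / len(data)
    return lambda x: m

def fit_linear(data):
    # Least squares through the origin: slope = sum(x*y) / sum(x*x)
    slope = sum(x * y for x, y in data) / sum(x * x for x, _ in data)
    return lambda x: slope * x

def mse(model, data):
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

candidates = {"mean_baseline": fit_mean, "linear": fit_linear}
leaderboard = sorted(
    ((name, mse(fit(train), valid)) for name, fit in candidates.items()),
    key=lambda row: row[1],
)
for name, score in leaderboard:
    print(f"{name:15s} validation MSE = {score:.3f}")
```

In an actual H2O AutoML run, the candidates would be gradient boosting machines, random forests, deep nets and stacked ensembles, and the leaderboard would be returned by the library rather than built by hand.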

Bio:
Dr. Erin LeDell is the Chief Machine Learning Scientist at H2O.ai. Erin has a Ph.D. in Biostatistics with a Designated Emphasis in Computational Science and Engineering from the University of California, Berkeley. Her research focuses on automatic machine learning, ensemble machine learning and statistical computing. She also holds a B.S. and M.A. in Mathematics. Before joining H2O.ai, she was the Principal Data Scientist at Wise.io (acquired by GE Digital in 2016) and Marvin Mobile Security (acquired by Veracode in 2012), and the founder of DataScientific, Inc.


Abstract:
Joe Chow (H2O.ai) recently teamed up with IBM and Aginity to create a proof-of-concept "Moneyball" app for the IBM Think conference in Vegas. The original goal was just to prove that different tools (e.g. H2O, Aginity AMP, IBM Data Science Experience, R and Shiny) could work together seamlessly for common business use cases. Little did Joe know, the app would be used by Ari Kaplan (the real "Moneyball" guy) to validate the future performance of some baseball players. Ari recommended one player to a Major League Baseball team. The player was signed the next day with a multimillion-dollar contract. This talk is about Joe's journey to a real "Moneyball" application.
20:50 – Networking / Cocktail

During the event, you can share content using #WiMLDSParis and @WiMLDS_Paris or #ParisML and @ParisMLgroup

After the meet-up, the video will be shared on: http://parismlgroup.org/about.php & https://medium.com/@WiMLDS_Paris

---
Host information:

The room can hold 90 people. First come, first served!
Keep in mind the session will be streamed.



Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Friday, September 14, 2018

Highly Technical Reference Page: The Rice University Compressive Sensing page.




Rich sent this to me a few days ago:

Hi Igor -  
I hope all goes well. FYI, the Rice CS Archive is back online after being down for more than a year, thanks to some Russian hackers who thought we had something to do with the 2018 election. It’s available here:
richb 
Richard G. Baraniuk
Victor E. Cameron Professor of Electrical and Computer Engineering
Founder and Director, OpenStax
Rice University 
The Rice page is one of the first pages that got me thinking I should list all those Highly Technical Reference Pages in one fell swoop.

Thursday, September 13, 2018

“And we’re back for Season 6” Paris Machine Learning Newsletter, September 2018 (in French)


“And we’re back for Season 6” the Paris Machine Learning Meetup Newsletter, September 2018

Contents
  1. The editorial from Franck, Jacqueline and Igor: “And we’re back for Season 6”
  2. Things We Really Like!
  3. Last season.

1. The editorial from Franck, Jacqueline and Igor: “And we’re back for Season 6”

Jacqueline Forien joins us as a meetup organizer.

Season 5 comprised 8 “hors série” and 9 regular meetups, and more than 7,200 members, making this one of the largest meetups in the world on this topic. A lot happened last year, on the policy front but also in the meetups; we will come back to that later in another newsletter. What you should know is that NIPS, the reference conference in AI, sold out its tickets in 11 minutes and 38 seconds. From experience, that is faster than ticket sales for BTS when they come to Bercy in October. What is certain is that these gatherings around Machine Learning must endure, which is why all the presentations and videos from our meetups are in our archives and listed further down in this newsletter.

This past season would not have been possible without the following companies and associations:

A big thank-you for their involvement in a dynamic AI community here in Paris and across Europe.

Our first meetup will be held in coordination with Women in Machine Learning and Data Science; to register, go here: #Hors-série — Paris WiMLDS & Paris ML Meetup

Our meetup dates for Season 6:
  • Hors série #1 19/09
  • #2 10/10
  • #3 14/11
  • #4 12/12
  • #5 09/01
  • #6 13/02
  • #7 13/03
  • #8 10/04
  • #9 15/05
  • #10 12/06

If you would like to host or sponsor us, don't hesitate to contact us through this form or via our website.

You can follow us on Twitter @ParisMLgroup.



2. Things We Really Like!

Chloé Azencott, one of the meetup's speakers, has just published a book on Machine Learning in French: Introduction au Machine Learning, with plenty of code examples.

Conferences and meetups we like!

++++Important: France is AI conference: the 3rd edition of our annual conference, October 17 and 18, 2018 at Station F.++++ The Eventbrite registration link with promo code MEETUPS100 offers 100 free seats. Beyond the first 100, seats can be obtained at a 50% discount with code MEETUPS50.

The brand-new meetups:

Those starting up again:

3. Last season


Last season (Season 5) comprised 8 “hors série” and 9 regular meetups, for a total of 95 meetups over 5 seasons. Here are the links to the presentations and videos from those meetups:

Regular meetups

Hors série

That's all for today!


PS: Don't forget that you can also follow the Paris Machine Learning Meetup on Twitter, LinkedIn, Facebook and Google+.

You can browse the archives of previous meetups.

We are also working on a new website: MLParis.org

The Paris Machine Learning Meetup has 7,200 members, which makes it one of the largest in the world, with more than 95 gatherings already held and 10 dates scheduled for this Season 6.
  • If you are a student, postdoc or researcher, the meetup is a great platform to talk about your work before presenting it at conferences such as NIPS/ICML/ICLR/COLT/UAI/ACL/KDD;
  • For startups, it is a good way to talk about your projects or to recruit the future superstars of your AI/Data Science team;
  • And for everyone, it is an easy way to stay informed of the latest developments in the field and to have unique exchanges with the speakers and other attendees.

As always: first come, first served. The number of seats is limited; once a room is at capacity, we will not be able to let you in. You can track how full the room is by following #MLParis on Twitter.




Wednesday, September 12, 2018

Manopt 5.0, toolbox release: Optimization on Manifolds - implementation -



Nicolas just sent me the following:
Dear Igor,

Bamdev (cc) and I just released Manopt 5.0, our Matlab toolbox for optimization on manifolds:

We would be delighted if you could announce this major release on your blog once again.

Manopt is a toolbox for optimization on manifolds, with major applications in machine learning and computer vision (low rank constraints, orthogonal matrices, rotations, positive definite matrices, ...). Of course, Manopt can also optimize over linear spaces (and it's quite good at it).

The toolbox is user friendly, requiring little knowledge about manifolds to get started. See our tutorial and the many examples in the release:



Highlight -- this release brings:

Thanks!
Nicolas and Bamdev


Thanks Nicolas  !
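For readers new to the topic, the core idea behind optimization on manifolds can be illustrated with a minimal Riemannian gradient ascent on the unit sphere. This is a hypothetical stdlib-only toy, not Manopt's API (Manopt is a Matlab toolbox with many manifolds, solvers and diagnostics); the matrix and step size are made up for illustration.

```python
# Riemannian gradient ascent on the unit sphere: maximize f(x) = x^T A x,
# whose maximizer is the dominant eigenvector of the symmetric matrix A.
# Each step: take the Euclidean gradient, project it onto the tangent space
# of the sphere at x, step along it, then retract back onto the sphere.
import math

A = [[4.0, 1.0], [1.0, 3.0]]          # small symmetric test matrix

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

x = normalize([1.0, 0.0])
for _ in range(200):
    g = [2 * c for c in matvec(A, x)]                    # Euclidean gradient
    xg = sum(a * b for a, b in zip(x, g))
    rgrad = [gi - xg * xi for gi, xi in zip(g, x)]       # tangent projection
    x = normalize([xi + 0.1 * ri for xi, ri in zip(x, rgrad)])  # retraction

rayleigh = sum(a * b for a, b in zip(x, matvec(A, x)))   # ~ largest eigenvalue
print(round(rayleigh, 3))
```

The projection-then-retraction pattern is exactly what manifold toolboxes generalize: swap in a different manifold (rotations, fixed-rank matrices, ...) and only the projection and retraction change, not the solver.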



Monday, August 20, 2018

SPORCO: Convolutional Dictionary Learning - implementation -



Brendt sent me the following a few days ago: 

Hi Igor,
We have two new papers on convolutional dictionary learning as well as some recent related code. Could you please post an announcement on Nuit Blanche?
Brendt
Sure Brendt ! It is already mentioned in the Advanced Matrix Factorization Jungle Page as this is an awesome update to the previous announcement.



"Convolutional Dictionary Learning: A Comparative Review and New Algorithms", available from http://dx.doi.org/10.1109/TCI.2018.2840334 and https://arxiv.org/abs/1709.02893, reviews existing batch-mode convolutional dictionary learning algorithms and proposes some new ones with significantly improved performance. Implementations of all of the most competitive algorithms are included in the Python version of the SPORCO library at https://github.com/bwohlberg/sporco .

"First and Second Order Methods for Online Convolutional Dictionary Learning", available from http://dx.doi.org/10.1137/17M1145689 and https://arxiv.org/abs/1709.00106, extends our previous work and proposes some new algorithms for online convolutional dictionary learning that we believe outperform existing alternatives. Implementations of all of the new algorithms are included in the Matlab version of the SPORCO library at http://purl.org/brendt/software/sporco and the first order algorithm is also included in the Python version of the SPORCO library at https://github.com/bwohlberg/sporco . A very recent addition to the Python version is the ability to exploit the SPORCO-CUDA extension to greatly accelerate the learning process.



Convolutional sparse representations are a form of sparse representation with a dictionary that has a structure that is equivalent to convolution with a set of linear filters. While effective algorithms have recently been developed for the convolutional sparse coding problem, the corresponding dictionary learning problem is substantially more challenging. Furthermore, although a number of different approaches have been proposed, the absence of thorough comparisons between them makes it difficult to determine which of them represents the current state of the art. The present work both addresses this deficiency and proposes some new approaches that outperform existing ones in certain contexts. A thorough set of performance comparisons indicates a very wide range of performance differences among the existing and proposed methods, and clearly identifies those that are the most effective.


Convolutional sparse representations are a form of sparse representation with a structured, translation invariant dictionary. Most convolutional dictionary learning algorithms to date operate in batch mode, requiring simultaneous access to all training images during the learning process, which results in very high memory usage and severely limits the training data that can be used. Very recently, however, a number of authors have considered the design of online convolutional dictionary learning algorithms that offer far better scaling of memory and computational cost with training set size than batch methods. This paper extends our prior work, improving a number of aspects of our previous algorithm; proposing an entirely new one, with better performance, and that supports the inclusion of a spatial mask for learning from incomplete data; and providing a rigorous theoretical analysis of these methods.



Thursday, July 26, 2018

CfP: Call for Papers: Special Issue on Information Theory Applications in Signal Processing

Sergio just sent me the following:
Dear Igor,
Could you please announce in nuit blanche the following call for contributions to our Special Issue.
Best regards,
Sergio
Sure Sergio !
Dear colleagues, 
We are currently leading a Special Issue entitled "Information Theory Applications in Signal Processing" for the journal Entropy (ISSN 1099-4300, IF 2.305). A short prospectus is given at the volume website: 
We would like to invite you to contribute a review or full research paper for publication in this Special Issue, after the standard peer-review procedure, in open access form.
The official deadline for submission is 30 November 2018; however, you may send your manuscript at any time before the deadline. We can organize a very fast peer review; if accepted, the paper will be published immediately. Please also feel free to distribute this call for papers to colleagues and collaborators.
You can contact the assistant editor, Ms. Alex Liu (alex.liu@mdpi.com), with any questions.
Thank you in advance for considering our invitation.
Sincerely,
Guest Editors:
Dr. Sergio Cruces (http://personal.us.es/sergio/)
Dr. Rubén Martín-Clemente (http://personal.us.es/ruben/)
Dr. Wojciech Samek (http://iphome.hhi.de/samek/)





Monday, July 23, 2018

Rank Minimization for Snapshot Compressive Imaging - implementation -



Yang just sent me the following:

Hi Igor,

I am writing regarding a paper on compressive sensing you may find of interest, co-authored with Xin Yuan, Jinli Suo, David Brady, and Qionghai Dai. We get exciting results on snapshot compressive imaging (SCI), i.e., encoding each frame of an image sequence with a spectral-, temporal-, or angular-variant random mask and summing them pixel-by-pixel to form a one-shot measurement. Snapshot compressive hyperspectral, high-speed, and light-field imaging are among the representatives.

We combine rank minimization, which exploits the nonlocal self-similarity of natural scenes (widely acknowledged in image/video processing), with an alternating minimization approach to solve this problem. Results on both simulation and real data from four different SCI systems, where measurement noise is dominant, demonstrate that our proposed algorithm leads to significant improvements (>4dB in PSNR) and more robustness to noise compared with current state-of-the-art algorithms.

Paper arXiv link: https://arxiv.org/abs/1807.07837.
Github repository link: https://github.com/liuyang12/DeSCI.

Here is an animated demo for visualization and comparison with the state-of-the-art algorithms, i.e., GMM-TP (TIP'14), MMLE-GMM (TIP'15), MMLE-MFA (TIP'15), and GAP-TV (ICIP'16).
Thanks,
Yang (y-liu16@mails.tsinghua.edu.cn)


Thanks Yang !

Snapshot compressive imaging (SCI) refers to compressive imaging systems where multiple frames are mapped into a single measurement, with video compressive imaging and hyperspectral compressive imaging as two representative applications. Though exciting results of high-speed videos and hyperspectral images have been demonstrated, the poor reconstruction quality precludes SCI from wide applications. This paper aims to boost the reconstruction quality of SCI via exploiting the high-dimensional structure in the desired signal. We build a joint model to integrate the nonlocal self-similarity of video/hyperspectral frames and the rank minimization approach with the SCI sensing process. Following this, an alternating minimization algorithm is developed to solve this non-convex problem. We further investigate the special structure of the sampling process in SCI to tackle the computational workload and memory issues in SCI reconstruction. Both simulation and real data (captured by four different SCI cameras) results demonstrate that our proposed algorithm leads to significant improvements compared with current state-of-the-art algorithms. We hope our results will encourage the researchers and engineers to pursue further in compressive imaging for real applications.
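The SCI forward model described above (each frame modulated by its own random mask, then summed pixel-by-pixel into one snapshot) is simple enough to sketch directly. The frame count, sizes and binary masks below are arbitrary toy values, not those of any real SCI camera.

```python
# SCI forward model: Y = sum_t C_t . X_t, where C_t is the random mask for
# frame t, X_t is the frame, and "." is the element-wise (pixel-wise) product.
# T frames are thus compressed into a single 2-D measurement.
import random

random.seed(1)
T, H, W = 4, 3, 3                                  # 4 tiny 3x3 frames

frames = [[[random.random() for _ in range(W)] for _ in range(H)]
          for _ in range(T)]
masks = [[[random.randint(0, 1) for _ in range(W)] for _ in range(H)]
         for _ in range(T)]

measurement = [
    [sum(masks[t][i][j] * frames[t][i][j] for t in range(T)) for j in range(W)]
    for i in range(H)
]
print(measurement)
```

Reconstruction (the hard part, which DeSCI addresses with rank minimization) is the inverse problem: recover the T frames from this single measurement given knowledge of the masks.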


Thursday, July 19, 2018

CSJob: PhD and Postdoc positions KU Leuven: Optimization frameworks for deep kernel machines


Johan let me know of the following positions in his group:

Dear Igor,
could you please announce this on nuit blanche.
many thanks,
Johan


Sure thing Johan !

PhD and Postdoc positions KU Leuven: Optimization frameworks for deep kernel machines
The research group KU Leuven ESAT-STADIUS is currently offering 2 PhD and 1 Postdoc (1 year, extendable) positions within the framework of the KU Leuven C1 project Optimization frameworks for deep kernel machines (promotors: Prof. Johan Suykens and Prof. Panos Patrinos).
Deep learning and kernel-based learning are among the very powerful methods in machine learning and data-driven modelling. From an optimization and model representation point of view, training of deep feedforward neural networks occurs in a primal form, while kernel-based learning is often characterized by dual representations, in connection to possibly infinite dimensional problems in the primal. In this project we aim at investigating new optimization frameworks for deep kernel machines, with feature maps and kernels taken at multiple levels, and with possibly different objectives for the levels. The research hypothesis is that such an extended framework, including both deep feedforward networks and deep kernel machines, can lead to new important insights and improved results. In order to achieve this, we will study optimization modelling aspects (e.g. variational principles, distributed learning formulations, consensus algorithms), accelerated learning schemes and adversarial learning methods.
The PhD and Postdoc positions in this KU Leuven C1 project (promotors: Prof. Johan Suykens and Prof. Panos Patrinos) relate to the following  possible topics:
-1- Optimization modelling for deep kernel machines
-2- Efficient learning schemes for deep kernel machines
-3- Adversarial learning for deep kernel machines
For further information and on-line applying, see
https://www.kuleuven.be/personeel/jobsite/jobs/54740654 (PhD positions) and
https://www.kuleuven.be/personeel/jobsite/jobs/54740649 (Postdoc position)
(click EN for English version).
The research group ESAT-STADIUS http://www.esat.kuleuven.be/stadius at the university KU Leuven Belgium provides an excellent research environment being active in the broad area of mathematical engineering, including data-driven modelling, neural networks and machine learning, nonlinear systems and complex networks, optimization, systems and control, signal processing, bioinformatics and biomedicine.
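The primal/dual distinction mentioned in the project description has a textbook illustration: kernel ridge regression. The primal weights may live in an infinite-dimensional feature space, but the dual only requires solving a finite n x n linear system. Below is a stdlib-only sketch; the RBF kernel, data, and regularization value are arbitrary illustration choices.

```python
# Dual-form kernel ridge regression: solve (K + lam*I) alpha = y, then
# predict via f(x) = sum_i alpha_i k(x_i, x) -- no explicit feature map needed.
import math

def rbf(a, b, gamma=1.0):
    return math.exp(-gamma * (a - b) ** 2)

def solve(M, y):
    # Gaussian elimination with partial pivoting (fine for tiny systems).
    n = len(M)
    A = [row[:] + [y[i]] for i, row in enumerate(M)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            A[r] = [a - f * b for a, b in zip(A[r], A[c])]
    out = [0.0] * n
    for r in range(n - 1, -1, -1):
        out[r] = (A[r][n] - sum(A[r][c] * out[c] for c in range(r + 1, n))) / A[r][r]
    return out

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [math.sin(x) for x in xs]
lam = 1e-6

K = [[rbf(a, b) for b in xs] for a in xs]
alpha = solve([[K[i][j] + (lam if i == j else 0.0) for j in range(len(xs))]
               for i in range(len(xs))], ys)

def predict(x):
    return sum(a * rbf(xi, x) for a, xi in zip(alpha, xs))

print(round(predict(0.75), 3))   # close to sin(0.75)
```

"Deep" kernel machines, as in this project, compose such kernel levels, which is what makes the optimization framework non-trivial compared with this single-level case.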






Friday, July 13, 2018

Phase Retrieval Under a Generative Prior


Vlad just sent me the following: 
Hi Igor,

I am writing regarding a paper you may find of interest, co-authored with Paul Hand and Oscar Leong. It applies a deep generative prior to phase retrieval, with surprisingly good results! We can show recovery occurs at optimal sample complexity for Gaussian measurements, which in a sense resolves the sparse phase retrieval O(k^2 log n) bottleneck.

https://arxiv.org/pdf/1807.04261.pdf


Best,

-Vlad

Thanks Vlad ! Here is the paper:

Phase Retrieval Under a Generative Prior by Paul Hand, Oscar Leong, Vladislav Voroninski
The phase retrieval problem asks to recover a natural signal y0 ∈ R^n from m quadratic observations, where m is to be minimized. As is common in many imaging problems, natural signals are considered sparse with respect to a known basis, and the generic sparsity prior is enforced via ℓ1 regularization. While successful in the realm of linear inverse problems, such ℓ1 methods have encountered possibly fundamental limitations, as no computationally efficient algorithm for phase retrieval of a k-sparse signal has been proven to succeed with fewer than O(k^2 log n) generic measurements, exceeding the theoretical optimum of O(k log n). In this paper, we propose a novel framework for phase retrieval by 1) modeling natural signals as being in the range of a deep generative neural network G : R^k → R^n and 2) enforcing this prior directly by optimizing an empirical risk objective over the domain of the generator. Our formulation has provably favorable global geometry for gradient methods, as soon as m = O(k d^2 log n), where d is the depth of the network. Specifically, when suitable deterministic conditions on the generator and measurement matrix are met, we construct a descent direction for any point outside of a small neighborhood around the unique global minimizer and its negative multiple, and show that such conditions hold with high probability under Gaussian ensembles of multilayer fully-connected generator networks and measurement matrices. This formulation for structured phase retrieval thus has two advantages over sparsity based methods: 1) deep generative priors can more tightly represent natural signals and 2) information theoretically optimal sample complexity. We corroborate these results with experiments showing that exploiting generative models in phase retrieval tasks outperforms sparse phase retrieval methods.
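The measurement model in the abstract (m quadratic, i.e. phaseless, observations of a signal) is easy to write down, and a tiny sketch makes its fundamental ambiguity visible: x and -x produce identical measurements, which is why recovery is only ever up to global sign (or global phase, in the complex case). The dimensions and Gaussian ensemble below are toy illustration choices.

```python
# Phaseless quadratic measurements y_i = (a_i^T x)^2 for a signal x in R^n,
# with Gaussian measurement vectors a_i. Demonstrates the sign ambiguity:
# negating x leaves every measurement unchanged.
import random

random.seed(0)
n, m = 5, 20
x = [random.gauss(0, 1) for _ in range(n)]
A = [[random.gauss(0, 1) for _ in range(n)] for _ in range(m)]

def measure(z):
    return [sum(a_ij * z_j for a_ij, z_j in zip(row, z)) ** 2 for row in A]

y = measure(x)
y_neg = measure([-c for c in x])
print(all(abs(u - v) < 1e-12 for u, v in zip(y, y_neg)))  # → True
```

The paper's contribution is on the recovery side: replacing the sparsity prior with the constraint that the signal lies in the range of a generative network G, and minimizing an empirical risk over the generator's latent space.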



