Wednesday, February 13, 2019

The 100th Paris Machine Learning meetup tonight: La revanche des neurones, DL on Health Records, Search-Oriented Conversational AI, Nanotechnology and Electricity Consumption





In order to come to the meetup, you MUST register on Eventbrite and bring an identity card to be able to enter.
Link: Eventbrite ticket (https://bit.ly/2I1XXL4)

Thanks to Scaleway for hosting us and providing the catering!
The video streaming is below. 




As usual, there is no waiting list and no reserved seats: first come, first served (the room has a capacity of 120 seats).

Schedule:
6:45 PM: doors open
7:00 PM: intro + speakers
9:00 PM: networking cocktail
10:30 PM: doors close

Speakers


The controversy between symbolic and connectionist AI

Recent advances in artificial intelligence present a high energy cost, which poses a problem both for the environment and for their integration into connected objects.
This energy cost is due to the fact that AI algorithms are implemented on conventional computers, which are poorly adapted to them.
A promising approach is to use the brain as a model for developing new types of computers that use less energy. In this presentation I explore some key ideas, including the proximity of computation and memory as well as the management of errors and randomness.
Fajwel Fogel (Sancare), Deep Learning on health records, www.sancare.fr
Data scientists from Sancare will provide an overview of some of the challenges faced when training deep learning models on electronic health records (EHR), such as robustness and the ability to provide explanations.
Training deep learning models on electronic health records (EHR) can be prohibitively expensive in terms of computational cost.
Datasets typically include millions of records, each containing several thousands of words.
Moreover, due to the sensitive nature of EHR, all computations must be performed on-premise, i.e., on the campus of the hospital, where GPU resources are usually rare or non-existent.
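One common way to keep memory bounded on corpora like this is the hashing trick, which maps an unbounded vocabulary into a fixed-size feature vector without ever storing a dictionary. The sketch below is a generic illustration of that idea, not a description of Sancare's actual pipeline:

```python
import hashlib

def hashed_bow(tokens, dim=1024):
    """Map a token list to a fixed-size bag-of-words vector.

    No vocabulary is stored, so memory stays bounded no matter how many
    distinct words appear across millions of records.
    """
    vec = [0] * dim
    for tok in tokens:
        # Stable hash of the token -> bucket index in [0, dim)
        h = int(hashlib.md5(tok.encode("utf-8")).hexdigest(), 16)
        vec[h % dim] += 1
    return vec

# A toy "record"; real EHR notes run to thousands of words each.
record = "patient admitted with acute chest pain".split()
features = hashed_bow(record, dim=256)
```

Collisions are the price paid for the fixed memory footprint, but with a large enough `dim` they rarely hurt downstream models.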

Chatbots and intelligent personal assistants (such as Siri, Cortana, the Google Assistant, and Amazon Alexa) are increasingly being used for different purposes, including information access and retrieval.
These dialog systems enable naturalistic human-like interactions where the information needs are expressed in natural language.
Unlike in traditional search engines, where a user-issued query is answered with a search result page, conversational agents can respond in a variety of ways, for example, asking questions back to the user for clarification.
In this talk, I will present our paper "A Reinforcement Learning-driven Translation Model for Search-Oriented Conversational Systems," which focuses on understanding natural language expressions in order to build keyword-based queries. We propose a reinforcement learning-driven translation model framework able to:
1) learn the translation from NL expressions to queries in a supervised way, and
2) overcome the lack of large-scale datasets by framing the translation model as a word-selection approach and injecting relevance feedback into the learning process. Experiments carried out on two TREC datasets show the effectiveness of our approach.
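The word-selection framing can be illustrated with a toy sketch (all scores, words and relevant terms here are invented, not from the paper): a policy scores each word of the natural language expression, the top-scoring words become the keyword query, and relevance feedback supplies the reward signal that reinforcement learning would optimize.

```python
def select_words(words, scores, k=2):
    """Keep the k highest-scoring words of the NL expression as the query."""
    return sorted(words, key=lambda w: scores.get(w, 0.0), reverse=True)[:k]

def relevance_reward(query, relevant_terms):
    """Toy relevance feedback: fraction of query words hitting relevant terms."""
    return sum(w in relevant_terms for w in query) / len(query) if query else 0.0

# Hypothetical scores a trained policy might assign: content words high,
# stop words low. In the paper these come from a learned translation model.
scores = {"find": 0.10, "papers": 0.70, "about": 0.05, "neural": 0.95, "networks": 0.90}
nl_expression = ["find", "papers", "about", "neural", "networks"]

query = select_words(nl_expression, scores, k=2)
reward = relevance_reward(query, relevant_terms={"neural", "networks"})
```

In the actual framework the reward would be fed back through a policy-gradient update; this sketch only shows the selection and feedback steps.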

We use open data and machine learning to compute and forecast where and how the electricity you consume has been produced.
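The basic accounting step behind such a service can be sketched as a production-mix computation (the figures below are invented, and a real system also has to handle imports, exports and forecasting):

```python
def consumption_mix(production_mw):
    """Share of each source in total production.

    This is a proxy for the consumption mix under the simplifying
    assumption that imports and exports are ignored.
    """
    total = sum(production_mw.values())
    return {src: mw / total for src, mw in production_mw.items()}

# Illustrative figures only, not real grid data (MW by source).
mix = consumption_mix({"nuclear": 40000, "wind": 5000, "hydro": 10000, "gas": 5000})
```

Multiplying each share by a per-source emission factor would then yield a carbon intensity for the consumed electricity.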



Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Thursday, December 20, 2018

LightOn: Forward We Go!


It's been a while, as I have been a bit busy. I will be back to a more regular schedule, but in the meantime we just raised funds with Quantonation and Anorak for LightOn, which should allow us to go forward in building an optical technology for Machine Learning. Here are the announcements:

We just had some coverage on VentureBeat, and you can follow our announcements and progress directly on Twitter and LinkedIn.

Wednesday, November 14, 2018

Paris ML E#2 S#6: Consciousness, Code Analysis, Can a machine learn like a child?



For this second regular meetup of the season, we will talk at least about consciousness, health, code, and how machines learn like children... Thanks to Samsung for hosting us!



Here is the program so far:

+ Presentation by Gilles Mazars, AI Labs Samsung


From my recent paper published in Brain (https://doi.org/10.1093/brain/awy267): Determining the state of consciousness in patients with disorders of consciousness is a challenging practical and theoretical problem. Recent findings suggest that multiple markers of brain activity extracted from the EEG may index the state of consciousness in the human brain. Furthermore, machine learning has been found to optimize their capacity to discriminate different states of consciousness in clinical practice. ... Our findings demonstrate that EEG markers of consciousness can be reliably, economically and automatically identified with machine learning in various clinical and acquisition contexts.


During this talk Eiso Kant demonstrates how different machine learning techniques can be used to learn from source code and provide developers with novel insights into their code. This talk includes several demos that show the power of MLonCode.
+ Autonomous developmental learning: can a machine learn like a child?, Pierre Oudeyer

Abstract: Current approaches to artificial intelligence and machine learning are still fundamentally limited in comparison with autonomous learning capabilities of children. Even impressive systems like AlphaGo require huge amounts of trial and error and the help of an engineer to deal with other games or tasks. On the contrary, children learn fast and robustly a wide and open-ended repertoire of skills, without needing any form of intervention by an engineer. I will present a research program that has studied computational modeling of child development and learning mechanisms in the last decade. I will explain approaches to model curiosity-driven autonomous learning, with algorithmic models enabling machines to sample and explore their own goals, self-organizing a learning curriculum without any external supervision. I will show how this has helped scientists understand better aspects of human development, and how this has opened novel approaches to address the current limits of machine learning. I will illustrate this research with experiments where robots learn autonomously repertoires of complex tasks. I will then conclude by illustrating how these approaches can be applied successfully in the domain of educational technologies, enabling to personalize sequences of exercises for human learners, while maximizing both learning efficiency and intrinsic motivation.
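The idea of machines sampling their own goals is often implemented by choosing goals in proportion to recent learning progress. Here is a toy sketch of such a sampler (goal names and progress values are invented for illustration):

```python
import random

random.seed(1)

def sample_goal(progress):
    """Sample a goal with probability proportional to its recent learning
    progress, so the agent focuses where its competence improves fastest."""
    total = sum(progress.values())
    r = random.uniform(0, total)
    acc = 0.0
    for g, p in progress.items():
        acc += p
        if r <= acc:
            return g
    return g  # numerical edge case: fall back to the last goal

# Hypothetical learning-progress estimates for three motor goals.
goal = sample_goal({"reach": 0.7, "grasp": 0.2, "push": 0.1})
```

Goals where progress has stalled (too easy or too hard) are sampled rarely, which is one way a curriculum can self-organize without external supervision.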



Wednesday, October 10, 2018

Paris Machine Learning #1 S6 at Vente-Privée

So after two "Hors Série" meetups, we decided to start Season 6 of the Paris Machine Learning meetup tonight. We'll talk about the algorithms used at Vente Privée, one of the most ambitious companies in France, but also about quantum computing and machine learning, and finally about how SNCF is becoming algorithm/data driven. The streaming and the presentations will be accessible below:








This will be our first regular meetup of the season at Vente-privée. As we saw two years ago, it is a unique venue. There will likely be a guided tour of the premises.

Many thanks to Vente Privée for hosting us!

AGENDA :
Doors open 6:45PM // talk 7-9PM // network 9-10:30PM

Program

Jéremie Jakubowicz, Vente Privee, Data Science at Vente Privee
In this talk we will reveal what's been happening within the Data Science Team at Vente Privee this year...

At vente-privee we customize a lot of things, and this talk will describe the mechanism behind catalog customization. When a customer enters a sale, we create a section filled with items recommended for this specific customer, based on their previous purchases and other criteria.

Quantum computing paradigm applied to automated machine learning: an efficient alternative to hyperparameter search.


The role of the Fab Big Data and of the data science and engineering team for the SNCF group. A few representative ongoing projects:
  • Adhesion: better measuring, locating and understanding loss-of-adhesion phenomena (adhesion is the contact between wheel and rail).
  • Energy: analysis of electricity consumption and consumption forecasting.
  • Exploratory projects:
  • Active learning
  • Interpretability of ML models
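A consumption-forecasting effort like the one listed above usually starts from a simple seasonal baseline before any ML model is tried. This is a generic sketch of such a baseline, not SNCF's actual method: forecast each future hour with the value observed at the same hour one season earlier.

```python
def seasonal_naive_forecast(series, season=24, horizon=3):
    """Seasonal-naive baseline: forecast each future step with the value
    observed exactly one season earlier (e.g. the same hour yesterday)."""
    return [series[len(series) - season + h] for h in range(horizon)]

# Two days of synthetic hourly consumption, flat at 100 with an evening
# peak of 180 at hour 19 (invented numbers for illustration).
day = [100] * 24
day[19] = 180
history = day * 2

forecast = seasonal_naive_forecast(history, season=24, horizon=24)
```

Any learned model then has to beat this baseline to justify its complexity, which is why it is a standard first step in consumption analysis.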




Monday, October 08, 2018

Job: Postdoctoral Researcher in Small Data Deep Learning and Explainable Machine Learning, Livermore, CA

Bhavya just sent me the following:

Hi Igor, 
I would like to ask you a favor. We are looking for a Postdoctoral Researcher interested in small data deep learning and explainable machine learning. I was wondering whether it is possible to list the opening on your blog. Information on the available position is below.
We are looking for a Postdoctoral Researcher with expertise in statistics, machine learning, convex/non-convex optimization and/or uncertainty quantification. The Postdoctoral Researcher will support ongoing efforts concerned with small-data deep learning and related topics, such as transfer learning, generative modeling, self-supervised or unsupervised learning, and explainable ML. This position is in the Computation Directorate within the Center for Applied Scientific Computing (CASC) Division at Lawrence Livermore National Lab, Livermore, CA.
Essential Duties
  • Research, design, implement and apply a variety of advanced data science methods in multiple application areas (such as material science, high energy physics, predictive medicine, cybersecurity) in a collaborative scientific environment.
  • Document research by publishing papers at conferences/journals such as NIPS, ICML, ICLR, IJCAI, AAAI, AISTATS, ACL, CVPR, JMLR or similar.

Qualifications
  • Ph.D. in statistics or computer science or a related field.
  • Experience in modern machine learning environments (TensorFlow, PyTorch, etc.).
  • Proficiency in one or more of the following machine learning areas: deep learning, reinforcement learning, and Bayesian nonparametrics.
  • Knowledge of C/C++, Python.

If interested, please contact me directly at kailkhura1@llnl.gov.
Regards,
Bhavya







A Neural Architecture for Bayesian Compressive Sensing over the Simplex via Laplace Techniques



Steffen just sent me the following:

Dear Igor,

I'm a long-time reader of your blog and wanted to share our recent paper on a relation between compressed sensing and neural network architectures. The paper introduces a new network construction based on the Laplace transform that results in activations such as ReLU and gating/threshold functions. It would be great if you could distribute the link on nuit-blanche. 
The paper/preprint is here:
https://ieeexplore.ieee.org/document/8478823
https://www.netit.tu-berlin.de/fileadmin/fg314/limmer/LimSta18.pdf 
Many thanks and best regards,
Steffen 
Dipl.-Ing. Univ. Steffen Limmer
Raum HFT-TA 412
Technische Universität Berlin
Institut für Telekommunikationssysteme
Fachgebiet Netzwerk-Informationstheorie
Einsteinufer 25, 10587 Berlin

Thanks Steffen ! Here is the paper:


This paper presents a theoretical and conceptual framework to design neural architectures for Bayesian compressive sensing of simplex-constrained sparse stochastic vectors. First we recast the problem of MMSE estimation (w.r.t. a pre-defined uniform input distribution over the simplex) as the problem of computing the centroid of a polytope that is equal to the intersection of the simplex and an affine subspace determined by compressive measurements. Then we use multidimensional Laplace techniques to obtain a closed-form solution to this computation problem, and we show how to map this solution to a neural architecture comprising threshold functions, rectified linear (ReLU) and rectified polynomial (ReP) activation functions. In the proposed architecture, the number of layers is equal to the number of measurements which allows for faster solutions in the low-measurement regime when compared to the integration by domain decomposition or Monte-Carlo approximation. We also show by simulation that the proposed solution is robust to small model mismatches; furthermore, the proposed architecture yields superior approximations with less parameters when compared to a standard ReLU architecture in a supervised learning setting.










Sunday, October 07, 2018

Sunday Morning Video (in French): Les travaux de Grothendieck sur les espaces de Banach, Gilles Pisier (Lectures grothendieckiennes)

This video in French mentions the connection between Grothendieck's work and some of the subject areas mentioned on Nuit Blanche (see here, here and here).


Grothendieck's thesis and his subsequent article entitled "Résumé de la théorie métrique des produits tensoriels topologiques" (1956) had an enormous impact on the development of the geometry of Banach spaces over the last 60 years. We will review this "Résumé", focusing on the result that Grothendieck himself called the fundamental theorem of the metric theory of tensor products, now known as "Grothendieck's inequality" or "Grothendieck's theorem". This result has recently made a rather unexpected appearance in several fields quite remote from Grothendieck's concerns. One relates to C*-algebras and operator spaces (or "non-commutative Banach spaces"), another to Bell's inequalities and their "violation" in quantum mechanics, and a last one connects the Grothendieck constant to the P=NP problem and to graph theory.

Here is a review that covers some of what is mentioned in the video: 


Probably the most famous of Grothendieck's contributions to Banach space theory is the result that he himself described as "the fundamental theorem in the metric theory of tensor products". That is now commonly referred to as "Grothendieck's theorem" (GT in short), or sometimes as "Grothendieck's inequality". This had a major impact first in Banach space theory (roughly after 1968), then, later on, in C∗-algebra theory, (roughly after 1978). More recently, in this millennium, a new version of GT has been successfully developed in the framework of "operator spaces" or non-commutative Banach spaces. In addition, GT independently surfaced in several quite unrelated fields: in connection with Bell's inequality in quantum mechanics, in graph theory where the Grothendieck constant of a graph has been introduced and in computer science where the Grothendieck inequality is invoked to replace certain NP hard problems by others that can be treated by "semidefinite programming" and hence solved in polynomial time. In this expository paper, we present a review of all these topics, starting from the original GT. We concentrate on the more recent developments and merely outline those of the first Banach space period since detailed accounts of that are already available, for instance the author's 1986 CBMS notes.
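For readers who have not seen it, the inequality at the heart of this review can be stated compactly: there is a universal constant K_G (Grothendieck's constant) such that for every real n x m matrix (a_ij) and all unit vectors x_i, y_j in an arbitrary Hilbert space H,

```latex
\left| \sum_{i=1}^{n} \sum_{j=1}^{m} a_{ij}\, \langle x_i, y_j \rangle \right|
\;\le\; K_G \; \max_{s_i = \pm 1,\ t_j = \pm 1}
\left| \sum_{i=1}^{n} \sum_{j=1}^{m} a_{ij}\, s_i\, t_j \right|
```

The left-hand side is exactly what a semidefinite relaxation optimizes, which is why K_G bounds the gap between tractable SDP relaxations and the NP-hard ±1 optimization problems mentioned in the abstract.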






Wednesday, September 19, 2018

WiMLDS and Paris Machine Learning meetup Hors série #1: Scalable Automatic Machine Learning with H2O with Erin LeDell

We're back with Season 6 of the Paris Machine Learning meetup!
Tonight, the Women in Machine Learning & Data Science (WiMLDS) meetup and the Paris Machine Learning Group are hosting an exceptional "Hors Série" meetup featuring Erin LeDell and Jo-Fai Chow. We will be hosted and sponsored by Ingima!

The meetup will be live streamed for those who can’t be there. Slides are also available below:




19:30 – Introduction by Ingima and the Paris WiMLDS + Paris ML Group teams


19:40 – “Scalable Automatic Machine Learning with H2O” (keynote format) by Erin LeDell, Chief Machine Learning Scientist at H2O.ai.

Abstract:
This presentation will provide a history and overview of the field of Automatic Machine Learning (AutoML), followed by a detailed look inside H2O's AutoML algorithm. H2O AutoML provides an easy-to-use interface which automates data pre-processing, training and tuning a large selection of candidate models (including multiple stacked ensemble models for superior model performance). The result of the AutoML run is a "leaderboard" of H2O models which can be easily exported for use in production. AutoML is available in all H2O interfaces (R, Python, Scala, web GUI) and due to the distributed nature of the H2O platform, can scale to very large datasets. The presentation will end with a demo of H2O AutoML in R and Python, including a handful of code examples to get you started using automatic machine learning on your own projects.
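H2O AutoML itself needs a running H2O cluster, so as a dependency-free stand-in, here is a sketch of the "leaderboard" idea the abstract describes: score a set of candidate models on held-out data and rank them. This illustrates the concept only, not H2O's API; the model names and data below are invented.

```python
def automl_leaderboard(models, X_val, y_val):
    """Rank candidate models by validation MSE, mimicking the leaderboard
    an AutoML run produces. `models` maps a name to a predict function."""
    def mse(predict):
        return sum((predict(x) - y) ** 2 for x, y in zip(X_val, y_val)) / len(y_val)
    return sorted(((name, mse(f)) for name, f in models.items()), key=lambda r: r[1])

# Toy validation data following y = 2x.
X_val = [1, 2, 3, 4]
y_val = [2, 4, 6, 8]

board = automl_leaderboard(
    {"mean_baseline": lambda x: 5.0, "double": lambda x: 2 * x, "identity": lambda x: x},
    X_val, y_val,
)
best_name, best_mse = board[0]  # top of the leaderboard
```

In H2O the candidate set also includes stacked ensembles, and the ranking metric is chosen per problem type, but the ranking step is conceptually the same.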

Bio:
Dr. Erin LeDell is the Chief Machine Learning Scientist at H2O.ai. Erin has a Ph.D. in Biostatistics with a Designated Emphasis in Computational Science and Engineering from University of California, Berkeley. Her research focuses on automatic machine learning, ensemble machine learning and statistical computing. She also holds a B.S. and M.A. in Mathematics. Before joining H2O.ai, she was the Principal Data Scientist at Wise.io (acquired by GE Digital in 2016) and Marvin Mobile Security (acquired by Veracode in 2012), and the founder of DataScientific, Inc.


Abstract:
Joe Chow (H2O.ai) recently teamed up with IBM and Aginity to create a proof of concept "Moneyball" app for the IBM Think conference in Vegas. The original goal was just to prove that different tools (e.g. H2O, Aginity AMP, IBM Data Science Experience, R and Shiny) could work together seamlessly for common business use-cases. Little did Joe know, the app would be used by Ari Kaplan (the real "Moneyball" guy) to validate the future performance of some baseball players. Ari recommended one player to a Major League Baseball team. The player was signed the next day with a multimillion-dollar contract. This talk is about Joe's journey to a real "Moneyball" application.
20:50 Networking / Cocktail

During the event, you can share content using #WiMLDSParis and @WiMLDS_Paris or #ParisML and @ParisMLgroup

After the meet-up, the video will be shared on : http://parismlgroup.org/about.php & https://medium.com/@WiMLDS_Paris

---
Host information :

The room can welcome 90 people. First come, first served!
Keep in mind the session will be streamed.




Friday, September 14, 2018

Highly Technical Reference Page: The Rice University Compressive Sensing page.




Rich sent this to me a few days ago:

Hi Igor -  
i hope all goes well. FYI, the Rice CS Archive is back online after being down for more than a year thanks to some Russian hackers who thought we had something to do with the 2018 election. it’s available here:
richb 
Richard G. Baraniuk
Victor E. Cameron Professor of Electrical and Computer Engineering
Founder and Director, OpenStax
Rice University 
The Rice page is one of the first pages that got me thinking I should list all those Highly Technical Reference Pages in one fell swoop.

Thursday, September 13, 2018

“And we’re back for Season 6” Paris Machine Learning Newsletter, September 2018 (in French)



Contents
  1. The editorial from Franck, Jacqueline and Igor: "And we're back for Season 6"
  2. Things We Really Like!
  3. Last season.

1. The editorial from Franck, Jacqueline and Igor: "And we're back for Season 6"

Jacqueline Forien joins us as a meetup organizer.

Season 5 featured 8 "Hors Série" and 9 regular meetups and more than 7,200 members, which makes this one of the largest meetups in the world on this topic. We saw plenty last year, both in politics and in the meetups; we will come back to that in a later newsletter. One thing worth knowing: NIPS, the reference conference in AI, sold out its tickets in 11 minutes and 38 seconds. From experience, that is faster than ticket sales for BTS when they come to Bercy in October. What is certain is that these Machine Learning gatherings should endure, which is why all the presentations and videos from our meetups are in our archives and listed further down in this newsletter.

This past season would not have been possible without the following companies and associations:

Many thanks for their involvement in a dynamic AI community here in Paris and across Europe.

Our first meetup will be held in coordination with Women in Machine Learning and Data Science; to register, it is here: #Hors-série — Paris WiMLDS & Paris ML Meetup

Our meetup dates for Season 6:
  • Hors série #1 19/09
  • #2 10/10
  • #3 14/11
  • #4 12/12
  • #5 09/01
  • #6 13/02
  • #7 13/03
  • #8 10/04
  • #9 15/05
  • #10 12/06

If you would like to host or sponsor us, do not hesitate to contact us through this form or via our website.

You can follow us on Twitter @ParisMLgroup.



2. Things We Really Like!

Chloé Azencott, one of the meetup's speakers, has just published a Machine Learning book in French: Introduction au Machine Learning, full of code examples.

Conferences and meetups we like!

++++Important: France is AI conference: 3rd edition of our annual conference, October 17-18, 2018 at Station F.++++ The Eventbrite registration link with promo code MEETUPS100 offers 100 free seats. Beyond the first 100, seats are available at a 50% discount with code MEETUPS50.

The brand-new meetups:

Those starting up again:

3. Last season


Last season (Season 5) featured 8 "Hors Série" and 9 regular meetups, for a total of 95 meetups over 5 seasons. Here are the links to the presentations and videos from those meetups:

Regular meetups

Hors série

That's all for today!


PS: Don't forget that you can also follow the Paris Machine Learning Meetup on Twitter, LinkedIn, Facebook and Google+.

You can browse the archives of previous meetups.

We are also working on a new website: MLParis.org

The Paris Machine Learning Meetup has 7,200 members, making it one of the largest in the world, with more than 95 meetups already held and 10 dates scheduled for Season 6.
  • If you are a student, postdoc or researcher, the meetup is a great platform to present your work before taking it to conferences like NIPS/ICML/ICLR/COLT/UAI/ACL/KDD;
  • For startups, it is a good way to talk about your projects or to recruit the future superstars of your AI/Data Science team;
  • And for everyone, it is an easy way to keep up with the latest developments in the field and to have unique exchanges with the speakers and other attendees.

As always: first come, first served. Seating is limited, and once a room reaches capacity we will not be able to let you in. You can track how full the room is by following #MLParis on Twitter.



