Thursday, March 26, 2020

Accelerating SARS-CoV-2 Molecular Dynamics Studies with Optical Random Features

** Nuit Blanche is now on Twitter: @NuitBlog **



We just published a new blog post at LightOn. This time, we used LightOn's Optical Processing Unit to show how our hardware can help speed up global sampling studies that use Molecular Dynamics simulations, as in the case of metadynamics. Our engineer, Amélie Chatelain, wrote a blog post about it, and it is here: Accelerating SARS-CoV-2 Molecular Dynamics Studies with Optical Random Features

We showed that LightOn's OPU, in tandem with the NEWMA change-point detection algorithm, becomes very attractive (compared with CPU implementations of Random Fourier Features and Fastfood) for simulations featuring more than 4,000 atoms.
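
For readers curious about the mechanics, here is a minimal CPU sketch of NEWMA-style change detection on random Fourier features; the function names, kernel bandwidth, forgetting factors, and threshold below are illustrative placeholders rather than the values used in the post, and on the OPU the feature map would be an optical random projection instead:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_rff(dim_in, dim_out, sigma=1.0):
    """Random Fourier features approximating a Gaussian kernel."""
    W = rng.normal(scale=1.0 / sigma, size=(dim_out, dim_in))
    b = rng.uniform(0, 2 * np.pi, size=dim_out)
    return lambda x: np.sqrt(2.0 / dim_out) * np.cos(W @ x + b)

def newma(stream, phi, lam_fast=0.5, lam_slow=0.05, threshold=0.5):
    """Flag time steps where two EWMAs of the features drift apart."""
    z_fast = z_slow = None
    for t, x in enumerate(stream):
        z = phi(x)
        z_fast = z if z_fast is None else (1 - lam_fast) * z_fast + lam_fast * z
        z_slow = z if z_slow is None else (1 - lam_slow) * z_slow + lam_slow * z
        if np.linalg.norm(z_fast - z_slow) > threshold:
            yield t  # candidate change point, e.g. a conformational change

# Toy usage: each frame is a flattened vector of atomic coordinates.
frames = rng.normal(size=(1000, 3 * 100))  # 1000 frames, 100 atoms
phi = make_rff(dim_in=frames.shape[1], dim_out=2048)
print(list(newma(frames, phi)))
```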

Because building computational hardware makes no sense if we don't have a community that lifts us, the code used to generate the plots in that blog post is publicly available at the following link: https://github.com/lightonai/newma-md.

Follow @NuitBlog or join the CompressiveSensing Reddit, the Facebook page, the Compressive Sensing group on LinkedIn  or the Advanced Matrix Factorization group on LinkedIn


Other links:
Paris Machine Learning: Meetup.com || @Archives || LinkedIn || Facebook || @ParisMLGroup
About LightOn: Newsletter || @LightOnIO || on LinkedIn || on CrunchBase || our Blog

Saturday, March 14, 2020

Au Revoir Backprop! Bonjour Optical Transfer Learning!

** Nuit Blanche is now on Twitter: @NuitBlog **


We recently used LightOn's Optical Processing Unit to show how our hardware fares in the context of transfer learning. Our engineer, Luca Tommasone, wrote a blog post about it, and it is here: Au Revoir Backprop! Bonjour Optical Transfer Learning!

Because building computational hardware makes no sense if we don't have a community that lifts us, the code used to generate the plots in that blog post is publicly available at the following link: https://github.com/lightonai/transfer-learning-opu.
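
To give a rough idea of what such a pipeline looks like, here is a minimal sketch with the optical projection simulated on CPU; the backbone choice, projection size, and toy data are illustrative assumptions, and the actual pipeline lives in the repository above:

```python
import numpy as np
import torch
import torchvision.models as models
from sklearn.linear_model import RidgeClassifier

# A frozen pretrained convnet extracts features; a fixed random projection
# (a CPU stand-in for the OPU, which measures |Rx|^2 optically) expands
# them; only a linear classifier is trained -- no backpropagation needed.
backbone = models.resnet18(pretrained=True)
backbone.fc = torch.nn.Identity()      # expose the 512-d penultimate features
backbone.eval()

rng = np.random.default_rng(0)
R = rng.normal(size=(512, 8000))       # stand-in for the optical random matrix

def embed(images):
    with torch.no_grad():
        feats = backbone(images).numpy()   # (N, 512)
    return np.abs(feats @ R) ** 2          # intensity measurement, as on an OPU

# Toy stand-ins for a real transfer task's training set:
X = torch.randn(16, 3, 224, 224)
y = rng.integers(0, 2, size=16)
clf = RidgeClassifier().fit(embed(X), y)
```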




Enjoy, and most importantly, stay safe!





Follow @NuitBlog or join the CompressiveSensing Reddit, the Facebook page, the Compressive Sensing group on LinkedIn  or the Advanced Matrix Factorization group on LinkedIn

Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email.

Other links:
Paris Machine Learning: Meetup.com || @Archives || LinkedIn || Facebook || @ParisMLGroup
About LightOn: Newsletter || @LightOnIO || on LinkedIn || on CrunchBase || our Blog
About myself: LightOn || Google Scholar || LinkedIn || @IgorCarron || Homepage || ArXiv

Wednesday, January 15, 2020

Beyond Overfitting and Beyond Silicon: The double descent curve

** Nuit Blanche is now on Twitter: @NuitBlog **

We recently tried a small experiment with LightOn's Optical Processing Unit on the issue of generalization. Our engineer, Alessandro Cappelli, ran the experiment and wrote a blog post about it here: Beyond Overfitting and Beyond Silicon: The double descent curve
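
To reproduce the flavor of the phenomenon on a laptop, here is a toy random-features experiment (not the OPU experiment from the post; all sizes are arbitrary): with a minimum-norm least-squares fit, the test error typically peaks when the number of random features matches the number of training samples, then descends again.

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, d = 100, 1000, 20
X = rng.normal(size=(n_train + n_test, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n_train + n_test)

for width in [10, 50, 90, 100, 110, 200, 1000]:
    W = rng.normal(size=(d, width))
    feats = np.maximum(X @ W, 0)    # random ReLU features
    # lstsq returns the minimum-norm solution in the overparameterized regime
    beta, *_ = np.linalg.lstsq(feats[:n_train], y[:n_train], rcond=None)
    err = np.mean((feats[n_train:] @ beta - y[n_train:]) ** 2)
    print(f"width={width:5d}  test MSE={err:.3f}")
```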

Two days ago, Rebecca Willett gave a talk on the same subject at the Turing Institute in London.

Video: A function space view of overparameterized neural networks, by Rebecca Willett.



The attendant preprint is here:

A Function Space View of Bounded Norm Infinite Width ReLU Nets: The Multivariate Case by Greg Ongie, Rebecca Willett, Daniel Soudry, Nathan Srebro
A key element of understanding the efficacy of overparameterized neural networks is characterizing how they represent functions as the number of weights in the network approaches infinity. In this paper, we characterize the norm required to realize a function f : R^d → R as a single hidden-layer ReLU network with an unbounded number of units (infinite width), but where the Euclidean norm of the weights is bounded, including precisely characterizing which functions can be realized with finite norm. This was settled for univariate functions in Savarese et al. (2019), where it was shown that the required norm is determined by the L1-norm of the second derivative of the function. We extend the characterization to multivariate functions (i.e., networks with d input units), relating the required norm to the L1-norm of the Radon transform of a (d+1)/2-power Laplacian of the function. This characterization allows us to show that all functions in the Sobolev spaces W^{s,1}(R^d), s ≥ d+1, can be represented with bounded norm, to calculate the required norm for several specific functions, and to obtain a depth separation result. These results have important implications for understanding generalization performance and the distinction between neural networks and more traditional kernel learning.
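
For readers who just want the shape of the result, here is a schematic LaTeX rendering of the two characterizations the abstract refers to; constants and technical conditions are omitted, so treat this as a reading aid rather than the paper's precise theorem:

```latex
% Univariate case (Savarese et al., 2019): the required norm is governed by
% the L^1-norm of the second derivative (plus a boundary term).
R(f) \;=\; \max\!\Big( \int_{\mathbb{R}} |f''(x)|\,dx,\; |f'(-\infty) + f'(+\infty)| \Big)

% Multivariate case (this paper, schematically): the L^1-norm of the Radon
% transform \mathcal{R} of a (d+1)/2-power Laplacian of f.
R(f) \;\propto\; \big\| \mathcal{R}\big\{ (-\Delta)^{(d+1)/2} f \big\} \big\|_{L^1}
```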


Follow @NuitBlog or join the CompressiveSensing Reddit, the Facebook page, the Compressive Sensing group on LinkedIn  or the Advanced Matrix Factorization group on LinkedIn

Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email.

Other links:
Paris Machine Learning: Meetup.com || @Archives || LinkedIn || Facebook || @ParisMLGroup
About LightOn: Newsletter || @LightOnIO || on LinkedIn || on CrunchBase || our Blog
About myself: LightOn || Google Scholar || LinkedIn || @IgorCarron || Homepage || ArXiv

Wednesday, December 18, 2019

LightOn’s AI Research Workshop — FoRM #4: The Future of Random Matrices. Thursday, December 19th

** Nuit Blanche is now on Twitter: @NuitBlog **




Tomorrow we will host LightOn's 4th AI Research workshop, on the Future of Random Matrices (FoRM). It starts at 2pm on Thursday, December 19th (that's 2pm CET/Paris, 1pm GMT/UTC/London, 8am EST/NY-Montreal, 5am PST/California, 9pm UTC+8/Shenzhen). We have an exciting and diverse line-up, with talks on compressive learning, binarized neural networks, particle physics, and matrix factorization.

Feel free to join us, or to catch the event livestream — the link will be available on this page on the day of the event.


Without further ado, here is the program:


Program
  • 1:45pm — Welcome coffee and opening. A short introduction about LightOn, Igor Carron
  • 2:00pm — Compressive Learning with Random Projections, Ata Kaban
  • 2:45pm — Medical Applications of Low Precision Neuromorphic Systems, Bogdan Penkovsky
  • 3:30pm — Comparing Low Complexity Linear Transforms, Gavin Gray
  • 4:00pm — Coffee break and discussions
  • 4:20pm — LightOn's OPU + Particle Physics, David Rousseau, Aishik Ghosh, Laurent Basara, Biswajit Biswas
  • 5:00pm — Accelerated Weighted (Nonnegative) Matrix Factorization with Random Projections, Matthieu Puigt
  • 5:45pm — Wrapping-up and beers on our rooftop


Talks and abstracts

Ata Kaban, University of Birmingham.
Compressive Learning with Random Projections
By direct analogy to compressive sensing, compressive learning was originally coined to mean learning efficiently from random projections of high-dimensional massive data sets that have a sparse representation. In this talk we discuss compressive learning without the sparse-representation requirement, where instead we exploit the natural structure of learning problems.
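
To make the idea concrete, here is a toy sketch of learning on a random projection (all dimensions and the choice of classifier are illustrative assumptions):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d, k = 500, 10_000, 100            # few samples, high dimension, small sketch
X = rng.normal(size=(n, d))
y = (X @ rng.normal(size=d) > 0).astype(int)

R = rng.normal(size=(d, k)) / np.sqrt(k)   # Johnson-Lindenstrauss projection
clf = LogisticRegression(max_iter=1000).fit(X @ R, y)  # train in k dims, not d
print(clf.score(X @ R, y))
```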

Bogdan Penkovsky, Paris-Sud University.
Medical Applications of Low Precision Neuromorphic Systems
The advent of deep learning has considerably accelerated machine learning development, but its deployment at the edge is limited by its high energy cost and memory requirements. With new memory technologies available, emerging Binarized Neural Networks (BNNs) promise to reduce the energy impact of the forthcoming machine learning hardware generation, enabling machine learning on edge devices and avoiding data transfer over the network. In this talk we will discuss strategies to apply BNNs to biomedical signals such as electrocardiography and electroencephalography, improving energy use without sacrificing accuracy. The ultimate goal of this research is to enable smart autonomous healthcare devices.
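
To make this concrete, here is a minimal sketch of the binarization trick at the heart of BNNs (an illustration, not the speaker's implementation): weights and activations are constrained to ±1 in the forward pass, while a straight-through estimator lets gradients flow during training.

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)          # forward pass sees only +/-1 values

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return grad_out * (x.abs() <= 1).float()  # pass gradient inside [-1, 1]

class BinaryLinear(torch.nn.Linear):
    def forward(self, x):
        return torch.nn.functional.linear(
            BinarizeSTE.apply(x), BinarizeSTE.apply(self.weight), self.bias)

layer = BinaryLinear(256, 10)
out = layer(torch.randn(8, 256))      # multiplications reduce to sign flips
```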

Gavin Gray, Edinburgh University.
Comparing Low Complexity Linear Transforms
In response to the recent development of efficient dense layers, this talk discusses replacing the linear components of pointwise convolutions with structured linear decompositions, for substantial gains in the efficiency/accuracy tradeoff. Pointwise convolutions are fully connected layers and are thus natural candidates for replacement by structured transforms. Networks using such layers can learn the same tasks as those using standard convolutions, and provide Pareto-optimal benefits in efficiency/accuracy, both in terms of computation (mult-adds) and parameter count (and hence memory).
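
As one concrete instance of such a replacement, here is a sketch that swaps a dense 1×1 convolution for a low-rank factorization; the talk compares several structured transforms, and the rank below is an arbitrary illustrative choice:

```python
import torch

c_in, c_out, rank = 256, 256, 32
dense = torch.nn.Conv2d(c_in, c_out, kernel_size=1)   # c_in*c_out weights
lowrank = torch.nn.Sequential(                        # rank*(c_in+c_out) weights
    torch.nn.Conv2d(c_in, rank, kernel_size=1, bias=False),
    torch.nn.Conv2d(rank, c_out, kernel_size=1),
)
x = torch.randn(1, c_in, 14, 14)
# Same output shape, far fewer parameters and mult-adds:
print(dense(x).shape, lowrank(x).shape)
```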

David Rousseau, Aishik Ghosh, Laurent Basara, Biswajit Biswas. LAL Orsay, LRI Orsay, BITS University.
OPU+Particle Physics

LightOn's OPU is opening up a new machine learning paradigm. Two use cases have been selected to investigate the potential of the OPU for particle physics:
  • End-to-end learning: high-energy proton collisions at the Large Hadron Collider have been simulated, each collision being recorded as an image representing the energy flux in the detector. Two classes of events have been simulated: signal events created by a hypothetical supersymmetric particle, and background events produced by known processes. The task is to train a classifier to separate the signal from the background. Several techniques using the OPU will be presented and compared with more classical particle physics approaches.
  • Tracking: high-energy proton collisions at the LHC yield billions of records, each with typically 100,000 3D points corresponding to the trajectories of 10,000 particles. Various investigations of the potential of the OPU to digest this high-dimensional data will be reported.


Matthieu Puigt, Université du Littoral Côte d’Opale.
Accelerated Weighted (Nonnegative) Matrix Factorization with Random Projections
Random projections are among the major techniques used to process big data. They have been successfully applied to, e.g., (Nonnegative) Matrix Factorization ((N)MF). However, missing entries in the matrix to factorize (or, more generally, weights which model the confidence in the entries of the data matrix) prevent their use. In this talk, I will present the framework we recently proposed to solve this issue, i.e., to apply random projections to weighted (N)MF. We experimentally show that the proposed framework significantly speeds up state-of-the-art weighted NMF methods under some mild conditions.
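
For intuition, here is a rough sketch of the unweighted version of the idea: alternating least squares on randomly compressed views of the data, with clipping to keep the factors nonnegative. The weighted/missing-entries case, which is the subject of the talk, needs the machinery of the paper; the sizes below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r, k = 2000, 1500, 10, 60            # k ~ rank + oversampling
A = np.abs(rng.normal(size=(m, r))) @ np.abs(rng.normal(size=(r, n)))

L = rng.normal(size=(k, m))                # left and right random projections
R = rng.normal(size=(n, k))
LA, AR = L @ A, A @ R                      # compressed views, computed once

W = np.abs(rng.normal(size=(m, r)))
H = np.abs(rng.normal(size=(r, n)))
for _ in range(100):
    # fit H on the left-compressed problem  L A ~ (L W) H, then clip
    H = np.maximum(np.linalg.lstsq(L @ W, LA, rcond=None)[0], 0)
    # fit W on the right-compressed problem A R ~ W (H R), then clip
    W = np.maximum(np.linalg.lstsq((H @ R).T, AR.T, rcond=None)[0].T, 0)

print("relative error:", np.linalg.norm(A - W @ H) / np.linalg.norm(A))
```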



The workshop will take place at IPGG, 6 Rue Jean Calvin, 75005 Paris. The location is close to both the Place Monge and Censier-Daubenton subway stations on line 7, and to the Luxembourg station on the RER B line. It is also close to bus stops on the 21, 24, 27, 47, and 89 routes. Note that strikes are still ongoing, and some of these options may not be available.

We will be in the main amphitheater, downstairs on your right as you enter the building. Please register in advance on our meetup group to help us with the organization of the workshop.




Follow @NuitBlog or join the CompressiveSensing Reddit, the Facebook page, the Compressive Sensing group on LinkedIn  or the Advanced Matrix Factorization group on LinkedIn

Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email.

Other links:
Paris Machine Learning: Meetup.com || @Archives || LinkedIn || Facebook || @ParisMLGroup
About LightOn: Newsletter || @LightOnIO || on LinkedIn || on CrunchBase || our Blog
About myself: LightOn || Google Scholar || LinkedIn || @IgorCarron || Homepage || ArXiv

Wednesday, December 11, 2019

Tonight: Paris Machine Learning Meetup #2 Season 7: Symbolic Maths, Data Generation through GANs, "Prévisions Retards" @SNCF, Retail and AI, Rapids.ai Leveraging GPUs

** Nuit Blanche is now on Twitter: @NuitBlog **


A big thank you to Scaleway for hosting us in their inspiring office and sponsoring the networking event afterwards.


So this is quite exciting. Our meetup group has 7,999 members, and we are about to organize a meetup in a town paralyzed by strikes. Over the course of this meetup's existence, we have seen worse.

For those of you who will not be able to make it, all the information, the slides, and a link to the streaming are below:


Tabular data is the most common kind of data within companies. Generating synthetic data that respects the statistical properties of the original data has several applications: machine learning that respects data privacy, improving the robustness of a model against data drift, etc. Since 2018, there has been an increasing number of academic publications on applying GANs to this type of data, particularly patient medical data. We performed a proof of concept on real data and present the results of several models from the literature, namely the Wasserstein GAN, the Wasserstein GAN with Gradient Penalty, and the Cramér-GAN, with the objective of "model compatibility", i.e., the possibility of using synthetic data in place of real data to train a classifier. (A minimal sketch of the WGAN gradient penalty appears after the talk list below.)

2. Eloïse Nonne, Soumaya Ihihi, "Prévisions Retards", a Machine Learning project led by e.SNCF's Data IoT team.
Its goal is to integrate predictions of train delays into the SNCF mobile application. Every day, our model predicts delays for the next 7 days, at each stop, for every train in the Paris area network. The challenge of this project is to improve the reliability of passenger information and to provide more relevant routes for the application's users. We will present the project, from the definition of needs and exploratory data analysis to its industrialization in the cloud and the reliability of its predictions.

This talk focuses on AI and ML applications in retail. Discover how Carrefour is transforming itself through the introduction of the Google - Carrefour Lab, presented by Elina Ashkinazi-Ildis, Director of the Lab. Then go further with the "shelf-out detection" use case presented by Kasra Mansouri, Data Scientist at Artefact.

RAPIDS makes it possible to run end-to-end data science pipelines entirely on GPU architecture. It capitalizes on the parallelization capabilities of GPUs to accelerate data preprocessing pipelines, with a pandas-like dataframe syntax. GPU-optimized versions of scikit-learn algorithms are available, and RAPIDS also integrates with major deep learning frameworks.
This talk will present RAPIDS and its capabilities, and show how to integrate it into your pipelines. (A short usage sketch appears after the talk list below.)


Neural networks have a reputation for being better at solving statistical or approximate problems than at performing calculations or working with symbolic data. In this paper, we show that they can be surprisingly good at more elaborated tasks in mathematics, such as symbolic integration and solving differential equations. We propose a syntax for representing mathematical problems, and methods for generating large datasets that can be used to train sequence-to-sequence models. We achieve results that outperform commercial Computer Algebra Systems such as Matlab or Mathematica.
https://arxiv.org/abs/1912.01412
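
As promised above, here is a minimal sketch of the gradient penalty that distinguishes the Wasserstein GAN with Gradient Penalty from the original WGAN; the critic architecture and row dimension are placeholder assumptions. The critic's gradient norm is pushed toward 1 on random interpolates between real and synthetic rows:

```python
import torch

def gradient_penalty(critic, real, fake, lam=10.0):
    eps = torch.rand(real.size(0), 1)                 # per-row mixing weight
    mix = (eps * real + (1 - eps) * fake).requires_grad_(True)
    (grad,) = torch.autograd.grad(critic(mix).sum(), mix, create_graph=True)
    return lam * ((grad.norm(2, dim=1) - 1) ** 2).mean()

critic = torch.nn.Sequential(torch.nn.Linear(16, 64), torch.nn.ReLU(),
                             torch.nn.Linear(64, 1))  # toy critic on 16-d rows
real, fake = torch.randn(32, 16), torch.randn(32, 16)
loss = (critic(fake).mean() - critic(real).mean()
        + gradient_penalty(critic, real, fake))       # critic loss to minimize
```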
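
And here is the promised RAPIDS usage sketch. It requires a CUDA GPU with the cudf and cuml packages installed; the file and column names are made up for illustration:

```python
import cudf
from cuml.linear_model import LogisticRegression

# Data is read straight into GPU memory and manipulated with a
# pandas-like API; cuML mirrors the scikit-learn interface on GPU.
df = cudf.read_csv("transactions.csv")     # hypothetical dataset
df["amount_z"] = (df["amount"] - df["amount"].mean()) / df["amount"].std()
clf = LogisticRegression().fit(df[["amount_z"]], df["label"])
```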



Follow @NuitBlog or join the CompressiveSensing Reddit, the Facebook page, the Compressive Sensing group on LinkedIn  or the Advanced Matrix Factorization group on LinkedIn

Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email.

Other links:
Paris Machine Learning: Meetup.com || @Archives || LinkedIn || Facebook || @ParisMLGroup
About LightOn: Newsletter || @LightOnIO || on LinkedIn || on CrunchBase || our Blog
About myself: LightOn || Google Scholar || LinkedIn || @IgorCarron || Homepage || ArXiv

Wednesday, November 13, 2019

Paris Machine Learning Meetup #1 Season 7: Neuroscience & AI, Time Series, Deep Transfer Learning in NLP, Media Campaign, Energy Forecasting

** Nuit Blanche is now on Twitter: @NuitBlog **


A big thank-you to Publicis Sapient for welcoming us to their inspiring office. Presentation slides will be available here. The streaming of the event can also be found here:






Publicis Sapient is a digital transformation partner helping companies and established organizations get to their future, digitally-enabled state, both in the way they work and the way they serve their customers. Within Publicis Sapient, the Data Science Team builds machine learning products in order to support clients in their transformation.

Margot Fournier, Publicis Sapient. Classification of first time visitors
A significant share of visitors to a site do not return, making it crucial to identify levers that can decrease the bounce rate. For a client in the retail sector, we developed several models able to predict both the gender and the segment to which unlogged, unknown visitors belong. This makes it possible to personalize the experience from the first visit and prevent users from bouncing.

Maxence Brochard, Publicis Sapient. Media campaign optimization
Internet users leave multiple traces of micro-conversions (searches, clicks, wishlists...) during their visits to an e-commerce site: these micro-conversions can be weak signals of an act of purchase in the near future. To analyze those signals, we built a solution that detects visitors who are likely to convert and targets them while optimizing media campaign budgets.




The fields of neuroscience and artificial intelligence (AI) have a long and intertwined history. In more recent times, however, communication and collaboration between the two fields has become less commonplace. In https://bit.ly/2WLsMaQ, the authors argue that better understanding biological brains could play a vital role in building intelligent machines. They survey historical interactions between the AI and neuroscience fields and emphasize current advances in AI that have been inspired by the study of neural computation in humans and other animals. Finally, they conclude by highlighting shared themes that may be key for advancing future research in both fields.




The recent M4 forecasting competition (https://www.mcompetitions.unic.ac.cy) demonstrated that using one forecasting method alone is not the most efficient approach in terms of forecasting accuracy. In this talk, I will focus on an energy-consumption forecasting use case integrating exogenous data such as weather conditions and open data. In particular, I will present a forecasting time-series challenge and the best practices observed in the best submissions, and showcase an interesting approach based on a combination of classical statistical forecasting methods and machine learning algorithms, such as gradient boosting, for increased performance. Generalizing the use of these methods can be a major help in addressing the challenge of adjusting electricity demand and production.
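
In the spirit of that combination, here is a toy sketch on synthetic data: a seasonal-naive baseline corrected by a gradient-boosting model trained on its residuals with exogenous features. All numbers are made up for illustration:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
hours = np.arange(24 * 60)                       # 60 days of hourly load
temp = 10 + 8 * np.sin(2 * np.pi * hours / (24 * 365)) + rng.normal(0, 1, hours.size)
load = 50 + 10 * np.sin(2 * np.pi * hours / 24) - 0.8 * temp + rng.normal(0, 1, hours.size)

seasonal = np.roll(load, 24 * 7)                 # naive: same hour one week ago
X = np.column_stack([temp, hours % 24])          # exogenous + calendar features
split = 24 * 50                                  # train on 50 days, test on 10
gbr = GradientBoostingRegressor().fit(X[24*7:split], (load - seasonal)[24*7:split])
forecast = seasonal[split:] + gbr.predict(X[split:])   # baseline + correction
print("MAE:", np.abs(forecast - load[split:]).mean())
```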




Follow @NuitBlog or join the CompressiveSensing Reddit, the Facebook page, the Compressive Sensing group on LinkedIn  or the Advanced Matrix Factorization group on LinkedIn

Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email.

Other links:
Paris Machine Learning: Meetup.com || @Archives || LinkedIn || Facebook || @ParisMLGroup
About LightOn: Newsletter || @LightOnIO || on LinkedIn || on CrunchBase || our Blog
About myself: LightOn || Google Scholar || LinkedIn || @IgorCarron || Homepage || ArXiv

Saturday, November 09, 2019

Paris Machine Learning Meetup Hors Série #1: A Talk with François Chollet (Creator of the Keras Library)

** Nuit Blanche is now on Twitter: @NuitBlog **


The first Paris Machine Learning Meetup Hors Série of the season is a talk with François Chollet, creator of the Keras library.

This event is being recorded.


We thank Morning Coworking for hosting us and LightOn for their support in organizing this event.

Today, we welcome François Chollet. François is a researcher at Google and the creator of the Keras Deep Learning library (https://keras.io). He will talk to us about the new features of the TensorFlow library, and give us some insight into the latest Deep Learning research.

Schedule:
  • 2pm: Keras & TensorFlow for Deep Learning
  • 2:30pm: Q&A
  • 2:40pm: Latest research in Deep Learning
  • 2:50pm: Q&A
  • 3pm: Networking

NB:
1/ This is not a coding session.
2/ This event does not include a buffet (drinks, food).


Follow @NuitBlog or join the CompressiveSensing Reddit, the Facebook page, the Compressive Sensing group on LinkedIn  or the Advanced Matrix Factorization group on LinkedIn

Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email.

Other links:
Paris Machine Learning: Meetup.com || @Archives || LinkedIn || Facebook || @ParisMLGroup
About LightOn: Newsletter || @LightOnIO || on LinkedIn || on CrunchBase || our Blog
About myself: LightOn || Google Scholar || LinkedIn || @IgorCarron || Homepage || ArXiv

Friday, November 01, 2019

Videos: IMA Computational Imaging Workshop, October 14 - 18, 2019

** Nuit Blanche is now on Twitter: @NuitBlog ** 



Stanley Chan, Jeff Fessler, Justin Haldar, Ulugbek Kamilov, Saiprasad Ravishankar, Rebecca Willett, and Brendt Wohlberg just organized a workshop at IMA on computational imaging. A short story, as this blog just passed 8 million page views: the broad understanding of compressed sensing was, in large part (at least judging from the stats and hits on this blog), due to an IMA meeting on the subject and the fact that people could watch the videos afterward. Here's hoping this workshop follows the same path. Given the amount of ML in it, I wonder if it shouldn't have been called The Great Convergence meeting :-)


This workshop will serve as a venue for presenting and discussing recent advances and trends in the growing field of computational imaging, where computation is a major component of the imaging system. Research on all aspects of the computational imaging pipeline from data acquisition (including non-traditional sensing methods) to system modeling and optimization to image reconstruction, processing, and analytics will be discussed, with talks addressing theory, algorithms and mathematical techniques, and computational hardware approaches for imaging problems and applications including MRI, tomography, ultrasound, microscopy, optics, computational photography, radar, lidar, astronomical imaging, hybrid imaging modalities, and novel and extreme imaging systems. The expanding role of computational imaging in industrial imaging applications will also be explored.
Given the rapidly growing interest in data-driven, machine learning, and large-scale optimization based methods in computational imaging, the workshop will partly focus on some of the key recent and new theoretical, algorithmic, or hardware (for efficient/optimized computation) developments and challenges in these areas. Several talks will focus on analyzing, incorporating, or learning various models including sparse and low-rank models, kernel and nonlinear models, plug-and-play models, graphical, manifold, tensor, and deep convolutional or filterbank models in computational imaging problems. Research and discussion of methods and theory for new sensing techniques including data-driven sensing, task-driven imaging optimization, and online/real-time imaging optimization will be encouraged. Discussion sessions during the workshop will explore the theoretical and practical impact of various presented methods and brainstorm the main challenges and open problems.
The workshop aims to encourage close interactions between mathematical and applied computational imaging researchers and practitioners, and bring together experts in academia and industry working in computational imaging theory and applications, with focus on data and system modeling, signal processing, machine learning, inverse problems, compressed sensing, data acquisition, image analysis, optimization, neuroscience, computation-driven hardware design, and related areas, and facilitate substantive and cross-disciplinary interactions on cutting-edge computational imaging methods and systems.




Follow @NuitBlog or join the CompressiveSensing Reddit, the Facebook page, the Compressive Sensing group on LinkedIn  or the Advanced Matrix Factorization group on LinkedIn

Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email.

Other links:
Paris Machine Learning: Meetup.com || @Archives || LinkedIn || Facebook || @ParisMLGroup
About LightOn: Newsletter || @LightOnIO || on LinkedIn || on CrunchBase || our Blog
About myself: LightOn || Google Scholar || LinkedIn || @IgorCarron || Homepage || ArXiv

Thursday, October 10, 2019

Deep Compressed Sensing -implementation-

** Nuit Blanche is now on Twitter: @NuitBlog **

As promised back in May, the implementation of the Deep Compressed Sensing paper is now available.
Hi,
Thank you for your interest and your wait. Now the code accompanying our ICML paper is available at: https://github.com/deepmind/deepmind-research/tree/master/cs_gan
Best wishes,
Yan


Compressed sensing (CS) provides an elegant framework for recovering sparse signals from compressed measurements. For example, CS can exploit the structure of natural images and recover an image from only a few random measurements. CS is flexible and data efficient, but its application has been restricted by the strong assumption of sparsity and costly reconstruction process. A recent approach that combines CS with neural network generators has removed the constraint of sparsity, but reconstruction remains slow. Here we propose a novel framework that significantly improves both the performance and speed of signal recovery by jointly training a generator and the optimisation process for reconstruction via meta-learning. We explore training the measurements with different objectives, and derive a family of models based on minimising measurement errors. We show that Generative Adversarial Nets (GANs) can be viewed as a special case in this family of models. Borrowing insights from the CS perspective, we develop a novel way of improving GANs using gradient information from the discriminator.
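
As a rough illustration of the recovery loop the abstract describes, here is a minimal latent-optimization sketch: given measurements y = Ax, search the latent space of a generator G for a z minimizing ||A G(z) - y||². The paper meta-learns the generator and this inner optimization jointly; here the generator is an untrained stand-in and the step count is arbitrary.

```python
import torch

d, m, k = 784, 100, 32                       # signal dim, measurements, latent dim
G = torch.nn.Sequential(torch.nn.Linear(k, 256), torch.nn.ReLU(),
                        torch.nn.Linear(256, d))   # untrained stand-in generator
A = torch.randn(m, d) / m ** 0.5             # random measurement matrix
x_true = torch.randn(d)
y = A @ x_true                               # compressed measurements

z = torch.zeros(k, requires_grad=True)
opt = torch.optim.SGD([z], lr=0.1)
for _ in range(10):                          # the paper uses very few such steps
    opt.zero_grad()
    loss = ((A @ G(z) - y) ** 2).sum()       # measurement error drives the search
    loss.backward()
    opt.step()
x_hat = G(z).detach()                        # reconstruction
```
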
Follow @NuitBlog or join the CompressiveSensing Reddit, the Facebook page, the Compressive Sensing group on LinkedIn  or the Advanced Matrix Factorization group on LinkedIn


Other links:
Paris Machine Learning: Meetup.com || @Archives || LinkedIn || Facebook || @ParisMLGroup
About LightOn: Newsletter || @LightOnIO || on LinkedIn || on CrunchBase || our Blog
