Friday, May 29, 2020

Photonic Computing for Massively Parallel AI is out and it is spectacular!




It’s been a long time brewing, but we have just released our first white paper on Photonic Computing for Massively Parallel AI. The document features the technology we develop at LightOn, some of its uses, a few testimonials, and how we see the future of computing. It is downloadable here or from our website: LightOn.ai

Enjoy!



Follow @NuitBlog or join the CompressiveSensing Reddit, the Facebook page, the Compressive Sensing group on LinkedIn, or the Advanced Matrix Factorization group on LinkedIn.

Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email.

Other links:
Paris Machine Learning: Meetup.com || @Archives || LinkedIn || Facebook || @ParisMLGroup

Friday, May 15, 2020

Tackling Reinforcement Learning with the Aurora OPU

** Nuit Blanche is now on Twitter: @NuitBlog **



Martin Graive did an internship at LightOn and decided to investigate how to use Random Projections in the context of Reinforcement Learning. He just wrote a blog post on the matter entitled "Tackling Reinforcement Learning with the Aurora OPU". The attendant GitHub repo is located here. Enjoy!
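For readers curious about the mechanics, here is a minimal numpy sketch of the core idea, not taken from Martin's code: an OPU computes random features of the form |Wx|² for a fixed random matrix W, and a reinforcement learning agent can learn from these compact features instead of the raw high-dimensional observation. All names and dimensions below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

d_obs, d_feat = 4096, 256  # e.g. a flattened game frame -> a compact feature vector

# A dense i.i.d. Gaussian matrix stands in for the OPU's optical random transform.
W = rng.normal(size=(d_feat, d_obs)) / np.sqrt(d_feat)

def opu_features(obs):
    """Random features of the form |W @ obs|**2, the nonlinearity an OPU applies."""
    return np.abs(W @ obs) ** 2

obs = rng.normal(size=d_obs)   # stand-in for one environment observation
phi = opu_features(obs)        # the agent (e.g. a linear Q-learner) works from phi
print(phi.shape)
```

The point of the projection is that the downstream learner only ever touches a 256-dimensional vector, however large the raw observation is.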




About LightOn: Newsletter || @LightOnIO || on LinkedIn || on CrunchBase || our Blog
About myself: LightOn || Google Scholar || LinkedIn || @IgorCarron || Homepage || ArXiv

Wednesday, April 29, 2020

3-year PhD studentship in Inverse Problems and Optical Computing, LightOn, Paris, France



Come and join us at LightOn: we have a 3-year PhD fellowship available for someone who can help us build our future photonic cores. Here is the announcement:


As part of the newly EU-funded ITN project “Post-Digital”, LightOn has an opening for a fully funded 3-year Ph.D. studentship to join its R&D team, at the crossroads between Computer Science and Physics.

The goal of this 3-year Ph.D. position is to theoretically, numerically, and experimentally investigate how optimization techniques can be used in the design of hybrid computing pipelines that include a number of photonic building blocks (“photonic cores”). In particular, the optimized networks will be used to solve large-scale physics-based inverse problems in science and engineering, for instance in medical imaging (e.g. ultrasound) or simulation problems. The candidate will first investigate how LightOn’s current range of photonic co-processors can be integrated within task-specific networks. The candidate will then develop a computational framework for the optimization of electro-optical systems. Finally, optimized systems will be built and evaluated on experimental data. This project will be part of LightOn’s internal THEIA project, which aims at automating the design of hybrid computing architectures, including combinations of LightOn’s photonic cores and traditional silicon chips.

In the framework of the EU funded ITN Post-Digital network, this project involves collaborations and 3-month secondments with two research groups led by:
  • Daniel Brunner (Université Bourgogne Franche-Comté / FEMTO-ST Besançon), who will be the academic supervisor - The candidate will be registered as a Ph.D. student at UBFC.
  • Pieter Bienstman (IMEC, Leuven, Belgium).
The supervisor at LightOn will be Laurent Daudet, CTO, currently on leave from his position as professor of physics at Université de Paris.

Due to the EU funding source, please make sure you comply with the mobility and eligibility rules before applying. Application: the position is to be filled no later than September 1st, 2020.

Send your application with a CV to jobs@lighton.io with [Post-Digital PhD] in the subject line. Shortlisted applicants will be asked to provide references. This project has received funding from the European Union’s Horizon 2020 research and innovation program under the Marie Skłodowska-Curie grant agreement No 860830.

For more information: https://lighton.ai/careers/



Tuesday, April 07, 2020

LightOn Cloud 2.0 featuring LightOn Aurora OPUs




At LightOn, we just launched LightOn Cloud 2.0, which features several Aurora Optical Processing Units for use by the machine learning community. The blog post about it can be found here. You can request access to the Cloud at https://cloud.lighton.ai/

We also have a LightOn Cloud for Research program: https://cloud.lighton.ai/lighton-research/






Thursday, March 26, 2020

Accelerating SARS-CoV-2 Molecular Dynamics Studies with Optical Random Features




We just published a new blog post at LightOn. This time, we used LightOn's Optical Processing Unit to show how our hardware can help speed up global sampling studies that rely on Molecular Dynamics simulations, as in the case of metadynamics. Our engineer, Amélie Chatelain, wrote a blog post about it, available here: Accelerating SARS-CoV-2 Molecular Dynamics Studies with Optical Random Features

We showed that LightOn's OPU, in tandem with the NEWMA algorithm, becomes very attractive, compared to CPU implementations of Random Fourier Features and FastFood, for simulations featuring more than 4,000 atoms.

Because building computational hardware makes no sense if we don't have a community that lifts us, the code used to generate the plots in that blog post is publicly available at the following link: https://github.com/lightonai/newma-md.
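To make the idea concrete, here is a small self-contained numpy sketch of NEWMA-style change-point detection on a synthetic stream: two exponentially weighted moving averages of random Fourier features, with different forgetting factors, drift apart when the underlying distribution changes. The dimensions, forgetting factors, and the Gaussian stand-in for the OPU transform are illustrative assumptions, not the settings used in the post.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 3, 256                      # input dimension, number of random features
W = 0.5 * rng.normal(size=(m, d))  # random frequencies (the OPU replaces this step at scale)
b = rng.uniform(0.0, 2.0 * np.pi, size=m)

def features(x):
    """Random Fourier features approximating a Gaussian kernel mean embedding."""
    return np.cos(W @ x + b) / np.sqrt(m)

lam_fast, lam_slow = 0.2, 0.02     # two forgetting factors, lam_fast > lam_slow
z_fast = np.zeros(m)
z_slow = np.zeros(m)
stats = []

# Synthetic stream: the distribution shifts at t = 200, mimicking e.g. a
# conformational change observed along a molecular dynamics trajectory.
stream = np.vstack([rng.normal(0.0, 1.0, size=(200, d)),
                    rng.normal(3.0, 1.0, size=(200, d))])

for x in stream:
    f = features(x)
    z_fast = (1.0 - lam_fast) * z_fast + lam_fast * f
    z_slow = (1.0 - lam_slow) * z_slow + lam_slow * f
    stats.append(np.linalg.norm(z_fast - z_slow))

stats = np.array(stats)
# The detection statistic should peak shortly after the change point at t = 200.
print(int(np.argmax(stats)))
```

The fast average tracks the new distribution within a few steps while the slow one lags behind, so their distance spikes right where the stream changes; the OPU's contribution is computing the feature map at scales where CPUs struggle.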


Saturday, March 14, 2020

Au Revoir Backprop ! Bonjour Optical Transfer Learning !



We recently used LightOn's Optical Processing Unit to show how our hardware fares in the context of transfer learning. Our engineer, Luca Tommasone, wrote a blog post about it, available here: Au Revoir Backprop! Bonjour Optical Transfer Learning!

Because building computational hardware makes no sense if we don't have a community that lifts us, the code used to generate the plots in that blog post is publicly available at the following link: https://github.com/lightonai/transfer-learning-opu.
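As a rough illustration of the recipe (not Luca's actual code): transfer learning with an OPU replaces backprop fine-tuning with three cheap steps, binarize the features produced by a frozen convolutional backbone, expand them with a fixed random nonlinear projection, and fit a linear classifier on top. The numpy sketch below simulates the optical transform with a Gaussian matrix and uses synthetic "CNN features"; every dimension and threshold is an assumption made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, d_cnn, m = 400, 200, 512, 256

def make_split(n):
    """Synthetic stand-in for features from a frozen CNN: two classes, shifted means."""
    y = rng.integers(0, 2, size=n)
    X = rng.normal(size=(n, d_cnn)) + 0.3 * y[:, None]
    return X, y

X_tr, y_tr = make_split(n_train)
X_te, y_te = make_split(n_test)

# Step 1: binarize (the OPU takes binary input).
B_tr = (X_tr > 0).astype(float)
B_te = (X_te > 0).astype(float)

# Step 2: fixed random nonlinear projection, |BW|^2, as an OPU computes.
W = rng.normal(size=(d_cnn, m)) / np.sqrt(m)
F_tr = np.abs(B_tr @ W) ** 2
F_te = np.abs(B_te @ W) ** 2

# Step 3: linear read-out trained by least squares on +/-1 targets.
A = np.hstack([F_tr, np.ones((n_train, 1))])          # add a bias column
coef, *_ = np.linalg.lstsq(A, 2.0 * y_tr - 1.0, rcond=None)
pred = (np.hstack([F_te, np.ones((n_test, 1))]) @ coef) > 0
acc = np.mean(pred == (y_te == 1))
print(round(acc, 2))
```

No gradient ever flows through the random projection, which is what makes the approach attractive on fixed optical hardware.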




Enjoy and, most importantly, stay safe!






Wednesday, January 15, 2020

Beyond Overfitting and Beyond Silicon: The double descent curve


We recently tried a small experiment with LightOn's Optical Processing Unit on the issue of generalization. Our engineer, Alessandro Cappelli, ran the experiment and wrote a blog post about it, available here: Beyond Overfitting and Beyond Silicon: The Double Descent Curve
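The double descent curve itself is easy to reproduce at small scale with random ReLU features and a minimum-norm least-squares fit: test error climbs as the number of random features approaches the number of training samples (the interpolation threshold), then falls again in the overparameterized regime. The sketch below is a generic illustration of the phenomenon, not Alessandro's experiment; all sizes are arbitrary.

```python
import numpy as np

def trial(p, seed, n=100, d=10, noise=0.5):
    """Test MSE of minimum-norm least squares on p random ReLU features."""
    g = np.random.default_rng(seed)
    w_true = g.normal(size=d)
    X_tr, X_te = g.normal(size=(n, d)), g.normal(size=(200, d))
    y_tr = X_tr @ w_true + noise * g.normal(size=n)
    y_te = X_te @ w_true
    V = g.normal(size=(d, p)) / np.sqrt(d)               # fixed random first layer
    F_tr, F_te = np.maximum(X_tr @ V, 0.0), np.maximum(X_te @ V, 0.0)
    beta, *_ = np.linalg.lstsq(F_tr, y_tr, rcond=None)   # minimum-norm fit
    return np.mean((F_te @ beta - y_te) ** 2)

widths = [20, 100, 1000]   # under-, critically-, and over-parameterized
mse = {p: float(np.mean([trial(p, s) for s in range(10)])) for p in widths}
# The error at p = n = 100, the interpolation threshold, should dominate.
print(mse[100] > mse[1000])
```

Near p = n the feature matrix is barely invertible, so the interpolating solution amplifies label noise wildly; past the threshold the minimum-norm solution is implicitly regularized and the error comes back down.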

Two days ago, Becca Willett was talking on the same subject at the Turing Institute in London.

A Function Space View of Overparameterized Neural Networks, by Rebecca Willett.



The attendant preprint is here:

A Function Space View of Bounded Norm Infinite Width ReLU Nets: The Multivariate Case by Greg Ongie, Rebecca Willett, Daniel Soudry, Nathan Srebro
A key element of understanding the efficacy of overparameterized neural networks is characterizing how they represent functions as the number of weights in the network approaches infinity. In this paper, we characterize the norm required to realize a function f: R^d → R as a single hidden-layer ReLU network with an unbounded number of units (infinite width), but where the Euclidean norm of the weights is bounded, including precisely characterizing which functions can be realized with finite norm. This was settled for univariate functions in Savarese et al. (2019), where it was shown that the required norm is determined by the L1-norm of the second derivative of the function. We extend the characterization to multivariate functions (i.e., networks with d input units), relating the required norm to the L1-norm of the Radon transform of a (d+1)/2-power Laplacian of the function. This characterization allows us to show that all functions in the Sobolev spaces W^{s,1}(R^d), s ≥ d+1, can be represented with bounded norm, to calculate the required norm for several specific functions, and to obtain a depth separation result. These results have important implications for understanding generalization performance and the distinction between neural networks and more traditional kernel learning.



Wednesday, December 18, 2019

LightOn’s AI Research Workshop — FoRM #4: The Future of Random Matrices. Thursday, December 19th





Tomorrow we will host LightOn’s 4th AI Research workshop, on the Future of Random Matrices (FoRM). It starts at 2pm on Thursday, December 19th (that’s 2pm CET/Paris, 1pm GMT/UTC/London, 8am EST/NY-Montreal, 5am PST/California, 9pm UTC+8/Shenzhen). We have an exciting and diverse line-up with talks on compressive learning, binarized neural networks, particle physics, and matrix factorization.

Feel free to join us, or to catch the event livestream — link to be available on this page on the day of the event.


Without further ado, here is the program:


Program
  • 1:45pm — Welcome coffee and opening. A short introduction about LightOn, Igor Carron
  • 2:00pm — Compressive Learning with Random Projections, Ata Kaban
  • 2:45pm — Medical Applications of Low Precision Neuromorphic Systems, Bogdan Penkovsky
  • 3:30pm — Comparing Low Complexity Linear Transforms, Gavin Gray
  • 4:00pm — Coffee break and discussions
  • 4:20pm — LightOn’s OPU + Particle Physics, David Rousseau, Aishik Ghosh, Laurent Basara, Biswajit Biswas
  • 5:00pm — Accelerated Weighted (Nonnegative) Matrix Factorization with Random Projections, Matthieu Puigt
  • 5:45pm — Wrapping-up and beers on our rooftop


Talks and abstracts

Ata Kaban, University of Birmingham.
Compressive Learning with Random Projections
By direct analogy to compressive sensing, compressive learning was originally coined to mean learning efficiently from random projections of high-dimensional massive data sets that have a sparse representation. In this talk we discuss compressive learning without the sparse representation requirement, where instead we exploit the natural structure of learning problems.
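A one-screen illustration of why random projections preserve enough geometry to learn from: by the Johnson-Lindenstrauss lemma, a random linear map to a much lower dimension approximately preserves all pairwise distances. The sizes below are arbitrary.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n, d, k = 40, 2000, 800                    # n points in dimension d, projected to k << d

X = rng.normal(size=(n, d))
P = rng.normal(size=(d, k)) / np.sqrt(k)   # Johnson-Lindenstrauss projection matrix
Y = X @ P

# Ratio of projected to original distance, over all pairs of points.
ratios = [np.linalg.norm(Y[i] - Y[j]) / np.linalg.norm(X[i] - X[j])
          for i, j in combinations(range(n), 2)]
print(round(min(ratios), 2), round(max(ratios), 2))  # all close to 1
```

Since distances survive the projection, any learner that depends on the geometry of the data (nearest neighbors, kernels, linear separators with margin) can operate on the compressed points.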

Bogdan Penkovsky, Paris-Sud University.
Medical Applications of Low Precision Neuromorphic Systems
The advent of deep learning has considerably accelerated machine learning development, but deployment at the edge is limited by its high energy cost and memory requirements. With new memory technologies available, emerging Binarized Neural Networks (BNNs) promise to reduce the energy impact of the forthcoming machine learning hardware generation, enabling machine learning on edge devices and avoiding data transfer over the network. In this talk we will discuss strategies to apply BNNs to biomedical signals such as electrocardiography and electroencephalography, without sacrificing accuracy while improving energy use. The ultimate goal of this research is to enable smart autonomous healthcare devices.
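The trick behind BNN efficiency is that a dot product of two {-1, +1} vectors reduces to bit operations: a·b = n - 2·popcount(a XOR b), so multiply-accumulates become XNOR and popcount, which is what makes low-precision hardware so frugal. A small numpy check of the identity (illustrative, not from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)

def binarize(x):
    """Map real values to the {-1, +1} codes used by binarized layers."""
    return np.where(x >= 0, 1, -1)

n = 64
a = binarize(rng.normal(size=n))
b = binarize(rng.normal(size=n))

dot = int(a @ b)                # what a full-precision MAC unit would compute
hamming = int(np.sum(a != b))   # popcount(a XOR b) on bit-packed hardware
assert dot == n - 2 * hamming   # the identity BNN accelerators exploit
print(dot, hamming)
```

A binarized dense layer is just this identity applied row by row, followed by a sign activation, so the whole forward pass runs in integer bit arithmetic.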

Gavin Gray, Edinburgh University.
Comparing Low Complexity Linear Transforms
In response to the development of recent efficient dense layers, this talk discusses replacing linear components in pointwise convolutions with structured linear decompositions for substantial gains in the efficiency/accuracy tradeoff. Pointwise convolutions are fully connected layers and are thus prepared for replacement by structured transforms. Networks using such layers are able to learn the same tasks as those using standard convolutions, and provide Pareto-optimal benefits in efficiency/accuracy, both in terms of computation (mult-adds) and parameter count (and hence memory).
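To see the arithmetic behind the efficiency claim: a pointwise (1×1) convolution is just a C_out × C_in matrix applied at every pixel, so replacing that dense matrix with a structured decomposition, e.g. a rank-r factorization B·A, cuts parameters and mult-adds from C_out·C_in to r·(C_in + C_out). A sketch (the rank and channel counts are arbitrary, and the talk considers other structured transforms as well):

```python
import numpy as np

rng = np.random.default_rng(0)
c_in, c_out, r = 256, 256, 32
h, w = 8, 8                                # spatial size of the feature map

x = rng.normal(size=(c_in, h * w))         # feature map, channels x pixels

# Dense pointwise convolution: one c_out x c_in matrix applied at every pixel.
W = rng.normal(size=(c_out, c_in))
y_dense = W @ x

# Structured replacement: rank-r factorization W ~ B @ A.
A = rng.normal(size=(r, c_in))
B = rng.normal(size=(c_out, r))
y_lowrank = B @ (A @ x)                    # two thin matmuls instead of one dense one

dense_params = c_out * c_in                # 65536
lowrank_params = r * (c_in + c_out)        # 16384, a 4x reduction here
print(dense_params, lowrank_params)
```

The same accounting applies per pixel to the mult-add count, which is where the efficiency/accuracy trade-off in the talk comes from.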

David Rousseau, Aishik Ghosh, Laurent Basara, Biswajit Biswas. LAL Orsay, LRI Orsay, BITS University.
OPU+Particle Physics

LightOn’s OPU opens up a new machine learning paradigm. Two use cases have been selected to investigate the potential of the OPU for particle physics:
  • End-to-End learning: high-energy proton collisions at the Large Hadron Collider have been simulated, each collision being recorded as an image representing the energy flux in the detector. Two classes of events have been simulated: signal events are created by a hypothetical supersymmetric particle, and background events by known processes. The task is to train a classifier to separate the signal from the background. Several techniques using the OPU will be presented and compared with more classical particle physics approaches.
  • Tracking: high-energy proton collisions at the LHC yield billions of events, each with typically 100,000 3D points corresponding to the trajectories of 10,000 particles. Various investigations of the potential of the OPU to digest this high-dimensional data will be reported.


Matthieu Puigt, Université du Littoral Côte d’Opale.
Accelerated Weighted (Nonnegative) Matrix Factorization with Random Projections
Random projections are among the major techniques used to process big data. They have been successfully applied to, e.g., (Nonnegative) Matrix Factorization ((N)MF). However, missing entries in the matrix to factorize (or, more generally, weights that model the confidence in the entries of the data matrix) prevent their use. In this talk, I will present the framework we recently proposed to solve this issue, i.e., to apply random projections to weighted (N)MF. We show experimentally that the proposed framework significantly speeds up state-of-the-art weighted NMF methods under some mild conditions.
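As background for the talk, here is a minimal numpy implementation of weighted NMF by multiplicative updates, the baseline that the random-projection framework accelerates (the acceleration itself is the talk's contribution and is not reproduced here). Sizes and the weight pattern are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 60, 40, 5
eps = 1e-9

X = rng.random((m, r)) @ rng.random((r, n))    # nonnegative data with low-rank structure
V = (rng.random((m, n)) < 0.7).astype(float)   # weights: 1 = observed entry, 0 = missing

W = rng.random((m, r))
H = rng.random((r, n))

def werr(W, H):
    """Weighted Frobenius reconstruction error on the observed entries."""
    return np.linalg.norm(V * (X - W @ H))

err0 = werr(W, H)
for _ in range(200):
    # Multiplicative updates for the weighted objective ||V * (X - WH)||_F^2
    H *= (W.T @ (V * X)) / (W.T @ (V * (W @ H)) + eps)
    W *= ((V * X) @ H.T) / ((V * (W @ H)) @ H.T + eps)
err1 = werr(W, H)
print(err1 < err0)
```

The weight matrix V is exactly what blocks a naive random-projection compression of X, since projecting mixes observed and missing entries; handling that interaction is the point of the proposed framework.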



The workshop will take place at IPGG, 6 Rue Jean Calvin, 75005 Paris. The location is close to both the Place Monge and the Censier-Daubenton subway stations on line 7. It is also close to the Luxembourg station on the RER B line, and to bus stops on the 21, 24, 27, 47, and 89 routes. Note that strikes are still ongoing, and some of these options may not be available.

We will be in the main amphitheater, downstairs on your right as you enter the building. Please register in advance on our Meetup group to help us with the organization of the workshop.





Wednesday, December 11, 2019

Tonight: Paris Machine Learning Meetup #2, Season 7: Symbolic Maths, Data Generation through GANs, "Prévisions Retards" @SNCF, Retail and AI, Rapids.ai Leveraging GPUs



A big thank you to Scaleway for hosting us in their inspiring office and sponsoring the networking event afterwards.


So this is quite exciting: our meetup group has 7,999 members, and we are about to organize a meetup in a town paralyzed by strikes. Over the course of this meetup's existence, we have seen worse.

For those of you who cannot make it, all the information, slides, and a link to the streaming are below:


Tabular data are the most common within companies. Generating synthetic data that respects the statistical properties of the original data has several applications: machine learning that respects data privacy, improving the robustness of a model with respect to data drift, etc. Since 2018, there has been an increasing number of academic publications on the use of GANs for this type of data, particularly patient medical data. We performed a proof of concept on real data and present the results of several models from the literature, namely the Wasserstein GAN, the Wasserstein GAN with Gradient Penalty, and the Cramér GAN, with the objective of "model compatibility", i.e., the possibility of using synthetic data in place of real data to train a classifier.

2. Eloïse Nonne, Soumaya Ihihi: "Prévisions Retards", a machine learning project led by e.SNCF's Data IoT team.
Its goal is to integrate predictions of train delays into the SNCF mobile application. Every day, our model predicts delays for the next 7 days, at each stop, for every train in the Paris-area network. The challenge of this project is to improve the reliability of passenger information and to provide more relevant routes for the application's users. We will present the project, from the definition of needs and exploratory data analysis to its industrialization in the cloud and the reliability of its predictions.

This talk focuses on AI and ML applications in retail. Discover how Carrefour is transforming itself through the introduction of the Google-Carrefour Lab, presented by Elina Ashkinazi-Ildis, Director of the Lab. Then go further with the "shelf-out detection" use case presented by Kasra Mansouri, Data Scientist at Artefact.

RAPIDS makes it possible to have end-to-end data science pipelines run entirely on GPU architecture. It capitalizes on the parallelization capabilities of GPUs to accelerate data preprocessing pipelines, with a pandas-like dataframe syntax. GPU-optimized versions of scikit-learn algorithms are available, and RAPIDS also integrates with major deep learning frameworks.
This talk will present RAPIDS and its capabilities, and how to integrate it in your pipelines.


Neural networks have a reputation for being better at solving statistical or approximate problems than at performing calculations or working with symbolic data. In this paper, we show that they can be surprisingly good at more elaborate tasks in mathematics, such as symbolic integration and solving differential equations. We propose a syntax for representing mathematical problems, and methods for generating large datasets that can be used to train sequence-to-sequence models. We achieve results that outperform commercial Computer Algebra Systems such as Matlab or Mathematica.
https://arxiv.org/abs/1912.01412




Wednesday, November 13, 2019

Paris Machine Learning Meetup #1 Season 7: Neuroscience & AI, Time Series, Deep Transfer Learning in NLP, Media Campaigns, Energy Forecasting



A big thank-you to Publicis Sapient for welcoming us to their inspiring office. Presentation slides will be available here. The streaming of the event can also be found here:






Publicis Sapient is a digital transformation partner helping companies and established organizations get to their future, digitally-enabled state, both in the way they work and the way they serve their customers. Within Publicis Sapient, the Data Science Team builds machine learning products in order to support clients in their transformation.

Margot Fournier, Publicis Sapient. Classification of first time visitors
A significant share of visitors to a site never return, making it crucial to identify levers that can decrease the bounce rate. For a client in the retail sector, we developed several models able to predict both the gender and the segment into which unlogged, unknown visitors fit. This makes it possible to personalize the experience from the first visit and keep users from bouncing.

Maxence Brochard, Publicis Sapient. Media campaign optimization
Internet users leave multiple traces of micro-conversions (searches, clicks, wishlists...) during their visits to an e-commerce site: these micro-conversions can be weak signals of an upcoming purchase. To analyze those signals, we built a solution to detect visitors who are likely to convert and to target them while optimizing media campaign budgets.




The fields of neuroscience and artificial intelligence (AI) have a long and intertwined history. In more recent times, however, communication and collaboration between the two fields has become less commonplace. In https://bit.ly/2WLsMaQ, the authors argue that better understanding biological brains could play a vital role in building intelligent machines. They survey historical interactions between the AI and neuroscience fields and emphasize current advances in AI that have been inspired by the study of neural computation in humans and other animals. Finally, they conclude by highlighting shared themes that may be key for advancing future research in both fields.




The recent M4 forecasting competition (https://www.mcompetitions.unic.ac.cy) has demonstrated that using one forecasting method alone is not the most efficient approach in terms of forecasting accuracy. In this talk, I will focus on an energy consumption forecasting use case integrating exogenous data such as weather conditions and open data. In particular, I will present a forecasting time series challenge and the best practices observed in the best submissions, and showcase an interesting approach based on a combination of classical statistical forecasting methods and machine learning algorithms, such as gradient boosting, for increased performance. Generalizing the use of these methods can be a major help in addressing the challenge of adjusting electricity demand and production.
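The combination idea is simple to demonstrate: average a purely statistical forecast (seasonal naive) with a model driven by exogenous data (here, an illustrative linear fit on temperature). A convexity argument guarantees the average is never worse than the worse of the two. Everything below is synthetic and illustrative, not material from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 28 * 10                                     # 10 weeks of daily data
t = np.arange(T)
temp = 10 + 8 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 2, T)

# Synthetic electricity demand: weekly seasonality + temperature effect + noise
y = 100 + 10 * np.sin(2 * np.pi * t / 7) - 1.5 * temp + rng.normal(0, 1, T)

split = T - 28                                  # hold out the last 4 weeks
# Model A, statistical: rolling one-week-ahead seasonal naive (repeat day t-7)
pred_a = y[split - 7:T - 7]
# Model B, exogenous: linear regression of demand on temperature
slope, intercept = np.polyfit(temp[:split], y[:split], 1)
pred_b = intercept + slope * temp[split:]
# Combination: simple average of the two forecasts
pred_c = 0.5 * (pred_a + pred_b)

mse = lambda p: np.mean((y[split:] - p) ** 2)
print(mse(pred_c) <= max(mse(pred_a), mse(pred_b)))
```

The printed inequality holds by Cauchy-Schwarz: the MSE of an average of two forecasts is at most the square of the average of their root errors, hence never exceeds the larger MSE; in practice the combination often beats both, which is the lesson the M4 competition drew at scale.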




