Wednesday, June 20, 2018

Deep Mesh Projectors for Inverse Problems - implementation -

Ivan just let me know of the following instance of the Great Convergence:
Dear Igor,

A few weeks ago you featured two interesting papers that use random projections to train robust convnets (http://nuit-blanche.blogspot.com/2018/05/adversarial-noise-layer-regularize.html).

I wanted to let you know about our related work that is a bit different in spirit: we learn to solve severely ill-posed inverse problems by learning to reconstruct low-dimensional projections of the unknown model instead of the full model. When we choose the low-dimensional subspaces to be piecewise-constant on random meshes, the projected inverse maps are much simpler to learn (in terms of Lipschitz stability constants, say), leading to a comparably better behaved inverse.

If you’re interested, the paper is here:

https://arxiv.org/abs/1805.11718

and the code here:

https://github.com/swing-research/deepmesh

I would be grateful if you could advertise the work on Nuit Blanche.

Best wishes,
Ivan
thanks Ivan !




We develop a new learning-based approach to ill-posed inverse problems. Instead of directly learning the complex mapping from the measured data to the reconstruction, we learn an ensemble of simpler mappings from data to projections of the unknown model into random low-dimensional subspaces. We form the reconstruction by combining the estimated subspace projections. Structured subspaces of piecewise-constant images on random Delaunay triangulations allow us to address inverse problems with extremely sparse data and still get good reconstructions of the unknown geometry. This choice also makes our method robust against arbitrary data corruptions not seen during training. Further, it marginalizes the role of the training dataset which is essential for applications in geophysics where ground-truth datasets are exceptionally scarce.
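To make the subspace construction concrete, here is a minimal numpy/scipy sketch of one such projection: sample a random planar mesh, triangulate it with Delaunay, and replace each pixel by the mean of its triangle. This only illustrates the idea, it is not the authors' code (linked above); the point count and the image handling are arbitrary choices.

```python
import numpy as np
from scipy.spatial import Delaunay

def random_mesh_projection(img, n_points=50, seed=0):
    """Orthogonal projection of an image onto functions that are
    piecewise constant on a random Delaunay mesh (per-cell mean)."""
    h, w = img.shape
    rng = np.random.default_rng(seed)
    # Random vertices plus the unit-square corners, so every pixel is covered.
    pts = np.vstack([rng.uniform(0, 1, (n_points, 2)),
                     [[0, 0], [0, 1], [1, 0], [1, 1]]])
    tri = Delaunay(pts)
    # Assign each pixel centre to its triangle.
    ys, xs = np.mgrid[0:h, 0:w]
    centres = np.column_stack([((xs + 0.5) / w).ravel(),
                               ((ys + 0.5) / h).ravel()])
    labels = tri.find_simplex(centres)   # >= 0 everywhere: corners cover all
    # Per-triangle means, broadcast back to the pixels.
    sums = np.bincount(labels, weights=img.ravel(), minlength=tri.nsimplex)
    counts = np.bincount(labels, minlength=tri.nsimplex)
    return (sums / np.maximum(counts, 1))[labels].reshape(h, w)
```

In the paper's setting, the networks are trained to predict such projections directly from the measured data, one per random subspace, and the full reconstruction is assembled by combining the estimated projections.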


Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !

Monday, June 18, 2018

q-Neurons: Neuron Activations based on Stochastic Jackson's Derivative Operators



Ke just sent me the following:

Hello Igor,

I am doing machine learning and information geometry. I have been following your blog and have enjoyed the many interesting posts.


We have just built a new type of stochastic neuron:
https://arxiv.org/abs/1806.00149
which is super simple to implement while showing consistently better performance.


If possible, could you kindly make a small announcement about it on nuit-blanche? Thanks in advance!

Best,
Ke

https://courbure.com/k
Thanks Ke !


q-Neurons: Neuron Activations based on Stochastic Jackson's Derivative Operators by Frank Nielsen, Ke Sun

We propose a new generic type of stochastic neurons, called q-neurons, that considers activation functions based on Jackson's q-derivatives with stochastic parameters q. Our generalization of neural network architectures with q-neurons is shown to be both scalable and very easy to implement. We demonstrate experimentally consistently improved performances over state-of-the-art standard activation functions, both on training and testing loss functions.
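For intuition, here is a small numpy sketch of the construction as I read it: Jackson's q-derivative D_q f(x) = (f(qx) - f(x)) / ((q-1)x) tends to f'(x) as q goes to 1, and a q-neuron draws q = 1 + eps stochastically at each evaluation. The x * D_q f(x) form below (which needs no division by x) and the noise model are my assumptions; see the paper for the exact construction.

```python
# Sketch of a stochastic q-neuron activation (my reading of the paper,
# not the authors' reference code). The expression computed is
#   x * D_q f(x) = (f(q*x) - f(x)) / (q - 1),
# which is well defined at x = 0 and tends to x * f'(x) as q -> 1.
import numpy as np

def q_activation(f, x, scale=0.1, eps_min=1e-3, rng=None):
    rng = rng or np.random.default_rng(0)
    # Stochastic q = 1 + eps, with |eps| kept away from 0 so the
    # difference quotient stays finite.
    eps = rng.normal(0.0, scale, size=x.shape)
    eps = np.where(eps >= 0, 1.0, -1.0) * np.maximum(np.abs(eps), eps_min)
    return (f((1.0 + eps) * x) - f(x)) / eps   # = x * D_q f(x)

relu = lambda x: np.maximum(x, 0.0)
x = np.linspace(-2.0, 2.0, 5)
# For ReLU this recovers relu(x) exactly (x * relu'(x) = relu(x));
# for smooth f it gives a stochastic perturbation of x * f'(x).
print(q_activation(relu, x))
```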




Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Friday, June 15, 2018

PhD and Postdoc positions KU Leuven (ERC Advanced grant E-DUALITY)

Johan just asked me the following:
Dear Igor,

could you please announce these vacancies on Nuit Blanche.

Best regards,
Johan
Sure Johan !
---------------------------------------------

The research group KU Leuven ESAT-STADIUS is currently offering 3 PhD and 3 Postdoc (1 year, extendable) positions within the framework of the ERC (European Research Council) Advanced Grant E-DUALITY http://www.esat.kuleuven.be/stadius/E (PI: Johan Suykens) on Exploring Duality for Future Data-driven Modelling.

Within this ERC project E-DUALITY we aim at realizing a powerful and unifying framework (including e.g. kernel methods, support vector machines, deep learning, multilayer networks, tensor-based models and others) for handling different system complexity levels, obtaining optimal model representations and designing efficient algorithms.

The research positions relate to the following possible topics:
  1. Duality principles
  2. Multiple data sources and coupling schemes
  3. Manifold learning and semi-supervised schemes
  4. Optimal prediction schemes
  5. Scalability, on-line updating, interpretation and visualization
  6. Mathematical foundations
  7. Matching model to system characteristics

For further information and to apply online, see
https://www.kuleuven.be/personeel/jobsite/jobs/54681979 (PhD positions) and
https://www.kuleuven.be/personeel/jobsite/jobs/54681807 (Postdoc positions)
(click EN for English version).

The research group ESAT-STADIUS http://www.esat.kuleuven.be/stadius at the university KU Leuven Belgium provides an excellent research environment, being active in the broad area of mathematical engineering, including data-driven modelling, neural networks and machine learning, nonlinear systems and complex networks, optimization, systems and control, signal processing, bioinformatics and biomedicine.





Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Thursday, June 14, 2018

Paris Machine Learning #9, Season 5: Adaptive Learning, Emotion Recognition, Search, AI and High Energy Physics, How Kids code AI?


This is the last regular meetup of season 5 of the Paris Machine Learning meetup (we have two hors séries coming up). We are more than 7100 members ! Woohoo !

Tonight will be once again an exciting meetup, with presentations on how kids have learned to build an algorithm for a small autonomous car, on the next trackML challenge and how algorithms can help High Energy Physics, and much, much more...

YouTube streaming should be here at the beginning of the meetup:





Thanks to SwissLife for hosting this meetup and sponsoring the food and drinks afterwards.

SCHEDULE :
6:45 PM doors opening ; 7-9 PM : talks ; 9-10 PM : socializing ; 10PM : end

TALKS :

We are organizing a data science competition to stimulate both the ML and HEP communities to renew the toolkit of physicists in preparation for the advent of the next generation of particle detectors in the Large Hadron Collider at CERN.
With event rates already reaching hundreds of millions of collisions per second, physicists must sift through tens of petabytes of data per year. Ever-better software is needed for processing and filtering the most promising events.
This will allow the LHC to fulfill its rich physics programme: understanding the private life of the Higgs boson, searching for the elusive dark matter, or elucidating the dominance of matter over anti-matter in the observable Universe.

Real data evolve over time, but classical machine learning algorithms do not adapt without retraining. In this talk, we will present methods in adaptive learning, i.e. algorithms that learn in real time on infinite data streams and are constantly up-to-date.

In this work, we design a neural network for recognizing emotions in speech, using the standard IEMOCAP dataset. Following the latest advances in audio analysis, we use an architecture involving both convolutional layers, for extracting high-level features from raw spectrograms, and recurrent ones for aggregating long-term dependencies. Applying techniques of data augmentation, layer-wise learning rate adjustment and batch normalization, we obtain highly competitive results, with 64.5% weighted accuracy and 61.7% unweighted accuracy on four emotions.
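For readers who want to picture that conv-then-recurrent shape, here is a minimal PyTorch sketch; the layer widths, the GRU choice and the 4-class head are illustrative guesses, not the speakers' exact model.

```python
# Illustrative conv + recurrent model over spectrograms, in the spirit
# of the talk's architecture (sizes are my assumptions).
import torch
import torch.nn as nn

class SpeechEmotionNet(nn.Module):
    def __init__(self, n_mels=64, n_classes=4):
        super().__init__()
        # Convolutions extract local time-frequency features from the
        # (batch, 1, n_mels, time) spectrogram.
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # A GRU aggregates long-term dependencies along the time axis.
        self.rnn = nn.GRU(64 * (n_mels // 4), 128, batch_first=True)
        self.head = nn.Linear(128, n_classes)

    def forward(self, spec):                  # (B, 1, n_mels, T)
        z = self.conv(spec)                   # (B, 64, n_mels//4, T//4)
        z = z.permute(0, 3, 1, 2).flatten(2)  # (B, T//4, 64 * n_mels//4)
        _, h = self.rnn(z)                    # h: (1, B, 128)
        return self.head(h[-1])               # (B, n_classes)

logits = SpeechEmotionNet()(torch.randn(8, 1, 64, 128))  # (8, 4)
```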

In the current era of big data, many machine learning applications have come to rely on the abundance of collectively stored user data.
While this has led to startling new achievements in AI, recent events such as the Cambridge Analytica scandal have created an incentive for users to shy away from cloud based intelligence.
In this talk, we explore methods that seek to locally exploit a user's navigation history so as to minimize his reliance on external search engines.
We begin by outlining the challenges of being computationally limited by the user's browser. We then show how these limitations can be overcome by shipping a precomputed semantics engine with the solution at installation time.
By relying on this precomputed intelligence, the local algorithm need only perform lightweight computations to adapt to the user's browsing habits. We then conclude with a short demonstration.

At Magic Makers we have been teaching kids and teenagers how to code since 2014, and each year we ask ourselves this type of question. Previously we took on the challenges of teaching mobile app development, drone programming and 3D game design (with Unity). Coding AI was to be our biggest challenge yet. In April, we gave our first workshop on AI with 7 teenagers. For a week they coded feed-forward neural networks and CNNs to classify images, make an autonomous car for the IronCar challenge and create new Pokemons with GANs. We will present how we approached this challenge, what our first attempt at solving it looks like and what our lovely teens managed to create.



Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Thursday, May 31, 2018

McKernel: A Library for Approximate Kernel Expansions in Log-linear Time - implementation -


Woohoo ! Following up on a previous post, Joachim let me know of the release of an implementation:
Hi Igor,
The library is now up. The name changed to McKernel. Thanks for your interest.
https://github.com/curto2/mckernel
https://arxiv.org/pdf/1702.08159
Cheers,
Curtó
Thanks !

Kernel Methods Next Generation (KMNG) introduces a framework to use kernel approximates in the mini-batch setting with an SGD optimizer, as an alternative to Deep Learning. McKernel is a C++ library for large-scale machine learning with KMNG. It contains a CPU-optimized implementation of the Fastfood algorithm that allows the computation of approximated kernel expansions in log-linear time. The algorithm requires computing the product of Walsh-Hadamard Transform (WHT) matrices. A cache-friendly SIMD Fast Walsh-Hadamard Transform (FWHT) that achieves compelling speed and outperforms current state-of-the-art methods has been developed. McKernel allows one to obtain non-linear classification by combining Fastfood and a linear classifier.

Implementation is here: https://github.com/curto2/mckernel
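For the gist of the algorithm, here is a hedged numpy sketch of a single Fastfood block: the dense Gaussian projection is replaced by the structured product S H G Pi H B, where each H is applied with a fast Walsh-Hadamard transform. It simplifies details of the real library (notably, the full algorithm draws the scaling S from a chi distribution) and is not McKernel's code.

```python
# Sketch of one Fastfood block: approximate a Gaussian projection V*x
# with S H G Pi H B x, computed in O(d log d) via FWHTs instead of a
# dense O(d^2) matrix multiply.
import numpy as np

def fwht(a):
    """Fast Walsh-Hadamard transform; len(a) must be a power of two."""
    a = a.copy()
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            x, y = a[i:i + h].copy(), a[i + h:i + 2 * h].copy()
            a[i:i + h], a[i + h:i + 2 * h] = x + y, x - y
        h *= 2
    return a

def fastfood_features(x, sigma=1.0, rng=None):
    rng = rng or np.random.default_rng(0)
    d = len(x)                        # must be a power of two
    B = rng.choice([-1.0, 1.0], d)    # random signs
    Pi = rng.permutation(d)           # random permutation
    G = rng.normal(0.0, 1.0, d)       # random Gaussian scales
    S = rng.normal(0.0, 1.0, d)       # scaling, simplified here
    v = S * fwht(G * fwht(B * x)[Pi]) / (sigma * np.sqrt(d))
    # Random Fourier features for an approximate RBF kernel expansion.
    return np.concatenate([np.cos(v), np.sin(v)]) / np.sqrt(d)

phi = fastfood_features(np.random.default_rng(1).normal(size=16))
```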





Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Tuesday, May 29, 2018

NEWMA: a new method for scalable model-free online change-point detection - implementation -

What if you could perform random projections fast ? Well, Nicolas, Damien and Iacopo are answering this question in the change point detection case when the streaming data is large. 


We consider the problem of detecting abrupt changes in the distribution of a multi-dimensional time series, with limited computing power and memory. In this paper, we propose a new method for model-free online change-point detection that relies only on fast and light recursive statistics, inspired by the classical Exponentially Weighted Moving Average (EWMA) algorithm. The proposed idea is to compute two EWMA statistics on the stream of data with different forgetting factors, and to compare them. By doing so, we show that we implicitly compare recent samples with older ones, without the need to explicitly store them. Additionally, we leverage Random Features to efficiently use the Maximum Mean Discrepancy as a distance between distributions. We show that our method is orders of magnitude faster than usual non-parametric methods for a given accuracy.

The implementation of NEWMA is on the LightOnAI GitHub.
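For the gist, here is a minimal numpy sketch of the idea (parameter values are illustrative; the linked repo is the reference implementation): maintain two EWMAs of random Fourier features with different forgetting factors and monitor their distance.

```python
# Minimal NEWMA sketch: a fast EWMA tracks recent samples, a slow one
# tracks older samples; their distance in random-feature space estimates
# an MMD between the recent and older distributions.
import numpy as np

class Newma:
    def __init__(self, dim, n_features=256, lam_fast=0.1, lam_slow=0.01,
                 sigma=1.0, seed=0):
        rng = np.random.default_rng(seed)
        # Random Fourier features: psi(x) = cos(W x + b) approximates an
        # RBF kernel embedding.
        self.W = rng.normal(0.0, 1.0 / sigma, (n_features, dim))
        self.b = rng.uniform(0.0, 2.0 * np.pi, n_features)
        self.lam_fast, self.lam_slow = lam_fast, lam_slow
        self.m_fast = np.zeros(n_features)
        self.m_slow = np.zeros(n_features)

    def update(self, x):
        z = np.cos(self.W @ x + self.b)
        self.m_fast = (1 - self.lam_fast) * self.m_fast + self.lam_fast * z
        self.m_slow = (1 - self.lam_slow) * self.m_slow + self.lam_slow * z
        # Flag a change-point when this statistic exceeds a threshold.
        return np.linalg.norm(self.m_fast - self.m_slow)
```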





Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Monday, May 28, 2018

Adversarial Noise Layer: Regularize Neural Network By Adding Noise / Training robust models using Random Projection (implementation)

Using random projection to train models is a thing:




In this paper, we introduce a novel regularization method called Adversarial Noise Layer (ANL), which significantly improves a CNN's generalization ability by adding adversarial noise in the hidden layers. ANL is easy to implement and can be integrated with most CNN-based models. We compare the impact of different types of noise and visually demonstrate that adversarial noise guides CNNs to learn to extract cleaner feature maps, further reducing the risk of over-fitting. We also conclude that a model trained with ANL is more robust to FGSM and IFGSM attacks. Code is available at: this https URL
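A hedged PyTorch sketch of the core idea, as I read it: perturb a hidden activation in the direction that increases the loss, FGSM-style but inside the network. The paper's exact noise schedule and placement differ; the linked code is authoritative.

```python
# Adversarial noise on a hidden activation (illustration only, not the
# paper's reference implementation).
import torch

def adversarial_noise(model_tail, h, target, loss_fn, eps=0.1):
    """model_tail maps the hidden activation h to logits."""
    h = h.detach().requires_grad_(True)
    loss = loss_fn(model_tail(h), target)
    grad, = torch.autograd.grad(loss, h)
    # Step in the loss-increasing direction, FGSM-style.
    return (h + eps * grad.sign()).detach()
```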


Regularization plays an important role in machine learning systems. We propose a novel methodology for model regularization using random projection. We demonstrate the technique on neural networks, since such models usually comprise a very large number of parameters, calling for strong regularizers. It has been shown recently that neural networks are sensitive to two kinds of samples: (i) adversarial samples, which are generated by imperceptible perturbations of previously correctly-classified samples, yet the network will misclassify them; and (ii) fooling samples, which are completely unrecognizable, yet the network will classify them with extremely high confidence. In this paper, we show how robust neural networks can be trained using random projection. We show that while random projection acts as a strong regularizer, boosting model accuracy similarly to other regularizers such as weight decay and dropout, it is far more robust to adversarial noise and fooling samples. We further show that random projection also helps to improve the robustness of traditional classifiers, such as Random Forests and Gradient Boosting Machines.
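As a toy illustration of random projection as a regularizer (my construction, not necessarily the paper's exact placement): insert a fixed, non-trainable Gaussian projection in front of the classifier head, so the trainable layers must work through a dimension-reducing random map.

```python
# A fixed random projection layer in PyTorch; dimensions are illustrative.
import torch
import torch.nn as nn

class RandomProjection(nn.Module):
    def __init__(self, d_in, d_out):
        super().__init__()
        # Registered as a buffer: saved with the model, never updated by
        # the optimizer.
        self.register_buffer("R", torch.randn(d_out, d_in) / d_out ** 0.5)

    def forward(self, x):
        return x @ self.R.t()

model = nn.Sequential(nn.Flatten(), nn.Linear(784, 512), nn.ReLU(),
                      RandomProjection(512, 64), nn.Linear(64, 10))
```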




Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Monday, May 07, 2018

IMPAC IMaging-PsychiAtry Challenge: predicting autism - a data challenge on Autism Spectrum Disorder detection


I usually don't advertise challenges, but this one is worth it. Balazs just sent me this:
Dear All, 
The Paris-Saclay CDS, Institut Pasteur, and IESF are launching the Autism Spectrum Disorder (ASD) classification event on RAMP.studio. ASD is a severe psychiatric disorder that affects 1 in 166 children. There is evidence that ASD is reflected in individuals' brain networks and anatomy. Yet, it remains unclear how systematic these effects are and how large their predictive power is. The large cohort assembled here can bring some answers. Predicting autism from brain imaging will provide biomarkers and shed some light on the mechanisms of the pathology. 
The goal of the challenge is to predict ASD (binary classification) from pre-processed structural and functional MRI on more than 2000 subjects. 
The RAMP will run in competitive mode until July 1st at 20h (UTC) and in collaborative (open code) mode between July 1st and the closing ceremony on July 6-7th. The starting kit repo provides detailed instructions on how to start. You can sign up at the Autism RAMP event.
Prizes
The Paris-Saclay CDS and IESF are sponsoring the competitive phase of the event:
  • 1st prize 3000€
  • 2nd prize 2000€
  • 3rd prize 1000€
  • from 4th to 10th place 500 €

Launching hackathon
For those in the Paris area, we are organizing a launching hackathon at La Paillasse on May 14. Please sign up here if you are interested.
For more information please visit the event web page and join the slack team, #autism channel.
Best regards,
Balazs  

Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.
