Thursday, May 31, 2018

McKernel: A Library for Approximate Kernel Expansions in Log-linear Time - implementation -


Woohoo! Following up on a previous post, Joachim lets me know of the release of an implementation:
Hi Igor,
The library is now up. The name changed to McKernel. Thanks for your interest.
https://github.com/curto2/mckernel
https://arxiv.org/pdf/1702.08159
Cheers,
Curtó
Thanks!

Kernel Methods Next Generation (KMNG) introduces a framework for using kernel approximations in the mini-batch setting with an SGD optimizer as an alternative to deep learning. McKernel is a C++ library for large-scale machine learning within KMNG. It contains a CPU-optimized implementation of the Fastfood algorithm, which allows the computation of approximate kernel expansions in log-linear time. The algorithm requires computing the product of Walsh-Hadamard Transform (WHT) matrices. A cache-friendly SIMD Fast Walsh-Hadamard Transform (FWHT) has been developed that achieves compelling speed and outperforms current state-of-the-art methods. McKernel obtains non-linear classification by combining Fastfood with a linear classifier.

Implementation is here: https://github.com/curto2/mckernel
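
To see what Fastfood computes, here is a minimal NumPy sketch of the feature map (my own toy version, not McKernel's C++ API; the variable names and the simplified chi-based scaling are assumptions). Sign flips, a Walsh-Hadamard transform, a permutation, Gaussian scaling, and a second transform emulate a dense Gaussian projection in O(d log d), and random Fourier features then approximate the RBF kernel:

```python
import numpy as np

def fwht(a):
    """Iterative in-place Fast Walsh-Hadamard Transform (unnormalized).
    len(a) must be a power of two."""
    n, h = len(a), 1
    while h < n:
        for i in range(0, n, 2 * h):
            u, w = a[i:i + h].copy(), a[i + h:i + 2 * h].copy()
            a[i:i + h], a[i + h:i + 2 * h] = u + w, u - w
        h *= 2
    return a

def fastfood_features(x, sigma=1.0, seed=0):
    """Approximate RBF-kernel random features for one sample x (dim = power of two)."""
    d = x.shape[0]
    rng = np.random.default_rng(seed)
    B = rng.choice([-1.0, 1.0], size=d)  # random sign flips
    P = rng.permutation(d)               # random permutation
    G = rng.standard_normal(d)           # Gaussian scaling
    S = np.sqrt(rng.chisquare(d, size=d)) / np.linalg.norm(G)  # row-norm correction
    v = fwht(B * x)                      # H B x
    v = fwht(G * v[P])                   # H G Pi H B x
    v = S * v / (sigma * np.sqrt(d))     # V x = S H G Pi H B x / (sigma sqrt(d))
    # random Fourier features: phi(x) . phi(y) approximates the RBF kernel
    return np.concatenate([np.cos(v), np.sin(v)]) / np.sqrt(d)

# example: 2 * 256 features for a 256-dimensional input
phi = fastfood_features(np.random.default_rng(2).standard_normal(256))
```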






Tuesday, May 29, 2018

NEWMA: a new method for scalable model-free online change-point detection - implementation -

What if you could perform random projections fast? Well, Nicolas, Damien, and Iacopo answer this question for change-point detection when the streaming data is large.


We consider the problem of detecting abrupt changes in the distribution of a multi-dimensional time series, with limited computing power and memory. In this paper, we propose a new method for model-free online change-point detection that relies only on fast and light recursive statistics, inspired by the classical Exponentially Weighted Moving Average (EWMA) algorithm. The proposed idea is to compute two EWMA statistics on the stream of data with different forgetting factors, and to compare them. By doing so, we show that we implicitly compare recent samples with older ones, without the need to explicitly store them. Additionally, we leverage Random Features to efficiently use the Maximum Mean Discrepancy as a distance between distributions. We show that our method is orders of magnitude faster than usual non-parametric methods for a given accuracy.

The implementation of NEWMA is on the LightOnAI GitHub.
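
For intuition, here is a minimal NumPy sketch of the NEWMA idea (an illustrative toy version, not the LightOnAI code; the feature map, forgetting factors, and threshold are assumptions): maintain two EWMAs of random Fourier features with different forgetting factors and flag a change when their distance, a plug-in MMD estimate, spikes:

```python
import numpy as np

def newma(stream, d, lambda_fast=0.05, lambda_slow=0.01,
          m=128, sigma=1.0, threshold=0.1, seed=0):
    """Yield (t, statistic, flag) for each sample of a d-dimensional stream."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((m, d)) / sigma  # random Fourier frequencies

    def phi(x):  # random feature embedding of one sample
        z = W @ x
        return np.concatenate([np.cos(z), np.sin(z)]) / np.sqrt(m)

    z_fast = np.zeros(2 * m)  # short-memory EWMA (recent samples)
    z_slow = np.zeros(2 * m)  # long-memory EWMA (older samples)
    for t, x in enumerate(stream):
        f = phi(x)
        z_fast = (1 - lambda_fast) * z_fast + lambda_fast * f
        z_slow = (1 - lambda_slow) * z_slow + lambda_slow * f
        stat = np.linalg.norm(z_fast - z_slow)  # plug-in MMD between windows
        yield t, stat, stat > threshold

# toy stream with a mean shift halfway through
rng = np.random.default_rng(1)
data = np.vstack([rng.standard_normal((500, 10)),
                  rng.standard_normal((500, 10)) + 2.0])
alarms = [t for t, stat, flag in newma(data, d=10) if flag]
```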






Monday, May 28, 2018

Adversarial Noise Layer: Regularize Neural Network By Adding Noise / Training robust models using Random Projection (implementation)

Using random projection to train models is a thing:




In this paper, we introduce a novel regularization method called Adversarial Noise Layer (ANL), which significantly improves the CNN's generalization ability by adding adversarial noise in the hidden layers. ANL is easy to implement and can be integrated with most CNN-based models. We compare the impact of different types of noise and visually demonstrate that adversarial noise guides CNNs to learn to extract cleaner feature maps, further reducing the risk of over-fitting. We also conclude that models trained with ANL are more robust to FGSM and IFGSM attacks. Code is available at: this https URL
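
As a rough illustration of the ANL mechanism, here is a hypothetical PyTorch sketch (not the authors' released code; the layer name, eps value, and gradient-caching scheme are assumptions): cache the hidden activation's gradient on the backward pass and inject sign-of-gradient noise on the next forward pass:

```python
import torch
import torch.nn as nn

class AdversarialNoiseLayer(nn.Module):
    """Hypothetical sketch: perturb hidden activations along the sign of the
    gradient cached from the previous backward pass (FGSM-style noise)."""

    def __init__(self, eps=0.1):
        super().__init__()
        self.eps = eps
        self._grad = None  # gradient of the loss w.r.t. this activation

    def _save_grad(self, grad):
        self._grad = grad.detach()

    def forward(self, x):
        if self.training:
            if self._grad is not None and self._grad.shape == x.shape:
                x = x + self.eps * self._grad.sign()  # inject adversarial noise
            if x.requires_grad:
                x.register_hook(self._save_grad)  # cache grad for next step
        return x

# usage: drop between layers of an existing model, e.g.
# net = nn.Sequential(nn.Linear(784, 256), nn.ReLU(),
#                     AdversarialNoiseLayer(eps=0.05), nn.Linear(256, 10))
```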


Regularization plays an important role in machine learning systems. We propose a novel methodology for model regularization using random projection. We demonstrate the technique on neural networks, since such models usually comprise a very large number of parameters, calling for strong regularizers. It has been shown recently that neural networks are sensitive to two kinds of samples: (i) adversarial samples, which are generated by imperceptible perturbations of previously correctly-classified samples, yet the network will misclassify them; and (ii) fooling samples, which are completely unrecognizable, yet the network will classify them with extremely high confidence. In this paper, we show how robust neural networks can be trained using random projection. We show that while random projection acts as a strong regularizer, boosting model accuracy similarly to other regularizers such as weight decay and dropout, it is far more robust to adversarial noise and fooling samples. We further show that random projection also helps to improve the robustness of traditional classifiers, such as Random Forests and Gradient Boosting Machines.
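
Here is a minimal sketch of the general recipe from the second paper (illustrative only, not the authors' exact procedure; the toy data, projection width k, and choice of classifier are assumptions): fix a Gaussian random projection, train the classifier on projected inputs, and apply the same projection at test time:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# toy data: two Gaussian classes in d dimensions
d, n, k = 100, 2000, 32
X = np.vstack([rng.standard_normal((n // 2, d)),
               rng.standard_normal((n // 2, d)) + 0.5])
y = np.repeat([0, 1], n // 2)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# fixed Gaussian random projection (Johnson-Lindenstrauss style)
R = rng.standard_normal((k, d)) / np.sqrt(k)

# train on projected inputs; reuse the same R at test time
clf = GradientBoostingClassifier().fit(X_tr @ R.T, y_tr)
print("accuracy on projected test data:", clf.score(X_te @ R.T, y_te))
```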





Monday, May 07, 2018

IMPAC IMaging-PsychiAtry Challenge: predicting autism - A data challenge on Autism Spectrum Disorder detection


I usually don't advertise challenges, but this one is worth it. Balazs just sent me this:
Dear All, 
The Paris-Saclay CDS, Institut Pasteur, and IESF are launching the Autism Spectrum Disorder (ASD) classification event on RAMP.studio. ASD is a severe psychiatric disorder that affects 1 in 166 children. There is evidence that ASD is reflected in individuals' brain networks and anatomy. Yet, it remains unclear how systematic these effects are and how large their predictive power is. The large cohort assembled here can bring some answers. Predicting autism from brain imaging will provide biomarkers and shed some light on the mechanisms of the pathology. 
The goal of the challenge is to predict ASD (binary classification) from pre-processed structural and functional MRI on more than 2000 subjects. 
The RAMP will run in competitive mode until July 1st at 20h (UTC) and in collaborative (open code) mode between July 1st and the closing ceremony on July 6-7th. The starting kit repo provides detailed instructions on how to start. You can sign up at the Autism RAMP event.
Prizes
The Paris-Saclay CDS and IESF are sponsoring the competitive phase of the event:
  • 1st prize 3000€
  • 2nd prize 2000€
  • 3rd prize 1000€
  • from 4th to 10th place 500€

Launching hackathon
For those in the Paris area, we are organizing a launching hackathon at La Paillasse on May 14. Please sign up here if you are interested.
For more information please visit the event web page and join the slack team, #autism channel.
Best regards,
Balazs  

