This is the last regular meetup of season 5 of the Paris Machine Learning meetup (we have two special-edition "hors série" sessions coming up). We are now more than 7,100 members! Woohoo!
Tonight will once again be an exciting meetup, with presentations on how kids learned to build an algorithm for a small autonomous car, on the upcoming TrackML challenge and how machine learning can help High Energy Physics, and much more.
The YouTube stream should appear here at the beginning of the meetup:
Thanks to SwissLife for hosting this meetup and sponsoring the food and drinks afterwards.
SCHEDULE:
6:45 PM: doors open; 7-9 PM: talks; 9-10 PM: socializing; 10 PM: end
TALKS:
- Cécile Germain, TrackML Challenge: Can Machine Learning (ML) assist High Energy Physics (HEP) in discovering and characterizing new particles?
We are organizing a data science competition to stimulate both the ML and HEP communities to renew the toolkit of physicists in preparation for the advent of the next generation of particle detectors in the Large Hadron Collider at CERN.
With event rates already reaching hundreds of millions of collisions per second, physicists must sift through tens of petabytes of data per year. Ever better software is needed for processing and filtering the most promising events.
This will allow the LHC to fulfill its rich physics programme: understanding the private life of the Higgs boson, searching for elusive dark matter, and elucidating the dominance of matter over anti-matter in the observable Universe.
- Grégoire Martinon, Quantmetry, Concept drift and adaptive learning
Real data evolve over time, but classical machine learning algorithms do not, unless retrained. In this talk, we will present methods in adaptive learning, i.e. algorithms that learn in real time on unbounded data streams and so stay constantly up-to-date.
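To illustrate what "constantly up-to-date" means in practice (this is a generic toy sketch, not the speaker's method), here is an online logistic regression that takes one SGD step per arriving sample and thereby tracks an abrupt concept drift halfway through a synthetic stream:

```python
import numpy as np

rng = np.random.default_rng(42)

# Online logistic regression: one SGD step per sample, no batch retraining.
w = np.zeros(2)
lr = 0.5

def true_label(x, flipped):
    # The "concept" is the sign of x[0]; it flips mid-stream (abrupt drift).
    return float((x[0] < 0) if flipped else (x[0] > 0))

errors_before_drift = 0   # mistakes in the window just before the drift
errors_after_adapt = 0    # mistakes at the end, after the model re-adapted
for t in range(2000):
    x = rng.standard_normal(2)
    y = true_label(x, flipped=(t >= 1000))
    p = 1.0 / (1.0 + np.exp(-w @ x))   # predict before seeing the label
    if 900 <= t < 1000 and (p > 0.5) != (y > 0.5):
        errors_before_drift += 1
    if 1900 <= t < 2000 and (p > 0.5) != (y > 0.5):
        errors_after_adapt += 1
    w += lr * (y - p) * x              # incremental update on each sample
```

Both error counts end up low: the model had converged before the drift, briefly misclassifies right after it, and has recovered by the end of the stream because every incoming sample updates the weights.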
- Andrei Petrovskii, Dreamquark.com, CNN+LSTM architecture for speech emotion recognition
In this work, we design a neural network for recognizing emotions in speech, using the standard IEMOCAP dataset. Following the latest advances in audio analysis, we use an architecture involving both convolutional layers, for extracting high-level features from raw spectrograms, and recurrent ones for aggregating long-term dependencies. Applying techniques of data augmentation, layer-wise learning rate adjustment and batch normalization, we obtain highly competitive results, with 64.5% weighted accuracy and 61.7% unweighted accuracy on four emotions.
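As a toy illustration of that conv-then-recurrent pattern (not the speaker's actual model: all shapes, sizes and random weights below are arbitrary stand-ins), a NumPy forward pass from a spectrogram to four emotion probabilities might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d_relu(x, w, b):
    """Valid 1-D convolution over time with ReLU: x (T, F), w (K, F, C) -> (T-K+1, C)."""
    T, _ = x.shape
    K, _, C = w.shape
    out = np.empty((T - K + 1, C))
    for t in range(T - K + 1):
        out[t] = np.tensordot(x[t:t + K], w, axes=([0, 1], [0, 1])) + b
    return np.maximum(out, 0.0)

def rnn_last_state(x, wx, wh, b):
    """Simple tanh RNN aggregating the sequence; returns the final hidden state."""
    h = np.zeros(wh.shape[0])
    for t in range(x.shape[0]):
        h = np.tanh(x[t] @ wx + h @ wh + b)
    return h

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy dims: 50 frames x 40 mel bins -> 8 conv channels -> 16 RNN units -> 4 emotions
T, F, K, C, H, E = 50, 40, 5, 8, 16, 4
spec = rng.standard_normal((T, F))  # stand-in for a log-mel spectrogram
feats = conv1d_relu(spec, 0.1 * rng.standard_normal((K, F, C)), np.zeros(C))
state = rnn_last_state(feats, 0.1 * rng.standard_normal((C, H)),
                       0.1 * rng.standard_normal((H, H)), np.zeros(H))
probs = softmax(state @ (0.1 * rng.standard_normal((H, E))))
```

The convolution extracts local spectro-temporal features, the recurrent pass collapses the whole utterance into one state, and a softmax head scores the four emotion classes; a real model (e.g. with LSTM cells, batch normalization and trained weights) follows the same data flow.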
- Patrick Elhage, Wypl: Reducing user reliance on search engines by fully exploiting internet navigation history
In the current era of big data, many machine learning applications have come to rely on the abundance of collectively stored user data.
While this has led to startling new achievements in AI, recent events such as the Cambridge Analytica scandal have created an incentive for users to shy away from cloud based intelligence.
In this talk, we explore methods that locally exploit a user's navigation history so as to minimize their reliance on external search engines.
We begin by outlining the challenges of being computationally limited by the user's browser. We then show how these limitations can be overcome with a precomputed semantics engine that ships with our solution at installation time.
By relying on this precomputed intelligence, the local algorithm need only perform lightweight computations to adapt to the user's browsing habits. We then conclude with a short demonstration.
- Romain Liblau, Magic Makers, How do teenagers code neural nets? How can we teach teenagers to code AIs?
At Magic Makers we have been teaching kids and teenagers how to code since 2014, and each year we ask ourselves this type of question. Previously we took on the challenges of teaching mobile app development, drone programming and 3D game design (with Unity). Coding AI was to be our biggest challenge yet. In April, we gave our first workshop on AI with 7 teenagers. For a week they coded feed-forward neural networks and CNNs to classify images, built an autonomous car for the IronCar challenge and created new Pokémon with GANs. We will present how we approached this challenge, what our first attempt at solving it looks like and what our lovely teens managed to create.
Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.