Wednesday, March 13, 2019

Ce soir, Paris Machine Learning #5 season 6: Explainable AI, Unity Challenge, Ethical AI



Tonight, we will be hosted and sponsored by CFM Capital. Thank you to them. 

The schedule is as follows:
6:45 Doors open
7PM - 9PM Talks
9PM - 10PM Cocktail - Networking

As usual, there is NO waiting list and no reserved seats: first come, first served (the room has 110 seats).

This meetup will be streamed; see below:



The presentations:

Introduction to CFM Capital, Eric Lebigot

Vincent-Pierre Berges, The Obstacle Tower: A Generalization Challenge in Vision, Control, and Planning, https://unity3d.com

The rapid pace of research in Deep Reinforcement Learning has been driven by the presence of fast and challenging simulation environments. These environments often take the form of games, with tasks ranging from simple board games, to classic home console games, to modern strategy games. We propose a new benchmark called Obstacle Tower: a high visual fidelity, 3D, third-person, procedurally generated game environment. An agent in the Obstacle Tower must learn to solve both low-level control and high-level planning problems in tandem while learning from pixels and a sparse reward signal. Unlike other similar benchmarks such as the ALE, evaluation of agent performance in Obstacle Tower is based on an agent's ability to perform well on unseen instances of the environment.

$100K AI Contest
Obstacle Tower Challenge
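
The evaluation protocol above boils down to training on some procedurally generated instances and reporting performance on held-out ones. A minimal sketch of that idea, assuming a Gym-style environment; `make_env` and `agent` are placeholders for illustration, not the benchmark's actual API:

def evaluate(agent, make_env, seeds):
    """Average episode return of `agent` over environments built from `seeds`."""
    returns = []
    for seed in seeds:
        env = make_env(seed)                     # one procedurally generated tower
        obs, done, total = env.reset(), False, 0.0
        while not done:
            action = agent.act(obs)              # the agent learns from pixels only
            obs, reward, done, info = env.step(action)
            total += reward                      # sparse reward signal
        returns.append(total)
        env.close()
    return sum(returns) / len(returns)

train_seeds = range(100)            # instances seen during training
test_seeds = range(100, 105)        # unseen instances used for evaluation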

====


Machine learning interpretability is becoming an integral part of the data scientist workflow and can no longer be an afterthought. This talk will explore the vibrant area of machine learning interpretability and explain how to understand black-box models thanks to an interpretability technique based on coalitional game theory: SHAP.
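
For readers who want to try this before the talk, a minimal sketch of the SHAP workflow using the open-source shap package; the model and dataset here are illustrative, not necessarily those used in the talk:

import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)   # one additive contribution per feature per sample

# Global picture: which features push predictions up or down, and by how much.
shap.summary_plot(shap_values, X)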

====


When it comes to actually leveraging AI in production, and especially in environments where it interacts with humans, auditability and trust are not optional. That is why Explainable AI is becoming a new R&D space. This talk will show why and where explainability in AI is needed, what it actually means, and compare some of the techniques that fall into this category (see the sketch below for one example).
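
As one hedged example of the kind of technique in this category, here is a sketch of permutation importance with scikit-learn (the talk may compare this with other methods such as LIME or SHAP; the dataset and model below are purely illustrative):

from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

# Shuffle each feature on held-out data and measure how much the score degrades:
# a large drop means the model relies on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature {i}: {result.importances_mean[i]:.3f} +/- {result.importances_std[i]:.3f}")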

Impact AI is a think-and-do tank that aims to address the ethical and societal challenges of AI. We are developing an ethical framework for the responsible use of Artificial Intelligence, based on principles that are easy to understand and apply at large scale. This talk is about the governance part of this toolbox.

