By using sparse linear regression to produce simple (i.e. sparse) explanations of complex models, this idea opens the door to all kinds of sparse matrix factorizations as a way of explaining complex models. This is fascinating because, as readers of Nuit Blanche know, sudden phase transitions can occur in these computations when the parameters change only slightly: one moment there is a good, robust (simple = sparse) explanation; the next, explanations become heavily dependent on the starting point, with no robustness whatsoever.
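As a minimal illustration of the sparse-regression side of this idea (all data and parameters below are made up for the example), here is ISTA, the iterative soft-thresholding algorithm, minimizing the Lasso objective 0.5·||Ax − b||² + λ·||x||₁ in pure Python. The ℓ₁ penalty drives irrelevant coefficients exactly to zero, which is what makes the resulting "explanation" sparse:

```python
import random

def soft(v, t):
    """Soft-thresholding operator: shrinks v toward zero by t."""
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def ista(A, b, lam, step, iters=2000):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by iterative soft-thresholding."""
    d = len(A[0])
    x = [0.0] * d
    for _ in range(iters):
        # residual r = Ax - b
        r = [sum(Ai[j] * x[j] for j in range(d)) - bi for Ai, bi in zip(A, b)]
        # gradient of the smooth part: A^T r
        g = [sum(A[i][j] * r[i] for i in range(len(A))) for j in range(d)]
        # gradient step on the smooth part, then soft-threshold for the l1 part
        x = [soft(x[j] - step * g[j], step * lam) for j in range(d)]
    return x

# Tiny synthetic problem: only feature 0 truly matters (b = 2 * column 0 + noise).
random.seed(0)
A = [[random.gauss(0, 1) for _ in range(3)] for _ in range(20)]
b = [2.0 * row[0] + random.gauss(0, 0.01) for row in A]
x = ista(A, b, lam=0.5, step=0.01)
print(x)  # expect x[0] near 2 and x[1], x[2] at (or near) zero
```

The fragility mentioned above shows up here too: pushing λ or the conditioning of A past a threshold can flip which support the algorithm recovers.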

Irrespective of this, explaining ML models is becoming an area of general concern, as can be seen from this recent DARPA proposers' day event. Here is the paper:

“Why Should I Trust You?”: Explaining the Predictions of Any Classifier by Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin

Despite widespread adoption, machine learning models remain mostly black boxes. Understanding the reasons behind predictions is, however, quite important in assessing trust, which is fundamental if one plans to take action based on a prediction, or when choosing whether to deploy a new model. Such understanding also provides insights into the model, which can be used to transform an untrustworthy model or prediction into a trustworthy one.

In this work, we propose LIME, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally around the prediction. We also propose a method to explain models by presenting representative individual predictions and their explanations in a non-redundant way, framing the task as a submodular optimization problem. We demonstrate the flexibility of these methods by explaining different models for text (e.g. random forests) and image classification (e.g. neural networks). We show the utility of explanations via novel experiments, both simulated and with human subjects, on various scenarios that require trust: deciding if one should trust a prediction, choosing between models, improving an untrustworthy classifier, and identifying why a classifier should not be trusted.

The GitHub for LIME is here: https://github.com/marcotcr/lime
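To make the recipe concrete, here is a toy, stdlib-only sketch of the core LIME idea on a made-up black-box classifier (the model, kernel width, and sample count are illustrative assumptions, not the paper's exact setup): perturb the instance, weight the perturbed samples by proximity, and fit a locally weighted linear surrogate whose coefficients serve as the explanation:

```python
import math
import random

def black_box(z):
    """Pretend classifier over 4 binary 'word presence' features:
    fires mostly on word 0, slightly on word 2 (an assumed toy model)."""
    score = 3.0 * z[0] + 0.5 * z[2] - 1.0
    return 1.0 / (1.0 + math.exp(-score))  # probability of the positive class

def solve(M, v):
    """Solve M x = v by Gaussian elimination with partial pivoting."""
    n = len(v)
    A = [row[:] + [v[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return x

def explain(instance, predict, n_samples=500, width=0.75):
    """Fit a locally weighted linear surrogate around `instance` (LIME-style)."""
    d = len(instance)
    random.seed(1)
    # 1. Perturb the instance by randomly switching off present "words".
    Z = [[zi if random.random() < 0.5 else 0 for zi in instance]
         for _ in range(n_samples)]
    y = [predict(z) for z in Z]
    # 2. Weight each sample by proximity: exponential kernel on Hamming distance.
    w = [math.exp(-(sum(a != b for a, b in zip(instance, z)) ** 2) / width ** 2)
         for z in Z]
    # 3. Weighted least squares: solve (X^T W X) beta = X^T W y, with X = [1 | Z].
    X = [[1.0] + [float(f) for f in z] for z in Z]
    n = d + 1
    M = [[sum(wi * xi[a] * xi[b] for wi, xi in zip(w, X)) for b in range(n)]
         for a in range(n)]
    v = [sum(wi * xi[a] * yi for wi, xi, yi in zip(w, X, y)) for a in range(n)]
    beta = solve(M, v)
    return beta[1:]  # feature coefficients = the local explanation

instance = [1, 1, 1, 1]  # all four "words" present
coef = explain(instance, black_box)
print(coef)  # the coefficient for word 0 should dominate the explanation
```

Note that the sparsity-inducing feature selection and the submodular pick of representative examples from the paper are omitted here; this only shows the local surrogate fit.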

Experiments are here.

The attendant short explanation in video is here:

**Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there!**

Liked this entry? Subscribe to Nuit Blanche's feed; there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle, and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.
