For this Paris Machine Learning Meetup #7 (Season 2), our host will once again be Ecole 42. Please register here. For the time being, we have the following speakers, who will talk to us about algorithmic fairness, the Automatic Statistician, and machine learning in companies. Two talks will be in English, while the third will be in French. The streaming starts at 18h55 Paris time with the two English presentations first. The presentations will be added to this blog entry as we get closer to the meetup. Stay tuned.
- Machine Learning and the Enterprise ("Machine Learning et Entreprise", in French), François-Xavier Rousselot (video at 1h16m)
- The Automatic Statistician, Zoubin Ghahramani, Cambridge University, site: The Automatic Statistician (video at 30m50s)
- Certifying and removing Disparate Impact, Suresh Venkatasubramanian, University of Utah, and Sorelle Friedler, Haverford College, site: Computational Fairness (video at 5m28s)
Title: Certifying and removing Disparate Impact, Michael Feldman, Sorelle Friedler, John Moeller, Carlos Scheidegger, Suresh Venkatasubramanian (http://arxiv.org/abs/1412.3756)
Abstract:
What does it mean for an algorithm to be biased?
In U.S. law, unintentional bias is encoded via disparate impact, which occurs when a selection process has widely different outcomes for different groups, even as it appears to be neutral. When the process is implemented using computers, determining disparate impact (and hence bias) is harder. It might not be possible to disclose the process. In addition, even if the process is open, it might be hard to elucidate in a legal setting how the algorithm makes its decisions. Instead of requiring access to the algorithm, we propose making inferences based on the data the algorithm uses.
We link the legal notion of disparate impact to a measure of classification accuracy that, while known, has received relatively little attention. We propose a test for disparate impact based on analyzing the information leakage of the protected class from the other data attributes, and then describe methods by which data might be made unbiased. Interestingly, our approach resembles some actual selection practices that have recently received legal scrutiny.
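For readers who want a concrete feel for the two ingredients in the abstract, here is a minimal sketch, not taken from the paper itself: the first part computes the "80% rule" ratio commonly used to operationalize disparate impact in U.S. employment law, and the second trains an off-the-shelf classifier to predict the protected attribute from the remaining attributes and reports its balanced error rate (BER), a leakage measure of the kind the test builds on. The synthetic data, the biased selection rule, and the choice of logistic regression are all illustrative assumptions.

```python
# Hedged sketch: synthetic data and logistic regression are assumptions
# for illustration, not the authors' exact experimental setup.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n = 1000
protected = rng.integers(0, 2, n)  # hypothetical binary protected attribute

# Two non-protected attributes: one correlated with the protected class,
# one pure noise.
X = np.column_stack([protected + rng.normal(0.0, 0.5, n),
                     rng.normal(0.0, 1.0, n)])

# A seemingly neutral selection rule that only looks at the first attribute.
selected = (X[:, 0] > 0.8).astype(int)

# 1) The "80% rule": ratio of positive-outcome rates between the two groups.
#    A ratio below 0.8 is taken as evidence of disparate impact.
di_ratio = selected[protected == 0].mean() / selected[protected == 1].mean()
print(f"disparate impact ratio: {di_ratio:.2f}")

# 2) Leakage test: how well can the protected class be predicted from the
#    other attributes?  The balanced error rate (BER) is the mean of the
#    per-class error rates; a BER near 0.5 means no usable leakage.
pred = cross_val_predict(LogisticRegression(), X, protected, cv=5)
ber = 0.5 * ((pred[protected == 0] != 0).mean()
             + (pred[protected == 1] != 1).mean())
print(f"balanced error rate: {ber:.2f}")
```

Roughly, certification runs in the reverse direction of part 2: if no classifier can predict the protected class from the remaining attributes much better than the BER baseline, the data itself cannot support a decision procedure with disparate impact.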