The same day the 3rd Workshop on Fairness, Accountability, and Transparency in Machine Learning is being held in New York, we get "Automated Inference on Criminality using Face Images" by Xiaolin Wu and Xi Zhang on arXiv. Some people are outraged and want the paper removed from arXiv; others are listing its numerous flaws. I see it as a typical HORSE paper, as defined by Bob Sturm. From the paper:
Subset Sn contains ID photos of 1126 non-criminals that are acquired from Internet using the web spider tool; they are from a wide gamut of professions and social status, including waiters, construction workers, taxi and truck drivers, real estate agents, doctors, lawyers and professors;
It's trying to classify something, but that something certainly has nothing to do with criminality.
As I said, the 3rd Workshop on Fairness, Accountability, and Transparency in Machine Learning is being held in New York. If you want to follow along, use the #FATML hashtag. A livestream of the workshop is available at http://law.nyu.edu/livestreamb, and a recording will be available after the event. The schedule of the workshop:
09:00 Introduction
Solon Barocas
09:15 Opening Panel: Setting the Stage
Rayid Ghani, Sorelle Friedler, Cynthia Rudin, and danah boyd (moderator)
10:00 Spotlight Session
- Equality of Opportunity in Supervised Learning – Eric Price, Nati Srebro, and Moritz Hardt
- Fairness in Learning: Classic and Contextual Bandits – Matthew Joseph, Michael Kearns, Jamie Morgenstern, and Aaron Roth
- To Predict and Serve? – Kristian Lum and William Isaac
- Combatting Police Discrimination in The Age of Big Data – Sharad Goel, Maya Perelman, Ravi Shroff, and David Alan Sklansky
- How the Machine ‘Thinks:’ Understanding Opacity in Machine Learning Algorithms – Jenna Burrell
10:30 Morning Break
11:00 Morning Session
- Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings – Tolga Bolukbasi, Kai-Wei Chang, James Zou, Venkatesh Saligrama, and Adam Kalai
- Semantics Derived Automatically from Language Corpora Necessarily Contain Human Biases – Aylin Caliskan-Islam, Joanna J. Bryson, and Arvind Narayanan
- How to be Fair and Diverse? – L. Elisa Celis, Amit Deshpande, Tarun Kathuria, and Nisheeth Vishnoi
- Exploring or Exploiting? Social and Ethical Implications of Autonomous Experimentation in AI – Sarah Bird, Solon Barocas, Fernando Diaz, Hanna Wallach, and Kate Crawford
- Rawlsian Fairness for Machine Learning – Matthew Joseph, Michael Kearns, Jamie Morgenstern, Seth Neel, and Aaron Roth
13:00 Enforcing Human Rights Laws in an Era of Algorithms
Carmelyn Malalis
13:10 Lunch
With a poster session from 13:30, including:
- Iterative Orthogonal Feature Projection for Diagnosing Bias in Black-Box Models – Julius Adebayo and Lalana Kagal
- Price of Transparency in Strategic Machine Learning – Emrah Akyol, Cedric Langbort, and Tamer Basar
- Fair Prediction with Disparate Impact: A Study of Bias in Recidivism Prediction Instruments – Alexandra Chouldechova
- The Case for Temporal Transparency: Detecting Policy Change Events in Black-Box Decision Making Systems – Miguel Ferreira, Muhammad Bilal Zafar, and Krishna Gummadi
- Fair Learning in Markovian Environments – Shahin Jabbari, Matthew Joseph, Michael Kearns, Jamie Morgenstern, and Aaron Roth
- Inherent Trade-Offs in the Fair Determination of Risk Scores – Jon Kleinberg, Sendhil Mullainathan, and Manish Raghavan
- A Statistical Framework for Fair Predictive Algorithms – Kristian Lum and James Johndrow
- Measuring Fairness in Ranked Outputs – Ke Yang and Julia Stoyanovich
- Fairness Beyond Disparate Treatment and Disparate Impact: Learning Classification without Disparate Mistreatment – Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rodriguez, and Krishna Gummadi
14:40 Afternoon Session
- Interpretable Classification Models for Recidivism Prediction – Jiaming Zeng, Berk Ustun, and Cynthia Rudin
- Algorithmic Transparency via Quantitative Input Influence: Theory and Experiments with Learning Systems – Anupam Datta, Shayak Sen, and Yair Zick
- ‘Why Should I Trust You?’ Explaining the Predictions of Any Classifier – Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin
- Fairness as a Program Property – Aws Albarghouthi, Loris D'Antoni, Samuel Drews, and Aditya Nori
16:30 Afternoon Break
17:00 Closing panel: Building a community and setting a research agenda
Bettina Berendt, Kate Crawford, Jon Kleinberg, Hanna Wallach, and Suresh Venkatasubramanian (moderator)
Liked this entry? Subscribe to Nuit Blanche's feed; there's more where that came from. You can also subscribe to Nuit Blanche by email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle, and join the conversations on compressive sensing, advanced matrix factorization, and calibration issues on LinkedIn.