Fairness & Accountability




Computer vision has ceased to be a purely academic endeavor. From law enforcement [1], to border control [2], to employment [3], healthcare diagnostics [4], and the assignment of trust scores [5], computer vision systems are now deployed across many aspects of society. The past year has also seen a rise in public discourse regarding the use of computer vision technology by companies such as Google [6], Microsoft [7], Amazon, and IBM [8]. In research, works such as [9] purport to determine a person's sexuality from their social network profile images, and [10] claims to identify "violent individuals" in drone footage. These works were published in high-impact journals, and some were presented at workshops at top-tier computer vision conferences such as CVPR [11].

On the other hand, seminal work such as [12], published last year, showed that commercial gender classification systems exhibit large disparities in error rates by skin type and gender; other recent research has exposed the gender bias contained in current image captioning systems, and has both exposed biases in the widely used CelebA dataset and proposed adversarial learning methods to mitigate their effects. Policy makers and other legislators have cited some of these seminal works [12] in their calls to investigate the unregulated use of computer vision systems.

We believe the vision community is well positioned to foster serious conversations about the ethical considerations of current use cases of computer vision technology. We are therefore holding a workshop on the Fairness, Accountability, Transparency, and Ethics (FATE) of modern computer vision to provide a space to analyze controversial research papers that have garnered significant attention. Our workshop also seeks to highlight research on uncovering and mitigating the unfair bias and historical discrimination that trained machine learning models learn to mimic and propagate.


Timnit Gebru, Research Scientist, Google
Emily Denton, Research Scientist, Google

Livestream Schedule

2:30pm PDT Part 1: FATE in Computer Vision Overview
3:30pm PDT Part 2: Data Ethics
4:30pm PDT Part 3: Considerations of Risks and Harms
5:00pm PDT Live Q&A with the speakers

Video archive

Pre-recorded videos of the three tutorial parts are also available below.

Teaser image: colorful newspaper picture of protestors, accompanying coverage of Baltimore police use of face recognition to identify protestors.

Part 1: FATE in Computer Vision Overview
Keywords: fairness, transparency, ethics, accountability
Fri, Jun 19, 1:30 PM - 2:30 PM

Part 2: Data Ethics, Timnit Gebru and Emily Denton, Google
Keywords: data ethics, fairness, transparency, power, consent
Fri, Jun 19, 2:30 PM - 3:30 PM

Part 3: Considerations of Risks and Harms
Fri, Jun 19, 3:30 PM - 5:00 PM

We have also made the videos available at the following links, which are accessible in China:
Part 1: FATE in Computer Vision Overview
Part 2: Data Ethics
Part 3: Considerations of Risks and Harms


[1] Clare Garvie, Alvaro Bedoya, and Jonathan Frankle. The Perpetual Line-Up: Unregulated Police Face Recognition in America. Georgetown Law, Center on Privacy & Technology, 2016.
[2] Steven Levy, Inside Palmer Luckey's Bid to Build a Border Wall, Wired, June 2018, https://www.wired.com/story/palmer-luckey-anduril-border-wall/
[3] HireVue, https://www.hirevue.com/
[4] Gulshan, Varun, et al. "Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs." JAMA 316.22 (2016): 2402-2410.
[5] Paul Mozur, Inside China's Dystopian Dreams: A.I., Shame and Lots of Cameras, New York Times, July 2018, https://www.nytimes.com/2018/07/08/business/china-surveillance-technology.html
[6] Daisuke Wakabayashi and Scott Shane, Google Will Not Renew Pentagon Contract That Upset Employees, New York Times, June 2018, https://www.nytimes.com/2018/06/01/technology/google-pentagon-project-maven.html
[7] Sheera Frenkel, Microsoft Employees Question C.E.O. Over Company's Contract With ICE, New York Times, July 2018, https://www.nytimes.com/2018/07/26/technology/microsoft-ice-immigration.html
[8] George Joseph and Kenneth Lipp, IBM Used NYPD Surveillance Footage to Develop Technology That Lets Police Search by Skin Color, The Intercept, September 2018, https://theintercept.com/2018/09/06/nypd-surveillance-camera-skin-tone-search/
[9] Wang, Yilun, and Michal Kosinski. “Deep neural networks are more accurate than humans at detecting sexual orientation from facial images.” (2017).
[10] Singh, Amarjot, Devendra Patil, and S. N. Omkar. “Eye in the Sky: Real-time Drone Surveillance System (DSS) for Violent Individuals Identification using ScatterNet Hybrid Deep Learning Network.” arXiv preprint arXiv:1806.00746 (2018).
[11] CVPR Workshop on Efficient Deep Learning for Computer Vision, CVPR 2018, http://openaccess.thecvf.com/CVPR2018_workshops/CVPR2018_W33.py
[12] Buolamwini, Joy, and Timnit Gebru. "Gender shades: Intersectional accuracy disparities in commercial gender classification." Conference on Fairness, Accountability and Transparency, 2018.