The Impact of Biases in Facial Recognition Artificial Neural Networks - Preliminary Poster at NRHC

Abstract

In a controlled experiment, a convolutional neural network (CNN) was tested on the task of recognizing and classifying the faces of transgender people and non-white people. Preliminary analysis of the results suggests that transgender people need to be better represented in the datasets used to train facial recognition neural networks. The model under test was a pre-trained CNN, which was evaluated on a novel dataset of binary transgender individuals. Consistent with work by prominent authors in the field of AI on the potential dangers of bias in such algorithms, the experiment found that self-identifying binary transgender men were misgendered more often than self-identifying binary transgender women. Further research is needed to help mitigate such biases in future iterations of neural networks.
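As a rough illustration of the kind of per-group comparison described in the abstract, the sketch below computes a misgendering rate for each group from (group, self-identified label, model prediction) records. It is a minimal sketch only: the record format, group names, and sample values are hypothetical stand-ins, and the poster's actual model, dataset, and results are not reproduced here.

```python
# Minimal sketch: per-group misgendering rates from labeled predictions.
# Records, group names, and labels are hypothetical, not the study's data.
from collections import defaultdict

def misgendering_rates(records):
    """records: iterable of (group, self_identified_gender, predicted_gender)."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, identified, predicted in records:
        totals[group] += 1
        if predicted != identified:  # model output disagrees with self-identification
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Hypothetical example records, for illustration only.
sample = [
    ("trans_men", "man", "woman"),
    ("trans_men", "man", "man"),
    ("trans_women", "woman", "woman"),
    ("trans_women", "woman", "woman"),
]
print(misgendering_rates(sample))  # {'trans_men': 0.5, 'trans_women': 0.0}
```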

Date
Mar 30, 2023 — Apr 2, 2023
Event
Northeast Regional Honors Conference
Location
Pittsburgh, PA

For further information on the topic of this project, please refer to the AI Bias project page by clicking the Project button at the top of this page.

Poster Presentation