In a controlled experiment, a convolutional neural network (CNN) was tested on the task of recognizing and classifying the faces of transgender people and non-white people. Preliminary data analysis suggests that transgender people need to be better represented in the datasets used to train facial recognition neural networks. The CNN used in this experiment was a pre-trained model, tested on three different datasets in order to measure potential biases: a novel dataset of self-reported binary transgender individuals, a balanced dataset, and an unbalanced dataset. Consistent with prior work by prominent authors in the field of AI on the potential dangers of bias in such algorithms, it was found that self-identified binary transgender men were misgendered more often than self-identified binary transgender women. Further research is needed to mitigate such biases in future iterations of neural networks.
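As a rough illustration of how per-group misgendering rates like these might be computed when evaluating a pre-trained classifier, consider the sketch below. The record format, group labels, and the `misgendering_rates` helper are hypothetical; the study's actual evaluation code and data schema are not shown in this summary.

```python
from collections import defaultdict

def misgendering_rates(records):
    """Compute the fraction of misgendered predictions per self-reported group.

    `records` is an iterable of (group, self_reported_gender, predicted_gender)
    tuples, e.g. ("trans man", "male", "female"). This schema is an assumption
    for illustration, not the study's actual data format.
    """
    errors = defaultdict(int)   # misgendered predictions per group
    totals = defaultdict(int)   # total predictions per group
    for group, reported, predicted in records:
        totals[group] += 1
        if predicted != reported:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Toy usage with made-up placeholder records (not real study data):
sample = [
    ("trans man", "male", "female"),
    ("trans man", "male", "male"),
    ("trans woman", "female", "female"),
]
print(misgendering_rates(sample))
# {'trans man': 0.5, 'trans woman': 0.0}
```

Comparing these rates across the three datasets would surface the kind of disparity reported above, where one subgroup is misgendered more often than another.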
For further information on this project, please refer to the AI Bias project page.