LONDON – Twitter says it's investigating why its picture-cropping algorithm sometimes prefers White faces to Black ones.
The investigation comes after Twitter users noticed that when an image contained both a Black face and a White face, the image preview on mobile was less likely to show the Black one.
The micro-blogging platform said it found no evidence of racial or gender bias when it tested the algorithm, but conceded it had more analysis to do.
Parag Agrawal, Twitter's chief technology officer, said the company analyzed the model before shipping it but acknowledged that it needs continuous improvement.
"Love this public, open, and rigorous test — and eager to learn from this," he said on the platform.
The issue came to light after Colin Madland, a university manager in Vancouver, noticed that his Black colleague's head kept disappearing during calls on the video-conferencing app Zoom. The software appeared to treat the Black man's head as part of the background and removed it as a result. Zoom did not immediately respond to CNBC's request for comment.
After tweeting about the issue to see if anyone knew what was going on, Madland then realized that Twitter was also guilty of hiding Black faces. Specifically, he noticed Twitter was choosing to preview his own White face over his colleague's Black face on mobile.
Dantley Davis, Twitter's chief design officer, found that editing out Madland's facial hair and glasses changed which face the algorithm chose to preview.
Twitter has drawn a fair amount of criticism over the issue, but Davis said the problem would be fixed.
"I know you think it's fun to dunk on me — but I'm as irritated about this as everyone else. However, I'm in a position to fix it and I will," Davis said.
He added: "It's 100% our fault. No-one should say otherwise."
Following the discovery, Twitter users carried out several other experiments. One suggested that the face of Mitch McConnell, the White U.S. Senate majority leader, was preferred to that of former U.S. President Barack Obama.
Another suggested that a stock photo of a White man in a suit was preferred to one in which the man was Black.
Artificial intelligence has a track record of picking up on biases ingrained in society, and researchers have found concerning error rates in facial-recognition products developed by IBM, Microsoft, and Amazon.
In 2018, Microsoft Research scientist Timnit Gebru and MIT computer scientist Joy Buolamwini co-authored a paper showing IBM and Microsoft's facial recognition systems were significantly worse when it came to identifying darker-skinned individuals.
Microsoft said it had taken steps to improve the accuracy of its facial-recognition technology, and was investing in improving the datasets that it trains systems on, while IBM said it was planning to launch a new version of its service.
The following year, Buolamwini and Deborah Raji co-authored a paper that found Amazon's Rekognition system struggled to identify the gender of darker-skinned individuals, sometimes classifying Black women as men, while it had no such problems with images of lighter-skinned people.
IBM said in June that it would stop selling general-purpose facial-recognition software, citing concerns over its use for racial profiling and mass surveillance.
Sarah Myers West, a postdoctoral researcher at New York University's AI Now Institute, told CNBC: "Algorithmic discrimination is a reflection of larger patterns of social inequality … it's about much more than just bias on the part of engineers or even bias in datasets, and will require more than a shallow understanding or set of fixes."
— CNBC intern Michelle Gao contributed to this article.