Facial recognition technology developed by U.S. firm Clearview AI could be illegal in Europe, according to a European privacy group.
Clearview AI's software allows organizations to match pictures of people's faces against a database of more than 3 billion images scraped from social media platforms and other websites.
In February, BuzzFeed reported that the company had expanded to 26 countries outside the U.S., including Belgium, Denmark, Finland, France, Ireland, Italy, Latvia, Lithuania, Malta, the Netherlands, Norway, Portugal, Slovenia, Spain, Sweden, Switzerland, and the United Kingdom. The report stated that Clearview had "engaged" with national law enforcement agencies, government bodies, and police forces in those countries.
However, the European Data Protection Board warned on Wednesday that "the use of a service such as Clearview AI by law enforcement authorities in the European Union would, as it stands, likely not be consistent with the EU data protection regime."
Hoan Ton-That, Clearview AI's CEO, said: "Clearview's image-search technology is not currently available in the European Union. Nevertheless, Clearview AI processes data-access and data-deletion requests from EU residents. In fact, Clearview AI searches the public internet just like any other search engine."
Over 600 U.S. law enforcement agencies are using Clearview AI's technology, according to The New York Times.
Google, YouTube, Twitter and Facebook have sent cease-and-desist letters to Clearview AI after learning it was scraping images from their platforms. Ton-That responded by saying his company had a First Amendment right to access public information, including images from online platforms.
Facial recognition technology has been under the spotlight this week, with Amazon and IBM both making major announcements on the subject following the death of George Floyd while in the custody of the Minneapolis police.
On Monday, IBM announced it was getting out of the facial recognition business altogether, pledging to no longer offer, create or research facial recognition technology, which attempts to match pictures of people's faces to images stored in a database.
Two days later, Amazon said it intends to stop selling its controversial Rekognition platform to police for a year.
The decisions come after civil rights advocates raised concerns about potential racial and gender bias in facial recognition systems.
IBM's Arvind Krishna, who took over the CEO role in April, called on Congress on Monday to enact reforms to advance racial justice and combat systemic racism as he announced IBM was pulling out of the facial recognition business.
"IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency," Krishna wrote in the letter to Congress.
"We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies."
In a 102-word blog post, Amazon said it wants U.S. lawmakers to "put in place stronger regulations to govern the ethical use of facial recognition technology."
It added: "We hope this one-year moratorium might give Congress enough time to implement appropriate rules, and we stand ready to help if requested."
Amazon has said nothing about cutting off access for military forces, spy agencies and other law enforcement organizations. It declined to comment when pressed on the matter.
The moves from Amazon and IBM are unlikely to have a major impact on their revenues.
Researchers have found concerning error rates in facial recognition products developed by IBM, Microsoft, and Amazon.
In 2018, Microsoft Research scientist Timnit Gebru and MIT computer scientist Joy Buolamwini co-authored a paper showing that IBM's and Microsoft's facial recognition systems performed significantly worse at identifying darker-skinned individuals. Microsoft said it had taken steps to improve the accuracy of its facial recognition technology and was investing in improving the datasets it trains its systems on, while IBM said it was planning to launch a new version of its service.
The following year, Buolamwini and Deborah Raji of the AI Now Institute found that Amazon's Rekognition system struggled to identify the gender of darker-skinned individuals. It would sometimes misidentify Black women as men, but it had no such problems when analyzing images of lighter-skinned people.
Amazon attempted to dismiss the study's findings, and Buolamwini responded publicly in a blog post.
"Despite receiving preliminary reports of gender and racial bias in a June 25, 2018 letter, Amazon's approach thus far has been one of denial, deflection, and delay," she wrote. "We cannot rely on Amazon to police itself or provide unregulated and unproven technology to police or government agencies."
Elsewhere, San Francisco announced a ban on facial recognition technology last May, becoming the first city in the U.S. to do so.
A group of lawmakers in the U.K. want to see the technology banned across the country, while others want to see it regulated.
"AI driven technologies, such as facial recognition technology, often have global datasets that underpin them," said British MP Darren Jones who has called for a temporary ban on facial recognition in the U.K. "That's why global collaboration on the regulation of these technologies is so important."