It almost comes naturally to many smartphone users today. You can just take out your iPhone — or Android equivalent — and hold it up to your face to unlock the device.
But the technology behind that has become increasingly controversial of late, with business executives and regulators alike calling for oversight. Microsoft CEO Satya Nadella earlier this year said the technology warranted "any regulation that helps the marketplace not be a race to the bottom."
While people are far more open to the idea of registering their portrait with Apple's Face ID, the idea of being spotted by an artificial intelligence-powered camera on the street has proven much more unnerving. This is the difference, tech executives and experts say, between consensual identity verification and non-consensual surveillance.
The use of facial recognition technology in London's King's Cross area was met with much backlash earlier this month, drawing the attention of the U.K. data protection watchdog. It emerged that Argent, a property developer, had deployed the software in the space without people's knowledge. Argent was not immediately available for comment when contacted by CNBC.
Some are calling for a ban on so-called live facial recognition, where surveillance cameras equipped with the technology scan people in public places. One of the biggest problems with face identification systems, independent researcher Stephanie Hare said, is that they involve biometric data — in other words, information about people's bodies. She thinks an outright ban on the technology should be one option on the table.
"It needs to be treated in the same way that your DNA would be," Hare told CNBC. "They're in the same category of powerful data. What you could do with face recognition in terms of identifying someone in real time makes it a surveillance technology."
And it's that issue of surveillance that has become a key concern for regulators. Britain's Information Commissioner Elizabeth Denham said she would launch a probe into how the software was used in London, adding she was "deeply concerned about the growing use of facial recognition technology in public spaces" by both law enforcement and the private sector. The privacy regulator has also been investigating the use of facial recognition by police.
Some police forces in the U.K. have conducted trials of the technology, which is being promoted by the Home Office.
London's Metropolitan Police ended its pilot program, which was aimed at identifying criminals, last month. Researchers from the University of Essex found "significant flaws" with the Met's trial, adding that police deployment of live facial recognition technology "may be held unlawful if challenged before the courts."
South Wales Police, on the other hand, has gone ahead with an app that lets officers run a snapshot of a person through a database of suspects to find potential matches. That's despite a court case against the force brought by the campaign group Liberty.
Privacy campaigners at Big Brother Watch want the British parliament to step in. They think that lawmakers in the country should look to ban the technology from being used for monitoring people, rather than introduce regulation that sees it permitted under certain guidelines. Laws can take years to implement and even then policies would vary across different regions.
"We're not asking parliament to regulate, we're asking parliament to immediately put a stop to it," Silkie Carlo, director of Big Brother Watch, told CNBC. "If anyone thinks it's feasible that live facial recognition for public surveillance is possible in a rights-respecting democracy, they'd have to make a pretty convincing argument."
Some police forces across the country have pushed back against the idea of a ban. "That's not a positive thing," said Jason Tooley, chief revenue officer of biometric software maker Veridium, referring to proposals to outlaw the technology. The worry for some is that legislators will take too heavy-handed an approach.
"In terms of innovative technology, we want the police forces to be able to innovate to deliver better services," Tooley told CNBC. "What we've got to try to avoid here is that innovation is squashed or stopped."
Biometric data is already covered by the European Union's GDPR, or General Data Protection Regulation, a data privacy overhaul that was introduced by the bloc last year. The rules call on companies to obtain explicit consent from consumers on the use of their personal information. In Sweden, a local authority was fined under GDPR for trialing facial recognition on high-school students.
But recently it was reported that the EU is looking to tighten its laws around the use of facial recognition as part of an overhaul of how AI is regulated.
Natasha Bertaud, deputy chief spokesperson for the European Commission, declined to comment on that report last week, but pointed to recommendations from a group of experts advising the EU executive body on its approach to AI. That group had suggested the EU consider the need for new regulation of biometric technologies like emotion tracking and facial recognition.
So where do tech firms like Microsoft and Amazon sit in the regulatory debate swirling around facial recognition? Tech giants make "big claims about being on the side of privacy," but ultimately "ride the wave of where public opinion is," said Mike Beck, global head of threat analysis at cybersecurity firm Darktrace.
Amazon's computer vision platform Rekognition — the one that can now apparently detect fear — has in the past been used by police in the U.S. That hasn't always sat well with the company's own shareholders, who earlier this year piled pressure on the tech giant to stop selling the facial identification software to law enforcement.
But the company has — like Microsoft — said it wants to at least see guidelines established to ensure the technology is used ethically. "New technology should not be banned or condemned because of its potential misuse," Michael Punke, vice president of global public policy for Amazon's cloud business, AWS, said in a blog post earlier this year.
Microsoft has repeatedly called on governments to regulate face recognition, with the firm's president, Brad Smith, having previously said that 2019 should be the year for regulation. Google, meanwhile, has said it will not sell the technology "before working through important technology and policy questions."
Beck said that a ban on live facial recognition was "not the answer," adding regulation would need to address how biometric data is collected and handled by organizations. "Regulation is only part of the answer," he said. "Securing data when it is collected is as important as regulating the applications of the technology in the first place."
Meanwhile, Gus Tomlinson, head of strategy at identity verification firm GBG, said a clear regulatory framework could help consumers understand the benefits of the technology — one of the benefits cited by Amazon is that Rekognition has been used to prevent human trafficking and find missing children. Tomlinson told CNBC that policymakers should ensure live facial recognition is only used for "purposes where there is a real legitimate interest."
One big problem with facial recognition is that it relies on machine-learning algorithms fed vast volumes of data on people's faces in order to discriminate between one person and another. But the resulting systems can be discriminatory in their own right, as demonstrated by MIT researcher Joy Buolamwini, who published a paper showing that such systems are less likely to accurately identify ethnic minorities and women than white men.
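The kind of disparity Buolamwini's study measured can be illustrated with a short sketch. The code below is a hypothetical example, not taken from any real system or from the MIT study: it assumes made-up audit results in which a face-matching system is tested against two labeled demographic groups, and it simply tallies the per-group accuracy to expose the gap.

```python
# Hypothetical sketch: auditing a face-recognition system's accuracy by
# demographic group, in the spirit of per-group error analysis.
# All figures below are fabricated for illustration.
from collections import defaultdict

def accuracy_by_group(results):
    """results: iterable of (group_label, correctly_identified) pairs.
    Returns a dict mapping each group to its identification accuracy."""
    totals = defaultdict(int)
    correct = defaultdict(int)
    for group, ok in results:
        totals[group] += 1
        if ok:
            correct[group] += 1
    return {g: correct[g] / totals[g] for g in totals}

# Fabricated audit: the system misidentifies one group far more often.
audit = [("group_a", True)] * 97 + [("group_a", False)] * 3 \
      + [("group_b", True)] * 65 + [("group_b", False)] * 35

rates = accuracy_by_group(audit)
print(rates)  # group_a: 0.97, group_b: 0.65
```

A headline accuracy figure averaged over the whole audit would hide exactly the disparity the per-group breakdown reveals — which is why critics argue aggregate benchmarks are not enough.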
Combining that technology with law enforcement is problematic, critics say, as it could result in cases of mistaken identity and people being wrongly arrested. Facial recognition "has a track record of misidentifying people of color, women and kids," Hare said. And even as the technology improves, it could become a "perfect tool of oppression," Carlo said, adding: "In extremis, you could live in a society where you have no chance of being anonymous."
The Chinese government uses the technology widely. China has millions of surveillance cameras and almost all of its 1.4 billion citizens are included in a facial recognition database. Those government efforts have been criticized outside China, amid reports from human rights groups and others that the technology is used to track and monitor Uighur Muslims in the west of the country.
In the U.S., meanwhile, the technology is facing increasing pushback from legislators, at least in terms of how it's used by the police. The California State Senate is considering legislation that would ban the use of facial recognition software in police body cameras, while San Francisco's Board of Supervisors has already moved to ban the use of the technology by law enforcement.
Hare said the issue was so severe that it could result in a "landmark" court case. Campaigners are already challenging police use of facial recognition in the U.K., but Hare said there could one day be a "class action lawsuit." She said GDPR — under which firms can be fined up to 4% of their global revenues — would amount to "legalizing mass surveillance" if it doesn't protect people from live facial recognition.