More than ever, Facebook is under pressure to prove that its algorithms are being deployed responsibly.
On Wednesday, at its F8 developer conference, the company revealed that it has formed a special team and developed discrete software to ensure that its artificial intelligence systems make decisions as ethically as possible, without biases.
Facebook, like other big tech companies with products used by large and diverse groups of people, is more deeply incorporating AI into its services. Facebook said this week it will start offering to translate messages that people receive via the Messenger app. Translation systems must first be trained on data, and the ethics push could help ensure that Facebook's systems are taught to give fair translations.
"We'll look back and we'll say, 'It's fantastic that we were able to be proactive in getting ahead of these questions and understand before we launch things what is fairness for any given product for a demographic group,'" Isabel Kloumann, a research scientist at Facebook, told CNBC. She declined to say how many people are on the team.
Facebook said these efforts are not the result of any changes that have taken place in the seven weeks since it was revealed that data analytics firm Cambridge Analytica misused personal data of the social network's users ahead of the 2016 election. But it's clear that public sentiment toward Facebook has turned dramatically negative of late, so much so that CEO Mark Zuckerberg had to sit through hours of congressional questioning last month.
Every announcement is now under a microscope.
Facebook stopped short of forming a board focused on AI ethics, as Axon (formerly Taser) did last week. But the moves align with a broader industry recognition that AI researchers have to work to make their systems inclusive. Alphabet's DeepMind AI group formed an ethics and society team last year. Before that, Microsoft's research organization established a Fairness, Accountability, Transparency and Ethics group.
The field of AI has had its share of embarrassments, like when Google Photos was spotted three years ago categorizing black people as gorillas.
Last year, Kloumann's team developed a piece of software called Fairness Flow, which has since been integrated into Facebook's widely used FBLearner Flow internal software for more easily training and running AI systems. The software analyzes the data, taking its format into consideration, and then produces a report summarizing it.
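Facebook has not published Fairness Flow's internals, so the following is only an illustrative sketch of the general idea of such a report: breaking a model's predictions down by demographic group and summarizing per-group metrics so that gaps between groups become visible. The function name, record format, and metrics are hypothetical, not Facebook's actual implementation.

```python
from collections import defaultdict

def fairness_report(records):
    """Summarize a classifier's behavior per demographic group.

    Each record is a (group, predicted_label, true_label) tuple with
    binary labels. Returns, for every group, its accuracy and the rate
    at which the model predicts the positive class. Hypothetical sketch;
    not Facebook's Fairness Flow.
    """
    stats = defaultdict(lambda: {"n": 0, "correct": 0, "positive": 0})
    for group, predicted, actual in records:
        s = stats[group]
        s["n"] += 1
        s["correct"] += int(predicted == actual)
        s["positive"] += int(predicted == 1)
    return {
        group: {
            "accuracy": s["correct"] / s["n"],
            "positive_rate": s["positive"] / s["n"],
        }
        for group, s in stats.items()
    }

# Toy example: the model is equally accurate on both groups but
# predicts the positive class twice as often for group_a.
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 0),
    ("group_b", 1, 1), ("group_b", 0, 1), ("group_b", 0, 0),
]
report = fairness_report(records)
print(report)
```

A large gap in a metric like `positive_rate` between groups is one common signal that a model may be treating groups differently and warrants a closer look.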
Kloumann said she's been working on the subject of fairness since joining Facebook in 2016, just when people in the tech industry began talking about it more openly and as societal concerns emerged about the power of AI.
She said her group has "a bunch of collaborations" with teams inside the company and has worked with outside organizations, like the Better Business Bureau's Institute for Marketplace Trust and the Brookings Institution.
Facebook doesn't plan to release the new Fairness Flow software to the public under an open-source license, but the team could publish academic papers documenting its findings, Kloumann said.
At the same time, Facebook knows it can do more in terms of hiring AI researchers with a diversity of ideas and backgrounds to try to minimize bias in its software. The company's AI research group has been opening labs far away from Facebook's Silicon Valley headquarters — most recently in Montreal.
"This is something that I see us doubling down, for sure," said Joaquin Quinonero Candela, the company's director of applied machine learning.