Facebook came under fire on Thursday night after users noticed search suggestions alluding to child abuse and other vulgar and upsetting results when people started typing "video of..." Facebook promptly apologized and removed the predictions.
YouTube has also been the subject of investigations regarding how it highlights extreme content. On Monday, YouTube users highlighted the prevalence of conspiracy theories and extreme content in the website's autocomplete search box.
Both companies blamed users for their search suggestion issues. Facebook told The Guardian, "Facebook search predictions are representative of what people may be searching for on Facebook and are not necessarily reflective of actual content on the platform."
Alphabet's Google, the owner of YouTube, says that its search results take into account "popularity" and "freshness," which are determined by users.
But this isn't the first time users have driven computer algorithms into unexpected and deeply offensive corners. Microsoft made the same mistake two years ago with a chatbot that learned how to be extremely offensive in less than a day.
In March 2016, Microsoft released a Twitter chatbot named "Tay" that was described as an experiment in "conversational understanding." The bot was supposed to learn to engage with people through "casual and playful conversation."
But Twitter users engaged in conversation that wasn't so casual and playful.
Within 24 hours, Tay was tweeting about racism, anti-semitism, dictators, and more. Part of it was prompted by users asking the bot to repeat after them, but soon the bot started saying strange and offensive things on its own.
As a bot, Tay had no sense of ethics. Although Microsoft claimed the chatbot had been "modeled, cleaned, and filtered," the filtering did not appear to be very effective, and the company soon pulled it and apologized for the offensive remarks.
Without filters, anything goes and whatever maximizes engagement gets the attention of the bot and its followers. Unfortunately, hatred and negativity are great at driving engagement.
The more shocking something is, the more likely people are to read it. Especially when platforms have little moderation and are optimized for maximum engagement.
Twitter's well-documented spread of fake news is the poster child for this issue. The journal Science published a study this month looking at the pattern of the spread of misinformation on Twitter. The researchers found that falsehood diffused faster than the truth, and suggested that "the degree of novelty and the emotional reactions of recipients may be responsible for the differences observed."
Psychologists have also studied why bad news appears to be more popular than good news. An experiment run at McGill University showed evidence of a "negativity bias," a term for people's collective hunger for bad news. When you apply this to social media, it's easy to see how harmful content can easily end up in search results.
The McGill scientists also found that most people believe they're better than average and expect things to turn out all right in the end. That pleasant view of the world makes bad news and offensive content feel more surprising, and therefore more compelling to look at.
When this is amplified across the millions of searches conducted each day, negative news rises to the forefront: people are drawn to shocking content, it gains traction, more people search for it, and it ends up reaching far more people than it otherwise would.
Both Facebook and Google have hired human moderators to find and flag offensive content, but so far they haven't been able to keep up with the volume of new material uploaded, and the new ways that mischievous or malicious users try to ruin the experience for everybody else.
Meanwhile, Microsoft recovered from the Tay debacle and released another chatbot called Zo in 2017. While Buzzfeed managed to get it to slip up and say offensive things, they were nothing on the order of what attackers were able to train Tay to say in just a few hours. Zo is still alive and well today, and largely inoffensive -- if not always on topic.
Maybe it's time for Facebook and Google to give Microsoft Research a call and see if the researchers there have any tips.