Facebook came under fire on Thursday night after users noticed that typing "video of..." into its search box produced suggestions alluding to child abuse and other vulgar and upsetting content. Facebook promptly apologized and removed the predictions.
YouTube has also been the subject of investigations into how it surfaces extreme content. On Monday, YouTube users highlighted the prevalence of conspiracy theories and extreme content in the site's autocomplete search box.
Both companies blamed users for their search suggestion issues. Facebook told The Guardian, "Facebook search predictions are representative of what people may be searching for on Facebook and are not necessarily reflective of actual content on the platform."
Alphabet's Google, the owner of YouTube, says that its search results take into account "popularity" and "freshness," which are determined by users.
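As a thought experiment, a ranking that weights only popularity and freshness can be sketched in a few lines. The scoring formula and the numbers below are invented for illustration, not either company's actual algorithm; the point is that a query spiking right now can outrank one with far more lifetime searches.

```python
import time

def score(query_count, last_searched, now, half_life_days=7.0):
    """Hypothetical autocomplete score: raw popularity damped by a
    freshness half-life. Not Google's or Facebook's actual formula."""
    age_days = (now - last_searched) / 86400.0
    freshness = 0.5 ** (age_days / half_life_days)
    return query_count * freshness

now = time.time()
# (total searches, timestamp of last search) -- invented numbers
suggestions = {
    "video of cats":      (1000, now - 30 * 86400),  # popular but stale
    "video of hoax clip": (400,  now - 1 * 86400),   # niche but spiking now
}
ranked = sorted(suggestions, key=lambda q: score(*suggestions[q], now),
                reverse=True)
print(ranked[0])  # the fresh, shocking query outranks the popular one
```

Under this toy model, a month-old query with 1,000 searches scores far below a day-old query with 400, which is one way "freshness" can push disturbing but trending queries to the top of the suggestion box.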
But this isn't the first time users have driven computer algorithms into unexpected and deeply offensive corners. Microsoft made the same mistake two years ago with a chatbot that learned how to be extremely offensive in less than a day.
In March 2016, Microsoft released a Twitter chatbot named "Tay" that was described as an experiment in "conversational understanding." The bot was supposed to learn to engage with people through "casual and playful conversation."
But Twitter users engaged in conversation that wasn't so casual and playful.
Within 24 hours, Tay was tweeting about racism, anti-Semitism, dictators, and more. Part of it was prompted by users asking the bot to repeat after them, but the bot soon began saying strange and offensive things on its own.
As a bot, Tay had no sense of ethics. Although Microsoft claimed the chatbot had been "modeled, cleaned, and filtered," the filtering did not appear to be very effective, and the company soon pulled it and apologized for the offensive remarks.
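Why "modeled, cleaned, and filtered" can still fail is easy to demonstrate with a toy blocklist filter. This is a hypothetical sketch, not Microsoft's actual safeguards: exact-match filtering catches nothing that users bother to obfuscate, and it says nothing about what a bot should repeat verbatim.

```python
BLOCKLIST = {"badword"}  # stand-in for a real list of slurs and abuse terms

def naive_filter(text):
    """Allow text only if no exact blocklisted token appears.
    A deliberately weak sketch of content filtering."""
    return not (set(text.lower().split()) & BLOCKLIST)

# An exact match is caught...
assert naive_filter("please repeat badword") is False
# ...but trivial obfuscation slips straight through:
assert naive_filter("please repeat b4dword") is True
assert naive_filter("please repeat bad-word") is True
```

Attackers probe exactly these gaps, which is why real moderation systems need far more than token matching.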
Without filters, anything goes and whatever maximizes engagement gets the attention of the bot and its followers. Unfortunately, hatred and negativity are great at driving engagement.
The more shocking something is, the more likely people are to read it, especially when platforms have little moderation and are optimized for maximum engagement.
Twitter's well-documented spread of fake news is the poster child for this issue. The journal "Science" published a study this month examining how misinformation spreads on Twitter. The researchers found that falsehoods diffused faster than the truth, and suggested that "the degree of novelty and the emotional reactions of recipients may be responsible for the differences observed."
Psychologists have also studied why bad news appears to be more popular than good news. An experiment run at McGill University showed evidence of a "negativity bias," a term for people's collective hunger for bad news. When you apply this to social media, it's easy to see how harmful content can easily end up in search results.
The McGill scientists also found that most people believe they're better than average and expect things to turn out all right in the end. Against that rosy baseline, bad news and offensive content stand out as more surprising, and therefore more compelling to look at.
When this gets amplified across the millions of people conducting searches each day, negative news rises to the forefront. People are drawn to shocking stories, the stories gain traction, more people search for them, and they end up reaching a far wider audience than they otherwise would.
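That feedback loop behaves like compound growth, which a toy simulation makes concrete. The conversion rates below are invented purely for illustration: the only claim is that content converting attention into new searches at a higher rate pulls away exponentially.

```python
def simulate(rounds, seed_audience, conversion):
    """Toy amplification loop: each round, the current audience draws in
    new searchers in proportion to a per-round conversion rate."""
    audience = seed_audience
    for _ in range(rounds):
        audience += audience * conversion  # reach begets more searches
    return audience

# Hypothetical rates: shocking content converts attention far better.
mundane  = simulate(10, seed_audience=1000, conversion=0.05)
shocking = simulate(10, seed_audience=1000, conversion=0.30)
print(round(mundane), round(shocking))  # the gap widens every round
```

After ten rounds the shocking story's audience is several times the mundane one's, even though both started identically, which is the amplification dynamic the paragraph above describes.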
Both Facebook and Google have hired human moderators to find and flag offensive content, but so far they haven't been able to keep up with the volume of new material uploaded, and the new ways that mischievous or malicious users try to ruin the experience for everybody else.
Meanwhile, Microsoft recovered from the Tay debacle and released another chatbot called Zo in 2017. While BuzzFeed managed to get it to slip up and say offensive things, they were nothing on the order of what attackers trained Tay to say in just a few hours. Zo is still alive and well today, and largely inoffensive -- if not always on topic.
Maybe it's time for Facebook and Google to give Microsoft Research a call and see if the researchers there have any tips.