
UK lawmakers urge firms to boycott tech giants that fail to tackle terrorism

Key Points
  • A group of British politicians has released a report condemning internet companies’ reluctance to act on the publication of extremist content.
  • The lawmakers want companies to pull advertising from firms that fail to tackle the issue.
  • Google says it has 10,000 people working to identify content that violates its policies.
A man holds a smartphone displaying icons for social networking apps. (S3studio | Getty Images)

U.K. lawmakers urged advertisers to boycott internet firms that fail to remove or control the publication of extremist content.

In a report published Thursday evening, the U.K. Parliament's Intelligence and Security Committee concluded that security agencies needed help from the likes of Facebook, Twitter and Google to curb the "enormous growth" in online extremist material.

The committee said online communications service providers (CSPs) had made "little tangible progress over the last four years" to tackle the publication of this content.

"Action that affects the CSPs' profits clearly hits home harder than any sense of 'doing the right thing'," the report said.

"Encouraging companies who advertise on the CSPs' platforms to put pressure on the CSPs to remove extremist content — with the threat of pulling their adverts if they do not — will have more impact on the CSPs."

Unilever boycott

The Committee said the U.K. government should seek to lobby the business community to take action, "following the Unilever example."

In February, Unilever threatened to boycott Facebook and Google if they failed to police extremist and illegal content. At the time, a Facebook spokesperson told the BBC: "We fully support Unilever's commitments and are working closely with them."

The Committee also pointed to Google subsidiary YouTube, which experienced an exodus of advertisers earlier this year over commercials appearing alongside extremist and illegal content.

A Google spokesperson told CNBC by phone that 98 percent of the videos it removes for violent extremism are now flagged by its machine-learning algorithms. Since the technology was introduced in June 2017, the share of those videos removed before reaching 10 views has grown from 8 percent to more than 50 percent, the spokesperson said.

The spokesperson added that Google had met its goal of having 10,000 people working to address content that violates its policies, and noted that Google had invested $5 million in non-profit organizations focused on tackling hate and extremism.

A Twitter spokesperson said the company was committed to "improving the health of the conversation on Twitter."

"Safety is a key part of this goal. In relation to terrorist content, 95% of it is now being removed proactively through our technology – 75% before their first Tweet," they told CNBC via email.

"We will also continue to work collaboratively with the Home Office, law enforcement, and our peer companies through the Global Internet Forum to Counter Terrorism, with a view to making further progress."

Facebook has taken several steps to reduce extremist content on its platform. It says 99 percent of content relating to the terror groups Islamic State and al-Qaeda is taken down before being flagged by users. The team responsible for enforcing Facebook's policies numbers around 30,000 people, 200 of whom are dedicated specifically to counter-terrorism.

A spokesperson for Facebook declined to comment directly on the government report.

Business leverage

According to the Committee, removing 300,000 pieces of extremist material would take U.K. security agencies nine years, while Twitter could accomplish it in just six months.

The report also alleged that a number of internet firms had refused government requests to remove such material from their platforms.

During 2017, the U.K. suffered a wave of terrorist activity, with five serious attacks at Westminster, Manchester Arena, London Bridge, Finsbury Park and Parsons Green. The attacks killed 36 people and injured many more.

Dominic Grieve, the Committee's chairman, said the CSPs had failed to stop their systems being used as a safe haven for extremists and terrorists.

"We have seen that appeals to these companies' sense of corporate and social responsibility have not resulted in them making the changes required — and again these loopholes were used by the perpetrators of the 2017 attacks," he said in a press release.

"We strongly consider that action which affects the CSPs' profits will hit home harder than an appeal to them to 'do the right thing,' and could force them to take action on this crucial issue. Government efforts should now be directed towards the business community, to encourage them to use the leverage they have with the CSPs."