Tech firms could face EU fines for failing to remove terrorist content within one hour

Key Points
  • A committee of EU lawmakers has approved draft legislation that would see internet companies legally required to remove terrorist material from their platforms within one hour of being alerted to it by authorities.
  • Tech firms could be fined up to 4% of their global turnover if they consistently fail to comply.
  • The European Parliament’s full chamber will vote on whether to approve the draft law next week.
Social media giants could face heavy fines if they fail to remove terrorist content within an hour, under new legislation proposed by EU lawmakers.

On Monday evening, the European Parliament's Civil Liberties Committee passed the draft law, which is now subject to approval in a vote in the Parliament's full chamber next week. If approved, the legislation will face a final ratification by EU leaders.

Under the proposed regulation, internet companies that host content uploaded by users — like Facebook or YouTube — and offer their services in the EU will have a one-hour window to remove terrorist content from their site if they are alerted by "competent national authorities." They will not be legally obliged to constantly monitor their platforms for extremist material.

The new law would see tech firms fined up to 4% of their global turnover if they "systematically and persistently" fail to comply.

Companies that have been issued a substantial number of removal orders may also be asked by authorities to take further measures, such as regular reporting to authorities or increasing staff numbers. Lawmakers also agreed that additional measures ordered by authorities should take a company's financial capabilities into account, as well as "the freedom to receive and impart information and ideas in an open and democratic society."

Smaller platforms would be given a slight advantage, with a 12-hour window to remove content after their first removal order, as well as an explanation from authorities about their obligations under the law.

The legislation targets any material — such as text, images, sound recordings or videos — that incites or solicits terrorist offences, provides instructions for carrying out terrorist offences, or solicits participation in the activities of a terrorist group. It will also apply to content providing guidance on how to make and use explosives, firearms and other weapons for terrorist purposes.

The law would protect content distributed for educational, journalistic or research purposes. The expression of controversial views on sensitive political matters would not be subject to the legislation.

Daniel Dalton, member of the Committee on Civil Liberties, Justice and Home Affairs, said in a press release Monday there was a clear problem with terrorist material being circulated unchecked — but he added the legislation would not prohibit free speech.

"This propaganda can be linked to actual terrorist incidents and national authorities must be able to act decisively," he said. "Any new legislation must be practical and proportionate if we are to safeguard free speech. Without a fair process we risk the over-removal of content as businesses would understandably take a safety-first approach to defend themselves. It also absolutely cannot lead to a general monitoring of content by the back door."

Big Tech has faced intensified scrutiny since a video of last month's attacks on two New Zealand mosques was shared repeatedly on several social media sites.

U.K. lawmakers published a proposal for new legislation on Monday that would slap companies with hefty fines, block websites and hold executives personally liable if their platforms host harmful content.

Last week, Australia passed a similar law that could see tech firms and their executives fined or jailed for failing to remove harmful content from their platforms.