CCTV Transcripts

CCTV Script 21/08/18

The EU has long sought to crack down on terrorism-related and extremist content on social media. In March this year, the EU delivered an ultimatum to several tech giants, including Facebook, Twitter and Google's YouTube, asking them to voluntarily delete illicit terrorist content within one hour or face sanctions.

At that time, the EU also issued guidelines targeting unlawful material, applicable across all EU member states, covering terrorist speech, violent content, child abuse material, counterfeit products and pirated material. The rationale for the one-hour window is that, according to EU officials, terrorist content does the most damage in the first few hours after it appears online. Companies were therefore expected to treat "delete such content within an hour" as a standing rule.

Nearly half a year on, reports say the EU has lost patience with the "voluntary action" taken by technology companies to remove terrorist content from the Internet, and now plans to impose tough rules. Until now, the major social media sites have regulated content on a voluntary basis, but the EU is clearly dissatisfied with the results of that self-regulation. This means the EU may be prepared to abandon the voluntary approach: the draft regulation, due out next month, would impose fines on companies that fail to take down flagged content within an hour. In other words, this would be a compulsory rule.

The EU did not see enough progress from the tech companies in taking down terrorist content, so it intends to take stronger measures to protect its citizens. On the other side, the CCIA, a trade group representing companies such as Google and Facebook, complains that the EU is too demanding and that one hour is too short. The organization has also warned that the EU's requirement would hurt technology and the economy in Europe.

In fact, social media platforms such as Facebook and Twitter have tried, through data research and cooperation, to build a shared database of terrorist imagery and video, enabling AI to identify such content automatically. Facebook has said its AI can identify 99% of terrorist propaganda.

At the end of 2017, Google said it would hire 10,000 more content moderators to review material. According to Google's latest release, 99% of the terrorist content removed from YouTube was identified by AI; of that, more than half had no more than 10 views, and more than a fifth had more than 100 views. Both the scale and the speed of removal have improved. YouTube said 70% of all terrorism-related content on the site could be removed within eight hours, and Facebook said it had removed 1.9 million pieces of content related to the extremist groups ISIS and al-Qaeda from its platform in the first three months of this year.

From this we can see the core of the disagreement: the tech companies believe they have already improved self-regulation, while regulators believe those self-regulation efforts are out of proportion to the severe counter-terrorism situation governments face, and that the companies are taking only small steps. The tensions between the tech giants and European governments have come to the fore over time. At the beginning of this year, for example, Britain's security minister Wallace denounced the Silicon Valley technology giants as "merciless profiteers", saying that by refusing to provide the government with suspects' encrypted information, the companies were forcing British taxpayers to spend millions of pounds monitoring terrorists. At the same time, the European Union's privacy rules, the General Data Protection Regulation (GDPR), have already raised the cost of using data for many Silicon Valley giants. If a tough policy such as "delete related content within one hour" officially takes effect, it is no surprise that tech companies would face a sharp spike in operating costs in Europe, and that is the biggest challenge the Silicon Valley giants face in the European market.