Facebook 'bug' stopped it removing terrorist content

Facebook failed to delete posts promoting terrorism because of a "bug" in its systems, the company has admitted.

The social network, which has been under mounting pressure to police extremist material, blamed a technical glitch for a huge rise in the length of time it took to take down the posts.

Some of the posts could have been up for several weeks or even months before they were deleted, the Telegraph understands. 

The revelation raises more questions about Facebook's ability to police the content on its own website, even as it has invested in tools to automatically spot and delete terrorist images.

Facebook said the median time to take action on posts recently uploaded to the site leapt from less than a minute to 14 hours between April and July, because it fixed a bug that had previously prevented it from removing older posts.

“The increase was prompted by multiple factors, including fixing a bug that prevented us from removing some content that violated our policies, and rolling out new detection and enforcement systems,” said Monika Bickert, Facebook's global head of policy management, and Brian Fishman, its head of counterterrorism policy.

The median time dropped again to less than two minutes in the third quarter of the year.

Facebook said that new technologies used to take down terrorist material “improve and get better over time, but during their initial implementation such improvements may not function as quickly as they will at maturity”.

"This may result in increases in the time-to-action, despite the fact that such improvements are critical for a robust counterterrorism effort," the company added.

The site also said it took down 2.2 million newly-uploaded posts that its technology had found in the second quarter of the year, compared to 1.2 million in the first quarter.

So far this year, it has taken down 14.3 million terrorist-related posts. That includes newly-uploaded posts it has found itself, older posts it has found, and those reported by users.

Facebook is using AI to spot potentially harmful posts which look like they express support for Islamic State or al-Qaeda, with an automated tool giving each post a rating to show how likely it is to contain support for terrorism.

Human reviewers then prioritise the items with the highest scores, and some posts with a very high score are automatically removed if the technology indicates that there is a very high likelihood that they contain terrorist content.
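The triage described above can be sketched as a simple score-threshold pipeline. This is a hypothetical illustration only: the function names, threshold value, and data shapes are assumptions, not details of Facebook's actual system.

```python
# Hypothetical sketch of the triage described above: each post receives a
# score indicating how likely it is to contain terrorist content; posts
# above a cutoff are removed automatically, and the rest are queued for
# human review with the highest-scoring posts first.
# The threshold and all names here are assumptions for illustration.

AUTO_REMOVE_THRESHOLD = 0.99  # assumed cutoff for automatic removal


def triage(posts):
    """posts: list of (post_id, score) pairs, with score in [0, 1].

    Returns (auto_removed_ids, review_queue), where review_queue is
    sorted so human reviewers see the highest-scoring items first.
    """
    auto_removed = [pid for pid, score in posts
                    if score >= AUTO_REMOVE_THRESHOLD]
    review_queue = sorted(
        ((pid, score) for pid, score in posts
         if score < AUTO_REMOVE_THRESHOLD),
        key=lambda item: item[1],
        reverse=True,  # highest scores reviewed first
    )
    return auto_removed, review_queue


removed, queue = triage([("a", 0.995), ("b", 0.60), ("c", 0.85)])
```

Here post "a" clears the assumed threshold and is removed automatically, while "c" and "b" are queued for review in descending score order.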

Facebook said the machine learning had helped reduce the average amount of time taken to remove posts reported by users from 43 hours in the first quarter to 18 hours in the third.

The site is among the social networks that have faced criticism for their role in allowing terror groups to spread propaganda and recruit new members.

It has also faced questions over its use of human content reviewers who must view videos and images which contain violent and unsettling content in order to determine whether they should be removed from the site.

In September the European Commission said Facebook, Google and other tech companies could face fines if they did not remove terrorist content within an hour of being notified about it by the authorities.

https://www.telegraph.co.uk/technology/2018/11/11/facebook-bug-stopped-removing-terrorist-content/