Social media job cuts risk surge of online terrorist content, warns report


Social media sites are at risk of letting terrorist content proliferate on their platforms following job cuts across the industry, undermining years of work in counter-terrorism, a UN-backed organisation has warned.

Some terrorist actors have increased operations on Twitter after staff tasked with monitoring this content were sacked, according to a report by Tech Against Terrorism, an initiative that helps companies police content online.

The warning comes just two months after Twitter cut its 7,500-strong workforce nearly in half following Elon Musk’s $44bn takeover of the social network. Significant cuts were made to Twitter’s trust and safety teams, which aim to protect users from illegal and harmful content, as part of the restructuring.

“If we come across dangerous content, we are not even sure who to contact anymore because they have been sacked,” said Adam Hadley, director of Tech Against Terrorism, which published its annual review of terrorist and violent extremist activity online late on Thursday.

Twitter did not respond to a request for comment.

Tech Against Terrorism is a public-private partnership launched in 2017. Its partners include Meta, Google, Microsoft and Twitter, as well as the UK, Spanish and Swiss governments.

The body works with hundreds of smaller platforms, including Pinterest and Etsy, and in the past year has reviewed more than 19,000 websites or posts containing terrorist content from over 70 tech companies.

Hadley also expressed concerns that broader technology sector job cuts could affect the moderation of terrorist content as very few people have the relevant experience or knowledge to police this content.

Both Snapchat and Meta, which owns Instagram, Facebook and WhatsApp, have cut staff in recent months amid slowing growth and declining spending on advertising, their primary source of revenue.

“Platforms constantly have to adapt filters and search terms, and terrorists always find ways around these rules. Our concern is that this requires deep expertise, and many of the experts that we are used to working with at large platforms have left,” Hadley added.

Meta said safety and security remained top priorities, and it had more than 40,000 people devoted to this work. Snap, which owns Snapchat, said its trust and safety team was one of the least affected by the restructuring, and nobody considered a specialist in this area had left.

The report also highlighted the rise of small websites set up by online actors to host and share terrorist content. The widely available sites are often left online for long periods as actors “exploit a lack of global consensus” on takedowns, as well as differing legal jurisdictions and under-resourced law enforcement.

“Almost nothing is happening about these big terrorist-operated websites [and] because many of these sites stay online and are so easy to find, it is hard to discuss it publicly,” Hadley said.

The organisation is working on a free tool with Google’s research and development arm, Jigsaw, to assist the moderation process for owners of smaller websites. In December, Meta released open-source software that other platforms can use to match content against a database of known terrorist images and videos and flag it for urgent human review.

The Tech Against Terrorism report also cited an increasing prevalence of IS and al-Qaeda content on big tech platforms in languages other than English and Arabic, which it said the companies are less equipped to moderate. Bad actors are also using marketplace functions on social media platforms to sell terrorist materials, it added.
