Social media powerhouse TikTok announced significant layoffs today, cutting hundreds of moderators across the UK and Asia as it moves to integrate artificial intelligence more broadly into its operations.
The company stated that affected employees would be given priority for rehire if they meet certain unspecified criteria. However, TikTok has not disclosed the exact number of layoffs from its 2,500-strong UK workforce, according to the Wall Street Journal.
The decision has drawn swift criticism from unions and online safety advocates. John Chadfield, national tech officer for the Communications Workers Union (CWU), told the BBC, “[TikTok is] putting corporate greed over the safety of workers and the public.” He added to the WSJ that the company has ignored repeated warnings from employees about the real-world risks of replacing human moderators with underdeveloped AI systems.
TikTok says AI is already cutting unsafe content
TikTok asserts that its AI systems already identify and remove unsafe content effectively. However, the CWU told the BBC that those systems may not yet be equipped to handle moderation safely, posing risks for vulnerable users.

In response, TikTok said it employs “comprehensive” AI designed to support the safety of both users and human moderators.
“TikTok is continuing a reorganization that began last year to strengthen our global Trust and Safety operations, which includes consolidating activities into fewer locations worldwide,” the company said in a statement.
The platform emphasized that it has spent several years integrating AI across its core operations, aiming to leverage these tools to “maximize effectiveness and speed” in content moderation.
TikTok is already on regulators’ radar abroad
TikTok is already under regulatory scrutiny in the UK over user safety and the handling of personal data. In March, the Information Commissioner’s Office launched an investigation into how the platform manages data belonging to users aged 13 to 17.

In its statement, TikTok referenced recent UK legislation, noting that new rules under the Online Safety Act, which came into effect in July, have increased potential fines for non-compliance with national safety standards to as much as 10% of revenue. The company said it is relying more heavily on AI to meet these regulatory requirements.
TikTok also claimed that its AI systems automatically remove approximately 85% of content that violates platform rules, though the company did not provide independent verification of this figure.
Frequently Asked Questions
Why is TikTok laying off moderators?
TikTok is reducing its human moderation workforce in the UK and Asia to integrate artificial intelligence more broadly into content moderation processes.
How many employees were affected?
The company has not disclosed an exact number, but reports indicate that hundreds of moderators were impacted out of a UK workforce of 2,500.
Will displaced employees be rehired?
TikTok stated that laid-off workers will have priority for rehire if they meet certain unspecified criteria.
How is AI used for moderation?
TikTok claims its AI automatically identifies and removes unsafe or non-compliant content, which it says accounts for approximately 85% of removed posts.
Are there safety concerns with AI moderation?
Yes, unions and safety advocates warn that current AI tools may not yet be fully capable of handling content moderation safely, potentially putting vulnerable users at risk.
Why is TikTok relying on AI now?
The company cites regulatory pressure, including the UK’s Online Safety Act, which imposes stricter safety standards and higher potential fines for non-compliance.
Has TikTok faced regulatory scrutiny?
Yes, the UK’s Information Commissioner’s Office is investigating how TikTok manages the personal data of users aged 13 to 17.
Conclusion
TikTok’s move to replace human moderators with artificial intelligence reflects the growing tension between automation, user safety, and regulatory compliance in the tech industry. While the company emphasizes the efficiency and reach of AI, critics warn that underdeveloped systems may put vulnerable users at risk. With increased scrutiny from regulators and new safety laws in the UK, TikTok faces the challenge of balancing innovation with accountability, all while managing the impact on its workforce.