ByteDance, the company behind video social media platform TikTok, has reportedly laid off hundreds of human content moderators worldwide as it transitions to an AI-first moderation scheme.
Most of the roughly 500 jobs lost were located in Malaysia, Reuters reports. Per the company, ByteDance employs more than 110,000 people in total. “We’re making these changes as part of our ongoing efforts to further strengthen our global operating model for content moderation,” a TikTok spokesperson said in a statement.
The company currently uses a mix of human and AI content moderators, with the machines handling roughly 80% of the work. ByteDance plans to invest some $2 billion in its trust and safety efforts in 2024. The firings come as ByteDance faces increased regulatory scrutiny in Malaysia, which has seen a spike in harmful social media posts and misinformation this year.
Stateside, Instagram head Adam Mosseri announced Friday that the recent spate of Instagram and Threads locking user accounts, down-ranking posts, and marking them as spam was the product of mistakes made by human moderators, rather than the company’s AI moderation system. He claims that the employees were “making calls without being provided the context on how conversations played out, which was a miss.”
However, the humans were not entirely to blame, Mosseri clarified. “One of the tools we built broke,” he conceded, “so it wasn’t showing them sufficient context.”
Over the past few days, users on both sites found their accounts locked and subsequently disabled for violating the platforms’ age restrictions, which prevent people under the age of 13 from having their own accounts. Per The Verge, those accounts stayed locked even after the users uploaded verification of their age.
The company’s PR team split from Mosseri’s stance, telling TechCrunch that “not all the issues Instagram users had encountered were related to human moderators.” Per the PR team, the age verification issue is still being investigated.