Incident raises concerns over security protocols in tech firms
ByteDance, the owner of TikTok, has dismissed an intern for what it described as “malicious interference” in the training of one of its artificial intelligence (AI) models. The company downplayed the severity of the incident, however, stating that reports about the extent of the damage were exaggerated.
Intern’s actions caused limited disruption
The Chinese tech giant, renowned for its advancements in AI and algorithm development, clarified that the intern was part of the advertising technology team and had no direct involvement with ByteDance’s AI Lab. The intern allegedly attempted to disrupt the training of ByteDance’s Doubao chatbot, China’s most popular generative AI model, which functions similarly to ChatGPT.
In a statement, ByteDance refuted claims circulating on social media over the weekend that the intern’s actions had caused widespread damage, including losses of over $10 million and disruptions to a vast network of graphics processing units (GPUs) used for AI training. The company said its commercial AI operations, including its large language models, were unaffected.
“Their social media profile and some media reports contain inaccuracies,” ByteDance said, rejecting allegations of extensive harm.
Firing and broader consequences
In addition to terminating the intern in August, ByteDance said it reported the individual to their university and to relevant industry bodies, signaling the potential broader ramifications of the intern’s actions.
ByteDance’s AI leadership
As a global leader in AI, ByteDance has invested heavily in its AI projects, particularly in tools like its Doubao chatbot and its text-to-video generator, Jimeng. The company’s success in AI is a key factor in the popularity of its social media platforms, including TikTok and Douyin, its Chinese equivalent. Despite this setback, ByteDance emphasized that it remains focused on maintaining its AI leadership and ensuring the security of its technologies.
This incident highlights the challenges tech companies face in protecting their AI projects from internal threats, particularly as they continue to push the boundaries of innovation in artificial intelligence.