Google announced on Wednesday that content creators on YouTube will be required to disclose any altered or synthetic content they post on the platform, part of an effort to tighten standards and combat deepfakes.
Google also said that, through its privacy request process, it will allow the removal of AI-generated or other synthetic or altered content on YouTube that simulates an identifiable individual, including their face or voice.
“In the coming months, YouTube will require creators to disclose altered or synthetic content that is realistic, including using AI tools, and we’ll inform viewers about such content through labels in the description panel and video player. We’re committed to working with creators before this rolls out to make sure they understand the new requirements,” Google said in a blog post.
There is no silver bullet for combating deepfakes and AI-generated misinformation: Google
The development comes just a week after Indian Union cabinet minister for electronics and information technology Ashwini Vaishnaw and minister of state for electronics and information technology Rajeev Chandrasekhar directed social media platforms to strictly enforce rules against deepfakes.
Mr. Vaishnaw stated that the government will issue new guidelines to combat deepfakes, and Mr. Chandrasekhar requested that social media companies update their user policies in accordance with the IT rules notified in October 2022.
According to Google, there is no silver bullet for combating deepfakes and AI-generated misinformation.
“It requires a collaborative effort, one that involves open communication, rigorous risk assessment, and proactive mitigation strategies… Our collaboration with the Indian government for a multi-stakeholder discussion aligns with our commitment to addressing this challenge together and ensuring a responsible approach to AI,” Google said.