In a significant move to address the growing concerns over AI-generated deepfakes and the unauthorized use of original content, the United States has introduced the Content Origin Protection and Integrity from Edited and Deepfaked Media Act (COPIED Act). This new bill, which has garnered strong bipartisan support, aims to safeguard the integrity of original works and curtail the misuse of AI technology.
Bipartisan support for the COPIED Act
The COPIED Act arrives on the heels of another Senate bill, the “Take It Down Act,” which was proposed last month to target the removal of AI deepfakes depicting non-consensual intimate imagery.
This legislative push follows a series of high-profile incidents, including the viral spread of AI-generated deepfake nude images of Taylor Swift on social media platforms such as X (formerly Twitter), Facebook, and Instagram in January. These incidents have sparked a nationwide debate on the ethical implications and dangers of AI technology.
Addressing content creator concerns
Beyond combating deepfakes, the COPIED Act seeks to address the grievances of content creators, journalists, artists, and musicians who have seen AI systems profit from their work without acknowledgment or fair compensation.
A recent Forbes report accused Perplexity AI, an AI-powered search engine, of content theft. An investigation by the technology magazine Wired corroborated this, finding that Perplexity was summarizing its articles despite the Robots Exclusion Protocol (robots.txt), which signals to crawlers that such access is not permitted.
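To see what "despite the Robots Exclusion Protocol" means in practice, here is an illustrative sketch (not drawn from the Wired report) of how a well-behaved crawler consults robots.txt before fetching a page. The bot name and URLs are hypothetical; Python's standard library handles the parsing.

```python
# Illustrative sketch: how a compliant crawler checks the Robots
# Exclusion Protocol before fetching. Bot names and URLs are hypothetical.
from urllib.robotparser import RobotFileParser

# A minimal robots.txt that bars one crawler from the whole site
# while leaving it open to everyone else.
robots_txt = """\
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# The excluded bot is told not to fetch; other agents may proceed.
print(parser.can_fetch("ExampleAIBot", "https://example.com/articles/1"))    # False
print(parser.can_fetch("GenericBrowser", "https://example.com/articles/1"))  # True
```

The protocol is purely advisory: nothing in robots.txt technically prevents a crawler from ignoring it, which is precisely the gap the COPIED Act's enforcement provisions aim to address.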
Ensuring content authentication
The COPIED Act proposes the establishment of a digital record called "content provenance information," akin to a logbook attached to all types of content: news articles, artistic works, images, and videos. This record would allow AI-generated content to be authenticated and detected. The bill would also make it illegal to tamper with this information, helping journalists and creative artists safeguard their work from AI exploitation.
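The bill does not prescribe a technical format, but the core idea of tamper-evident provenance can be sketched with a simple hash binding. In this hypothetical example, a metadata record is tied to the content bytes by a SHA-256 digest, so altering either one breaks the match; real provenance standards such as C2PA layer cryptographic signatures on top of this principle.

```python
# Hypothetical sketch of tamper-evident content provenance: bind a
# metadata record to the content with a SHA-256 digest. This illustrates
# the concept only; it is not the mechanism the bill mandates.
import hashlib
import json

def make_provenance(content: bytes, creator: str, tool: str) -> dict:
    """Create a provenance record whose digest covers content + metadata."""
    record = {"creator": creator, "tool": tool}
    payload = content + json.dumps(record, sort_keys=True).encode()
    record["digest"] = hashlib.sha256(payload).hexdigest()
    return record

def verify_provenance(content: bytes, record: dict) -> bool:
    """Recompute the digest and compare; any tampering breaks the match."""
    meta = {k: v for k, v in record.items() if k != "digest"}
    payload = content + json.dumps(meta, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest() == record["digest"]

article = b"Original reporting ..."
record = make_provenance(article, creator="Jane Doe", tool="human-authored")

print(verify_provenance(article, record))                # True
print(verify_provenance(b"AI-edited copy ...", record))  # False: content changed
```

Because the digest covers both the content and its metadata, stripping or rewriting the record is detectable, which is what would make the bill's anti-tampering provision enforceable in practice.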
The bill empowers state officials to enforce its provisions, creating a legal pathway for suing AI companies that remove watermarks or use content without consent or compensation. This approach aims to provide robust protection for original content creators against unauthorized use by AI systems.
Comparison with international regulations
The European Union (EU) has already enacted comprehensive legislation to regulate AI, the EU Artificial Intelligence Act. The act classifies AI systems into four categories based on the risk they pose: unacceptable risk, high risk, limited risk, and minimal risk.
AI systems like those used in China to assign social scores to citizens are deemed to pose an unacceptable risk and are prohibited under the EU's rules. India, by contrast, has yet to enact AI-specific legislation. A March advisory from the Ministry of Electronics and Information Technology required AI systems labeled "under-tested" or "unreliable" to obtain government approval before deployment, but it was later withdrawn to avoid stifling innovation, reflecting a cautious approach to AI regulation.
As the US moves forward with the COPIED Act, it joins a global effort to regulate AI technology and protect original content creators. The bill’s success could set a precedent for future legislation, aiming to balance innovation with ethical considerations and the protection of intellectual property. Your thoughts on the COPIED Act and its potential impact on AI regulation and content protection are welcome.