Publishers should be able to prevent AI from scraping their content: Google

In its submission to the Australian government, Google said publishers should be able to opt out of having their works mined by AI and large language models (LLMs). Google argued that copyright law should be changed to allow generative AI systems to train on content from across the internet unless rights holders choose to opt out. According to The Guardian, Google had previously asked the Australian government for a fair use exception for AI systems, but this is the first time it has proposed an opt-out option for publishers.

It called on Australian lawmakers to promote “copyright systems that enable appropriate and fair use of copyrighted content to enable the training of AI models in Australia on a broad and diverse range of data while supporting workable opt-outs for entities that prefer their data not to be trained in using AI systems”.

Google previously requested a fair use exception for AI systems from the Australian government

However, the company has not specified how such a system would operate. In a blog post published last month, Danielle Romain, Google’s VP of Trust, said the platform was looking to evolve with the rise of AI in order to preserve publishers’ rights. “As new technologies emerge, they present opportunities for the web community to evolve standards and protocols that support the web’s future development. One such community-developed web standard, robots.txt, was created nearly 30 years ago and has proven to be a simple and transparent way for web publishers to control how search engines crawl their content,” the blog post read.
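To illustrate how the robots.txt mechanism mentioned above works in practice, here is a minimal sketch using Python’s standard-library `urllib.robotparser`. The crawler name `ExampleAIBot` is hypothetical, purely for demonstration; the article does not say which user-agent tokens any AI crawler actually uses.

```python
from urllib import robotparser

# Hypothetical robots.txt: block a made-up AI crawler ("ExampleAIBot")
# while leaving the site open to all other user agents.
rules = """
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Allow: /
""".splitlines()

rp = robotparser.RobotFileParser()
rp.parse(rules)  # parse the rules directly instead of fetching a URL

# A well-behaved crawler checks these rules before fetching a page.
print(rp.can_fetch("ExampleAIBot", "https://example.com/article"))  # False
print(rp.can_fetch("SomeOtherBot", "https://example.com/article"))  # True
```

This is the same opt-out pattern the blog post describes: publishers publish per-crawler rules, and compliance is voluntary on the crawler’s side, which is why Google frames the broader question as one for web standards rather than a purely technical fix.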

“We believe it’s time for the web and AI communities to explore additional machine-readable means for web publisher choice and control for emerging AI and research use cases,” it added. According to experts, copyright will become one of the biggest problems going forward as generative AI systems continue to expand their horizons.

“The general rule is that you need millions of data points to be able to produce useful outcomes…which means that there’s going to be copying, which is prima facie a breach of a whole lot of people’s copyright,” Dr. Kayleen Manwaring, a senior lecturer at UNSW Law and Justice, told the publication. According to reports, the Australian government is working on a proposal as part of its news industry support program to prevent AI from scraping sites’ information for free.
