The developers of ChatGPT, OpenAI, have called for the regulation of superintelligent AIs, arguing that a body like the International Atomic Energy Agency would be needed to protect humanity from the dangers posed by rapidly advancing AI.
In a note posted on the company’s website, co-founders Greg Brockman, Ilya Sutskever, and chief executive Sam Altman urged an international regulatory body to start considering ways to “inspect systems, require audits, test for compliance with safety standards, and place restrictions on degrees of deployment and levels of security” in order to lessen the “existential risk” such systems might present.
“It’s conceivable that within the next 10 years, AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today’s largest corporations,” the note read.
“In terms of both potential upsides and downsides, superintelligence will be more powerful than other technologies humanity has had to contend with. We can have a more prosperous future, but we must manage risk to get there. Given the possibility of existential risk, we can’t just be reactive.”
OpenAI’s leaders also called for some degree of coordination among the institutions conducting artificial intelligence research, to ensure that the development of AI models integrates smoothly with society while prioritizing safety.
The US-based Center for AI Safety (CAIS), which seeks to “lower societal-scale risks from artificial intelligence”, describes eight kinds of “catastrophic” and “existential” threats that the development of AI could pose.
According to ChatGPT’s creators, those risks mean that “people around the world should democratically decide on the bounds and defaults for AI systems”, though they admit that “we don’t yet know how to design such a mechanism”.
“We believe it’s going to lead to a much better world than what we can imagine today (we are already seeing early examples of this in areas like education, creative work, and personal productivity),” the note read.
“Because the upsides are so tremendous, the cost to build it decreases each year, the number of actors building it is rapidly increasing, and it’s inherently part of the technological path we are on. Stopping it would require something like a global surveillance regime, and even that isn’t guaranteed to work. So we have to get it right.”
Separately, Sam Altman addressed a panel of United States lawmakers on Tuesday (May 16), saying that regulation of the “increasingly powerful models” of artificial intelligence is “critical” to mitigating the risks the technology poses. Altman also described the use of AI to interfere with election integrity as a “significant area of concern”.
“OpenAI was founded on the belief that artificial intelligence has the potential to improve nearly every aspect of our lives, but also that it creates serious risks,” Altman told a Senate Judiciary subcommittee hearing.
His testimony comes as businesses of all sizes race to bring ever more sophisticated AI models to market, prompting critics and industry experts to warn that the technology could exacerbate societal harms such as misinformation and prejudice.
While voicing his concerns, Altman also highlighted AI’s positive social effects, saying that, over time, generative AI developed by OpenAI would “address some of humanity’s biggest challenges, like climate change and curing cancer”.
In light of those concerns, the OpenAI CEO stated: “We think that regulatory intervention by governments will be critical to mitigating the risks of increasingly powerful models.”