
Artificial General Intelligence (AGI)—a form of AI capable of human-level thinking—could emerge as soon as 2030 and might even “permanently destroy humanity,” according to a recent research paper published by Google DeepMind.
“Given the massive potential impact of AGI, we expect that it too could pose a potential risk of severe harm,” the paper warns. It cites existential threats capable of wiping out humanity as clear examples of such severe risks.
The researchers stress that judging the severity of a given harm from AGI is not theirs alone to make. “In between these ends of the spectrum, the question of whether a given harm is severe isn’t a matter for Google DeepMind to decide; instead it is the purview of society, guided by its collective risk tolerance and conceptualization of harm,” the study notes.
DeepMind study outlines AI risks
The paper, co-authored by DeepMind co-founder Shane Legg, does not specify exactly how AGI might cause humanity’s extinction. Instead, it urges the tech industry to implement strong preventative measures to reduce the risk posed by AGI systems.
It categorizes AGI-related risks into four broad types: misuse, misalignment, mistakes, and structural risks. The authors stress that DeepMind’s mitigation efforts focus heavily on preventing misuse, particularly in scenarios where humans might deliberately use AGI to cause harm.
DeepMind CEO backs global oversight
In February this year, DeepMind CEO Demis Hassabis emphasized the need for an international regulatory structure to govern AGI’s development. He warned that AGI matching or exceeding human intelligence may be developed within the next five to ten years.
“I would advocate for a kind of CERN for AGI, and by that, I mean a kind of international research-focused high-end collaboration on the frontiers of AGI development to try and make that as safe as possible,” Hassabis said.
He suggested a three-pronged global structure: a research entity like CERN, a watchdog akin to the IAEA to monitor dangerous AGI projects, and a broader multinational governing body similar to the UN. “So a kind of like UN umbrella, something that is fit for purpose for that—a technical UN,” he said.
What is AGI?
AGI is considered the next major leap in artificial intelligence. Unlike today’s AI models, which perform narrow, specific tasks, AGI would demonstrate intelligence applicable across a wide range of domains. It would be able to understand, learn, and apply knowledge in much the same way a human does.