Wait, What? AI could ‘kill many humans’ within two years, says Sunak adviser

Within the next two years, extremely powerful artificial intelligence (AI) systems might be able to “kill many humans,” according to the AI adviser to UK Prime Minister Rishi Sunak. Sounding the alarm, Matt Clifford, who is now forming a government AI group, said that policymakers around the world must cooperate to control the technology, because failure to do so could have catastrophic consequences.

“You can have really very dangerous threats to humans that could kill many humans, not all humans, simply from where we’d expect models to be in two years’ time,” said Clifford. He said humans should be prepared for threats ranging from cyberattacks to the creation of bioweapons if AI is left unchecked. “The kind of existential risk that I think the letter writers were talking about is… what happens once we effectively create a new species, you know, an intelligence that is greater than humans,” he added. “If we try and create artificial intelligence that is more intelligent than humans and we don’t know how to control it, then that’s going to create a potential for all sorts of risks now and in the future… it’s right that it should be very high on the policymakers’ agendas.”

“Powerful AI systems should be developed only once we are confident that their effects will be positive”

When asked what probability he assigned to the idea that AI could wipe out humanity, Clifford responded, “I think it is not zero.” Elon Musk and Steve Wozniak were among the signatories of an open letter published by the Future of Life Institute in March that urged a very cautious approach to the development of the next generation of AI. The letter cited 12 pieces of research by experts, including former employees of OpenAI, Google, and DeepMind.

“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter stated. “AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts,” it added. 

Musk has voiced a similar warning: “AI is more dangerous than, say, mismanaged aircraft design or production maintenance or bad car production, in the sense that it has the potential – however small one may regard that probability, but it is non-trivial – it has the potential of civilization destruction.”

This week, 350 AI specialists, including OpenAI CEO Sam Altman, acknowledged that there is a chance that, in the long run, the technology could wipe out humanity.
