
A resignation fueled by AI concerns
Steven Adler, a longtime researcher at OpenAI, has resigned from his position, citing growing fears about the rapid pace of artificial intelligence development.
In a post on X (formerly Twitter), Adler announced that he had left OpenAI in mid-November after four years working on AI safety, dangerous capability evaluations, agent control, and AGI (Artificial General Intelligence) governance.
“It was a wild ride with lots of chapters – dangerous capability evals, agent safety/control, AGI and online identity, etc. – and I’ll miss many parts of it,” Adler wrote.
A growing fear of the future
In a follow-up post, however, Adler revealed the real reason behind his departure.
“Honestly, I’m pretty terrified by the pace of AI development these days. When I think about where I’ll raise a future family, or how much to save for retirement, I can’t help but wonder: Will humanity even make it to that point?” he wrote.
Adler warned that the race to AGI is an extremely risky gamble, with no major AI lab having a definitive solution to the alignment problem: the challenge of ensuring that AI systems pursue goals consistent with human values.
He also pointed out that intense competition among AI labs pressures them to accelerate development even while ethical and safety concerns remain unresolved.
Seeking solutions for AI safety
After stepping away from OpenAI, Adler is now exploring AI safety and policy solutions.
“I’m enjoying a break for a bit, but I’m curious: what do you see as the most important & neglected ideas in AI safety/policy? I’m especially excited re: control methods, scheming detection, and safety cases,” he concluded in his post.
Adler’s resignation comes amid growing concerns from top AI researchers about the risks of uncontrolled AI development.
Geoffrey Hinton’s dire warning
In a recent interview with BBC Radio 4, Geoffrey Hinton, widely known as the “godfather of AI”, warned that AI could lead to human extinction within the next 30 years.
The British-Canadian computer scientist, who was awarded the 2024 Nobel Prize in Physics for his work on neural networks, estimated a 10% to 20% chance that AI could cause humanity’s downfall within three decades.
Hinton has repeatedly likened humans to toddlers next to the capabilities of future AI systems.
“Imagine yourself and a three-year-old. We’ll be three-year-olds,” he said, emphasizing the potential intelligence gap between humans and future AI systems.
The future of AI safety
As AI advances at an unprecedented pace, experts warn that governments, researchers, and tech companies must take AI safety seriously.
With top minds in the field stepping away from leading AI labs, the debate over ethical AI development and the risks of AGI is becoming more urgent than ever.