Many AI researchers see at least a 5% chance of human extinction due to artificial intelligence


Many artificial intelligence researchers believe that the creation of superhuman AI in the future has a non-trivial potential to cause human extinction – however, there is substantial dispute and confusion about such concerns.

The findings are based on a poll of 2700 AI researchers who have recently published work at six of the leading AI conferences, making it the largest such survey to date. Participants were invited to offer their views on possible dates for future AI technology milestones, as well as the positive or negative societal consequences of those achievements. Almost 58% of respondents said they considered there to be at least a 5% chance of human extinction or other extremely bad AI-related outcomes.

“It’s an important signal that most AI researchers don’t find it strongly implausible that advanced AI destroys humanity,” says Katja Grace at the Machine Intelligence Research Institute in California, an author of the paper. “I think this general belief in a non-minuscule risk is much more telling than the exact percentage risk.”

AI expert polls lack accuracy; Torres suggests no immediate panic

But, according to Émile Torres of Case Western Reserve University in Ohio, there is no need to panic just yet. Such AI expert polls, they say, "don't have a good track record" of forecasting future AI breakthroughs. A 2012 study found that, over the long term, AI expert predictions were no more accurate than the opinions of the non-expert public. The authors of the new survey also acknowledged that AI researchers are not experts at forecasting AI's future trajectory.

Compared with responses to a 2022 version of the same survey, many AI researchers predicted that AI would reach certain milestones earlier than previously anticipated. This shift coincides with ChatGPT's debut in November 2022 and Silicon Valley's push to widely deploy similar AI chatbot services based on large language models.

The researchers polled gave AI systems a 50% or higher chance of successfully accomplishing most of 39 sample tasks within the next decade, such as producing new songs indistinguishable from a Taylor Swift hit or building an entire payment processing site from scratch. Other tasks, such as physically installing electrical wiring in a new house or solving long-standing mathematical puzzles, are expected to take longer.

The prospect of AI outperforming humans on all tasks was given a 50% chance of occurring by 2047, while the possibility of all human jobs becoming fully automatable was given a 50% chance of occurring by 2116. These estimates are 13 and 48 years earlier, respectively, than those given in last year's survey.

According to Torres, these heightened expectations for AI progress may yet fall flat. "A lot of these breakthroughs are pretty unpredictable. And the field of AI may go through another winter," he says, referring to the drying up of funding and corporate interest in AI during the 1970s and 80s.

Other, more pressing concerns do not involve superhuman AI. A large majority of AI researchers — 70% or more — rated AI-powered scenarios involving deepfakes, manipulation of public opinion, engineered weapons, authoritarian control of populations and worsening economic inequality as of substantial or extreme concern. Torres also highlighted the risk of AI fuelling disinformation around existential issues such as climate change, or contributing to the deterioration of democratic governance.

“We already have the technology, here and now, that could seriously undermine [the US] democracy,” says Torres. “We’ll see what happens in the 2024 election.”
