OpenAI, the Microsoft-backed AI research company, has announced the formation of a team of elite machine learning researchers and engineers to steer and control "superintelligent" artificial intelligence (AI) systems. The term "superintelligence" refers to a hypothetical AI model that excels across many different domains, rather than at a single task as some previous-generation models did. Ilya Sutskever (OpenAI's chief scientist and one of the company's co-founders) and Jan Leike (the research lab's head of alignment) will co-lead the new team.
“Superintelligence will be the most impactful technology humanity has ever invented and could help us solve many of the world’s most important problems. But the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction,” OpenAI said in a blog post on Wednesday.
OpenAI plans to focus on superintelligence alignment and AI safety
The company believes such a system could arrive by the end of the decade. In addition, it announced that it will dedicate 20% of the compute it has secured to date to the problem of superintelligence alignment over the next four years.
“While this is an incredibly ambitious goal and we’re not guaranteed to succeed, we are optimistic that a focused, concerted effort can solve this problem. There are many ideas that have shown promise in preliminary experiments, we have increasingly useful metrics for progress, and we can use today’s models to study many of these problems empirically,” the researchers said.
Furthermore, the company stated that the new team's work is in addition to OpenAI's ongoing efforts to improve the safety of current models such as ChatGPT, as well as to understand and mitigate other risks from AI, including misuse, economic disruption, disinformation, bias and discrimination, and addiction and overreliance.