“Godfather of AI” asks governments to prevent machines from ruining society

"Godfather of AI," asks governments to prevent machines from ruining society

On Wednesday, Geoffrey Hinton, one of the so-called godfathers of artificial intelligence, called on governments to intervene and ensure that machines do not take control of society. Hinton made news in May when he announced his departure from Google after a decade of service so that he could speak more freely about the perils of AI, shortly after the release of ChatGPT captivated the world’s imagination.

The highly regarded University of Toronto AI scientist was speaking to a packed audience at the Collision tech conference in Toronto. More than 30,000 company founders, investors, and tech workers attended the conference, most of them hoping to learn how to ride the AI wave rather than hear about its perils.

“Before AI is smarter than us, I think the people developing it should be encouraged to put a lot of work into understanding how it might try and take control away,” Hinton said. “Right now there are 99 very smart people trying to make AI better and one very smart person trying to figure out how to stop it taking over and maybe you want to be more balanced,” he said.

Hinton emphasizes AI risks, inequality, fake news; EU exploring solutions

Despite critics who believe he is exaggerating the risks of AI, Hinton has emphasized that the concerns should be taken seriously. “I think it’s important that people understand that this is not science fiction, this is not just fear-mongering,” he insisted. “It is a real risk that we must think about, and we need to figure out in advance how to deal with it.”

Hinton also expressed fear that AI might exacerbate inequality, with the significant productivity gains from its adoption benefiting the wealthy rather than workers. “The wealth isn’t going to go to the people doing the work. It is going to go into making the rich richer and not the poorer and that’s very bad for society,” he added.

He also mentioned the dangers of fake news generated by ChatGPT-style bots and expressed hope that AI-generated content might be watermarked in the same way that central banks watermark physical currency. “It’s very important to try, for example, to mark everything fake as fake. Whether we can do that technically, I don’t know,” he said. The European Union is exploring such a technique in its AI Act, which is currently being negotiated by lawmakers and will set the rules for AI in Europe.

Hinton’s AI concerns contrasted with conference talks focused on opportunity rather than safety

Hinton’s list of AI concerns contrasted with conference talks that focused less on safety and threats and more on capitalizing on the opportunities ChatGPT has opened up. According to Sarah Guo, a venture capitalist, “talking about AI as an existential threat” is as premature as “talking about overpopulation on Mars.”

She also warned against “regulatory capture,” in which government action would protect incumbents before it could benefit sectors such as health, education, or science. Opinions differed on whether the current generative AI giants, mainly Microsoft-backed OpenAI and Google, would remain unmatched or whether new players would enter the fray with their own models and innovations.

“In five years, I still imagine that if you want to go and find the best, most accurate, most advanced general model, you’re probably going to still have to go to one of the few companies that have the capital to do it,” said Leigh Marie Braswell of venture capital firm Kleiner Perkins. Gradient Ventures’ Zachary Bratun-Glennon predicted a future in which “there will be millions of models across a network much like we have a network of websites today.”
