World’s biggest tech companies pledge ‘responsible’ development of AI

Some of the world’s leading tech companies have pledged to collaborate in safeguarding against the dangers of artificial intelligence (AI) during a two-day AI summit in Seoul, which was also attended by multiple governments.

At the event, which South Korea co-hosted with Britain, industry leaders from companies including South Korea’s Samsung Electronics and Google committed to “minimize risks” and to develop new AI models responsibly, even as they push the field forward.

This renewed commitment, formalized in the Seoul AI Business Pledge on Wednesday, along with new safety commitments announced the day before, builds on the consensus from the inaugural global AI safety summit held at Bletchley Park in Britain last year.

On Tuesday, companies including OpenAI and Google DeepMind pledged to share their risk assessment methodologies, particularly for risks “deemed intolerable,” and to ensure such thresholds are not crossed.

However, experts cautioned that it is challenging for regulators to comprehend and manage AI as the sector evolves rapidly.

“I think that’s a really, really big problem,” said Markus Anderljung, head of policy at the Centre for the Governance of AI, a non-profit research organization based in Oxford, Britain.

“Dealing with AI, I expect to be one of the biggest challenges that governments all across the world will have over the next couple of decades.”

“The world will need to have some kind of joint understanding of what are the risks from these sort of most advanced general models,” he said.

Michelle Donelan, UK Secretary of State for Science, Innovation and Technology, stated in Seoul on Wednesday that “as the pace of AI development accelerates, we must match that speed… if we are to grip the risks.”

She indicated that the next AI summit in France would offer further opportunities to “push the boundaries” in testing and evaluating new technology.

“Simultaneously, we must turn our attention to risk mitigation outside these models, ensuring that society as a whole becomes resilient to the risks posed by AI,” Donelan said.

AI inequality a problem

The remarkable success of ChatGPT following its 2022 release sparked a surge in generative AI development, with tech companies worldwide investing billions in their own models.

These AI models can create text, images, audio, and even video from simple prompts, and advocates have touted them as innovations that will enhance lives and businesses globally.

However, critics, rights activists, and governments have warned of potential misuse, such as manipulating voters with fake news or “deepfake” images and videos of politicians.

Many have called for international standards to regulate AI development and use.

“I think there’s increased realization that we need global cooperation to really think about the issues and harms of artificial intelligence. AI doesn’t know borders,” said Rumman Chowdhury, an AI ethics expert who leads Humane Intelligence, an independent non-profit that evaluates and assesses AI models.

Chowdhury told AFP that it’s not just the “runaway AI” of science fiction that is concerning, but also issues like rampant inequality in the sector.

“All AI is just built, developed and the profits reaped (by) very, very few people and organizations,” she told AFP on the sidelines of the Seoul summit.

People in developing nations such as India “are often the staff that does the clean-up. They’re the data annotators, they’re the content moderators. They’re scrubbing the ground so that everybody else can walk on pristine territory.”
