ChatGPT CEO testifies before Congress as concerns grow about AI risks

On Tuesday, the head of the artificial intelligence company behind ChatGPT told Congress that government intervention “will be critical to mitigating the risks of increasingly powerful” AI systems.

“As this technology advances, we understand that people are anxious about how it could change the way we live. We are too,” OpenAI CEO Sam Altman testified at a Senate hearing Tuesday.

His San Francisco-based startup shot to fame after releasing ChatGPT late last year, a free chatbot that responds to questions in a convincingly human-like way.

What began as a worry among educators over the use of ChatGPT to cheat on homework assignments has grown into broader fears about the power of the latest generation of “generative AI” tools to mislead people and disseminate misinformation, violate copyright protections, and upend some jobs.

There is no immediate indication that Congress will enact sweeping new AI rules, as European lawmakers are doing. But societal concerns brought Altman and other tech CEOs to the White House earlier this month and prompted U.S. agencies to pledge to crack down on harmful AI products that violate existing civil rights and consumer protection laws.

Concerns over AI-generated voice cloning and calls for AI system evaluations and reporting

Sen. Richard Blumenthal, the Connecticut Democrat who chairs the Senate Judiciary Committee’s subcommittee on privacy, technology, and the law, opened the hearing with a recorded speech that sounded like him but was in fact a voice clone trained on his floor speeches, reciting remarks that ChatGPT wrote after he asked the chatbot how he would open the hearing.

The result was impressive, said Blumenthal, but he added, “What if I had asked it, and what if it had provided, an endorsement of Ukraine surrendering or (Russian President) Vladimir Putin’s leadership?”

Blumenthal believes AI companies should be required to test their systems and disclose known risks before releasing them.

OpenAI, founded in 2015, is also known for other AI tools such as the image generator DALL-E. Microsoft has invested billions of dollars in the company and has incorporated its technology into its own products, notably the Bing search engine.

Altman also plans to embark on a global tour this month, visiting national capitals and major cities on six continents to discuss the technology with policymakers and the general public. On the eve of his Senate hearing, he dined with dozens of U.S. lawmakers, several of whom told CNBC they were impressed by his remarks.

Christina Montgomery and Gary Marcus testify on AI regulation and a “precision regulation” approach

Christina Montgomery, IBM’s chief privacy and trust officer, and Gary Marcus, a professor emeritus at New York University, also testified. Marcus was among a group of AI experts who called on OpenAI and other tech firms to pause the development of more powerful AI models for six months to give society more time to weigh the risks. The letter was a response to GPT-4, OpenAI’s latest model, which was billed as more powerful than ChatGPT when it was released in March.

“Artificial intelligence will be transformative in ways we can’t even imagine, with implications for Americans’ elections, jobs, and security,” said the panel’s ranking Republican, Sen. Josh Hawley of Missouri. “This hearing marks a critical first step towards understanding what Congress should do.”

Altman and other tech industry leaders have said they favor some form of AI regulation but have warned against overly stringent rules. In her prepared remarks, IBM’s Montgomery urged Congress to adopt a “precision regulation” approach.

“This means establishing rules to govern the deployment of AI in specific use cases, not regulating the technology itself,” Montgomery said.
