DEF CON hacker convention: Las Vegas event will let hackers test limits of AI technology

OpenAI, the maker of ChatGPT, along with other AI chatbot makers including Google and Microsoft, is preparing to let thousands of hackers test the limits of their technology. According to the Associated Press, the Biden administration is working with the industry’s current power players to organize the event at this summer’s DEF CON hacker convention in Las Vegas, which is expected to draw a sizable crowd. “This is why we need thousands of people,” said Rumman Chowdhury, the event’s coordinator.

Sven Cattell and Austin Carson helped lead a workshop inviting community college students to hack an AI model

“We need a lot of people with a wide range of lived experiences, subject matter expertise, and backgrounds hacking at these models and trying to find problems that can then go be fixed.” Users of ChatGPT, Microsoft’s Bing chatbot, and Google’s Bard have already seen AI’s tendency to fabricate information and present it confidently as fact. These systems, built on large language models, also replicate the cultural biases they have absorbed from being trained on vast troves of online writing.

The idea of a mass hack caught the attention of U.S. government officials in March at the South by Southwest festival in Austin, Texas, where Sven Cattell, founder of DEF CON’s long-running AI Village, and Austin Carson, president of the responsible-AI nonprofit SeedAI, helped lead a workshop inviting community college students to hack an AI model, the Associated Press reported. This year’s event will take place on a considerably larger scale, and it will be the first to tackle the large language models that have drawn a surge of public interest and investment since the release of ChatGPT late last year.

“Our basic view is that AI systems will need third-party assessments, both before deployment and after deployment”

Some of the details are still being negotiated, but the companies that have agreed to provide their models for testing include OpenAI, Google, chipmaker Nvidia, and the startups Anthropic, Hugging Face, and Stability AI. “As these foundation models become more and more widespread, it’s really critical that we do everything we can to ensure their safety,” Scale AI CEO Alexandr Wang told the Associated Press. “You can imagine somebody on one side of the world asking it some very sensitive or detailed questions, including some of their personal information. You don’t want any of that information leaking to any other user.”

Jack Clark, co-founder of Anthropic, said he hopes the DEF CON event will mark the beginning of a deeper commitment from AI developers to assess the safety of the systems they are building. “Our basic view is that AI systems will need third-party assessments, both before deployment and after deployment. Red-teaming is one way that you can do that,” Clark told the Associated Press. “We need to get practice at figuring out how to do this. It hasn’t really been done before.”
