The Biden administration begins drafting important AI guidelines


On Tuesday, the Biden administration took the first step toward developing key guidelines and recommendations for the safe deployment of generative artificial intelligence and for how to test and safeguard such systems. The Commerce Department’s National Institute of Standards and Technology (NIST) said on Feb. 2 that it was seeking public input on testing critical to guaranteeing the safety of AI systems. According to Commerce Secretary Gina Raimondo, the effort was prompted by President Joe Biden’s executive order on AI and aims to develop “industry standards around artificial intelligence safety, security, and trust that will enable America to continue leading the world in the responsible development and use of this rapidly evolving technology.”

The agency is creating guidelines for evaluating AI, encouraging the development of standards, and providing testing environments for AI systems. The request seeks feedback from AI companies and the general public on managing the risks of generative AI and on mitigating the hazards of AI-generated misinformation. Generative AI, which can produce text, images, and videos in response to open-ended prompts, has drawn both enthusiasm and concern in recent months: the technology could make some industries obsolete, upend elections and, some fear, even overpower people.

NIST is developing testing recommendations, including where “red-teaming” would be most advantageous for AI risk assessment and management

The directive issued by Biden instructed agencies to establish guidelines for this testing and to address related chemical, biological, radiological, nuclear, and cybersecurity risks. NIST is developing recommendations for that testing, including where “red-teaming” would be most useful for AI risk assessment and management, as well as best practices for conducting it. External red-teaming has been used in cybersecurity for years to uncover new threats; the term refers to US Cold War simulations in which the adversary was called the “red team.”

During a major cybersecurity conference in August, AI Village, SeedAI, and Humane Intelligence hosted the first-ever US public red-teaming assessment event. Thousands of participants tried to see whether they “could make the systems produce undesirable outputs or otherwise fail, with the goal of better understanding the risks that these systems present,” according to the White House. The event “demonstrated how external red-teaming can be an effective tool to identify novel AI risks,” it added.
