The U.S. Commerce Department's National Institute of Standards and Technology (NIST) announced on Tuesday that it is taking the initial step toward developing key standards and guidance for the safe deployment of generative artificial intelligence (AI). NIST is seeking public input by February 2 for conducting crucial testing to ensure the safety of AI systems.
This move follows President Joe Biden's October executive order on AI, and the effort is aimed at establishing "industry standards around AI safety, security, and trust." The agency is working on guidelines for evaluating AI, facilitating the development of standards, and providing testing environments for AI systems.
Generative AI, which can create text, photos, and videos in response to open-ended prompts, has generated both excitement and concerns about potential job displacement, election interference, and the potential for AI systems to overpower humans with catastrophic effects.
President Biden's executive order directed agencies to set standards for testing AI and address related risks, including those in chemical, biological, radiological, nuclear, and cybersecurity domains. NIST is specifically focusing on guidelines for testing, incorporating "red-teaming" to assess and manage AI risks effectively. Red-teaming involves external simulations to identify new risks, and it has been used for years in cybersecurity.
In August, the first-ever U.S. public assessment "red-teaming" event was held during a major cybersecurity conference. Thousands of participants aimed to identify potential issues and risks in AI systems, showcasing the effectiveness of external red-teaming as a tool to understand and manage novel AI risks.
|