The Paramount Task of Setting Standards for AI Safety: An Interview with Elham Tabassi
In the rapidly evolving landscape of artificial intelligence (AI), the need for safe, secure, trustworthy, and socially responsible systems is paramount. Unlike the advent of nuclear fission, this technological revolution has been driven largely by the private sector, which poses unique challenges for regulation and standardization. The Biden administration has tasked the National Institute of Standards and Technology (NIST) with defining parameters and creating standards for AI safety, a monumental undertaking given the technology’s complexity and novelty.
Interview with Elham Tabassi, NIST’s Chief AI Advisor
In an exclusive interview, Elham Tabassi, NIST’s chief AI advisor, sheds light on the agency’s efforts to address the challenges of AI safety and trustworthiness.
The Importance of a Shared Lexicon
Tabassi emphasizes the urgent need for a common vocabulary in the AI field. Because the field is interdisciplinary, different stakeholders often use the same terms to mean different things, leading to misunderstandings and miscommunication. Establishing a shared lexicon is crucial for effective collaboration and progress.
The Role of Diverse Expertise
Tabassi stresses the importance of input from various disciplines, including computer science, engineering, law, psychology, and philosophy, in developing AI standards. She highlights the socio-technical nature of AI systems, emphasizing the need to test them in real-world conditions to understand risks and impacts.
NIST’s Limited Resources
Despite the task’s magnitude, NIST operates with a relatively small team and limited budget. Tabassi acknowledges the challenges posed by these constraints but remains optimistic, citing NIST’s history of impactful engagement with broad communities.
The Ambitious Deadline
The executive order issued by President Biden directs NIST to produce a set of guidelines and tools for ensuring AI safety and trustworthiness by July 2024, an extraordinarily tight deadline. Tabassi acknowledges how ambitious the timeline is but remains committed to meeting it, drawing on NIST’s expertise and dedication.
Transparency and Openness
Tabassi emphasizes the importance of transparency and openness in NIST’s AI safety work. She points to the public working group formed in June 2023 to develop guidelines for authenticating synthetic content as an example of the agency’s commitment to engaging with stakeholders.
The Role of the AI Safety Institute
The executive order calls for an AI safety institute, a move that has raised concerns about industry influence and a potential lack of transparency. Tabassi clarifies that NIST is exploring options for a competitive process to support cooperative research opportunities while preserving scientific independence and neutrality.
Mandatory Obligations for Developers
While NIST’s AI Risk Management Framework is voluntary, the executive order imposes certain obligations on developers, including submitting large language models for government red-teaming once they exceed a specified size and computing-power threshold. NIST’s role is to advance the measurement science and standards needed for such evaluations, rather than to conduct the red-teaming itself.
Addressing Risk Assessment and Identification
Tabassi highlights the importance of risk assessment and identification throughout an AI system’s lifecycle, from design and development to deployment and monitoring. She emphasizes the need for regular evaluations and for weighing tradeoffs among convenience, security, bias, and privacy, depending on the context of use.
Conclusion
The task of setting standards for AI safety and trustworthiness is daunting, but NIST, with Tabassi helping to lead its AI efforts, is committed to meeting the challenge. Through collaboration, transparency, and a comprehensive approach, the agency aims to ensure that AI systems are safe, secure, trustworthy, and socially responsible, shaping a future where technology benefits all.