
New Government Consortium Focuses on AI Safety

Feb. 21, 2024
The Biden-Harris administration unveils a new initiative focused on artificial intelligence.


As artificial intelligence (AI) continues to make its way into our daily lives, there’s been growing concern over the long-term impacts of this advanced technology. From job displacement to data security to a lack of “control” over the technology and its capabilities, the list of concerns keeps expanding as AI finds its way into everything from task automation to personalized marketing campaigns to fraud detection. 

This month, the Biden-Harris administration announced a new initiative focused on AI safety. The U.S. AI Safety Institute Consortium (AISIC) will be made up of AI creators and users; academics; government and industry researchers; and civil society organizations. Collectively, these entities are focused on the support, development and deployment of safe, trustworthy AI applications. 

Working Toward Common Goals

The AISIC will be housed under the new U.S. AI Safety Institute (USAISI), which the National Institute of Standards and Technology (NIST) recently formed. Some of the new consortium’s initial goals include:

  • Establish a knowledge and data sharing space for AI stakeholders.
  • Engage in collaborative and interdisciplinary research and development guided by a shared research plan.
  • Prioritize research and evaluation requirements and approaches that may allow for a more complete and effective understanding of AI’s impacts on society and the national economy.
  • Identify and recommend approaches to facilitate the cooperative development and transfer of technology and data between and among consortium members.
  • Identify mechanisms to streamline input from federal agencies on topics within their direct purviews.
  • Enable assessment and evaluation of test systems and prototypes to inform future AI measurement efforts.

According to NIST, the consortium’s work will be “open and transparent” and will provide a hub for interested parties to work together in building and maturing a measurement science for trustworthy and responsible artificial intelligence. 

“To keep pace with AI, we have to move fast and make sure everyone – from the government to the private sector to academia – is rowing in the same direction. Thanks to President Biden’s landmark Executive Order, the AI Safety Consortium provides a critical forum for all of us to work together to seize the promise and manage the risks posed by AI,” said Bruce Reed, White House deputy chief of staff, in a U.S. Department of Commerce press release.

Who’s On Board with the Initiative?

The consortium includes more than 200 member companies and organizations that are on the frontlines of creating and using advanced AI systems and hardware. Members also include large companies and startups; civil society and academic teams that are building the foundational understanding of how AI can and will transform society; and representatives of professions with deep engagement in AI’s use today. 

According to the U.S. Department of Commerce, AISIC represents the “largest collection of test and evaluation teams established to date,” and will focus on establishing the foundations for a new measurement science in AI safety. The consortium also includes state and local governments, as well as non-profits, and will work with organizations from like-minded nations that have a key role to play in developing interoperable and effective tools for safety worldwide.

The AISIC’s 200+ participants include major tech players like OpenAI, Google, Microsoft, Apple, Amazon, Meta, NVIDIA, Adobe and Salesforce, Mashable reports. The list also includes stakeholders from academia, including institutes from MIT, Stanford and Cornell, plus think tanks and industry researchers like the Center for AI Safety, the Institute of Electrical and Electronics Engineers (IEEE) and the Responsible AI Institute.

“The AI consortium is an outcome of Biden's sweeping executive order which seeks to tame the wild west of AI development,” the publication adds. “AI has been deemed a major risk for national security, privacy and surveillance, election misinformation, and job security to name a few.”


About the Author

Bridget McCrea | Contributing Writer | Supply Chain Connect

Bridget McCrea is a freelance writer who covers business and technology for various publications.