The Biden Administration Issues National Security Memorandum on AI – AI: The Washington Report

  • On October 24, the Biden administration issued the first-ever National Security Memorandum (NSM) on AI.
  • Pursuant to the Biden administration’s Executive Order on AI, the NSM outlines concrete actions for federal agencies to take to ensure that the US government leads the way in global AI development, harnesses AI for its national security mission, and promotes global consensus around AI.
  • Most significantly, the NSM directs the publication of the Framework to Advance AI Governance and Risk Management in National Security, which the White House also published. Among other guidance, the framework prohibits high-impact AI use cases by government agencies.
  • While Congress is considering legislation to establish similar requirements for AI use by federal agencies, it is unlikely that such legislation will pass within the remaining five weeks of this session.  
     

  
On October 24, 2024, President Biden issued the first-ever National Security Memorandum (NSM) on AI. The NSM aims to galvanize the “federal government’s adoption of AI to advance the national security mission,” while also ensuring that such adoption “reflects democratic values and protects human rights.” To achieve these goals, the NSM outlines various actions that must be taken by government agencies in the short term and long term.

The memorandum was issued pursuant to the Biden administration’s Executive Order on AI (AI EO) and is one of the actions the order mandated to be completed within one year of its signing. Next week we will overview all of the one-year actions that were required to be announced by October 30, 2024.

The NSM focuses on three main objectives:

  1. To position the US as a leader in the world’s development of safe, secure, and trustworthy AI
  2. To leverage cutting-edge AI technologies to support the national security objectives of the US government
  3. To promote international agreement and consensus on AI  
     
     

1. To position the US as a leader in the world’s development of safe, secure, and trustworthy AI.

  1. Chip Development. The NSM aims to “improve the security and diversity of chip supply chains,” as “developing advanced AI systems requires large volumes of advanced chips.”
  2. Intelligence about Competitors. The NSM makes “collection on our competitors’ operations against our AI sector a top-tier intelligence priority.” It also instructs relevant US government agencies to supply AI developers with timely cybersecurity and counterintelligence information to protect their innovations.
  3. The AI Safety Institute. The NSM designates the AI Safety Institute (AISI) as the “US industry’s primary port of contact in the US government.” The AISI should facilitate “voluntary pre- and post-public deployment testing for safety, security, and trustworthiness of frontier AI models.”
  4. Resources for Diverse AI Developers. The memorandum directs the National AI Research Resource to “distribute computational resources, data, and other critical assets for AI development to a diverse array of actors that otherwise would lack access to such capabilities — such as universities, nonprofits, and independent researchers.”
  5. Assessment of Competitive Advantage. The National Economic Council is tasked with coordinating “an economic assessment of the relative competitive advantage of the United States private sector AI ecosystem.” 
     

2. To leverage cutting-edge AI technologies to support the national security objectives of the US government.

  1. AI Governance and Risk Management. The NSM “provides the first-ever guidance for AI governance and risk management for use in national security missions,” focusing on upholding human rights and civil rights and keeping pace with the rapid rate of technological change.
  2. Framework to Advance Governance and Risk Management in National Security. The NSM calls for the establishment of a Framework to Advance AI Governance and Risk Management in National Security (the Framework), which was released along with the NSM. The Framework puts forth mechanisms for risk management, evaluations, accountability, and transparency. It also tasks the government with identifying and prohibiting “high-impact AI use cases based on risks they pose to national security, international norms,” and civil rights. It explicitly prohibits agencies from using AI to assign emotions, evaluate trustworthiness, or infer race.
  3. Streamlined Procurement Practices. The NSM also directs various agencies to propose “streamlined procurement practices and ways to ease collaboration with non-traditional vendors.” 
     

3. To promote international agreement and consensus on AI.

  1. Strategy for Advancing AI Governance Norms. The NSM directs the Department of State, in coordination with other agencies, to “produce a strategy for the advancement of international AI governance norms in line with safe, secure, and trustworthy AI, and democratic values,” as well as engagement with competitors and international organizations.
  2. AI National Security Coordination Group. The NSM also establishes the AI National Security Coordination Group with the Chief AI Officers of the Department of State, Department of Defense, Department of Justice, and a number of other agencies. The group should focus on “ways to harmonize policies relating to the development, accreditation, acquisition, use, and evaluation of AI.”  
     

Legislation to Codify Requirements for High-Impact AI Use Cases

Congress is currently considering bipartisan legislation that would codify requirements, including those contained in the NSM and the Framework, for AI use by federal agencies. As we covered, the PREPARED for AI Act, introduced in June 2024 by Senators Gary Peters (D-MI) and Thom Tillis (R-NC), would establish an AI risk classification system and ban the use of AI by federal agencies to assign emotions, evaluate trustworthiness, or infer race.

However, as we covered last week, Congress has very little time left to pass the PREPARED for AI Act, which has been inactive since July. As we’ve previously covered, while Senator Schumer (D-NY) has indicated he plans to incorporate AI bills into must-pass, end-of-the-year legislation, his main focus has been on incorporating regulations for AI-generated election deepfakes into such legislation. That focus further reduces the likelihood that legislation governing federal AI use will be enacted before this Congress ends.

We will continue to monitor, analyze, and issue reports on these developments. Please feel free to contact us if you have questions as to current practices or how to proceed.

Authors

Bruce D. Sokler

Member / Co-chair, Antitrust Practice

Bruce D. Sokler is a Mintz antitrust attorney. His antitrust experience includes litigation, class actions, government merger reviews and investigations, and cartel-related issues. Bruce focuses on the health care, communications, and retail industries, from start-ups to Fortune 100 companies.

Alexander Hecht

ML Strategies - Executive Vice President & Director of Operations

Alexander Hecht is Executive Vice President & Director of Operations of ML Strategies, Washington, DC. He's an attorney with over a decade of senior-level experience in Congress and trade associations. Alex helps clients with regulatory and legislative issues, including health care and technology.

Christian Tamotsu Fjeld

Senior Vice President

Christian Tamotsu Fjeld is a Senior Vice President of ML Strategies in the firm’s Washington, DC office. He assists a variety of clients in their interactions with the federal government.

Matthew Tikhonovsky

Matthew is a Mintz Senior Project Analyst based in Washington, DC.