Center for Excellence in Government Cybersecurity Risk Management and Resilience

Four Takeaways from the US National Security Memorandum and Framework

Amarda Shehu, Vice President and Chief AI Officer, Professor of Computer Science

Jesse Kirkpatrick, Research Associate Professor, Co-director, Mason Autonomy and Robotics Center

J.P. Singh, Professor, Schar School of Policy and Government, Director of the AI Strategies Team, Director of the Center for AI Innovation for Business

George Mason University

On October 24, 2024, President Biden released the long-anticipated U.S. National Security Memorandum on AI (the memo) alongside a complementary Framework to Advance AI Governance and Risk Management in National Security (the framework). These documents fulfill the commitments outlined in sections 4.8 and 4.2, respectively, of the October 2023 Executive Order 14110. With global AI capabilities advancing swiftly, these releases lay the groundwork for the United States to secure, govern, and ethically advance AI within the national security ecosystem.

Yet even as these documents capture the nation’s ambitions for AI, the upcoming presidential election will undoubtedly shape their implementation and evolution. Here, we analyze four major takeaways from the memo and framework to understand their immediate and long-term implications for AI governance, responsible AI (RAI), and the role of the U.S. in global AI policy.

Responsible AI as a Cornerstone of National Security Policy

The memo’s emphasis on responsible AI is striking, positioning it as a central theme for national security. In addition to advancing foundational RAI principles, the memo confronts the ongoing challenge of translating those principles into practice. While government, industry, and academia continue to debate the specifics, the memo’s focus on responsibility signals a serious commitment to resources, oversight, and governance structures. We see here an attempt not only to encourage responsible development, adoption, and governance of AI but also to preserve a human chain of command for AI-enabled decision-making. What remains uncertain, however, is precisely how these RAI principles will transition from aspirational guidelines to rigorous, measurable practices.

The AI Safety Institute (AISI) as a Central Player

The AISI, an initiative housed in the Department of Commerce, is a major focal point of the memo. Mentioned over twenty times, AISI has been tasked with implementing standards for RAI and establishing metrics to evaluate frontier AI models in national security contexts. AISI’s work over the past year has concentrated on setting these standards, and it has already gathered experts from academia, industry, and government to define RAI frameworks and explore testing mechanisms for complex AI models. Yet a fundamental challenge persists: moving from principles to benchmarks. The need for robust RAI test beds is particularly pressing; we currently lack environments that can adequately assess the brittleness of AI systems, especially when they are deployed in unpredictable or adversarial settings. Despite these challenges, the memo calls for preliminary testing of at least two frontier AI models within the next 180 days, with voluntary cooperation from the private sector, hinting at the delicate balance between government oversight and industry autonomy. By describing the tests as “voluntary,” the memo underscores a reliance on cooperation rather than mandate, which brings us to our next point.

Framework Built on Voluntary Participation—But Will It Suffice?

Despite its length, the memo relies primarily on voluntary commitments to cultivate a culture of RAI across the public and private sectors. This is most apparent in its approach to engaging private sector AI developers and academic institutions. While AISI is positioned to lead safety testing for frontier AI models, the memo frames participation as “voluntary,” inviting AI developers to contribute but imposing no enforceable requirement.

This lack of enforceable measures may lead to uneven adoption across sectors, as businesses weigh the benefits of RAI participation against competitive pressures. If participation remains purely voluntary, appropriate carrots and sticks will need to be leveraged to secure broad buy-in.

Global Governance and a Democratic Approach to AI Regulation

Notably, the memo links the U.S. domestic agenda on AI with international governance. This alignment suggests that the United States will continue to leverage its AI capabilities as a strategic tool within the global policy landscape. The United States’ commitment to democratic governance mechanisms contrasts sharply with authoritarian approaches, where rights protections are not prioritized. By focusing on international AI standards that preserve civil liberties, civil rights, and human rights, the memo positions the U.S. as a counterweight to autocratic systems, particularly China’s. Although the memo avoids naming specific nations, its emphasis on working with organizations like the OECD hints at a strategic intent to foster an alliance of democracies around AI principles aligned with democratic norms and values.

Conclusion

The U.S. National Security Memorandum on AI sets a bold agenda: highlighting RAI as a cornerstone of national security, fostering voluntary collaboration, and promoting democratic governance. By engaging both domestic and international stakeholders, it strives to place the U.S. at the forefront of global AI policy. The memo’s reliance on voluntary measures reflects a calculated risk, aiming to preserve innovation while aligning with democratic values. Significant challenges lie ahead, however, particularly in translating RAI principles into actionable benchmarks, fostering academic and industry cooperation, and ensuring that U.S.-led standards resonate globally.