
Biden’s National Security Memo Calls for Heavy Lifting

WASHINGTON – President Joe Biden’s directive to all U.S. national security agencies to integrate artificial intelligence technology into their systems sets ambitious goals in a volatile political environment.

That is the early assessment from technology experts after Biden on Oct. 24 directed a wide range of organizations to use AI responsibly even as the technology advances rapidly.

“It’s like trying to assemble a plane while you’re still flying it,” said Josh Wallin, a fellow in the defense program at the Center for a New American Security. “It’s a heavy lift. This is a new area that a lot of agencies are having to look at that they may not have looked at in the past, but I will also say that it is certainly a critical area.”

Federal agencies will have to quickly hire experts, get them security clearances and begin working on Biden’s challenges as private companies pour money and talent into developing their artificial intelligence models, Wallin said.

The memo, which stems from last year’s executive order, asks the Pentagon; spy agencies; the departments of Justice, Homeland Security, Commerce, Energy, and Health and Human Services; and others to adopt artificial intelligence technologies. The directive emphasizes the importance of “protecting human rights, civil rights, civil liberties, privacy, and security in AI-enabled national security activities.”

Federal agencies face deadlines, some as short as 30 days, to complete the tasks, timelines that Wallin and others said reflect the rapid pace of technological change.

The memo asks the National Institute of Standards and Technology’s AI Safety Institute by April to “conduct voluntary preliminary testing of at least two frontier AI models prior to their public deployment or release to assess capabilities that may pose a threat to national security.”

Frontier models are large artificial intelligence models, such as ChatGPT, that can recognize speech and generate human-like text.

The testing is intended to ensure that the models do not inadvertently allow attackers and adversaries to launch offensive cyber operations or “accelerate the development of biological and/or chemical weapons, autonomously carry out malicious behavior, or automate the development and deployment of other models.”

But the memo also comes with an important caveat: the deadline for starting testing of AI models will be “subject to collaboration with the private sector.”

Meeting the testing deadline is achievable, said John Miller, senior vice president of policy at ITI, a trade group that represents leading technology companies including Google, IBM, Intel, Meta and others.

Since the institute is “already working with model developers to test and evaluate them, it is possible that companies could complete or at least begin such testing within 180 days,” Miller said in an email. But the memo also asks the AI Safety Institute to issue guidance on model testing within 180 days, and so “it seems reasonable to ask how exactly these two schedules will sync up,” he said.

By February, the National Security Agency “must develop capabilities to conduct rapid, systematic, classified testing of the ability of AI models to detect, generate, and/or aggravate offensive cyber threats. Such tests should evaluate the extent to which artificial intelligence systems, if misused, can accelerate offensive cyber operations,” the memo says.

“Dangerous” order

With the presidential election only a week away, the directive’s fate is uncertain.

The Republican Party platform says that if elected, Donald Trump will reverse Biden’s “dangerous executive order that stifles innovation in artificial intelligence and imposes radical leftist ideas on the development of this technology.” Instead, the platform says, Republicans support “the development of artificial intelligence based on freedom of speech and human flourishing.”

Because Biden’s memo is the result of an executive order, it’s likely that if Trump wins, “they’ll just pull the plug” and go their own way on AI, Daniel Castro, vice president of the Information Technology and Innovation Foundation, said in an interview.

The leadership of the federal agencies tasked with carrying out the directive would also change significantly under Trump: roughly 4,000 federal government jobs change hands with a new administration.

But people tracking the issue note that there is broad bipartisan consensus that the adoption of artificial intelligence technology for national security purposes is too important to be derailed by partisan bickering.

The goals and timelines in the memo reflect in-depth discussions between the agencies over several months, said Michael Horowitz, a University of Pennsylvania professor who was until recently deputy assistant secretary of defense whose portfolio included the military use of AI and advanced technologies.

“I think implementation (of the memorandum), regardless of who wins the election, will be absolutely critical,” Horowitz said in an interview.

Wallin noted that the memo highlights the need for U.S. agencies to understand the risks posed by advanced generative artificial intelligence models, including those involving chemical, biological and nuclear weapons. On such national security threats there is bipartisan agreement, he said in an interview.

Senate Intelligence Chairman Mark Warner, D-Va., said in a statement that he supports Biden’s memo but that the administration must work with Congress “in the coming months to develop a clearer strategy for engaging the private sector in addressing national security risks aimed at AI systems throughout the supply chain.”

Immigration policy

The memo recognizes the long-term need to attract talent from around the world to the United States in fields such as semiconductor design, an issue that could become entangled in broader immigration debates. The departments of Defense, State, and Homeland Security are tasked with using available legal authorities to bring such experts into the country.

“I think there is widespread recognition of the unique importance of STEM talent in driving U.S. technology leadership,” Horowitz said. “And AI is no exception to this rule.”

The memorandum also asks the State Department, the U.S. Mission to the United Nations, and the U.S. Agency for International Development to develop a strategy within four months to promote international governance standards for the use of AI for national security purposes.

According to Horowitz, the United States has already taken several steps to promote international cooperation in the field of artificial intelligence for both civilian and military purposes. He cited the example of the Political Declaration on the Responsible Military Use of Artificial Intelligence and Autonomy, which was supported by more than 50 countries.

“This demonstrates how the United States is already leading the way by setting strong standards for responsible behavior,” Horowitz said.

The push for responsible use of technology needs to be seen in the context of a broader global debate about whether countries are moving toward authoritarian systems or toward democracy and respect for human rights, Castro said. He noted that China is increasing investments in Africa.

“If we want African countries to join the US and Europe in AI policy instead of going to China,” he said, “what are we actually doing to get them on our side?”

___

© 2024 CQ-Roll Call, Inc., All rights reserved. Visit cqrollcall.com. Distributed by Tribune Content Agency, LLC.
