On May 1, the Department of War announced that it had entered into agreements with SpaceX, OpenAI, Amazon Web Services, Google, Oracle, NVIDIA, Reflection, and Microsoft to supply their products for use on classified networks for “lawful operational use.”
In its announcement about classified AI developments, the Department also noted that GenAI.mil, the suite of AI tools it began to bring online for its three million civilian and military employees in December, is now used by 1.3 million personnel.
GenAI.mil came under scrutiny because of confusion during its rollout, with reactions from employees running the gamut from enthusiasm to concern and consternation. Despite those earlier concerns, the Department seems eager to move forward with normalizing AI use throughout the workforce.
To date, five of six military branches have made GenAI.mil their preferred AI system, with only the U.S. Coast Guard continuing to prioritize its own Ask Hamilton tool. By late April, DefenseScoop reported that the Department had already used GenAI.mil to build out 100,000 agents, AI systems that can autonomously make plans and take actions.
Initial Delivery
GenAI.mil’s platforms roared into development in July, when Google, xAI, OpenAI, and Anthropic received $200 million contracts to adapt their commercial AI capabilities for the Department of War.
The first tool delivered to employees was Google Cloud’s Gemini for Government. In a press release, the War Department explained that the addition “empowers agentic workflows, unleashes experimentation, and ushers in AI-driven culture change that will dominate the digital battlefield for years to come.” Secretary of War Pete Hegseth told employees that “AI should be in your battle rhythm every day. It should be your teammate.”
In the first GenAI.mil press release, the Department stated that there would be “no-cost training” for employees on using the new technology suite. As of December, however, DefenseScoop spoke with users who said that no training or guidance had been provided on how to use the new systems.
Users also said that the introduction of the new AI systems caused worry. One senior Army official explained that the abrupt release of GenAI.mil, announced via pop-up, led some employees to question whether “our computers had been hacked and if this was a legitimate new software.”
With limited explanation of how the technology would be used and what its limitations would be, speculation ran wild. Some employees told DefenseScoop that they were worried about whether AI would be used in decision-making scenarios, explaining that “AI shouldn’t be the only thing that’s making the decisions on the strategic or operational or tactical level.” Others were concerned about security, wondering whether information that users place into the systems might “inform the commercial models housed in GenAI.mil.”
On the other side of the argument, Navy veteran and former Pentagon official Emilia Probasco told Fox News that the availability of Gemini at the Department of War was likely a security improvement. Probasco explained that parts of the workforce were “probably” utilizing AI on their home computers previously. Now, “they’ve got a more secure environment where they can experiment with these tools,” Probasco said.
There has been particular concern about the addition of xAI for Government, announced in December. In the recent past, xAI’s Grok chatbot has praised Hitler and gone on antisemitic tirades, and it has been accused of a “perceived ideological bias.”
In October, Tech Policy Press said that for the wider U.S. government to utilize Grok was a violation of the Trump administration’s AI directive, which mandates that AI cleared for government use should be “truth-seeking, accurate, and ideologically neutral.” The author suggested that “banning ‘biased AI’ on paper while deploying a biased AI system in practice undermines both the letter and the spirit of federal AI policy.”
Comparability with legacy systems
GenAI.mil products were originally cleared for sensitive Controlled Unclassified Information (CUI) and Impact Level 5 (IL5) content. The latest classified programs announced in May will be approved for use on IL6 and IL7 networks.
A number of AI platforms at various classification levels were already available to War Department employees prior to GenAI.mil’s rollout. Some users complained that prior systems were more advanced than the GenAI.mil products.
One defense official told DefenseScoop that the generative AI system NIPRGPT had greater capability than early GenAI.mil systems.
Though NIPRGPT was only able to process unclassified information, about 700,000 individuals across the Pentagon used the program. The Department of the Army had blocked NIPRGPT in April, however, based on concerns about “cybersecurity and data governance.” NIPRGPT was decommissioned on Dec. 31.
The same defense official also told DefenseScoop that the AI tools Ask Sage and CamoGPT had “more security and applications” than the GenAI.mil suite.
CamoGPT cannot access the web, but it can process classified information on the Secret-level SIPRNet system. As of January, it was reported that CamoGPT would remain in use with the arrival of new technologies.
Worries Over Mass Surveillance, Autonomous Targeting
When Anthropic agreed to supply its technology to the Department of War, it stated that it would not allow those services to be used to support autonomous, lethal warfare or to conduct mass surveillance of Americans. After Anthropic declined to drop these red lines, the Department ended its Anthropic contracts and designated the company “a Supply-Chain Risk to National Security,” forbidding companies that contract with the federal government from working with Anthropic.
In turn, Anthropic sued the federal government. Initially, Anthropic was granted a preliminary injunction in one case to keep the Trump administration from enforcing its ban. In April, however, a federal appeals court denied Anthropic’s request to block the Department’s designation.
Though Google employees expressed similar concerns over the uses of their AI systems, Google’s contract to provide classified models to the Pentagon explicitly states that “the parties agree that the AI System is not intended for, and should not be used for, domestic mass surveillance or autonomous weapons (including target selection) without appropriate human oversight and control.”
Prognosis for War Department’s AI future
Operation Epic Fury has been a test case for the Department’s AI tools. The head of U.S. Central Command, Adm. Bradley Cooper, said in March that “eliminating Iran’s ability to threaten Americans and our friends” was enabled “through a combination of lethality, precision, and rapid innovation.” He noted that advanced AI tools “help us sift through vast amounts of data in seconds” so that humans can determine which targets to engage.
As additional tools come online in the GenAI.mil system, AI is especially likely to become ubiquitous in handling routine taskings, freeing up military and civilian personnel for the work of maintaining the U.S. military’s supremacy. In a world of increasing instability, pursuing every available advantage will have lasting impacts for the special operations personnel and conventional forces charged with keeping America safe.