Anthropic v. Pentagon reveals enduring rift between tech, national security

The Pentagon told Anthropic this week to open its AI technology for unrestricted military use by Friday or risk losing its government contract. 

Sarah Kreps, the John L. Wetherill Professor of government in the College of Arts and Sciences and director of Cornell’s Tech Policy Institute, is an expert on the intersection of international politics, technology, and national security; she comments on the tensions. 

Kreps says: “It’s striking that Anthropic appears caught off guard by how its model is being used. We’ve seen this pattern repeatedly with dual-use technologies. Engineers build tools to solve technical problems. Once those tools scale, governments and societies deploy them in ways the creators did not fully anticipate. Social media, encryption, nuclear research — each followed that trajectory. AI companies have spent years discussing risk and misuse, so there is some irony in seeing the same dynamic reappear here.

“The deeper issue is dual use. AI models are designed for broad civilian markets, but military and national security applications operate under a very different logic. Governments often develop bespoke systems for defense precisely because requirements around control, reliability, and authorization differ from commercial norms. But when civilian platforms are integrated into classified environments, they stop being ordinary software products. They become strategic assets. That shift changes expectations around access, safeguards, and control.

“It’s not surprising these two logics collided. What we’re seeing is less an anomaly than a recurring tension between commercial innovation and national security imperatives.”

Image: Aerial view of the Pentagon, headquarters of the US Department of Defense. (Touch Of Light/Creative Commons license 4.0)