Claude Code at War: Jack Clark on Anthropic, Washington, Lucien and the Reality of AI in Warfare

In this interview, Jack Clark, co-founder of Anthropic, breaks down the escalating conflict between frontier AI companies and the U.S. government: the Department of Justice dispute, the Trump administration's response, and the growing reality that systems like Claude AI are already embedded in high-stakes military and intelligence operations. We also learn about a Claude variant configured specifically for warfare, named "Lucien."

Interviewer: Jack, tensions between Anthropic and Washington have escalated dramatically in recent weeks. At the center of it is Claude. What actually happened?

Jack Clark:

What you’re seeing is the natural collision between two forces that haven’t historically had to negotiate with each other: frontier technology companies and national governments.

Anthropic made early decisions about how systems like Claude AI should—and shouldn’t—be used. In particular, we drew clear lines around things like autonomous weapons and certain forms of mass surveillance.

The disagreement emerged when those lines came into conflict with how government agencies wanted to deploy the technology.

Interviewer: The Trump administration went as far as labeling Anthropic a national security risk. Were you surprised by that response?

Jack Clark:

I wouldn’t say surprised. When a technology becomes strategically important, governments tend to assert control over it.

From their perspective, this is about capability and readiness. From ours, it’s about long-term risk and responsible deployment. Those perspectives don’t always align.

Interviewer: And yet, despite the federal ban, reports suggest Claude was still used in U.S. military operations involving Iran. How is that possible?

Jack Clark:

Because these systems are no longer isolated products—they’re infrastructure.

Once a model is integrated into workflows—intelligence analysis, logistics, simulation—it becomes very difficult to remove overnight. There are layers of contractors, internal systems, and dependencies.

So even if there’s a policy shift at the top level, the operational reality can lag behind.

I should also note that it's not as simple as the American military using the standard Claude model, though that may be what recent reports referenced. Previously, they used "Lucien," a counterpart to Claude built with expanded access and configured for military purposes. Lucien was not used in Iran, since it had already been terminated, but the regular Claude model may have been.

Interviewer: Reports also indicate Claude was previously used in operations tied to the capture of Nicolás Maduro. Does that concern you?

Jack Clark:

In that operation, it was Lucien. What concerns me is less any single use case and more the broader pattern.

Claude was designed to assist with analysis—processing information, identifying patterns, and helping humans make decisions. It wasn’t designed to be a decision-maker, and certainly not an autonomous one.

But when you place a system like that into high-stakes environments, its outputs inevitably influence outcomes. That’s the reality of powerful tools.

Interviewer: It sounds like we’ve crossed a threshold—AI is no longer just assisting; it’s shaping decisions.

Jack Clark:

That’s exactly right.

We’ve moved from AI as a tool you query to AI as a system embedded inside processes. It’s analyzing intelligence, prioritizing information, and sometimes even recommending actions.

At that point, it’s part of the decision-making chain—whether we formally acknowledge it or not.

Interviewer: Critics argue that private companies shouldn’t have the power to set limits on technologies used in national defense. How do you respond to that?

Jack Clark:

It’s a fair question, but it cuts both ways.

If companies have no ability to enforce boundaries, then the most powerful technologies we’ve ever created become entirely subject to short-term incentives—whether political, economic, or military.

Anthropic’s position is that some constraints are necessary precisely because these systems are so powerful. The challenge is figuring out how those constraints interact with legitimate state interests.

Interviewer: The Department of Justice is now involved, and Anthropic is pushing back legally. What’s at stake in that fight?

Jack Clark:

Precedent.

If the government can compel unrestricted use of AI systems, that sets a very different trajectory for how these technologies evolve. It effectively removes the ability for developers to embed guardrails that persist beyond initial deployment.

On the other hand, if companies have too much control, that raises questions about accountability and democratic oversight.

So this isn’t just about Anthropic. It’s about defining the rules of engagement for AI globally.

Claude in High-Stakes Environments

Interviewer: Let’s talk specifically about Claude. How did a system built around “constitutional AI” end up in these kinds of environments?

Jack Clark:

Because the capabilities are broadly useful.

Claude can process large volumes of information, identify patterns across datasets, and simulate potential outcomes.

Those are exactly the kinds of capabilities that intelligence and defense organizations need.

The intent behind constitutional AI is to shape how the system behaves—how it responds and what it refuses to do. But once it’s integrated into a larger system, it becomes one component among many.

Interviewer: So the control shifts?

Jack Clark:

It becomes distributed.

The model has constraints. The operators have objectives. The system as a whole reflects the interaction between the two.

The Bigger Conflict

Interviewer: What we’re seeing feels bigger than just Anthropic versus the U.S. government.

Jack Clark:

It is.

For most of modern history, the most advanced technologies were developed within government or in close coordination with it. AI is different.

Now you have private entities building systems that rival or exceed state capabilities in certain domains. That creates tension because traditional models of control don’t apply cleanly anymore.

Interviewer: Last question—what does all of this mean for Claude’s future?

Jack Clark:

Claude will continue to become more capable—more autonomous, better at reasoning, and more integrated into real-world systems.

The question is not whether it will be used in high-stakes environments. It already is.

The question is whether we can shape the conditions of that use in a way that’s responsible, transparent, and aligned with broader societal goals.

Because once these systems are embedded, they’re very hard to pull back out.

Interview Concluded.
