Today, and not at some distant point in the future, AI algorithms can shape public perception, disrupt markets, and steer missiles. Given the depth and breadth of its capabilities, it’s increasingly clear that the real frontier of artificial intelligence isn’t on the battlefield but in the gray zone. Long before a shot is ever fired, nations are now locked in constant competition in an ill-defined space where influence operations, economic coercion, and cyber campaigns persistently blur the lines between peace and war. Over the last 10 years, many nations have tested how far they can push competitor states without tripping into open conflict, and AI is rapidly becoming the accelerant for these actions.
Defense policymakers and Congress have spent the last decade asking themselves how AI might transform warfighting. But that’s the wrong question, or at least an incomplete one. The more urgent question is how AI will transform the interstitial space that serves as a prelude to actual conflict. This space represents the ambiguous, persistent competition that defines today’s great power rivalry. And even as policymakers in Washington debate the responsible use of AI in targeting or command and control for theoretical future conflicts, they have yet to decide what “responsible use” means as it relates to the influence, disruption, and deterrence campaigns already underway.
So, how should the United States determine what uses of AI are acceptable in this murky, in-between space? Here, political theorist Michael Walzer offers a useful roadmap. In his landmark work, Spheres of Justice, Walzer argued that justice depends on separating distinct “spheres” of human activity (e.g., politics, economics, education) and that each sphere is governed by its own moral logic — a set of guiding principles that should shape activity in that particular sphere. When the boundaries between those spheres are confused, injustice follows.
Applied to AI-enabled statecraft, Walzer’s logic suggests a simple, but powerful, idea: Not all AI-enabled capabilities should be applied to every domain of strategic competition. A capability whose use is appropriate for combat may prove deeply corrosive in peacetime influence operations. The same models that guide missiles toward enemy destroyers should not also be used to target voters, dissidents, or markets. In short, the problem that Walzer’s theory helps highlight isn’t that AI is too powerful for use in the gray zone, but rather that it’s currently too portable. This inherent promiscuity threatens to dissolve the boundaries that keep great power competition tolerable.
Making Walzer’s Logic Work for Washington
So, what to do? Without requiring too great a suspension of disbelief, Walzer’s ethical construct maps surprisingly well onto the instruments of national power. Each domain — diplomatic, military, informational, and economic — already has its own accepted norms and moral baselines that typically define what legitimate competition looks like. But AI, because of its flexibility and myriad use cases, risks erasing those distinctions unless policymakers start drawing some clear lines.
When it comes to the diplomatic instrument of power, the governing “moral logic” is to persuade while still honoring the agency of each party. AI that improves general situational awareness and provides additional insight into a government’s thinking during negotiations fits this ethic by allowing diplomats to engage from a fact-based perspective rather than under coercive pressure. But if AI is used to simulate political actors, target the psychological vulnerabilities of foreign diplomats, or automate coercive diplomacy, the boundary between persuasion and manipulation is erased. It can’t be called diplomacy anymore if the goal is no longer to convince, but to corner.
In the information sphere, inclusive of both intelligence and information operations, the governing norms are truth, accountability, and cognitive autonomy. AI that accelerates intelligence analysis, identifies foreign interference with domestic audiences, or improves threat detection reinforces these norms when coupled with oversight and minimal intrusion. But the same tools can become corrosive, at home and abroad, when they are used to distort reality, automate covert information operations, or shape another country’s political behavior without consent. As the United States both collects and communicates with its allies, partners, and potential adversaries, the test is simple: Does AI deepen truthful understanding or erode it?
Within the military domain, the relevant moral logic, since time immemorial, has been necessity and proportionality. Military AI that enhances force protection, improves targeting accuracy to reduce collateral damage, or bolsters battlefield management to keep fighting focused fits squarely within those principles, provided there is meaningful human accountability. Autonomous or AI-enabled systems are appropriate when there are clearly defined rules of engagement, but they lose moral legitimacy when repurposed to enable domestic control or deliberately widen the scope of a military engagement.
Finally, in the economic sphere, the core logic of free markets is fairness, consent, and transparency. AI in this domain can help enforce sanctions or detect illicit financial networks, supporting legitimate statecraft. But if some of those same models are used to manipulate markets as a form of coercion, pressure international firms into policy compliance, or engage in other forms of opaque, algorithmic coercion, then they fundamentally undermine the rule-bound premises of fair economic exchange among states.
Each of these domains already operates with some degree of moral expectation. The emergent and urgent challenge for policymakers is to ensure that the design and deployment of AI respect the current logic of each sphere it serves rather than erasing the distinctions between them. Models that are defensible for use in combat are almost certainly unethical to employ in the marketplace or as part of an influence operation. The gray zone is not a moral vacuum, and if the United States does not draw these boundaries early and clearly, they are likely to erode — particularly if it lets other states shape them instead.
A New Framework to Guide Gray Zone AI Policy
A Walzer-inspired policy framework for AI in the gray zone offers a simple organizing principle that is both easily digestible and actionable: Draw boundaries that define when AI-enabled systems should never be used (i.e., the prohibited), when they may be used under carefully defined conditions (i.e., the restricted), and when their use should be encouraged with very few limits (i.e., the permissive).
So, when should AI-enabled systems in the gray zone be prohibited? Some uses of AI, even if tactically or strategically advantageous to the United States, undermine the very legitimacy that helps sustain U.S. influence around the globe. Large-scale deception campaigns that include deepfakes, the creation of synthetic personas, and AI-amplified propaganda should all fall in the prohibited zone. Additionally, AI-enabled interference in a state’s democratic processes or its use to surveil civilians outside of conflict zones should be expressly prohibited. These uses erode the credibility of American values abroad and, once adopted by the United States, invite reciprocal campaigns from adversaries and allies alike that will severely constrain de-escalation pathways.
As to when the use of AI should be restricted in the gray zone, the guiding principle is that some applications will support legitimate instruments of statecraft but should still be treated as sensitive, dual-use technologies. For example, predictive modeling of adversary wartime decision-making, AI-enabled sanctions enforcement, and the automated detection of disinformation campaigns should all fall into the restricted category. Use of these tools is justified so long as they are aligned with U.S. values, conducted with transparent oversight, and reinforce international norms of legitimate state behavior. Use of these types of AI-enabled systems should, however, remain auditable by policymakers, limited in scope, and subject to clear attribution mechanisms to help prevent mission creep or misuse. In the “restricted” category, the challenge is not whether to use AI, but how to govern it responsibly so that it does not escape international or domestic accountability.
Finally, what AI-enabled capabilities should be permitted in the gray zone? Certain uses of AI should be embraced precisely because they enhance institutional resilience, norms, and deterrence without eroding the legitimacy of the United States. AI systems that improve early threat warning, protect critical infrastructure from either cyber or physical attack, or detect and identify foreign disinformation all serve to reinforce democratic norms and values, as well as international stability. These permitted tools are the gray zone equivalent of defensive fortifications: They are transparent, stabilizing, and consistent with both U.S. and allied norms that the United States has a strategic interest in promoting.
The casual reader will note that these lists are not exhaustive — they’re not meant to be. The United States needs a framework to think more effectively about the employment of AI-enabled systems in the gray zone, not a laundry list of categorized activities. Good policy frameworks are durable, iterative, and flexible, and thinking about AI-enabled gray zone operations should be no different.
But good policy frameworks are also not merely ideational — they are usually codified in formal policy guidance or statute. To that end, the Trump administration should lead the policy discussion on the use of AI-enabled capabilities in the gray zone and direct, through executive order, that such a framework be developed by the National Security Council staff. This framework should be evaluated by the relevant cabinet-level departments and agencies and codified through further executive action or statutory language in partnership with Congress. This is not merely virtue signaling — it is a strategic wager that credible restraint in some areas enables sustainable advantage in others.
Clear Thinking that Enhances America’s Position
Many defense strategists have opined that the gray zone is where the future of great power competition will be decided. If that’s true, then artificial intelligence may very well be the deciding instrument. But the current debates, which often focus on effective strategic advantage more than ethics, risk repeating the early mistakes of cyber policy. Many will remember that the novelty of cyber technology resulted in rapid deployment, with little theorization about how to bound its effects. The international community is still living with the results of those early decisions, and cyber operations — even today — are often marked by secrecy, denial, and normalized escalation. Walzer’s theory is a good reminder that technological advantage devoid of moral guardrails is often corrosive to the legitimacy that helps underwrite both America’s hard and soft power.
If the United States adopts the moral geography of this framework for its use of AI in the gray zone, hard choices won’t be eliminated altogether. But a framework based on Walzer’s theory provides a common lexicon to help guide the hard choices that lie ahead as the United States continually balances legitimacy and leverage. If policymakers normalize AI as a tool for unbounded influence or manipulation, the United States risks eroding the very American-led order it aims to defend. If AI is over-constrained and the foreign policy apparatus of the United States is hamstrung, the nation may end up watching competitors rewrite the rules unchecked.
Inspired by Walzer, the framework laid out here provides a middle way. It doesn’t pretend that the gray zone can be made clean or comfortable — both descriptors are fundamentally antithetical to the very notion of a gray zone. But this framework does give policymakers a principled way to distinguish between legitimate strategic competition, dangerous escalation, and outright manipulation. It helps build habits worth keeping and draws red lines worth defending. The United States has an opportunity to define the international boundaries for using AI in the gray zone before norms harden or operational habits form. The gray zone will always be messy, but Walzer teaches that American thinking about how to act within it doesn’t have to be.
