Artificial Intelligence and Military Information Support Operations in Strategic Competition: Cognitive Warfare in Ukraine, Taiwan, and the Indo-Pacific

December 8, 2025

By AJ Rutherford, PhD

Introduction

Military information support operations (MISO), historically rooted in the U.S. military’s psychological operations (PSYOP) doctrine, remain a pivotal strategic function within Special Operations Forces (SOF). In the context of 21st-century conflict, characterized by the weaponization of information and the diffusion of influence across social, cognitive, and digital domains, MISO must adapt to emerging technological and adversarial paradigms. The integration of artificial intelligence (AI) into MISO presents a transformative capability, enabling the automation, precision, and amplification of psychological effects at scale. The ongoing Ukraine–Russia conflict, a significant example of hybrid warfare and digitally mediated influence operations, highlights the intersection of AI-driven systems and MISO. AI-powered platforms—such as natural language generation (NLG), sentiment analysis, and adversarial deepfake detection—can augment SOF’s influence capabilities, bolster strategic messaging, and counter adaptive propaganda networks employed by near-peer competitors.¹ In parallel, China’s predictive AI-driven narrative strategy in the South China Sea highlights how algorithmic orchestration can be used not only to counter adversary messaging but to dominate regional discourse preemptively, offering a critical contrast and complement to the reactive disinformation approaches seen in Ukraine.

Doctrinal Foundations of Military Information Support Operations

Contemporary MISO doctrine, grounded in Joint Publication 3-13.2, reflects a doctrinal evolution from its PSYOP lineage toward a broader conceptualization of influence within the cognitive, emotional, and digital domains.² While the doctrinal focus remains the deliberate shaping of foreign audience perceptions and behaviors through culturally and contextually meaningful messaging, the operational environment has fundamentally transformed. In the era of great power competition, particularly against adversaries such as the Russian Federation and the People’s Republic of China, MISO must now be conducted as an integrated component of a larger informational architecture. This includes synchronization with cyber operations, public affairs, and intelligence functions to counter state-sponsored disinformation campaigns and exploit real-time digital influence opportunities. As highlighted in the Joint Concept for Operating in the Information Environment, the future of information operations depends on cross-domain interoperability and the seamless fusion of narrative, timing, and technological enablers to achieve persistent cognitive effects across both peacetime and conflict phases.³

Artificial Intelligence and Cognitive Warfare

The integration of AI into contemporary national defense strategies has catalyzed a paradigm shift in the development and application of autonomous systems for influence operations. Advanced AI techniques, such as natural language processing, large-scale generative models, sentiment analysis, and psychographic segmentation (the process of dividing people into groups based on their lifestyle, values, personality, interests, and attitudes), have enabled the creation of hyper-personalized messaging architectures capable of delivering persuasive content across linguistic, cultural, and ideological boundaries. These technologies afford MISO the ability to shape narrative environments with unprecedented precision, speed, and scalability. However, their implementation also introduces complex risks related to algorithmic manipulation, misinformation amplification, and ethical ambiguity. Recent assessments from NATO’s Strategic Communications Centre of Excellence underscore the dual-use nature of these tools, highlighting their capacity not only to support strategic communications but also to facilitate adversarial information campaigns through synthetic media, automated botnets, and real-time influence orchestration.⁴


Ukraine as a Hybrid Warfare Laboratory

The ongoing Ukraine–Russia conflict marks a paradigmatic shift in the operationalization of real-time digital influence as a core element of ongoing hybrid warfare. Both state and non-state actors have leveraged decentralized communication platforms, most notably Telegram, Twitter (X), and short-form video media, as vehicles for psychological operations, information shaping, and narrative warfare. These platforms have been instrumental in mobilizing public sentiment, degrading adversary morale, and manipulating global perceptions. Ukraine has adopted an agile, bottom-up influence strategy characterized by humor, moral clarity, and authenticity, often employing viral content and emotionally resonant messaging to garner international support and strengthen domestic resolve. In contrast, Russian information operations have continued to reflect a more hierarchical and industrialized model, relying on coordinated botnets, disinformation campaigns, and state-controlled media ecosystems to obscure battlefield realities and sow discord. As observed in the RAND Corporation’s comprehensive analysis of the conflict’s information environment, the Ukraine case offers critical insight into the evolving interplay between strategic communications, digital platforms, and adversarial cognitive targeting in modern warfare.⁵

The convergence of artificial intelligence and military information support operations represents a pivotal moment in modern information warfare. As demonstrated by the Ukraine–Russia conflict and the Chinese Communist Party’s influence in Taiwan, AI-powered influence operations are achieving strategic outcomes with unparalleled speed, precision, and reach. (AI-generated Adobe Stock image by Kaihkolmages)

The implication of the Ukraine conflict, as highlighted by RAND’s analysis, is that modern warfare is increasingly defined by the integration of strategic communications, digital platforms, and cognitive targeting to influence adversary perceptions and behaviors. These developments underscore the strategic utility of AI-MISO for SOF not just as an instrument of support but as a core capability in allied resilience-building and unconventional warfare support. Additional research by Pierri et al. highlights how AI-augmented bot networks and sentiment manipulation on Facebook and Twitter shaped perceptions around battlefield developments and legitimacy.⁶ For SOF, such tools offer both a blueprint and a threat; understanding their use informs counter-MISO training, operational security, and digital terrain analysis. Insights from Ukraine also support the development of predictive narrative frameworks that can guide U.S. influence planning in fluid conflict environments. These insights into Ukraine’s dynamic use of influence operations establish a critical foundation for understanding how AI can amplify and scale such efforts.

Strategic Objectives of Artificial Intelligence-Driven Military Information Support Operations

The strategic value of AI-augmented MISO derives not only from its capacity to shape what individuals believe but also from its ability to accelerate and scale the dissemination of tailored messaging across diverse operational theaters. AI-enhanced MISO capabilities can be divided into three conceptual categories:

  1. Narrative generation systems, such as NLG—AI technology that automatically creates human-like text from structured data—and deepfake content tools
  2. Perception mapping tools, including real-time sentiment analysis and psychographic segmentation
  3. Execution frameworks, encompassing platform selection, algorithmic targeting, and timing optimization
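For readers who think in data structures, the three categories above can be rendered as a small capability taxonomy in code. This is purely an organizing sketch drawn from the list itself; the class and capability names are illustrative, and no fielded system is implied.

```python
from dataclasses import dataclass
from enum import Enum


class MisoCategory(Enum):
    """The three conceptual categories of AI-enhanced MISO capabilities."""
    NARRATIVE_GENERATION = "narrative generation systems"
    PERCEPTION_MAPPING = "perception mapping tools"
    EXECUTION_FRAMEWORK = "execution frameworks"


@dataclass(frozen=True)
class Capability:
    """One named capability assigned to a single conceptual category."""
    name: str
    category: MisoCategory


# Illustrative inventory drawn directly from the three-part list in the text.
CAPABILITIES = [
    Capability("natural language generation", MisoCategory.NARRATIVE_GENERATION),
    Capability("deepfake content tools", MisoCategory.NARRATIVE_GENERATION),
    Capability("real-time sentiment analysis", MisoCategory.PERCEPTION_MAPPING),
    Capability("psychographic segmentation", MisoCategory.PERCEPTION_MAPPING),
    Capability("algorithmic targeting", MisoCategory.EXECUTION_FRAMEWORK),
    Capability("timing optimization", MisoCategory.EXECUTION_FRAMEWORK),
]


def by_category(category: MisoCategory) -> list[str]:
    """Return the capability names assigned to one category."""
    return [c.name for c in CAPABILITIES if c.category is category]
```

Such a taxonomy is useful mainly as a planning checklist: it forces an inventory of which category a given tool or contractor offering actually falls into.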

The integration of AI into MISO reflects a deeper operational logic in which advanced systems act synergistically to craft, calibrate, and deploy cognitive effects with unprecedented precision, adaptability, and speed across contested information environments. By leveraging machine learning-driven content generation, real-time sentiment analysis, and audience segmentation tools, AI enables the rapid propagation of coherent narratives, such as the defense of Ukrainian sovereignty, the ethical condemnation of aggression, and appeals to transnational solidarity in the face of disinformation. These capabilities significantly expand the psychological reach of MISO, facilitating the orchestration of simultaneous influence campaigns across multiple linguistic, cultural, and geopolitical contexts. As argued by Goldfarb, Lindsay, and Caverley,⁷ AI’s role in strategic competition lies in its capacity to compress the decision cycle of narrative dominance, giving actors a temporal and informational advantage in shaping public perception during crises. China’s application of similar technologies in the Indo-Pacific offers a comparative case that sharpens understanding of AI’s geopolitical versatility.

Chinese Artificial Intelligence-Driven Influence in the South China Sea

A comparative lens on Chinese operations in the South China Sea reveals how AI-enabled influence capabilities are being systematically deployed to shape regional narratives and strategic perceptions. The Chinese Communist Party (CCP) has increasingly leveraged AI tools, ranging from sentiment analysis and social botnets to algorithmic content curation, to dominate the informational terrain surrounding maritime territorial disputes. Through state-aligned media amplification and automated disinformation dissemination across platforms such as WeChat, TikTok, and Twitter, Beijing constructs legitimacy narratives aimed at both domestic and international audiences. These campaigns are further refined through machine-learning algorithms that optimize message timing and targeting based on behavioral analytics. Research by Hung and Hung documents China’s use of real-time sentiment tracking and AI-enhanced behavioral targeting to suppress dissent and amplify favorable narratives on platforms like WeChat and TikTok.⁸ For SOF, this highlights the operational importance of early warning systems tied to digital sentiment shifts and the need for regionally tailored counter-influence strategies that preempt adversarial cognitive control. The comparison between Russian and Chinese approaches is purposeful; both nations have invested heavily in AI-enhanced influence operations, yet their methodologies reflect divergent institutional and cultural paradigms. Russia favors rapid, disruptive disinformation rooted in psychological destabilization, while China emphasizes long-term, systemic control of narrative ecosystems.


Despite these differences, both models demonstrate increasing convergence in their use of predictive analytics, synthetic media, and cross-platform orchestration. There is evidence that China has observed and adapted aspects of Russia’s meme-based warfare and rapid propaganda loops during the Ukraine–Russia conflict, while Russia has mirrored aspects of China’s sentiment-driven targeting in its operations. For SOF, understanding this dual track of influence evolution is essential, not only because it reveals adversarial learning loops, but also because it highlights shared vulnerabilities in the information environment that require a proactive and unified strategic response. This case underscores the need for SOF to conceptualize AI-enhanced MISO not merely as a tool of communication but as a strategic instrument of cognitive maneuver within competitive information environments. These tactics present direct challenges not only to regional stability but also to SOF’s future mission sets in defending Taiwanese information sovereignty and enabling pre-crisis cognitive resistance. While national strategies differ, both cases rely on common operational tools and platforms, explored in the following section on the techniques that underpin AI-enhanced influence operations.

Tools and Techniques

Advancements in NLG have enabled the scalable production of persuasive, contextually nuanced, and multilingual content tailored to specific psychological and cultural dimensions of target audiences. NLG and deepfakes (fake videos or images created using AI to make it look like someone said or did something they never actually did) fall under the category of narrative generation systems (AI tools that automatically create stories or messages to influence how people think or feel about a topic) as they enable the automated creation of persuasive textual and visual content designed to shape perception and influence cognition. These tools serve as the foundation for crafting high-volume, emotionally resonant narratives that can be rapidly disseminated across multiple platforms. These systems can quickly craft narratives aligned with operational objectives, reducing the latency between information emergence and influence action.⁹


Complementing this capability, AI-powered sentiment analysis platforms systematically track attitudinal fluctuations across social media ecosystems such as Telegram, VKontakte, and emerging encrypted communication channels, allowing for adaptive modulation of messaging in near real-time. Moreover, the evolution of synthetic media technologies, particularly deepfakes, presents a dual-use dilemma; they are simultaneously valuable for countering adversarial disinformation and capable of producing strategically calibrated deception content for deterrent or coercive purposes. At the apex of precision influence, cognitive targeting through psychographic profiling (the process of analyzing a person’s values, interests, lifestyle, and personality to understand and predict their behavior) permits granular manipulation of audience behavior by exploiting individual belief systems, emotional triggers, and susceptibility patterns. These techniques, as documented in recent work by Gombar, signal a shift toward AI-mediated influence ecosystems that blur the boundaries between strategic communication, behavioral science, and algorithmic warfare.¹⁰
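To make the notion of sentiment polarity scoring concrete, the sketch below implements a deliberately simplified lexicon-based scorer in Python. It is illustrative only: operational platforms rely on trained multilingual language models, and the lexicon, word scores, and function name here are invented for the example.

```python
# Toy lexicon-based polarity scorer. The underlying idea is mapping text
# to a polarity score in the range [-1.0, 1.0]; real systems learn this
# mapping from data rather than using a hand-built word list.
POLARITY_LEXICON = {
    "support": 1.0, "victory": 0.8, "solidarity": 0.7,
    "defeat": -0.8, "corruption": -0.7, "invasion": -0.9,
}


def polarity(text: str) -> float:
    """Average lexicon polarity of the words found in `text`; 0.0 if none match."""
    scores = [POLARITY_LEXICON[w] for w in text.lower().split() if w in POLARITY_LEXICON]
    return sum(scores) / len(scores) if scores else 0.0
```

Even this toy version shows why "adaptive modulation" is feasible: polarity can be recomputed on every new batch of posts, so message adjustments can follow attitudinal shifts at machine speed.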

Execution Platforms

The operational effectiveness of influence campaigns is increasingly contingent upon the strategic selection and exploitation of digital execution channels tailored to the demographic, cultural, and behavioral characteristics of target audiences.¹¹ Platforms such as Telegram and WhatsApp function as decentralized, encrypted conduits for peer-to-peer information dissemination, particularly within conflict zones and contested information environments. These platforms, along with TikTok and dark web forums, form part of the execution framework category—operational channels through which AI-generated narratives are strategically deployed and amplified.¹² Their selection is tailored to the behavioral, cultural, and psychological profiles of target audiences, enabling precise timing, delivery, and resonance of influence operations across both open and covert digital domains.

TikTok and Instagram, for instance, serve as high-velocity vectors for emotionally charged, short-form visual narratives designed to trigger rapid psychological engagement. In contrast, YouTube and Facebook support longer-form content suited for counternarratives, historical framing, and disinformation rebuttals.¹³ Twitter (X) occupies a hybrid role, facilitating real-time open-source intelligence amplification and memetic warfare, where symbolic content can rapidly achieve virality through algorithmic propagation. Additionally, dark web forums persist as relatively unregulated ecosystems where state and non-state actors conduct anonymous psychological operations, information laundering, and targeted misinformation seeding. As noted by Guarino et al., the deliberate orchestration of multi-platform influence architectures enables state actors to synchronize narrative effects across both surface and covert digital domains, thereby enhancing their cognitive and psychological reach.¹⁴
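The logic of matching platforms to audience profiles can be illustrated with a minimal weighted-scoring sketch. The platform traits and audience attributes below are hypothetical placeholders, not empirical values; real channel selection draws on behavioral analytics far richer than a dot product.

```python
# Hypothetical channel-selection sketch: score each platform against an
# audience profile and return the best match. Trait weights are invented
# for illustration only.
PLATFORM_TRAITS = {
    "short_form_video": {"age_under_30": 0.9, "visual_preference": 0.9, "long_form": 0.1},
    "long_form_video":  {"age_under_30": 0.4, "visual_preference": 0.7, "long_form": 0.9},
    "microblog":        {"age_under_30": 0.6, "visual_preference": 0.3, "long_form": 0.2},
}


def best_channel(audience: dict[str, float]) -> str:
    """Pick the platform whose traits best align with the audience's attribute weights."""
    def score(traits: dict[str, float]) -> float:
        # Dot product of audience weights against platform trait weights.
        return sum(traits.get(attr, 0.0) * weight for attr, weight in audience.items())
    return max(PLATFORM_TRAITS, key=lambda p: score(PLATFORM_TRAITS[p]))
```

The design point is that channel selection is an optimization problem: once audience attributes are quantified, the "right" platform falls out of a scoring function rather than an operator's intuition.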

Early Indicators and Measures of Effectiveness

Evaluating the operational value of AI-enhanced MISO necessitates a multi-dimensional framework that integrates both digital sentiment analytics and real-world behavioral intelligence.¹⁵ Sentiment analytics and behavior tracking fall under the category of perception mapping tools (technologies used to track and analyze how people feel, think, and react to messages or events) which enable real-time assessment of audience attitudes, emotional states, and engagement patterns. These tools (Brandwatch is a prime example) allow operators to dynamically adjust messaging strategies based on how target populations are responding across digital platforms. Quantitative indicators of campaign impact may include measurable shifts in sentiment across adversary or contested populations, as captured through social media trend analysis, sentiment polarity scoring, and increased prevalence of pro-ally hashtags or support narratives. Concurrently, reductions in engagement with hostile propaganda, whether via click-through attenuation, comment suppression, or network diffusion decay, may serve as proxies for declining message salience. Beyond the digital sphere, behavioral manifestations such as draft avoidance, elite fragmentation, and civil unrest, exemplified by Russian conscription evasion and dissident mobilization during the Ukraine–Russia conflict, offer critical evidence of cognitive disruption and narrative resonance. These phenomena, as articulated by Sabour et al., suggest that AI-driven influence operations can produce second- and third-order psychological effects, potentially reshaping adversarial decision-making and eroding social cohesion.¹⁶ Together, these indicators underscore not only the potency of AI-MISO operations but also the need for doctrinal adaptation and strategic foresight.
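A minimal sketch of how two of the quantitative indicators just described might be computed follows. The metric definitions and data shapes are assumptions for illustration; fielded measurement frameworks would fuse many more signals and correct for bot activity and sampling bias.

```python
# Illustrative measures-of-effectiveness sketch: compare mean sentiment
# polarity between a baseline window and a post-campaign window, and
# track the prevalence of a monitored hashtag across a sample of posts.


def sentiment_shift(baseline: list[float], current: list[float]) -> float:
    """Change in mean polarity score (current window minus baseline window)."""
    def mean(xs: list[float]) -> float:
        return sum(xs) / len(xs)
    return mean(current) - mean(baseline)


def hashtag_prevalence(posts: list[str], tag: str) -> float:
    """Fraction of sampled posts containing the tracked hashtag."""
    return sum(tag in post for post in posts) / len(posts)
```

A positive `sentiment_shift` or rising `hashtag_prevalence` would, under this toy model, count as an early indicator of narrative resonance; the behavioral indicators in the paragraph above (conscription evasion, elite fragmentation) remain outside what any dashboard metric can capture.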

Telegram and WhatsApp are decentralized, encrypted channels for sharing information, especially in conflict zones. These platforms, along with TikTok and dark web forums, are considered part of the execution framework—the operational channels used to strategically deploy and amplify AI-generated narratives. (AI-generated Adobe Stock image by Jonah)

While the strategic potential of AI-enhanced MISO is significant, its global adoption remains uneven. Most states lack the computational infrastructure, centralized media control, and doctrinal integration necessary to deploy these tools at scale. The Russian and Chinese cases are instructive not because AI-MISO is universally effective, but because these regimes operate in permissive digital ecosystems with minimal regulatory or ethical constraints. Additionally, both nations have cultivated hybrid civil-military AI ecosystems that feed real-time data into their influence architectures. These conditions are not easily replicable. For SOF, this suggests that rather than viewing AI-MISO as an all-powerful tool, it should be assessed through the lens of contextual viability: where, when, and how it can be employed effectively in support of U.S. interests. Learning from these case studies helps distinguish hype from operational reality and supports more calibrated doctrine development.


Conclusion

The convergence of AI and MISO marks a decisive inflection point in modern information warfare. As seen in the Ukraine–Russia conflict and CCP influence in Taiwan, AI-powered influence operations are delivering strategic effects with unmatched speed, precision, and reach—reshaping the cognitive battlespace’s tempo and topology. In Ukraine, AI tools support grassroots mobilization and narrative shaping, while in Taiwan, Chinese sentiment manipulation sets pre-conflict psychological conditions. These examples illustrate how digital ecosystems are actively being weaponized to gain cognitive advantage. Rising Indo-Pacific tensions suggest future U.S.-China confrontations will feature sustained psychological campaigns targeting alliance cohesion and deterrence stability. U.S. SOF must proactively forge AI-MISO partnerships with Ukraine and Taiwan focused on digital resilience, capacity-building, and ethical influence strategy. Emerging research from the Russo-Ukrainian war and China’s South China Sea operations highlights the need for SOF to adopt AI-driven capabilities, such as sentiment mapping, digital surveillance, and automated counter-messaging. These tools must be embedded into SOF doctrine, training, and planning frameworks to operationalize cognitive superiority. This transformation requires more than technical adaptation; it demands ethical and strategic recalibration. SOF must be equipped to lead in responsible AI-driven psychological operations that counter adversarial disinformation while advancing democratic narrative dominance in 21st-century great power competition.

About the Author

AJ Rutherford, PhD, a retired Marine Special Operations Team Chief, is a seasoned overseas security consultant in the private sector. He holds a PhD in information security and assurance and an MA in strategic intelligence. In addition to his consulting work, Dr. Rutherford teaches at American Military University, where he instructs courses in operating systems hardening and security, biometrics, and master’s-level cybersecurity capstone projects. He also teaches computer security incident response to graduate cybersecurity students at Norwich University and mentors PhD candidates at the University of the Cumberlands, with a focus on information governance. His primary research explores the integration of advanced technologies—particularly AI—into special operations. Dr. Rutherford’s work highlights the future of SOF in the context of fifth-generation warfare, and he actively welcomes opportunities for continued research, collaboration, and presentation.

Notes

1. Gourav Gupta et al., “A Comprehensive Review of DeepFake Detection Using Advanced Machine Learning and Fusion Methods,” Electronics 13, no. 1 (2024): 95, https://doi.org/10.3390/electronics13010095.

2. Joint Chiefs of Staff, Military Information Support Operations, Joint Publication 3-13.2 (Joint Chiefs of Staff, 2011), https://info.publicintelligence.net/JCS-MISO.pdf.

3. Joshua A. Sipper, “Cyber and Space: Information Warfare and Joint All-Domain Effects in the Youngest Domains,” paper presented at CYBER 2023: The Eighth International Conference on Cyber-Technologies and Cyber-Systems, Porto, Portugal, September 25, 2023, https://personales.upv.es/thinkmind/dl/conferences/cyber/cyber_2023/cyber_2023_1_10_80004.pdf.

4. Giulio Corsi et al., “The Spread of Synthetic Media on X,” Harvard Kennedy School Misinformation Review 5 (2024): 2, https://misinforeview.hks.harvard.edu/article/the-spread-of-synthetic-media-on-x/.

5. Francesco Pierri et al., “Propaganda and Misinformation on Facebook and Twitter During the Russian Invasion of Ukraine,” paper presented at WebSci ’23: Proceedings of the 15th ACM Web Science Conference 2023, Austin, Texas, April 30, 2023, https://dl.acm.org/doi/10.1145/3578503.3583597.

6. Francesco Pierri et al., “Propaganda and Misinformation.”

7. Avi Goldfarb et al., “Prediction and Judgment: Why Artificial Intelligence Increases the Importance of Humans in War,” International Security 46, no. 3 (Winter 2021/22): 7–42, https://direct.mit.edu/isec/article/46/3/7/109668/Prediction-and-Judgment-Why-Artificial.

8. Tzu-Chieh Hung and Tzu-Wei Hung, “How China’s Cognitive Warfare Works: A Frontline Perspective of Taiwan’s Anti-Disinformation Wars,” Journal of Global Security Studies 7, no. 4 (2022): ogac016, https://doi.org/10.1093/jogss/ogac016.

9. Ben Kereopa-Yorke, “Clausewitz Framework: A New Frontier in Theoretical Large Language Model-Enhanced Information Operations,” Journal of Information Warfare 23, no. 2 (2023), https://www.jinfowar.com/journal/volume-23-issue-2/clausewitzgpt-framework-new-frontier-theoretical-large-language-model-enhanced-information-operations.

10. Marija Gombar, “Algorithmic Manipulation and Information Science: Media Theories and Cognitive Warfare in Strategic Communication,” European Journal of Communication and Media Studies 4, no. 2 (2025): 41–63, https://www.ej-media.org/index.php/media/article/view/41.

11. Christy J. W. Ledford, “Changing Channels: A Theory-Based Guide to Selecting Communication Channels in Social Marketing Campaigns,” Social Marketing Quarterly 18, no. 3 (2012): 175–186, https://www.researchgate.net/publication/258188480_Changing_Channels_A_Theory-Based_Guide_to_Selecting_Traditional_New_and_Social_Media_in_Strategic_Social_Marketing.

12. Ren Zhou, “Understanding the Impact of TikTok’s Recommendation Algorithm on User Engagement,” International Journal of Computer Science and Information Technology 3, no. 2 (2024): 201–208, https://doi.org/10.62051/ijcsit.v3n2.24.

13. Homa Hosseinmardi et al., “Examining the Consumption of Radical Content on YouTube,” Proceedings of the National Academy of Sciences 119, no. 32 (2021): e2117277119, https://doi.org/10.1073/pnas.2101967118.

14. Pao Muñoz et al., “Modeling Disinformation Networks on Twitter: Structure, Behavior, and Cross-Platform Dynamics,” Applied Network Science 9, no. 4 (2024): 15, https://doi.org/10.1007/s41109-024-00610-w.

15. Rajesh Paul et al., “AI-Powered Sentiment Analysis in Digital Service Marketing: A Review of Customer Feedback Loops in IT Services,” American Journal of Scholarly Research and Innovation 2, no. 2: 166–192, https://doi.org/10.63125/61pqqq54.

16. Sahand Sabour et al., “Human Decision-Making Is Susceptible to AI-Driven Manipulation,” arXiv (2025), https://arxiv.org/abs/2502.07663.
