Warfighting AI

Tracking Artificial Intelligence Integration Across Military Doctrine, Wargaming Simulation, and Defense Technology Procurement

Platform in Development -- Comprehensive Coverage Launching Q3 2026

Warfighting and artificial intelligence are converging across three distinct but interconnected domains. At the doctrinal level, military services worldwide are rewriting how they conceptualize and execute combat operations to account for AI-enabled capabilities spanning every warfighting function from intelligence to logistics. In the simulation and wargaming domain, AI is transforming how military planners test strategies, train officers, and evaluate force structures through computational modeling that far exceeds what traditional tabletop methods can represent. And in the defense technology sector, a new generation of companies is competing alongside established contractors for billions of dollars in procurement funding directed at fielding AI-powered warfighting systems.

This independent research platform provides analytical coverage of AI's role across these three domains, tracking the doctrinal publications, simulation programs, budget allocations, and technology developments reshaping how nations prepare for and conduct military operations. Full editorial coverage begins Q3 2026.

Doctrine: AI and the Warfighting Functions

Joint Warfighting Doctrine and AI Integration

The term "warfighting" carries specific meaning within military doctrine, referring to the organized application of military force to achieve national objectives. Joint Publication 3-0, Joint Warfighting, establishes the doctrinal framework governing how the United States military plans and conducts operations across all domains. The seven joint warfighting functions -- command and control, intelligence, fires, movement and maneuver, protection, sustainment, and information -- provide the organizational architecture through which AI capabilities are being integrated into military operations. Each function presents distinct opportunities and constraints for AI adoption, and each military service is developing approaches tailored to its specific operational requirements.

The Department of Defense's January 2026 AI Strategy memorandum directed the department to become an "AI-first" warfighting force across all components, designating seven Priority Strategic Programs spanning warfighting, intelligence, and enterprise mission areas for fiscal year 2026 funding. The strategy requires each Service Chief and Combatant Commander to designate an AI Integration Lead responsible for coordinating with the Chief Digital and Artificial Intelligence Office on implementation. This directive reflects a decisive institutional commitment to embedding AI across the full spectrum of warfighting activities rather than confining it to niche experimental programs.

Service-Level Doctrinal Development

Air Force Doctrine Note 25-1, published in April 2025, provides the most comprehensive service-level articulation of how AI relates to warfighting operations. The document addresses AI terminology, capabilities, limitations, and integration considerations specific to air and space power employment. It acknowledges that AI systems are best suited for military applications with consistent patterns and require substantial high-quality data and computational power, while noting that complex warfighting problems fundamentally require human characteristics including contextual understanding, judgment, wisdom, and ethical reasoning. The doctrine note explicitly addresses the competitive dimension, observing that both China and Russia are accelerating efforts to integrate AI across their warfighting capabilities.

The Army's approach to warfighting AI centers on its Artificial Intelligence Integration Center, which conducts experiments to develop metrics for assessing human-machine teaming performance across effectiveness, vulnerability, and sustainability dimensions. The FY2026 National Defense Authorization Act encourages the Army to adopt a federated approach to AI centers of excellence positioned near platforms, existing infrastructure, and prototyping facilities. The Marine Corps has conducted experiments at Marine Corps University demonstrating how large language models can accelerate staff planning estimates and inject data-driven options into the operational planning process, pointing toward a fundamental transformation of how military staffs operate in combat.

International Doctrinal Competition

China's People's Liberation Army has evolved its doctrine from "Local Wars under Informatized Conditions" toward a concept of "intelligentized warfare" that treats AI as essential for transforming from a manpower-intensive force into a technology-enabled warfighting institution. Unlike Western approaches where innovation is often led by private-sector actors operating within liberal democratic frameworks, China's military AI development is centralized through civil-military fusion policies directed by the Chinese Communist Party. The PLA is focused on applying AI to command decision-making, logistics, cyber operations, swarm coordination, missile guidance, and cognitive domain operations.

Russia is integrating AI into its doctrine and strategies with particular emphasis on unmanned aerial vehicles, autonomous ground vehicles, and underwater systems, informed by operational lessons from the conflict in Ukraine. NATO's approach emphasizes interoperability among allied nations, with the alliance's Principles of Responsible Use establishing requirements for lawfulness, responsibility, accountability, explainability, traceability, reliability, governability, and bias mitigation. The United Nations Institute for Disarmament Research published a comprehensive report on AI in the military domain in 2025, responding to General Assembly Resolution 79/239 calling for analysis of AI's implications for international peace and security. The Responsible AI in the Military Domain summits, held in the Netherlands in 2023, South Korea in 2024, and Spain in 2025, represent growing multilateral engagement on governance frameworks for warfighting AI.

Wargaming and Simulation

AI-Augmented Military Wargaming

Wargaming has served as a cornerstone of military planning since the Prussian Kriegsspiel of the early nineteenth century, but artificial intelligence is transforming this discipline from its traditional analog foundations into computationally sophisticated decision-support environments. Joint Publication 5-0, Joint Planning, establishes wargaming as integral to the Joint Planning Process, requiring integration across intelligence, fires, maneuver, protection, sustainment, information, and command-and-control functions. Traditional methods relying on manual adjudication and static maps constrain the depth and iteration possible in multi-domain scenarios against peer adversaries. AI offers a path beyond these limitations through probabilistic outcome adjudication, automated adversary modeling, and rapid exploration of branching decision trees.
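
To make those mechanics concrete, the sketch below shows a minimal Monte Carlo adjudication over a small branching decision tree. The course-of-action names, branch probabilities, and payoff values are hypothetical placeholders chosen for illustration; they are not drawn from any doctrinal source or fielded wargaming tool.

```python
# Minimal sketch of probabilistic outcome adjudication over a branching
# decision tree. Course-of-action names, branch probabilities, and payoff
# values are hypothetical placeholders for illustration only.
import random
from dataclasses import dataclass, field


@dataclass
class Node:
    name: str
    branches: list = field(default_factory=list)  # list of (probability, child Node)
    payoff: float = 0.0                           # terminal score when branches is empty


def pick(branches):
    """Select one child branch in proportion to its stated probability."""
    r, cumulative = random.random(), 0.0
    for prob, child in branches:
        cumulative += prob
        if r < cumulative:
            return child
    return branches[-1][1]  # guard against floating-point rounding


def simulate(node: Node) -> float:
    """Play the tree once from the given node down to a terminal outcome."""
    while node.branches:
        node = pick(node.branches)
    return node.payoff


def adjudicate(root: Node, runs: int = 10_000) -> float:
    """Monte Carlo estimate of a course of action's expected outcome."""
    return sum(simulate(root) for _ in range(runs)) / runs


# Hypothetical two-level tree for a single course of action.
coa_a = Node("COA A", branches=[
    (0.6, Node("objective secured", payoff=1.0)),
    (0.3, Node("stalemate", branches=[
        (0.5, Node("reserve committed, succeeds", payoff=0.6)),
        (0.5, Node("reserve committed, fails", payoff=0.1)),
    ])),
    (0.1, Node("forced withdrawal", payoff=0.0)),
])

if __name__ == "__main__":
    print(f"Estimated expected outcome for COA A: {adjudicate(coa_a):.3f}")
```

Repeating the estimate across alternative trees lets planners compare courses of action by their outcome distributions rather than by a single adjudicated result.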

The U.S. Army Command and General Staff College conducted a landmark AI-augmented wargame exercise in November 2025, where 32 officers demonstrated that AI-powered planning tools could dramatically expand the cognitive battlespace available to military planners. The exercise used the Army's Vantage platform, which combines ontology-augmented generation with retrieval-augmented generation to adjudicate outcomes probabilistically while adhering to doctrinal constraints. Students underwent training in prompt discipline and human override protocols, drawing from lessons developed through the Army's AI Integration Center experiments. The results showed deeper doctrinal application, earlier risk identification, and richer exploration of branches and sequels than traditional methods permitted.
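
Setting aside the specifics of any one platform, the retrieval-augmented pattern underlying such tools can be sketched in generic form. In the fragment below, the doctrine snippets, the keyword-overlap retriever, and the generate stub are all invented placeholders; this is an illustrative pattern, not the Vantage implementation.

```python
# Generic retrieval-augmented adjudication sketch. The doctrine snippets,
# retrieval scoring, and generate() stub are placeholders; a real system
# would use a vetted corpus, a proper retriever, and an approved model.

DOCTRINE = {
    "JP 5-0 wargaming": "Wargaming tests each course of action against likely enemy reactions.",
    "risk": "Identify points of friction and decision points where the plan may fail.",
    "sustainment": "Sustainment feasibility constrains tempo and operational reach.",
}


def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank stored snippets by naive keyword overlap with the query."""
    q_terms = set(query.lower().split())
    scored = sorted(
        DOCTRINE.items(),
        key=lambda item: len(q_terms & set(item[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:k]]


def generate(prompt: str) -> str:
    """Placeholder for a model call; returns a canned response here."""
    return "[model adjudication would appear here]"


def adjudicate_move(move: str) -> str:
    """Assemble retrieved doctrine into a constrained adjudication prompt."""
    context = "\n".join(f"- {snippet}" for snippet in retrieve(move))
    prompt = (
        "Adjudicate the following wargame move, citing only the doctrine below.\n"
        f"Doctrine:\n{context}\n"
        f"Move: {move}\n"
        "State the probable outcome and the principal risk."
    )
    return generate(prompt)


if __name__ == "__main__":
    print(adjudicate_move("Commit the reserve to restore tempo despite sustainment risk."))
```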

Air Force Simulation and Campaign Analysis

The Air Force Futures directorate is conducting market research and issuing Requests for Information to acquire cutting-edge AI technologies for advanced wargaming and simulations. The Air Force's Shadow Operations Center at Nellis Air Force Base has hosted capstone events integrating AI for dynamic targeting in 2024 and 2025 wargames, establishing operational precedents for AI-enabled battle management in realistic training environments. These initiatives move beyond traditional static simulations toward highly adaptive, AI-driven platforms capable of modeling complex adversary behavior, generating realistic threat scenarios, and evaluating unconventional strategies at speeds impossible through manual analysis.

Scale AI has secured Pentagon contracts for AI-powered wargaming and data processing, reflecting the Defense Department's willingness to engage commercial AI companies for simulation capabilities previously developed only within government laboratories and federally funded research centers. The integration of commercial large language models and generative AI into wargaming environments represents a significant departure from purpose-built military simulation software, offering greater flexibility and more sophisticated adversary modeling but introducing new questions about model validation, security classification, and doctrinal alignment.

Academic and Think Tank Wargaming Programs

Military wargaming extends well beyond government organizations into a robust ecosystem of academic institutions, think tanks, and research centers. RAND Corporation, which pioneered analytical wargaming during the Cold War, continues to develop AI-enabled simulation tools for defense planning. The Center for a New American Security conducts regular wargaming exercises examining emerging technology scenarios. The Naval War College, Army War College, and Air War College each maintain wargaming centers that increasingly integrate AI tools into their educational and analytical programs. These institutions serve as important bridges between academic AI research and operational military requirements, translating theoretical capabilities into practical warfighting applications.

The commercial wargaming and strategy gaming sector provides another vector for AI-warfighting convergence. Professional military education increasingly draws on commercial gaming technologies for simulation platforms, while commercial strategy games incorporate ever more sophisticated AI opponents modeled on actual military doctrine. This bidirectional exchange creates a talent pipeline of individuals comfortable with AI-enabled decision-making in adversarial contexts, supporting broader military adoption of AI tools for planning and operations.

Allied and Coalition Simulation Programs

NATO conducts multinational wargaming exercises that test interoperability between allied AI systems and evaluate how AI-enabled capabilities affect coalition operations. Australia's Defence Science and Technology Group operates wargaming facilities that evaluate autonomous systems integration for Pacific theater scenarios. The United Kingdom's Defence Science and Technology Laboratory develops simulation environments for testing human-machine teaming concepts in expeditionary operations. These allied programs reflect the recognition that warfighting AI must function effectively not only within national military structures but across the complex command relationships inherent in coalition warfare.

Defense Technology and Procurement

Budget Landscape and Investment Scale

The fiscal year 2026 defense budget marks a watershed moment for warfighting AI investment. The Department of Defense dedicated a separate budget line for autonomy and AI systems for the first time, totaling $13.4 billion. This allocation covers unmanned and remotely operated aerial vehicles at $9.4 billion, maritime autonomous systems at $1.7 billion, underwater capabilities at $734 million, autonomous ground vehicles at $210 million, and supporting software and cross-domain integration. Congress passed an $839 billion defense spending bill directing $9.8 billion toward autonomous and unmanned systems across the department. The total DoD IT budget reached $66 billion for fiscal 2026, with every service branch increasing its AI allocation -- the Navy alone added $308 million in AI spending, representing a 22.7 percent year-over-year increase.
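
The headline figure can be reconciled against its named components using only the numbers cited above; the short calculation below shows the residual left for supporting software and cross-domain integration once the four platform categories are summed.

```python
# Reconciling the FY2026 autonomy and AI budget line against its named
# platform categories (figures in billions of dollars, as cited above).
TOTAL_AUTONOMY_AI = 13.4

categories = {
    "unmanned and remotely operated aerial vehicles": 9.4,
    "maritime autonomous systems": 1.7,
    "underwater capabilities": 0.734,
    "autonomous ground vehicles": 0.210,
}

platform_subtotal = sum(categories.values())        # 12.044
remainder = TOTAL_AUTONOMY_AI - platform_subtotal   # roughly 1.36 for software and integration

print(f"Platform categories subtotal: ${platform_subtotal:.3f}B")
print(f"Remainder for supporting software and integration: ${remainder:.2f}B")
```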

Analysis of Pentagon budget justification documents reveals the DoD budgeted $25.2 billion for programs incorporating artificial intelligence and autonomous systems in fiscal year 2025, representing approximately three percent of the department's total $850 billion budget. Roughly 40 percent of that total, $9.9 billion, originated from Fourth Estate organizations and combatant commands rather than the military services, with Special Operations Command and Cyber Command accounting for substantial classified allocations. The actual spending on warfighting AI likely exceeds publicly disclosed figures, as many intelligence community and special operations contracts remain outside public budget documentation.

Defense Technology Companies and Market Competition

The warfighting AI procurement landscape features an increasingly complex competitive dynamic between traditional defense contractors and technology-native entrants. Lockheed Martin, Northrop Grumman, RTX, and BAE Systems are strategically integrating AI into existing platforms and command-and-control systems, leveraging decades of institutional relationships with the Department of Defense. Simultaneously, a new cohort of AI-focused defense companies has gained significant market position. Palantir Technologies holds the contract for Project Maven, the seminal DoD AI effort to derive intelligence from satellite, drone, and sensor data. Anduril Industries offers its Lattice mesh-networking platform for battlefield data integration and autonomous drone coordination. Shield AI, valued at $5.6 billion as of 2025, develops the Hivemind autonomous piloting software deployed on V-BAT drones in Ukraine and across U.S. military operations.

These companies are forming unprecedented cross-partnerships that reshape procurement dynamics. Palantir and Anduril jointly integrate Project Maven with the Lattice platform for real-time drone coordination and intelligence fusion. Shield AI and Palantir connect autonomous aerial systems with battle management software for mission planning in Indo-Pacific scenarios. Anduril and Oracle leverage government cloud infrastructure for AI model training and autonomous vehicle testing. Palantir and Booz Allen Hamilton develop classified AI decision-support tools for logistics and battlefield telemetry. These collaborations reflect a broader shift toward consortium-based procurement models where interoperable software platforms, rather than standalone hardware systems, define competitive advantage in warfighting AI.

Collaborative Combat Aircraft and Autonomous Platforms

The Collaborative Combat Aircraft program anchors the largest single category of AI-enabled warfighting investment, the $9.4 billion allocated in the FY2026 budget for unmanned and remotely operated aerial systems. The program aims to deploy between 1,000 and 2,000 autonomous wingman aircraft by the mid-2030s, operating alongside manned fighters through AI-driven tactical decision-making. Increment 1 contracts were awarded to both Anduril Industries and General Atomics, reflecting the Pentagon's strategy of maintaining competition between traditional and non-traditional defense contractors throughout the development cycle.

Beyond aerial platforms, the Department of Defense is investing in autonomous maritime systems for surface and undersea operations, ground robots for logistics and reconnaissance, and software-defined capabilities that enable existing platforms to incorporate AI-driven decision support. The Replicator initiative, launched to field thousands of relatively low-cost autonomous systems, demonstrated both the promise and the procurement challenges of scaling warfighting AI from experimental prototypes to production quantities. The initiative's initial funding went to established systems rather than cutting-edge startup products, illustrating the persistent tension between innovation ambition and acquisition reality that characterizes defense technology procurement.

Legislative Framework and Governance

The fiscal year 2026 National Defense Authorization Act includes extensive provisions governing warfighting AI development and deployment. Section 1533 tasks the Secretary of Defense with establishing a cross-functional team for AI model assessment and oversight by June 2026, with a standardized assessment framework covering performance standards, testing procedures, security requirements, and ethical use principles due by June 2027. The legislation directs integration of commercially available AI capabilities into logistics operations and mandates development of a roadmap for digital content provenance capabilities across the department. Both the House and Senate versions include provisions addressing AI mission planning for missile defense, data center infrastructure, cybersecurity governance for AI systems, and AI applications for training and readiness.

DoD Directive 3000.09 continues to govern the policy framework for autonomy in weapon systems, requiring that autonomous and semi-autonomous weapon systems allow commanders and operators to exercise appropriate levels of human judgment over the use of force, while permitting greater autonomy for defensive, surveillance, and logistics functions. The Chief Digital and Artificial Intelligence Office, working with the Defense Innovation Unit, established an AI Rapid Capabilities Cell to accelerate exploration, testing, and adoption of generative AI and large language models for warfighting applications. These institutional mechanisms reflect the department's effort to balance the imperative for rapid AI adoption against requirements for testing, evaluation, security, and ethical governance of systems that may operate in life-or-death contexts.

Technical Foundations

Human-Machine Teaming in Warfighting Contexts

The central technical challenge of warfighting AI lies in designing systems that augment human decision-making under conditions of extreme uncertainty, time pressure, and adversarial opposition. Unlike commercial AI applications where errors produce financial losses or user inconvenience, warfighting AI failures can result in loss of life, mission failure, or strategic escalation. This context demands architectures that maintain meaningful human oversight while operating at machine speed, a balance described in doctrine as human-machine teaming rather than full automation.

Research at institutions including the U.S. Army Combat Capabilities Development Command, Marine Corps University, and American University has explored how AI agents can augment military staff functions. Experiments demonstrate that AI can accelerate staff planning estimates, generate courses of action informed by doctrinal constraints, and enable dynamic red-teaming by varying key assumptions to produce a wider range of options than traditional methods allow. The most effective models embed AI agents within continuous human-machine feedback loops, drawing on doctrine, historical precedent, and real-time data to evolve plans adaptively rather than producing static recommendations.
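
The feedback-loop structure those experiments describe can be expressed as a simple control loop. In the hypothetical sketch below, the mission, the assumption sets, and both the drafting and review functions are placeholder stand-ins for an AI planning agent and a human staff officer.

```python
# Hypothetical human-machine planning loop: an AI agent drafts courses of
# action under varied assumptions, and a human planner reviews each draft.
# Names, assumptions, and the drafting stub are placeholders for illustration.
from itertools import product


def draft_course_of_action(mission: str, assumptions: dict) -> str:
    """Stand-in for an AI planning agent; a real system would call a model."""
    return f"COA for '{mission}' assuming {assumptions}"


def human_review(draft: str) -> bool:
    """Stand-in for the human-in-the-loop decision; here, auto-approve."""
    print(f"Reviewing: {draft}")
    return True


def red_team_assumptions() -> list[dict]:
    """Vary key assumptions to force the agent to explore a wider option space."""
    enemy_postures = ["defends forward", "trades space for time"]
    supply_states = ["resupply intact", "resupply interdicted"]
    return [
        {"enemy": e, "supply": s}
        for e, s in product(enemy_postures, supply_states)
    ]


def planning_loop(mission: str) -> list[str]:
    approved = []
    for assumptions in red_team_assumptions():
        draft = draft_course_of_action(mission, assumptions)
        if human_review(draft):  # humans retain approval authority
            approved.append(draft)
    return approved


if __name__ == "__main__":
    options = planning_loop("secure the river crossing")
    print(f"{len(options)} candidate options retained for staff comparison")
```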

Edge Computing and Contested Environments

Warfighting AI systems must operate effectively in electromagnetic environments degraded by jamming, cyber attacks, and communications disruption. This requirement drives development of edge computing architectures capable of running sophisticated AI models on platforms with limited power, bandwidth, and connectivity to centralized infrastructure. Autonomous aircraft must execute tactical maneuvers when satellite communications are denied. Ground robots must navigate and make decisions when network connectivity is intermittent. Maritime systems must process sensor data and coordinate with other platforms across vast oceanic distances where communications bandwidth is severely constrained.
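
One common way to meet that requirement is graceful degradation between reach-back and onboard inference. The sketch below is a hypothetical illustration; the connectivity check and both model calls are placeholder stubs rather than any fielded system's interfaces.

```python
# Hypothetical sketch of graceful degradation at the tactical edge: prefer a
# larger remote model when the link is up, fall back to an onboard model when
# it is not. The connectivity check and both model calls are placeholder stubs.
import random


def link_available() -> bool:
    """Stand-in for a real communications health check."""
    return random.random() > 0.5


def remote_model(observation: str) -> str:
    """Placeholder for inference hosted on reach-back infrastructure."""
    return f"remote assessment of '{observation}'"


def onboard_model(observation: str) -> str:
    """Placeholder for a smaller model running on the platform itself."""
    return f"onboard assessment of '{observation}'"


def assess(observation: str) -> str:
    """Route inference based on current connectivity, never blocking the mission."""
    if link_available():
        try:
            return remote_model(observation)
        except Exception:
            pass  # degrade to the onboard model on any failure
    return onboard_model(observation)


if __name__ == "__main__":
    print(assess("unidentified surface contact bearing 240"))
```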

The Department of Defense's AI Strategy emphasizes investment in AI compute infrastructure spanning datacenters to the tactical edge, leveraging private-sector capital investment through partnerships with commercial technology companies. Modular Open System Architectures enable third-party integration of AI capabilities into existing warfighting platforms without requiring complete system redesign. These technical standards allow the rapid incorporation of advancing commercial AI models into military systems, addressing the concern that warfighting platforms cannot operate on AI models that are months or years behind the commercial frontier.
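
The modular pattern can be illustrated abstractly: the host platform publishes a stable interface, and third-party modules register against it so newer models can be swapped in without redesigning the host. The interface, registry, and detector modules below are invented for illustration and do not reflect any published MOSA standard.

```python
# Hypothetical illustration of a modular open-architecture pattern: the host
# platform defines a stable interface and third-party AI modules register
# against it, so newer models can be swapped in without redesigning the host.
# The interface, registry, and module names are invented for illustration.
from abc import ABC, abstractmethod

REGISTRY: dict[str, type["AIModule"]] = {}


class AIModule(ABC):
    """Stable contract the host platform exposes to third-party capabilities."""

    @abstractmethod
    def infer(self, sensor_frame: bytes) -> dict:
        ...


def register(name: str):
    """Decorator that makes a module discoverable by the host at load time."""
    def wrap(cls: type[AIModule]) -> type[AIModule]:
        REGISTRY[name] = cls
        return cls
    return wrap


@register("detector-v1")
class LegacyDetector(AIModule):
    def infer(self, sensor_frame: bytes) -> dict:
        return {"tracks": [], "model": "detector-v1"}


@register("detector-v2")
class NewerDetector(AIModule):
    def infer(self, sensor_frame: bytes) -> dict:
        return {"tracks": [], "model": "detector-v2"}


def host_pipeline(module_name: str, frame: bytes) -> dict:
    """Host code depends only on the interface, not on any specific module."""
    module = REGISTRY[module_name]()
    return module.infer(frame)


if __name__ == "__main__":
    print(host_pipeline("detector-v2", b"\x00" * 16))
```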

Ethical and Legal Dimensions

Warfighting AI development operates within a framework of ethical principles and legal constraints that have no parallel in commercial AI applications. The DoD's AI Ethical Principles require systems to be responsible, equitable, traceable, reliable, and governable. NATO's Principles of Responsible Use add requirements for lawfulness, accountability, and explainability. International Humanitarian Law demands that all uses of force, including those involving AI systems, comply with principles of distinction between combatants and civilians, proportionality in the use of force, and precaution in attack. The UN Convention on Certain Conventional Weapons continues multilateral dialogue through its Group of Governmental Experts on lethal autonomous weapons systems.

These frameworks create requirements fundamentally different from those governing commercial AI. A warfighting AI system must not only perform its intended function accurately but must do so in a manner that is legally defensible, ethically justified, and traceable to human decision-making authority. The tension between operational speed and oversight requirements represents perhaps the most significant unresolved challenge in warfighting AI, one that technology alone cannot solve and that will require continued evolution of doctrine, law, and institutional culture.

Key Resources

Planned Editorial Series Launching Q3 2026