
AI in Military Planning: Sweden and Norway Put It to the Test
Why this matters now
Artificial intelligence (AI) in military planning is transitioning from demonstrations to formal doctrine. The Swedish Defence University and the Norwegian Defence University College used AI decision support during the three-week Comprehensive Shield 2025 (CS25) exercise at Kjevik Leir to see how it affected planning speed, cognitive workload, and learning outcomes.
What the experiment tested
Teams tackled the same operational scenario using three methods: a standard NATO planning process, an AI-supported workflow, and a design/systems approach with AI in a supporting role. The setup let instructors compare how AI in military planning changes problem framing, option generation, and staff coordination in a headquarters context.
Results at a glance
Participants reported that AI in military planning accelerated sense-making, widened the solution space, and reduced mental workload when properly integrated into the staff battle rhythm. However, the AI tools demanded new technical literacy and careful role design to avoid friction. Continuous AI support paired best with the NATO process, while the design-led team used AI sparingly because the tools did not match its workflow.

Human–machine teaming, not automation
The data align with broader research: effective human–machine teaming augments judgement rather than replacing it. In CS25, planners retained decision authority while AI proposed patterns, risks, and branches/sequels, improving red-team quality and time-on-target for key decisions. This reinforces the aims of graduate-level professional military education (PME) without short-circuiting military judgement.
Practical takeaways for staffs
- Match tools to the method: Choose AI that mirrors your JOPP/OPP steps; avoid bolt-ons that fight your workflow. (Swedish Defence University)
- Design roles up front: Define who tasks, validates, and logs AI outputs to keep traceability and tempo; a minimal logging sketch follows this list. (DiVA Portal)
- Measure cognitive load: Use structured instruments during rehearsals to confirm real gains, not perceived ones; see the workload-scoring sketch below. (DiVA Portal)
- Educate for trust: PME should teach promptcraft, model limits, and adversarial testing alongside doctrine. (Swedish Defence University)
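To make the traceability point concrete, here is a minimal Python sketch; the record schema, role names, and file name are illustrative assumptions, not anything prescribed by the exercise or its documentation:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class AIOutputRecord:
    """One logged AI contribution to the planning process (hypothetical schema)."""
    tasked_by: str        # staff role that issued the query, e.g. "J5 plans"
    prompt: str           # what the AI was asked
    output_summary: str   # what it returned, summarised for the log
    validated_by: str     # staff role that checked the output before use
    accepted: bool        # whether the output entered the plan
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def log_record(record: AIOutputRecord, path: str = "ai_decision_log.jsonl") -> None:
    """Append the record as one JSON line so the audit trail stays ordered and searchable."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Illustrative entry: a red-team lead validates an AI-suggested branch before it enters the plan.
log_record(AIOutputRecord(
    tasked_by="J5 plans",
    prompt="List risks to the northern axis if the main supply route is cut",
    output_summary="Three risks identified, each with a suggested branch action",
    validated_by="Red-team lead",
    accepted=True,
))
```

An append-only JSON-lines log keeps the audit trail simple to review after the exercise without slowing the battle rhythm during it.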
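For the cognitive-load measurement, the sketch below assumes a NASA-TLX-style instrument (the article does not name one); it computes a raw, unweighted workload score per planner and averages it across a team so methods can be compared after rehearsals:

```python
from statistics import mean

# Six NASA-TLX subscales, each rated 0-100 by a planner after a session.
SUBSCALES = ("mental_demand", "physical_demand", "temporal_demand",
             "performance", "effort", "frustration")

def raw_tlx(ratings: dict[str, float]) -> float:
    """Raw (unweighted) NASA-TLX score: the mean of the six subscale ratings."""
    missing = [s for s in SUBSCALES if s not in ratings]
    if missing:
        raise ValueError(f"missing subscales: {missing}")
    return mean(ratings[s] for s in SUBSCALES)

def team_workload(team_ratings: list[dict[str, float]]) -> float:
    """Average raw TLX across one planning team, e.g. NATO-process vs AI-supported vs design-led."""
    return mean(raw_tlx(r) for r in team_ratings)

# Hypothetical ratings from two planners on the AI-supported team.
ai_supported_team = [
    {"mental_demand": 55, "physical_demand": 10, "temporal_demand": 60,
     "performance": 30, "effort": 50, "frustration": 25},
    {"mental_demand": 65, "physical_demand": 15, "temporal_demand": 70,
     "performance": 40, "effort": 55, "frustration": 35},
]
print(f"AI-supported team raw TLX: {team_workload(ai_supported_team):.1f}")
```

Comparing such scores against a baseline run of the same scenario is what separates measured gains from perceived ones.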
Limits and risks
AI in military planning remains specialised; models excel at specific tasks but struggle to generalise across design thinking, wargaming, and campaign art without careful tuning. Explainability, data provenance, and legal-ethical guardrails also require commander-owned policy, not vendor defaults.

Where to watch next
Nordic programmes are expanding dual-use AI research and operational trials, pointing to broader applied experimentation across NATO formations. For defence professionals, the lesson is clear: integrate AI deliberately, instrument the outcomes, and keep humans in command.