Thinking Before Sinking: Red and Blue AI Tactics for Naval Combat
40 pages
Andrew Livsey
Given the profusion of commentaries about AI in future combat that on closer inspection turn out to be no more than vague prognostication, it is refreshing to find a team actually trying to make it work. In this instance it is the ‘Defense AI Observatory’, a team at Helmut Schmidt University in Hamburg funded by the German Armed Forces and the EU. This is their third paper, and though the results described in Thinking Before Sinking may be slightly more equivocal than they think, their central conclusion that, “AI-based coordination provides military value beyond massing firepower”, is worth taking seriously.
Their scenario involves a ‘blue’ vessel they describe as a frigate escorting a tanker past some intercepting ‘red’ fast attack craft. Each vessel is controlled by an AI agent in a simulated battle environment. The fast attack craft have a variety of weapons; the frigate has only an apparently ineffective 76 mm gun and either two or four short-range guns they describe as close-in weapon systems (CIWS). The scenario was run multiple times, with the AI agents allowed to learn from previous runs.
First, the strengths. The team managed to set the red vessels’ AI agents up such that they avoided the standard AI problem of all running after the target like six-year-olds playing football. The blue vessel then manoeuvred aggressively and the CIWS prioritised the more threatening targets. Tactics for both sides improved across successive runs.
Now, the problems. The frigate seems to have neither long-range anti-ship nor point-defence missile systems, which would have guaranteed defeat had red been using decent tactics. More critical was that though the red vessels tried to get past the frigate to the tanker, they closed with little regard for optimum weapon ranges. One fast attack craft armed with missiles with a 150-mile range, for example, got close enough to be destroyed by the frigate’s CIWS. From the diagrams provided it seems that most of the runs would have earned a PWO student a firm fail in the simulators in HMS Collingwood. I also did not understand the rules of engagement for each side: the AI agents seemed to take independent decisions about when to open fire first, gaining an advantage within the scenario for doing so, though the broader political consequences were left less clear.
A pedant would add that while the team have done a reasonable literature review of publications from the last three years or so, some of their understanding of naval warfare would be aided by reading Norman Friedman’s Network Centric Warfare of 2009, which explains how navies worked out how to coordinate their forces in the 20th century, and Wayne Hughes on tactics. That might have helped them avoid suggestions such as that the need for third-party information was first demonstrated in 2024, rather than quite a few decades earlier. Moreover, what they consider to be naval warfare of the past, i.e., that practised in the 1980s, had rather more distributed decision making than they think. They would also benefit from the equivalent of a warfare officer or weapon engineer on the team. Their belief that short-range weapons are guided only by their own sensors seemed strange, and though overall they deserve praise for explaining things so well that even your humanities-based reviewer could keep up, their use of terms like Link and CIWS was unconventional to the point that I had to read several sentences twice.
But at least some of those criticisms miss the point. The importance of work like this is not that the AI is doing better than humans (and at the moment it very much is not), but that it can do something vaguely comparable. And once you get to that level, as seems to have happened, then more powerful computers, better modelling, tweaks to the AI agents’ reward feedback and a few million simulation runs make many things possible. What is also important is that the team is clearly developing as they go, including working out some useful heuristics for AI modelling, such as avoiding spurious effects, i.e., effects that may be artifacts of the simulation environment, by not making the initial simulation too high-resolution. This is a report on work in progress and we can expect more.
So, if you have an interest in naval tactics and near-future warfare (and, frankly, both are pretty fundamental to our profession), then this brief online report is worth reading. For all the hype about AI, today’s operations rooms are in their fundamentals much like those of the 1980s: Link, some auto-tracking and computer-based ‘threat evaluation weapon allocation’, but much of the rest brought together by humans. We hear snippets about QinetiQ’s work to change all that in the UK. In the meantime, it is good to hear what others are doing. Warfare officers and weapon engineers may not be out of a job yet, but I expect that in the next decade both branches will have to update their skill sets to remain relevant.