AI Innovation: The impact technology could have had through history and the lessons learnt

07 Dec 2023

Written by Ollie Carmichael, Product Manager at Adarga 

Would the Duke of Wellington have used Artificial Intelligence to support his military campaigns, had such a technology been available in the early nineteenth century?

Wellington was famed for his shrewd use of intelligence networks during the Peninsular War, and his exclamation of ‘Napoleon has humbugged me!’ on learning that Bonaparte’s army was on the move in June 1815 was as much a reflection of the atrophy of those networks as it was personal angst at being outmanoeuvred by his arch-nemesis.

On receiving this news at a ball hosted by the Duchess of Richmond, one suspects Wellington would not have turned straight to ChatGPT and asked OpenAI what he should do next. Such was his attention to detail and rigorous academic prowess, he would likely have understood the limitations of a generative Large Language Model in supporting his military planning and strategy. More broadly, he would have recognised the range of risks associated with AI and carefully considered appropriate mitigations. But faced with a lack of intelligence in 1815, he would also have recognised the opportunities afforded by more specific applications of AI: not least the ability to extract value from vast amounts of information at speed, and to illuminate hidden connections between the intelligence documents he did manage to collect. The cumulative effect might have been to expose Napoleon’s plans, and perhaps to force Wellington to think twice about donning his finest and attending the Duchess of Richmond’s ball!

Perhaps more pertinently, in this alternative history where AI was thrust upon commanders in the nineteenth-century European theatre of war, Wellington would also have been cognisant of the risk of inaction, particularly as he would have been in no doubt that his rival, Napoleon, would have been innovating at the first mention of computational linguistics, network science, machine learning or platform engineering. Given that Napoleon’s ‘Revolution in Military Affairs’ focused on professionalisation and, notably, on innovation in logistic supply chains, Wellington would likely have faced a rival using AI to drive efficiencies within his force, affording Napoleon additional time that might have been allocated to a more thorough assessment of the battle space prior to Waterloo. Would Napoleon have recognised the decisive role the delayed Prussian force would play, had he had access to an AI that conducted the resource-intensive, error-prone tasks of logistics, intelligence collection or operational planning that armies before and since have faced? More significantly, would the AI have given Napoleon and his closest advisors the capacity to let their creative, dynamic military minds do what they did best: identify the optimum way to defeat their adversary?

Much of the discourse surrounding military applications of AI in recent months has focussed on its lethal applications and the very real ethical considerations they raise. Whilst it is always important to distinguish reductive perceptions of technology (perhaps based on science fiction or prevalent media narratives) from those anchored in reality and a detailed understanding of the capabilities being discussed, the collective responsibility to mitigate the risks of weaponised, lethal AI is unquestionable.

That said, as these Napoleonic counterfactuals start to suggest, there is also a moral imperative to accelerate the use of more benign AI in the military to support intelligence processing, logistics and enterprise-level human resource functions. In doing so, the true potential of the professional soldier can be realised, allowing them to spend more time delivering output unique to human intelligence. The idea of humans teaming with machines to produce better outcomes than either could achieve in isolation has been popularised in recent years by DeepMind’s research into reinforcement learning, specifically in the context of complex strategy games. In the military context, where the human should always be ‘in-the-loop’, using AI to conduct process-driven, supporting tasks buys capacity for military personnel (and indeed their colleagues across government) to focus on problem solving at the most complex level, where the realities of geopolitics have to be balanced against economic truths, ethical non-negotiables and the inevitability of Clausewitzian friction - all combining into a real-life manifestation of DeepMind’s strategy games.

To indulge in alternative history once more, imagine if military innovators of the past - Stirling, Wingate and Montgomery, to suggest some 1940s examples - had access to an AI teammate, aligned and limited in scope to delivering these process-driven functions. We have to bet on both the brilliance of these individuals and their moral reasoning, and hope that the tasks unique to human ingenuity they were freed to deliver were focussed on ending hostilities as quickly as possible or, better still, preventing them in the first place. It is this vision and hope we must apply to future military applications of AI.

Real historical analogies provide an opportunity to consider the conditions that have previously been required to enable the responsible application of new technologies in the military, and these might shed further light on the considerations required for the use of AI as a contemporary military technology. Charismatic leadership has been well covered, both with regard to Napoleon’s drive to modernise and Wellington’s innovative use of human intelligence networks. The personal rivalry between the two has also been referenced as an accelerant, each forced to innovate in response to the brilliance of the other. But it is a third category, that of necessity, which is most pertinent today. Whilst it was circumstance that forced Stirling, Wingate and Montgomery to rapidly innovate, it is existential threat that drives that necessity for the people of Ukraine today. They provide a heroic example. We would do well to take note.

