Responsible AI in the Military Domain Summit (REAIM): Seven Key Reflections

09 Mar 2023

Last month our Data Science Manager, Stephen Bull, attended REAIM 2023 – the first global summit on responsible AI in the military domain. Aiming to put the topic higher on the political agenda, the summit provided a forum for stakeholders to explore the key challenges and opportunities associated with military applications of AI. It was hosted by the Dutch government and attended by more than 2,000 participants from over 100 countries.

As Adarga's Responsible AI Committee Lead, Stephen was keen to share his reflections from the conference.

I was struck by the unnerving video simulation that opened the conference. It showed a military mission control centre suddenly observing a build-up of troops on the border between neighbouring states. The information being delivered to the mission control centre is vast – it includes time-lapse satellite imagery combined with human intelligence and social media feeds. The soldiers seem confused by the quantity of information, how to interpret it and how to use it to make important decisions quickly. Undoubtedly, the consequences in this type of scenario are severe.

The reality is that humans no longer have the analytical capacity to deal with the sheer volume of information that is now available and required for decision-making. And it’s the side that can act quickly that will be victorious. AI now has a significant role to play in providing a mechanism for humans to turn data-overwhelm into decision-advantage, enabling complex information to be understood and analysed at speed.

But the arrival of AI in the decision space is raising a number of questions. Below I’ve outlined some of my reflections on the conversations around AI and trust, and how we can address some of the challenges these technologies present.

  1. AI's role in decision-making: Decision-making remains a human-led endeavour, and this will not change in the near future. As such, while AI will undoubtedly improve decision-making, it’s important to note that it will not prevent humans from making mistakes. It needs to be used like any other instrument or tool, and any decisions wrongly made should continue to be handled with the same rigour and investigation that exists now.
  2. Acknowledging bias: Bias undermines trust. It is therefore essential to acknowledge that it is inherent in AI in much the same way it is evident in human behaviour. We must continue to work on strategies to reduce its impact, with several approaches to enforcing fairness constraints on AI models already in use (a simple sketch of the idea follows this list).
  3. Building a culture of trust: To build a culture of trust, institutions must demonstrate that they are effective in fulfilling their mandates. Defence has a thriving AI ecosystem to call on, with leading knowledge and expertise developed over many years. It must lean on industry more heavily instead of trying to build new capability in-house. Speed is crucial.
  4. Creating transparency standards: Businesses must have robust standards in place which ensure their AI outputs and reasoning processes are fully explainable. Like many standards, these must be factored into process and implementation so an appropriate quality mark can be attached.
  5. Managing change: A key part of the change process is acknowledging the need for change and dealing with the uncertainty it causes head on. Though it can be uncomfortable, and not every day will have a positive outcome, there must be a strong commitment to pursuing transformation and the benefits that come with it in the long run. This belief needs to exist throughout the chain of command to enable innovation to flourish. People’s role in change must not be underestimated, and military leaders and politicians have an important part to play in this.
  6. Understanding that not all AI is equal: AI legislation remains in its infancy. At the same time, the breadth of capabilities AI can deliver is evolving at an unrelenting pace. It’s important to remember that not all AI is equal, so the law cannot treat all AI equally. Different applications carry highly varying degrees of risk, and this needs to be reflected in regulation to ensure innovation is not stifled. The use of artificial intelligence in autonomous weapon systems and the use of natural language processing for information extraction, for example, are wholly different deployments that must each be governed responsibly in their own way.
  7. Continuous learning and development: AI is not a project; it is now a way of life. Commercial organisations began to realise this at least 15 years ago. We won’t stop doing this in two years, and neither will our adversaries. Therefore, investing for the long term is a wise choice. It will build security, knowledge and expertise as well as trust.
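To make the fairness point above (reflection 2) more concrete, here is a minimal, hypothetical sketch of the kind of measurement a fairness constraint builds on. It is illustrative rather than a description of any specific toolkit: it computes the gap in positive-prediction rates between groups (the demographic parity difference) and flags the model if the gap exceeds a tolerance. All names, data and the threshold below are assumptions chosen for the example.

```python
import numpy as np

def demographic_parity_gap(predictions: np.ndarray, groups: np.ndarray) -> float:
    """Largest difference in positive-prediction rate between any two groups.

    predictions: binary model outputs (0 or 1)
    groups: a group label for each prediction, e.g. a protected attribute
    """
    rates = [predictions[groups == g].mean() for g in np.unique(groups)]
    return float(max(rates) - min(rates))

# Hypothetical outputs from a model that selects group "a" far more often than "b".
preds = np.array([1, 1, 1, 0, 1, 0, 0, 0, 1, 0])
groups = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

gap = demographic_parity_gap(preds, groups)
TOLERANCE = 0.2  # illustrative; in practice set by policy, not by the code
print(f"Demographic parity gap: {gap:.2f}")
if gap > TOLERANCE:
    print("Fairness check failed: the model favours one group beyond tolerance.")
```

In practice, open-source toolkits such as fairlearn go a step further and enforce constraints like this during model training, rather than relying on a post-hoc check.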

To conclude, AI will not make the world perfect, but it is a vital capability that will vastly improve our day-to-day lives. Humans will still be needed to make decisions. But AI should enable us to take better, smarter routes towards those decisions. We need to think about the application, but we do not need to overthink it. Much of the structure, practices, guidelines and technology already exists. The lesson to learn is the awareness and willingness to use them.

Sign up for the Adarga Newsletter today to receive updates and thought pieces on the latest AI developments.

Alternatively, download the Adarga brochure to understand how you can leverage our AI-driven information intelligence capabilities for competitive advantage.
