Responsible AI deployment: enhancing LLM outputs with guardrails and RAG

04 Jun 2024

Large Language Models (LLMs) are arguably the most well-known and accessible application of AI, with the likes of ChatGPT enabling the public to benefit from a highly versatile technology that is transforming the way many people work, interact, and learn. When applied responsibly to specific and well-defined use-cases, their results can be hugely beneficial to consumers and to users in government and industry alike. LLMs are not a panacea, however, and there are pitfalls in their use, notably with regard to information assurance and security, which undermine trust in their results. Alex Nim, Lead Data Scientist at Adarga, explores how the company has introduced a level of assurance in the outputs from LLMs that is enabling users to apply them to the most challenging of tasks, including in the high-stakes military, national security, and corporate risk domains.

LLMs have transformed the field of Natural Language Processing (NLP) through their exceptional ability to understand, generate, and interact with human language in more sophisticated ways than earlier models and techniques.  

However, LLMs’ outputs largely depend on identifying patterns in their training data, and that training data can be flawed or outdated. LLMs will always attempt to generate text in response to a prompt; however, they may ‘hallucinate’ and fabricate information in order to provide a response. The risk of hallucinations has been identified as one of the critical challenges the US Department of Defense faces in rolling out AI capabilities across the US military. Even when an LLM is not hallucinating, out-of-date training data may cause it to produce erroneous results. For example, if you ask an LLM about something that happened in January 2024 but its training data only extends to July 2023, it will still answer your question, using only the stale information held in its memory.

Below, we outline Adarga’s approach to tackling some of the risks associated with LLM outputs and how we bring an unparalleled level of trust to users. This is notably brought to life within the innovative question and answer (Q&A) capability in Adarga Vantage, which enables users to generate natural language answers to complex questions rapidly and ask questions of datasets, reports, and outputs that have been generated in the platform. 

Applying RAG  

A Retrieval-Augmented Generation (RAG) pipeline is necessary to mitigate some of the risks associated with LLMs, particularly the lack of relevant data in the model. This involves a three-stage process:

Retrieve documents: The RAG pipeline retrieves information from a knowledge base (usually a large dataset of documents or articles), with the aim of retrieving the most relevant documents or passages that may contain answers to the question posed.

Calculate similarity metric: Once a large set of documents has been retrieved, we can check the relevance of each document against the user’s question by calculating a similarity metric between the document and the question. A similarity metric is a score that numerically quantifies how similar two pieces of text are; cosine similarity is one such metric.

Refining with a re-ranker model: Although similarity metrics give an indication of how similar a document is to a question, accuracy can be greatly improved by applying a so-called re-ranker model. Similarity metrics are fast to compute but relatively imprecise, while re-ranker models are slower but more accurate. The similarity metric is therefore used first to narrow a large corpus down to a small set of candidate documents, and the re-ranker model is then applied to refine the ranking of that small set. This two-stage approach keeps retrieval both efficient and accurate; a sketch of it follows this list.
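To make this two-stage retrieval concrete, the sketch below uses the open-source sentence-transformers library to embed a question and a handful of documents, shortlist candidates by cosine similarity, and then re-rank the shortlist with a cross-encoder. The model names and the toy corpus are illustrative assumptions only and do not reflect the specific models used in Adarga’s pipeline.

```python
# A minimal sketch of two-stage retrieval: fast cosine-similarity shortlisting
# followed by a slower, more accurate cross-encoder re-ranker.
# Model names and the toy corpus are illustrative, not Adarga's actual stack.
from sentence_transformers import SentenceTransformer, CrossEncoder, util

corpus = [
    "The summit between the two governments concluded in January 2024.",
    "Quarterly shipping volumes through the strait fell sharply last year.",
    "A historical overview of regional trade agreements since 1995.",
]
question = "What happened at the January 2024 summit?"

# Stage 1: bi-encoder embeddings plus cosine similarity (fast, approximate).
embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_embeddings = embedder.encode(corpus, convert_to_tensor=True)
query_embedding = embedder.encode(question, convert_to_tensor=True)
scores = util.cos_sim(query_embedding, doc_embeddings)[0]
top_k = scores.topk(k=min(2, len(corpus)))  # shortlist the best candidates

# Stage 2: cross-encoder re-ranker (slower, more accurate) on the shortlist only.
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
shortlist = [corpus[int(i)] for i in top_k.indices]
rerank_scores = reranker.predict([(question, doc) for doc in shortlist])

ranked = sorted(zip(shortlist, rerank_scores), key=lambda pair: pair[1], reverse=True)
for doc, score in ranked:
    print(f"{score:.3f}  {doc}")
```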

At this stage the LLM can be prompted to answer the user’s question with the documents retrieved from the RAG pipeline.  

This way, the LLM is no longer relying solely on its limited memory to answer the question; instead, its exceptional reasoning capabilities are applied to up-to-date, relevant information drawn from a broader knowledge base, increasing the accuracy of the answer and the confidence that can be placed in it.
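As a rough illustration of this generation step, the sketch below places the re-ranked passages into the prompt so the model answers from that context rather than from memory alone. The OpenAI client and model name are assumptions used purely for illustration; they are not a statement about the models behind Adarga Vantage.

```python
# A minimal sketch of the generation step: retrieved passages are placed in the
# prompt so the LLM answers from that context rather than from memory alone.
# The OpenAI client and model name are illustrative assumptions.
from openai import OpenAI

def answer_with_context(question: str, passages: list[str]) -> str:
    context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    prompt = (
        "Answer the question using only the numbered passages below. "
        "If the passages do not contain the answer, say so.\n\n"
        f"Passages:\n{context}\n\nQuestion: {question}"
    )
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # favour deterministic, context-grounded answers
    )
    return response.choices[0].message.content

# Usage: pass the re-ranked passages produced by the retrieval stage.
# print(answer_with_context("What happened at the January 2024 summit?", passages))
```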

But can the user really trust the LLM’s answer? The reality is LLMs are not perfect and that’s why Adarga puts multiple mechanisms in place to help minimise risk. 

Guardrails via citations  

It is here that guardrails play a crucial role. Guardrails are mechanisms that guide users towards the productive and safe use of LLM outputs. One type of guardrail Adarga has implemented is the presence of citations in LLM answers. Citations point to the exact location in the source document used to answer a question.  

Returning answers with citations involves careful prompt engineering. The success of this step very much depends on the capability of the chosen LLM, the format of the input prompt, and whether examples are provided in the prompt to help the LLM interpret the user question. Care must also be taken to ensure that token limits (the maximum amount of text an LLM can accept as input) are not exceeded. Citations enable the user to verify that the answer is indeed based on accurate information from reliable sources, a vital capability for our customers operating in defence, national security, and commercial intelligence. This transparency also supports further exploration of the topic, guiding users to more detailed or additional sources of information.
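A rough sketch of two such guardrails follows, assuming the prompt instructs the LLM to cite passages as bracketed indices such as [1] and [2] (an illustrative convention, not Adarga’s actual citation format): the assembled prompt is checked against a token budget before the model is called, and any citations in the answer are validated against the passages that were actually supplied.

```python
# A minimal sketch of two citation-related guardrails, assuming an illustrative
# convention where the LLM cites sources as bracketed indices like [1], [2]:
#   1) keep the assembled prompt within a token budget before calling the model;
#   2) verify every citation in the answer points at a passage that was supplied.
import re
import tiktoken

def within_token_budget(prompt: str, max_tokens: int = 8000) -> bool:
    # cl100k_base is a common tokenizer; the right encoding depends on the LLM used.
    encoding = tiktoken.get_encoding("cl100k_base")
    return len(encoding.encode(prompt)) <= max_tokens

def validate_citations(answer: str, num_passages: int) -> list[int]:
    """Return the indices of any citations that do not map to a supplied passage."""
    cited = {int(m) for m in re.findall(r"\[(\d+)\]", answer)}
    return sorted(i for i in cited if not 1 <= i <= num_passages)

# Usage: reject or regenerate the answer if it cites passages that were never provided.
bad = validate_citations("The summit concluded on 15 January [1][4].", num_passages=3)
if bad:
    print(f"Answer cites unknown sources: {bad}")
```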

Answers to complex questions with a flexible Q&A pipeline 

Adarga’s use of a microservices architecture has enabled us to build our pipeline as a collection of small, independently deployable services that work together to provide an innovative Q&A capability, allowing users to ask complex questions of their own curated datasets and reports. Our MLOps infrastructure allows us to easily change which models are used at each stage so that this is done in the best possible way. Internally, we abstract the specific details of the models used behind proxy services to ensure security and robustness. Our microservices approach also makes it easier to update, maintain, and scale individual services without affecting the entire application. This flexible pipeline lets us adapt and update the Q&A functionality rapidly amid the fast-changing AI landscape, all the while constantly testing, validating, and refining the models we use for utmost security.
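As a simplified illustration of that abstraction, the sketch below defines a minimal model interface that a Q&A pipeline could depend on, so the concrete model service behind it can be swapped without touching the rest of the application. The names here are hypothetical and do not describe Adarga’s internal services.

```python
# A minimal sketch of hiding model details behind an interface so the underlying
# model service can be swapped independently of the Q&A pipeline.
# Class and function names are hypothetical, not Adarga's internal services.
from typing import Protocol

class AnswerModel(Protocol):
    """Interface the Q&A pipeline depends on; concrete model services sit behind it."""
    def answer(self, question: str, passages: list[str]) -> str: ...

class StubModel:
    """Toy implementation used here only to show the pipeline is model-agnostic."""
    def answer(self, question: str, passages: list[str]) -> str:
        return f"[stub] {len(passages)} passages considered for: {question}"

def run_qa(model: AnswerModel, question: str, passages: list[str]) -> str:
    # The pipeline code depends only on the interface, so the underlying model
    # service can be updated, scaled, or replaced without changing this function.
    return model.answer(question, passages)

print(run_qa(StubModel(), "What happened at the January 2024 summit?", ["passage one"]))
```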

The responsible deployment of AI lies at the heart of Adarga’s mission, underpinning our human-in-the-loop approach, which is so important for the defence and intelligence use-cases we serve. Adarga has built a proprietary ecosystem of 35+ AI models spanning classical, bespoke, and large language models that can be deployed in a cost-effective manner at scale. Carefully selected after undergoing rigorous benchmarking, these models have been designed to perform advanced information extraction tailored to military and geopolitical use cases. Trained on domain-specific datasets, they enable Adarga’s software to contextualise and enrich outputs for higher-quality results.

Download our brochure to find out more about how Adarga’s AI platform powers specialist information intelligence tools and services.
