The Safety Summit is only one dimension of a much broader conversation on AI

03 Nov 2023

Written by Rob Bassett Cross, CEO at Adarga

Adarga has welcomed the UK hosting this week’s AI Safety Summit since it was announced. We are committed to developing and deploying AI responsibly, safely and transparently, and it is critical that other developers and users of AI do so and demand so too. My colleagues and I have really enjoyed having the chance to connect with other businesses, policy makers and thought leaders on these issues at various events this week.

The Summit is important for a number of reasons, but for those of us who see the existing and future value of AI, it is particularly timely. Leading the world on AI safety is the most important way the UK can retain public confidence in the technology, and it is only with that public trust that we can remain at the forefront of this foundational technology’s development and practical application. It is, therefore, one of the enablers that will allow the UK to reap the economic, societal and strategic benefits that AI will bring.

However, while the Summit, and the global public interest it generated, was a welcome development, it must be seen as only one of the joists in a much broader roof that spans the vital issues surrounding AI.

Clues to what some of the others are can be seen in both who and what was absent from Bletchley Park this week. In particular, we need to augment the Summit and its focus on frontier model risks with a renewed focus on three things: existing risks, the role of SMEs, and innovation.

To maintain public trust, we must reassure on existing, as well as future, risks

I have discussed the importance of building and maintaining trust in AI to ensure that we don’t miss out on the considerable future benefits. While the Summit is, in part, a reaction to the proliferation of LLMs and the subsequent, somewhat frenzied, media coverage of existential risks, it threatens to overshadow efforts to tackle existing vulnerabilities which have the potential to undermine trust in deployment, particularly in the public sector. To be a global leader in AI, the UK needs to be a centre for deployment, which means being a leader in adoption. By far the greatest risk for us all is that the UK doesn’t adopt AI quickly enough. To achieve this, we have to ensure that we are addressing existing risks around bias, assurance and explainability, while deploying AI-powered technologies at scale in the most appropriate early use cases, so that the public can see and understand the benefits of AI rather than being fed unfounded fears.

SMEs need a voice in all AI debates

The industry guestlist at the Summit was notably weighted in favour of the largest global companies. Of course, these companies are important. But while they may be a significant slice of the global AI pie, they aren’t all of it. Other businesses are developing and deploying AI, including building exciting new applications on those larger companies’ foundational models, and they would have brought an important alternative perspective to the Summit. It is a shame that view was missed, not least because those companies will be equally responsible for developing and using AI safely, and any regulation will need to work for them too. These companies also represent by far the largest community, comprising our budding generation of innovators, entrepreneurs, data scientists, developers and builders. I would like to see a concerted effort on the part of the UK and other governments to ensure their voice is now listened to.

The balance must be restored between innovation and safety

AI safety debates should not happen in isolation and should not take precedence over bold ambitions to be a global leader in innovation. The debate in the UK has swung wildly from a focus on innovation to a focus on existential risk, largely in response to reactions to the most recent advances in generative AI capabilities. That pendulum has gone too far, and we need to restore equilibrium. The goals and actions of the AI Strategy need to be delivered at pace, alongside measures to tackle existing and future risks. This also means avoiding any unnecessary preclusion of deployment of AI in lower-risk settings, in line with the UK’s white paper.

I still believe that, too often, we are significantly underestimating the sheer scale and impact of AI and the epoch-changing technological revolution we are currently witnessing. The magnitude of this change is mirrored by the size of the opportunity ahead of us. To seize it, we need a long-term focus on innovation and a more dynamic ethos of adoption, alongside building much broader understanding of, and confidence in, the technology. That’s why this week’s events must be seen as merely an early waypoint in a far richer, multi-dimensional and enduring conversation on AI.

To learn more about AI safety and responsible AI practices, which sit at the heart of Adarga’s mission as an information intelligence specialist, get in touch with us today.
