Artificial Intelligence: from weakness, strength
Professor John P. Cunningham, Professor of AI at Columbia University and Chair of the Adarga Advisory Board, reflects on a fundamental question: what is true artificial intelligence, and where are we on the path towards it?
As the world celebrates the potential of artificial intelligence and, in equal measure, fears its dangers, it is worth reflecting on a fundamental question: what is true artificial intelligence, and where are we on the path towards it?
Here I will argue that, despite some popular media claims to the contrary, we have made rather little progress towards the utopian (or indeed dystopian) ideal of general “strong” artificial intelligence. Instead, we have made tremendous progress on narrow, highly problem-specific artificial intelligence. Far from being a negative, this is extremely hopeful: our modern technology, and the massive developments yet to come, will continue to add tremendous value to humanity, our economy and our general prosperity, but they will also require and extend, rather than displace, humankind’s central role.
The utopian hype and dystopian fears of AI centre on a form of artificial intelligence known as strong AI. Tracing its roots back to the 1950s and the early age of modern computing, strong AI (variously called artificial general intelligence or full AI) refers, in its fullest interpretation, to a computer or machine able to accomplish any human cognitive task: reasoning, communication, memory, planning, learning, knowledge, and even consciousness itself. From this sweeping definition come the sentient robots and all-knowing devices made popular in science fiction from Star Wars to Star Trek.
However, no widely agreed definition of strong AI yet exists. The now-famous Turing Test and Chinese Room argument are thought experiments designed to probe what would count as strong AI, but they do not set a specific technological bar to be met. Indeed, in the earliest days of AI, strong AI was forecast to be just a generation away, and the abject failure of that prognostication in the 1960s and 1970s contributed to the first “AI winter”: a period of disillusionment with research and development in AI. Now, after several more AI seasons have turned over, we find ourselves once again predicting the coming of strong AI.
But are we right this time?
To answer that question, let us consider the other end of the AI spectrum: the highly problem-specific and somewhat pejoratively named weak AI (also called narrow AI). This flavour of AI appeals to engineers and operationally minded executives alike: a weak AI system is designed to solve one specific problem that was previously considered a problem of human intelligence. Examples include recognising human faces in images, converting human speech to text (the technology behind services such as Siri and Alexa), forecasting models for financial instruments, superhuman play in games like chess and Go, and natural language processing that makes sense of vast quantities of otherwise unreadable, unstructured textual data. You would be right to think these examples sound familiar: all of our current success in modern AI belongs in the bucket of weak AI. This recent success, and its pace of advancement, is due entirely to a number of important drivers: the availability of vast quantities of data, the continuing commoditisation of computing power and its availability via the cloud, the statistics and computer science of machine learning, and the software that pulls all these elements together. And importantly, all of these significant advances, from images to text to voice, are tools that empower human decision makers today.
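To see just how narrow these tools are in practice, consider a minimal sketch of a weak AI system, written here in Python. It assumes the open-source Hugging Face transformers library and its default pretrained sentiment model (illustrative choices of mine, not systems discussed in this article); the result is software that classifies the sentiment of short texts and can do nothing else.

```python
# A minimal sketch of a weak AI system, assuming the open-source Hugging Face
# `transformers` library and its default pretrained sentiment model (both are
# illustrative choices, not part of the original article).
from transformers import pipeline

# The pipeline downloads a pretrained model on first use. Everything this
# system "knows" is a single fixed mapping, learned from data, from a piece
# of text to a sentiment label. It cannot reason, plan, or do anything else.
classifier = pipeline("sentiment-analysis")

results = classifier([
    "This quarterly report exceeded every expectation.",
    "The outage left our customers stranded for hours.",
])
for text_result in results:
    print(text_result["label"], round(text_result["score"], 3))
```

The point is not the particular library but the shape of the solution: one well-defined task, solved end to end with data and compute, and placed in the hands of a human decision maker.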
There are two conclusions to draw from the preceding. First, despite the hype, we are not notably closer to truly cracking strong AI than we previously were. Weak AI is, of course, an essential step in that developmental direction, but strong AI is by no means a logical and linear next step from our foundations in weak AI; all of our successes to date are of the weaker variety. Even the definitions of strong AI constrain our ability to demonstrate success: consciousness, human reasoning, and the rest are not universally agreed-upon concepts, nor are their psychological or neurobiological underpinnings yet comprehensively understood in humans.
Our second conclusion is that, far from being a negative point, our success in weak AI should be a cause for great hope and celebration. We have already seen tremendous innovation and economic development from these successes, and we have every reason to believe that this trend and speed of advancement will continue into newer and more valuable areas of human endeavour. And what’s more, all of these successes have been tools for enhancing human ingenuity — that lofty economic and societal goal, and the reason we are here today.
This article was originally published in print on 5 September 2019 in the conference booklet for our event Enhancing Human Ingenuity.